[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-stereolabs--zed-sdk":3,"tool-stereolabs--zed-sdk":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,2,"2026-04-07T23:26:32",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":115,"forks":116,"last_commit_at":117,"license":118,"difficulty_score":119,"env_os":120,"env_gpu":121,"env_ram":122,"env_deps":123,"category_tags":134,"github_topics":136,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":154,"updated_at":155,"faqs":156,"releases":182},5482,"stereolabs\u002Fzed-sdk","zed-sdk","⚡️The spatial perception framework for rapidly building smart robots and spaces","zed-sdk 是由 Stereolabs 推出的跨平台空间感知框架，旨在帮助开发者充分利用 ZED 系列相机及 Ouster 激光雷达设备，快速构建智能机器人与空间感知应用。它解决了传统开发中需要分别处理视觉、定位与传感器融合数据的难题，通过统一的 API 接口将深度感知、物体检测、人体追踪、位置跟踪及全局定位等核心功能整合在一起，大幅降低了多传感器协同开发的复杂度。\n\n这款工具特别适合机器人工程师、计算机视觉研究人员以及自动驾驶领域的开发者使用。无论是需要让机器人在复杂环境中自主导航，还是希望实现高精度的三维空间重建，zed-sdk 都能提供强大的底层支持。其最新 5.2 版本带来了显著的技术亮点：在 Jetson 平台上实现了高达 85% 的 CPU 负载降低，并引入了先进的零拷贝 NV12 
接口以提升数据传输效率；同时，全新的测试版传感器 API 首次实现了对相机与激光雷达数据的统一流水线管理，无需再编写繁琐的自定义融合代码。此外，它还增强了对低光照环境的成像质量及位姿跟踪的鲁棒性，是打造下一代空间智能系统的理想选择。","\u003Ch1 align=\"center\">\n  \u003C!--- Stereolabs Banner --->\n  \u003C!--- a href=\"http:\u002F\u002Fwww.stereolabs.com\u002Fdocs\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_51597f3ea964.jpg\" alt=\"Stereolabs\">\u003C\u002Fa --->\n  ZED SDK\n  \u003Cbr>\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  The ZED SDK is a cross-platform library designed to get the best out of the \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fstore\u002F\">ZED\u003C\u002Fa> cameras. \n  \u003Cbr \u002F>\n  In this project, we provide tutorials and code samples to get started using the ZED SDK API.\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\">Website\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fstore.stereolabs.com\u002F\">Store\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002F\">API Reference\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fcommunity.stereolabs.com\u002F\">Community\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fblog\u002F\">Blog\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fstereolabs\u002Fzed-sdk?color=%2300aeec&label=ZED%20SDK\" alt=\"SDK Version\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcommunity.stereolabs.com\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscourse\u002Fposts?server=https%3A%2F%2Fcommunity.stereolabs.com%2F\" alt=\"ZED Discourse\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fhub.docker.com\u002Fu\u002Fstereolabs\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fstereolabs\u002Fzed\" alt=\"Docker Pulls\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-examples\u002Fstargazers\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fstereolabs\u002Fzed-sdk?style=social\" alt=\"Github Stars\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n:tada: The **ZED SDK 5.2** is released!\n\n**ZED SDK 5.2** delivers major performance gains on Jetson with up to 85% lower CPU load, improved GMSL driver reliability at 200 Hz IMU rate, and sharper images in low-resolution modes. It adds support for an advanced zero-copy NV12 interface on Jetson.\n\nThis release also introduces the new beta Sensors API (sl::Sensors), a unified interface for managing ZED cameras and Ouster LiDAR devices in a single pipeline — replacing the need for separate APIs and custom fusion code.\n\nThis release adds support for JetPack 7.1 \u002F L4T 38.4, unlocking hardware video encoding and decoding on Jetson Thor. 
Alongside these platform updates, version 5.2 brings important improvements to positional tracking robustness and the Python API, as well as numerous other bug fixes and feature enhancements across the SDK.\n\nPlease check the [Release Notes](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F) of the latest version for more details.\n\n## Overview\n\nDepth Sensing | Object Detection | Body Tracking |\n:------------: |  :----------: | :-------------:  |\n[![Depth Sensing](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_36e9cfe55c3c.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fdepth-sensing)  | [![Object Detection](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_35a1c8eb4696.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fobject-detection)  | [![Body Tracking](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_9e81660c22a2.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fbody-tracking)  |\n\nPositional Tracking | Global Localization | Spatial Mapping |\n:------------: |  :----------: | :-------------:  |\n[![Positional Tracking](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_56de9ac49f77.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fpositional-tracking\u002F) | [![Global Localization](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_530dbdf25259.gif)](\u002Fglobal%20localization) | [![Spatial Mapping](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_b72f698a73cd.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fspatial-mapping) |\n\nCamera Control | Plane Detection | Multi Camera Fusion |\n:------------: |  :----------: | :-------------:  |\n[![Camera 
Control](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_8e8b490e09a5.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fvideo\u002Fcamera-controls\u002F) | [![Plane Detection](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_dd1f1b0dbc63.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fspatial-mapping\u002Fplane-detection\u002F)  | [![Multi Camera Fusion](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_520d014e33ee.gif)](\u002Ffusion) |\n\n\n## Why ZED?\n\n- 🎯 End-to-end spatial perception platform for human-like sensing capabilities.\n- ⚡ Real-time performance: all algorithms of the ZED SDK are designed and optimized to run in real-time. \n- 📷 Reduce time-to-market with our comprehensive, ready-to-use hardware and software designed for multiple applications.\n- 📖 User-friendly and intuitive, with easy-to-use integrations and well-documented API for streamlined development.\n- 🛠️ Wide range of supported platforms, from desktop to embedded PCs.\n\n## Getting started\n\nThe ZED SDK contains all the libraries that power your camera along with tools that let you experiment with its features and settings.\n\nTo get started:\n- [Get a ZED from the Stereolabs Store](https:\u002F\u002Fstore.stereolabs.com\u002F)\n- [Download the ZED SDK](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F#downloads)\n- [Install the ZED SDK](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002F) on [Windows](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fwindows\u002F), [Linux](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Flinux\u002F) or [Jetson](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fjetson\u002F)\n- [Start experimenting with the ZED SDK's tutorials](\u002Ftutorials)\n\nThe [documentation](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002F) and [API 
reference](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002F) are great starting points to learn more about the ZED SDK and its many modules.\n\n## Samples\n\nThis repository contains ready-to-use and samples to start using the ZED SDK with only a few lines of code. They are organized by ZED SDK module: \n\n* [**Tutorials**](\u002Ftutorials) - A series of basic tutorials that demonstrate the use of each API module.\n\n* [**Camera Control**](\u002Fcamera%20control) - This sample shows how to adjust the **ZED camera parameters**.\n\n* [**Camera Streaming**](\u002Fcamera%20streaming) - This sample shows how to **stream** and receive on local network the ZED's video feed.\n\n* [**Depth Sensing**](\u002Fdepth%20sensing) - This sample shows how to capture a **3D point cloud** and display with OpenGL. It also shows how to save depth data in different formats.\n\n* [**Positional Tracking**](\u002Fpositional%20tracking) - This sample shows how to use **positional tracking** and display the result with *OpenGL*.\n\n* [**Global Localization**](\u002Fglobal%20localization) - This sample shows how to fuse the ZED SDK's **positional tracking with GNSS data** for global positioning.\n\n* [**Spatial Mapping**](\u002Fspatial%20mapping) - This sample shows how to capture **3D meshes** with the ZED and display it with *OpenGL*. Classic Mesh and Point Cloud fusion are available.\n\n* [**Object Detection**](\u002Fobject%20detection) - This sample shows how to use the **Object Detection API** module with the ZED.\n\n* [**Body Tracking**](\u002Fbody%20tracking) - This sample shows how to use the **Body Tracking API** with the ZED.\n\n* [**Recording**](\u002Frecording) - This sample shows how to **record** and **playback** video files in SVO format. SVO files let you use all the ZED SDK features without having a ZED connected.\n\n## Supported platforms\n\nHere is the list of all supported operating systems for the latest version of the ZED SDK. 
Please find the [recommended specifications](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fspecifications\u002F) to make sure your configuration is compatible with the ZED SDK.\n\n| Ubuntu LTS | Windows | Jetson |\n| -------- | ------------------------- | ----------------- |\n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Flinux\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_4372fe867a55.png\" width=\"40%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fwindows\">\u003Cimg  src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_cf8df3d169f7.png\" width=\"40%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fjetson\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ad553d6504b2.png\" width=\"40%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\nThe ZED SDK requires the use of an **NVIDIA GPU** with a **Compute Capability > 5**.\n\nIf you are not familiar with the corresponding versions between NVIDIA JetPack SDK and Jetson Linux, please take a look at our [blog post](https:\u002F\u002Fwww.stereolabs.com\u002Fblog\u002Fnvidia-jetson-l4t-and-jetpack-support\u002F). 
\n\n\n## Integrations\n\nThe ZED SDK can be easily integrated into projects using the following programming languages:\n\n| C++ | Python | C# | C |\n| -------- | ------------------------- | ----------------- | -------- | \n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_1bc4cfcc0816.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002Fpython\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ce5d54ef0337.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002Fcsharp\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_0887efbaf0bb.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002Fc\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_a974934bc99b.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cbr \u002F>\n\nThanks to its comprehensive API, ZED cameras can be interfaced with **multiple third-party libraries** and environments.\n\n| Unity | Unreal Engine 5 | OpenCV | ROS | ROS 2\n| -------- | ------------------------- | ----------------- | ----- | ----- |\n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Funity\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_66a1030bb5fe.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca 
href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fue5\u002F\">\u003Cimg  src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_f8ee0c1e9b2a.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fopencv\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_7fe826adaa78.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fros\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_f991ce860213.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fros2\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ede9758ebbda.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\n| Pytorch | YOLO | Matlab | Isaac SIM | Touch Designer |  \n| -------- | ------------------------- | ----------------- | ----- | ----- |\n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fpytorch\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ddcda01d577c.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fyolo\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_1188b8df2b41.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fmatlab\u002F\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_b600cbba8d7c.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_59664c3a056f.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fderivative.ca\u002FUserGuide\u002FZED\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_928bc5eb59c2.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\n\n\u003Cbr \u002F>\n\n## Community\n\nJoin the conversation and connect with other ZED SDK users to share ideas, solve problems, and help make the ZED SDK awesome. Our aim is to make it extremely convenient for everyone to communicate with us.\n\n- **Discourse** is our forum where all ZED users can connect. This is the best place to brainstorm and exchange about ZED cameras, ZED SDK software, and other Stereolabs products. Feel free to create an account and ask your questions, or even share your awesome projects!\n\n- **Twitter** Follow Stereolabs [@Stereolabs3D](https:\u002F\u002Ftwitter.com\u002Fstereolabs3d) for official news and release announcements.\n- **GitHub** If you come across a bug, please raise an issue in this [**GitHub repository**](https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-examples\u002Fissues).\n\n- **Email** To talk to Stereolabs directly, the easiest way is by email. 
Get in touch with us at support@stereolabs.com.\n\n\u003Cbr \u002F>\n\u003Cbr \u002F>\n\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstereolabs\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892870-fbac3f33-49d9-4575-9a2b-10fc2ba26091.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fstereolabs\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892887-d12a8d98-4245-4121-8d23-52bd61431b29.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Fstereolabs3d\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892805-93d657be-a54c-4e12-83c6-6e7b15a256e2.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002FStereolabs3d\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892815-f04bb1ce-aa42-49b0-bfe7-d1d051ead830.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Fcommunity.stereolabs.com\u002F\" style=\"text-decoration:none;\">\n    
\u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892794-8840d6c5-54bf-44d3-a95b-d9c51927914f.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n\u003C\u002Fdiv>\n","\u003Ch1 align=\"center\">\n  \u003C!--- 斯特雷奥拉布斯横幅 --->\n  \u003C!--- a href=\"http:\u002F\u002Fwww.stereolabs.com\u002Fdocs\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_51597f3ea964.jpg\" alt=\"Stereolabs\">\u003C\u002Fa --->\n  ZED SDK\n  \u003Cbr>\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  ZED SDK 是一款跨平台的开发库，旨在充分发挥 \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fstore\u002F\">ZED\u003C\u002Fa> 相机的性能。本项目提供了教程和代码示例，帮助您快速上手使用 ZED SDK API。\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\">官网\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fstore.stereolabs.com\u002F\">商店\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002F\">API 参考文档\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fcommunity.stereolabs.com\u002F\">社区\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fblog\u002F\">博客\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fstereolabs\u002Fzed-sdk?color=%2300aeec&label=ZED%20SDK\" alt=\"SDK 版本\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcommunity.stereolabs.com\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscourse\u002Fposts?server=https%3A%2F%2Fcommunity.stereolabs.com%2F\" alt=\"ZED 论坛\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fhub.docker.com\u002Fu\u002Fstereolabs\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fstereolabs\u002Fzed\" alt=\"Docker 拉取次数\">\u003C\u002Fa>\n  \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-examples\u002Fstargazers\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fstereolabs\u002Fzed-sdk?style=social\" alt=\"GitHub 星标数\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n:tada: **ZED SDK 5.2** 已发布！\n\n**ZED SDK 5.2** 在 Jetson 平台上实现了显著的性能提升，CPU 负载降低多达 85%；在 200 Hz IMU 采样率下，GMSL 驱动程序的可靠性也得到了改进；同时，在低分辨率模式下图像更加清晰锐利。此外，该版本还新增了对 Jetson 上高级零拷贝 NV12 接口的支持。\n\n本次发布还引入了全新的 Sensors API（sl::Sensors）测试版，这是一个统一的接口，用于在一个管道中同时管理 ZED 相机和 Ouster LiDAR 设备，从而取代了以往需要分别调用不同 API 和编写自定义融合代码的做法。\n\n此外，此版本新增了对 JetPack 7.1 \u002F L4T 38.4 的支持，解锁了 Jetson Thor 上的硬件视频编解码功能。除了这些平台更新外，5.2 版本还在定位跟踪的鲁棒性、Python API 以及 SDK 的其他多个方面进行了重要改进，并修复了大量 bug 和功能增强。\n\n更多详细信息，请参阅最新版本的 [发行说明](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F)。\n\n## 概览\n\n深度感知 | 对象检测 | 人体追踪 |\n:------------: |  :----------: | :-------------:  |\n[![深度感知](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_36e9cfe55c3c.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fdepth-sensing)  | [![对象检测](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_35a1c8eb4696.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fobject-detection)  | [![人体追踪](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_9e81660c22a2.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fbody-tracking)  |\n\n位置跟踪 | 全局定位 | 空间映射 |\n:------------: |  :----------: | :-------------:  |\n[![位置跟踪](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_56de9ac49f77.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fpositional-tracking\u002F) | [![全局定位](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_530dbdf25259.gif)](\u002Fglobal%20localization) | 
[![空间映射](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_b72f698a73cd.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fspatial-mapping) |\n\n相机控制 | 平面检测 | 多摄像头融合 |\n:------------: |  :----------: | :-------------:  |\n[![相机控制](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_8e8b490e09a5.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fvideo\u002Fcamera-controls\u002F) | [![平面检测](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_dd1f1b0dbc63.gif)](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fspatial-mapping\u002Fplane-detection\u002F)  | [![多摄像头融合](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_520d014e33ee.gif)](\u002Ffusion) |\n\n\n## 为什么选择 ZED？\n\n- 🎯 提供端到端的空间感知平台，实现类人感知能力。\n- ⚡ 实时性能：ZED SDK 中的所有算法均经过精心设计和优化，可在实时环境中运行。\n- 📷 凭借我们为多种应用场景量身打造的全面且开箱即用的软硬件解决方案，缩短产品上市时间。\n- 📖 用户友好且直观，提供易于使用的集成方式和文档完善的 API，助力开发流程高效顺畅。\n- 🛠️ 支持广泛的平台，从桌面设备到嵌入式 PC 均可兼容。\n\n## 开始使用\n\nZED SDK 包含驱动您的相机所需的所有库，同时还提供工具让您能够试验其各项功能和设置。\n\n开始使用步骤：\n- [从 Stereolabs 商店购买 ZED 相机](https:\u002F\u002Fstore.stereolabs.com\u002F)\n- [下载 ZED SDK](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F#downloads)\n- [在 Windows](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fwindows\u002F)、[Linux](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Flinux\u002F) 或 [Jetson](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fjetson\u002F) 上安装 ZED SDK\n- [开始体验 ZED SDK 的教程](\u002Ftutorials)\n\n[文档](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002F) 和 [API 参考文档](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002F) 是深入了解 ZED SDK 及其众多模块的绝佳起点。\n\n## 示例\n\n此仓库包含开箱即用的示例代码，只需几行代码即可开始使用 ZED SDK。这些示例按 ZED SDK 模块组织：\n\n* [**教程**](\u002Ftutorials) - 一系列基础教程，演示如何使用每个 API 模块。\n\n* [**相机控制**](\u002Fcamera%20control) - 此示例展示了如何调整 **ZED 相机参数**。\n\n* [**相机流传输**](\u002Fcamera%20streaming) - 
此示例展示了如何在本地网络中 **流式传输** 并接收 ZED 的视频流。\n\n* [**深度感知**](\u002Fdepth%20sensing) - 此示例展示了如何捕获 **3D 点云** 并使用 OpenGL 进行显示。它还演示了如何以不同格式保存深度数据。\n\n* [**位置跟踪**](\u002Fpositional%20tracking) - 此示例展示了如何使用 **位置跟踪** 功能，并通过 *OpenGL* 显示结果。\n\n* [**全局定位**](\u002Fglobal%20localization) - 此示例展示了如何将 ZED SDK 的 **位置跟踪与 GNSS 数据** 融合，实现全球定位。\n\n* [**空间建图**](\u002Fspatial%20mapping) - 此示例展示了如何使用 ZED 捕获 **3D 网格**，并使用 *OpenGL* 进行显示。支持经典网格与点云融合。\n\n* [**目标检测**](\u002Fobject%20detection) - 此示例展示了如何使用 ZED 的 **目标检测 API 模块**。\n\n* [**人体追踪**](\u002Fbody%20tracking) - 此示例展示了如何使用 ZED 的 **人体追踪 API**。\n\n* [**录制**](\u002Frecording) - 此示例展示了如何以 SVO 格式 **录制** 和 **回放** 视频文件。SVO 文件允许您在未连接 ZED 的情况下使用 ZED SDK 的所有功能。\n\n## 支持的平台\n\n以下是最新版本 ZED SDK 支持的所有操作系统的列表。请参阅[推荐配置](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fspecifications\u002F)，以确保您的配置与 ZED SDK 兼容。\n\n| Ubuntu LTS | Windows | Jetson |\n| -------- | ------------------------- | ----------------- |\n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Flinux\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_4372fe867a55.png\" width=\"40%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fwindows\">\u003Cimg  src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_cf8df3d169f7.png\" width=\"40%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fjetson\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ad553d6504b2.png\" width=\"40%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\nZED SDK 需要使用具有 **计算能力 > 5** 的 **NVIDIA GPU**。\n\n如果您不熟悉 NVIDIA JetPack SDK 和 Jetson Linux 之间的对应版本关系，请查看我们的 
[博客文章](https:\u002F\u002Fwww.stereolabs.com\u002Fblog\u002Fnvidia-jetson-l4t-and-jetpack-support\u002F)。\n\n\n## 集成\n\nZED SDK 可以轻松集成到使用以下编程语言的项目中：\n\n| C++ | Python | C# | C |\n| -------- | ------------------------- | ----------------- | -------- | \n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_1bc4cfcc0816.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002Fpython\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ce5d54ef0337.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002Fcsharp\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_0887efbaf0bb.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002Fc\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_a974934bc99b.png\" width=\"50%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cbr \u002F>\n\n凭借其全面的 API，ZED 相机可以与 **多个第三方库** 和环境进行对接。\n\n| Unity | Unreal Engine 5 | OpenCV | ROS | ROS 2\n| -------- | ------------------------- | ----------------- | ----- | ----- |\n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Funity\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_66a1030bb5fe.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca 
href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fue5\u002F\">\u003Cimg  src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_f8ee0c1e9b2a.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fopencv\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_7fe826adaa78.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fros\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_f991ce860213.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fros2\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ede9758ebbda.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\n| Pytorch | YOLO | Matlab | Isaac SIM | Touch Designer |  \n| -------- | ------------------------- | ----------------- | ----- | ----- |\n| \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fpytorch\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_ddcda01d577c.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>  | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fyolo\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_1188b8df2b41.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fmatlab\u002F\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_b600cbba8d7c.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.stereolabs.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_59664c3a056f.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv> | \u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fderivative.ca\u002FUserGuide\u002FZED\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_928bc5eb59c2.png\" width=\"70%\" alt=\"\" \u002F>\u003C\u002Fa>\u003C\u002Fdiv>\n\n\n\u003Cbr \u002F>\n\n## 社区\n\n加入讨论，与其他 ZED SDK 用户交流，分享创意、解决问题，共同让 ZED SDK 更加出色。我们的目标是让大家与我们沟通变得无比便捷。\n\n- **Discourse** 是我们的论坛，所有 ZED 用户都可以在这里交流互动。这里是探讨 ZED 相机、ZED SDK 软件以及其他 Stereolabs 产品的好去处。欢迎注册账号，提出你的问题，或者分享你精彩的项目！\n\n- **Twitter** 关注 Stereolabs [@Stereolabs3D](https:\u002F\u002Ftwitter.com\u002Fstereolabs3d)，获取官方新闻和发布公告。\n- **GitHub** 如果你发现了 bug，请在这个 [**GitHub 仓库**](https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-examples\u002Fissues) 中提交 issue。\n- **电子邮件** 如果你想直接与 Stereolabs 沟通，最简单的方式就是发送邮件至 support@stereolabs.com。\n\n\u003Cbr \u002F>\n\u003Cbr \u002F>\n\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstereolabs\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892870-fbac3f33-49d9-4575-9a2b-10fc2ba26091.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fstereolabs\" style=\"text-decoration:none;\">\n    \u003Cimg 
src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892887-d12a8d98-4245-4121-8d23-52bd61431b29.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Fstereolabs3d\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892805-93d657be-a54c-4e12-83c6-6e7b15a256e2.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002FStereolabs3d\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892815-f04bb1ce-aa42-49b0-bfe7-d1d051ead830.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_readme_896994f79d98.png\" width=\"3%\" alt=\"\" \u002F>\n  \u003Ca href=\"https:\u002F\u002Fcommunity.stereolabs.com\u002F\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Fuser-images.githubusercontent.com\u002F32394882\u002F228892794-8840d6c5-54bf-44d3-a95b-d9c51927914f.svg\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa>\n\u003C\u002Fdiv>","# ZED SDK 快速上手指南\n\nZED SDK 是 Stereolabs 推出的跨平台库，旨在充分发挥 ZED 系列立体相机的性能，提供深度感知、物体检测、人体追踪、空间映射及定位等核心功能。\n\n## 1. 
环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n### 硬件要求\n*   **相机设备**：ZED、ZED Mini、ZED2、ZED2i 或 ZED X 系列相机。\n*   **GPU**：必须配备 **NVIDIA GPU**，且计算能力 (Compute Capability) **> 5.0**。\n*   **嵌入式平台**：若使用 NVIDIA Jetson 系列，需确认型号兼容性。\n\n### 软件与系统支持\n*   **操作系统**：\n    *   Ubuntu LTS (Linux)\n    *   Windows 10\u002F11\n    *   NVIDIA JetPack (Jetson Linux)\n*   **编程语言支持**：C++, Python, C#, C。\n*   **前置依赖**：\n    *   NVIDIA CUDA Toolkit (版本需与 SDK 匹配)\n    *   OpenGL (用于可视化示例)\n    *   Git (用于克隆示例代码)\n\n> **注意**：具体推荐的硬件配置和 CUDA 版本对应关系，请参考官方 [规格说明](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Finstallation\u002Fspecifications\u002F)。\n\n## 2. 安装步骤\n\n### 第一步：下载 SDK\n访问 Stereolabs 官网下载页面获取最新版本的安装包（目前最新版本为 **ZED SDK 5.2**）：\n*   **下载地址**：[https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F#downloads](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F#downloads)\n\n> **国内开发者提示**：由于服务器位于海外，下载大文件时可能较慢。建议使用支持断点续传的下载工具，或检查是否有社区提供的国内镜像源。\n\n### 第二步：安装 SDK\n\n#### Windows 用户\n1.  运行下载的 `.exe` 安装程序。\n2.  按照向导完成安装，默认路径通常为 `C:\\Program Files (x86)\\ZED SDK`。\n3.  安装完成后重启计算机。\n\n#### Linux (Ubuntu) 用户\n1.  打开终端，进入下载目录。\n2.  赋予执行权限并运行安装脚本：\n    ```bash\n    chmod +x ZED_SDK_Linux_Ubuntu20.run # 文件名根据实际版本调整\n    .\u002FZED_SDK_Linux_Ubuntu20.run\n    ```\n3.  按照终端提示选择组件（建议全选以包含样例和工具）。\n\n#### Jetson 用户\n1.  确保已刷入兼容的 JetPack 版本（SDK 5.2 支持 JetPack 7.1 \u002F L4T 38.4）。\n2.  运行针对 Jetson 的安装包：\n    ```bash\n    chmod +x ZED_SDK_Tegra_L4T35.run # 文件名需与实际刷入的 JetPack\u002FL4T 版本一致\n    .\u002FZED_SDK_Tegra_L4T35.run\n    ```\n\n### 第三步：验证安装\n安装完成后，连接 ZED 相机，运行自带的诊断工具 `ZED Diagnostic Tool` (Windows) 或在终端运行 `\u002Fusr\u002Flocal\u002Fzed\u002Ftools\u002FZED_Diagnostic` (Linux) 检查相机连接状态及固件版本。\n\n## 3. 
基本使用\n\n安装完成后，您可以立即通过官方提供的示例代码进行体验。\n\n### 获取示例代码\n克隆官方示例仓库：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-examples.git\ncd zed-examples\n```\n\n### 快速体验：深度感知 (Depth Sensing)\n以下以 **Python** 为例，展示如何初始化相机并获取深度图像。\n\n**前置条件**：确保已安装 `pyzed` 包（通常通过 SDK 自带的 `get_python_api.py` 脚本安装，具体请参考官方 Python API 安装文档）。\n\n**代码示例 (`tutorial_depth.py`)**：\n\n```python\nimport pyzed.sl as sl\n\ndef main():\n    # 1. 创建相机对象\n    zed = sl.Camera()\n\n    # 2. 设置初始化参数\n    init_params = sl.InitParameters()\n    init_params.camera_resolution = sl.RESOLUTION.HD720\n    init_params.depth_mode = sl.DEPTH_MODE.PERFORMANCE\n    init_params.coordinate_units = sl.UNIT.METER\n\n    # 3. 打开相机\n    err = zed.open(init_params)\n    if err != sl.ERROR_CODE.SUCCESS:\n        print(f\"Camera Open Failed: {err}\")\n        exit(1)\n\n    # 4. 准备图像内存\n    image_left = sl.Mat()\n    depth_image = sl.Mat()   # 深度可视化图像（仅用于显示）\n    depth_map = sl.Mat()     # 深度测量值（单位：米）\n\n    runtime_params = sl.RuntimeParameters()\n\n    print(\"按 Ctrl+C 退出...\")\n\n    while True:\n        # 5. 抓取图像\n        if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:\n            # 获取左眼彩色图像\n            zed.retrieve_image(image_left, sl.VIEW.LEFT)\n            # 获取深度可视化图像（8 位灰度，不能直接读出距离）\n            zed.retrieve_image(depth_image, sl.VIEW.DEPTH)\n            # 获取真实深度测量值\n            zed.retrieve_measure(depth_map, sl.MEASURE.DEPTH)\n\n            # 此处可添加 OpenCV 显示逻辑或数据处理逻辑\n            # 例如读取 HD720 (1280x720) 图像中心的深度：\n            # err, dist = depth_map.get_value(640, 360)\n        else:\n            break\n\n    # 6. 
关闭相机\n    zed.close()\n\nif __name__ == \"__main__\":\n    main()\n```\n\n### 更多模块示例\n仓库中包含了按功能分类的丰富示例，您可以直接编译或运行以下目录中的代码：\n*   **物体检测**：`\u002Fobject_detection` - 实时检测行人、车辆等。\n*   **人体追踪**：`\u002Fbody_tracking` - 获取人体骨骼关键点。\n*   **空间映射**：`\u002Fspatial_mapping` - 构建 3D 网格模型。\n*   **定位追踪**：`\u002Fpositional_tracking` - 获取相机在空间中的位姿。\n\n详细 API 文档请参阅：[https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002F](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fapi\u002F)","某物流仓储团队正在为自主移动机器人（AMR）开发一套能在动态人流环境中实现高精度导航与避障的系统。\n\n### 没有 zed-sdk 时\n- **感知开发周期漫长**：工程师需手动编写复杂的立体匹配算法来计算深度图，耗时数月且难以在低纹理的仓库地面保持精度。\n- **多传感器融合困难**：想要结合激光雷达与视觉数据，必须自行开发底层驱动和对齐代码，导致系统架构臃肿且极易出现时间同步误差。\n- **动态障碍物识别缺失**：传统方案仅能检测静态障碍，无法区分静止货架与行走的工人，机器人常因误判而频繁急停或发生碰撞。\n- **边缘端算力瓶颈**：未优化的算法在 Jetson 等嵌入式设备上 CPU 占用率极高，导致机器人续航大幅缩短且帧率不稳定。\n\n### 使用 zed-sdk 后\n- **快速构建空间感知**：直接调用 zed-sdk 内置的深度传感与空间映射 API，几天内即可生成厘米级精度的实时 3D 环境模型。\n- **统一传感器管道**：利用新版 Sensors API 将 ZED 相机与 Ouster 激光雷达纳入单一数据流，自动完成硬件级同步与融合，无需重复造轮子。\n- **智能人体追踪避障**：启用身体追踪功能，机器人能精准识别工人的骨骼关节与运动轨迹，实现平滑绕行而非机械式急停。\n- **高性能边缘部署**：借助针对 Jetson 优化的零拷贝接口与硬件编解码支持，CPU 负载降低 85%，显著提升了机器人的运行效率与续航。\n\nzed-sdk 将原本需要数月攻坚的底层视觉难题转化为可调用的标准模块，让团队能专注于上层业务逻辑，加速了智能机器人的落地交付。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstereolabs_zed-sdk_36e9cfe5.gif","stereolabs","Stereolabs","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fstereolabs_5bdadeae.jpg","Stereolabs is the leading provider of vision-based AI perception for autonomy and 3D 
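上文的深度感知教程获取到深度数据后，常见的下一步是把深度图反投影为相机坐标系下的 3D 点。ZED SDK 本身可通过 `retrieve_measure` 搭配 `sl.MEASURE.XYZ` 直接取得点云；下面的纯 NumPy 片段仅用于演示针孔相机模型反投影的几何原理，其中 `depth_to_points` 为示意函数，fx、fy、cx、cy 的取值均为假设，并非官方 API：

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """将以米为单位的深度图按针孔相机模型反投影为 3D 点。
    内参 fx、fy、cx、cy 为示意值，实际应从相机标定信息中读取。"""
    h, w = depth.shape
    # 构造像素坐标网格：u 为列坐标，v 为行坐标
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    # 输出形状为 (h, w, 3)，每个像素对应一个 (x, y, z) 点
    return np.stack([x, y, depth], axis=-1)

# 用 2x2 的模拟深度图验证几何关系（所有像素深度均为 2 米）
depth = np.full((2, 2), 2.0)
pts = depth_to_points(depth, fx=700.0, fy=700.0, cx=0.5, cy=0.5)
print(pts.shape)
```

实际项目中可直接调用 `zed.retrieve_measure(mat, sl.MEASURE.XYZ)` 获得 SDK 计算好的点云，上述代码只是帮助理解其背后的几何换算。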
sensing.",null,"support@stereolabs.com","stereolabs3d","https:\u002F\u002Fstereolabs.com","https:\u002F\u002Fgithub.com\u002Fstereolabs",[82,86,90,94,98,102,106,109,112],{"name":83,"color":84,"percentage":85},"C++","#f34b7d",83.4,{"name":87,"color":88,"percentage":89},"Python","#3572A5",9.3,{"name":91,"color":92,"percentage":93},"C#","#178600",5.3,{"name":95,"color":96,"percentage":97},"CMake","#DA3434",1.2,{"name":99,"color":100,"percentage":101},"C","#555555",0.4,{"name":103,"color":104,"percentage":105},"CSS","#663399",0.1,{"name":107,"color":108,"percentage":105},"HTML","#e34c26",{"name":110,"color":111,"percentage":105},"Cuda","#3A4E3A",{"name":113,"color":114,"percentage":105},"JavaScript","#f1e05a",1103,494,"2026-04-07T17:08:54","MIT",4,"Linux (Ubuntu LTS), Windows, Jetson (L4T)","必需 NVIDIA GPU，计算能力 (Compute Capability) > 5.0；Jetson 平台需特定 JetPack 版本支持","未说明",{"notes":124,"python":125,"dependencies":126},"该工具是 Stereolabs ZED 相机的专用 SDK，并非纯软件 AI 模型。必须配合 ZED 系列立体相机或 Ouster LiDAR 硬件使用。支持 Ubuntu LTS、Windows 和 NVIDIA Jetson 嵌入式平台。最新版本 (5.2) 针对 Jetson Thor 优化并支持 JetPack 7.1。提供 C++、Python、C# 和 C 语言接口，并可集成至 ROS、Unity、UE5 等环境。","支持 Python API (具体版本未在片段中明确，通常需 3.8+)",[127,128,129,130,131,132,133],"CUDA Toolkit","OpenGL","ROS\u002FROS2 (可选)","Unity (可选)","Unreal Engine 5 (可选)","OpenCV (可选)","PyTorch (可选)",[135,15,14],"其他",[137,138,139,140,141,142,143,144,145,146,147,148,149,150,72,151,152,153],"zed-camera","stereo-vision","slam","3d-reconstruction","computer-vision","depth-camera","depth-estimation","human-pose-estimation","machine-learning","object-detection","perception","point-cloud","realtime","stereo","zed","zed-x","zed2","2026-03-27T02:49:30.150509","2026-04-08T19:00:32.912323",[157,162,167,172,177],{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},24888,"如何在 ZED 相机上结合使用 PyTorch (如 MMDetection) 进行目标检测而不发生冲突？","这是一个由 CUDA 上下文冲突引起的常见问题，PyTorch 和 ZED SDK 都有各自的 CUDA 上下文，不能隐式混合使用。\n解决方案是将 ZED 的函数放在与 PyTorch 不同的线程中运行。可以参考 zed-tensorflow 
项目中的实现方式：https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-tensorflow\u002Fblob\u002Fmaster\u002Fobject_detection_zed.py","https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-sdk\u002Fissues\u002F451",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},24889,"读取 SVO 文件时出现内存泄漏（内存占用随时间增长）怎么办？","这通常不是真正的内存泄漏，而是 glibc 管理内存的机制导致的。当使用 malloc\u002Ffree 时，释放的内存会被标记为可用并保留在进程堆中以供未来分配，而不会立即返回给操作系统。\n建议尝试使用 tcmalloc 来改善内存管理行为。","https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-sdk\u002Fissues\u002F534",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},24890,"在 Ubuntu 17.10 或新内核版本上检测到 ZED 相机但无法打开（报错 'ZED Detected, but cannot open Camera'）如何解决？","这通常是由于较新的 Linux 内核（如 4.13+）与旧版 SDK 或驱动不兼容导致的。\n解决方法是降级内核或使用较早版本的 Ubuntu 内核。可以在 GRUB 启动加载时选择旧内核，或者永久修改 GRUB 配置以默认加载旧内核。参考教程：https:\u002F\u002Funix.stackexchange.com\u002Fquestions\u002F198003\u002Fset-default-kernel-in-grub","https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-sdk\u002Fissues\u002F34",{"id":173,"question_zh":174,"answer_zh":175,"source_url":176},24891,"升级 SDK 或 CUDA 后对象检测（Object Detection）失败或诊断工具显示显卡问题，该如何修复？","这可能是由于旧的深度学习模型文件与新版本不兼容导致的。请尝试以下步骤重置并优化模型：\n1. 首先清理所有剩余的模型相关文件：\n   `\u002Fusr\u002Flocal\u002Fzed\u002Ftools\u002FZED_Diagnostic -aic`\n2. 
然后重新优化模型：\n   `\u002Fusr\u002Flocal\u002Fzed\u002Ftools\u002FZED_Diagnostic -aio`\n如果问题依旧，建议在 gdb 中运行样本程序以获取更详细的崩溃堆栈信息。","https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-sdk\u002Fissues\u002F436",{"id":178,"question_zh":179,"answer_zh":180,"source_url":181},24892,"在 Windows 上使用 NEURAL 深度模式时，运行应用后深度图全黑且模型未优化，如何解决？","根据官方说明，首次使用 NEURAL 深度模式需要等待优化完成。如果运行 `ZED_Diagnostic -nrlo` 后仍提示未优化，且应用直接启动导致黑屏，说明 DL 模型优化过程未正确执行。\n目前该问题已被开发团队记录并尝试复现。临时建议确保以管理员权限运行诊断工具，并在运行应用程序前确认诊断工具明确显示优化已完成。如果是在虚拟环境中，请尝试退出虚拟环境后测试。","https:\u002F\u002Fgithub.com\u002Fstereolabs\u002Fzed-sdk\u002Fissues\u002F469",[183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258,263,268,273,278],{"id":184,"version":185,"summary_zh":186,"released_at":187},154356,"5.2.1","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease#521-b35bdf54f33238c75ea27a24a1a21a4e\n\n## 5.2.1\n\n### SDK\n\n#### 通用\n- 修复了在使用 `NV12` 视图及默认分辨率时，`Camera::retrieveImage` 和 `CameraOne::retrieveImage` 的问题：现在会正确使用相机分辨率，而不再要求用户手动设置。\n- 修复了 ZED2\u002FZED2i 可能报告位置跟踪锁定信息的问题。\n- 修复了 `Sensors::enableObjectDetection` 在未显式设置 `sl::ObjectDetectionSensorsParameters` 中的 `fused_objects_group_name` 时无法正常工作的问题。当该字段留空时，现会自动使用默认分组名称，从而简化单个融合分组配置的设置流程。若使用多个融合分组，则仍需设置自定义名称。\n- 修复了在使用 `Camera::retrieveImage(RawBuffer&)` 时，`RawBuffer::getWidth()` 和 `RawBuffer::getHeight()` 总是返回 0 的问题。现在缓冲区元数据会正确报告相机的捕获分辨率。\n- 将 GEN_3 设置为全局默认模式。对 C++、C#、Python 参数、ZEDFu 以及此前遗漏的相关示例进行了统一性修复。\n- 现在，在模块启用过程中出现的负值 ERROR_CODE 结果会被正确记录为警告，与 grab() 的错误处理方式保持一致。\n- 修复了在启用损坏帧检测器时，使用 `Camera::grab()` 可能出现的罕见锁问题。\n- 修复了 ZED X 相机在获取 `RawBuffer` 时可能导致左右缓冲区互换的问题。\n- 降低了每个相机实例的 SDK 内存占用（最高可节省 40MB）。\n- 修复了 `CameraOne::retrieveImage` 对于 `VIEW::LEFT_UNRECTIFIED` 的行为：在 5.2.0 版本中，该模式曾返回校正后的图像。\n- 修复了 GMSL2 相机可能无限期停留在 `sl::CAMERA_STATE::REBOOTING` 状态的问题。\n- 修复了 ZED X One UHD 相机在 QHDPLUS 模式下可能出现的打开问题。\n- 禁用了 ZED X One UHD 相机的伽马补偿功能，因为它与 ISP 存在冲突。\n\n#### 流媒体\n- 修复了因控制包混入视频数据流而导致的流媒体损坏和闪烁问题。\n- 修复了双传感器相机（ZED X、ZED X Mini）右侧传感器的自动曝光\u002F自动增益 ROI 不正确的问题。\n- 
修复了立体相机中发送端报告副传感器（右侧）曝光\u002F增益范围值错误的问题。\n- 修复了 ZED X 相机可能导致左右图像互换的问题，该问题会导致深度图损坏并触发潜在标定问题的警告。\n- 修复了流媒体接收超时时可能出现的内存泄漏问题。\n- 修复了关闭流媒体接收端时可能出现的卡死问题。\n\n#### 对象检测\n- 修复了 `CustomObjectDetectionRuntimeParameters` 的类级属性在使用自定义对象检测模型（`ingestCustomBoxObjects` 和 `ingestCustomMaskObjects`）时无效的问题。现在，使用 `CUSTOM_BOX_OBJECTS` 时，类级属性（置信度阈值、`is_grounded`、跟踪参数、类别过滤等）能够正确应用。","2026-03-03T13:15:43",{"id":189,"version":190,"summary_zh":191,"released_at":192},154357,"5.2.0","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease#520-beta-8a756bc18083fc9b21b949fb5fa8ffbb\n\n## 5.2.0\n\nZED SDK 5.2 在 Jetson 平台上实现了显著的性能提升，CPU 负载最高可降低 85%，200 Hz IMU 采样率下的 GMSL 驱动程序可靠性也得到改善，并且在低分辨率模式下图像更加清晰。此外，此版本还引入了全新的 Sensors API（`sl::Sensors`），这是一个用于在单个数据流中统一管理 ZED 相机和 Ouster LiDAR 设备的接口，从而取代了以往需要分别使用不同 API 和自定义融合代码的做法。\n\n### SDK\n\n#### 通用\n- 在 Jetson AGX 上，CPU 负载降低了 51%–85%，抓取延迟降低了 5%–45%（基于 1× 至 8× GMSL 摄像头、SVGA 至 HD1200 分辨率，在启用或禁用神经深度以及进行流式传输或录制的情况下进行基准测试）。多摄像头配置下的内存占用最高可减少 28%。\n- 向 `sl::CameraOne` 添加了 `getCudaStream()` 方法，以匹配 `sl::Camera` API，实现 CUDA 流的统一管理。\n- 向 `InitParametersOne` 添加了 `sdk_gpu_id` 参数。\n- 修复了在运行时参数中禁用深度时可能出现的死锁问题。\n- 现在，`grab()` 方法仅在图像确实损坏（出现绿色或紫色画面）时返回 `CORRUPTED_FRAME`；其他质量问题（镜头遮挡、立体不匹配、模糊、光线不足等）则返回 `SUCCESS`。\n- 修复了在 Windows 系统上，`sl::CameraConfiguration::fps` 无法正确返回用户请求的帧率的问题。\n- 弃用了 `InitParameters::async_image_retrieval` 参数，该参数现已不再生效。\n- 新增了 `sl::Camera::retrieveTensor` 和 `sl::CameraOne::retrieveTensor` 方法，用于获取包含输入图像的 `sl::Tensor` 对象，这些图像已针对使用 SVO 或实时相机进行推理进行了预处理。该方法可与 `sl::TensorParameters` 结合使用，以定义深度学习推理所需的输入选项。有关使用示例，请参阅教程 12。\n- 更新了 `convertCoordinateSystem` 和 `convertUnit` 方法，使其接受 `cudaStream_t` 作为参数，从而便于对 GPU 内存中的 `sl::Mat` 对象执行操作。\n- 新增了 `applyTransform` 方法，用于对点云矩阵应用旋转和平移变换。\n- 将默认压缩模式统一设置为 H.265，适用于流式传输和录制。\n\n#### 捕获\n- 改进了低分辨率模式下的图像处理，使图像明显更清晰，模糊现象也有所减少。\n- 提升了高负载和高 FPS 场景下的捕获稳定性。\n- 引入了 `InputType::setFromGMSLPort(int gmsl_port)` 方法，可根据物理连接端口打开 GMSL 摄像头，这对于布线固定但序列号可能变化的静态生产设备尤为有用。\n- 向 `MAT_TYPE` 中添加了 `NV12`，并向 `VIEW` 中添加了 
`LEFT_NV12_UNRECTIFIED` 和 `RIGHT_NV12_UNRECTIFIED`。这些新视图允许通过 `Camera::retrieveImage` 或 `CameraOne::retrieveImage` 请求 NV12 格式的 `sl::Mat`。\n- 修复了在不同进程中并行打开 GMSL 摄像头的问题。\n\n#### 零拷贝捕获（仅限 Jetson）\n\n**仅供高级用户使用：** 引入了 `RawBuffer` API，支持以 NV12 格式零拷贝访问原生相机捕获缓冲区（NvBufSurface \u002F Argus）。\n- 消除了内存复制…","2026-02-10T13:15:42",{"id":194,"version":195,"summary_zh":196,"released_at":197},154365,"5.0.3","https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#503-01552fd28a68e574d70fcf78818f9e1b\n\n## 5.0.3\n\n### SDK\n\n- 将 CameraOne 的 `InitParametersOne` 中的 `svo_real_time_mode` 默认值由 `true` 改为 `false`，以匹配双目相机的默认参数。\n- 改进了 Python API 的安装脚本，增强了错误处理能力，尤其针对权限问题和不支持的平台。\n\n### Bug 修复\n\n- 修复了在重复设置目标检测运行时参数时可能出现的竞争条件问题。\n- 修复了目标检测模块预处理中的精度回归问题，该问题自 5.0.0 版本引入。在使用某些相机分辨率（如 HD1200）时，边界框可能会出现错误的偏移。\n- 修复了 Windows 系统下无法从融合标定文件中正确解析 SVO 文件的问题。\n- 修复了 ZED X One 录制 SVO 文件时，在使用默认码率的情况下，生成的 SVO 文件大小与双目相机录制时相同的问题。现在，ZED X One 录制的 SVO 文件大小已恢复正常。\n- 修复了在反复启动和停止录制时出现的内存泄漏问题。\n\n### 示例代码\n\n- 修复了 Python 示例中 OpenGL 窗口初始化的类型错误，该错误可能导致程序崩溃。\n\n### 工具\n\n- 修复了 ZEDSVOEditor 在导出为 MCAP 格式时，针对使用 ZEDOne 录制的 SVO 文件存在的问题。","2025-07-22T14:59:53",{"id":199,"version":200,"summary_zh":201,"released_at":202},154366,"5.0.2","https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#502-ga-dd38a90c2359bfc27d07c490fdadd733\r\n\r\n## 5.0.2\r\n\r\n### SDK\r\n\r\n- Significantly improved the accuracy of `NEURAL` and `NEURAL_PLUS` depth modes, even compared to the NEURAL model released in 5.0.0 EA, with enhanced temporal stability and robustness against reflections, repetitive patterns, and complex-structured objects.  \r\n- Added NEURAL model files in the ZED SDK installer to avoid additional downloads at runtime.  \r\n- Added supporting functions `isCameraOne`, `isAvailable`, `supportHDR`, and `getAvailableCameraFPS` to easily check cameras compatible and available modes and capabilities.\r\n- Switched default streaming mode to GEN1 while GEN2 is being improved and stabilized. 
Users can still use streaming GEN2 by setting the environment variable ZED_SDK_STREAM_VERSION=2.\r\n\r\n\r\n### Bug Fixes\r\n\r\n- Fixed a crash that occurred when calling `retrieve_objects` using `CUSTOM_BOX_OBJECTS` with the resolution set to VGA.  \r\n- Fixed a rare undefined behavior that led to random values when calling `getCurrentFPS`.  \r\n- Fixed a rare crash that could occur after repeatedly opening and closing the same camera within the same process.  \r\n- Fixed a possible race condition when opening multiple GMSL cameras from different processes.  \r\n- Fixed a crash when using Fusion with multiple cameras and multiple GPUs.  \r\n- Fixed an accuracy regression with body fitting in NEURAL modes.  \r\n- Fixed a `UDPSocket` crash when streaming from a Jetson to a desktop.  \r\n- Fixed an issue that prevented `enableRecording` from working with localhost. This is now supported.\r\n- Fixed a memory leak in Local Streaming IPC mode (with the ZED Media Server).  \r\n- Fixed a segmentation fault on the sender side in Local Streaming that occurred randomly after some time.  \r\n- Fixed `getVideoSettings(sl::VIDEO_SETTINGS::WHITEBALANCE_AUTO)` on ZED-X \u002F ZED-XOne, which was returning an incorrect value at launch (noticeable in ZED Explorer with multiple cameras).\r\n\r\n### Tools\r\n\r\n- Added focal length information (in millimeters) in ZED Explorer, within the 'Calibration' window.  \r\n- Added SVO auto-repair support in ZED Explorer. It will now attempt to auto-repair corrupted SVO files upon opening, similar to ZED Depth Viewer (or SDK).  \r\n- Fixed ZED Explorer framerate calculator.\r\n- Fixed model downloads in ZED Diagnostic tool when GPU is not available.\r\n- Fixed minor UI issues in ZED Explorer.  \r\n- Fixed minor UI issues in ZED Media Server.  \r\n- Fixed video settings control through receiver\u002Fhost in ZED Media Server. Users can now control virtual ZED-X camera video settings from the receiver side.  
\r\n- Fixed stop signal handling in CLI mode for a proper and clean exit in ZED Media Server.  \r\n- Added support for different ZED-XOne camera models connected to ZED Media Server (identical resolution is still required).  \r\n- Added support for multiple JSON configuration files for virtual cameras in CLI mode via the `--config` option in ZED Media Server.\r\n\r\n### Wrappers\r\n\r\n- Improved Python wrapper performance when using multiple cameras in multiple threads.  \r\n- Improved pip installation behavior in the Python wrapper: now uses `--force-reinstall` by default to avoid issues with stale `pyzed` after reinstallation.  \r\n- Fixed Docker images with OpenGL display; they are now available again.  \r\n- Fixed minor issues in the C and C# wrappers.\r\n\r\n### Samples\r\n\r\n- Improved C++ and Python samples for camera streaming and recording. They are now available and optimized for both single and multi-camera setups.\r\n","2025-07-22T14:59:04",{"id":204,"version":205,"summary_zh":206,"released_at":207},154367,"5.0.1_RC","https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#501-rc-e3e315a2d07b432b0db3e2b5d5c8d257\r\n\r\n# 5.0.1 RC\r\n\r\nPlease note that 5.0.1 and 5.0.0 are not binary compatible. The Python API package will only work with 5.0.1. You should compile it from source if needed to run with 5.0.0.\r\n\r\n### SDK\r\n\r\n- Added SVO real-time mode pausing function.  \r\n- Added dedicated setter functions for the Object Detection and Body Tracking runtime parameters. New `retrieveObjects` and `retrieveBodies` functions without runtime parameters have been added as well.\r\n- Added functions `ENU2Geo` and `Geo2ENU` in `sl::Fusion`, making it easy to compute latitude-longitude coordinates of a 3D point. See the C++ Geoloc playback sample (`convert2Geo` function).  \r\n- Improved SVO playback for large files.  \r\n- Added covariance output for Positional Tracking `GEN_3`.  
\r\n- Improved Positional Tracking `GEN_3` loop closure detection and map optimization.  \r\n- Reduced Positional Tracking `GEN_3` area mapping file size and saving\u002Floading times.  \r\n- Improved Region of Interest precision when using Object Detection or Body Tracking. The boxes are now filtered depending on the ratio in the region of interest mask instead of the center and bounding box edges.  \r\n- Improved Python API code autocompletion support for most IDEs by including the .pyi stub in the .whl package.\r\n- Improved Object Detection fusion internal synchronization — the process is now safer and more efficient.  \r\n- Added ONNX model name as a prefix in the optimized model name when using Object Detection with a custom YOLO-like ONNX model.\r\n\r\n### Bug fixes\r\n\r\n- Fixed ZED X auto-recovery function. A regression introduced in 5.0.0 prevented the GMSL camera recovery in case of an interruption.  \r\n- Fixed a rare crash that could occur when enabling NEURAL depth mode.  \r\n- Fixed a deadlock in the Object Detection module with the new internal threaded mode introduced in 5.0.0.  \r\n- Fixed an unclosed file descriptor on Jetson when using SVO H26X input. This could lead to undefined behavior if the Camera class was opened and closed hundreds of times in the same instance processing hardware-decoded SVOs.  \r\n- Fixed a regression when using multiple GPUs. It now correctly uses the selected device ID.\r\n- Fixed multiple bugs in `setSVOPosition` functions using index or timestamp input. It should now set the expected frame.  \r\n- Fixed a small memory leak when using Fusion.  \r\n- Fixed AI model optimization log when using ROS.  \r\n- Fixed Object Detection crash when passing an invalid or missing custom YOLO-like ONNX file.  \r\n- Fixed undefined behavior in Object Detection and Body Tracking when processing detector output.  \r\n- Fixed incorrect `retrieveImage` output when using specific resolutions. 
The issue could affect grayscale or low-resolution images.  \r\n- Fixed `isVideoSettingsSupported` function with the `AEC_AGC_ROI` setting that would return invalid results.\r\n\r\n### Tools\r\n\r\n- Fixed ZEDfu NEURAL depth mode optimization.  \r\n- Improved Depth Viewer camera open when switching between camera models.  \r\n- Improved ZED Explorer firmware update GUI on ZED X for clarity.\r\n\r\n### Samples\r\n\r\n- Added support for YOLOv11, YOLOv12, and more when using a custom YOLO-like ONNX model. Check out the [dedicated documentation page](https:\u002F\u002Fwww.stereolabs.com\u002Fdocs\u002Fyolo).  \r\n- Updated C++ Spatial Mapping sample.  \r\n- Updated C++ Positional Tracking sample.\r\n\r\n### Camera Driver\r\n\r\n- Added support for Jetson RT Kernel for ZED X camera with [dedicated drivers.](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Fdrivers)\r\n\r\n### Deprecation\r\n\r\n  - Using `retrieveObjects` and `retrieveBodies` with runtime parameters is now deprecated. Setting runtime parameters should now be done using the dedicated setters.\r\n","2025-05-16T14:51:04",{"id":209,"version":210,"summary_zh":211,"released_at":212},154368,"4.2.1","[Download the ZED SDK 4.2](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F4.2)\r\n\r\n## 4.2.1\r\n\r\n### SDK\r\n- Added verbosity in case of self-calibration issue (a specific error code will be added in the future minor update).\r\n- Removed some GNSS verbose.\r\n\r\n### Bug fixes\r\n- Fixed Positional tracking initial position scale for tracking GEN_2.\r\n- Fixed SVO recording regression leading to oversized file.\r\n\r\n## 4.2.0\r\n\r\n### SDK\r\n- Added a new `InitParameters::async_image_retrieval` parameter that enables the ZED SDK to stream or record SVO2 files at a different framerate than the one of the depth computation. \r\n- Added ZED One compatibility with ZED SDK. ZED One can now be created with `sl::CameraOne` objects and the API is the same as with other ZED cameras. 
The available modules are Capture, Recording, and Streaming. Samples are available too.\r\n- Added support for HDR modes for ZED X One 4k cameras for two resolutions: 1290x1200 and 3200x1800. You can enable HDR with the boolean `sl::InitParametersOne::enable_hdr`, or within ZED Media Server.\r\n- Added a Health Check module: the status of the camera can now be retrieved with `sl::Camera::getHealthStatus`. The status will detect and report issues if the camera is down, the image looks occulted or corrupted, depending on the parameter set in `sl::InitParameters::enable_image_validity_check`.\r\n- Improved the speed of the `NEURAL` depth mode, especially when running on several cameras at the same time, by reducing internal data copy and improving computation parallelism.\r\n- Added a new custom ONNX Object detection model input for YOLO models. This allows users to provide an ONNX file directly to the ZED SDK, without further coding. The ZED SDK will take care of running the inference using an optimized workflow with TensorRT. The Custom Object Detection box input option is still available for users who need flexibility.\r\n- Improved initial gravity estimation.\r\n- Fixed the mixing of the cameras when using multiple ZED X cameras and a unified driver (on JetPack 6.0).\r\n- Improved ZED X One stability.\r\n- Added a new way of serializing ZED SDK parameters using JSON to easily load and save.\r\n- Added a semantic mask input in the Object Detection module, similar to bounding box input using the `Camera::ingestCustomMaskObjects` function. The instance mask is used to compute the object's 3D position in addition to the previous way when the instance mask is not available.\r\n- Improved the Positional Tracking GEN2 initialization with the IMU data.\r\n- Improved `sl::Mat` memory handling safety by switching to smart pointers.\r\n\r\n### Fusion\r\n- The Fusion is now compatible with the Object Detection module. 
It can be enabled with `Fusion::enableObjectDetection` and objects are retrieved with `Fusion::retrieveObjects`. A `fused_objects_group_name` at the sender level can be set to group the objects from different detection models.\r\n- Improved the Fusion data synchronization quality when the sender has low or irregular framerates.\r\n- Fixed incorrect application of Regions of interest within the Fusion module.\r\n- Fixed the retrieved position in rare cases where the IMU orientations are corrupted.\r\n- Added a `FusedPositionalTrackingStatus` in the Fusion module when retrieving the position. This new object contains information on the status of the different modules acting in the fusion of the positional tracking.\r\n- Updated the `FUSION_ERROR_CODE` to fit the ZED SDK standard: negative values are a warning, and positive values are errors.\r\n\r\n### Tools\r\n- Added ZED One compatibility with ZED Explorer.\r\n- Added ZED One compatibility with ZED Sensor Viewer.\r\n- Fixed IMU recording at full frame rate for ZED Sensor Viewer.\r\n- Improved ZED Depth Viewer opening reliability.\r\n- Added accelerometer bias calibration for Sensor Viewer, see --help.\r\n\r\n### Wrappers\r\n- Added ZED One compatibility with Python.\r\n- Fixed the `Fusion` implementation of the C# wrapper.","2024-11-06T14:39:15",{"id":214,"version":215,"summary_zh":216,"released_at":217},154369,"4.1.3","[Download the ZED SDK 4.1](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F4.1)\r\n\r\n## 4.1.3\r\n\r\n### SDK\r\n- Added status reporting for NEURAL depth model optimization errors. 
The `Camera::open()` function can now fail to reflect the NEURAL depth status.\r\n- Enabled mask retrieval for custom object detection.\r\n- Improved GNSS calibration speed when covariance is very low, such as with RTK.\r\n\r\n### Bug fixes\r\n- Fixed an issue with the quad camera configuration that prevented all cameras from opening on Jetpack 6.\r\n- Fixed the depth min and max getter methods in the Python API.\r\n- Fixed an issue with custom object detection where the 3D bounding box height increased along with the width.\r\n- Fixed several edge cases in positional tracking generation 2.\r\n\r\n### Tools\r\n- Added the ability to enable and disable the zed_media_server_cli.service from the ZED Media Server.\r\n- Fixed a crash in Depth Viewer on Windows when opening SVO files.\r\n\r\n### Samples\r\n- Improved Global Localization samples.","2024-07-16T08:17:53",{"id":219,"version":220,"summary_zh":221,"released_at":222},154370,"4.1.2","[Download the ZED SDK 4.1](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F4.1)\r\n\r\n## 4.1.2\r\n\r\n### SDK\r\n- Improved fusion sources synchronization for enhanced robustness in case of inconsistent data rates.\r\n- Improved robustness of Positional Tracking Gen 2 IMU initialization.\r\n- Added support for the Jetpack 6.0 Production Release.\r\n\r\n### Bug fixes\r\n- Fixed a memory leak when closing the camera during SVO playback.\r\n- Fixed a bug that could prevent recording lossless H26X SVO on Jetson.\r\n- Fixed a regression in the image acquisition pipeline for HD1080 at 30FPS in USB mode that could impact performance.\r\n- Fixed SVGA capture mode with ZED X cameras, which could result in black images.\r\n\r\n### Tools\r\n- Fixed the AI model diagnostic not being available in the command line option for ZED_Diagnostic.\r\n\r\n### Samples\r\n- Improved Global Localization Python samples.","2024-05-27T10:01:40",{"id":224,"version":225,"summary_zh":226,"released_at":227},154371,"4.1.1","[Release 
Notes](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F4.1#whats-new-5962bb448658)\r\n[Download the ZED SDK 4.1](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F4.1)","2024-05-07T07:16:49",{"id":229,"version":230,"summary_zh":231,"released_at":232},154372,"4.1.0","This release of the ZED SDK is now a stable release of the major version 4. It brings significant stability and performance improvements. We're introducing a new generation of the Positional Tracking module for more precise and robust localization. A new version of SVO is also available, recording full sensor data and supporting custom user data for ease of use.\r\n\r\n[Release Notes](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F4.1#whats-new-5962bb448658)\r\n[Download the ZED SDK 4.1](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F4.1)\r\n\r\nFeatures\r\n\r\nWhat’s new with the fourth generation of ZED SDK? With the release of 4.1, we have focused on:\r\n\r\n[Positional Tracking GEN2](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#positional-tracking-gen2-d8bab5d56574)\r\n[Recording using SVO2](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#recording-using-svo2-16d9e6349727)\r\n[Improved Neural Depth](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#improve-neural-depth-273ff3464639)\r\n[Region of interest per module](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#region-of-interest-per-module-fd64a196ae6a)\r\n[Improved Global localization \u002F Geo Tracking](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#global-localization-\u002F-geo-tracking-dd89efdbb687)","2024-05-02T08:36:56",{"id":234,"version":235,"summary_zh":236,"released_at":237},154358,"5.1.2","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease#512-2b6224afd2d4\n\n## 5.1.2\n\n### SDK\n- Fixed a corrupted-frame detection issue: closing the camera could hang the program when only `Camera::read` is used without `Camera::grab`.\n- Fixed virtual stereo failing to open from ZED One cameras when the right-side ZED One has a cameraID of 0.\n- Fixed an issue that prevented the `sl::InputType` of a saved virtual stereo configuration from loading correctly.\n- Added uncalibrated-mode support to CameraOne for ZED One. It can be enabled by setting the environment variable `export ZED_SDK_ALLOW_UNCALIBRATED_MODE=1`. As with Camera, this mode allows retrieving unrectified images, streaming, or recording when the calibration file is invalid or cannot be found.\n- Fixed a `CAMERA_MOTION_SENSORS_NOT_DETECTED` error when opening a CameraOne with the IMU force-disabled (via the environment variable `ZED_SDK_IMU_DISABLE=1`).\n- Reduced the CPU memory footprint of the self-calibration process during `sl::Camera::open()`. It is now capped by a 40MB temporary memory pool.\n- Added beta support for H264 software (CPU) encoding on Orin Nano, for SVO recording and local streaming.\n\n### Tools\n- Added drag-and-drop support for saved `sl::InputType` files in ZED Depth Viewer; opening a streaming or virtual stereo setup is now more convenient.\n\n#### SLAM\n- Fixed a repeatability issue in the GEN_3 SLAM module. This ensures more consistent and reliable module performance across all run scenarios.\n- Added landmark2D support to `POSITIONAL_TRACKING::GEN_1`.\n- Fixed `POSITIONAL_TRACKING::GEN_3` producing a wrong gravity alignment when the first metadata entry of an SVO is corrupted.\n\n#### Object detection\n- Fixed a random crash that could occur when using custom object detection and repeatedly setting custom parameters over several hours.\n- Fixed a random crash that could occur when tracking objects whose estimator failed to initialize; tracking now exits safely when measurements disappear from the frame.\n- Improved the velocity accuracy of object detection.\n- Fixed the segmentation mask output when using `CUSTOM_YOLOLIKE_BOX_OBJECTS` with `enable_segmentation = true`.\n- Fixed an issue in the `CUSTOM_YOLOLIKE_BOX_OBJECT` post-processing when using YOLOv10 models. Previously, only a few of the detected objects were returned.","2025-12-17T12:50:28",{"id":239,"version":240,"summary_zh":241,"released_at":242},154359,"5.1.1","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease\u002F5.1#511-2b6224afd2d4\n\n## 5.1.1\n\n### Bug fixes\n\n#### SDK\n- Fixed an issue where a specific value could not be set for `sl::InitParameters::depth_minimum_distance` in all neural depth modes.\n\n#### SLAM\n- Improved the overall stability of `POSITIONAL_TRACKING::GEN_3`. This update resolves random race conditions.\n- Fixed a GPU stream synchronization issue when using `POSITIONAL_TRACKING::GEN_3` with `DEPTH_MODE::NONE`. The race condition led to inconsistent computation results.\n- Improved the runtime performance of the `getPositionalTrackingLandmarks` and `getPositionalTrackingLandmarks2D` methods. The improvement is especially noticeable in large mapped areas.\n\n#### Tools\n- Fixed a camera opening issue when several ZED Explorer instances are used at the same time.\n\n#### Fusion\n- Added the missing documentation for the `FUSION_REFERENCE_FRAME` enum.\n\n### Wrappers\n\n#### Python\n- Added the missing `override_gravity` field to `FusionConfiguration`.\n\n### Samples\n- Improved the error handling in the samples: checks on camera opening and frame grabbing now use comparison operators (`\u003C`, `>`, `\u003C=`, `>=`) instead of equality checks (`==`, `!=`). This follows best practice and properly distinguishes warnings (negative values) from errors (positive values).","2025-11-12T16:00:18",{"id":244,"version":245,"summary_zh":246,"released_at":247},154360,"5.1.0","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease\u002F5.1#510-8a756bc18083fc9b21b949fb5fa8ffbb\n\n## 5.1.0\n\n### SDK\n\n#### Depth\n- Improved close-range depth performance; the new ZED SDK now provides depth values even at very short distances.\n\n#### Capture\n- Enhanced image acquisition for GMSL cameras, improving overall stability and reducing frame drops.\n- Introduced live virtual stereo generated directly through the API from two ZED X One cameras. It uses the Camera class with the new functions sl::InputType::setVirtualStereoFromCameraIDs or sl::InputType::setVirtualStereoFromSerialNumbers. The performance gain is significant: CPU usage is reduced by 60%, making this version 2.5x more CPU-efficient than ZED_MediaServer, even in IPC mode. See the virtual stereo\u002Fcpp sample for a usage example.\n- Introduced an uncalibrated mode for virtual stereo. This mode allows viewing unrectified images, recording SVO files, or streaming, for later calibration or troubleshooting. Note: depth sensing, positional tracking, and the other modules are unavailable in this mode.\n- Improved self-calibration for close-range scenes. Increased robustness to more challenging conditions and added self-diagnostics. Added the new warning sl::ERROR_CODE::POTENTIAL_CALIBRATION_ISSUE, which sl::Camera::open returns when the camera calibration is poor and recalibration may be needed. To verify, open ZED_DepthViewer and visually inspect the scene (is the depth image complete, are planes flat, are object shapes correct, etc.).\n- Added OpenCV fisheye calibration support for Camera and CameraOne.\n- Added Windows support to the CameraOne API. Streaming from a ZED X One camera or reading its SVO files is now possible on Windows.\n- Added a getSensorsDataBatch() function to retrieve all the high-frequency sensor data associated with the latest captured frame.\n- Added new sl::VIEW values supporting more color specifications. Added extra 3-channel (BGR) and GRAY color modes. The DEPTH and CONFIDENCE views are now rendered in color.\n- Fixed the hue control for USB cameras.\n- Improved the VIDEO_SETTINGS reset call. All settings can now be reset, in both live and streaming modes, through the base function sl::Camera::setCameraSettings(setting, sl::VIDEO_SETTINGS_VALUE_AUTO), for both sl::Camera and sl::CameraOne.\n- Removed the default camera_type parameter from sl::InputType::setFromCameraID and the bus_type parameter from sl::InputType::setFromSerialNumber. Also removed the CAMERA_TYPE enum. Camera and CameraOne objects now handle the camera and bus types automatically.\n- Fixed an inconsistency when selecting the frame rate on GMSL cameras when the requested frame rate is not supported.\n- Set the default sdk_verbose of InitParametersOne to 1, in line with InitParameters.\n- Fixed the rectified field of view of some sl::CameraOne. On some cameras, the rectified FOV was far lower than expected. To restore the previous rectification behavior, set the environment variable ZED_SDK_OLD_FOV_COMPUTE=1.\n- Resolved an issue that could cause GMSL 
ca","2025-10-24T12:44:30",{"id":249,"version":250,"summary_zh":251,"released_at":252},154361,"5.0.7","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease\u002F5.0#507-b1c25d6cde63b9ff6afa80f4e2d1fc33\n\n## 5.0.7\n\n### Bug fixes\n\n- Fixed a `std::future_error` exception that could appear randomly, at a frequency that varies with the system configuration, when the health_check feature is enabled.\n- Fixed a crash that could occur when reading SVO2 files, especially near the end of the file.\n\n### Wrappers\n\n- Fixed a `TypeError` when calling the ZEDOne `get_camera_information` method from the Python API.\n\n### Tools\n\n- The Depth Viewer now allows drawing a region of interest (ROI) on the image. The feature is available from the settings panel. The mask is saved and can later be loaded by the ZED SDK.","2025-09-23T13:42:51",{"id":254,"version":255,"summary_zh":256,"released_at":257},154362,"5.0.6","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease\u002F5.0#506-b1c25d6cde63b9ff6afa80f4e2d1fc33\n\n## 5.0.6\n\n### Bug fixes\n\n- Fixed a USB camera crash when unplugging and replugging a ZED device or calling sl::Camera::reboot from a running application.\n- Fixed sl::GeoPose returning an empty pose_data.\n- Fixed sl::Pose::valid in sl::Fusion::getPosition always returning true.\n- Improved the image sharpening feature, which previously degraded image quality compared to ZED SDK 4.\n- Fixed missing high-frequency IMU data when recording an SVO from a stream or the ZED Media Server in Gen2.\n\n### Wrappers\n\n- Updated the SL_GNSSData struct in the C wrapper.\n- Added the missing enable_hdr parameter to sl.InitParametersOne in the ZED Python API.\n\n### Tools\n\n- Fixed the IMU bias calibration in the ZED_Calibration tool (--cimu command-line option).","2025-09-09T14:26:02",{"id":259,"version":260,"summary_zh":261,"released_at":262},154363,"5.0.5","https:\u002F\u002Fwww.stereolabs.com\u002Fen-fr\u002Fdevelopers\u002Frelease\u002F5.0#505-b1c25d6cde63b9ff6afa80f4e2d1fc33\n\n## 5.0.5\n\n### Bug fixes\n\n- Fixed a regression when listing GMSL cameras simultaneously from multiple processes. ZED X cameras can now be listed and opened concurrently from different processes without issues.\n- Fixed an issue when parsing ZED X One calibration files at QHPlus resolution.\n- Fixed a possible segmentation fault when restarting the positional tracking module while a grab is running in parallel.\n- Fixed a bug in SVO recording that caused the camera model to be set incorrectly in the internal metadata.\n\n### Wrappers\n\n- Added the `\u003C`, `\u003C=`, `>` and `>=` operators to all the enum types of the ZED Python API (`pyzed`). Previously, only `==` and `!=` were supported.","2025-07-29T09:15:46",{"id":264,"version":265,"summary_zh":266,"released_at":267},154364,"5.0.4","https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease#504-b1c25d6cde63b9ff6afa80f4e2d1fc33\n\n## 5.0.4\n\n### SDK\n\n- Fixed the area map behavior of positional tracking GEN_3 so that it matches GEN_1\u002FGEN_2. It can now update a loaded area map and save it as well.\n\n### Bug fixes\n\n- Fixed a bug that could cause the `getAreaExportState` function to return an error code when saving an area map with positional tracking GEN_3.\n- Fixed an issue preventing the absolute positional tracking covariance from being retrieved in positional tracking mode GEN_3.\n- Fixed an issue causing CameraOne to return 0 in `focal_length_metric`.\n- Fixed a conversion issue in `findFloorPlane` that could output a wrong transform depending on the coordinate system in use.\n\n### Samples\n\n- Fixed all the Python samples so that they support both the .svo and .svo2 file formats. Previously, some samples only recognized the .svo extension.\n- Corrected the samples so that they keep running on WARNING-level grab results (i.e. negative `ERROR_CODE` values) and stop only on ERROR level. This improves robustness: the ZED SDK keeps working even with occasional issues such as the image quality degradation detected by `enable_image_validity_check`.\n\n### Tools\n\n- Fixed a crash in the ZED Diagnostic tool while optimizing AI models.\n\n### Wrappers\n\n- Fixed an issue when installing OpenGL through the `get_python_api` script on Windows with Python 3.12 and above.\n- Fixed a binary compatibility issue between the ZED Python API (`pyzed`) and `numpy`, occurring in particular with Python 3.9, 3.10 and 3.11 on Windows. This fix ensures stable integration and prevents runtime errors caused by ABI mismatches in these configurations.","2025-07-22T15:03:40",{"id":269,"version":270,"summary_zh":271,"released_at":272},154373,"4.0.8","We are excited to announce the release of ZED SDK 4.0, which introduces a range of new features and enhancements to our ZED cameras. Our latest update supports the ZED X and ZED X Mini cameras, designed specifically for autonomous mobile robots in indoor and outdoor environments. It also introduces an improved Neural depth, a new global-scale location tracking API and a new body tracking technology for smart spaces.\r\n\r\nWe are also proud to introduce the new multi-camera Fusion API, which makes it easier than ever to deploy multi-camera setups with data fusion. This module handles time synchronization and geometric calibration issues, along with 360° fusion coming from multiple cameras and sensor sources. 
We believe that these updates will unlock even more potential for our users to create next-generation space-aware applications that push the boundaries of what is possible with AI and vision.\r\n\r\n- **[Release Notes](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F)**\r\n- [Download the ZED SDK 4.0](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F#downloads)\r\n\r\n## Features\r\nWhat’s new with the fourth generation of ZED SDK? With the release of 4.0, we have focused on:\r\n\r\n- Support for the new [ZED X](https:\u002F\u002Fwww.stereolabs.com\u002Fzed-x\u002F) and [ZED X Mini](https:\u002F\u002Fwww.stereolabs.com\u002Fzed-x\u002F) cameras\r\n- Introducing the new Multi-camera Fusion API\r\n- Introducing the new Geotracking API for real-time location monitoring at global scale\r\n- Improved Neural depth, now more robust to challenging situations such as low light, image compression, noise and textureless areas\r\n- Introducing the new Body Tracking Gen2\r\n\r\nThere are other features and improvements in 4.0 too, so without any further delay head over to the [announcement on the Stereolabs website](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F) for all the details.\r\n\r\n## Upcoming Changes\r\nOur SDK 4.0 is currently in early access and we would love to hear your feedback on [Discourse](https:\u002F\u002Fcommunity.stereolabs.com\u002F) or by email. Our team is working tirelessly to implement changes and improvements based on community feedback to ensure that the final RC release exceeds your expectations. So stay tuned for upcoming updates!","2023-04-07T16:40:12",{"id":274,"version":275,"summary_zh":276,"released_at":277},154374,"v3.8","# 3.8\r\n\r\nZED SDK 3.8 is a **major release** that includes new features and performance improvements to **depth sensing** and **body tracking**. This update brings up to a 50% performance improvement for Neural depth sensing. 
Human skeleton tracking has also been sped up by +40% on most platforms. The Linux installer size has been reduced by 50%.\r\n\r\nYou can find the full Release Notes and SDK download page here:\r\n- [Release Notes](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F3.8)\r\n- [Download the ZED SDK 3.8](https:\u002F\u002Fwww.stereolabs.com\u002Fdevelopers\u002Frelease\u002F3.8\u002F#downloads)","2023-04-07T16:34:16",{"id":279,"version":280,"summary_zh":76,"released_at":281},154375,"v2.X","2020-01-24T14:45:01"]
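Several entries in the notes above rely on the same convention: the 4.1.3 `FUSION_ERROR_CODE` update and the 5.0.4/5.1.1 samples fixes both state that negative codes are warnings, positive codes are errors, and the grab loop should stop only on errors. A minimal sketch of that handling, using plain ints as stand-ins for `sl::ERROR_CODE` values (nothing here calls the actual ZED SDK; the names and the simulated result sequence are illustrative assumptions):

```python
# Hedged sketch of the ZED SDK error-code convention described in the
# release notes: negative = warning, 0 = success, positive = error.
# Plain ints stand in for sl.ERROR_CODE so the example is self-contained.

def should_stop(grab_result: int) -> bool:
    """Stop the grab loop only on real errors (positive codes).

    Warnings (negative codes, e.g. a transient image-quality issue flagged
    by enable_image_validity_check) and success (0) keep the loop running,
    which is exactly the comparison-operator check the samples were
    corrected to use instead of `!= SUCCESS`.
    """
    return grab_result > 0

# Simulated grab results: success, a warning, success, then an error.
results = [0, -3, 0, 5]
processed = []
for code in results:
    if should_stop(code):
        break
    processed.append(code)

# The loop survives the warning (-3) but stops at the error (5),
# so processed == [0, -3, 0].
```

The same pattern applies to `FUSION_ERROR_CODE` in the Fusion module; an equality check against a success value would wrongly abort on harmless warnings.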