[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ShisatoYano--AutonomousVehicleControlBeginnersGuide":3,"tool-ShisatoYano--AutonomousVehicleControlBeginnersGuide":64},[4,17,27,35,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 
图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,43,44,45,15,46,26,13,47],"数据工具","视频","插件","其他","音频",{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 
协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,46],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[26,14,13,46],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":101,"forks":102,"last_commit_at":103,"license":104,"difficulty_score":23,"env_os":105,"env_gpu":106,"env_ram":106,"env_deps":107,"category_tags":116,"github_topics":117,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":132,"updated_at":133,"faqs":134,"releases":172},2605,"ShisatoYano\u002FAutonomousVehicleControlBeginnersGuide","AutonomousVehicleControlBeginnersGuide","Python sample codes and documents about Autonomous vehicle control algorithm. 
This project can be used as a technical guide book to study the algorithms and the software architectures for beginners.","AutonomousVehicleControlBeginnersGuide 是一套专为自动驾驶初学者打造的开源学习资源，包含用 Python 编写的控制算法示例代码及配套技术文档。它旨在解决新手在入门自动驾驶领域时，面对复杂理论难以上手、缺乏可运行代码参考的痛点。通过将抽象的算法转化为直观、可执行的仿真程序，帮助用户快速理解从感知、定位到路径规划与控制的全流程技术架构。\n\n该项目非常适合高校学生、刚入行的工程师以及对自动驾驶技术感兴趣的研究人员使用。无论是用于课程学习、自我提升还是作为教学辅助材料，都能提供极大的便利。其核心亮点在于覆盖了自动驾驶栈中的关键模块：包括扩展卡尔曼滤波等定位算法，A*、RRT* 等多种路径规划方法，以及纯追踪、LQR、Stanley 等经典路径跟踪控制器。所有代码均基于 Python 生态（如 NumPy、Matplotlib）实现，无需昂贵的硬件即可在本地运行可视化仿真，让学习者能清晰地观察算法效果与内部逻辑，是通往自动驾驶技术殿堂的优质“敲门砖”。","# AutonomousVehicleControlBeginnersGuide\n[![Linux_CI](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FLinux_CI.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FLinux_CI.yml) [![Windows_CI](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FWindows_CI.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FWindows_CI.yml) [![MacOS_CI](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FMacOS_CI.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FMacOS_CI.yml) [![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_9ee0cb95ac54.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fshisatoyano\u002Fautonomousvehiclecontrolbeginnersguide)  \n\nPython sample codes and documents about Autonomous vehicle control algorithm. This project can be used as a technical guide book to study the algorithms and the software architectures for beginners.  
\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_6a560d8be1ea.gif)  \n\n\n## Table of Contents\n* [What is this?](#what-is-this)\n* [Goal of this project](#goal-of-this-project)\n* [Requirements](#requirements)\n* [How to use](#how-to-use)\n* [Examples of Simulation](#examples-of-simulation)\n    * [Localization](#localization)\n        * [Extended Kalman Filter Localization](#extended-kalman-filter-localization)\n        * [Unscented Kalman Filter Localization](#unscented-kalman-filter-localization)\n        * [Particle Filter Localization](#particle-filter-localization)\n    * [Mapping](#mapping)\n        * [Binary Occupancy Grid Map](#binary-occupancy-grid-map)\n        * [Cost Map](#cost-map)\n        * [Potential Field Map](#potential-field-map)\n        * [NDT Map](#ndt-map)\n    * [Path Planning](#path-planning)\n        * [A*](#a)\n        * [Bidirectional A*](#bidirectional-a)\n        * [Hybrid A*](#hybrid-a)\n        * [D*](#d)\n        * [Dijkstra](#dijkstra)\n        * [RRT](#rrt)\n        * [Bidirectional RRT*](#bidirectional-rrt)\n        * [RRT*](#rrt-star)\n        * [Informed RRT*](#informed-rrt)\n    * [Path Tracking](#path-tracking)\n        * [Pure pursuit Path Tracking](#pure-pursuit-path-tracking)\n        * [Adaptive Pure pursuit Path Tracking](#adaptive-pure-pursuit-path-tracking)\n        * [Rear wheel feedback Path Tracking](#rear-wheel-feedback-path-tracking)\n        * [LQR(Linear Quadratic Regulator) Path Tracking](#lqrlinear-quadratic-regulator-path-tracking)\n        * [Stanley steering control Path tracking](#stanley-steering-control-path-tracking)\n        * [MPPI Path Tracking](#mppi-path-tracking)\n    * [Perception](#perception)\n        * [Rectangle fitting Detection](#rectangle-fitting-detection)\n        * [Sensor's Extrinsic Parameters Estimation](#sensors-extrinsic-parameters-estimation)\n* [Documents](#documents)\n* [License](#license)\n* [Use 
Case](#use-case)\n* [Contribution](#contribution)\n* [Author](#author)\n\n\n## What is this?\nThis is a collection of sample codes about autonomous vehicle control algorithms. Each source code is implemented in Python to help your understanding. You can fork this repository and use it freely for studying, education, or work.  \n\n\n## Goal of this project\nI want to release my own technical book about Autonomous Vehicle algorithms in the future. The book will include all of the codes and documents in this repository as contents.  \n\n\n## Requirements\nPlease satisfy the following requirements on native or VM Linux in advance.  \nFor running the sample codes:  \n* [Python 3.13.x](https:\u002F\u002Fwww.python.org\u002F)\n* [Matplotlib](https:\u002F\u002Fmatplotlib.org\u002F)\n* [NumPy](https:\u002F\u002Fnumpy.org\u002F)\n* [SciPy](https:\u002F\u002Fscipy.org\u002F)\n\nFor development:\n* [pytest](https:\u002F\u002Fdocs.pytest.org\u002Fen\u002F7.4.x\u002F) (for unit tests)\n* [pytest-cov](https:\u002F\u002Fgithub.com\u002Fpytest-dev\u002Fpytest-cov) (for coverage measurement)\n\nFor setting up the environment with Docker:\n* [VS Code](https:\u002F\u002Fcode.visualstudio.com\u002F)\n* [Docker](https:\u002F\u002Fwww.docker.com\u002F)\n\n\n## How to use\n1. Clone this repository  \n    ```bash\n    $ git clone https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\n    ```\n\n2. Set up the environment for running the codes\n    * Set up with Docker on WSL:\n        * Before cloning this repo, [install Docker](https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Flinux-install\u002F) in advance\n        * Clone this repo following the above Step 1\n        * Open this repo's folder in VS Code\n        * [Create Dev Container](https:\u002F\u002Fcode.visualstudio.com\u002Fdocs\u002Fdevcontainers\u002Fcreate-dev-container)\n        * All required libraries will then be installed automatically\n3. 
Execute unit tests to confirm the environment was set up successfully\n    ```bash\n    $ . run_test_suites.sh\n    ```\n4. Execute a Python script in the src\u002Fsimulations directory\n    * For example, when you want to execute the Extended Kalman Filter localization simulation:\n        ```bash\n        $ python src\u002Fsimulations\u002Flocalization\u002Fextended_kalman_filter_localization\u002Fextended_kalman_filter_localization.py\n        ```\n5. Add a star to this repository if you like it!!\n\n\n## Examples of Simulation\n### Localization\n#### Extended Kalman Filter Localization\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_100b6e5b7fcc.gif)  \n#### Unscented Kalman Filter Localization\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_87d4812e83c5.gif)  \n#### Particle Filter Localization\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_1f4b008ea375.gif)  \n### Mapping\n#### Binary Occupancy Grid Map\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_711a8df3ca1d.gif)  \n#### Cost Map\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_45495570834b.gif)  \n#### Potential Field Map\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_bb53053e37da.gif)   \n#### NDT Map\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_6a560d8be1ea.gif)  \n### Path Planning\n#### A*\nPlanning  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_859376a8a7fe.gif)  \n#### Bidirectional A*\nPlanning  
\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_f2d884e6f933.gif)  \n#### Hybrid A*\nPlanning  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_140268425776.gif)  \n#### D*\nPlanning with dynamic obstacle replanning  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_ffaa271ec5c0.gif)  \n#### Dijkstra\nPlanning(Reduce frames by sampling every nth node to prevent memory exhaustion)  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_7b1c941da3e0.gif)  \n#### Elastic Bands\nA* seed path smoothed with Elastic Bands optimisation  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_98485375694c.gif)  \n#### RRT\nPlanning  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_638ba01495cc.gif)  \n#### Bidirectional RRT*\nPlanning  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_4834b6deb4c0.gif)  \n#### RRT*\nPlanning  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_96476a935e3f.gif)  \n#### Informed RRT*\nPlanning  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_bdc6a9e05cd5.gif)  \n### Path Tracking\n#### Pure pursuit Path Tracking\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_3e7c758f2d20.gif)  \n#### Adaptive Pure pursuit Path Tracking\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_7197910af617.gif)  \n#### Rear wheel feedback 
Path Tracking\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_36074ab48dfb.gif)  \n#### LQR(Linear Quadratic Regulator) Path Tracking\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_53f521032a50.gif)  \n#### Stanley steering control Path Tracking\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_01e6bc6eabd0.gif)  \n#### MPPI Path Tracking\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_158d6d9dd8b9.gif)  \n### Perception\n#### Rectangle fitting Detection\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_540850d3c0b6.gif)  \n#### Sensor's Extrinsic Parameters Estimation\nEstimation by Unscented Kalman Filter  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_f851bb0b2d10.gif)  \n\n\n## Documents\nDesign documents for each Python program are provided here. The documents are not yet complete and are still being updated. If you find any problems in them, please tell me by creating an issue.  \n[Documents link](\u002Fdoc\u002FDESIGN_DOCUMENT.md)  \n\n\n## License\nMIT  \n\n\n## Use Case\nI started this project to study algorithms and software development for autonomous vehicle systems by myself. You can also use this repo for your own studying, education, research, and development.  \n\nIf this project helps your task, please let me know by creating an issue.  \nAny paper, animation, or video you produce as output is always welcome!! It will encourage me to continue this project.  \n\nYour comments and outputs will be added to [this list of user comments](\u002FUSERS_COMMENTS.md).  
\n\n\n## Contribution\nAny contribution by creating an issue or sending a pull request is welcome!! Please check [this document about how to contribute](\u002FHOWTOCONTRIBUTE.md).  \n\n\n## Author\n[Shisato Yano](https:\u002F\u002Fgithub.com\u002FShisatoYano)  ","# 自动驾驶车辆控制入门指南\n[![Linux_CI](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FLinux_CI.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FLinux_CI.yml) [![Windows_CI](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FWindows_CI.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FWindows_CI.yml) [![MacOS_CI](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FMacOS_CI.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousDrivingSamplePrograms\u002Factions\u002Fworkflows\u002FMacOS_CI.yml) [![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_9ee0cb95ac54.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fshisatoyano\u002Fautonomousvehiclecontrolbeginnersguide)  \n\n关于自动驾驶车辆控制算法的 Python 示例代码和文档。该项目可作为初学者学习相关算法及软件架构的技术指南。  \n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_6a560d8be1ea.gif)  \n\n\n## 目录\n* [这是什么？](#what-is-this)\n* [本项目的目标](#goal-of-this-project)\n* [要求](#requirements)\n* [使用方法](#how-to-use)\n* [仿真示例](#examples-of-simulation)\n    * [定位](#localization)\n        * [扩展卡尔曼滤波定位](#extended-kalman-filter-localization)\n        * [无迹卡尔曼滤波定位](#unscented-kalman-filter-localization)\n        * [粒子滤波定位](#particle-filter-localization)\n    * 
[建图](#mapping)\n        * [二值占用栅格地图](#binary-occupancy-grid-map)\n        * [代价地图](#cost-map)\n        * [势场地图](#potential-field-map)\n        * [NDT 地图](#ndt-map)\n    * [路径规划](#path-planning)\n        * [A*](#a)\n        * [双向 A*](#bidirectional-a)\n        * [混合 A*](#hybrid-a)\n        * [D*](#d)\n        * [迪杰斯特拉算法](#dijkstra)\n        * [RRT](#rrt)\n        * [双向 RRT*](#bidirectional-rrt)\n        * [RRT*](#rrt-star)\n        * [启发式 RRT*](#informed-rrt)\n    * [路径跟踪](#path-tracking)\n        * [纯追踪路径跟踪](#pure-pursuit-path-tracking)\n        * [自适应纯追踪路径跟踪](#adaptive-pure-pursuit-path-tracking)\n        * [后轮反馈路径跟踪](#rear-wheel-feedback-path-tracking)\n        * [LQR（线性二次型调节器）路径跟踪](#lqrlinear-quadratic-regulator-path-tracking)\n        * [斯坦利转向控制路径跟踪](#stanley-steering-control-path-tracking)\n        * [MPPI 路径跟踪](#mppi-path-tracking)\n    * [感知](#perception)\n        * [矩形拟合检测](#rectangle-fitting-detection)\n        * [传感器外参估计](#sensors-extrinsic-parameters-estimation)\n* [文档](#documents)\n* [许可证](#license)\n* [使用场景](#use-case)\n* [贡献](#contribution)\n* [作者](#author)\n\n\n## 这是什么？\n这是一个关于自动驾驶车辆控制算法的示例代码集合。每个源代码都用 Python 实现，以帮助您更好地理解。您可以自由地 fork 该仓库，并将其用于学习、教育或工作。  \n\n\n## 本项目的目标\n我希望在未来出版一本关于自动驾驶车辆算法的个人技术书籍。这本书将包含本仓库中的所有代码和文档作为内容。  \n\n\n## 要求\n请提前在原生或虚拟机 Linux 系统上满足以下要求。  \n运行各示例代码所需：  \n* [Python 3.13.x](https:\u002F\u002Fwww.python.org\u002F)\n* [Matplotlib](https:\u002F\u002Fmatplotlib.org\u002F)\n* [NumPy](https:\u002F\u002Fnumpy.org\u002F)\n* [SciPy](https:\u002F\u002Fscipy.org\u002F)\n\n开发所需：\n* [pytest](https:\u002F\u002Fdocs.pytest.org\u002Fen\u002F7.4.x\u002F)（用于单元测试）\n* [pytest-cov](https:\u002F\u002Fgithub.com\u002Fpytest-dev\u002Fpytest-cov)（用于覆盖率测量）\n\n使用 Docker 搭建环境所需：\n* [VS Code](https:\u002F\u002Fcode.visualstudio.com\u002F)\n* [Docker](https:\u002F\u002Fwww.docker.com\u002F)\n\n\n## 使用方法\n1. 克隆此仓库  \n    ```bash\n    $ git clone https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\n    ```\n\n2. 
搭建运行各代码的环境\n    * 在 WSL 上使用 Docker 搭建：\n        * 克隆仓库之前，请先安装 [Docker](https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Flinux-install\u002F)\n        * 按照上述第 1 步克隆仓库\n        * 使用 VSCode 打开仓库文件夹\n        * [创建开发容器](https:\u002F\u002Fcode.visualstudio.com\u002Fdocs\u002Fdevcontainers\u002Fcreate-dev-container)\n        * 随后，所有所需库将自动安装\n3. 执行单元测试以确认环境已成功搭建\n    ```bash\n    $ . run_test_suites.sh\n    ```\n4. 在 src\u002Fsimulations 目录下执行 Python 脚本\n    * 例如，若要运行扩展卡尔曼滤波的定位仿真：\n        ```bash\n        $ python src\u002Fsimulations\u002Flocalization\u002Fextended_kalman_filter_localization\u002Fextended_kalman_filter_localization.py\n        ```\n5. 如果喜欢这个项目，请给它加星！！\n\n\n## 仿真示例\n### 定位\n#### 扩展卡尔曼滤波定位\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_100b6e5b7fcc.gif)  \n#### 无迹卡尔曼滤波定位\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_87d4812e83c5.gif)  \n#### 粒子滤波定位\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_1f4b008ea375.gif)  \n### 建图\n#### 二值占用栅格地图\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_711a8df3ca1d.gif)  \n#### 代价地图\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_45495570834b.gif)  \n#### 势场地图\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_bb53053e37da.gif)   \n#### NDT 地图\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_6a560d8be1ea.gif)\n\n### 路径规划\n#### A* 规划  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_859376a8a7fe.gif)  \n#### 双向 A* 规划  
\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_f2d884e6f933.gif)  \n#### 混合 A* 规划  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_140268425776.gif)  \n#### D* 规划（含动态障碍物重规划）  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_ffaa271ec5c0.gif)  \n#### 迪杰斯特拉规划（通过每隔 n 个节点采样一次来减少帧数，防止内存耗尽）  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_7b1c941da3e0.gif)  \n#### 弹性带优化  \n基于 A* 生成的初始路径，使用弹性带优化进行平滑处理  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_98485375694c.gif)  \n#### RRT 规划  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_638ba01495cc.gif)  \n#### 双向 RRT* 规划  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_4834b6deb4c0.gif)  \n#### RRT* 规划  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_96476a935e3f.gif)  \n#### 启发式 RRT* 规划  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_bdc6a9e05cd5.gif)  \n### 路径跟踪\n#### 纯追踪路径跟踪  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_3e7c758f2d20.gif)  \n#### 自适应纯追踪路径跟踪  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_7197910af617.gif)  \n#### 后轮反馈路径跟踪  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_36074ab48dfb.gif)  \n#### LQR（线性二次型调节器）路径跟踪  
\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_53f521032a50.gif)  \n#### 斯坦利转向控制路径跟踪  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_01e6bc6eabd0.gif)  \n#### MPPI 路径跟踪  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_158d6d9dd8b9.gif)  \n### 感知\n#### 矩形拟合检测  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_540850d3c0b6.gif)  \n#### 传感器外参估计  \n采用无迹卡尔曼滤波进行估计  \n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_readme_f851bb0b2d10.gif)  \n\n\n## 文档\n此处准备了各 Python 程序的设计文档。目前文档尚未完成，仍在持续更新中。若您发现任何问题，请通过创建 issue 告知我。  \n[文档链接](\u002Fdoc\u002FDESIGN_DOCUMENT.md)  \n\n\n## 许可证\nMIT  \n\n\n## 使用场景\n我启动此项目是为了自学自动驾驶系统中的算法及软件开发。您也可以将此仓库用于个人学习、教学、研究和开发。\n\n若本项目对您的工作有所帮助，请通过创建 issue 告知我。无论您产出论文、动画还是视频，都十分欢迎！这将激励我继续推进该项目。\n\n您的评论及成果将被添加到 [用户评论列表](\u002FUSERS_COMMENTS.md) 中。  \n\n\n## 贡献\n欢迎通过创建 issue 或发送 pull request 提出任何贡献！！请查阅 [关于如何贡献的文档](\u002FHOWTOCONTRIBUTE.md)。  \n\n\n## 作者\n[Shisato Yano](https:\u002F\u002Fgithub.com\u002FShisatoYano)","# AutonomousVehicleControlBeginnersGuide 快速上手指南\n\n本指南旨在帮助开发者快速搭建环境并运行自动驾驶控制算法的 Python 示例代码。\n\n## 环境准备\n\n### 系统要求\n- **操作系统**：Linux（原生或虚拟机）、WSL (Windows Subsystem for Linux)、macOS\n- **推荐开发工具**：VS Code + Docker（最简便的环境配置方式）\n\n### 前置依赖\n若不使用 Docker，需手动安装以下核心库：\n- Python 3.13.x\n- Matplotlib\n- NumPy\n- SciPy\n\n开发测试可选依赖：\n- pytest\n- pytest-cov\n\n## 安装步骤\n\n### 方法一：使用 Docker（推荐）\n此方法可自动安装所有依赖，避免环境冲突。\n\n1. **安装 Docker**\n   在宿主机上安装 Docker Desktop 或 Docker Engine。\n   > 国内用户建议配置 Docker 镜像加速器（如阿里云、腾讯云等）以提升拉取速度。\n\n2. **克隆项目**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\n   ```\n\n3. 
**启动开发容器**\n   - 使用 VS Code 打开克隆后的文件夹。\n   - 安装 \"Dev Containers\" 扩展插件。\n   - 按下 `F1` 或 `Ctrl+Shift+P`，选择 `Dev Containers: Reopen in Container`。\n   - 等待容器构建完成，所有依赖将自动安装。\n\n### 方法二：本地手动安装\n1. **克隆项目**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\n   cd AutonomousVehicleControlBeginnersGuide\n   ```\n\n2. **安装 Python 依赖**\n   建议使用虚拟环境，并配置国内源加速下载：\n   ```bash\n   python3 -m venv venv\n   source venv\u002Fbin\u002Factivate  # Windows 用户使用: venv\\Scripts\\activate\n   \n   pip install matplotlib numpy scipy pytest pytest-cov -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n   ```\n\n3. **验证环境**\n   运行测试套件确认安装成功：\n   ```bash\n   .\u002Frun_test_suites.sh\n   ```\n   > 若提示权限不足，请先执行 `chmod +x run_test_suites.sh`。\n\n## 基本使用\n\n项目包含定位、建图、路径规划、路径跟踪和感知等多个模块的仿真示例。\n\n### 运行示例：扩展卡尔曼滤波定位 (EKF)\n\n进入项目根目录，执行以下命令运行定位仿真：\n\n```bash\npython src\u002Fsimulations\u002Flocalization\u002Fextended_kalman_filter_localization\u002Fextended_kalman_filter_localization.py\n```\n\n执行成功后，将弹出一个窗口展示车辆定位轨迹的动画演示。\n\n### 其他模块示例\n你可以参照上述路径结构运行其他算法，例如：\n- **路径规划 (A*)**:\n  ```bash\n  python src\u002Fsimulations\u002Fpath_planning\u002Fastar_path_planning\u002Fastar_search.py\n  ```\n- **路径跟踪 (Pure Pursuit)**:\n  ```bash\n  python src\u002Fsimulations\u002Fpath_tracking\u002Fpure_pursuit_path_tracking\u002Fpure_pursuit_path_tracking.py\n  ```\n\n所有仿真脚本均位于 `src\u002Fsimulations` 目录下，可根据子目录名称找到对应的算法实现。","某高校自动驾驶实验室的研究生团队正在从零构建一辆无人小车的控制原型，急需验证定位与路径规划算法的可行性。\n\n### 没有 AutonomousVehicleControlBeginnersGuide 时\n- 团队成员需分散查阅大量晦涩的学术论文和碎片化博客，难以将扩展卡尔曼滤波（EKF）或混合 A*等理论公式转化为可运行的代码。\n- 缺乏统一的仿真环境，每个人编写的测试脚本接口不一，导致定位模块与路径跟踪模块无法联调，集成耗时极长。\n- 遇到算法发散或车辆失控时，因没有标准的基准代码（Baseline）作为对照，排查是数学推导错误还是代码实现 bug 如同大海捞针。\n- 新人入门门槛极高，往往需要数月时间才能复现基础功能，严重拖慢了整个项目的研发进度。\n\n### 使用 AutonomousVehicleControlBeginnersGuide 后\n- 直接调用项目中成熟的 Python 示例代码，快速理解并复现了从粒子滤波定位到 LQR 路径跟踪的全套核心算法，将理论学习周期从数月缩短至数周。\n- 利用内置的可视化仿真演示（如 NDT 
建图、RRT*规划），团队在统一框架下迅速完成了各模块的对接与联合调试，直观验证了系统逻辑。\n- 以项目提供的高质量代码为基准进行对比测试，迅速锁定了自定义算法中的参数整定问题，大幅降低了调试难度。\n- 新生通过阅读其清晰的文档和架构设计，一周内即可上手参与核心开发，显著提升了团队的整体产出效率。\n\nAutonomousVehicleControlBeginnersGuide 将抽象的自动驾驶算法理论转化为可视化的工程实践，成为初学者跨越“从公式到代码”鸿沟的高效加速器。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShisatoYano_AutonomousVehicleControlBeginnersGuide_6a560d8b.gif","ShisatoYano","Shisato Yano","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FShisatoYano_fb201b27.jpg","Algorithm and Software engineer for Autonomous navigation system",null,"Japan","shisatoyano@gmail.com","4310sy","https:\u002F\u002Fwww.eureka-moments-blog.com\u002F","https:\u002F\u002Fgithub.com\u002FShisatoYano",[86,90,94,98],{"name":87,"color":88,"percentage":89},"Python","#3572A5",99.8,{"name":91,"color":92,"percentage":93},"Dockerfile","#384d54",0.1,{"name":95,"color":96,"percentage":97},"Shell","#89e051",0,{"name":99,"color":100,"percentage":97},"Batchfile","#C1F12E",1482,220,"2026-04-01T01:24:38","MIT","Linux, macOS, Windows","未说明",{"notes":108,"python":109,"dependencies":110},"建议在原生 Linux 或虚拟机中的 Linux 环境下运行。支持使用 Docker 和 VS Code 创建开发容器以自动安装所有依赖库。项目主要用于自动驾驶控制算法的教学与仿真，不包含重型深度学习模型，因此未明确提及 GPU 和大内存需求。","3.13.x",[111,112,113,114,115],"Matplotlib","NumPy","SciPy","pytest","pytest-cov",[15,14],[118,119,120,121,122,123,124,125,126,127,128,129,130,131],"algorithm","autonomous-driving","autonomous-navigation","autonomous-vehicles","calibration","localization","mapping","object-detection","object-tracking","path-planning","path-tracking","perception","python","slam","2026-03-27T02:49:30.150509","2026-04-06T08:35:05.001700",[135,140,145,150,155,159,163,168],{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},12069,"如何为项目贡献代码（例如实现新算法）？","在开始编写代码之前，请务必先阅读项目的贡献指南。维护者建议首先阅读 `HOWTOCONTRIBUTE.md` 
文件以了解具体的贡献流程和规范，然后再开始实现你的代码。指南地址：https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\u002Fblob\u002Fmain\u002FHOWTOCONTRIBUTE.md","https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\u002Fissues\u002F42",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},12070,"提交包含多个功能的大型 PR 时有什么建议？","为了方便维护者审查，建议将大型的功能请求拆分为多个独立的 Pull Request (PR)。例如，如果计划同时实现 RRT、RRT* 和 Informed RRT*，最好将它们分成三个单独的 PR 提交，而不是合并为一个。这样可以提高审查效率并加快合并速度。","https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\u002Fissues\u002F34",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},12071,"新的算法文档应该放在项目的哪个目录下？","现有的文档位于 `doc` 目录中。对于新的路径跟踪算法（如 Stanley 控制器），维护者建议在 `doc` 目录下创建一个新的子目录专门用于路径跟踪算法，并将相关的算法解释文档添加其中。参考路径：https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\u002Ftree\u002Fmain\u002Fdoc","https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\u002Fissues\u002F66",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},12072,"如何实现鲁棒的矩形拟合算法以处理噪声和异常值？","建议使用 `scipy.optimize.least_squares` 进行优化，并配合鲁棒损失函数（如 `soft_l1` 或 `huber`）来减少异常值的影响。关键参数包括设置与预期内点噪声匹配的 `f_scale` 值（单位为米），并使用兼容鲁棒损失的求解器方法（例如 `trf`）。此外，初始化可以使用基于 PCA 的方向和范围，或在优化过程中确保宽度\u002F长度为正（例如优化半尺寸或对数参数）。","https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\u002Fissues\u002F30",{"id":156,"question_zh":157,"answer_zh":158,"source_url":139},12073,"Informed RRT* 算法的实现重点是什么？","Informed RRT* 是在标准 RRT* 基础上的改进，其核心任务是在找到初始解后，实现从允许的椭圆启发式区域中进行直接采样（direct sampling from an admissible elliptical heuristic）。这利用了椭球采样来提高搜索效率。实现时应参考现有的 RRT 代码结构作为基础。",{"id":160,"question_zh":161,"answer_zh":162,"source_url":149},12074,"Stanley 控制器算法的主要原理和改进方向是什么？","Stanley 控制器是一种成熟的横向控制算法，通过计算机器人与路径之间的航向误差（heading error）和横向误差（cross_track_error）来生成转向指令。在该项目中，建议实现一个改进版本，该版本不仅考虑上述误差，还将偏航稳定性（yaw stability）纳入考量，以提升轨迹跟踪性能。相关理论可参考 Stanford Racing Team 
的原始论文及后续的改进研究。",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},12075,"粒子滤波定位算法的核心思想是什么？","粒子滤波的核心思想是用一组从后验分布中抽取的随机状态样本来表示后验概率分布。这种表示方法虽然是非参数的近似，但能够表示比高斯分布更广泛的状态空间分布。算法流程主要包括：根据运动模型采样预测粒子、根据观测模型计算权重、以及根据权重进行重采样（Resampling）以更新粒子集。","https:\u002F\u002Fgithub.com\u002FShisatoYano\u002FAutonomousVehicleControlBeginnersGuide\u002Fissues\u002F27",{"id":169,"question_zh":170,"answer_zh":171,"source_url":154},12076,"如何在项目中添加新的仿真示例脚本？","新的示例脚本应遵循现有代码的结构。例如，对于感知类算法，应在 `src\u002Fsimulations\u002Fperception\u002F` 下创建新脚本，复用现有的聚类或分割步骤，并保持相同的输入输出接口和绘图风格。对于控制类算法，则在 `src\u002Fsimulations\u002Fcontrol\u002F` 下添加。同时，建议添加单元测试（使用合成数据验证误差）和可视化动画（如 Matplotlib 动画展示优化过程或树生长过程）。",[]]