[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-MaliosDark--wifi-3d-fusion":3,"tool-MaliosDark--wifi-3d-fusion":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,2,"2026-04-18T11:18:24",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 
API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[52,15,13,14],"语言模型",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,61],"视频",{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":68,"readme_en":69,"readme_zh":70,"quickstart_zh":71,"use_case_zh":72,"hero_image_url":73,"owner_login":74,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":102,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":120,"github_topics":77,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":122,"updated_at":123,"faqs":124,"releases":125},9331,"MaliosDark\u002Fwifi-3d-fusion","wifi-3d-fusion","WiFi-3D-Fusion is an open-source research project that leverages WiFi CSI signals and deep learning to estimate 3D human pose, fusing wireless sensing with computer vision techniques for next-generation spatial awareness.","wifi-3d-fusion 是一个前沿的开源研究项目，旨在利用普通的 WiFi 信号实现高精度的实时 3D 人体姿态估计。它巧妙地结合了无线感知技术与计算机视觉算法，通过分析 WiFi 
信道状态信息（CSI）和深度学习模型，让设备能够“看”到人的动作，即使在没有摄像头或光线不足的环境下也能正常工作。\n\n这一工具主要解决了传统视觉方案在隐私保护、光照依赖以及遮挡场景下的局限性。无需佩戴任何传感器，仅凭现有的 WiFi 基础设施即可捕捉细腻的人体运动轨迹，为智能家居、安防监控及人机交互提供了全新的空间感知思路。\n\n该项目特别适合人工智能研究人员、无线通信开发者以及计算机视觉工程师使用。对于希望探索多模态融合感知、研发无感监测应用的技术团队而言，wifi-3d-fusion 提供了宝贵的参考实现。其技术亮点在于创新性地融合了 Nexmon CSI 数据采集、PyTorch 深度学习框架以及 NeRF² 辐射场技术，并支持 ESP32 等低成本硬件与 Three.js\u002FOpen3D 可视化方案。需要注意的是，目前该项目主要用于学术研究与实验验证，尚未达到商业级产品的稳定性，但其开放的代码架构为下一代空间智能技术的演进奠定了坚实基础。","wifi-3d-fusion 是一个前沿的开源研究项目，旨在利用普通的 WiFi 信号实现高精度的实时 3D 人体姿态估计。它巧妙地结合了无线感知技术与计算机视觉算法，通过分析 WiFi 信道状态信息（CSI）和深度学习模型，让设备能够“看”到人的动作，即使在没有摄像头或光线不足的环境下也能正常工作。\n\n这一工具主要解决了传统视觉方案在隐私保护、光照依赖以及遮挡场景下的局限性。无需佩戴任何传感器，仅凭现有的 WiFi 基础设施即可捕捉细腻的人体运动轨迹，为智能家居、安防监控及人机交互提供了全新的空间感知思路。\n\n该项目特别适合人工智能研究人员、无线通信开发者以及计算机视觉工程师使用。对于希望探索多模态融合感知、研发无感监测应用的技术团队而言，wifi-3d-fusion 提供了宝贵的参考实现。其技术亮点在于创新性地融合了 Nexmon CSI 数据采集、PyTorch 深度学习框架以及 NeRF² 辐射场技术，并支持 ESP32 等低成本硬件与 Three.js\u002FOpen3D 可视化方案。需要注意的是，目前该项目主要用于学术研究与实验验证，尚未达到商业级产品的稳定性，但其开放的代码架构为下一代空间智能技术的演进奠定了坚实基础。","\n\n\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_6b3e02dc6698.png\" width=\"950\" alt=\"WiFi-3D-Fusion — Real-Time 3D Motion Sensing with WiFi\"\u002F>\n\u003C\u002Fp>\n\n# WiFi-3D-Fusion\n\n\u003Cp align=\"center\">\n  \u003C!-- Core Project Badges -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-WiFi_3D_Fusion-blue?style=flat&logo=github\" alt=\"Project\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDISCLAIMER-Research_Only⚠️-critical?style=flat&logo=exclamation&logoColor=white\" alt=\"Disclaimer\"\u002F>\n  \u003Ca href=\"https:\u002F\u002Fdeepwiki.org\u002FMaliosDark\u002Fwifi-3d-fusion\u002F\" target=\"_blank\" rel=\"noopener noreferrer\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCLICK_FOR_EXTENDED_README_>>>>>>_DeepWiki_\u003C\u003C\u003C\u003C\u003C-0A7EEE?style=flat&logo=readthedocs&logoColor=white\" alt=\"DeepWiki Documentation\"\u002F>\n  
\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion\u002Fblob\u002Fmain\u002FLICENSE\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-GPL--2.0-informational?style=flat\" alt=\"GPL-2.0 License\"\u002F>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- GitHub Stats -->\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion\u002Fstargazers\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat&logo=github\" alt=\"GitHub Stars\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion\u002Fnetwork\u002Fmembers\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat&logo=github\" alt=\"GitHub Forks\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion\u002Fwatchers\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fwatchers\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat&logo=github\" alt=\"Watchers\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion\u002Fissues\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat\" alt=\"Open Issues\"\u002F>\n  \u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat\" alt=\"Last Commit\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fy\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat\" alt=\"Commit Activity\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- Languages & Frameworks -->\n  \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLanguage-Python|C++|JavaScript-3776AB?style=flat&logo=python&logoColor=white\" alt=\"Languages\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-PyTorch-EE4C2C?style=flat&logo=pytorch&logoColor=white\" alt=\"PyTorch\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDevice-ESP32-E7352C?style=flat&logo=espressif&logoColor=white\" alt=\"ESP32\"\u002F>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FThree.js-3D_Graphics-000000?style=flat&logo=three.js&logoColor=white\" alt=\"Three.js\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOpen3D-Viewer-0A7EEE?style=flat&logo=opengl&logoColor=white\" alt=\"Open3D\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- Tools & Libraries -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FScapy-Capture-FFD43B?style=flat&logo=wireshark&logoColor=black\" alt=\"Scapy\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Ftcpdump-Pcap-888888?style=flat&logo=linux&logoColor=white\" alt=\"tcpdump\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNexmon-CSI-8A2BE2?style=flat&logo=gnu-bash&logoColor=white\" alt=\"Nexmon\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOpenMMLab-mmcv|mmdet-FF6F00?style=flat&logo=opencv&logoColor=white\" alt=\"OpenMMLab\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNeRF²-RF_Fields-6A0DAD?style=flat&logo=ai&logoColor=white\" alt=\"NeRF²\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocker-Compose-2496ED?style=flat&logo=docker&logoColor=white\" alt=\"Docker\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- Data & ML -->\n  \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumPy-Arrays-013243?style=flat&logo=numpy&logoColor=white\" alt=\"NumPy\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSciPy-Scientific-8CAAE6?style=flat&logo=scipy&logoColor=white\" alt=\"SciPy\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOpenCV-Computer_Vision-5C3EE8?style=flat&logo=opencv&logoColor=white\" alt=\"OpenCV\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCUDA-GPU_Acceleration-76B900?style=flat&logo=nvidia&logoColor=white\" alt=\"CUDA\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCNN-Neural_Networks-FF4B4B?style=flat&logo=tensorflow&logoColor=white\" alt=\"CNN\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReID-Person_Tracking-9932CC?style=flat&logo=ai&logoColor=white\" alt=\"ReID\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- System & Networking -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUDP-Protocol-4B8BBE?style=flat&logo=wifi&logoColor=white\" alt=\"UDP\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWebSocket-Real_Time-010101?style=flat&logo=socket.io&logoColor=white\" alt=\"WebSocket\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMonitor_Mode-WiFi-1E90FF?style=flat&logo=wifi&logoColor=white\" alt=\"Monitor Mode\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAircrack_ng-Tools-8B0000?style=flat&logo=kali-linux&logoColor=white\" alt=\"Aircrack-ng\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FYAML-Config-CB171E?style=flat&logo=yaml&logoColor=white\" alt=\"YAML\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- Performance -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReal_Time-Processing-FF1493?style=flat&logo=speedtest&logoColor=white\" 
alt=\"Real-Time\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMulti_Threading-Performance-32CD32?style=flat&logo=threading&logoColor=white\" alt=\"Multi-Threading\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContinuous_Learning-Adaptive-9370DB?style=flat&logo=brain&logoColor=white\" alt=\"Continuous Learning\"\u002F>\n\u003C\u002Fp>\n\n\n**Live local Wi-Fi sensing** with CSI: real-time motion detection + visualization, with optional bridges to:\n- **Person-in-WiFi-3D** (multi-person **3D pose** from Wi-Fi) [CVPR 2024].  \n- **NeRF²** (neural RF radiance fields).  \n- **3D Wi-Fi Scanner** (RSSI volumetric mapping).\n\nThis monorepo is production-oriented: robust CSI ingestion from **local Wi-Fi** (ESP32-CSI via UDP, or **Nexmon** via `tcpdump` + `csiread`), a realtime movement detector, and a 3D viewer.\n\n> **Explore More**: The DeepWiki page offers an extended README with additional setup guides, advanced configurations, and community contributions. 
**[Click Here](https:\u002F\u002Fdeepwiki.org\u002FMaliosDark\u002Fwifi-3d-fusion\u002F)**\n\n---\n\n\n## 🎥 Demo\n\nWatch WiFi-3D-Fusion in action:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_36b92ac2a95e.webp\" width=\"800\" alt=\"WiFi-3D-Fusion Demo Animation\"\u002F>\n\u003C\u002Fp>\n\n\n\n\n\n\n## 🧩 Architecture\n\n> **Note**: Diagram typos (e.g., “Wayelet CSi tensas”) are being fixed—check `docs\u002Fimg\u002F` for updates.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_a445ed423a28.png\" width=\"950\" alt=\"WiFi-3D-Fusion — Layered Neural Network Architecture\"\u002F>\n\u003C\u002Fp>\n\n### High-level runtime\n\n```mermaid\nflowchart LR\n  subgraph Capture\n    A1(ESP32 UDP JSON):::node -->|csi_batch| B[esp32_udp.py]\n    A2(Nexmon + tcpdump):::node -->|pcap| C[nexmon_pcap.py]\n    A3(Monitor Radiotap):::node -->|RSSI stream| D[monitor_radiotap.py]\n  end\n\n  B & C & D --> E[realtime_detector.py]\n  E --> F[fusion rf\u002Frssi]\n  F --> G[Open3D live viewer]\n\n  classDef node fill:#0b7285,stroke:#083344,color:#fff;\n```\n\n### Model Training\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_a647684054d1.png\" width=\"950\" alt=\"WiFi-3D-Fusion — End-to-End Pipeline from CSI to 3D Pose\"\u002F>\n\u003C\u002Fp>\n\n\n### Processing loop\n\n```mermaid\nsequenceDiagram\n  participant SRC as CSI\u002FRSSI Source\n  participant DET as MovementDetector\n  participant FUS as Fusion\n  participant VIZ as Open3D Viewer\n\n  loop Frames\n    SRC->>DET: (ts, vector)\n    DET-->>DET: sliding var \u002F threshold\n    DET->>FUS: events + buffers\n    FUS-->>VIZ: point cloud + overlays\n    VIZ-->>User: interactive 3D scene\n  end\n```\n\n---\n\n## 🚀 Quick Start Guide\n\n### Hardware Requirements\n- **Single 
Device**: Developed using a **dual-band USB WiFi adapter with the Realtek RTL8812AU chipset** (ideal for Nexmon) **OR** an **ESP32 with CSI firmware**.\n- Linux system (Ubuntu 22.04+ recommended).\n- Optional: CUDA-capable GPU for faster training.\n\n\n### Method 1: Web-Based Real-Time Visualization (Recommended)\n```bash\n# Install dependencies\nbash scripts\u002Finstall_all.sh\n\n# Start web-based real-time visualization\nsource venv\u002Fbin\u002Factivate\npython run_js_visualizer.py\n\n# Open browser to http:\u002F\u002Flocalhost:5000\n```\n\n### Method 2: Traditional Pipeline\n```bash\n# ESP32-CSI UDP (default port 5566):\n.\u002Fscripts\u002Frun_realtime.sh --source esp32\n\n# Or Nexmon (requires monitor-mode interface)\nsudo .\u002Fscripts\u002Frun_realtime.sh --source nexmon\n```\n\n## 🎯 Model Training & Continuous Learning\n\n### Train Your Own Detection Model\n```bash\n# Basic training with current configuration\n.\u002Ftrain_wifi3d.sh\n\n# Quick training session with continuous learning\n.\u002Ftrain_wifi3d.sh --quick --continuous\n\n# Train with specific device source\n.\u002Ftrain_wifi3d.sh --source esp32 --device cuda --epochs 200\n\n# Enable continuous learning (model improves automatically)\n.\u002Ftrain_wifi3d.sh --continuous --auto-improve\n\n# Advanced training with custom parameters\n.\u002Ftrain_wifi3d.sh \\\n    --source nexmon \\\n    --device cuda \\\n    --epochs 500 \\\n    --batch-size 64 \\\n    --lr 0.0005 \\\n    --continuous \\\n    --auto-improve\n```\n\n### Continuous Learning Features\n- **Real-time model improvement**: The system automatically learns from new detections\n- **Adaptive training**: Model updates based on detection confidence and user feedback  \n- **Self-improvement**: System gets better at person detection over time\n- **Background learning**: Training happens continuously without interrupting visualization\n\n## 📊 System Architecture & Features\n\n### Core Components\n1. 
**CSI Data Acquisition**\n   - ESP32-CSI via UDP (recommended for beginners)\n   - Nexmon firmware on Broadcom chips (advanced users)\n   - Real-time CSI amplitude and phase extraction\n\n2. **Advanced Detection Pipeline**\n   - Convolutional Neural Network for person detection\n   - Real-time skeleton estimation and tracking\n   - Multi-person identification and re-identification (ReID)\n   - Adaptive movement threshold adjustment\n\n3. **3D Visualization System**\n   - Web-based Three.js renderer with professional UI\n   - Real-time 3D skeleton visualization\n   - Animated CSI noise patterns on ground plane\n   - Interactive camera controls and HUD overlays\n\n4. **Machine Learning Features**\n   - Continuous learning during operation\n   - Automatic model improvement based on feedback\n   - Self-adaptive detection thresholds\n   - Person re-identification across sessions\n\n### Real-Time Pipeline Flow\n```mermaid\nflowchart TD\n    A[CSI Data Source] --> B[Signal Processing]\n    B --> C[Neural Detection]\n    C --> D[3D Visualization]\n    \n    A1[ESP32\u002FNexmon] --> B1[Amplitude\u002FPhase]\n    B1 --> C1[CNN Classifier]\n    C1 --> D1[Three.js Web UI]\n    \n    A2[UDP\u002FPCap] --> B2[Movement Detection]\n    B2 --> C2[Person Tracking]\n    C2 --> D2[Skeleton Rendering]\n    \n    A3[Config YAML] --> B3[Adaptive Thresholding]\n    B3 --> C3[ReID System]\n    C3 --> D3[Activity Logging]\n```\n\n## 🛠️ Installation & Setup\n\n### Prerequisites\n- Linux system (Ubuntu 18.04+ recommended)\n- Python 3.8+\n- WiFi adapter with monitor mode support (for Nexmon)\n- ESP32 with CSI firmware (for ESP32 mode)\n- CUDA-capable GPU (optional, improves training speed)\n\n### Complete Installation\n```bash\n# Clone repository\ngit clone https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion.git\ncd wifi-3d-fusion\n\n# Install all dependencies and setup environment\nbash scripts\u002Finstall_all.sh\n\n# Activate Python environment\nsource 
venv\u002Fbin\u002Factivate\n\n# Verify installation\npython -c \"import torch, numpy, yaml; print('✅ All dependencies installed')\"\n```\n\n### Hardware Setup\n\n> **Note**: See [WiFi Adapter, Driver, and Monitor Mode Setup](#wifi-adapter-driver-and-monitor-mode-setup-rtl8812au-example) for detailed RTL8812AU configuration.\n\n#### Option A: ESP32-CSI Setup\n1. **Flash ESP32 with CSI firmware**\n   ```bash\n   # Download ESP32-CSI-Tool firmware\n   # Flash to ESP32 using esptool or Arduino IDE\n   ```\n\n2. **Configure ESP32**\n   - Set WiFi network and password\n   - Configure UDP target IP (your PC's IP)\n   - Set UDP port to 5566 (or modify `configs\u002Ffusion.yaml`)\n\n3. **Update configuration**\n   ```yaml\n   # configs\u002Ffusion.yaml\n   source: esp32\n   esp32_udp_port: 5566\n   ```\n\n#### Option B: Nexmon Setup\n1. **Install Nexmon firmware**\n   ```bash\n   # For Raspberry Pi 4 with bcm43455c0\n   git clone https:\u002F\u002Fgithub.com\u002Fseemoo-lab\u002Fnexmon_csi.git\n   cd nexmon_csi\n   # Follow installation instructions for your device\n   ```\n\n2. **Enable monitor mode**\n   ```bash\n   sudo ip link set wlan0 down\n   sudo iw dev wlan0 set type monitor\n   sudo ip link set wlan0 up\n   ```\n\n3. 
**Update configuration**\n   ```yaml\n   # configs\u002Ffusion.yaml  \n   source: nexmon\n   nexmon_iface: wlan0\n   ```\n\n## 🎮 Running the System\n\n### Web-Based Visualization (Recommended)\n```bash\n# Start the web server with real-time visualization\nsource venv\u002Fbin\u002Factivate\npython run_js_visualizer.py\n\n# Optional: specify device source\npython run_js_visualizer.py --source esp32\npython run_js_visualizer.py --source nexmon\n\n# Access web interface\n# Open browser to: http:\u002F\u002Flocalhost:5000\n```\n\n### Traditional Terminal-Based\n```bash\n# Run with default configuration\n.\u002Frun_wifi3d.sh\n\n# Run with specific source\n.\u002Frun_wifi3d.sh esp32\n.\u002Frun_wifi3d.sh nexmon\n\n# Run with custom channel hopping (Nexmon only)\nsudo IFACE=mon0 HOP_CHANNELS=1,6,11 python run_realtime_hop.py\n```\n\n### Training Mode\n```bash\n# Collect training data first by running the system\npython run_js_visualizer.py\n\n# Train model on collected data\nbash train_wifi3d.sh --epochs 100 --device cuda\n\n# Train with continuous learning enabled\nbash train_wifi3d.sh --continuous --auto-improve\n\n# Resume training from checkpoint\nbash train_wifi3d.sh --resume env\u002Fweights\u002Fcheckpoint_epoch_50.pth\n```\n\n## 📋 Configuration\n\n### Main Configuration File: `configs\u002Ffusion.yaml`\n```yaml\n# CSI Data Source\nsource: esp32                    # esp32, nexmon, or dummy\nesp32_udp_port: 5566            # UDP port for ESP32\nnexmon_iface: wlan0             # Network interface for Nexmon\n\n# Detection Parameters  \nmovement_threshold: 0.002        # Sensitivity for movement detection\ndebounce_seconds: 0.3           # Minimum time between detections\nwin_seconds: 3.0                # CSI analysis window size\n\n# 3D Visualization\nscene_bounds: [[-2,2], [-2,2], [0,3]]  # 3D scene boundaries\nrf_res: 64                      # RF field resolution\nalpha: 0.6                      # Visualization transparency\n\n# Machine Learning\nenable_reid: true  
             # Enable person re-identification\nreid:\n  checkpoint: env\u002Fweights\u002Fwho_reid_best.pth\n  seq_secs: 2.0                # Sequence length for ReID\n  fps: 20.0                    # Processing framerate\n\n# Advanced Features  \nenable_pose3d: false            # 3D pose estimation (experimental)\nenable_nerf2: false             # Neural RF fields (experimental)\n```\n\n## 🔧 Advanced Features\n\n### Continuous Learning System\nThe system includes an advanced continuous learning pipeline that:\n\n1. **Monitors detection confidence** in real-time\n2. **Automatically collects training samples** from high-confidence detections  \n3. **Updates the model** in the background without interrupting visualization\n4. **Adapts detection thresholds** based on environment characteristics\n5. **Improves person re-identification** over time\n\n### Model Training Pipeline\n```python\n# Example: Custom training script\nfrom train_model import WiFiTrainer, TrainingConfig\n\n# Configure training\nconfig = TrainingConfig(\n    batch_size=64,\n    learning_rate=0.001,  \n    epochs=200,\n    continuous_learning=True,\n    auto_improvement=True\n)\n\n# Initialize trainer\ntrainer = WiFiTrainer('configs\u002Ffusion.yaml', args)\n\n# Start training with continuous learning\ntrainer.train()\n```\n\n### Real-Time Performance Optimization\n- **Multi-threaded processing**: Separate threads for data acquisition, processing, and visualization\n- **Adaptive frame rates**: Automatically adjusts processing speed based on system load\n- **Memory management**: Efficient CSI buffer management for long-running sessions\n- **GPU acceleration**: CUDA support for neural network inference and training\n\n## 🌐 Web Interface Features\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_b18ccb47596d.png\" width=\"950\" alt=\"WiFi-3D-Fusion — Real-Time Web Interface Dashboard\"\u002F>\n\u003C\u002Fp>\n\n### 
Responsive Dashboard\n- **Real-time CSI metrics**: Signal variance, amplitude, activity levels\n- **Person detection status**: Count, confidence, positions\n- **Skeleton visualization**: 3D animated skeletons with joint tracking\n- **System performance**: FPS, memory usage, processing time\n- **Activity logging**: Real-time event log with timestamps\n\n### Interactive 3D Scene\n- **Manual camera controls**: Orbit, zoom, pan with mouse\n- **Ground noise visualization**: Animated circular wave patterns\n- **Skeleton rendering**: Full 3D human skeletons for detected persons\n- **Real-time updates**: Live data streaming at 10 FPS\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_79a58470ee7b.png\" width=\"800\" alt=\"WiFi-3D-Fusion — 3D Scene with Skeleton Rendering and Ground Noise Patterns\"\u002F>\n\u003C\u002Fp>\n\n### Dashboard Component Panels\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_21641822a8ff.png\" width=\"400\" alt=\"WiFi-3D-Fusion — Real-Time System Performance Metrics Panel\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_48b1aa73e3ef.png\" width=\"400\" alt=\"WiFi-3D-Fusion — CSI Signal Analysis and Detection Status Panel\"\u002F>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cb>System Performance Metrics\u003C\u002Fb> (left) and \u003Cb>CSI Signal Analytics\u003C\u002Fb> (right)\n\u003C\u002Fp>\n\n### HUD Information Panels\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_9f8b6320cccd.png\" width=\"600\" alt=\"WiFi-3D-Fusion — Real-Time HUD Information Panels\"\u002F>\n\u003C\u002Fp>\n\n\n### Validate Model Performance\n```bash\n# Evaluate trained model\npython tools\u002Feval_reid.py --checkpoint 
env\u002Fweights\u002Fbest_model.pth\n\n# Record test sequences\npython tools\u002Frecord_reid_sequences.py --duration 60\n\n# Simulate CSI data for testing\npython tools\u002Fsimulate_csi.py --samples 1000\n```\n\n## 📊 Data Collection & Management\n\n### CSI Data Storage\n```\nenv\u002F\n├── csi_logs\u002F              # Raw CSI data files (*.pkl)\n├── logs\u002F                  # System and training logs  \n├── weights\u002F               # Trained model checkpoints\n└── visualization\u002F         # Web interface files\n    ├── index.html         # Main dashboard\n    ├── js\u002Fapp.js         # Visualization logic\n    └── css\u002Fstyle.css     # UI styling\n```\n\n### Training Data Organization\n```\ndata\u002F\n├── reid\u002F                  # Person re-identification data\n│   ├── person_000\u002F       # Individual person sequences\n│   ├── person_001\u002F\n│   └── ...\n├── splits\u002F               # Training\u002Fvalidation splits\n│   ├── train.txt\n│   ├── val.txt  \n│   └── gallery.txt\n└── logs\u002F                 # Training history and metrics\n```\n\n## 🚨 Troubleshooting\n\n### Common Issues\n\n#### 1. No CSI Data Received\n```bash\n# Check ESP32 connection\nping \u003CESP32_IP>\n\n# Verify UDP port\nnetstat -ulnp | grep 5566\n\n# Test with dummy data\npython run_js_visualizer.py --source dummy\n```\n\n#### 2. Monitor Mode Issues (Nexmon)\n```bash\n# Reset interface\nsudo ip link set wlan0 down\nsudo iw dev wlan0 set type managed  \nsudo ip link set wlan0 up\n\n# Re-enable monitor mode\nsudo ip link set wlan0 down\nsudo iw dev wlan0 set type monitor\nsudo ip link set wlan0 up\n```\n\n#### 3. Training Fails\n```bash\n# Check GPU availability\npython -c \"import torch; print(torch.cuda.is_available())\"\n\n# Reduce batch size for limited memory\nbash train_wifi3d.sh --batch-size 16\n\n# Use CPU training\nbash train_wifi3d.sh --device cpu\n```\n\n#### 4. 
Web Interface Issues\n```bash\n# Check if server is running\ncurl http:\u002F\u002Flocalhost:5000\u002Fdata\n\n# Clear browser cache and reload\n# Check browser console for JavaScript errors (F12)\n\n# Restart server\npkill -f run_js_visualizer.py\npython run_js_visualizer.py\n```\n\n### Debug Logging\nEnable verbose logging for troubleshooting:\n```bash\n# Set debug mode\nexport WIFI3D_DEBUG=1\n\n# Run with verbose output\npython run_js_visualizer.py --verbose\n\n# Check log files\ntail -f env\u002Flogs\u002Fwifi3d_*.log\n```\n---\n\n## Optional bridges (disabled by default)\n\n### 1) Person-in-WiFi-3D (3D pose)\n\n* Repo: `third_party\u002FPerson-in-WiFi-3D-repo`\n* Enable in `configs\u002Ffusion.yaml`: `enable_pose3d: true`\n* Place a compatible checkpoint at `env\u002Fweights\u002Fpwifi3d.pth`.\n* Prepare test data under the repo’s expected structure (`data\u002Fwifipose\u002Ftest_data\u002F...`), then run:\n\n  ```bash\n  python -m src.bridges.pwifi3d_runner \\\n    third_party\u002FPerson-in-WiFi-3D-repo config\u002Fwifi\u002Fpetr_wifi.py env\u002Fweights\u002Fpwifi3d.pth\n  ```\n\n  *(We shell out to OpenMMLab’s `tools\u002Ftest.py` inside the repo.)*\n\n### 2) NeRF² (RF field)\n\n* Repo: `third_party\u002FNeRF2`\n* Enable in `configs\u002Ffusion.yaml`: `enable_nerf2: true`\n* Train:\n\n  ```bash\n  python -m src.bridges.nerf2_runner\n  ```\n\n### 3) 3D Wi-Fi Scanner (RSSI volume)\n\n* Repo: `third_party\u002F3D_wifi_scanner`\n* Use that tooling to generate volumetric RSSI datasets; you can integrate them into your own fusion pipeline if desired.\n\n---\n\n## Configuration\n\nEdit `configs\u002Ffusion.yaml`:\n\n* `source: esp32 | nexmon`\n* `esp32_udp_port`, `nexmon_iface`, etc.\n* Detector thresholds: `movement_threshold`, `win_seconds`, `debounce_seconds`.\n\n---\n\n## Docker\n\n```bash\ndocker compose build\ndocker compose run --rm fusion\n```\n\n---\n\n## Notes\n\n* For **Nexmon**, you need `tcpdump` privileges. 
The Dockerfile includes it; on host, install it and run as root\u002Fsudo.\n* For **Person-in-WiFi-3D**, follow that repo’s requirements (PyTorch, MMCV\u002FMMDet). Our `scripts\u002Finstall_all.sh` installs compatible versions.\n* For **ESP32-CSI**, UDP JSON payloads compatible with common forks are supported.\n\n---\n\n### Usage (super short)\n\n### Adaptive monitor-mode pipeline (recommended for RTL8812AU, Nexmon, or any monitor-mode interface)\n```bash\nsudo -E env PATH=\"$PWD\u002Fvenv\u002Fbin:$PATH\" IFACE=mon0 HOP_CHANNELS=1,6,11 python3 run_realtime_hop.py\n```\nThis will launch the self-learning pipeline described above.\n\n\nIf you want the Docker path:\n\n```bash\ndocker compose build\ndocker compose run --rm fusion\n```\n---\n\n## 🔧 System Requirements & Dependencies\n\n* **OS:** Ubuntu 22.04+ (tested with Kernel 6.14)\n* **Python:** 3.12 (venv managed by `scripts\u002Finstall_all.sh`)\n* **GPU:** Optional (only for Pose3D\u002FNeRF² bridges)\n* **Packages (auto-installed):**\n\n  * Base: `numpy`, `pyyaml`, `loguru`, `tqdm`, `open3d`, `opencv-python`, `einops`, `watchdog`, `pyzmq`, `matplotlib`, `csiread==1.4.1`\n  * Optional Pose3D: `torch` + `torchvision` (cu118\u002Fcu121 or cpu), `openmim`, `mmengine`, `mmcv`, `mmdet`\n* **System tools for capture (optional):** `tcpdump`, `tshark\u002Fwireshark`, `aircrack-ng`, `iw`\n\n> The installer keeps Torch\u002F`openmim` on **default PyPI** (no PyTorch index bleed) and pins `csiread` to a wheel compatible with Python 3.12.\n\n---\n\n## 🛠️ WiFi Adapter, Driver, and Monitor Mode Setup (RTL8812AU Example)\n\n### Supported Adapters\nThis project was developed using a **dual-band USB WiFi adapter with the Realtek RTL8812AU chipset**, which supports both 2.4 GHz and 5 GHz bands, monitor mode, and packet injection. This adapter is widely used for WiFi security research and is compatible with Linux distributions such as Ubuntu, Kali, and Parrot. 
Other Nexmon-compatible adapters or ESP32 boards with CSI firmware are also supported.

### Driver Installation (RTL8812AU)

The default kernel driver may not provide full monitor mode support. For best results, install the latest driver from the [aircrack-ng/rtl8812au](https://github.com/aircrack-ng/rtl8812au) repository:

```bash
sudo apt update
sudo apt install dkms git build-essential
git clone https://github.com/aircrack-ng/rtl8812au.git
cd rtl8812au
sudo make dkms_install
```

This will build and install the driver for your current kernel, enabling reliable monitor mode and packet capture.

### Enabling Monitor Mode

After installing the driver, connect your RTL8812AU adapter and identify its interface name (e.g., `wlx...`):

```bash
iw dev
iwconfig
```

To enable monitor mode and create a `mon0` interface:

```bash
sudo airmon-ng check kill
sudo airmon-ng start <your-interface>
# Or manually:
sudo ip link set <your-interface> down
sudo iw dev <your-interface> set type monitor
sudo ip link set <your-interface> up
```

Verify monitor mode:

```bash
iwconfig
```
You should see `Mode:Monitor` for `mon0` or your chosen interface.

### Verifying Packet Capture

To confirm that your interface is capturing WiFi packets in monitor mode:

```bash
sudo airodump-ng mon0
sudo tcpdump -i mon0
```

You should see networks and packets. If not, ensure there is active WiFi traffic in your environment.

### Additional Tools

For debugging and traffic generation, you may also want to install:

```bash
sudo apt install aircrack-ng tcpdump tshark
```

---

## 🧑‍💻 Running the Real-Time Pipeline with Monitor Mode

### Prerequisites
1. **WiFi adapter in monitor mode** (see setup instructions above)
2. **Virtual environment activated**
3. **All dependencies installed**

### Step-by-Step Execution

1. 
**Activate the virtual environment:**
```bash
source venv/bin/activate
```

2. **Set up the monitor interface (if not already done):**
```bash
sudo bash scripts/setup_monitor.sh
```

3. **Verify monitor mode is working:**
```bash
sudo iwconfig mon0
sudo tshark -i mon0 -c 5
```

4. **Run the real-time pipeline:**
```bash
# Basic execution with monitor mode
sudo -E env PATH="$PWD/venv/bin:$PATH" IFACE=mon0 python run_js_visualizer.py --source monitor

# Advanced: multi-channel hopping
sudo -E env PATH="$PWD/venv/bin:$PATH" IFACE=mon0 HOP_CHANNELS=1,6,11 python run_realtime_hop.py

# Web interface with monitor mode
sudo python run_js_visualizer.py --source monitor
```

5. **Open the web interface:**
```bash
# In your browser, navigate to:
http://localhost:5000
```

### What This Does:
- ✅ **Live CSI/RSSI Capture**: real-time packet analysis from the monitor interface
- ✅ **Automatic Training**: continuous learning and model improvement
- ✅ **3D Visualization**: web-based Three.js viewer with skeleton rendering
- ✅ **Channel Scanning**: adaptive hopping across active WiFi channels
- ✅ **Person Detection**: real-time person tracking and identification
- ✅ **Activity Logging**: complete debug and status information

---

## 🧑‍💻 Running the Real-Time Adaptive Python Pipeline

Once your adapter is in monitor mode and capturing packets, run:

```bash
sudo -E env PATH="$PWD/venv/bin:$PATH" IFACE=mon0 HOP_CHANNELS=1,6,11 python3 run_realtime_hop.py
```

This will:
- Start live CSI/RSSI capture and analytics
- Train the detection model automatically
- Launch the Open3D viewer (robust, never blank)
- Adaptively scan and focus on the most active WiFi channels
- Show detections and all debug/status info in English

---

## 🛡️ Troubleshooting

* **Blank Open3D window**
  Ensure data is flowing:

  * ESP32: `sudo tcpdump -n -i any udp port 5566`
  
* Nexmon: `sudo tcpdump -i wlan0 -s 0 -vv -c 20`
  * Monitor: `sudo tshark -I -i mon0 -a duration:5 -T fields -e radiotap.dbm_antsignal | head`
    Install GL if needed: `sudo apt-get install -y libgl1`

* **`openmim` not found / Torch index issues**
  Use the provided `install_all.sh` (the PyTorch index is used only for Torch itself; `openmim` comes from PyPI).
  For Pose3D:
  `WITH_POSE=true TORCH_CUDA=cu121 bash scripts/install_all.sh`

* **`csiread` wheel mismatch**
  Python 3.12 → pin to `csiread==1.4.1` (already in the requirements flow).

* **Monitor interface won't capture**
  Kill network managers, recreate `mon0`, fix the channel:
  `sudo airmon-ng check kill && bash scripts/setup_monitor.sh`

---

<!-- ====================== WHY I BUILT THIS ====================== -->
<h2>🌌 Why I Built WiFi-3D-Fusion</h2>
<p>
  I built <strong>WiFi-3D-Fusion</strong> because I couldn't stand the silence.<br/>
  The world is full of invisible signals, oceans of information passing through us every second, yet most people never even notice. Researchers publish papers, companies whisper promises, but almost nobody shows the truth.
</p>
<p>
  I wanted to tear the veil.
</p>
<p>
  This project is not just software. It's proof that what we call "air" is alive with data, that the invisible can be sculpted into form, movement, presence.<br/>
  It's not about spying. It's not about control.<br/>
  It's about showing that technology can reveal without violating, sense without watching, protect without chains.
</p>
<p>
  Why? Because there are places where cameras fail: dark rooms, burning buildings, collapsed tunnels, deep underground. 
And in those places, a system like this could mean the difference between life and death.
</p>
<p>
  I experiment because I refuse to accept "impossible."<br/>
  I build because the world needs to see what it denies exists.<br/>
  WiFi-3D-Fusion is not a product; it's a signal flare in the dark.
</p>

> **Limitations**: WiFi sensing faces challenges such as signal interference and resolution limits (2.4 GHz: ~12.5 cm; 5 GHz: ~6 cm). This is a research project; do not rely on it for critical applications without validation.

## 🔏 Legal / Research Notice

<!-- ====================== ETHICS / PRIVACY ====================== -->
<h2>Ethics &amp; Privacy</h2>
<ul>
  <li>Operate only with explicit authorization on networks and environments you own or control.</li>
  <li>Prefer non-identifying sensing modes where possible; avoid storing personal data.</li>
  <li>Inform participants when running live demos in shared spaces.</li>
  <li>Respect local laws and regulations at all times.</li>
</ul>

<!-- ====================== DISCLAIMER (ASCII WALL) ====================== -->
<h2>Disclaimer</h2>
<pre>
╔══════════════════════════════════════════════════════════════════════════╗
║                              🔏 DISCLAIMER                               ║
╚══════════════════════════════════════════════════════════════════════════╝

This project, WiFi-3D-Fusion, is provided strictly for research,
educational, and experimental purposes only.

It must ONLY be used on networks, devices, and environments where you
have explicit permission and authorization.

────────────────────────────────────────────────────────────────────────────

⚠️  LEGAL NOTICE:
- Unauthorized use may violate local laws, privacy regulations, and wiretap acts.
- The author does NOT condone or support surveillance, spying, 
or privacy invasion.
- You are fully responsible for lawful and ethical operation.

────────────────────────────────────────────────────────────────────────────

⚠️  LIMITATION OF LIABILITY:
- The author (MaliosDark) is NOT responsible for misuse, illegal activities,
  or any damages arising from this software.
- By downloading, compiling, or executing this project, you accept full
  responsibility for compliance with all applicable laws.

────────────────────────────────────────────────────────────────────────────

✔️  SAFE USE RECOMMENDATIONS:
- Use ONLY on your own Wi-Fi networks or authorized testbeds.
- Prefer demo/dummy modes for public showcases.
- Inform participants when operating in live environments.
- Do NOT attempt covert monitoring of individuals.

────────────────────────────────────────────────────────────────────────────

📌 By using WiFi-3D-Fusion, you acknowledge:
1) You understand this disclaimer in full.
2) You accept sole responsibility for all outcomes of use.
3) The author is indemnified against legal claims or damages.

╔══════════════════════════════════════════════════════════════════════════╗
║         END OF DISCLAIMER – USE RESPONSIBLY OR DO NOT USE AT ALL         ║
╚══════════════════════════════════════════════════════════════════════════╝
</pre>

---

## Star History

<a href="https://www.star-history.com/#MaliosDark/wifi-3d-fusion&Date">
 <picture>
   <source media="(prefers-color-scheme: dark)" srcset="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_2b32ce59fda4.png&theme=dark" />
   <source media="(prefers-color-scheme: light)" srcset="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_2b32ce59fda4.png" />
   <img alt="Star History Chart" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_2b32ce59fda4.png\" \u002F>\n \u003C\u002Fpicture>\n\u003C\u002Fa>\n\n\n\u003C!-- ========= Full-width animated visitor counter (Moe Counter) ========= -->\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fcount.getloli.com\u002F@MaliosDark.wifi-3d-fusion?theme=moebooru\" width=\"100%\" alt=\"Visitors • animated counter\"\u002F>\n\u003C\u002Fp>\n\n## 📚 Citations \u002F Upstreams\n\n1. [End-to-End Multi-Person 3D Pose Estimation with Wi-Fi (CVPR 2024)](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FYan_Person-in-WiFi_3D_End-to-End_Multi-Person_3D_Pose_Estimation_with_Wi-Fi_CVPR_2024_paper.pdf)  \n2. [GitHub - aiotgroup\u002FPerson-in-WiFi-3D-repo](https:\u002F\u002Fgithub.com\u002Faiotgroup\u002FPerson-in-WiFi-3D-repo)  \n3. [NeRF2: Neural Radio-Frequency Radiance Fields (MobiCom 2023)](https:\u002F\u002Fweb.comp.polyu.edu.hk\u002Fcsyanglei\u002Fdata\u002Ffiles\u002Fnerf2-mobicom23.pdf)  \n4. [GitHub - XPengZhao\u002FNeRF2](https:\u002F\u002Fgithub.com\u002FXPengZhao\u002FNeRF2)  \n5. [GitHub - Neumi\u002F3D_wifi_scanner](https:\u002F\u002Fgithub.com\u002FNeumi\u002F3D_wifi_scanner)  \n6. [Hackaday - Visualizing WiFi With A Converted 3D Printer](https:\u002F\u002Fhackaday.com\u002F2021\u002F11\u002F22\u002Fvisualizing-wifi-with-a-converted-3d-printer\u002F)  \n7. [GitHub - StevenMHernandez\u002FESP32-CSI-Tool](https:\u002F\u002Fgithub.com\u002FStevenMHernandez\u002FESP32-CSI-Tool)  \n8. 
[GitHub - citysu/csiread](https://github.com/citysu/csiread)

---

<p align="center">
  <img src="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_6b3e02dc6698.png" width="950" alt="WiFi-3D-Fusion — real-time 3D motion sensing over WiFi"/>
</p>

# WiFi-3D-Fusion

<p align="center">
  <!-- Core project badges -->
  <img src="https://img.shields.io/badge/Project-WiFi_3D_Fusion-blue?style=flat&logo=github" alt="Project"/>
  <img src="https://img.shields.io/badge/DISCLAIMER-Research_Only⚠️-critical?style=flat&logo=exclamation&logoColor=white" alt="Disclaimer"/>
  <a href="https://deepwiki.org/MaliosDark/wifi-3d-fusion/" target="_blank" rel="noopener noreferrer">
    <img src="https://img.shields.io/badge/CLICK_FOR_EXTENDED_README_>>>>>>_DeepWiki_<<<<<-0A7EEE?style=flat&logo=readthedocs&logoColor=white" alt="DeepWiki docs"/>
  </a>
  <a href="https://github.com/MaliosDark/wifi-3d-fusion/blob/main/LICENSE">
    <img src="https://img.shields.io/badge/License-GPL--2.0-informational?style=flat" alt="GPL-2.0 license"/>
  </a>
</p>

<p align="center">
  <!-- GitHub stats -->
  <a href="https://github.com/MaliosDark/wifi-3d-fusion/stargazers">
    <img src="https://img.shields.io/github/stars/MaliosDark/wifi-3d-fusion?style=flat&logo=github" alt="GitHub stars"/>
  </a>
  <a href="https://github.com/MaliosDark/wifi-3d-fusion/network/members">
    <img src="https://img.shields.io/github/forks/MaliosDark/wifi-3d-fusion?style=flat&logo=github" 
alt=\"GitHub叉子\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion\u002Fwatchers\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fwatchers\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat&logo=github\" alt=\"关注者\"\u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion\u002Fissues\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat\" alt=\"未解决问题\"\u002F>\n  \u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat\" alt=\"最近一次提交\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fy\u002FMaliosDark\u002Fwifi-3d-fusion?style=flat\" alt=\"提交活跃度\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- 语言与框架 -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLanguage-Python|C++|JavaScript-3776AB?style=flat&logo=python&logoColor=white\" alt=\"语言\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-PyTorch-EE4C2C?style=flat&logo=pytorch&logoColor=white\" alt=\"PyTorch\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDevice-ESP32-E7352C?style=flat&logo=espressif&logoColor=white\" alt=\"ESP32\"\u002F>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FThree.js-3D_Graphics-000000?style=flat&logo=three.js&logoColor=white\" alt=\"Three.js\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOpen3D-Viewer-0A7EEE?style=flat&logo=opengl&logoColor=white\" alt=\"Open3D\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- 工具与库 -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FScapy-Capture-FFD43B?style=flat&logo=wireshark&logoColor=black\" 
alt=\"Scapy\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Ftcpdump-Pcap-888888?style=flat&logo=linux&logoColor=white\" alt=\"tcpdump\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNexmon-CSI-8A2BE2?style=flat&logo=gnu-bash&logoColor=white\" alt=\"Nexmon\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOpenMMLab-mmcv|mmdet-FF6F00?style=flat&logo=opencv&logoColor=white\" alt=\"OpenMMLab\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNeRF²-RF_Fields-6A0DAD?style=flat&logo=ai&logoColor=white\" alt=\"NeRF²\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocker-Compose-2496ED?style=flat&logo=docker&logoColor=white\" alt=\"Docker\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- 数据与机器学习 -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumPy-Arrays-013243?style=flat&logo=numpy&logoColor=white\" alt=\"NumPy\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSciPy-Scientific-8CAAE6?style=flat&logo=scipy&logoColor=white\" alt=\"SciPy\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOpenCV-Computer_Vision-5C3EE8?style=flat&logo=opencv&logoColor=white\" alt=\"OpenCV\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCUDA-GPU_Acceleration-76B900?style=flat&logo=nvidia&logoColor=white\" alt=\"CUDA\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCNN-Neural_Networks-FF4B4B?style=flat&logo=tensorflow&logoColor=white\" alt=\"CNN\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReID-Person_Tracking-9932CC?style=flat&logo=ai&logoColor=white\" alt=\"ReID\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- 系统与网络 -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUDP-Protocol-4B8BBE?style=flat&logo=wifi&logoColor=white\" 
alt=\"UDP\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWebSocket-Real_Time-010101?style=flat&logo=socket.io&logoColor=white\" alt=\"WebSocket\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMonitor_Mode-WiFi-1E90FF?style=flat&logo=wifi&logoColor=white\" alt=\"监控模式\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAircrack_ng-Tools-8B0000?style=flat&logo=kali-linux&logoColor=white\" alt=\"Aircrack-ng\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FYAML-Config-CB171E?style=flat&logo=yaml&logoColor=white\" alt=\"YAML\"\u002F>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003C!-- 性能 -->\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReal_Time-Processing-FF1493?style=flat&logo=speedtest&logoColor=white\" alt=\"实时性\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMulti_Threading-Performance-32CD32?style=flat&logo=threading&logoColor=white\" alt=\"多线程\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContinuous_Learning-Adaptive-9370DB?style=flat&logo=brain&logoColor=white\" alt=\"持续学习\"\u002F>\n\u003C\u002Fp>\n\n\n**本地Wi‑Fi实时感知**，利用信道状态信息（CSI）：实时运动检测与可视化，并可选桥接至：\n- **Person-in-WiFi-3D**（基于Wi‑Fi的多人**3D姿态**）[CVPR 2024]。\n- **NeRF²**（神经射频辐射场）。\n- **3D Wi‑Fi扫描仪**（RSSI体积映射）。\n\n本单体仓库面向生产环境：从**本地Wi‑Fi**稳定获取CSI数据（通过ESP32‑CSI经UDP传输，或使用**Nexmon**结合`tcpdump`和`csiread`），配备实时运动检测器及3D可视化工具。\n\n> **探索更多**：DeepWiki页面提供了扩展的README，包含额外的安装指南、高级配置以及社区贡献。**[点击此处](https:\u002F\u002Fdeepwiki.org\u002FMaliosDark\u002Fwifi-3d-fusion\u002F)**\n\n---\n\n\n## 🎥 演示\n\n观看WiFi‑3D‑Fusion的实际运行效果：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_36b92ac2a95e.webp\" width=\"800\" alt=\"WiFi‑3D‑Fusion演示动画\"\u002F>\n\u003C\u002Fp>\n\n\n\n\n\n\n## 🧩 架构\n\n> **注意**：图中的拼写错误（如“Wayelet CSi 
tensas") are being fixed; see `docs/img/` for updates.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_a445ed423a28.png" width="950" alt="WiFi‑3D‑Fusion — layered neural network architecture"/>
</p>

### High-level runtime flow

```mermaid
flowchart LR
  subgraph Capture
    A1(ESP32 UDP JSON):::node -->|csi_batch| B[esp32_udp.py]
    A2(Nexmon + tcpdump):::node -->|pcap| C[nexmon_pcap.py]
    A3(Monitor Radiotap):::node -->|RSSI stream| D[monitor_radiotap.py]
  end

  B & C & D --> E[realtime_detector.py]
  E --> F[fusion rf/rssi]
  F --> G[Open3D live viewer]

  classDef node fill:#0b7285,stroke:#083344,color:#fff;
```

### Model training

<p align="center">
  <img src="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_a647684054d1.png" width="950" alt="WiFi-3D-Fusion — end-to-end pipeline from CSI to 3D pose"/>
</p>

### Processing loop

```mermaid
sequenceDiagram
  participant SRC as CSI/RSSI source
  participant DET as Motion detector
  participant FUS as Fusion module
  participant VIZ as Open3D viewer

  loop Frame
    SRC->>DET: (timestamp, vector)
    DET-->>DET: sliding window / threshold check
    DET->>FUS: event + buffer
    FUS-->>VIZ: point cloud + overlays
    VIZ-->>User: interactive 3D scene
  end
```

---

## 🚀 Quick Start Guide

### Hardware requirements
- **Single device**: a dual-band USB WiFi adapter with the Realtek RTL8812AU chipset (ideal for Nexmon), or an ESP32 board with CSI firmware.
- Linux (Ubuntu 22.04+ recommended).
- Optional: a CUDA-capable GPU to speed up training.


### Method 1: Web-based real-time visualization (recommended)
```bash
# Install dependencies
bash scripts/install_all.sh

# Start the web-based live visualization
source venv/bin/activate
python run_js_visualizer.py

# Open your browser at http://localhost:5000
```

### Method 2: Classic pipeline
```bash
# ESP32-CSI UDP (default port 5566):
./scripts/run_realtime.sh --source esp32

# Or with Nexmon (requires an interface in monitor mode)
sudo ./scripts/run_realtime.sh --source nexmon
```

## 🎯 Model Training & Continuous Learning

### Train your own detection model
```bash
# Basic training with the current configuration
./train_wifi3d.sh

# Quick training with continuous learning enabled
./train_wifi3d.sh --quick 
--continuous

# Train against a specific device source
./train_wifi3d.sh --source esp32 --device cuda --epochs 200

# Enable continuous learning (the model refines itself automatically)
./train_wifi3d.sh --continuous --auto-improve

# Advanced training with custom parameters
./train_wifi3d.sh \
    --source nexmon \
    --device cuda \
    --epochs 500 \
    --batch-size 64 \
    --lr 0.0005 \
    --continuous \
    --auto-improve
```

### Continuous learning features
- **Live model refinement**: the system learns automatically from new detections.
- **Adaptive training**: the model keeps updating based on detection confidence and user feedback.
- **Self-improvement**: person detection gets better over time.
- **Background learning**: training continues without interrupting visualization.

## 📊 System Architecture & Features

### Core components
1. **CSI data acquisition**
   - ESP32-CSI over UDP (recommended for beginners)
   - Nexmon firmware on Broadcom chips (advanced users)
   - Real-time extraction of CSI amplitude and phase

2. **Advanced detection pipeline**
   - Convolutional neural network for person detection
   - Real-time skeleton estimation and tracking
   - Multi-person identification and re-identification (ReID)
   - Adaptive motion-threshold tuning

3. **3D visualization system**
   - Three.js web renderer with a polished UI
   - Real-time 3D skeleton visualization
   - Animated CSI noise patterns on the ground plane
   - Interactive camera controls and HUD overlays

4. **Machine-learning features**
   - Continuous learning at runtime
   - Automatic, feedback-driven model refinement
   - Adaptive detection thresholds
   - Cross-session person re-identification

### Real-time flow
```mermaid
flowchart TD
    A[CSI source] --> B[Signal processing]
    B --> C[Neural-network detection]
    C --> D[3D visualization]
    
    A1[ESP32/Nexmon] --> B1[Amplitude/phase]
    B1 --> C1[CNN classifier]
    C1 --> D1[Three.js web UI]
    
    A2[UDP/PCap] --> B2[Motion detection]
    B2 --> C2[Person tracking]
    C2 --> D2[Skeleton rendering]
    
    A3[YAML config] --> B3[Adaptive thresholds]
    B3 --> C3[ReID system]
    C3 --> D3[Activity logging]
```

## 🛠️ Installation & Setup

### Prerequisites
- Linux (Ubuntu 18.04+ recommended)
- Python 3.8+
- A WiFi adapter with monitor-mode support (for Nexmon)
- An ESP32 board with CSI firmware (for ESP32 mode)
- A CUDA-capable GPU (optional; speeds up training)

### Full installation
```bash
# Clone the repository
git clone https://github.com/MaliosDark/wifi-3d-fusion.git
cd wifi-3d-fusion

# Install all dependencies and set up the environment
bash scripts/install_all.sh

# Activate the Python virtual environment
source venv/bin/activate

# Verify the installation
python -c "import torch, numpy, yaml; print('✅ All dependencies installed')"
```

### Hardware setup

> **Note**: see [WiFi Adapter, Driver, and Monitor Mode Setup (RTL8812AU Example)] for detailed RTL8812AU configuration instructions.

#### Option A: ESP32-CSI setup
1. **Flash the CSI firmware to the ESP32**
   ```bash
   # Download the ESP32-CSI-Tool firmware
   # Flash it to the ESP32 with esptool or the Arduino IDE
   ```

2. 
**Configure the ESP32**
   - Set the WiFi network and password
   - Configure the UDP destination IP (your PC's IP)
   - Set the UDP port to 5566 (or change it in `configs/fusion.yaml`)

3. **Update the config file**
   ```yaml
   # configs/fusion.yaml
   source: esp32
   esp32_udp_port: 5566
   ```

#### Option B: Nexmon setup
1. **Install the Nexmon firmware**
   ```bash
   # For a Raspberry Pi 4 with the bcm43455c0 chip
   git clone https://github.com/seemoo-lab/nexmon_csi.git
   cd nexmon_csi
   # Follow the install instructions for your device
   ```

2. **Enable monitor mode**
   ```bash
   sudo ip link set wlan0 down
   sudo iw dev wlan0 set type monitor
   sudo ip link set wlan0 up
   ```

3. **Update the config file**
   ```yaml
   # configs/fusion.yaml  
   source: nexmon
   nexmon_iface: wlan0
   ```

## 🎮 Running the System

### Web-based visualization (recommended)
```bash
# Start the web server with live visualization
source venv/bin/activate
python run_js_visualizer.py

# Optional: pick a device source
python run_js_visualizer.py --source esp32
python run_js_visualizer.py --source nexmon

# Open the web interface
# In your browser: http://localhost:5000
```

### Classic terminal mode
```bash
# Run with the default configuration
./run_wifi3d.sh

# Run with a specific device source
./run_wifi3d.sh esp32
./run_wifi3d.sh nexmon

# For Nexmon, enable custom channel hopping
sudo IFACE=mon0 HOP_CHANNELS=1,6,11 python run_realtime_hop.py
```

### Training mode
```bash
# First run the system to collect training data
python run_js_visualizer.py

# Train the model on the collected data
bash train_wifi3d.sh --epochs 100 --device cuda

# Train with continuous learning enabled
bash train_wifi3d.sh --continuous --auto-improve

# Resume training from a checkpoint
bash train_wifi3d.sh --resume env/weights/checkpoint_epoch_50.pth
```

## 📋 Configuration

### Main config file: `configs/fusion.yaml`
```yaml
# CSI data source
source: esp32                    # esp32, nexmon, or dummy data
esp32_udp_port: 5566            # UDP port used by the ESP32
nexmon_iface: wlan0             # network interface used by Nexmon

# Detection parameters  
movement_threshold: 0.002        # motion-detection sensitivity
debounce_seconds: 0.3           # minimum time between detections
win_seconds: 3.0                # CSI analysis window size

# 3D visualization
scene_bounds: [[-2,2], [-2,2], [0,3]]  # 3D scene bounds
rf_res: 64                      # RF field resolution
alpha: 0.6                  
    # visualization transparency

# Machine learning
enable_reid: true               # enable person re-identification
reid:
  checkpoint: env/weights/who_reid_best.pth
  seq_secs: 2.0                # sequence length used for ReID
  fps: 20.0                    # processing frame rate

# Advanced features  
enable_pose3d: false            # 3D pose estimation (experimental)
enable_nerf2: false             # neural RF fields (experimental)
```

## 🔧 Advanced Features

### Continuous learning system
The system ships with an advanced continuous-learning pipeline that:

1. **Monitors detection confidence in real time**
2. **Automatically collects training samples from high-confidence detections**
3. **Updates the model in the background without disturbing visualization**
4. **Adapts detection thresholds to the characteristics of the environment**
5. **Improves person re-identification over time**

### Model training pipeline
```python
# Example: custom training script
from train_model import WiFiTrainer, TrainingConfig

# Configure training
config = TrainingConfig(
    batch_size=64,
    learning_rate=0.001,  
    epochs=200,
    continuous_learning=True,
    auto_improvement=True
)

# Initialize the trainer
trainer = WiFiTrainer('configs/fusion.yaml', args)

# Start training with continuous learning
trainer.train()
```

### Real-time performance optimizations
- **Multi-threaded processing**: separate threads for capture, processing, and visualization
- **Adaptive frame rate**: processing speed adjusts automatically to system load
- **Memory management**: efficient CSI buffers for long-running sessions
- **GPU acceleration**: CUDA support for neural-network inference and training

## 🌐 Web Interface Features

<p align="center">
  <img src="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_b18ccb47596d.png" width="950" alt="WiFi-3D-Fusion — live web dashboard"/>
</p>

### Responsive dashboard
- **Live CSI metrics**: signal variance, amplitude, activity level
- **Person detection status**: count, confidence, position
- **Skeleton visualization**: animated 3D skeletons with joint tracking
- **System performance**: FPS, memory usage, processing time
- **Activity log**: timestamped live event records

### Interactive 3D scene
- **Manual camera controls**: orbit, zoom, and pan with the mouse
- **Ground noise visualization**: animated circular ripple patterns
- **Skeleton rendering**: full 3D human skeletons for detected persons
- **Live updates**: data streamed at 10 FPS

<p align="center">
  <img src="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_79a58470ee7b.png" width="800" alt="WiFi-3D-Fusion — 3D scene with skeleton rendering and ground noise patterns"/>
</p>

### Dashboard component panels

<p align="center">
  <img src="https://oss.gittoolsai.com/images/MaliosDark_wifi-3d-fusion_readme_21641822a8ff.png" width="400" alt="WiFi-3D-Fusion — live system performance metrics panel"/> <img 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_48b1aa73e3ef.png\" width=\"400\" alt=\"WiFi-3D-Fusion — CSI信号分析与检测状态面板\"\u002F>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cb>系统性能指标\u003C\u002Fb>（左）和\u003Cb>CSI信号分析\u003C\u002Fb>（右）\n\u003C\u002Fp>\n\n### HUD 信息面板\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_9f8b6320cccd.png\" width=\"600\" alt=\"WiFi-3D-Fusion — 实时HUD信息面板\"\u002F>\n\u003C\u002Fp>\n\n\n### 验证模型性能\n```bash\n# 评估训练好的模型\npython tools\u002Feval_reid.py --checkpoint env\u002Fweights\u002Fbest_model.pth\n\n# 录制测试序列\npython tools\u002Frecord_reid_sequences.py --duration 60\n\n# 模拟 CSI 数据用于测试\npython tools\u002Fsimulate_csi.py --samples 1000\n```\n\n## 📊 数据收集与管理\n\n### CSI 数据存储\n```\nenv\u002F\n├── csi_logs\u002F              # 原始 CSI 数据文件 (*.pkl)\n├── logs\u002F                  # 系统和训练日志  \n├── weights\u002F               # 训练好的模型检查点\n└── visualization\u002F         # Web 界面文件\n    ├── index.html         # 主仪表盘\n    ├── js\u002Fapp.js         # 可视化逻辑\n    └── css\u002Fstyle.css     # UI 样式\n```\n\n### 训练数据组织\n```\ndata\u002F\n├── reid\u002F                  # 行人重识别数据\n│   ├── person_000\u002F       # 单个行人的序列\n│   ├── person_001\u002F\n│   └── ...\n├── splits\u002F               # 训练\u002F验证划分\n│   ├── train.txt\n│   ├── val.txt  \n│   └── gallery.txt\n└── logs\u002F                 # 训练历史和指标\n```\n\n## 🚨 故障排除\n\n### 常见问题\n\n#### 1. 未收到 CSI 数据\n```bash\n# 检查 ESP32 连接\nping \u003CESP32_IP>\n\n# 验证 UDP 端口\nnetstat -ulnp | grep 5566\n\n# 使用模拟数据测试\npython run_js_visualizer.py --source dummy\n```\n\n#### 2. 监控模式问题（Nexmon）\n```bash\n# 重置网卡接口\nsudo ip link set wlan0 down\nsudo iw dev wlan0 set type managed  \nsudo ip link set wlan0 up\n\n# 重新启用监控模式\nsudo ip link set wlan0 down\nsudo iw dev wlan0 set type monitor\nsudo ip link set wlan0 up\n```\n\n#### 3. 
训练失败\n```bash\n# 检查 GPU 是否可用\npython -c \"import torch; print(torch.cuda.is_available())\"\n\n# 在内存有限的情况下降低批量大小\nbash train_wifi3d.sh --batch-size 16\n\n# 使用 CPU 训练\nbash train_wifi3d.sh --device cpu\n```\n\n#### 4. Web 界面问题\n```bash\n# 检查服务器是否运行\ncurl http:\u002F\u002Flocalhost:5000\u002Fdata\n\n# 清除浏览器缓存并刷新页面\n# 检查浏览器控制台是否有 JavaScript 错误（按 F12）\n\n# 重启服务器\npkill -f run_js_visualizer.py\npython run_js_visualizer.py\n```\n\n### 调试日志\n启用详细日志以进行故障排除：\n```bash\n# 设置调试模式\nexport WIFI3D_DEBUG=1\n\n# 以详细输出运行\npython run_js_visualizer.py --verbose\n\n# 查看日志文件\ntail -f env\u002Flogs\u002Fwifi3d_*.log\n```\n---\n\n## 可选桥接模块（默认禁用）\n\n### 1) Person-in-WiFi-3D（3D 姿态）\n\n* 仓库：`third_party\u002FPerson-in-WiFi-3D-repo`\n* 在 `configs\u002Ffusion.yaml` 中启用：`enable_pose3d: true`\n* 将兼容的检查点放置于 `env\u002Fweights\u002Fpwifi3d.pth`。\n* 按照仓库预期的结构准备测试数据（`data\u002Fwifipose\u002Ftest_data\u002F...`），然后运行：\n\n  ```bash\n  python -m src.bridges.pwifi3d_runner \\\n    third_party\u002FPerson-in-WiFi-3D-repo config\u002Fwifi\u002Fpetr_wifi.py env\u002Fweights\u002Fpwifi3d.pth\n  ```\n\n  *(我们调用仓库内 OpenMMLab 的 `tools\u002Ftest.py`。)*\n\n### 2) NeRF²（RF 场）\n\n* 仓库：`third_party\u002FNeRF2`\n* 在 `configs\u002Ffusion.yaml` 中启用：`enable_nerf2: true`\n* 训练：\n\n  ```bash\n  python -m src.bridges.nerf2_runner\n  ```\n\n### 3) 3D Wi-Fi 扫描仪（RSSI 体积）\n\n* 仓库：`third_party\u002F3D_wifi_scanner`\n* 使用该工具生成体积 RSSI 数据集；如有需要，可将其集成到您自己的融合管道中。\n\n---\n\n## 配置\n\n编辑 `configs\u002Ffusion.yaml`：\n\n* `source: esp32 | nexmon`\n* `esp32_udp_port`, `nexmon_iface` 等。\n* 检测器阈值：`movement_threshold`, `win_seconds`, `debounce_seconds`。\n\n---\n\n## Docker\n\n```bash\ndocker compose build\ndocker compose run --rm fusion\n```\n\n---\n\n## 注意事项\n\n* 对于 **Nexmon**，您需要 `tcpdump` 权限。Dockerfile 中已包含该工具；在主机上，请安装并以 root 或 sudo 权限运行。\n* 对于 **Person-in-WiFi-3D**，请遵循该仓库的要求（PyTorch、MMCV\u002FMMDet）。我们的 `scripts\u002Finstall_all.sh` 脚本会安装兼容版本。\n* 对于 **ESP32-CSI**，支持与常见分叉兼容的 UDP JSON 数据包。\n\n---\n\n### 使用方法（超简）\n\n### 自适应监听模式流水线（推荐用于 
RTL8812AU、Nexmon 或任何监听模式接口）\n```bash\nsudo -E env PATH=\"$PWD\u002Fvenv\u002Fbin:$PATH\" IFACE=mon0 HOP_CHANNELS=1,6,11 python3 run_realtime_hop.py\n```\n这将启动上述自学习流水线。\n\n\n如果您想使用 Docker 方式：\n\n```bash\ndocker compose build\ndocker compose run --rm fusion\n```\n---\n\n## 🔧 系统要求与依赖\n\n* **操作系统:** Ubuntu 22.04 及以上版本（已测试内核 6.14）\n* **Python:** 3.12（由 `scripts\u002Finstall_all.sh` 脚本管理的 venv 环境）\n* **GPU:** 可选（仅用于 Pose3D\u002FNeRF² 桥接）\n* **自动安装的软件包：**\n\n  * 基础包：`numpy`, `pyyaml`, `loguru`, `tqdm`, `open3d`, `opencv-python`, `einops`, `watchdog`, `pyzmq`, `matplotlib`, `csiread==1.4.1`\n  * 可选 Pose3D 包：`torch` + `torchvision`（cu118\u002Fcu121 或 CPU 版本）、`openmim`, `mmengine`, `mmcv`, `mmdet`\n* **捕获用系统工具（可选）：** `tcpdump`, `tshark\u002Fwireshark`, `aircrack-ng`, `iw`\n\n> 安装脚本会将 Torch\u002F`openmim` 保持在 **默认 PyPI** 上（避免 PyTorch 索引污染），并将 `csiread` 锁定为与 Python 3.12 兼容的 wheel 包。\n\n---\n\n## 🛠️ WiFi 适配器、驱动程序及监听模式设置（RTL8812AU 示例）\n\n### 支持的适配器\n本项目基于一款 **双频 USB WiFi 适配器，搭载 Realtek RTL8812AU 芯片组** 开发，该适配器同时支持 2.4 GHz 和 5 GHz 频段、监听模式以及数据包注入功能。这款适配器广泛应用于 WiFi 安全研究领域，并且与 Ubuntu、Kali、Parrot 等 Linux 发行版兼容。其他 Nexmon 兼容适配器或配备 CSI 固件的 ESP32 也均受支持。\n\n### 驱动程序安装（RTL8812AU）\n\n默认内核驱动可能无法完全支持监听模式。为获得最佳效果，建议从 [aircrack-ng\u002Frtl8812au](https:\u002F\u002Fgithub.com\u002Faircrack-ng\u002Frtl8812au) 仓库安装最新驱动：\n\n```bash\nsudo apt update\nsudo apt install dkms git build-essential\ngit clone https:\u002F\u002Fgithub.com\u002Faircrack-ng\u002Frtl8812au.git\ncd rtl8812au\nsudo make dkms_install\n```\n\n这将为当前内核构建并安装驱动程序，从而实现可靠的监听模式和数据包捕获。\n\n### 启用监听模式\n\n安装驱动后，连接您的 RTL8812AU 适配器并确认其接口名称（例如 `wlx...`）：\n\n```bash\niw dev\niwconfig\n```\n\n要启用监听模式并创建 `mon0` 接口：\n\n```bash\nsudo airmon-ng check kill\nsudo airmon-ng start \u003Cyour-interface>\n# 或者手动操作：\nsudo ip link set \u003Cyour-interface> down\nsudo iw dev \u003Cyour-interface> set type monitor\nsudo ip link set \u003Cyour-interface> up\n```\n\n验证监听模式是否生效：\n\n```bash\niwconfig\n```\n您应能看到 `mon0` 或您选择的接口显示“Mode:Monitor”。\n\n### 
验证数据包捕获\n\n为确认您的接口正在监听模式下捕获 WiFi 数据包：\n\n```bash\nsudo airodump-ng mon0\nsudo tcpdump -i mon0\n```\n\n如果一切正常，您应该能够看到网络和数据包。若未显示，请确保周围环境中存在活跃的 WiFi 流量。\n\n### 其他工具\n\n为了调试和生成流量，您还可以安装以下工具：\n\n```bash\nsudo apt install aircrack-ng tcpdump tshark\n```\n\n---\n\n## 🧑‍💻 运行监听模式下的实时流水线\n\n### 前提条件\n1. **WiFi 适配器处于监听模式**（参见上述设置说明）\n2. **虚拟环境已激活**\n3. **所有依赖项已安装**\n\n### 步骤执行\n\n1. **激活虚拟环境：**\n```bash\nsource venv\u002Fbin\u002Factivate\n```\n\n2. **设置监听接口（如尚未完成）：**\n```bash\nsudo bash scripts\u002Fsetup_monitor.sh\n```\n\n3. **验证监听模式是否正常工作：**\n```bash\nsudo iwconfig mon0\nsudo tshark -i mon0 -c 5\n```\n\n4. **运行实时流水线：**\n```bash\n# 基本监听模式运行\nsudo -E env PATH=\"$PWD\u002Fvenv\u002Fbin:$PATH\" IFACE=mon0 python run_js_visualizer.py --source monitor\n\n# 高级：多信道跳转\nsudo -E env PATH=\"$PWD\u002Fvenv\u002Fbin:$PATH\" IFACE=mon0 HOP_CHANNELS=1,6,11 python run_realtime_hop.py\n\n# 带有监听模式的 Web 界面\nsudo python run_js_visualizer.py --source monitor\n```\n\n5. **打开 Web 界面：**\n```bash\n# 在浏览器中访问：\nhttp:\u002F\u002Flocalhost:5000\n```\n\n### 功能说明：\n- ✅ **实时 CSI\u002FRSSI 捕获**：从监听接口进行实时数据包分析\n- ✅ **自动训练**：持续学习并优化模型\n- ✅ **3D 可视化**：基于 Three.js 的 Web 查看器，支持骨骼渲染\n- ✅ **信道扫描**：自适应地在活跃的 WiFi 信道间切换\n- ✅ **人员检测**：实时跟踪与识别人员\n- ✅ **日志记录**：完整的调试与状态信息\n\n\n---\n\n## 🧑‍💻 运行实时自适应 Python 流水线\n\n当您的适配器处于监听模式并开始捕获数据包后，运行以下命令：\n\n```bash\nsudo -E env PATH=\"$PWD\u002Fvenv\u002Fbin:$PATH\" IFACE=mon0 HOP_CHANNELS=1,6,11 python3 run_realtime_hop.py\n```\n\n这将：\n- 开始实时 CSI\u002FRSSI 捕获与分析\n- 自动训练检测模型\n- 启动 Open3D 查看器（稳定可靠，不会出现空白画面）\n- 自适应扫描并聚焦于最活跃的 WiFi 信道\n- 以英文展示检测结果及所有调试\u002F状态信息\n\n---\n\n## 🛡️ 故障排除\n\n* **Open3D 窗口空白**\n  确保数据正在传输：\n\n  * ESP32: `sudo tcpdump -n -i any udp port 5566`\n  * Nexmon: `sudo tcpdump -i wlan0 -s 0 -vv -c 20`\n  * 监控模式: `sudo tshark -I -i mon0 -a duration:5 -T fields -e radiotap.dbm_antsignal | head`\n    如有需要，安装 GL 库：`sudo apt-get install -y libgl1`\n\n* **未找到 `openmim` \u002F Torch 索引问题**\n  使用提供的 `install_all.sh` 脚本（仅从 PyTorch 索引安装 Torch，`openmim` 从 PyPI 安装）。\n  对于 
Pose3D：\n  `WITH_POSE=true TORCH_CUDA=cu121 bash scripts\u002Finstall_all.sh`\n\n* **`csiread` wheel 版本不匹配**\n  Python 3.12 → 锁定到 `csiread==1.4.1`（已在依赖流程中）。\n\n* **监控接口无法捕获**\n  终止网络管理器，重新创建 `mon0`，并固定信道：\n  `sudo airmon-ng check kill && bash scripts\u002Fsetup_monitor.sh`\n\n---\n\n\u003C!-- ====================== 我为何构建此项目 ====================== -->\n\u003Ch2>🌌 为什么构建 WiFi-3D-Fusion\u003C\u002Fh2>\n\u003Cp>\n  我构建了 \u003Cstrong>WiFi-3D-Fusion\u003C\u002Fstrong>，因为我无法忍受沉默。\u003Cbr\u002F>\n  这个世界充满了看不见的信号，每秒钟都有海量信息穿过我们身边，但大多数人甚至从未察觉。研究人员发表论文，公司许下承诺，然而几乎没有人揭示真相。\n\u003C\u002Fp>\n\u003Cp>\n  我想揭开这层面纱。\n\u003C\u002Fp>\n\u003Cp>\n  该项目不仅仅是一套软件。它证明了我们所说的“空气”其实充满着数据，那些无形的信息可以被塑造成形态、运动和存在感。\u003Cbr\u002F>\n  这不是关于监视，也不是关于控制。\u003Cbr\u002F>\n  而是展示技术可以在不侵犯隐私的情况下揭示信息，在不监控的前提下感知事物，在没有束缚的情况下保护人们。\n\u003C\u002Fp>\n\u003Cp>\n  为什么？因为在某些地方，摄像头会失效——黑暗的房间、燃烧的大楼、坍塌的隧道、深埋地下的空间。而在这些地方，像这样的系统可能意味着生与死的差别。\n\u003C\u002Fp>\n\u003Cp>\n  我进行实验，是因为我拒绝接受“不可能”。\u003Cbr\u002F>\n  我建造，是因为这个世界需要看到那些它否认存在的东西。\u003Cbr\u002F>\n  WiFi-3D-Fusion 不是一个产品，而是一束在黑暗中闪耀的信号弹。\n\u003C\u002Fp>\n\n> **局限性**：WiFi 感知面临信号干扰和分辨率限制等挑战（2.4GHz：约 12.5cm，5GHz：约 6cm）。这是一个研究项目，未经验证不得用于关键应用。\n\n\n\n## 🔏 法律\u002F研究声明\n\n\u003C!-- ====================== 伦理与隐私 ====================== -->\n\u003Ch2>伦理与隐私\u003C\u002Fh2>\n\u003Cul>\n  \u003Cli>仅在您拥有或控制的网络和环境中，经过明确授权后方可操作。\u003C\u002Fli>\n  \u003Cli>尽可能使用非识别性的感知模式；避免存储个人数据。\u003C\u002Fli>\n  \u003Cli>在公共场合进行现场演示时，请提前告知参与者。\u003C\u002Fli>\n  \u003Cli>始终遵守当地法律法规。\u003C\u002Fli>\n\u003C\u002Ful>\n\n\u003C!-- ====================== 免责声明（ASCII 墙） ====================== -->\n\u003Ch2>免责声明\u003C\u002Fh2>\n\u003Cpre>\n╔══════════════════════════════════════════════════════════════════════════╗\n║                              🔏 免责声明                               ║\n╚══════════════════════════════════════════════════════════════════════════╝\n\n本项目 WiFi-3D-Fusion 仅用于研究、教育和实验目的。\n\n您必须仅在获得明确许可和授权的网络、设备和环境中使用本项目。\n\n────────────────────────────────────────────────────────────────────────────\n\n⚠️  
法律提示：\n- 未经授权的使用可能违反当地法律、隐私法规以及窃听相关法律。\n- 作者绝不纵容或支持任何形式的监视、间谍活动或侵犯隐私的行为。\n- 您需对合法合规及道德操作承担全部责任。\n\n────────────────────────────────────────────────────────────────────────────\n\n⚠️  责任限制：\n- 作者（MaliosDark）不对因滥用本软件、非法行为或由此产生的任何损害承担责任。\n- 下载、编译或执行本项目即表示您完全同意遵守所有适用法律，并自行承担相应责任。\n\n────────────────────────────────────────────────────────────────────────────\n\n✔️  安全使用建议：\n- 仅在您自己的 Wi‑Fi 网络或经授权的测试环境中使用。\n- 在公开场合展示时，优先选择演示或模拟模式。\n- 在实际环境中操作时，请务必告知相关人员。\n- 请勿尝试对他人进行隐蔽监控。\n\n────────────────────────────────────────────────────────────────────────────\n\n📌 使用 WiFi-3D-Fusion 即表示您确认：\n1) 您已充分理解本声明内容。\n2) 您将对使用结果承担全部责任。\n3) 作者免于任何法律诉讼或损害赔偿。\n\n╔══════════════════════════════════════════════════════════════════════════╗\n║         免责声明结束 – 请负责任地使用，否则请勿使用         ║\n╚══════════════════════════════════════════════════════════════════════════╝\n\u003C\u002Fpre>\n\n---\n\n## 星标历史\n\n\u003Ca href=\"https:\u002F\u002Fwww.star-history.com\u002F#MaliosDark\u002Fwifi-3d-fusion&Date\">\n \u003Cpicture>\n   \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_2b32ce59fda4.png\" \u002F>\n   \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_2b32ce59fda4.png\" \u002F>\n   \u003Cimg alt=\"星标历史图表\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_readme_2b32ce59fda4.png\" \u002F>\n \u003C\u002Fpicture>\n\u003C\u002Fa>\n\n\n\u003C!-- ========= 全宽动画访客计数器（Moe 计数器） ========= -->\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fcount.getloli.com\u002F@MaliosDark.wifi-3d-fusion?theme=moebooru\" width=\"100%\" alt=\"访客 • 动画计数器\"\u002F>\n\u003C\u002Fp>\n\n## 📚 引用\u002F上游项目\n\n1. 
[基于 WiFi 的端到端多人 3D 姿态估计（CVPR 2024）](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FYan_Person-in-WiFi_3D_End-to-End_Multi-Person_3D_Pose_Estimation_with_Wi-Fi_CVPR_2024_paper.pdf)  \n2. [GitHub - aiotgroup\u002FPerson-in-WiFi-3D-repo](https:\u002F\u002Fgithub.com\u002Faiotgroup\u002FPerson-in-WiFi-3D-repo)  \n3. [NeRF2：神经无线电频率辐射场（MobiCom 2023）](https:\u002F\u002Fweb.comp.polyu.edu.hk\u002Fcsyanglei\u002Fdata\u002Ffiles\u002Fnerf2-mobicom23.pdf)  \n4. [GitHub - XPengZhao\u002FNeRF2](https:\u002F\u002Fgithub.com\u002FXPengZhao\u002FNeRF2)  \n5. [GitHub - Neumi\u002F3D_wifi_scanner](https:\u002F\u002Fgithub.com\u002FNeumi\u002F3D_wifi_scanner)  \n6. [Hackaday - 用改装的 3D 打印机可视化 WiFi](https:\u002F\u002Fhackaday.com\u002F2021\u002F11\u002F22\u002Fvisualizing-wifi-with-a-converted-3d-printer\u002F)  \n7. [GitHub - StevenMHernandez\u002FESP32-CSI-Tool](https:\u002F\u002Fgithub.com\u002FStevenMHernandez\u002FESP32-CSI-Tool)  \n8. [GitHub - citysu\u002Fcsiread](https:\u002F\u002Fgithub.com\u002Fcitysu\u002Fcsiread)  \n\n\n---","# WiFi-3D-Fusion 快速上手指南\n\nWiFi-3D-Fusion 是一个基于本地 Wi-Fi 信道状态信息（CSI）的实时 3D 运动感知与可视化工具。它支持通过 ESP32 或 Nexmon 固件采集数据，利用深度学习模型实现实时人员检测、骨架估计及 3D 可视化。\n\n## 1. 环境准备\n\n### 系统要求\n- **操作系统**: Linux (推荐 Ubuntu 22.04+)\n- **Python 版本**: 3.8+\n- **硬件设备** (二选一):\n  - **方案 A (推荐新手)**: 刷写了 CSI 固件的 ESP32 开发板。\n  - **方案 B (进阶)**: 支持监控模式 (Monitor Mode) 的 USB Wi-Fi 适配器 (如搭载 Realtek RTL8812AU 芯片的网卡，需配合 Nexmon 固件)。\n- **可选加速**: 支持 CUDA 的 NVIDIA GPU (用于加速模型训练)。\n\n### 前置依赖\n确保系统已安装基础构建工具：\n```bash\nsudo apt update\nsudo apt install -y git python3-pip python3-venv build-essential\n# 若使用 Nexmon 方案，还需安装 tcpdump 和 libpcap-dev\nsudo apt install -y tcpdump libpcap-dev\n```\n\n## 2. 
安装步骤\n\n### 克隆项目\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FMaliosDark\u002Fwifi-3d-fusion.git\ncd wifi-3d-fusion\n```\n\n### 一键安装依赖\n项目提供了自动化脚本安装所有 Python 依赖并配置虚拟环境。\n> **国内加速提示**: 如果下载依赖缓慢，可先配置 pip 国内镜像源（如清华源）：\n> `pip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n执行安装脚本：\n```bash\nbash scripts\u002Finstall_all.sh\n```\n\n### 激活环境\n安装完成后，激活 Python 虚拟环境：\n```bash\nsource venv\u002Fbin\u002Factivate\n```\n\n### 验证安装\n```bash\npython -c \"import torch, numpy, yaml; print('✅ 所有依赖安装成功')\"\n```\n\n## 3. 基本使用\n\n### 硬件配置简述\n在运行前，请确保已完成以下任一硬件配置：\n- **ESP32 模式**: 将 ESP32 刷入 CSI 固件，配置其连接 Wi-Fi，并将 UDP 目标 IP 设为本机 IP，端口设为 `5566`。\n- **Nexmon 模式**: 确保网卡已进入监控模式 (Monitor Mode)，并能通过 `tcpdump` 抓取数据包。\n\n### 启动方式一：Web 实时可视化 (推荐)\n此模式启动基于浏览器的 3D 可视化界面，交互性更好。\n\n```bash\n# 确保已在虚拟环境中\nsource venv\u002Fbin\u002Factivate\n\n# 启动 Web 可视化服务\npython run_js_visualizer.py\n```\n启动后，打开浏览器访问 `http:\u002F\u002Flocalhost:5000` 即可看到实时 3D 运动感知画面。\n\n### 启动方式二：传统命令行管道\n根据数据来源选择对应的启动命令：\n\n**使用 ESP32 数据源 (默认 UDP 端口 5566):**\n```bash\n.\u002Fscripts\u002Frun_realtime.sh --source esp32\n```\n\n**使用 Nexmon 数据源 (需要 sudo 权限抓取数据包):**\n```bash\nsudo .\u002Fscripts\u002Frun_realtime.sh --source nexmon\n```\n\n### 简易模型训练 (可选)\n如果需要针对特定环境优化检测模型，可运行以下命令进行快速训练：\n```bash\n# 使用当前配置进行基础训练\n.\u002Ftrain_wifi3d.sh\n\n# 开启持续学习模式（模型会根据新数据自动优化）\n.\u002Ftrain_wifi3d.sh --quick --continuous\n```","某智慧养老社区正在试点一套非侵入式老人跌倒检测与行为分析系统，旨在保护长者隐私的同时确保突发状况能被即时发现。\n\n### 没有 wifi-3d-fusion 时\n- **隐私侵犯风险高**：传统方案依赖摄像头监控，老人在卧室、浴室等私密空间会产生强烈的被监视感，导致抵触情绪。\n- **环境适应性差**：视觉方案受光线影响极大，夜间或强光逆光下识别率骤降，且无法穿透遮挡物（如床单、家具）。\n- **硬件部署成本高**：需要安装多个高清摄像头及专用边缘计算盒子，布线复杂，维护成本高昂。\n- **数据维度单一**：仅能获取二维平面坐标，难以精准还原人体真实的三维姿态，容易将“弯腰捡东西”误判为“跌倒”。\n\n### 使用 wifi-3d-fusion 后\n- **实现无感隐私保护**：利用现有的 WiFi 信号和 CSI 数据进行感知，无需任何光学镜头，彻底消除隐私顾虑，让老人安心生活。\n- **全天候稳定运行**：基于无线信号的特性，完全不受光照变化、烟雾或轻微遮挡影响，在黑暗环境中依然能精准捕捉动作。\n- **低成本快速落地**：直接复用社区已有的 WiFi 基础设施配合 ESP32 设备，无需大规模改造布线，大幅降低硬件与部署门槛。\n- **高精度三维重构**：通过深度学习融合技术，实时输出精确的 3D 
人体骨骼姿态，有效区分跌倒、坐下、躺卧等细微动作，显著降低误报率。\n\nwifi-3d-fusion 通过将无形的 WiFi 信号转化为可视化的三维动作数据，在零隐私泄露风险的前提下，为智慧空间提供了全天候、高精度的感知能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaliosDark_wifi-3d-fusion_b18ccb47.png","MaliosDark","Andryu Schittone","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FMaliosDark_a408b087.jpg",null,"aschittone","https:\u002F\u002Fmaliosdark.com","https:\u002F\u002Fgithub.com\u002FMaliosDark",[82,86,90,94],{"name":83,"color":84,"percentage":85},"Python","#3572A5",94.4,{"name":87,"color":88,"percentage":89},"Shell","#89e051",5.5,{"name":91,"color":92,"percentage":93},"Dockerfile","#384d54",0.1,{"name":95,"color":96,"percentage":97},"Makefile","#427819",0,1357,151,"2026-04-18T11:09:12","NOASSERTION",5,"Linux","可选（用于加速训练），需支持 CUDA 的 NVIDIA GPU，具体型号和显存未说明","未说明",{"notes":107,"python":108,"dependencies":109},"1. 推荐使用 Ubuntu 22.04+ 系统。2. 硬件方面需要支持监控模式的 WiFi 适配器（如基于 Realtek RTL8812AU 芯片组的网卡以使用 Nexmon）或刷写了 CSI 固件的 ESP32 开发板。3. 项目涉及底层网络包捕获（tcpdump）和 WiFi 监控模式配置，可能需要 root 权限。4. 可视化部分依赖 Web 浏览器（通过 Three.js）。5. 
训练和推理流程支持持续学习功能。","3.8+",[110,111,112,113,114,115,116,117,118,119],"PyTorch","NumPy","SciPy","OpenCV","Open3D","Three.js","Scapy","Nexmon","OpenMMLab (mmcv, mmdet)","Docker",[15,121],"其他","2026-03-27T02:49:30.150509","2026-04-19T06:04:22.552098",[],[126],{"id":127,"version":128,"summary_zh":129,"released_at":130},333890,"Release","# WiFi-3D-Fusion v3.0.0 发行说明\n\n## 概述\nWiFi-3D-Fusion v3.0 为我们基于 WiFi CSI 的监控与可视化系统带来了显著的改进，包括增强的实时 3D 可视化、强大的人员检测能力以及持续学习功能。本次发布重点提升了系统的稳定性、性能，并优化了专业化的用户界面。\n\n## 主要特性\n\n### 基于 Web 的可视化\n- 使用 Three.js 和 WebGL 渲染全新设计的可视化界面\n- 具有专业 WiFi\u002FCSI 主题的仪表盘，配备实时分析面板\n- 提供 OrbitControls 手动相机控制，便于场景探索\n- 对检测到的人员进行 3D 骨骼网格渲染\n- 动画化的圆形地面图案，用于表示 WiFi 信号噪声\n\n### 增强的人员检测\n- 改进的基于 ReID 的人员检测与跟踪\n- 实时显示完整的 3D 骨骼可视化\n- 自动人员注册及模型持续优化\n- GPU 加速的训练流水线，并支持检查点管理\n\n### 系统稳定性\n- 基于看门狗的死锁保护与自动恢复机制\n- 多线程数据处理，具备健壮的错误处理能力\n- 套接字复用与端口自动递增，确保 HTTP 服务器的可靠性\n- 完善的日志系统，包含性能指标记录\n\n### 训练改进\n- 新增 `train_model.py` 脚本，简化模型训练流程\n- 提供 `train_wifi3d.sh` 启动脚本，可自动检测设备\n- 持续学习系统，可在运行过程中不断优化模型\n- 通过 `fusion.yaml` 配置文件实现训练参数的灵活配置\n\n### 文档更新\n- 详尽的 README 文件，包含基于 Mermaid 的流程图\n- 常用操作的快速命令参考\n- 详细的项目状态跟踪\n- Web 界面截图及可视化指南\n\n## 技术优化\n- 高效的 NumPy 数组序列化，用于实时数据传输\n- 改进的 CSI 帧处理，支持多通道融合\n- 先进的 HUD 叠加系统，实现像素级精准定位\n- 实时活动日志，记录检测事件与系统状态\n- 支持多种 CSI 数据源（Monitor Mode、Nexmon、ESP32、Dummy）\n\n## Bug 修复\n- 修复骨骼生成中的维度问题\n- 解决长时间运行时的可视化卡顿问题\n- 纠正 HTTP 服务器端口冲突及套接字复用问题\n- 修正数据处理流水线中的类型错误\n- 解决 JavaScript 可视化更新时机的问题\n\n## 开发变更\n- 代码结构模块化，提升可维护性\n- 全面增强代码库中的错误处理机制\n- 可配置的可视化参数\n- 改进 Docker 支持，便于容器化部署\n\n## 快速入门\n请参阅更新后的 README.md 文件，获取安装说明、使用示例及配置选项。","2025-08-25T22:55:22"]