[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-EnVision-Research--Lotus":3,"tool-EnVision-Research--Lotus":64},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,2,"2026-04-06T11:32:50",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[43,15,13,14],"语言模型",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 
都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,52],"视频",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85013,"2026-04-06T11:09:19",[15,16,52,61,13,62,43,14,63],"插件","其他","音频",{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":10,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":109,"github_topics":78,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":110,"updated_at":111,"faqs":112,"releases":143},5120,"EnVision-Research\u002FLotus","Lotus","Official implementation of \"Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction\"","Lotus 是一款基于扩散模型构建的视觉基础模型，专注于生成高质量的密集预测结果。简单来说，它能像“透视眼”一样，从普通图片中精准推算出每个像素的深度信息（物体远近）和法线方向（表面朝向），从而让计算机真正理解图像的三维结构。\n\n传统视觉模型在处理细节丰富的场景时，往往难以兼顾全局结构与微小纹理的准确性。Lotus 通过引入先进的扩散生成技术，有效解决了这一痛点，显著提升了深度估计和表面法线预测的精细度与鲁棒性，即使在复杂光照或纹理缺失的情况下也能表现出色。\n\n这款工具非常适合计算机视觉研究人员、AI 开发者以及从事 3D 重建、虚拟现实内容创作的设计师使用。研究人员可将其作为强大的基线模型探索新算法；开发者能轻松将其集成到 ComfyUI 等工作流中，构建更智能的图像分析应用；设计师则可利用它快速从单张照片生成高精度的 3D 几何数据，大幅降低建模门槛。其核心亮点在于将生成式 AI 的创造力与判别式任务的精确性完美结合，为密集预测任务树立了新的标杆。","# \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEnVision-Research_Lotus_readme_6c14161104ca.png\" alt=\"lotus\" style=\"height:1em; vertical-align:bottom;\"\u002F> Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction\n\n[![Page](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Website-pink?logo=googlechrome&logoColor=white)](https:\u002F\u002Flotus3d.github.io\u002F)\n[![Paper](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Paper-b31b1b?logo=arxiv&logoColor=white)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18124)\n[![HuggingFace Demo](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗%20HuggingFace-Demo%20(Depth)-yellow)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Depth)\n[![HuggingFace 
Demo](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗%20HuggingFace-Demo%20(Normal)-yellow)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Normal)\n[![ComfyUI](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%20ComfyUI-Demo%20&%20Cloud%20API-%23C18CC1?logo=data:image\u002Fsvg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJhIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNzk5Ljk5OTk5OTMgNzk5Ljk5OTk5OTciPjxkZWZzPjxzdHlsZT4uZHtmaWxsOnVybCgjYyk7fS5kLC5le3N0cm9rZS13aWR0aDowcHg7fS5le2ZpbGw6dXJsKCNiKTt9PC9zdHlsZT48bGluZWFyR3JhZGllbnQgaWQ9ImIiIHgxPSI1Ljk4MzYwMzUiIHkxPSIzMzAuNTI0Mjc4NCIgeDI9IjcyMC43Mjg5NjYiIHkyPSI0NTYuNTUzMTcwMSIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPjxzdG9wIG9mZnNldD0iMCIgc3RvcC1jb2xvcj0iIzVlOTRmZiIvPjxzdG9wIG9mZnNldD0iMSIgc3RvcC1jb2xvcj0iIzgwOGVmZiIvPjwvbGluZWFyR3JhZGllbnQ+PGxpbmVhckdyYWRpZW50IGlkPSJjIiB4MT0iNDcxLjI3MzY2NDEiIHkxPSIzNjEuOTQ5OTI5NSIgeDI9IjgxNy4xOTE0NzE4IiB5Mj0iNDIyLjk0NDU3MjEiIGdyYWRpZW50VW5pdHM9InVzZXJTcGFjZU9uVXNlIj48c3RvcCBvZmZzZXQ9IjAiIHN0b3AtY29sb3I9IiNiNWI4ZmYiLz48c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiNjM2I1ZmYiLz48L2xpbmVhckdyYWRpZW50PjwvZGVmcz48cGF0aCBjbGFzcz0iZSIgZD0ibTM5OS45OTk5OTkzLDY0OS45OTk5OTk3Yy0xMzcuOTEzOTIxMiwwLTI0OS43NDUwMjAyLTExMS42NzQwMzIzLTI0OS45OTk1NjQtMjQ5LjUyODM2NTEtLjI1NDMzNTYtMTM3Ljc0MTYwNDMsMTEyLjA3NzgzODYtMjUwLjM3NDU3NDMsMjQ5LjgxOTY0MzYtMjUwLjQ3MTU3MTMsNjcuMzQ3NzU5Mi0uMDQ3NDI1OSwxMjguNDg2NDQ4LDI2LjUzNTk2OTUsMTczLjQ2NTI1NTQsNjkuNzk5MTQ4MywxLjk1OTI3NzIsMS44ODQ1NDQ0LDUuMDY1OTI5NSwxLjg0OTUyMzUsNi45ODgyMDM1LS4wNzI3NTA1bDk5LjAxMTMxNjQtOTkuMDExMzE2NGMxLjk2MTMxNzYtMS45NjEzMTc2LDEuOTYzNTEyNi01LjE1Njc2MDEtLjAyMjE2OTQtNy4wOTM0MDY3QzYwNi45NDc1NTU4LDQzLjA5MjM5NDUsNTA4LjAxODgyNjYtLjI3NTg4NjEsMzk4Ljk2MDY4NjcuMDAxMzIxLDE3OC4wNzkzMzU1LjU2Mjc2MzUtLjAwMDQ4ODMsMTc5LjExODgzOTIsMCw0MDAuMDAwOTAzOWMuMDAwNDg4NCwyMjAuOTEzNDY0MiwxNzkuMDg2NDIxNiwzOTkuOTk5MDk1OCwzOTkuOTk5OTk5MywzOTkuOTk5MDk1OCwxMDguNjQ2MzI2OCwwLDIwNy4xNzU2NDE5LTQzLjMxNTY5OTksMjc5LjI2MDgxODktMTEzLjYxOTkxNSwxLjk4NjQxOTEtMS45MzczNDE5LDEuOTg0MjI1OS01LjEzNTA0MDIuMDIyMTkyLTcuMDk3MDc0MmwtOTkuMDA5MTYwMy05OS4wMDkxNjAzYy0xLjkxOTc3OTUtMS45MTk3Nzk1LTUuMDI1MzA2MS0xLjk2MTUyNTMtNi45ODE5NzI5LS4wNzkzNTU5LTQ0LjkzOTM2NTUsNDMuMjI4MzYzNS0xMDYuMDExODg5NCw2OS44MDU1MDUzLTE3My4yOTE4Nzc3LDY5LjgwNTUwNTNaIi8+PHBhdGggY2xhc3M9ImQiIGQ9Im02ODkuNjA1OTc3NiwyNzkuOTk5OTk5N2wuMDAwMDAwMywxNDEuOTkxNzUxYy4wMDAwMDAxLDU2LjMxOTQyMjgtNDUuMjIyNTg2OSwxMDIuODEyNzMxNC0xMDEuNTQxNjcyNSwxMDMuMDA3NjI0Ni01Ni40NDQ1MzQzLjE5NTMyNzMtMTAyLjI2MjY3NTYtNDUuNTAyNDM4OS0xMDIuMjYyNjc1Ni0xMDEuOTAxNTQ5NHYtMTQzLjA5NzgyNjFjMC0yLjc2MTQyMzcsMi4yMzg1NzYzLTUsNS01aDQ5LjQ0MjkzNDhjMi43NjE0MjM4LDAsNSwyLjIzODU3NjMsNSw1bC0uMDAwMDAwMywxNDIuMzk5MDEwMWMwLDIzLjI4MzY5MzMsMTguNDUyOTIwNiw0Mi43NjMyMzQzLDQxLjczMzM2ODMsNDMuMTUxOTkwMywyMy43ODE4OTM3LjM5NzEyOTYsNDMuMTg1MTEwMi0xOC43NjIxMDg1LDQzLjE4NTExMDItNDIuNDUzMTc0NHYtMTQzLjA5NzgyNjFjMC0yLjc2MTQyMzcsMi4yMzg1NzYzLTUsNS01aDQ5LjQ0MjkzNDhjMi43NjE0MjM3LDAsNSwyLjIzODU3NjIsNSw1Wm01MC45NTEwODY5LDB2MjQwYzAsMi43NjE0MjM3LDIuMjM4NTc2Myw1LDUsNWg0OS40NDI5MzQ4YzIuNzYxNDIzNywwLDUtMi4yMzg1NzYzLDUtNXYtMjQwYzAtMi43NjE0MjM3LTIuMjM4NTc2My01LTUtNWgtNDkuNDQyOTM0OGMtMi43NjE0MjM3LDAtNSwyLjIzODU3NjMtNSw1WiIvPjwvc3ZnPg==)](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-Lotus)\n[![Replicate](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEnVision-Research_Lotus_readme_200812735dfa.png)](https:\u002F\u002Freplicate.com\u002Fchenxwh\u002Flotus)\n\n[Jing 
He](https:\u002F\u002Fscholar.google.com\u002Fcitations?hl=en&user=RsLS11MAAAAJ)\u003Csup>1\u003Cspan style=\"color:red;\">&#10033;\u003C\u002Fspan>\u003C\u002Fsup>,\n[Haodong Li](https:\u002F\u002Fhaodong-li.com\u002F)\u003Csup>1\u003Cspan style=\"color:red;\">&#10033;\u003C\u002Fspan>\u003C\u002Fsup>,\n[Wei Yin](https:\u002F\u002Fyvanyin.net\u002F)\u003Csup>2\u003C\u002Fsup>,\n[Yixun Liang](https:\u002F\u002Fyixunliang.github.io\u002F)\u003Csup>1\u003C\u002Fsup>,\n[Leheng Li](https:\u002F\u002Flen-li.github.io\u002F)\u003Csup>1\u003C\u002Fsup>,\n[Kaiqiang Zhou]()\u003Csup>3\u003C\u002Fsup>,\n[Hongbo Zhang]()\u003Csup>3\u003C\u002Fsup>,\n[Bingbing Liu]()\u003Csup>3\u003C\u002Fsup>,\u003Cbr>\n[Ying-Cong Chen](https:\u002F\u002Fwww.yingcong.me\u002F)\u003Csup>1,4&#9993;\u003C\u002Fsup>\n\n\u003Cspan class=\"author-block\">\u003Csup>1\u003C\u002Fsup>HKUST(GZ)\u003C\u002Fspan>\n\u003Cspan class=\"author-block\">\u003Csup>2\u003C\u002Fsup>University of Adelaide\u003C\u002Fspan>\n\u003Cspan class=\"author-block\">\u003Csup>3\u003C\u002Fsup>Noah's Ark Lab\u003C\u002Fspan>\n\u003Cspan class=\"author-block\">\u003Csup>4\u003C\u002Fsup>HKUST\u003C\u002Fspan>\u003Cbr>\n\u003Cspan class=\"author-block\">\n    \u003Csup style=\"color:red;\">&#10033;\u003C\u002Fsup>**Both authors contributed equally.**\n    \u003Csup>&#9993;\u003C\u002Fsup>Corresponding author.\n\u003C\u002Fspan>\n\n🔥🔥🔥 **Please also check our latest Lotus-2! Useful links:** [**Project Page**](https:\u002F\u002Flotus-2.github.io\u002F)**,** [**Github Repo**](https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus-2)**.** 🔥🔥🔥\n\n![teaser](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEnVision-Research_Lotus_readme_4336220a9850.jpg)\n![teaser](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEnVision-Research_Lotus_readme_c8516fe99e2a.jpg)\n\nWe present **Lotus**, a diffusion-based visual foundation model for dense geometry prediction. With minimal training data, Lotus achieves SoTA performance in two key geometry perception tasks, i.e., zero-shot depth and normal estimation. \"Avg. Rank\" indicates the average ranking across all metrics, where lower values are better. Bar length represents the amount of training data used.\n\n## 📢 News\n- 2025-04-03: The training code of Lotus (Generative & Discriminative) is now available!\n- 2025-01-17: Please check out our latest models ([lotus-normal-g-v1-1](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-normal-g-v1-1), [lotus-normal-d-v1-1](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-normal-d-v1-1)), which were trained with aligned surface normals, leading to improved performance!  \n- 2024-11-13: The demo now supports video depth estimation!\n- 2024-11-13: The Lotus disparity models ([Generative](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-g-v2-0-disparity) & [Discriminative](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-d-v2-0-disparity)) are now available, which achieve better performance!\n- 2024-10-06: The demos are now available ([Depth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Depth) & [Normal](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Normal)). Please have a try! \u003Cbr>\n- 2024-10-05: The inference code is now available! \u003Cbr>\n- 2024-09-26: [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18124) released. 
Click [here](https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fissues\u002F14#issuecomment-2409094495) if you are curious about the 3D point clouds of the teaser's depth maps! \u003Cbr>\n\n## 🛠️ Setup\nThis installation was tested on: Ubuntu 20.04 LTS, Python 3.10, CUDA 12.3, NVIDIA A800-SXM4-80GB.  \n\n1. Clone the repository (requires git):\n```\ngit clone https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus.git\ncd Lotus\n```\n\n2. Install dependencies (requires conda):\n```\nconda create -n lotus python=3.10 -y\nconda activate lotus\npip install -r requirements.txt \n```\n\n## 🤗 Gradio Demo\n\n1. Online demo: [Depth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Depth) & [Normal](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Normal)\n2. Local demo\n- For **depth** estimation, run:\n    ```\n    python app.py depth\n    ```\n- For **normal** estimation, run:\n    ```\n    python app.py normal\n    ```\n\n## 🔥 Training\n1. Initialize your Accelerate environment with:\n    ```\n    accelerate config --config_file=$PATH_TO_ACCELERATE_CONFIG_FILE\n    ```\n    Please make sure you have installed the accelerate package. We have tested our training scripts with the accelerate version 0.29.3.\n\n2. Prepare your training data:\n- [Hypersim](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-hypersim): \n    - Download this [script](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-hypersim\u002Fblob\u002Fmain\u002Fcontrib\u002F99991\u002Fdownload.py) into your `$PATH_TO_RAW_HYPERSIM_DATA` directory for data downloading.\n    - Run the following command to download the data:\n        ```\n        cd $PATH_TO_RAW_HYPERSIM_DATA\n\n        # Download the tone-mapped images\n        python .\u002Fdownload.py --contains scene_cam_ --contains final_preview --contains tonemap.jpg --silent\n\n        # Download the depth maps\n        python .\u002Fdownload.py --contains scene_cam_ --contains geometry_hdf5 --contains depth_meters --silent\n\n        # Download the normal maps\n        python .\u002Fdownload.py --contains scene_cam_ --contains geometry_hdf5 --contains normal --silent\n        ```\n    - Download the split file from [here](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-hypersim\u002Fblob\u002Fmain\u002Fevermotion_dataset\u002Fanalysis\u002Fmetadata_images_split_scene_v1.csv) and put it in the `$PATH_TO_RAW_HYPERSIM_DATA` directory.\n    - Process the data with the command: `bash utils\u002Fprocess_hypersim.sh`.\n- [Virtual KITTI](https:\u002F\u002Feurope.naverlabs.com\u002Fproxy-virtual-worlds-vkitti-2\u002F):\n    - Download the [rgb](https:\u002F\u002Fdownload.europe.naverlabs.com\u002F\u002Fvirtual_kitti_2.0.3\u002Fvkitti_2.0.3_rgb.tar), [depth](https:\u002F\u002Fdownload.europe.naverlabs.com\u002F\u002Fvirtual_kitti_2.0.3\u002Fvkitti_2.0.3_depth.tar), and [textgz](https:\u002F\u002Fdownload.europe.naverlabs.com\u002F\u002Fvirtual_kitti_2.0.3\u002Fvkitti_2.0.3_textgt.tar.gz) into the `$PATH_TO_VKITTI_DATA` directory and unzip them.\n    - Make sure the directory structure is as follows:\n        ```\n        SceneX\u002FY\u002Fframes\u002Frgb\u002FCamera_Z\u002Frgb_%05d.jpg\n        SceneX\u002FY\u002Fframes\u002Fdepth\u002FCamera_Z\u002Fdepth_%05d.png\n        SceneX\u002FY\u002Fcolors.txt\n        SceneX\u002FY\u002Fextrinsic.txt\n        SceneX\u002FY\u002Fintrinsic.txt\n        SceneX\u002FY\u002Finfo.txt\n        SceneX\u002FY\u002Fbbox.txt\n        SceneX\u002FY\u002Fpose.txt\n        ```\n        where $`X 
\\in \\{01, 02, 06, 18, 20\\}`$ and represent one of 5 different locations.\n        $`Y \\in \\{\\texttt{15-deg-left}, \\texttt{15-deg-right}, \\texttt{30-deg-left}, \\texttt{30-deg-right}, \\texttt{clone}, \\texttt{fog}, \\texttt{morning}, \\texttt{overcast}, \\texttt{rain}, \\texttt{sunset}\\}`$ and represent the different variations.\n        $`Z \\in [0, 1]`$ and represent the left or right camera. \n        Note that the indexes always start from 0.\n    - Generate the normal maps with the command: `bash utils\u002Fdepth2normal.sh`.\n3. Run the training command! 🚀\n    - `bash train_scripts\u002Ftrain_lotus_g_{$TASK}.sh` for training Lotus Generative models;\n    - `bash train_scripts\u002Ftrain_lotus_d_{$TASK}.sh` for training Lotus Discriminative models.\n\n## 🕹️ Inference\n### Testing on your images\n1. Place your images in a directory, for example, under `assets\u002Fin-the-wild_example` (where we have prepared several examples). \n2. Run the inference command: `bash infer.sh`. \n\n### Evaluation on benchmark datasets\n1. Prepare benchmark datasets:\n- For **depth** estimation, you can download the [evaluation datasets (depth)](https:\u002F\u002Fshare.phys.ethz.ch\u002F~pf\u002Fbingkedata\u002Fmarigold\u002Fevaluation_dataset\u002F) by the following commands (referred to [Marigold](https:\u002F\u002Fgithub.com\u002Fprs-eth\u002FMarigold?tab=readme-ov-file#-evaluation-on-test-datasets-)):\n    ```\n    cd datasets\u002Feval\u002Fdepth\u002F\n    \n    wget -r -np -nH --cut-dirs=4 -R \"index.html*\" -P . https:\u002F\u002Fshare.phys.ethz.ch\u002F~pf\u002Fbingkedata\u002Fmarigold\u002Fevaluation_dataset\u002F\n    ```\n- For **normal** estimation, you can download the  [evaluation datasets (normal)](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1t3LMJIIrSnCGwOEf53Cyg0lkSXd3M4Hm?usp=drive_link) (`dsine_eval.zip`) into the path `datasets\u002Feval\u002Fnormal\u002F` and unzip it (referred to [DSINE](https:\u002F\u002Fgithub.com\u002Fbaegwangbin\u002FDSINE?tab=readme-ov-file#getting-started)). \n\n2. Run the evaluation command: `bash eval_scripts\u002Feval-[task]-[mode].sh`, where `[task]` represents the task name (**depth** or **normal**) and `[mode]` refers to the mode name (**g** or **d**). \u003C\u002Fbr>\n\n3. (Optional) To reproduce the results presented in our paper, you can set the `--rng_state_path` option in the evaluation command. 
The RNG state files are available at `.\u002Frng_states\u002F`.\n\n### Choose your model\nBelow are the released models and their corresponding configurations:\n|CHECKPOINT_DIR |TASK_NAME |MODE |\n|:--:|:--:|:--:|\n| [`jingheya\u002Flotus-depth-g-v1-0`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-g-v1-0) | depth| `generation`|\n| [`jingheya\u002Flotus-depth-d-v1-0`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-d-v1-0) | depth|`regression` |\n| [`jingheya\u002Flotus-depth-g-v2-1-disparity`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-g-v2-1-disparity) | depth (disparity)| `generation`|\n| [`jingheya\u002Flotus-depth-d-v2-0-disparity`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-d-v2-0-disparity) | depth (disparity)|`regression` |\n| [`jingheya\u002Flotus-normal-g-v1-1`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-normal-g-v1-1) |normal | `generation` |\n| [`jingheya\u002Flotus-normal-d-v1-1`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-normal-d-v1-1) |normal |`regression` |\n\n## 🎓 Citation\nIf you find our work useful in your research, please consider citing our paper:\n```bibtex\n@article{he2024lotus,\n    title={Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction},\n    author={He, Jing and Li, Haodong and Yin, Wei and Liang, Yixun and Li, Leheng and Zhou, Kaiqiang and Liu, Hongbo and Liu, Bingbing and Chen, Ying-Cong},\n    journal={arXiv preprint arXiv:2409.18124},\n    year={2024}\n}\n```\n","# \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEnVision-Research_Lotus_readme_6c14161104ca.png\" alt=\"lotus\" style=\"height:1em; vertical-align:bottom;\"\u002F> Lotus：基于扩散的视觉基础模型，用于高质量密集预测\n\n[![页面](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Website-pink?logo=googlechrome&logoColor=white)](https:\u002F\u002Flotus3d.github.io\u002F)\n[![论文](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Paper-b31b1b?logo=arxiv&logoColor=white)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18124)\n[![HuggingFace演示](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗%20HuggingFace-Demo%20(Depth)-yellow)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Depth)\n[![HuggingFace演示](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗%20HuggingFace-Demo%20(Normal)-yellow)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Normal)\n[![ComfyUI](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%20ComfyUI-Demo%20&%20Cloud%20API-%23C18CC1)](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-Lotus)\n\n## 🛠️ 设置\n此安装已在以下环境中测试通过：Ubuntu 20.04 LTS、Python 3.10、CUDA 12.3、NVIDIA A800-SXM4-80GB。\n\n1. 克隆仓库（需要 git）：\n```\ngit clone https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus.git\ncd Lotus\n```\n\n2. 安装依赖（需要 conda）：\n```\nconda create -n lotus python=3.10 -y\nconda activate lotus\npip install -r requirements.txt \n```\n\n## 🤗 Gradio 演示\n\n1. 在线演示：[深度](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Depth) & [法线](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Normal)\n2. 本地演示\n- 对于**深度**估计，运行：\n    ```\n    python app.py depth\n    ```\n- 对于**法线**估计，运行：\n    ```\n    python app.py normal\n    ```\n\n## 🔥 训练\n1. 使用以下命令初始化 Accelerate 环境：\n    ```\n    accelerate config --config_file=$PATH_TO_ACCELERATE_CONFIG_FILE\n    ```\n    请确保已安装 accelerate 包。我们已使用 accelerate 0.29.3 版本测试过训练脚本。\n\n2. 准备训练数据：\n- [Hypersim](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-hypersim)：\n    - 将此[脚本](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-hypersim\u002Fblob\u002Fmain\u002Fcontrib\u002F99991\u002Fdownload.py)下载到 `$PATH_TO_RAW_HYPERSIM_DATA` 目录中以进行数据下载。\n    - 运行以下命令下载数据：\n        ```\n        cd $PATH_TO_RAW_HYPERSIM_DATA\n\n        # 下载色调映射图像\n        python .\u002Fdownload.py --contains scene_cam_ --contains final_preview --contains tonemap.jpg --silent\n\n        # 下载深度图\n        python .\u002Fdownload.py --contains scene_cam_ --contains geometry_hdf5 --contains depth_meters --silent\n\n        # 下载法线图\n        python .\u002Fdownload.py --contains scene_cam_ --contains geometry_hdf5 --contains normal --silent\n        ```\n    - 从[这里](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-hypersim\u002Fblob\u002Fmain\u002Fevermotion_dataset\u002Fanalysis\u002Fmetadata_images_split_scene_v1.csv)下载分割文件，并将其放入 `$PATH_TO_RAW_HYPERSIM_DATA` 目录中。\n    - 使用命令 `bash utils\u002Fprocess_hypersim.sh` 处理数据。\n- [Virtual KITTI](https:\u002F\u002Feurope.naverlabs.com\u002Fproxy-virtual-worlds-vkitti-2\u002F)：\n    - 将 [rgb](https:\u002F\u002Fdownload.europe.naverlabs.com\u002F\u002Fvirtual_kitti_2.0.3\u002Fvkitti_2.0.3_rgb.tar)、[深度](https:\u002F\u002Fdownload.europe.naverlabs.com\u002F\u002Fvirtual_kitti_2.0.3\u002Fvkitti_2.0.3_depth.tar) 和 [textgz](https:\u002F\u002Fdownload.europe.naverlabs.com\u002F\u002Fvirtual_kitti_2.0.3\u002Fvkitti_2.0.3_textgt.tar.gz) 下载到 `$PATH_TO_VKITTI_DATA` 目录并解压。\n    - 确保目录结构如下：\n        ```\n        SceneX\u002FY\u002Fframes\u002Frgb\u002FCamera_Z\u002Frgb_%05d.jpg\n        SceneX\u002FY\u002Fframes\u002Fdepth\u002FCamera_Z\u002Fdepth_%05d.png\n        SceneX\u002FY\u002Fcolors.txt\n        SceneX\u002FY\u002Fextrinsic.txt\n        SceneX\u002FY\u002Fintrinsic.txt\n        SceneX\u002FY\u002Finfo.txt\n        SceneX\u002FY\u002Fbbox.txt\n        SceneX\u002FY\u002Fpose.txt\n        ```\n        其中 $`X \\in \\{01, 02, 06, 18, 20\\}`$ 表示 5 个不同地点之一。\n        $`Y \\in \\{\\texttt{15-deg-left}, \\texttt{15-deg-right}, \\texttt{30-deg-left}, \\texttt{30-deg-right}, \\texttt{clone}, \\texttt{fog}, \\texttt{morning}, \\texttt{overcast}, \\texttt{rain}, \\texttt{sunset}\\}`$ 表示不同的场景变化。\n        $`Z \\in [0, 
1]`$ 表示左或右摄像头。注意索引始终从 0 开始。\n    - 使用命令 `bash utils\u002Fdepth2normal.sh` 生成法线图。\n3. 运行训练命令！🚀\n    - `bash train_scripts\u002Ftrain_lotus_g_{$TASK}.sh` 用于训练 Lotus 生成模型；\n    - `bash train_scripts\u002Ftrain_lotus_d_{$TASK}.sh` 用于训练 Lotus 判别模型。\n\n## 🕹️ 推理\n### 在您的图像上测试\n1. 将您的图像放入一个目录中，例如 `assets\u002Fin-the-wild_example`（我们在此处准备了几张示例）。\n2. 运行推理命令：`bash infer.sh`。\n\n### 在基准数据集上评估\n1. 准备基准数据集：\n- 对于**深度**估计，您可以按照以下命令下载[评估数据集（深度）](https:\u002F\u002Fshare.phys.ethz.ch\u002F~pf\u002Fbingkedata\u002Fmarigold\u002Fevaluation_dataset\u002F)（参考 [Marigold](https:\u002F\u002Fgithub.com\u002Fprs-eth\u002FMarigold?tab=readme-ov-file#-evaluation-on-test-datasets-)）：\n    ```\n    cd datasets\u002Feval\u002Fdepth\u002F\n    \n    wget -r -np -nH --cut-dirs=4 -R \"index.html*\" -P . https:\u002F\u002Fshare.phys.ethz.ch\u002F~pf\u002Fbingkedata\u002Fmarigold\u002Fevaluation_dataset\u002F\n    ```\n- 对于**法线**估计，您可以将[评估数据集（法线）](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1t3LMJIIrSnCGwOEf53Cyg0lkSXd3M4Hm?usp=drive_link)（`dsine_eval.zip`）下载到路径 `datasets\u002Feval\u002Fnormal\u002F` 并解压（参考 [DSINE](https:\u002F\u002Fgithub.com\u002Fbaegwangbin\u002FDSINE?tab=readme-ov-file#getting-started)）。\n\n2. 运行评估命令：`bash eval_scripts\u002Feval-[task]-[mode].sh`，其中 `[task]` 表示任务名称（**深度**或**法线**），`[mode]` 表示模式名称（**g** 或 **d**）。 \u003C\u002Fbr>\n\n3. （可选）若要复现论文中的结果，您可以在评估命令中设置 `--rng_state_path` 选项。RNG 状态文件可在 `.\u002Frng_states\u002F` 中找到。\n\n### 选择您的模型\n以下是已发布的模型及其对应配置：\n|CHECKPOINT_DIR |TASK_NAME |MODE |\n|:--:|:--:|:--:|\n| [`jingheya\u002Flotus-depth-g-v1-0`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-g-v1-0) | 深度| `generation`|\n| [`jingheya\u002Flotus-depth-d-v1-0`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-d-v1-0) | 深度|`regression` |\n| [`jingheya\u002Flotus-depth-g-v2-1-disparity`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-g-v2-1-disparity) | 深度（视差）| `generation`|\n| [`jingheya\u002Flotus-depth-d-v2-0-disparity`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-depth-d-v2-0-disparity) | 深度（视差）|`regression` |\n| [`jingheya\u002Flotus-normal-g-v1-1`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-normal-g-v1-1) |法线 | `generation` |\n| [`jingheya\u002Flotus-normal-d-v1-1`](https:\u002F\u002Fhuggingface.co\u002Fjingheya\u002Flotus-normal-d-v1-1) |法线 |`regression` |\n\n## 🎓 引用\n如果您在研究中使用了我们的工作，请考虑引用我们的论文：\n```bibtex\n@article{he2024lotus,\n    title={Lotus: 基于扩散的高质量密集预测视觉基础模型},\n    author={He, Jing and Li, Haodong and Yin, Wei and Liang, Yixun and Li, Leheng and Zhou, Kaiqiang and Liu, Hongbo and Liu, Bingbing and Chen, Ying-Cong},\n    journal={arXiv 预印本 arXiv:2409.18124},\n    year={2024}\n}\n```","# Lotus 快速上手指南\n\nLotus 是一个基于扩散模型的视觉基础模型，专为高质量的密集几何预测（如深度估计和法线估计）而设计。它在零样本设置下即可达到业界领先（SoTA）的性能。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**: Ubuntu 20.04 LTS (推荐)\n*   **Python**: 3.10\n*   **CUDA**: 12.3 (需兼容的 NVIDIA 显卡，测试环境为 A800)\n*   **依赖管理**: Conda\n*   **其他工具**: Git\n\n> **注意**：虽然官方测试环境为 CUDA 12.3，但通常兼容较新的 CUDA 版本。请确保已正确安装 NVIDIA 驱动。\n\n## 安装步骤\n\n### 1. 克隆项目\n使用 Git 将代码库克隆到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus.git\ncd Lotus\n```\n\n### 2. 创建虚拟环境并安装依赖\n推荐使用 Conda 创建独立的 Python 环境以避免冲突：\n\n```bash\nconda create -n lotus python=3.10 -y\nconda activate lotus\npip install -r requirements.txt \n```\n\n> **提示**：如果 `pip` 下载速度较慢，可添加国内镜像源加速（例如清华源）：\n> `pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n### 3. 
配置 Accelerate (仅训练需要)\n如果您计划进行模型训练，需初始化 Accelerate 环境：\n\n```bash\naccelerate config --config_file=$PATH_TO_ACCELERATE_CONFIG_FILE\n```\n*(注：官方测试使用的 accelerate 版本为 0.29.3)*\n\n## 基本使用\n\n### 方式一：运行本地 Gradio 演示 (推荐新手)\n这是最简单的体验方式，启动后可在浏览器中上传图片进行测试。\n\n*   **深度估计 (Depth Estimation)**:\n    ```bash\n    python app.py depth\n    ```\n*   **法线估计 (Normal Estimation)**:\n    ```bash\n    python app.py normal\n    ```\n运行后，终端会显示本地访问地址（通常为 `http:\u002F\u002F127.0.0.1:7860`），在浏览器打开即可使用。\n\n### 方式二：命令行推理 (批量处理)\n如果您需要对本地图片文件夹进行批量推理：\n\n1.  将待处理的图片放入目录，例如 `assets\u002Fin-the-wild_example`。\n2.  运行推理脚本：\n    ```bash\n    bash infer.sh\n    ```\n结果将保存在输出目录中。\n\n### 方式三：在线体验\n无需安装即可通过 Hugging Face Spaces 体验：\n*   [深度估计 Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Depth)\n*   [法线估计 Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhaodongli\u002FLotus_Normal)","某建筑科技公司正在开发一款基于手机照片的旧房改造评估系统，需要快速从非专业拍摄的图片中提取高精度的墙面深度与法线信息以生成 3D 模型。\n\n### 没有 Lotus 时\n- **细节丢失严重**：传统深度估计模型在处理瓷砖纹理、复杂光影或低对比度区域时，生成的深度图往往模糊不清，导致重建的墙面平整度失真。\n- **边缘处理粗糙**：在门窗边框、踢脚线等物体交界处，现有工具容易产生锯齿或伪影，后续需要人工花费大量时间进行手动修补。\n- **泛化能力不足**：面对用户随手拍摄的倾斜角度大、光照不均的照片，模型预测结果极不稳定，经常需要多次重试或更换算法组合。\n- **流程繁琐低效**：为了获得可用的法线图辅助渲染，工程师不得不串联多个专用小模型，导致推理延迟高，难以满足实时预览需求。\n\n### 使用 Lotus 后\n- **高密度精准预测**：Lotus 凭借扩散模型的特性，能敏锐捕捉微小的纹理变化，即使在光滑瓷砖或暗部区域也能输出像素级精确的深度细节。\n- **几何结构清晰**：在物体边缘和复杂结构处，Lotus 生成的边界锐利且自然，大幅减少了后期人工修正的工作量，直接产出可用于建模的数据。\n- **鲁棒性显著提升**：面对各种非理想拍摄条件（如强光反射、奇怪视角），Lotus 依然保持稳定的高质量输出，显著降低了因输入图像质量导致的失败率。\n- **一站式高效解决**：单个 Lotus 模型即可同时高精度完成深度估计和法线预测任务，简化了技术栈，将单张图片的处理耗时缩短至秒级。\n\nLotus 通过扩散基础模型的高密度预测能力，将非专业影像转化为工业级 3D 数据的门槛降至最低，极大加速了数字化重建流程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEnVision-Research_Lotus_148e0d4d.png","EnVision-Research","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FEnVision-Research_9bd08b80.png","Research group of visual generative models. 
(PI: Ying-Cong Chen)",null,"https:\u002F\u002Fwww.yingcong.me","https:\u002F\u002Fgithub.com\u002FEnVision-Research",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",96.8,{"name":87,"color":88,"percentage":89},"Shell","#89e051",3.2,794,47,"2026-04-05T23:44:50","Apache-2.0","Linux","必需 NVIDIA GPU，测试环境为 NVIDIA A800-SXM4-80GB，需支持 CUDA 12.3","未说明",{"notes":98,"python":99,"dependencies":100},"官方仅在 Ubuntu 20.04 LTS 上测试通过。训练和推理依赖 accelerate 库（推荐版本 0.29.3），使用前需运行 'accelerate config' 初始化环境。项目提供生成式（Generative）和判别式（Discriminative）两种模型架构，支持深度估计和法线估计任务。建议使用 conda 管理虚拟环境。","3.10",[101,102,103,104,105,106,107,108],"torch","accelerate==0.29.3","gradio","diffusers","transformers","opencv-python","pillow","numpy",[15,62],"2026-03-27T02:49:30.150509","2026-04-08T01:08:03.661163",[113,118,123,128,133,138],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},23255,"运行推理时遇到 CUDA 显存不足（OOM）错误，但 GPU 实际上还有很多空闲显存，如何解决？","可以在每次推理后添加 `torch.cuda.empty_cache()` 来清理缓存。具体代码实现可参考项目中的 [infer.py 第 185 行](https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fblob\u002Fmain\u002Finfer.py#L185)。","https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fissues\u002F10",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},23256,"Lotus 是否支持从不同角度生成多视角的法线图（Normal Maps）？","目前 Lotus 仅支持为当前视图生成**相对**法线图。如果需要跨视角一致的法线图，建议利用相机外参（camera extrinsics）将法线图转换到世界坐标系中，这是一个无需训练的方法。如果对一致性要求极高，另一种方案是先重建 3D 模型，然后通过渲染生成法线图。","https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fissues\u002F19",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},23257,"项目是否会开源训练代码以便复现？","是的，训练代码已经公开可用。维护者已在评论区确认发布，用户可以直接在仓库中查找相关训练脚本。","https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fissues\u002F33",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},23258,"NYUv2 等数据集的表面法线真值（Ground Truth）是如何从 RGBD 图像中计算出来的？","从深度图计算法线（depth2normal）在数学上已有成熟解法（如 PlaneSVD 算法），许多现有仓库已实现该功能。虽然这是基于几何计算的近似值，但在深度图为真值的情况下，计算出的法线通常也被视为真值。对于图像直接预测法线或深度的任务，由于相机内参未知，这属于病态问题，更推荐关注结合相机参数的几何预测方法（如 MoGe）。","https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fissues\u002F34",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},23259,"既然判别式版本（Lotus-D）在大多数数据集上表现优于生成式版本（Lotus-G），为什么还要使用基于扩散模型的公式？","这是一个开放性问题。采用扩散模型公式（Lotus-G）是为了严格遵循扩散模型的范式，从而继承其特性（如概率分布输出）。虽然判别式方法在当前数据设置下表现优异，但生成式方法在结合真实世界参数（如焦距）生成具有绝对距离的度量深度图方面具有潜力，且 Lotus 展示了用少量数据即可达到竞争力的性能。","https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fissues\u002F31",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},23260,"如何将预训练模型部署到 Hugging Face 或使用其功能？","项目维护者已与 Hugging Face 团队合作。如果是 PyTorch 模型，可以使用 `PyTorchModelHubMixin` 类，它为模型添加了 `from_pretrained` 和 `push_to_hub` 方法，方便上传和下载。此外，用户也可以直接使用 `hf_hub_download` 下载文件，或在 Hugging Face Spaces 上构建演示 Demo（项目已有相关草稿）。","https:\u002F\u002Fgithub.com\u002FEnVision-Research\u002FLotus\u002Fissues\u002F4",[144],{"id":145,"version":146,"summary_zh":147,"released_at":148},136950,"v1.0.0","发布Lotus的所有训练和推理代码。","2025-04-03T11:27:52"]