[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Turoad--lanedet":3,"tool-Turoad--lanedet":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
### ragflow (infiniflow/ragflow, ★77,062)
RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It combines state-of-the-art RAG techniques with agent capabilities: it extracts knowledge efficiently from all kinds of documents and lets models reason and execute tasks on top of that knowledge. Hallucination and stale knowledge are common pain points for LLM applications; by deeply parsing complex document structure (tables, charts, mixed layouts), RAGFlow raises retrieval accuracy, curbs fabricated answers, and keeps responses grounded and current, while its built-in agent mechanism can plan steps to solve complex problems rather than merely answer questions. It suits developers, enterprise teams, and AI researchers, whether building a private knowledge-base Q&A system or bringing LLMs into vertical domains, with a visual workflow editor and flexible APIs that serve both non-experts and professionals. Released under Apache 2.0, it is becoming a key bridge between general-purpose LLMs and domain-specific knowledge.

---

# LaneDet

> Turoad/lanedet · An open source lane detection toolbox based on PyTorch, including SCNN, RESA, UFLD, LaneATT, CondLane, etc.

**★620 · 97 forks · Apache-2.0 · Python 96.4% / Cuda 2.4% / C++ 0.9% / Dockerfile 0.3% · last commit 2026-03-28**

lanedet is an open-source lane detection toolbox built on PyTorch that gathers the current state-of-the-art (SOTA) lane detection models. Accurate lane detection is critical for autonomous driving, yet implementations of the various algorithms tend to be complex and scattered; lanedet's unified framework lets developers easily reproduce mainstream methods such as SCNN, RESA, UFLD, and LaneATT and build their own on top. It is well suited to computer-vision researchers, autonomous-driving engineers, and students of deep learning: it supports multiple backbones including ResNet and MobileNet, works with the common CULane and TuSimple datasets, and covers training, validation, inference, and visualization, providing a complete experimental pipeline that lowers the barrier to entry for quick idea validation and benchmarking alike.

## Introduction
LaneDet is an open source lane detection toolbox based on PyTorch that aims to pull together a wide variety of state-of-the-art lane detection models. Developers can reproduce these SOTA methods and build their own methods.

![demo image](.github/_clips_0601_1494452613491980502_20.jpg)

## Table of Contents
* [Introduction](#introduction)
* [Benchmark and model zoo](#benchmark-and-model-zoo)
* [Installation](#installation)
* [Getting Started](#getting-started)
* [Contributing](#contributing)
* [Licenses](#licenses)
* [Acknowledgement](#acknowledgement)

## Benchmark and model zoo
Supported backbones:
- [x] ResNet
- [x] ERFNet
- [x] VGG
- [x] MobileNet
- [ ] DLA (coming soon)

Supported detectors (each pairs with a backbone through a Python config file; a hedged sketch follows this list):
- [x] [SCNN](configs/scnn)
- [x] [UFLD](configs/ufld)
- [x] [RESA](configs/resa)
- [x] [LaneATT](configs/laneatt)
- [x] [CondLane](configs/condlane)
- [ ] CLRNet (coming soon)
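A minimal sketch of what such a config might contain. The geometry fields (`img_height`, `img_width`, `cut_height`, `ori_img_h`, `ori_img_w`) and their CULane defaults come from the FAQ at the end of this page; the `net` and `backbone` entries are hypothetical illustrations, not field names confirmed by the repo:

```python
# Hypothetical lanedet-style config sketch (configs/ are plain Python files).
net = dict(type='RESANet')                       # assumed detector entry
backbone = dict(type='ResNetWrapper', depth=50)  # assumed backbone entry

# Image geometry, CULane defaults per the FAQ below.
img_height = 288   # network input height
img_width = 800    # network input width
cut_height = 240   # rows cropped from the top of each frame
ori_img_h = 590    # original CULane frame height
ori_img_w = 1640   # original CULane frame width
```

Switching detector or backbone then amounts to pointing `main.py` at a different config file.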
## Installation

### Clone this repository
```
git clone https://github.com/turoad/lanedet.git
```
We will refer to this directory as `$LANEDET_ROOT`.

### Create a conda virtual environment and activate it (conda is optional)

```Shell
conda create -n lanedet python=3.8 -y
conda activate lanedet
```

### Install dependencies

```Shell
# Install PyTorch first; the cudatoolkit version must match the CUDA version installed on your system.
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.1 -c pytorch

# Or install via pip
pip install torch==1.8.0 torchvision==0.9.0

# Install python packages
python setup.py build develop
```

### Data preparation

#### CULane

Download [CULane](https://xingangpan.github.io/projects/CULane.html), extract it to `$CULANEROOT`, and create a link in the `data` directory:

```Shell
cd $LANEDET_ROOT
mkdir -p data
ln -s $CULANEROOT data/CULane
```

For CULane, you should end up with this structure:
```
$CULANEROOT/driver_xx_xxframe    # data folders x6
$CULANEROOT/laneseg_label_w16    # lane segmentation labels
$CULANEROOT/list                 # data lists
```

#### TuSimple
Download [TuSimple](https://github.com/TuSimple/tusimple-benchmark/issues/3), extract it to `$TUSIMPLEROOT`, and create a link in the `data` directory:

```Shell
cd $LANEDET_ROOT
mkdir -p data
ln -s $TUSIMPLEROOT data/tusimple
```

For TuSimple, you should end up with this structure:
```
$TUSIMPLEROOT/clips                 # data folders
$TUSIMPLEROOT/label_data_xxxx.json  # label json files x4
$TUSIMPLEROOT/test_tasks_0627.json  # test tasks json file
$TUSIMPLEROOT/test_label.json       # test label json file
```

TuSimple does not ship segmentation annotations, so we generate them from the JSON annotations:

```Shell
python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT
# this will generate the seg_label directory
```
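For orientation, each line of a TuSimple `label_data_*.json` file is a standalone JSON object. A minimal reading sketch, assuming the standard TuSimple annotation schema (`lanes`, `h_samples`, `raw_file`); the file name is one of the four label files:

```python
import json

# TuSimple annotations: one JSON object per line, with
# "lanes" (per-lane x coords, -2 where the lane is absent),
# "h_samples" (shared y coords), and "raw_file" (image path).
with open("label_data_0313.json") as f:
    for line in f:
        ann = json.loads(line)
        for lane_x in ann["lanes"]:
            points = [(x, y)
                      for x, y in zip(lane_x, ann["h_samples"])
                      if x >= 0]  # drop the -2 "absent" markers
            # `points` is the polyline for one lane in ann["raw_file"]
```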
\n\n```Shell\npython tools\u002Fgenerate_seg_tusimple.py --root $TUSIMPLEROOT\n# this will generate seg_label directory\n```\n\n## Getting Started\n### Training\n\nFor training, run\n\n```Shell\npython main.py [configs\u002Fpath_to_your_config] --gpus [gpu_ids]\n```\n\n\nFor example, run\n```Shell\npython main.py configs\u002Fresa\u002Fresa50_culane.py --gpus 0\n```\n\n### Testing\nFor testing, run\n```Shell\npython main.py [configs\u002Fpath_to_your_config] --validate --load_from [path_to_your_model] [gpu_num]\n```\n\nFor example, run\n```Shell\npython main.py configs\u002Fresa\u002Fresa50_culane.py --validate --load_from culane_resnet50.pth --gpus 0\n```\n\nCurrently, this code can output the visualization result when testing, just add `--view`.\nWe will get the visualization result in `work_dirs\u002Fxxx\u002Fxxx\u002Fvisualization`.\n\nFor example, run\n```Shell\npython main.py configs\u002Fresa\u002Fresa50_culane.py --validate --load_from culane_resnet50.pth --gpus 0 --view\n```\n\n### Inference\nSee `tools\u002Fdetect.py` for detailed information.\n```\npython tools\u002Fdetect.py --help\n\nusage: detect.py [-h] [--img IMG] [--show] [--savedir SAVEDIR]\n                 [--load_from LOAD_FROM]\n                 config\n\npositional arguments:\n  config                The path of config file\n\noptional arguments:\n  -h, --help            show this help message and exit\n  --img IMG             The path of the img (img file or img_folder), for\n                        example: data\u002F*.png\n  --show                Whether to show the image\n  --savedir SAVEDIR     The root of save directory\n  --load_from LOAD_FROM\n                        The path of model\n```\nTo run inference on example images in `.\u002Fimages` and save the visualization images in `vis` folder:\n```\npython tools\u002Fdetect.py configs\u002Fresa\u002Fresa34_culane.py --img images\\\n          --load_from resa_r34_culane.pth --savedir .\u002Fvis\n```\n\n\n## Contributing\nWe appreciate all contributions to improve LaneDet.  
## Licenses
This project is released under the [Apache 2.0 license](LICENSE).

## Acknowledgement
* [open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection)
* [pytorch/vision](https://github.com/pytorch/vision)
* [cardwing/Codes-for-Lane-Detection](https://github.com/cardwing/Codes-for-Lane-Detection)
* [XingangPan/SCNN](https://github.com/XingangPan/SCNN)
* [ZJULearning/resa](https://github.com/ZJULearning/resa)
* [cfzd/Ultra-Fast-Lane-Detection](https://github.com/cfzd/Ultra-Fast-Lane-Detection)
* [lucastabelini/LaneATT](https://github.com/lucastabelini/LaneATT)
* [aliyun/conditional-lane-detection](https://github.com/aliyun/conditional-lane-detection)
## Quick Start Guide

LaneDet is an open-source lane detection toolbox based on PyTorch that integrates multiple state-of-the-art (SOTA) lane detection models, letting developers reproduce existing methods or build their own.

### Requirements

Before starting, make sure your environment meets these requirements:

- **OS**: Linux / Windows
- **Python**: 3.8
- **Deep learning framework**: PyTorch 1.8.0, torchvision 0.9.0
- **CUDA toolkit**: 10.1 (must match the CUDA version installed on the system)

### 1. Clone the repository

```Shell
git clone https://github.com/turoad/lanedet.git
```
We will refer to this directory as `$LANEDET_ROOT`.

### 2. Create a virtual environment

Managing the environment with conda is recommended (optional):

```Shell
conda create -n lanedet python=3.8 -y
conda activate lanedet
```

### 3. Install dependencies

Install PyTorch first (via conda or pip, depending on your system):

```Shell
# Option 1: conda
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.1 -c pytorch

# Option 2: pip
pip install torch==1.8.0 torchvision==0.9.0
```

Then install the project:

```Shell
python setup.py build develop
```

### 4. Prepare the data

Training and testing require the datasets to be downloaded and linked.

**CULane:**
Download [CULane](https://xingangpan.github.io/projects/CULane.html), extract it to `$CULANEROOT`, and create a symlink in the project:

```Shell
cd $LANEDET_ROOT
mkdir -p data
ln -s $CULANEROOT data/CULane
```

**TuSimple:**
Download [TuSimple](https://github.com/TuSimple/tusimple-benchmark/issues/3), extract it to `$TUSIMPLEROOT`, and create a symlink:

```Shell
cd $LANEDET_ROOT
mkdir -p data
ln -s $TUSIMPLEROOT data/tusimple
```

TuSimple does not include segmentation annotations, so generate them:

```Shell
python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT
```

### Basic usage

**Train** with a config file and GPU id:

```Shell
python main.py configs/resa/resa50_culane.py --gpus 0
```

**Validate** with a pretrained model; add `--view` to save visualizations:

```Shell
python main.py configs/resa/resa50_culane.py --validate --load_from culane_resnet50.pth --gpus 0 --view
```
Visualizations are saved under `work_dirs/xxx/xxx/visualization`.

**Inference** on an image or folder with `tools/detect.py`:

```Shell
python tools/detect.py configs/resa/resa34_culane.py --img images\
      --load_from resa_r34_culane.pth --savedir ./vis
```

More inference options: `python tools/detect.py --help`.
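Since the pinned torch build must match the system CUDA toolkit, a quick sanity check before training can save a failed run; a minimal sketch, where the expected version strings are the ones this guide pins:

```python
import torch

# Verify the pinned environment: torch 1.8.0, CUDA 10.1, visible GPU.
print("torch:", torch.__version__)       # expect 1.8.0
print("cuda:", torch.version.cuda)       # expect 10.1
print("gpu available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```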
## Use Case

An autonomous-driving startup is building the core perception module for a new L2 driver-assistance system and must select and validate several lane detection algorithms within a month.

### Without lanedet
- Engineers hand-collect SCNN, UFLD, and other standalone open-source projects; conflicting dependency versions cost days of environment setup.
- Each dataset (e.g. CULane) has its own annotation format, requiring one-off parsing and cleaning scripts.
- There is no unified evaluation interface during training, making it hard to compare models side by side on complex road scenes.
- Inference code must be re-wrapped for deployment, and the lack of visualization for debugging slows down error triage.

### With lanedet
- One unified architecture: switching backbones and detectors only requires editing a config file, and the environment is set up once.
- Built-in data preparation handles TuSimple and CULane annotation conversion, cutting preprocessing work substantially.
- Benchmarks let the team compare multiple models in one run and quickly pick the algorithm that fits the car's compute budget.
- The bundled detection script with `--view` outputs visualizations in real time, speeding up iteration and bug fixing.

With its modular design, lanedet packages state-of-the-art algorithms with engineering practice, letting developers focus on product logic rather than infrastructure.

## Project Info

- **Owner:** TuZheng (Turoad) · Hangzhou · https://github.com/Turoad
- **Category:** developer framework
- **Environment notes:** data preparation downloads CULane and TuSimple and links them with `ln -s` (implying a Linux/macOS environment); PyTorch is pinned to 1.8.0; training, validation, and visualization are supported. OS and RAM requirements are unspecified; an NVIDIA GPU with CUDA 10.1 is required (VRAM unspecified).
- **Dependencies:** Python 3.8 · torch==1.8.0 · torchvision==0.9.0 · cudatoolkit=10.1
- **GitHub topics:** lane-detection, scnn, resa, ufld, laneatt, culane, lane-line-detection, deep-learning, lane-detection-toolbox, tusimple, conditional-lane-detection

## Releases

- **1.0** (2021-05-08): Upload models, including SCNN (resnet18, resnet50), RESA (resnet18, resnet34, resnet50), UFLD (resnet18), LaneATT (resnet18, resnet34, mobilenetv2), CondLane (resnet101).

## FAQ

**Q: How do I fix the circular-import error `ImportError: cannot import name 'nms_impl'`?**
A: This is usually caused by an unavailable CUDA driver, incompatible versions, or uncompiled extensions. Fixes: 1. Build the project exactly as the README describes. 2. Try commenting out the initialization code in `lanedet/ops/__init__.py` (e.g. `from .nms import nms` and `__all__ = ['nms']`). 3. Make sure torch and the related CUDA libraries are compatible (especially on Linux or inside a virtual environment). (https://github.com/Turoad/lanedet/issues/26)

**Q: How do I change the input image size (e.g. FHD) in a CondLane config?**
A: Several parameters must be changed together to match the new size: `ori_img_h`, `ori_img_w`, `crop_bbox`, `sample_y`. For 1920x1080, set `ori_img_h=1080`, `ori_img_w=1920`, `crop_bbox=[0, 540, 1920, 1080]`, `sample_y=range(1080, 540, -8)`. Also adjust `pos_shape` in the `aggregator`, roughly `(batch_size, img_height/32, img_width/32)`. If that proves difficult, resize the images to the default size (1640, 590) instead. (https://github.com/Turoad/lanedet/issues/49)

**Q: How do I configure a custom image size for the RESA model?**
A: Adjust `img_height`, `img_width`, `cut_height`, `ori_img_h`, and `ori_img_w` in the config to match your images. The CULane defaults are `img_height=288`, `img_width=800`, `cut_height=240`, `ori_img_h=590`, `ori_img_w=1640`. Keep the crop region and original size consistent with the input to avoid dimension-mismatch errors. (https://github.com/Turoad/lanedet/issues/57)

**Q: Where is the inference code? Is MobileNet supported?**
A: The inference script is `tools/detect.py`. MobileNetV2 support has been added; see the corresponding config files (structured like `configs/laneatt/resnet18_culane.py`). Validation does not need batch labels; just run it. (https://github.com/Turoad/lanedet/issues/2)

**Q: Why are my inference results poor?**
A: First check that you are using the matching pretrained weights and config file. The sparse lane points (defined at specific heights) are usually sufficient for inference; interpolate if you need dense points. Simplifying the code by removing non-essential parts helps in understanding model behavior. The main dependency is mmcv, not mmdetection; trimming dependencies may help troubleshooting. (https://github.com/Turoad/lanedet/issues/17)

**Q: How can I do real-time lane detection and curvature estimation?**
A: Use a deep model such as LaneATT (anchor-based) or RESA (segmentation-based). With UFLD, curvature can be computed in post-processing. Alternatively, use classical vision: warp the detected lane lines into a bird's-eye view (BEV) with a perspective transform and compute curvature there, as sketched below. (https://github.com/Turoad/lanedet/issues/43)
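A minimal sketch of the BEV approach from that last answer, assuming you already have lane points in image coordinates and a hand-picked perspective mapping; the source/destination corners, lane points, and pixel-to-meter scales below are placeholders:

```python
import cv2
import numpy as np

# Warp lane points into a bird's-eye view, fit x = A*y^2 + B*y + C,
# and evaluate the curvature radius R = (1 + (2Ay + B)^2)^1.5 / |2A|.
src = np.float32([[550, 450], [730, 450], [1280, 720], [0, 720]])  # placeholder road corners
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])     # rectangle in BEV
M = cv2.getPerspectiveTransform(src, dst)

lane = np.float32([[[640, 460]], [[610, 560]], [[580, 660]]])      # placeholder lane points (x, y)
bev = cv2.perspectiveTransform(lane, M).reshape(-1, 2)

ym, xm = 30 / 720, 3.7 / 680       # placeholder meters-per-pixel scales
A, B, _ = np.polyfit(bev[:, 1] * ym, bev[:, 0] * xm, 2)            # fit in metric BEV space
y_eval = bev[:, 1].max() * ym                                      # curvature near the vehicle
radius = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
print(f"curvature radius ~ {radius:.1f} m")
```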