[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-l3p-cv--lost":3,"tool-l3p-cv--lost":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":78,"languages":79,"stars":112,"forks":113,"last_commit_at":114,"license":115,"difficulty_score":23,"env_os":116,"env_gpu":117,"env_ram":117,"env_deps":118,"category_tags":123,"github_topics":124,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":134,"updated_at":135,"faqs":136,"releases":162},1350,"l3p-cv\u002Flost","lost","Label Objects and Save Time (LOST) - Design your own smart Image Annotation process in a web-based environment.","LOST 是一款开源的在线图像标注平台，名字里的“Label Objects and Save Time”已经点明主旨：帮你在浏览器里快速、协作地完成图片打标签。它自带画框、画多边形、打点、画线等常用标注界面，也支持一次性给整组图片打标签，并能把结果一键导出成训练集。  \n最贴心的是，LOST 把“半自动”做成标配：你可以接入自己的 AI 模型，让它先生成候选框，再由人工快速确认或微调，标注效率瞬间翻倍。  \n如果你不想写代码，直接选一条现成的流水线就能开工；如果想深度定制，也能把不同标注工具、算法、外部存储（S3、Azure Blob 等）自由串成自己的工作流，并通过 Docker 在本地或云端一键部署。  \n因此，无论是需要大量训练数据的 AI 研究员、做数据标注外包的团队，还是只想给相册图片分类的普通用户，都能在 LOST 里找到适合自己的方式。","# LOST - Label Objects and Save Time\n\n[![pipeline status](https:\u002F\u002Fgitlab.com\u002Fl3p-cv\u002Flost\u002Fbadges\u002Fmaster\u002Fpipeline.svg)](https:\u002F\u002Fgitlab.com\u002Fl3p-cv\u002Flost\u002Fpipelines)\n[![Documentation 
Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_6bf48b3e9a6d.png)](https:\u002F\u002Flost.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n## Description\n\nLOST (Label Objects and Save Time) is a **flexible** **web-based** framework\nfor **simple collaborative** image annotation.\nIt provides multiple annotation interfaces for fast image annotation.\n\nLOST offers a set of **out of the box annotation pipelines** to instantly annotate images without programming knowledge.\n\nNevertheless, LOST is **flexible**, since it allows you to run user-defined annotation\npipelines where different\nannotation interfaces\u002Ftools and algorithms can be combined in one process.\n\nThe application is highly scalable and offers, for example, easy-to-set-up connectivity to **external file systems**, such as S3 Bucket or Azure Blobstorage via the user interface.\n\nIt is **web-based** since the whole annotation process is visualized in\nyour browser.\nYou can quickly set up LOST with docker on your local machine or run it\non a web server to make an annotation process available to your\nannotators around the world.\nLOST allows you to organize label trees, to monitor the state of an\nannotation process and to do annotations inside the browser.\n\nLOST was especially designed to model **semi-automatic** annotation\npipelines to speed up the annotation process.\nSuch a semi-automatic pipeline can be achieved by using AI-generated annotation\nproposals that are presented to an annotator inside the annotation tool.\n\n### Key Features\n\n- :earth_americas: Collaborative annotation - distribute your annotation tasks around the world\n- :rocket: Out of the box annotation pipelines\n  - Annotate bboxes, polygons, points or lines with the Single Image Annotation Tool (SIA)\n  - Annotate whole image clusters with the Multi Image Annotation Tool (MIA)\n  - Export your datasets\n- :open_file_folder: Connect external file systems, such as AWS S3 bucket, MS Azure 
blobstorage or FTP server\n- :inbox_tray: Instant annotation export allows you to access all annotations at any time\n- :chart_with_upwards_trend: Personal and project based annotation statistics\n- :label: Organize your labels with colored label trees\n- :repeat: Review your annotations\n\n### Additional Features\n\n- :pill: Customized annotation pipelines\n  - Import and export your pipeline projects\n  - Share your pipeline projects with colleagues\n- :orange_book: Jupyter-Lab integration for easy pipeline development\n- :dancers: LDAP integration\n- :e-mail: E-Mail notifications\n- :cloud: Scalable design - distribute intensive computing processes across multiple machines\n\n## Getting Started\n\n### Documentation\n\nA lot of new features have been added and improvements have been made compared to version 1 (see [Changelog](.\u002FCHANGELOG.md)).\nThe adaptation of the documentation is currently still in progress.\n\nIf you feel LOST, please find our full documentation here: [https:\u002F\u002Flost.readthedocs.io](https:\u002F\u002Flost.readthedocs.io).\n\n### LOST 3.x QuickSetup\n\nLOST releases are hosted on DockerHub and shipped in Containers. For a quick setup perform the following steps (these steps have been tested for Ubuntu):\n\n1. Install docker on your machine or server:\n    [https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F](https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F)\n2. Clone LOST:\n\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost.git\n    ```\n\n3. Install the *cryptography* package in your python environment:\n\n    ```bash\n    pip install cryptography\n    ```\n\n4. Run quick_setup script:\n\n    ```bash\n    python3 quick_setup.py \u002Fpath\u002Fto\u002Finstall\u002Flost --release 3.1.0\n    ```\n\n5. 
Run LOST:  \n    Follow instructions of the quick_setup script, printed in the command line.\n\n\u003C!-- ## I want to annotate now !\nA detailed step by step guide is provided here:  [Start your first Pipeline ](.\u002Fdocs\u002FGettingStartedFirstPipeline.md) -->\n\n## Roadmap\n\nSee our [Roadmap](https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost\u002Fmilestone\u002F1)\n\n## Citing LOST\n\n```bibtex\n@article{jaeger2019lost,\n    title={{LOST}: A flexible framework for semi-automatic image annotation},\n    author={Jonas J\\\"ager and Gereon Reus and Joachim Denzler and Viviane Wolff and Klaus Fricke-Neuderth},\n    year={2019},\n    Journal = {arXiv preprint arXiv:1910.07486},\n    eprint={1910.07486},\n    archivePrefix={arXiv},\n    primaryClass={cs.CV}\n}\n```\n\nFind our paper on [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.07486)\n\n## Projects using LOST\n\n- PlantVillage @ Pennsylvania State University is using LOST to build [a tool that is positively impacting millions of lives in Kenya with the Desert Locust Crisis](https:\u002F\u002Fnews.psu.edu\u002Fstory\u002F609265\u002F2020\u002F02\u002F21\u002Fresearch\u002Fpenn-state-responds-app-aids-un-efforts-control-africas-locust). 
See this [blog article](https:\u002F\u002Fplantvillage.psu.edu\u002Fblogposts\u002F97-getting-lost-can-be-good) by Annalyse Kehs to get more information how LOST is utilized in the project.\n\nIf you are using LOST and like to share your project, please contact [@jaeger-j](https:\u002F\u002Fgithub.com\u002Fjaeger-j).\n\n## Institutions\n\n| L3bm GmbH | CVG University Jena | Hochschule Fulda |\n|--|--|--|\n|[![L3bm GmbH](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_950ee8411ea4.png)](https:\u002F\u002Fl3bm.com\u002F) | [![CVG Uni Jena](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_500ddfd58420.png)](https:\u002F\u002Fwww.inf-cv.uni-jena.de\u002F) | [![Hochschule Fulda](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_888d99b24f52.png)](https:\u002F\u002Fwww.hs-fulda.de\u002Felektrotechnik-und-informationstechnik\u002F)\n","# LOST - 标注对象，节省时间\n\n[![流水线状态](https:\u002F\u002Fgitlab.com\u002Fl3p-cv\u002Flost\u002Fbadges\u002Fmaster\u002Fpipeline.svg)](https:\u002F\u002Fgitlab.com\u002Fl3p-cv\u002Flost\u002Fpipelines)\n[![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_6bf48b3e9a6d.png)](https:\u002F\u002Flost.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n## 描述\n\nLOST（Label Object and Save Time）是一个**灵活的**、**基于Web**的框架，用于**简单的协作式**图像标注。\n它提供多种标注界面，以实现快速的图像标注。\n\nLOST提供了一套**开箱即用的标注流水线**，无需编程知识即可立即对图像进行标注。\n\n然而，LOST同样具有**灵活性**，因为它允许运行用户自定义的标注流水线，在其中可以将不同的标注界面、工具和算法组合成一个流程。\n\n该应用程序具有高度可扩展性，例如，通过用户界面即可轻松设置与**外部文件系统**的连接，如S3存储桶或Azure Blob存储。\n\n它是**基于Web**的，因为整个标注过程都在您的浏览器中可视化呈现。\n您可以在本地机器上使用Docker快速搭建LOST，也可以将其部署在Web服务器上，以便让全球的标注人员都能参与标注工作。\nLOST支持组织标签树、监控标注进程的状态，并直接在浏览器中完成标注。\n\nLOST专为构建**半自动**标注流水线而设计，以加速标注过程。\n这种半自动标注可以通过使用AI生成的标注建议来实现，这些建议会在标注工具中呈现给标注员。\n\n### 主要功能\n\n- :earth_americas: 协作式标注——将您的标注任务分布到世界各地\n- :rocket: 开箱即用的标注流水线\n  - 使用单张图像标注工具（SIA）标注边界框、多边形、点或线\n  - 使用多张图像标注工具（MIA）标注整组图像\n  - 导出您的数据集\n- :open_file_folder: 连接外部文件系统，如AWS S3存储桶、MS 
Azure Blob存储或FTP服务器\n- :inbox_tray: 即时标注导出，让您随时访问所有标注结果\n- :chart_with_upwards_trend: 基于个人和项目的标注统计\n- :label: 使用彩色标签树组织您的标签\n- :repeat: 审核您的标注\n\n### 其他功能\n\n- :pill: 自定义标注流水线\n  - 导入和导出您的流水线项目\n  - 与同事共享您的流水线项目\n- :orange_book: Jupyter Lab集成，便于流水线开发\n- :dancers: LDAP集成\n- :e-mail: 邮件通知\n- :cloud: 可扩展设计——将密集型计算任务分散到多台机器上\n\n## 快速入门\n\n### 文档\n\n与1.0版本相比，本次更新添加了许多新功能并进行了多项改进（详见[更新日志](.\u002FCHANGELOG.md)）。\n目前文档的适配工作仍在进行中。\n\n如果您对LOST感兴趣，请在此处查阅我们的完整文档：[https:\u002F\u002Flost.readthedocs.io](https:\u002F\u002Flost.readthedocs.io)。\n\n### LOST 3.x 快速搭建\n\nLOST的发布版本托管在DockerHub上，并以容器形式交付。要快速搭建，请执行以下步骤（这些步骤已在Ubuntu上测试过）：\n\n1. 在您的机器或服务器上安装Docker：\n    [https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F](https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F)\n2. 克隆LOST仓库：\n\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost.git\n    ```\n\n3. 在您的Python环境中安装*cryptography*包：\n\n    ```bash\n    pip install cryptography\n    ```\n\n4. 运行quick_setup脚本：\n\n    ```bash\n    python3 quick_setup.py \u002Fpath\u002Fto\u002Finstall\u002Flost --release 3.1.0\n    ```\n\n5. 
启动LOST：\n    按照quick_setup脚本在命令行中给出的指示操作。\n\n\u003C!-- ## 我现在就想开始标注！\n这里提供了详细的分步指南：[启动您的第一个流水线](.\u002Fdocs\u002FGettingStartedFirstPipeline.md) -->\n\n## 路线图\n\n请参阅我们的[路线图](https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost\u002Fmilestone\u002F1)\n\n## 引用LOST\n\n```bibtex\n@article{jaeger2019lost,\n    title={{LOST}: A flexible framework for semi-automatic image annotation},\n    author={Jonas J\\\"ager and Gereon Reus and Joachim Denzler and Viviane Wolff and Klaus Fricke-Neuderth},\n    year={2019},\n    Journal = {arXiv preprint arXiv:1910.07486},\n    eprint={1910.07486},\n    archivePrefix={arXiv},\n    primaryClass={cs.CV}\n}\n```\n\n请在[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.07486)上查看我们的论文。\n\n## 使用LOST的项目\n\n- 宾夕法尼亚州立大学的PlantVillage正在使用LOST构建一款工具，该工具正积极影响着肯尼亚数百万民众的生活，以应对沙漠蝗虫危机（[https:\u002F\u002Fnews.psu.edu\u002Fstory\u002F609265\u002F2020\u002F02\u002F21\u002Fresearch\u002Fpenn-state-responds-app-aids-un-efforts-control-africas-locust](https:\u002F\u002Fnews.psu.edu\u002Fstory\u002F609265\u002F2020\u002F02\u002F21\u002Fresearch\u002Fpenn-state-responds-app-aids-un-efforts-control-africas-locust)）。请参阅Annalyse Kehs撰写的这篇[博客文章](https:\u002F\u002Fplantvillage.psu.edu\u002Fblogposts\u002F97-getting-lost-can-be-good)，了解更多关于LOST在该项目中的应用情况。\n\n如果您正在使用LOST并希望分享您的项目，请联系[@jaeger-j](https:\u002F\u002Fgithub.com\u002Fjaeger-j)。\n\n## 机构\n\n| L3bm GmbH | CVG耶拿大学 | 富尔达应用科学大学 |\n|--|--|--|\n|[![L3bm GmbH](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_950ee8411ea4.png)](https:\u002F\u002Fl3bm.com\u002F) | [![CVG耶拿大学](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_500ddfd58420.png)](https:\u002F\u002Fwww.inf-cv.uni-jena.de\u002F) | [![富尔达应用科学大学](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_readme_888d99b24f52.png)](https:\u002F\u002Fwww.hs-fulda.de\u002Felektrotechnik-und-informationstechnik\u002F)","# LOST 快速上手指南（3.x）\n\n## 环境准备\n\n- 操作系统：Ubuntu（官方测试通过）  \n- 前置依赖：Docker、Python3、pip\n\n```bash\n# 安装 Docker（如已安装可跳过）\ncurl -fsSL 
https:\u002F\u002Fget.docker.com | bash -s docker --mirror Aliyun\n```\n\n## 安装步骤\n\n1. 克隆代码  \n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost.git\n   cd lost\n   ```\n\n2. 安装 Python 依赖  \n   ```bash\n   pip install cryptography\n   ```\n\n3. 一键初始化  \n   ```bash\n   python3 quick_setup.py \u002Fopt\u002Flost --release 3.1.0\n   ```\n\n4. 按终端提示启动 LOST  \n   ```bash\n   # 示例输出\n   cd \u002Fopt\u002Flost\n   docker-compose up -d\n   ```\n\n浏览器访问 `http:\u002F\u002F\u003C服务器IP>:8080`，默认账号 `admin \u002F admin`。\n\n## 基本使用\n\n1. 登录后点击 **“Create Pipeline”**  \n2. 选择 **“SIA”**（单图标注）或 **“MIA”**（多图标注）  \n3. 上传图片 → 选择标签 → 开始标注  \n4. 标注完成后点击 **“Export”** 导出 COCO \u002F Pascal VOC 格式\n\n> 首次使用建议直接体验内置示例：  \n> 进入 **“Pipelines” → “Examples” → “SIA Example” → “Run”**","一家做智慧农业的初创公司，需要在 2 周内为 5 万张无人机拍摄的稻田图像标注稻穗、杂草、虫害三类目标，以训练病虫害检测模型。\n\n### 没有 lost 时\n- 3 名标注员各自用 LabelImg 单机作业，图片来回拷贝，版本混乱，平均每人每天只能标 300 张  \n- 标注规则靠微信群口头同步，颜色、框粗细不统一，返工率高达 20 %  \n- 图片先下载到本地硬盘，再分批上传云端训练机，来回传输占满带宽，夜里常被运维叫醒  \n- 没有统计面板，项目经理只能靠 Excel 手动汇总进度，永远不知道“今天到底还差多少”  \n- 想先用 YOLOv5 跑一遍预标注，再把结果给人修，结果脚本、接口、权限一堆坑，3 天过去还没跑通\n\n### 使用 lost 后\n- 一键把 S3 桶里的 5 万张图挂到 lost，3 名标注员浏览器里直接开工，实时协同，日产量飙到 1200 张  \n- 在 lost 里建好“稻穗-绿色、杂草-红色、虫害-黄色”标签树，颜色、线宽自动锁定，返工率降到 2 %  \n- 外存直连，标注结果实时写回 S3，训练机直接读取，省掉下载-上传 6 小时  \n- 项目仪表盘实时显示“已标 62 %，剩余 1.9 万张”，项目经理把刷新频率从 1 小时改成 5 分钟  \n- 用 lost 的半自动管线：YOLOv5 预标注 → 人工微调 → 一键导出 COCO，整个流程 30 分钟搞定，提前 4 天交付\n\nlost 让 3 个人的小团队像 10 个人一样高效，把 2 周苦差事变成 1 
周轻松活。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fl3p-cv_lost_dfdd55fa.png","l3p-cv","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fl3p-cv_6e83b54a.png",null,"https:\u002F\u002Fgithub.com\u002Fl3p-cv",[80,84,88,92,96,100,104,108],{"name":81,"color":82,"percentage":83},"Python","#3572A5",51.9,{"name":85,"color":86,"percentage":87},"TypeScript","#3178c6",26.7,{"name":89,"color":90,"percentage":91},"JavaScript","#f1e05a",17,{"name":93,"color":94,"percentage":95},"HTML","#e34c26",3.3,{"name":97,"color":98,"percentage":99},"Shell","#89e051",0.5,{"name":101,"color":102,"percentage":103},"SCSS","#c6538c",0.4,{"name":105,"color":106,"percentage":107},"CSS","#663399",0.2,{"name":109,"color":110,"percentage":111},"Dockerfile","#384d54",0,576,79,"2026-04-03T04:42:32","MIT","Linux","未说明",{"notes":119,"python":120,"dependencies":121},"官方仅提供 Ubuntu 下的快速安装脚本；需先安装 Docker，并通过 Docker 容器运行；首次运行需执行 quick_setup.py 脚本完成初始化配置","3.x",[122],"cryptography",[14,51,13],[125,126,127,128,129,130,131,132,133],"annotation-tool","computer-vision","annotation-framework","machine-learning","machine-vision","bounding-boxes","polygon-annotations","annotation-process","image-annotation","2026-03-27T02:49:30.150509","2026-04-06T06:46:09.680974",[137,142,147,152,157],{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},6171,"为什么挂载的图像目录在创建流水线时能识别，但标注时却显示 0 张图片？","这是 LOST 1.x 的已知限制：挂载目录在流水线创建阶段能被识别，但执行阶段无法读取其中的图片。官方已在 2.x 分支通过新的文件系统抽象（基于 fsspec 库）解决，支持 s3fs、本地挂载等多种文件系统。升级至 2.x 或等待正式版即可解决。","https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost\u002Fissues\u002F110",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},6172,"在 Chrome 中标注时，已画好的多边形突然丢失，只剩一半或全部消失，怎么办？","这是早期版本的随机 Bug，官方已修复。请确认 LOST 版本：打开 `docker\u002F.env` 查看 `LOST_VERSION`，建议升级到 ≥1.2.0。若仍出现，可尝试换浏览器或清除缓存后重试。","https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost\u002Fissues\u002F51",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},6173,"SIA 界面里只能画矩形，找不到多边形、线条或点工具，如何开启？","在 SIA 
界面左侧工具栏点击“工具切换”图标（或按快捷键 T）即可展开全部工具：多边形、折线、点、画笔等。如果仍只显示矩形，请确认已升级到 ≥1.0.0-alpha0，新版 SIA 已用 React 重写并默认支持这些工具。","https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost\u002Fissues\u002F43",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},6174,"数据集还在不断增加，如何在不完成全部标注的情况下导出已标注的部分？","临时方案：使用官方提供的「即时导出流水线」`backend\u002Flost\u002Fpyapi\u002Fexamples\u002Fpipes\u002Finstant_export`，它会立即导出当前已标注的数据。LOST 2.x 将在界面中增加“即时导出”按钮，无需再跑完整流水线即可下载。","https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost\u002Fissues\u002F41",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},6175,"如何强制让流水线在导出前必须经过人工审核（Review）？","官方建议：在最后一个标注任务后手动暂停（不要点“完成”），等审核完成后再补标最后一张图并结束任务。也可通过脚本实现：在两个标注任务之间插入脚本节点，把第一次标注结果重新作为输入，第二次标注即充当审核，最后再导出。","https:\u002F\u002Fgithub.com\u002Fl3p-cv\u002Flost\u002Fissues\u002F94",[163,168],{"id":164,"version":165,"summary_zh":166,"released_at":167},105709,"3.1.0","## [3.1.0] - 2026-03-23\r\n### Added\r\n- Added SIA time travel (undo\u002Fredo changes using ctrl + z\u002Fy)\r\n- BaseModal now has backdropOption (which can be true, false or 'static'); used 'static' for EditInstruction\r\n- Groups and Roles viewable under \"My Profile\"\r\n- AnnotationTop component to use for each SIA\u002FMIA\r\n- BaseModal now has \"backdrop\" option\r\n- Added spinner to CoreDataTable, for when data is still loading (and gave it isLoading from all of its usages)\r\n- Added InfoText component, to streamline tooltips and texts\r\n- Option overrideDisabledColor added to CoreIconButton\r\n- BaseModal has onShow() option now\r\n- Queries for MIA (replacing redux)\r\n- New values \"anno_update_id\" and \"anno_chunk_id\" to annotations\r\n- \"Go to first\u002Flatest\" buttons added to MIA UI\r\n- Loading-spinner when changing SIA images\r\n- Added \"inverse\" argument to CoreIconButton\r\n- Added \"permanent reverse\" button to MIA\r\n- Loading feedback when applying filters to SIA images\r\n### Changed\r\n- Updated React to 19.2.1 (major update from v18)\r\n- Switched chonky 
filebrowser to chonky2 (chonky is not maintained anymore)\r\n- Reworked authentication + inactivity warning\r\n- Rewrote ImageFilterButton (to use CButton + CTooltip)\r\n- Rewrote image Search in SIA Toolbar (additional button, to end the search-mode)\r\n- Rewrote useAnnotask() in anno_task_api (kept old implementation as useAnnotaskOld())\r\n- Replaced DataTable component with CoreDataTable\r\n- Reworked Review Image-Search UI (table + buttons)\r\n- Reworked UI of Dataset exports\r\n- Began replacement of old Infobutton component\r\n- Began conversion of .jsx-Files to .tsx-Files\r\n- Used component BaseModal whenever applicable + converted to .tsx\r\n- Rewrote pipeline modals (UI streamline) and changed them to typescript\r\n- Replaced component Loading with CenteredSpinner\r\n- Reworked architecture of MIA components (so that they use queries, not redux)\r\n- Query useGetCurentAnnotask now uses 'currentannotask' instead of 'getcurrentannotask' as queryname\r\n- Converted MIA-Components to .tsx\r\n- Reworked MIA navigation (backend and frontend) with new db-values (path 0.5.0)\r\n- Moved and renamed dataset_review_api, anno_task_api and mia_api to new api directory\r\n- Renamed second \"BaseModal\" to \"PipeElementBaseModal\" for more clarity (BaseModal already exists)\r\n- Replaced all async (dispatch) methods with equivalent not interacting with Redux\r\n- Unfinished Annotask\u002FDataset exports now deletable (to prevent other errors)\r\n- Overhaul of MIA UI regarding inactive images\r\n- Made time frame for authentication reset longer\r\n### Fixed\r\n- Fixed using alternative pagesizes for whole-data CoreDataTable\r\n- Fixed lingering bug when saving\u002Fupdating userdata (EditUserModal)\r\n- Actually show intended Icons in filter of \"my annotation tasks\"\r\n- Fixed border colors of CoreDataTable\r\n- Added missing import of \"delete_ds_export\" to endpoint.py\r\n- Fix numpy 2.x incompatibility in inout.py\r\n- No longer showing \"undefined\" when changing 
SIA-Images\r\n- Fixed error when creating new nodes\r\n- Reactivated SIA filter menu\r\n- Bilateral filter settings stay saved after closing menu\r\n- Fixed minor chunk_id\u002Fupdate_id errors\r\n### Removed\r\n- Removed WorkingOnMIA, WorkingOnSIA - both replaced by AnnotationTop\r\n- Removed old DataTable component and everything importing from outdated 'react-table'\r\n- Removed unused components NewDataTable, SimpleTable\r\n- Removed now unused components IconButton, Progress, Helpbutton, Loading\r\n- Removed unused \"MiaImage.js\" (NewMIAImage has replaced it for a long time now)\r\n- Removed old MIA-API\r\n- Removed libs\u002Fhist.js (was only used in MIA; lost-sia package has own copy of file)\r\n- Deleted unused SiaReviewComponent\r\n- Removed react-redux package and all reducers\r\n- Removed SiaReview and annotask actions\u002Fdirectory","2026-03-23T12:31:21",{"id":169,"version":170,"summary_zh":171,"released_at":172},105710,"3.1.0-alpha.2","Numpy 2x pipeline fix","2026-02-06T14:04:41"]