[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-av--harbor":3,"tool-av--harbor":62},[4,18,28,36,45,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":24,"last_commit_at":25,"category_tags":26,"status":17},9989,"n8n","n8n-io\u002Fn8n","n8n 是一款面向技术团队的公平代码（fair-code）工作流自动化平台，旨在让用户在享受低代码快速构建便利的同时，保留编写自定义代码的灵活性。它主要解决了传统自动化工具要么过于封闭难以扩展、要么完全依赖手写代码效率低下的痛点，帮助用户轻松连接 400 多种应用与服务，实现复杂业务流程的自动化。\n\nn8n 特别适合开发者、工程师以及具备一定技术背景的业务人员使用。其核心亮点在于“按需编码”：既可以通过直观的可视化界面拖拽节点搭建流程，也能随时插入 JavaScript 或 Python 代码、调用 npm 包来处理复杂逻辑。此外，n8n 原生集成了基于 LangChain 的 AI 能力，支持用户利用自有数据和模型构建智能体工作流。在部署方面，n8n 提供极高的自由度，支持完全自托管以保障数据隐私和控制权，也提供云端服务选项。凭借活跃的社区生态和数百个现成模板，n8n 让构建强大且可控的自动化系统变得简单高效。",184740,2,"2026-04-19T23:22:26",[16,14,13,15,27],"插件",{"id":29,"name":30,"github_repo":31,"description_zh":32,"stars":33,"difficulty_score":10,"last_commit_at":34,"category_tags":35,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 
绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":24,"last_commit_at":42,"category_tags":43,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",161147,"2026-04-19T23:31:47",[14,13,44],"语言模型",{"id":46,"name":47,"github_repo":48,"description_zh":49,"stars":50,"difficulty_score":51,"last_commit_at":52,"category_tags":53,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 
都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,27],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":24,"last_commit_at":60,"category_tags":61,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":76,"owner_location":77,"owner_email":76,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":120,"forks":121,"last_commit_at":122,"license":123,"difficulty_score":10,"env_os":124,"env_gpu":125,"env_ram":126,"env_deps":127,"category_tags":133,"github_topics":134,"view_count":24,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":154,"updated_at":155,"faqs":156,"releases":186},9931,"av\u002Fharbor","harbor","One command brings a complete pre-wired LLM stack with hundreds of services to explore.","Harbor 是一款旨在简化大语言模型（LLM）部署的开源工具，它让用户只需执行一条命令，即可在本地搭建起一个功能完备且预配置好的 LLM 技术栈。面对当前 AI 生态中服务繁多、环境配置复杂、依赖关系难以处理的痛点，Harbor 通过集成数百种可探索的服务组件，将原本繁琐的安装调试过程极大简化，帮助用户快速从“配置环境”转向“实际探索”。\n\n这款工具特别适合开发者、AI 研究人员以及希望深入体验本地大模型生态的技术爱好者使用。无论是想要测试不同模型的表现，还是构建复杂的 AI 应用原型，Harbor 都能提供开箱即用的支持。其核心亮点在于高度集成的架构设计，基于容器化技术自动编排各类服务，确保系统稳定运行的同时，保留了极高的灵活性与扩展性。用户无需具备深厚的运维背景，也能轻松管理包括模型推理、向量数据库、前端界面在内的全套设施。对于渴望在私有环境中安全、高效地探索人工智能潜力的用户而言，Harbor 
提供了一个可靠且便捷的起点。","\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F8a7705e1-6f0e-4374-8784-62b95816aebc\n\n\n\n[![GitHub Tag](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Ftag\u002Fav\u002Fharbor)](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Freleases)\n![GitHub repo size](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frepo-size\u002Fav\u002Fharbor)\n![GitHub repo file or directory count](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdirectory-file-count\u002Fav\u002Fharbor?type=file&extension=yml&label=compose%20files&color=orange)\n![GitHub language count](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flanguages\u002Fcount\u002Fav\u002Fharbor)\n[![Visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_6f89ae43c086.png)](https:\u002F\u002Fvisitorbadge.io\u002Fstatus?path=av%2Fharbor)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Harbor-blue?logo=discord&logoColor=white)](https:\u002F\u002Fdiscord.gg\u002F8nDRphrhSF)\n![Harbor 
Ko-fi](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FKo--fi-white?style=social&logo=kofi)\n\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-claude-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?ask=claude#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjYQUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeKObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-chatgpt-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?ask=chatgpt#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjYQUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk
0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeKObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-perplexity-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?ask=perplexity#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjYQUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeKObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-Harbor-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?failmsg=It%20appears%20that%20you%20do%20not%20have%20Harbor%20installed%2C%20or%20Open%20WebUI%20is%20not%20running%20on%20default%20port&redirect=http%3A%2F%2Flocalhost%3A33801%3Fq%3D__TEXT__#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjY
QUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeKObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n\n\nSetup your local LLM stack effortlessly.\n\n```bash\n# Starts fully configured Open WebUI and Ollama\nharbor up\n\n# Now, Open WebUI can do Web RAG and TTS\u002FSTT\nharbor up searxng speaches\n```\n\nHarbor is a CLI and companion app that lets you spin up a complete local LLM stack—backends like Ollama, llama.cpp, or vLLM, frontends like Open WebUI, plus supporting services like SearXNG for web search, Speaches for voice chat, and ComfyUI for image generation—all pre-wired to work together with a single `harbor up` command. 
No manual setup: just pick the services you want and Harbor handles the Docker Compose orchestration, configuration, and cross-service connectivity so you can focus on actually using your models.\n\n![Screenshot of Harbor CLI and App together](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_1445c2b988d7.png)\n\n## News\n\n- **v0.4.10** - Fixed SearXNG JSON format and workspace config, fixed CDI seed script skipping build variants\n- **v0.4.9** - Two new Boost modules (`analogical`, `deaf`), Open WebUI native function calling enabled by default, CLI validation fixes\n- **v0.4.8** - Solo CLI and ROS MCP Server services, MiniMax cloud provider, extensive documentation updates\n- **v0.4.7** - Hermes Agent service, llama.cpp build-from-source support, updated llama.cpp docs\n- **v0.4.6** - SillyTavern service, fixed llama.cpp cache paths, improved Jupyter workspace\n- **v0.4.5** - Harbor App: built-in terminal, service logs, model management\n- **v0.4.4** - Harbor integration test suite with mock OpenAI, enhanced service management UI\n\n## Documentation\n\n- [Installing Harbor](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F1.0.-Installing-Harbor)\u003Cbr\u002F>\n  Guides to install Harbor CLI and App\n- [Harbor User Guide](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F1.-Harbor-User-Guide)\u003Cbr\u002F>\n  High-level overview of working with Harbor\n- [Harbor App](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F1.1-Harbor-App)\u003Cbr\u002F>\n  Overview and manual for the Harbor companion application\n- [Harbor Services](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.-Services)\u003Cbr\u002F>\n  Catalog of services available in Harbor\n- [Harbor CLI Reference](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F3.-Harbor-CLI-Reference)\u003Cbr\u002F>\n  Read more about Harbor CLI commands and options.\n  Read about supported services and the ways to configure 
them.\n- [Join our Discord](https:\u002F\u002Fdiscord.gg\u002F8nDRphrhSF)\u003Cbr\u002F>\n  Get help, share your experience, and contribute to the project.\n\n### Maintainers: regenerate docs\n\nRun the docs workflow from the Harbor repo root with:\n\n```bash\nharbor dev docs\n```\n\nFresh-maintainer prerequisites:\n\n- Check out the wiki repo as a sibling directory at `..\u002Fharbor.wiki`, because the docs script copies the generated wiki pages there.\n\n  ```bash\n  git clone https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor.wiki.git ..\u002Fharbor.wiki\n  ```\n\n- Use the Harbor CLI from this checkout. If `harbor` is not already on your `PATH`, either run the repo-local entrypoint directly:\n\n  ```bash\n  .\u002Fharbor.sh dev docs\n  ```\n\n  or link the checkout first and then use the maintainer command above:\n\n  ```bash\n  .\u002Fharbor.sh link\n  harbor dev docs\n  ```\n\n- Docker Engine with `docker compose` must be working before you regenerate docs. The docs script shells out to `harbor run boost uv run ...` to rebuild the Boost-generated pages, so Docker is required even when Harbor falls back to a containerized Deno runtime.\n\nThis workflow updates `docs\u002F`, syncs the sibling wiki checkout, refreshes the app docs copy, and rewrites the generated package READMEs.\n\n## What can Harbor do?\n\n![Diagram outlining Harbor's service structure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_8e530a4d657d.png)\n\n\n#### ✦ Local LLMs\n\nRun LLMs and related services locally, with no or minimal configuration, typically in a single command or click.\n\n```bash\n# All backends are pre-connected to Open WebUI\nharbor up ollama\nharbor up llamacpp\nharbor up vllm\n\n# Set and remember args for llama.cpp\nharbor llamacpp args -ngl 32\n```\n\n####  Cutting Edge Inference\n\nHarbor supports most of the major inference engines as well as a few of the lesser-known ones.\n\n```bash\n# We sincerely hope you'll never try to run all of them at 
once\nharbor up vllm llamacpp tgi litellm tabbyapi aphrodite sglang ktransformers mistralrs airllm\n```\n\n#### Tool Use\n\nEnjoy the benefits of the MCP ecosystem and extend it to your use-cases.\n\n```bash\n# Manage MCPs with a convenient Web UI\nharbor up metamcp\n\n# Connect MCPs to Open WebUI\nharbor up metamcp mcpo\n```\n\n#### Generate Images\n\nHarbor includes ComfyUI + Flux + Open WebUI integration.\n\n```bash\n# Use FLUX in Open WebUI in one command\nharbor up comfyui\n```\n\n#### Local Web RAG \u002F Deep Research\n\nHarbor includes [SearXNG](.\u002Fdocs\u002F2.3.1-Satellite&colon-SearXNG.md), which is pre-connected to many services out of the box: [Perplexica](.\u002Fdocs\u002F2.3.2-Satellite&colon-Perplexica.md), [ChatUI](.\u002Fdocs\u002F2.1.4-Frontend&colon-ChatUI.md), [Morphic](.\u002Fdocs\u002F2.3.34-Satellite-Morphic.md), [Local Deep Research](.\u002Fdocs\u002F2.3.45-Satellite-Local-Deep-Research.md) and more.\n\n```bash\n# SearXNG is pre-connected to Open WebUI\nharbor up searxng\n\n# And to many other services\nharbor up searxng chatui\nharbor up searxng morphic\nharbor up searxng perplexica\nharbor up searxng ldr\n```\n\n#### LLM Workflows\n\nHarbor includes multiple services for building LLM-based data and chat workflows: [Dify](.\u002Fdocs\u002F2.3.3-Satellite&colon-Dify.md), [LitLytics](.\u002Fdocs\u002F2.3.21-Satellite&colon-LitLytics.md), [n8n](.\u002Fdocs\u002F2.3.23-Satellite&colon-n8n.md), [Open WebUI Pipelines](.\u002Fdocs\u002F2.3.25-Satellite&colon-Open-WebUI-Pipelines.md), [Flowise](.\u002Fdocs\u002F2.3.31-Satellite&colon-Flowise.md), [LangFlow](.\u002Fdocs\u002F2.3.32-Satellite&colon-LangFlow.md)\n\n```bash\n# Use Dify in Open WebUI\nharbor up dify\n```\n\n#### Talk to your LLM\n\nSet up voice chats with your LLM in a single command. 
Open WebUI + Speaches\n\n```bash\n# Speaches includes OpenAI-compatible STT and TTS\n# and is connected to Open WebUI out of the box\nharbor up speaches\n```\n\n#### Chat from the phone\n\nYou can access Harbor services from your phone with a QR code. Easily get links for local, LAN or Docker access.\n\n```bash\n# Print a QR code to open the service on your phone\nharbor qr\n# Print a link to open the service on your phone\nharbor url webui\n```\n\n#### Chat from anywhere\n\nHarbor includes a [built-in tunneling service](.\u002Fdocs\u002F3.-Harbor-CLI-Reference.md#harbor-tunnel-service) to expose your Harbor to the internet.\n\n> [!WARNING]\n> Be careful exposing your computer to the Internet; it's not safe.\n\n```bash\n# Expose default UI to the internet\nharbor tunnel\n\n# Expose a specific service to the internet\n# ⚠️ Be sure to configure authentication for the service\nharbor tunnel vllm\n\n# Harbor comes with traefik built-in and pre-configured\n# for all included services\nharbor up traefik\n```\n\n#### LLM Scripting\n\n[Harbor Boost](.\u002Fdocs\u002F5.2.-Harbor-Boost.md) allows you to [easily script workflows](.\u002Fdocs\u002F5.2.1.-Harbor-Boost-Custom-Modules.md) and interactions with downstream LLMs.\n\n```bash\n# Use Harbor Boost to script LLM workflows\nharbor up boost\n```\n\n#### Config Profiles\n\nSave and manage configuration profiles for different scenarios. For example, save [llama.cpp](.\u002Fdocs\u002F2.2.2-Backend&colon-llama.cpp.md) args for different models and contexts and switch between them easily.\n\n```bash\n# Save and use config profiles\nharbor profile save llama4\nharbor profile use default\n\n# Import profiles from a URL\nharbor profile use https:\u002F\u002Fexample.com\u002Fpath\u002Fto\u002Fharbor-profile.env\n```\n\n#### Command History\n\nHarbor keeps a [local-only history of recent commands](.\u002Fdocs\u002F3.-Harbor-CLI-Reference.md#harbor-history). 
Look up and re-run easily, standalone from the system shell history.\n\n```bash\n# Lookup recently used harbor commands\nharbor history\n```\n\n#### Eject\n\nReady to move to your own setup? Harbor [will give you](.\u002Fdocs\u002F3.-Harbor-CLI-Reference.md#harbor-eject) a docker-compose file replicating your setup.\n\n```bash\n# Eject from Harbor into a standalone Docker Compose setup\n# Will export related services and variables into a standalone file.\nharbor eject searxng llamacpp > docker-compose.harbor.yml\n```\n\n---\n\n## Services\n\n##### UIs\n[Open WebUI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.1-Frontend:-Open-WebUI) ⦁︎\n[ComfyUI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.2-Frontend:-ComfyUI) ⦁︎\n[LibreChat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.3-Frontend:-LibreChat) ⦁︎\n[HuggingFace ChatUI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.4-Frontend:-ChatUI) ⦁︎\n[Lobe Chat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.5-Frontend:-Lobe-Chat) ⦁︎\n[Hollama](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.6-Frontend:-hollama) ⦁︎\n[parllama](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.7-Frontend:-parllama) ⦁︎\n[BionicGPT](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.8-Frontend:-BionicGPT) ⦁︎\n[AnythingLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.9-Frontend:-AnythingLLM) ⦁︎\n[Chat Nio](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.10-Frontend:-Chat-Nio) ⦁︎\n[mikupad](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.11-Frontend:-Mikupad) ⦁︎\n[oterm](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.12-Frontend-oterm) ⦁︎\n[omnichain](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.16-Satellite:-omnichain) 
⦁︎\n[ol1](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.19-Satellite:-ol1)\n\n##### Backends\n[Ollama](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.1-Backend:-Ollama) ⦁︎\n[llama.cpp](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.2-Backend:-llama.cpp) ⦁︎\n[vLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.3-Backend:-vLLM) ⦁︎\n[TabbyAPI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.4-Backend:-TabbyAPI) ⦁︎\n[Aphrodite Engine](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.5-Backend:-Aphrodite-Engine) ⦁︎\n[mistral.rs](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.6-Backend:-mistral.rs) ⦁︎\n[openedai-speech](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.7-Backend:-openedai-speech) ⦁︎\n[Speaches](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.14-Backend:-Speaches) ⦁︎\n[Parler](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.8-Backend:-Parler) ⦁︎\n[text-generation-inference](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.9-Backend:-text-generation-inference) ⦁︎\n[LMDeploy](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.10-Backend:-lmdeploy) ⦁︎\n[AirLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.11-Backend:-AirLLM) ⦁︎\n[SGLang](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.12-Backend:-SGLang) ⦁︎\n[KTransformers](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.13-Backend:-KTransformers) ⦁︎\n[Nexa SDK](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.15-Backend:-Nexa-SDK) ⦁︎\n[KoboldCpp](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.16-Backend:-KoboldCpp) ⦁︎\n[Modular MAX](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.17-Backend-Modular-MAX)\n\n##### Satellites\n[Harbor 
Bench](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F5.1.-Harbor-Bench) ⦁︎\n[Harbor Boost](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F5.2.-Harbor-Boost) ⦁︎\n[SearXNG](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.1-Satellite:-SearXNG) ⦁︎\n[Perplexica](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.2-Satellite:-Perplexica) ⦁︎\n[Dify](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.3-Satellite:-Dify) ⦁︎\n[Plandex](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.4-Satellite:-Plandex) ⦁︎\n[LiteLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.5-Satellite:-LiteLLM) ⦁︎\n[LangFuse](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.6-Satellite:-langfuse) ⦁︎\n[Open Interpreter](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.7-Satellite:-Open-Interpreter) ⦁\n︎[cloudflared](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.8-Satellite:-cloudflared) ⦁︎\n[cmdh](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.9-Satellite:-cmdh) ⦁︎\n[fabric](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.10-Satellite:-fabric) ⦁︎\n[txtai RAG](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.11-Satellite:-txtai-RAG) ⦁︎\n[TextGrad](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.12-Satellite:-TextGrad) ⦁︎\n[Aider](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.13-Satellite:-aider) ⦁︎\n[aichat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.14-Satellite:-aichat) ⦁︎\n[autogpt](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.15-Satellite:-AutoGPT) ⦁︎\n[lm-evaluation-harness](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.17-Satellite:-lm-evaluation-harness) 
⦁︎\n[JupyterLab](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.18-Satellite:-JupyterLab) ⦁︎\n[ol1](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.19-Satellite:-ol1) ⦁︎\n[OpenHands](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.20-Satellite:-OpenHands) ⦁︎\n[LitLytics](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.21-Satellite:-LitLytics) ⦁︎\n[Repopack](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.22-Satellite:-Repopack) ⦁︎\n[n8n](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.23-Satellite:-n8n) ⦁︎\n[Bolt.new](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.24-Satellite:-Bolt.new) ⦁︎\n[Open WebUI Pipelines](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.25-Satellite:-Open-WebUI-Pipelines) ⦁︎\n[Qdrant](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.26-Satellite:-Qdrant) ⦁︎\n[K6](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.27-Satellite:-K6) ⦁︎\n[Promptfoo](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.28-Satellite:-Promptfoo) ⦁︎\n[Webtop](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.29-Satellite:-Webtop) ⦁︎\n[OmniParser](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.30-Satellite:-OmniParser) ⦁︎\n[Flowise](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.31-Satellite:-Flowise) ⦁︎\n[Langflow](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.32-Satellite:-LangFlow) ⦁︎\n[OptiLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.33-Satellite:-OptiLLM) ⦁︎\n[Morphic](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.34-Satellite-Morphic) ⦁︎\n[SQL Chat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.35-Satellite-SQL-Chat) 
⦁︎\n[gptme](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.36-Satellite-gptme) ⦁︎\n[traefik](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.37-Satellite-traefik) ⦁︎\n[Latent Scope](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.38-Satellite-Latent-Scope) ⦁︎\n[RAGLite](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.39-Satellite-RAGLite) ⦁︎\n[llama-swap](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.40-Satellite-llamaswap) ⦁︎\n[LibreTranslate](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.41-Satellite-LibreTranslate) ⦁︎\n[MetaMCP](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.42-Satellite-MetaMCP) ⦁︎\n[mcpo](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.43-Satellite-mcpo) ⦁︎\n[SuperGateway](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.44-Satellite-supergateway) ⦁︎\n[Local Deep Research](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.45-Satellite-Local-Deep-Research) ⦁︎\n[LocalAI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.46-Satellite-LocalAI) ⦁︎\n[AgentZero](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.47-Satellite-Agent-Zero) ⦁︎\n[Airweave](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.48-Satellite-Airweave) ⦁︎\n[Docling](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.49-Satellite-Docling) ⦁︎\n[Browser Use](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.50-Satellite-Browser-Use) ⦁︎\n[Unsloth](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.51-Satellite-Unsloth) ⦁︎\n[Windmill](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.52-Satellite-Windmill)\n\n\nSee [services documentation](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.-Services) for a brief overview of each.\n\n## CLI 
Tour\n\n```bash\n# Run Harbor with default services:\n# Open WebUI and Ollama\nharbor up\n\n# Run Harbor with additional services\n# Running SearXNG automatically enables Web RAG in Open WebUI\nharbor up searxng\n\n# Speaches includes OpenAI-compatible STT and TTS\n# and is connected to Open WebUI out of the box\nharbor up speaches\n\n# Run additional\u002Falternative LLM inference backends\n# Open WebUI is automatically connected to them.\nharbor up llamacpp tgi litellm vllm tabbyapi aphrodite sglang ktransformers\n\n# Run different frontends\nharbor up librechat chatui bionicgpt hollama\n\n# Get a free quality boost with the\n# built-in optimizing proxy\nharbor up boost\n\n# Use FLUX in Open WebUI in one command\nharbor up comfyui\n\n# Use custom models for supported backends\nharbor llamacpp model https:\u002F\u002Fhuggingface.co\u002Fuser\u002Frepo\u002Fmodel.gguf\n\n# Access service CLIs without installing them\n# Caches are shared between services where possible\nharbor hf scan-cache\nharbor hf download google\u002Fgemma-2-2b-it\nharbor ollama list\n\n# Shortcut to HF Hub to find the models\nharbor hf find gguf gemma-2\n# Use HFDownloader and the official HF CLI to download models\nharbor hf dl -m google\u002Fgemma-2-2b-it -c 10 -s .\u002Fhf\nharbor hf download google\u002Fgemma-2-2b-it\n\n# Where possible, cache is shared between the services\nharbor tgi model google\u002Fgemma-2-2b-it\nharbor vllm model google\u002Fgemma-2-2b-it\nharbor aphrodite model google\u002Fgemma-2-2b-it\nharbor tabbyapi model google\u002Fgemma-2-2b-it-exl2\nharbor mistralrs model google\u002Fgemma-2-2b-it\nharbor opint model google\u002Fgemma-2-2b-it\nharbor sglang model google\u002Fgemma-2-2b-it\n\n# Convenience tools for docker setup\nharbor logs llamacpp\nharbor exec llamacpp .\u002Fscripts\u002Fllama-bench --help\nharbor shell vllm\n\n# Tell your shell exactly what you think about it\nharbor opint\nharbor aider\nharbor aichat\nharbor cmdh\n\n# Use fabric to LLM-ify your Linux pipes\ncat 
.\u002Ffile.md | harbor fabric --pattern extract_extraordinary_claims | grep \"LK99\"\n\n# Open services from the CLI\nharbor open webui\nharbor open llamacpp\n# Print yourself a QR to quickly open the\n# service on your phone\nharbor qr\n# Feeling adventurous? Expose your Harbor\n# to the internet\nharbor tunnel\n\n# Config management\nharbor config list\nharbor config set webui.host.port 8080\n\n# Create and manage config profiles\nharbor profile save l370b\nharbor profile use default\n# Import profile from a URL\nharbor profile use https:\u002F\u002Fexample.com\u002Fpath\u002Fto\u002Fharbor-profile.env\n\n# Lookup recently used harbor commands\nharbor history\n\n# Eject from Harbor into a standalone Docker Compose setup\n# Will export related services and variables into a standalone file.\nharbor eject searxng llamacpp > docker-compose.harbor.yml\n\n# Run a built-in LLM benchmark with\n# your own tasks\nharbor bench run\n\n# Gimmick\u002FFun Area\n\n# Argument scrambling, below commands are all the same as above\n# Harbor doesn't care if it's \"vllm model\" or \"model vllm\", it'll\n# figure it out.\nharbor model vllm\nharbor vllm model\n\nharbor config get webui.name\nharbor get config webui_name\n\nharbor tabbyapi shell\nharbor shell tabbyapi\n\n# 50% gimmick, 50% useful\n# Ask harbor about itself\nharbor how to ping ollama container from the webui?\n```\n\n## Harbor App Demo\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa5cd2ef1-3208-400a-8866-7abd85808503\n\nIn the demo, Harbor App is used to launch a default stack with [Ollama](.\u002F2.2.1-Backend:-Ollama) and [Open WebUI](.\u002F2.1.1-Frontend:-Open-WebUI) services. Later, [SearXNG](.\u002F2.3.1-Satellite:-SearXNG) is also started, and WebUI can connect to it for the Web RAG right out of the box. After that, [Harbor Boost](.\u002F5.2.-Harbor-Boost) is also started and connected to the WebUI automatically to induce more creative outputs. 
As a final step, Harbor config is adjusted in the App for the [`klmbr`](.\u002F5.2.-Harbor-Boost#klmbr---boost-llm-creativity) module in the [Harbor Boost](.\u002F5.2.-Harbor-Boost), which makes the output unparsable for the LLM (yet still understandable for humans).\n\n## Why?\n\n- If you're comfortable with Docker and Linux administration - you likely don't need Harbor to manage your local LLM environment. However, while growing it - you're also likely to eventually arrive at a similar solution. I know this for a fact, since that's exactly how Harbor came to be.\n- Harbor is not designed as a deployment solution, but rather as a helper for the local LLM development environment. It's a good starting point for experimenting with LLMs and related services.\n- Workflow\u002Fsetup centralisation - you can be sure where to find a specific config or service, logs, data and configuration files.\n- Convenience factor - single CLI with a lot of services and features, accessible from anywhere on your host.\n\n## Supporters\n\n![@av's wife](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_5cdfc756d246.png)\n![@burnth3heretic](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_1b635c469880.png)\n![@vood](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_02ece01155dd.png)\n![@anonymous](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_ee9c17179741.png)\n\u003Ca href=\"https:\u002F\u002Fx.com\u002FTheAhmadOsman\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_78c67947c9ef.png\" width=\"32\" height=\"32\" alt=\"@TheAhmadOsman\" \u002F>\u003C\u002Fa>\n","https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F8a7705e1-6f0e-4374-8784-62b95816aebc\n\n\n\n[![GitHub 
Tag](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Ftag\u002Fav\u002Fharbor)](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Freleases)\n![GitHub仓库大小](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frepo-size\u002Fav\u002Fharbor)\n![GitHub仓库文件或目录数量](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdirectory-file-count\u002Fav\u002Fharbor?type=file&extension=yml&label=compose%20files&color=orange)\n![GitHub语言统计](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flanguages\u002Fcount\u002Fav\u002Fharbor)\n[![访问量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_6f89ae43c086.png)](https:\u002F\u002Fvisitorbadge.io\u002Fstatus?path=av%2Fharbor)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Harbor-blue?logo=discord&logoColor=white)](https:\u002F\u002Fdiscord.gg\u002F8nDRphrhSF)\n![Harbor Ko-fi](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FKo--fi-white?style=social&logo=kofi)\n\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-claude-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?ask=claude#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjYQUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeK
ObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-chatgpt-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?ask=chatgpt#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjYQUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeKObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-perplexity-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?ask=perplexity#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjYQUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB
9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeKObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n[![ask](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fask-Harbor-252421?style=flat&labelColor=555)](https:\u002F\u002Ftextclip.sh?failmsg=It%20appears%20that%20you%20do%20not%20have%20Harbor%20installed%2C%20or%20Open%20WebUI%20is%20not%20running%20on%20default%20port&redirect=http%3A%2F%2Flocalhost%3A33801%3Fq%3D__TEXT__#c=TZTNjiM1EIDveQofga2N4LJCkQZpZ2AAaVZCzBNU29XdtSm7PLY7P2QjjYQUshyjleC6J8S8Bnd4h34S5E4GcbH6p7r8lb-q_g5To2l8__Hm7vvx-PQ6xsXXapeUbtRHzTQeTqIW5e7ujRni-OvPa2oGfqEi6PEFpm41m_39YfETJX1pNbTcjcfHNScOnfnCejceTu0gkgvapcGhTFGBbBmPj7truIX7vYlJWxa6X2M09La-O5xyweBQNJApQwgk4-HEoVAKVMxDGg8nrw0LmfH4p0Xbk_nn8ZOK86npORfz12_2XIB5dz0en27H49P9u69efT6bXV_tzvQwrTZGWIl4KB1DwabZYmTA2Cd1XAg855JQUobcCYYOliVhyK0mTymDeEdRdAvIqWYJtEFYaqPiwKsbBJPHDeRIFTJDxCSU9rdXu-kgQbhJZHssUJf6QJvzfX-hrF9MFw1rYNvFAhi2pefQ1Q1rbGAFz8shogMtlDxY9e124P391S4Tpk3oIFKKQhu2CI7bLUTB4GgDwoVqolpcO2QCjRwK2NaB9a6HFpvEFpAdJUCe2KrICiKeVijwdojbQgk0UugxuFyTyrawzZAoakS7hPBlgEalQORIwoEyPLiEocDyFcSkPpZWFdbUFI2gPnDElClBK7rmTGdA0TVoLFyJvabYs4X8IBNVF4snKAmp5SUIFgolW40ECbta5tl4XmM8n_tkssaBp4LeRvA2KohLMDU9MmBHodTuroLXhCsCp1Y4dNAkXWdK9ciGkEVLD2sOzrMINBRsD41qLvvZbFI9_vLh2lxkjIfTxf_F-nMp_5MkLu0XP77-1jz3zmKlbMlc1C7Yd-aCPR7_qOCLNzc_jIfTlHg22w0RnK4DxCohF0wFRLsMtCELuaeKObC4SRsMSeAhwXnY9tmch3m-E-gg75er5ymd7zIMIHuT5zuvjgQwdXlv-na-azk4cALZYtibc__OP_tvprdezsN5_i-Mx99ns282UZCDKT3nukUNNUWNp_m_)\n\n\n轻松搭建本地LLM环境。\n\n```bash\n# 启动已完全配置的Open WebUI和Ollama\nharbor up\n\n# 现在，Open WebUI可以进行网络检索增强和TTS\u002FSTT\nharbor up searxng 
speaches\n```\n\nHarbor是一款CLI工具及配套应用，能够帮助您快速搭建完整的本地大模型运行环境——包括后端如Ollama、llama.cpp或vLLM，前端如Open WebUI，以及支持服务如用于网络搜索的SearXNG、用于语音对话的Speaches和用于图像生成的ComfyUI等。所有组件都预先配置好，只需一条`harbor up`命令即可实现无缝协作。无需手动设置：您只需选择所需的服务，Harbor便会自动完成Docker Compose编排、配置及各服务间的互联互通，让您专注于实际使用模型。\n\n![Harbor CLI与App同屏截图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_1445c2b988d7.png)\n\n## 新闻\n\n- **v0.4.10** - 修复了 SearXNG 的 JSON 格式和工作区配置，解决了 CDI 种子脚本跳过构建变体的问题\n- **v0.4.9** - 新增两个 Boost 模块（`analogical`、`deaf`），Open WebUI 原生函数调用默认启用，修复了 CLI 验证问题\n- **v0.4.8** - 单独的 CLI 和 ROS MCP 服务器服务，新增 MiniMax 云提供商，文档进行了大量更新\n- **v0.4.7** - Hermes Agent 服务，支持从源码构建 llama.cpp，更新了 llama.cpp 文档\n- **v0.4.6** - SillyTavern 服务，修复了 llama.cpp 缓存路径，改进了 Jupyter 工作区\n- **v0.4.5** - Harbor App：内置终端、服务日志、模型管理\n- **v0.4.4** - Harbor 集成测试套件，使用模拟的 OpenAI 接口，增强了服务管理 UI\n\n## 文档\n\n- [安装 Harbor](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F1.0.-Installing-Harbor)\u003Cbr\u002F>\n  安装 Harbor CLI 和 App 的指南\n- [Harbor 用户指南](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F1.-Harbor-User-Guide)\u003Cbr\u002F>\n  使用 Harbor 的高级概述\n- [Harbor App](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F1.1-Harbor-App)\u003Cbr\u002F>\n  Harbor 伴侣应用的概述和手册\n- [Harbor 服务](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.-Services)\u003Cbr\u002F>\n  Harbor 中可用的服务目录\n- [Harbor CLI 参考](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F3.-Harbor-CLI-Reference)\u003Cbr\u002F>\n  了解更多关于 Harbor CLI 命令和选项的信息。\n  阅读支持的服务以及如何配置它们的内容。\n- [加入我们的 Discord](https:\u002F\u002Fdiscord.gg\u002F8nDRphrhSF)\u003Cbr\u002F>\n  获取帮助、分享经验并为项目贡献力量。\n\n### 维护者：重新生成文档\n\n在 Harbor 仓库根目录下运行文档工作流：\n\n```bash\nharbor dev docs\n```\n\n新维护者的前提条件：\n\n- 将 wiki 仓库克隆到父级目录 `..\u002Fharbor.wiki` 中，因为文档脚本会将生成的 wiki 页面复制到那里。\n\n  ```bash\n  git clone https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor.wiki.git ..\u002Fharbor.wiki\n  ```\n\n- 使用此仓库中的 Harbor CLI。如果 `harbor` 尚未在您的 `PATH` 
中，您可以直接运行本地入口点：\n\n  ```bash\n  .\u002Fharbor.sh dev docs\n  ```\n\n  或者先链接该仓库，再使用上述维护者命令：\n\n  ```bash\n  .\u002Fharbor.sh link\n  harbor dev docs\n  ```\n\n- 在重新生成文档之前，必须确保 Docker Engine 和 `docker compose` 正常工作。文档脚本会调用 `harbor run boost uv run ...` 来重建由 Boost 生成的页面，因此即使 Harbor 回退到容器化的 Deno 运行时，也需要 Docker。\n\n此工作流会更新 `docs\u002F` 目录，同步兄弟 wiki 仓库，刷新应用文档副本，并重写生成的软件包 README 文件。\n\n## Harbor 能做什么？\n\n![展示 Harbor 服务结构的示意图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_8e530a4d657d.png)\n\n\n#### ✦ 本地 LLM\n\n在本地运行 LLM 和相关服务，几乎无需或只需少量配置，通常只需一个命令或一次点击即可。\n\n```bash\n# 所有后端都已预先连接到 Open WebUI\nharbor up ollama\nharbor up llamacpp\nharbor up vllm\n\n# 设置并记住 llama.cpp 的参数\nharbor llamacpp args -ngl 32\n```\n\n#### 最前沿的推理\n\nHarbor 支持大多数主流的推理引擎，同时也支持一些不太知名的引擎。\n\n```bash\n# 我们真诚地希望您不会尝试同时运行所有这些引擎\nharbor up vllm llamacpp tgi litellm tabbyapi aphrodite sglang ktransformers mistralrs airllm\n```\n\n#### 工具使用\n\n享受 MCP 生态系统带来的好处，并将其扩展到您的应用场景中。\n\n```bash\n# 使用便捷的 Web UI 管理 MCP\nharbor up metamcp\n\n# 将 MCP 连接到 Open WebUI\nharbor up metamcp mcpo\n```\n\n#### 图像生成\n\nHarbor 包含 ComfyUI + Flux + Open WebUI 的集成。\n\n```bash\n# 通过一个命令即可在 Open WebUI 中使用 FLUX\nharbor up comfyui\n```\n\n#### 本地 Web RAG \u002F 深度研究\n\nHarbor 内置 [SearXNG](.\u002Fdocs\u002F2.3.1-Satellite&colon-SearXNG.md)，它开箱即用，预连接了许多服务：[Perplexica](.\u002Fdocs\u002F2.3.2-Satellite&colon-Perplexica.md)、[ChatUI](.\u002Fdocs\u002F2.1.4-Frontend&colon-ChatUI.md)、[Morphic](.\u002Fdocs\u002F2.3.34-Satellite-Morphic.md)、[本地深度研究](.\u002Fdocs\u002F2.3.45-Satellite-Local-Deep-Research.md)等。\n\n```bash\n# SearXNG 已预先连接到 Open WebUI\nharbor up searxng\n\n# 并且连接到许多其他服务\nharbor up searxng chatui\nharbor up searxng morphic\nharbor up searxng perplexica\nharbor up searxng ldr\n```\n\n#### LLM 工作流\n\nHarbor 包括多个用于构建基于 LLM 的数据和聊天工作流的服务：[Dify](.\u002Fdocs\u002F2.3.3-Satellite&colon-Dify.md)、[LitLytics](.\u002Fdocs\u002F2.3.21-Satellite&colon-LitLytics.md)、[n8n](.\u002Fdocs\u002F2.3.23-Satellite&colon-n8n.md)、[Open 
WebUI 流水线](.\u002Fdocs\u002F2.3.25-Satellite&colon-Open-WebUI-Pipelines.md)、[FloWise](.\u002Fdocs\u002F2.3.31-Satellite&colon-Flowise.md)、[LangFlow](.\u002Fdocs\u002F2.3.32-Satellite&colon-LangFlow.md)。\n\n```bash\n# 在 Open WebUI 中使用 Dify\nharbor up dify\n```\n\n#### 与您的 LLM 对话\n\n只需一个命令即可设置与您的 LLM 的语音聊天。Open WebUI + Speaches\n\n```bash\n# Speaches 包含兼容 OpenAI 的 STT 和 TTS\n# 并且开箱即用连接到 Open WebUI\nharbor up speaches\n```\n\n#### 通过手机聊天\n\n您可以通过二维码从手机访问 Harbor 服务。轻松获取本地、局域网或 Docker 访问的链接。\n\n```bash\n# 打印一个二维码，以便在手机上打开服务\nharbor qr\n# 打印一个链接，以便在手机上打开服务\nharbor url webui\n```\n\n#### 随时随地聊天\n\nHarbor 内置一个 [内建隧道服务](.\u002Fdocs\u002F3.-Harbor-CLI-Reference.md#harbor-tunnel-service)，可将您的 Harbor 暴露到互联网上。\n\n> [!WARNING]\n> 请谨慎对待将您的计算机暴露到互联网上的行为，这并不安全。\n\n```bash\n# 将默认界面暴露到互联网\nharbor tunnel\n\n# 将特定服务暴露到互联网\n# ⚠️ 请务必为该服务配置身份验证\nharbor tunnel vllm\n\n# Harbor 自带 Traefik，并已为所有包含的服务预配置好\nharbor up traefik\n```\n\n#### LLM 脚本编写\n\n[Harbor Boost](.\u002Fdocs\u002F5.2.-Harbor-Boost.md) 允许您 [轻松编写工作流脚本](.\u002Fdocs\u002F5.2.1.-Harbor-Boost-Custom-Modules.md)以及与下游 LLM 的交互。\n\n```bash\n# 使用 Harbor Boost 编写 LLM 工作流\nharbor up boost\n```\n\n#### 配置文件\n\n保存和管理不同场景下的配置文件。例如，为不同的模型和上下文保存 [llama.cpp](.\u002Fdocs\u002F2.2.2-Backend&colon-llama.cpp.md) 参数，并轻松切换。\n\n```bash\n# 保存和使用配置文件\nharbor profile save llama4\nharbor profile use default\n\n# 从 URL 导入配置文件\nharbor profile use https:\u002F\u002Fexample.com\u002Fpath\u002Fto\u002Fharbor-profile.env\n```\n\n#### 命令历史\n\nHarbor 保留一份 [仅限本地的最近命令历史](.\u002Fdocs\u002F3.-Harbor-CLI-Reference.md#harbor-history)。您可以轻松查找并重新执行命令，而无需依赖系统的 shell 历史记录。\n\n```bash\n# 查看最近使用的 Harbor 命令\nharbor history\n```\n\n#### 迁出\n\n准备好迁移到您自己的设置了吗？Harbor [会为您提供](.\u002Fdocs\u002F3.-Harbor-CLI-Reference.md#harbor-eject)一个 docker-compose 文件，以复制您的当前设置。\n\n```bash\n# 从 Harbor 中弹出并迁移到独立的 Docker Compose 配置\n# 将相关服务和变量导出到一个独立的文件中。\nharbor eject searxng llamacpp > docker-compose.harbor.yml\n```\n\n---\n\n## 服务\n\n##### 前端界面\n[Open 
WebUI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.1-Frontend:-Open-WebUI) ⦁︎\n[ComfyUI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.2-Frontend:-ComfyUI) ⦁︎\n[LibreChat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.3-Frontend:-LibreChat) ⦁︎\n[HuggingFace ChatUI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.4-Frontend:-ChatUI) ⦁︎\n[Lobe Chat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.5-Frontend:-Lobe-Chat) ⦁︎\n[Hollama](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.6-Frontend:-hollama) ⦁︎\n[parllama](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.7-Frontend:-parllama) ⦁︎\n[BionicGPT](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.8-Frontend:-BionicGPT) ⦁︎\n[AnythingLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.9-Frontend:-AnythingLLM) ⦁︎\n[Chat Nio](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.10-Frontend:-Chat-Nio) ⦁︎\n[mikupad](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.11-Frontend:-Mikupad) ⦁︎\n[oterm](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.12-Frontend-oterm) ⦁︎\n[omnichain](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.16-Satellite:-omnichain) ⦁︎\n[ol1](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.19-Satellite:-ol1)\n\n##### 后端服务\n[Ollama](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.1-Backend:-Ollama) ⦁︎\n[llama.cpp](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.2-Backend:-llama.cpp) ⦁︎\n[vLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.3-Backend:-vLLM) ⦁︎\n[TabbyAPI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.4-Backend:-TabbyAPI) ⦁︎\n[Aphrodite 
Engine](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.5-Backend:-Aphrodite-Engine) ⦁︎\n[mistral.rs](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.6-Backend:-mistral.rs) ⦁︎\n[openedai-speech](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.7-Backend:-openedai-speech) ⦁︎\n[Speaches](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.14-Backend:-Speaches) ⦁︎\n[Parler](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.8-Backend:-Parler) ⦁︎\n[text-generation-inference](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.9-Backend:-text-generation-inference) ⦁︎\n[LMDeploy](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.10-Backend:-lmdeploy) ⦁︎\n[AirLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.11-Backend:-AirLLM) ⦁︎\n[SGLang](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.12-Backend:-SGLang) ⦁︎\n[KTransformers](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.13-Backend:-KTransformers) ⦁︎\n[Nexa SDK](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.15-Backend:-Nexa-SDK) ⦁︎\n[KoboldCpp](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.16-Backend:-KoboldCpp) ⦁︎\n[Modular MAX](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.2.17-Backend-Modular-MAX)\n\n##### 卫星项目\n[Harbor Bench](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F5.1.-Harbor-Bench) ⦁︎\n[Harbor Boost](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F5.2.-Harbor-Boost) ⦁︎\n[SearXNG](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.1-Satellite:-SearXNG) ⦁︎\n[Perplexica](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.2-Satellite:-Perplexica) ⦁︎\n[Dify](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.3-Satellite:-Dify) 
⦁︎\n[Plandex](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.4-Satellite:-Plandex) ⦁︎\n[LiteLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.5-Satellite:-LiteLLM) ⦁︎\n[LangFuse](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.6-Satellite:-langfuse) ⦁︎\n[Open Interpreter](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.7-Satellite:-Open-Interpreter) ⦁\n︎[cloudflared](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.8-Satellite:-cloudflared) ⦁︎\n[cmdh](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.9-Satellite:-cmdh) ⦁︎\n[fabric](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.10-Satellite:-fabric) ⦁︎\n[txtai RAG](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.11-Satellite:-txtai-RAG) ⦁︎\n[TextGrad](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.12-Satellite:-TextGrad) ⦁︎\n[Aider](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.13-Satellite:-aider) ⦁︎\n[aichat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.14-Satellite:-aichat) ⦁︎\n[autogpt](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.15-Satellite:-AutoGPT) ⦁︎\n[lm-evaluation-harness](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.17-Satellite:-lm-evaluation-harness) ⦁︎\n[JupyterLab](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.18-Satellite:-JupyterLab) ⦁︎\n[ol1](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.19-Satellite:-ol1) ⦁︎\n[OpenHands](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.20-Satellite:-OpenHands) ⦁︎\n[LitLytics](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.21-Satellite:-LitLytics) ⦁︎\n[Repopack](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.22-Satellite:-Repopack) 
⦁︎\n[n8n](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.23-Satellite:-n8n) ⦁︎\n[Bolt.new](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.24-Satellite:-Bolt.new) ⦁︎\n[Open WebUI Pipelines](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.25-Satellite:-Open-WebUI-Pipelines) ⦁︎\n[Qdrant](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.26-Satellite:-Qdrant) ⦁︎\n[K6](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.27-Satellite:-K6) ⦁︎\n[Promptfoo](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.28-Satellite:-Promptfoo) ⦁︎\n[Webtop](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.29-Satellite:-Webtop) ⦁︎\n[OmniParser](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.30-Satellite:-OmniParser) ⦁︎\n[Flowise](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.31-Satellite:-Flowise) ⦁︎\n[Langflow](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.32-Satellite:-LangFlow) ⦁︎\n[OptiLLM](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.33-Satellite:-OptiLLM) ⦁︎\n[Morphic](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.34-Satellite-Morphic) ⦁︎\n[SQL Chat](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.35-Satellite-SQL-Chat) ⦁︎\n[gptme](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.36-Satellite-gptme) ⦁︎\n[traefik](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.37-Satellite-traefik) ⦁︎\n[Latent Scope](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.38-Satellite-Latent-Scope) ⦁︎\n[RAGLite](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.39-Satellite-RAGLite) ⦁︎\n[llama-swap](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.40-Satellite-llamaswap) 
⦁︎\n[LibreTranslate](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.41-Satellite-LibreTranslate) ⦁︎\n[MetaMCP](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.42-Satellite-MetaMCP) ⦁︎\n[mcpo](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.43-Satellite-mcpo) ⦁︎\n[SuperGateway](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.44-Satellite-supergateway) ⦁︎\n[Local Deep Research](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.45-Satellite-Local-Deep-Research) ⦁︎\n[LocalAI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.46-Satellite-LocalAI) ⦁︎\n[AgentZero](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.47-Satellite-Agent-Zero) ⦁︎\n[Airweave](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.48-Satellite-Airweave) ⦁︎\n[Docling](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.49-Satellite-Docling) ⦁︎\n[Browser Use](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.50-Satellite-Browser-Use) ⦁︎\n[Unsloth](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.51-Satellite-Unsloth) ⦁︎\n[Windmill](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.52-Satellite-Windmill)\n\n\n有关每个项目的简要介绍，请参阅 [服务文档](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.-Services)。\n\n## CLI 使用指南\n\n```bash\n# 使用默认服务运行 Harbor：\n# Open WebUI 和 Ollama\nharbor up\n\n# 使用附加服务运行 Harbor\n# 运行 SearXNG 会自动在 Open WebUI 中启用 Web RAG 功能\nharbor up searxng\n\n# Speaches 包含与 OpenAI 兼容的 SST 和 TTS\n# 并开箱即用地连接到 Open WebUI\nharbor up speaches\n\n# 运行额外的或替代的 LLM 推理后端\n# Open Webui 会自动连接到这些后端。\nharbor up llamacpp tgi litellm vllm tabbyapi aphrodite sglang ktransformers\n\n# 运行不同的前端\nharbor up librechat chatui bionicgpt hollama\n\n# 通过内置优化代理获得免费的质量提升\nharbor up boost\n\n# 使用一条命令在 Open WebUI 中使用 FLUX\nharbor up comfyui\n\n# 为支持的后端使用自定义模型\nharbor llamacpp model 
https:\u002F\u002Fhuggingface.co\u002Fuser\u002Frepo\u002Fmodel.gguf\n\n# 无需安装即可访问服务的 CLI\n# 在可能的情况下，服务之间共享缓存\nharbor hf scan-cache\nharbor hf download google\u002Fgemma-2-2b-it\nharbor ollama list\n\n# 快速查找 HF Hub 上的模型\nharbor hf find gguf gemma-2\n# 使用 HFDownloader 和官方 HF CLI 下载模型\nharbor hf dl -m google\u002Fgemma-2-2b-it -c 10 -s .\u002Fhf\nharbor hf download google\u002Fgemma-2-2b-it\n\n# 在可能的情况下，服务之间共享缓存\nharbor tgi model google\u002Fgemma-2-2b-it\nharbor vllm model google\u002Fgemma-2-2b-it\nharbor aphrodite model google\u002Fgemma-2-2b-it\nharbor tabbyapi model google\u002Fgemma-2-2b-it-exl2\nharbor mistralrs model google\u002Fgemma-2-2b-it\nharbor opint model google\u002Fgemma-2-2b-it\nharbor sglang model google\u002Fgemma-2-2b-it\n\n# 用于 Docker 设置的便捷工具\nharbor logs llamacpp\nharbor exec llamacpp .\u002Fscripts\u002Fllama-bench --help\nharbor shell vllm\n\n# 向你的 Shell 表达你的真实想法\nharbor opint\nharbor aider\nharbor aichat\nharbor cmdh\n\n# 使用 Fabric 将你的 Linux 管道 LLM 化\ncat .\u002Ffile.md | harbor fabric --pattern extract_extraordinary_claims | grep \"LK99\"\n\n# 从 CLI 打开服务\nharbor open webui\nharbor open llamacpp\n# 打印一个二维码，以便快速在手机上打开服务\nharbor qr\n# 想要冒险吗？将你的 Harbor 暴露到互联网上\nharbor tunnel\n\n# 配置管理\nharbor config list\nharbor config set webui.host.port 8080\n\n# 创建和管理配置文件\nharbor profile save l370b\nharbor profile use default\n# 从 URL 导入配置文件\nharbor profile use https:\u002F\u002Fexample.com\u002Fpath\u002Fto\u002Fharbor-profile.env\n\n# 查看最近使用的 Harbor 命令\nharbor history\n\n# 从 Harbor 中退出，进入独立的 Docker Compose 设置\n# 会将相关服务和变量导出到一个独立的文件中。\nharbor eject searxng llamacpp > docker-compose.harbor.yml\n\n# 使用你自己的任务运行内置的 LLM 基准测试\nharbor bench run\n\n# 花样\u002F趣味区\n\n# 参数乱序，以下命令都与上面相同\n# Harbor 不在乎是“vllm model”还是“model vllm”，它都会搞定。\nharbor model vllm\nharbor vllm model\n\nharbor config get webui.name\nharbor get config webui_name\n\nharbor tabbyapi shell\nharbor shell tabbyapi\n\n# 50% 花样，50% 实用\n# 向 Harbor 询问关于它自身的问题\nharbor how to ping ollama container from the 
webui?\n```\n\n## Harbor 应用演示\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa5cd2ef1-3208-400a-8866-7abd85808503\n\n在演示中，Harbor 应用被用来启动一个默认堆栈，包含 [Ollama](.\u002F2.2.1-Backend:-Ollama) 和 [Open WebUI](.\u002F2.1.1-Frontend:-Open-WebUI) 服务。随后，[SearXNG](.\u002F2.3.1-Satellite:-SearXNG) 也被启动，WebUI 可以直接连接到它，实现开箱即用的 Web RAG 功能。之后，[Harbor Boost](.\u002F5.2.-Harbor-Boost) 也被启动，并自动连接到 WebUI，以产生更具创造性的输出。最后一步，在应用中调整了 Harbor 的配置，针对 [Harbor Boost](.\u002F5.2.-Harbor-Boost) 中的 [`klmbr`](.\u002F5.2.-Harbor-Boost#klmbr---boost-llm-creativity) 模块，使输出对 LLM 来说变得无法解析（但仍然可以被人理解）。\n\n## 为什么？\n\n- 如果你已经熟悉 Docker 和 Linux 管理，那么你可能并不需要 Harbor 来管理你的本地 LLM 环境。然而，在搭建和扩展这个环境的过程中，你最终很可能会得到类似的解决方案。这一点我深有体会，因为 Harbor 正是这样诞生的。\n- Harbor 并不是作为一个部署解决方案设计的，而是作为本地 LLM 开发环境的辅助工具。它是尝试 LLM 和相关服务的一个良好起点。\n- 工作流\u002F设置的集中化——你可以确切知道在哪里找到特定的配置或服务、日志、数据和配置文件。\n- 便利性——单一的 CLI 提供大量服务和功能，可以从主机上的任何位置访问。\n\n## 支持者\n\n![@av 的妻子](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_5cdfc756d246.png)\n![@burnth3heretic](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_1b635c469880.png)\n![@vood](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_02ece01155dd.png)\n![@anonymous](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_ee9c17179741.png)\n\u003Ca href=\"https:\u002F\u002Fx.com\u002FTheAhmadOsman\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_readme_78c67947c9ef.png\" width=\"32\" height=\"32\" alt=\"@TheAhmadOsman\" \u002F>\u003C\u002Fa>","# Harbor 快速上手指南\n\nHarbor 是一个命令行工具（CLI）及配套应用，旨在帮助用户一键部署完整的本地大语言模型（LLM）栈。它自动编排 Docker Compose，将后端（如 Ollama、vLLM）、前端（如 Open WebUI）以及辅助服务（如 SearXNG 网页搜索、Speaches 语音交互、ComfyUI 图像生成）预配置并互联，无需手动调整配置文件。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**：Linux、macOS 或 Windows (需安装 WSL2)。\n*   **核心依赖**：\n    *   **Docker Engine**：必须已安装并正在运行。\n    *   **Docker Compose**：必须支持 `docker compose` 命令（通常包含在现代 Docker Desktop 或 Docker Engine 插件中）。\n*   
**硬件建议**：运行本地 LLM 需要较高的 GPU 显存或较大的系统内存。建议至少 16GB RAM，若运行大模型推荐配备 NVIDIA GPU。\n*   **网络环境**：首次运行需要下载较大的 Docker 镜像和模型文件，请确保网络连接稳定。\n    *   *国内用户提示*：建议配置 Docker 镜像加速器（如阿里云、腾讯云等）以加快镜像拉取速度。\n\n## 安装步骤\n\nHarbor 提供了一键安装脚本，会自动检测环境并完成 CLI 及必要组件的安装。\n\n在终端中执行以下命令：\n\n```bash\ncurl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002Fav\u002Fharbor\u002Fmain\u002Finstall.sh | sh\n```\n\n安装完成后，建议重启终端或运行以下命令使环境变量生效：\n\n```bash\nsource ~\u002F.bashrc  # 如果使用 zsh，请改为 source ~\u002F.zshrc\n```\n\n验证安装是否成功：\n\n```bash\nharbor --version\n```\n\n## 基本使用\n\nHarbor 的核心理念是通过简单的命令启动复杂的服务栈。所有服务默认通过 Docker Compose  orchestration 自动连接。\n\n### 1. 启动基础 LLM 环境\n启动默认的完整栈（包含 Open WebUI 前端和 Ollama 后端）：\n\n```bash\nharbor up\n```\n\n等待服务启动完成后，终端会显示访问地址（通常为 `http:\u002F\u002Flocalhost:33801`），直接在浏览器打开即可使用。\n\n### 2. 添加特定功能模块\n您可以随时通过追加服务名称来扩展功能，Harbor 会自动处理服务间的连接配置。\n\n**启用网页搜索 (RAG) 和语音功能：**\n```bash\nharbor up searxng speaches\n```\n*   `searxng`: 提供本地网页搜索能力，支持深度研究。\n*   `speaches`: 提供语音转文字 (STT) 和文字转语音 (TTS) 功能，并已预连接至 Open WebUI。\n\n**启用图像生成 (ComfyUI + Flux)：**\n```bash\nharbor up comfyui\n```\n\n**切换或添加其他推理后端：**\nHarbor 支持多种推理引擎，可按需启动：\n```bash\n# 启动 llama.cpp\nharbor up llamacpp\n\n# 启动 vLLM\nharbor up vllm\n\n# 同时启动多个后端（不建议同时运行所有以节省资源）\nharbor up ollama vllm llamacpp\n```\n\n### 3. 移动端访问与内网穿透\n**在手机端访问：**\n生成二维码以便在局域网内的手机浏览器直接访问：\n```bash\nharbor qr\n```\n或直接获取访问链接：\n```bash\nharbor url webui\n```\n\n**配置模型参数：**\n例如，为 `llama.cpp` 设置 GPU 层数并保存配置：\n```bash\nharbor llamacpp args -ngl 32\n```\n\n### 4. 
停止服务\n停止当前运行的所有 Harbor 服务：\n```bash\nharbor down\n```\n\n---\n*更多高级用法、服务目录及详细配置请参考官方文档或运行 `harbor --help`。*","某初创公司的 AI 研发团队急需在本地搭建一套包含大模型、向量数据库及监控面板的完整实验环境，以便快速验证新算法。\n\n### 没有 harbor 时\n- **环境配置繁琐**：工程师需手动编写数十个 Docker Compose 文件，逐个拉取并配置 LLM 推理引擎、嵌入模型及依赖服务，耗时数天且极易出错。\n- **版本兼容困难**：不同组件（如 vLLM、Ollama、Postgres）的版本依赖复杂，经常因端口冲突或镜像不匹配导致启动失败，排查问题占据大量时间。\n- **资源浪费严重**：团队成员各自为战，重复下载相同的大型模型镜像，占用大量带宽和存储空间，且难以统一环境标准。\n- **探索成本高昂**：想要尝试新的开源模型或服务组合，需要重新研究文档和部署脚本，试错门槛极高，严重拖慢研发节奏。\n\n### 使用 harbor 后\n- **一键极速部署**：只需执行一条命令，harbor 即可自动编排并启动包含数百种预配置服务的完整 LLM 技术栈，将环境搭建时间从几天缩短至几分钟。\n- **预集成零冲突**：harbor 内置了经过验证的服务依赖关系和版本组合，彻底消除了手动配置带来的兼容性报错，确保系统开箱即用。\n- **统一高效管理**：团队通过 harbor 共享标准化的本地环境，避免了重复下载和配置差异，显著节省了存储资源并提升了协作效率。\n- **灵活自由探索**：研究人员可轻松切换不同的模型后端或辅助工具组合，无需关心底层部署细节，能够专注于核心算法的创新与验证。\n\nharbor 通过“一键式”全栈交付能力，将复杂的 AI 基础设施搭建转化为简单的开发体验，让团队真正实现了从“运维配置”到“模型创新”的重心转移。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fav_harbor_1445c2b9.png","av","Ivan Charapanau","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fav_2d46589a.png",null,"Warszawa","av.codes","https:\u002F\u002Fgithub.com\u002Fav",[81,85,89,93,97,101,104,108,112,116],{"name":82,"color":83,"percentage":84},"TypeScript","#3178c6",35.2,{"name":86,"color":87,"percentage":88},"Python","#3572A5",22.7,{"name":90,"color":91,"percentage":92},"HTML","#e34c26",21.4,{"name":94,"color":95,"percentage":96},"Shell","#89e051",14.5,{"name":98,"color":99,"percentage":100},"JavaScript","#f1e05a",2.1,{"name":102,"color":103,"percentage":24},"CSS","#663399",{"name":105,"color":106,"percentage":107},"Dockerfile","#384d54",1.2,{"name":109,"color":110,"percentage":111},"Jupyter Notebook","#DA5B0B",0.5,{"name":113,"color":114,"percentage":115},"Rust","#dea584",0.2,{"name":117,"color":118,"percentage":119},"Pug","#a86454",0.1,2851,193,"2026-04-19T07:44:48","Apache-2.0","Linux, macOS, Windows","非必需（取决于所选后端服务，如 Ollama, vLLM, llama.cpp 等支持 CPU 或 GPU 运行）","未说明（取决于运行的模型大小和服务数量）",{"notes":128,"python":129,"dependencies":130},"Harbor 是一个基于 Docker 
Compose 的命令行工具，用于编排本地 LLM 栈（包括后端如 Ollama\u002FvLLM，前端如 Open WebUI，以及搜索、语音等服务）。安装和运行必须预先安装并配置好 Docker Engine 和 Docker Compose。具体的硬件资源需求（GPU\u002F内存）完全取决于用户通过命令启动的具体服务组合及加载的模型大小。","未说明",[131,132],"Docker Engine","Docker Compose",[44,27,13,15,14],[135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153],"cli","docker","docker-compose","llm","tools","ai","self-hosted","tool","bash","container","local","npm","package","pypi","safetensors","mcp","automation","homelab","server","2026-03-27T02:49:30.150509","2026-04-20T10:25:15.997931",[157,162,167,172,177,181],{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},44593,"运行 harbor up 时出现 \"unknown shorthand flag: 'f' in -f\" 错误怎么办？","这通常是因为系统安装的 Docker 版本过旧，不支持 `docker compose` 的新语法参数。请确保已安装最新版本的 Docker 和 Docker Compose 插件。如果使用的是较新的 Docker 版本但仍报错，请检查是否混淆了 `docker-compose` (v1) 和 `docker compose` (v2) 命令，Harbor 脚本通常需要 `docker compose` (作为 Docker CLI 插件)。","https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fissues\u002F55",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},44594,"如何永久覆盖 Open WebUI 的默认嵌入模型（embedding model）配置？","在 v0.0.15 及更高版本中，配置优先级问题已修复。您可以通过以下两种方式永久覆盖配置：\n1. 在 `.env` 文件中设置相关环境变量。\n2. 将自定义配置写入 `config.override.json` 文件。\n之前的版本中，WebUI 的配置可能会在重启后被重置，更新到最新版本即可解决此问题。","https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fissues\u002F5",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},44595,"Aider 服务启动失败，提示 \"\u002Fbin\u002Fbash: ... 
Permission denied\" 或退出代码 126 如何解决？","这是由于 Docker 容器内脚本权限问题导致的，并非特定于 Mac OS 或 ARM 架构。该问题已在 v0.2.26 版本中修复。如果您遇到此问题，请将 Harbor 更新至最新版本（v0.2.26 或更高），新版本已修正了入口点脚本的权限处理方式。","https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fissues\u002F111",{"id":173,"question_zh":174,"answer_zh":175,"source_url":176},44596,"Open WebUI 无法打开或一直加载中，但日志显示服务似乎已启动，该如何排查？","此问题通常由旧版本的兼容性探测机制引起。请尝试拉取最新的 Harbor 版本并重启服务（`sudo .\u002Fharbor.sh up`）。维护者指出，旧版本中 Open WebUI 即使在不兼容的服务未运行时也会不断探测，导致无法正常加载。更新到最新版本通常能解决此问题。","https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fissues\u002F1",{"id":178,"question_zh":179,"answer_zh":180,"source_url":161},44597,"在 AMD 显卡系统上安装 nvidia-container-toolkit 后，WebUI 出现黑屏或无法加载怎么办？","nvidia-container-toolkit 仅用于启用 CUDA GPU 加速，AMD 用户无需也不应安装此工具包，否则会导致 WebUI 加载失败（表现为登录后黑屏）。请卸载该工具包，WebUI 即可恢复正常。如果是 AppImage 启动时出现黑屏或 EGL 错误，可以尝试设置以下环境变量来禁用硬件加速渲染：\n`export WEBKIT_DISABLE_DMABUF_RENDERER=1`\n或者：\n`export WEBKIT_DISABLE_COMPOSITING_MODE=1`",{"id":182,"question_zh":183,"answer_zh":184,"source_url":185},44598,"如何在 Harbor 中添加和使用 Langflow 服务？","Langflow 支持已通过 PR 合并并在后续版本中发布。您可以在 Harbor 的 Wiki 文档中找到详细的安装和使用指南：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.32-Satellite:-LangFlow。确保您的 Harbor 版本已更新到包含该功能的版本，然后按照文档步骤启用 Langflow 卫星服务。","https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fissues\u002F82",[187,192,197,202,207,212,217,222,227,232,237,242,247,252,257,262,267,272,277,282],{"id":188,"version":189,"summary_zh":190,"released_at":191},352023,"v0.4.10","### 其他\n\n- 修复了 SearXNG 的 `settings.yml` 文件未启用 JSON 格式的问题，该问题导致 WebUI 的网页搜索返回 403 错误。\n- 修复了 SearXNG 工作区指向服务模板目录之外的问题。\n- 修复了 `seed-cdi` 脚本错误处理 `.build.yml` 变体文件的问题。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.9...v0.4.10","2026-04-16T23:33:47",{"id":193,"version":194,"summary_zh":195,"released_at":196},352024,"v0.4.9","### 其他\n\n- 新增 Boost 模块 `analogical`：生成类比，用于为 LLM 的回答提供依据。\n- 新增 Boost 模块 `deaf`：以幽默的方式误解用户查询，并对扭曲后的版本作出回应。\n- Open WebUI 现已默认启用原生函数调用模式。\n- 
修复了 CLI 验证逻辑，使其能够接受基于变体的服务。\n- 修复了 `deaf` 模块，使其在处理误听的调用时使用一个干净的中间 LLM 实例。\n\n**完整更新日志**：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.8...v0.4.9","2026-04-16T23:03:26",{"id":198,"version":199,"summary_zh":200,"released_at":201},352032,"v0.4.1","### 其他\n\n- 修复 CLI 版本号错误\n\n**完整更新日志**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.0...v0.4.1","2026-02-09T20:34:21",{"id":203,"version":204,"summary_zh":205,"released_at":206},352031,"v0.4.2","### [OpenFang](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.73-Satellite-OpenFang)\n\n\u003Cimg width=\"2108\" height=\"1225\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F8828be15-a0d7-4b13-869f-9cf58f1bb653\" \u002F>\n\n\n基于 Rust 的代理操作系统，可在计划任务中运行自主 AI 代理，支持 27 家大语言模型提供商和 40 种通道适配器。\n\n```bash\nharbor up openfang\n```\n\n### 其他\n\n- llama.cpp：新增对 ROCm 镜像的支持，并更新了 CPU、NVIDIA 和 ROCm 配置的镜像变量。\n- 改进了 AMD GPU 支持，实现了更精准的 ROCm 自动检测。\n- 添加了迁移系统，并提供 CLI 支持以管理版本过渡。\n- `harbor doctor` 现在无需使用 sudo 即可检查 Docker。\n- 优化了无人值守安装流程，并更新了安装文档。\n- 修复了 `harbor link`、`harbor update` 以及 Docker 检查中的多项问题。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.1...v0.4.2","2026-03-02T22:20:22",{"id":208,"version":209,"summary_zh":210,"released_at":211},352025,"v0.4.8","### [Solo CLI](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.77-Satellite-Solo-CLI)\n\n本地优先的 AI 代理，用于自动化任务并在边缘设备上部署物理 AI，支持 Ollama、vLLM 和 llama.cpp 后端。\n\n```bash\nharbor build solo\nharbor up solo\n```\n\n### [ROS MCP 服务器](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.78-Satellite-ROS-MCP-Server)\n\n一个 MCP 桥接器，允许大型语言模型通过 rosbridge 与机器人操作系统（ROS）节点、话题和服务进行交互。\n\n```bash\nharbor build ros-mcp-server\nharbor up ros-mcp-server\n```\n\n### 其他\n\n* 功能：新增 MiniMax 作为一级云端 LLM 提供商（M2.7 默认），由 @octo-patch 在 https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fpull\u002F221 中实现。\n* 功能：新增 solo-cli 和 ros-mcp-server 
服务，由 @ddiddi 在 https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fpull\u002F227 中实现。\n* 移除已弃用的 VLLM_ATTENTION_BACKEND 环境变量及其相关文档，由 @genevera 在 https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fpull\u002F228 中完成。\n* 对所有服务和后端进行了全面的文档正确性检查。\n* 修复了多个 CLI 边缘场景问题：TTY 处理、统计信息挂起、配置更新数据丢失以及命令冒泡等问题。\n* 修复了重启时保留所有活动服务的问题；同时修复了 SillyTavern 的初始化脚本及 server_urls 合并问题。\n* 添加了基于 WebdriverIO 的 Tauri 端到端自动化测试。\n\n## 新贡献者\n* @octo-patch 在 https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fpull\u002F221 中完成了首次贡献。\n* @ddiddi 在 https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fpull\u002F227 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.7...v0.4.8","2026-04-12T18:55:21",{"id":213,"version":214,"summary_zh":215,"released_at":216},352026,"v0.4.7","### [Hermes 代理](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.76-Satellite-Hermes-Agent)\n\n\u003Cimg width=\"1830\" height=\"1392\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F79d575b4-10d8-46c5-b0ad-59a5de91376c\" \u002F>\n\n由 Nous Research 开发的自主 AI 代理，具备持久化内存、可扩展技能、多平台网关以及与 OpenAI 兼容的 API。\n\n```bash\nharbor up hermes\n```\n\n### 杂项\n\n- 现在可以通过 `harbor llamacpp build on` 从源代码构建 `llamacpp`（支持 CUDA 和 ROCm）。\n- 重写了 `llamacpp` 的文档，更新了模型管理流程。\n- 新增了 `rocm` 和 `build` 作为可识别的服务能力。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.6...v0.4.7","2026-04-03T00:01:21",{"id":218,"version":219,"summary_zh":220,"released_at":221},352027,"v0.4.6","### [SillyTavern](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.1.15-Frontend-SillyTavern)\n\n\u003Cimg width=\"1360\" height=\"891\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fe149ac1f-5217-4041-91d4-d2b38560436d\" \u002F>\n\n功能丰富、可本地安装的 Web 界面，用于与大语言模型交互，支持多种后端、角色设定、背景故事库和扩展插件。\n\n```bash\nharbor up sillytavern\n```\n\n### 其他\n\n- 修复了容器中 llama.cpp 缓存的位置。\n- 在 fixfs 中添加了 Jupyter 
工作区，以正确处理权限。\n- 移除了已弃用的 uWSGI 配置文件和工具设置。\n\n**完整更新日志**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.5...v0.4.6","2026-04-02T18:58:54",{"id":223,"version":224,"summary_zh":225,"released_at":226},352028,"v0.4.5","### Harbor 应用更新\n\n\u003Cimg width=\"1794\" height=\"1421\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fdb8946a1-91d8-42f4-8351-e67dd291760d\" \u002F>\n\n\u003Cimg width=\"1794\" height=\"1421\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fc95c1b7e-69eb-4f2c-88ac-419710cd9940\" \u002F>\n\n\u003Cimg width=\"1794\" height=\"1421\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fdfa8f848-b389-4937-a2a3-4b6b18355c0a\" \u002F>\n\n\n本次发布在 Harbor 桌面应用中引入了多项用户体验优化：\n- 可将常用服务置顶，服务卡片更加简洁清晰；\n- `harbor models` 的用户界面优化——可通过应用直接下载 llama.cpp、HuggingFace 和 Ollama 模型；\n- 全局可用的终端面板，并支持全局快捷键（Ctrl + `）；\n- 服务详情页新增打开并实时查看服务日志的功能；\n- 应用侧边栏显示当前正在运行的服务快捷方式；\n- 新增 `harbor log` 别名，等同于 `harbor logs` 或 `harbor l`；\n- 修复：Harbor 停止时并发执行 `harbor ls` 导致 CPU 使用率过高的问题；\n- 修复：llama.cpp 的模型拉取逻辑现使用 `run_llamacpp_pull`。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.4...v0.4.5","2026-03-16T19:58:18",{"id":228,"version":229,"summary_zh":230,"released_at":231},352029,"v0.4.4","### [打开终端](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.75-Satellite-Open-Terminal)\n\n\u003Cimg width=\"2007\" height=\"1447\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F8a7520db-5e5c-45e3-887e-d2fdd5475b8c\" \u002F>\n\n面向 AI 代理的轻量级远程 Shell 和文件管理 API —— 可与 Open WebUI 开箱即用。\n\n```bash\nharbor up openterminal\n```\n\n### [`harbor models`](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F3.-Harbor-CLI-Reference#harbor-models)\n\n现在，您可以通过一个简洁的 CLI 统一管理 llama.cpp、HuggingFace 和 Ollama 模型。\n\n```bash\n$ ▼ harbor models ls\n来源    模型                            
              大小      详情\nollama  qwen3.5:35b                                    23.9 GB   qwen35moe 36.0B Q4_K_M\nhf      hexgrad\u002FKokoro-82M                             358 MB    \nhf      Systran\u002Ffaster-distil-whisper-large-v3         1.5 GB    \nllamacpp unsloth\u002FQwen3-Next-80B-A3B-Instruct-GGUF:Q4_0  45.3 GB   Q4_0\n\n# 可以使用 jq 等工具以编程方式调用\nharbor models ls --json\n\n# 拉取 Ollama 模型或 HuggingFace 仓库\nharbor models pull qwen3:8b\nharbor models pull bartowski\u002FLlama-3.2-1B-Instruct-GGUF\n\n# 使用 `ls` 中显示的 ID 来移除模型\nharbor models rm qwen3:8b\n```\n\n### 其他\n\n- 添加集成测试，并为 CI 模拟 OpenAI 服务。\n- 修复 default.env 中的工作区路径，使其指向 services 目录。\n- 修复 Open WebUI 启动脚本中 JSON 合并的输出路径。\n- 更新 Fabric 文档并修复 CLI 问题。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.3...v0.4.4","2026-03-13T17:36:17",{"id":233,"version":234,"summary_zh":235,"released_at":236},352030,"v0.4.3","### [Cognee](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.74-Satellite-Cognee)\n\n\u003Cimg width=\"1329\" height=\"1067\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F0fdf894a-9c2a-491f-aec3-f807fef7932d\" \u002F>\n\n一款开源的知识引擎，能够将原始数据转化为持久、动态的 AI 记忆——它结合了向量搜索、图数据库和关系型存储。包含 API 服务器以及用于 IDE 集成的直接模式 MCP 服务器。\n\n```bash\nharbor build cognee\nharbor up cognee\n```\n\n### [`harbor config search \u003Cquery>`](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F3.-Harbor-CLI-Reference#harbor-config-search-query)\n\n内置的配置字段\u002F值搜索功能，可补充原生 `grep` 工作流：\n\n```bash\n$ ▼ h config search cache\nhf.cache                       \u002Fhome\u002Feverlier\u002F.cache\u002Fhuggingface\nllamacpp.cache                 ~\u002F.cache\u002Fllama.cpp\nollama.cache                   ~\u002F.ollama\nvllm.cache                     ~\u002F.cache\u002Fvllm\ntxtai.cache                    ~\u002F.cache\u002Ftxtai\nnexa.cache                     ~\u002F.cache\u002Fnexa\nparllama.cache                 
~\u002F.parllama\nlmeval.cache                   .\u002Flmeval\u002Fcache\n```\n\n### [`harbor config \u003Cservice>`](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F3.-Harbor-CLI-Reference#harbor-config-service)\n\n现在你可以使用 `harbor config` 直接管理指定服务的环境变量。其功能与 `harbor env` 非常相似。\n\n```bash\n$ ▼ h config ollama ls\nOLLAMA_CONTEXT_LENGTH          16384\nOLLAMA_NUM_PARALLEL            2\nOLLAMA_ORIGINS                 *\n\n$ ▼ h config ollama get ollama.context_length\n16384\n```\n\n该命令支持与 [`harbor config`](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F3.-Harbor-CLI-Reference#harbor-config) 相同的命名规范。\n\n### 其他\n\n- `harbor size` — 跳过 opencode 工作区\n- `harbor up` — 启动后打印 URL\n- 修复 Linux 上 Harbor 应用中的“打开”问题\n- 在命令输入拼写错误时提供 CLI 建议\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.4.2...v0.4.3","2026-03-08T11:37:12",{"id":238,"version":239,"summary_zh":240,"released_at":241},352033,"v0.4.0","### ⚠️ Breaking Change: Services Folder Structure\r\n\r\nHarbor v0.4.0 reorganizes the repository structure by moving all service-related files from the root to a dedicated `services\u002F` directory. 
**Existing installations require migration.**\r\n\r\n```bash\r\n# Preview changes\r\nharbor migrate --dry-run\r\n\r\n# Run migration (creates automatic backup)\r\nharbor migrate\r\n```\r\n\r\n### [nanobot](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.71-Satellite-nanobot)\r\n\r\n\u003Cimg width=\"2344\" height=\"440\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F0fa16abb-6368-43f7-a623-7c5d4320de3f\" \u002F>\r\n\r\n\r\nAn ultra-lightweight personal AI assistant with gateway mode for multi-channel messaging (Telegram, Discord, WhatsApp, Feishu).\r\n\r\n```bash\r\nharbor up nanobot\r\n```\r\n\r\nPre-configured to work with most of the inference backends included in Harbor.\r\n\r\n### [Postiz](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.72-Satellite-Postiz)\r\n\r\n\u003Cimg width=\"2501\" height=\"1454\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F7a465615-7f59-4064-9659-569146745bae\" \u002F>\r\n\r\nAn open-source, AI-powered social media scheduling tool that supports X, LinkedIn, Reddit, Facebook, Instagram, and more.\r\n\r\n```bash\r\nharbor up postiz\r\n```\r\n\r\n### Misc\r\n\r\n- Services now live in the `services` folder\r\n- `open-webui` folder renamed to `webui` to match the service handle naming convention\r\n- Documentation improvements and typo corrections.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.41...v0.4.0","2026-02-08T18:01:36",{"id":243,"version":244,"summary_zh":245,"released_at":246},352034,"v0.3.41","### Misc\r\n\r\n- Renamed Moltbot to OpenClaw across compose files, docs, and assets.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.40...v0.3.41\r\n","2026-01-30T11:51:39",{"id":248,"version":249,"summary_zh":250,"released_at":251},352035,"v0.3.40","### Misc\r\n\r\n- Rename Clawdbot to Moltbot (handle changed to 
`moltbot`).\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.39...v0.3.40\r\n","2026-01-27T14:19:18",{"id":253,"version":254,"summary_zh":255,"released_at":256},352036,"v0.3.39","### [Clawdbot](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.69-Satellite-Clawdbot)\r\n\r\n\u003Cimg width=\"2499\" height=\"1452\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F2790f0ae-8abd-4c08-b4cc-1c22918f52c5\" \u002F>\r\n\r\nPersonal AI assistant that connects to your chat channels through a self-hosted gateway.\r\n\r\n```bash\r\nharbor up clawdbot\r\n```\r\n\r\n### Misc\r\n\r\n- Added support for llamacpp model pulls via `harbor pull \u003Cmodel>`, where `\u003Cmodel>` is the same model ID that llama.cpp uses for HuggingFace models\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.38...v0.3.39","2026-01-26T00:33:53",{"id":258,"version":259,"summary_zh":260,"released_at":261},352037,"v0.3.38","### [OpenCode](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.68-Satellite-OpenCode)\r\n\r\n\u003Cimg width=\"2500\" height=\"1386\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F7eca2625-f972-48ce-849a-e26cb8398fd6\" \u002F>\r\n\r\nAI-powered coding assistant with server API, terminal UI, and multi-provider LLM support.\r\n\r\n```bash\r\nharbor up opencode\r\n```\r\n\r\n### Misc\r\n\r\n- **TypeScript-based compose transforms** for dynamic configuration\r\n- Update PostgreSQL data volume path in litellm compose file by @genevera in https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fpull\u002F208\r\n- Service logos added to App sidebar\r\n- LiteLLM PostgreSQL pinned to v18 for stability\r\n- Routines path discovery fixes\r\n- Documentation updates for router mode in `llamacpp`\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.37...v0.3.38","2026-01-17T15:32:10",{"id":263,"version":264,"summary_zh":265,"released_at":266},352038,"v0.3.37","### Misc\r\n\r\n- Added logprobs workflow module for Harbor Boost with interactive HTML visualization.\r\n- Updated CLI Reference documentation to reflect harbor.sh commands.\r\n- Fixed ask harbor link in textclip badges.\r\n- `landing` service for Harbor's future landing page\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.35...v0.3.37","2026-01-10T18:45:32",{"id":268,"version":269,"summary_zh":270,"released_at":271},352039,"v0.3.35","### [Photoprism](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.66-Satellite-PhotoPrism)\r\n\r\n\u003Cimg width=\"1665\" height=\"1016\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F0781d2d1-8ebd-4dd6-b8a8-8ae26e31efe5\" \u002F>\r\n\r\nAI-powered photo management app with face recognition, image classification, and automatic organization.\r\n\r\n```bash\r\nharbor up photoprism\r\n```\r\n\r\n### [Khoj](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.67-Satellite-Khoj)\r\n\r\n\u003Cimg width=\"2498\" height=\"1407\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F02b1a386-ee55-4d44-bcd6-4329cb3615fb\" \u002F>\r\n\r\nOpen-source personal AI \"second brain\" with chat, semantic search, and RAG for your documents.\r\n\r\n```bash\r\nharbor up khoj\r\n```\r\n\r\n### Misc\r\n\r\n- Better initial setup for Omarchy: NVIDIA Container Toolkit installation instructions, Docker Compose version detection for modern non-semver versions\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.34...v0.3.35","2026-01-06T18:11:17",{"id":273,"version":274,"summary_zh":275,"released_at":276},352040,"v0.3.34","### 
[DeerFlow](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.65-Satellite-DeerFlow)\r\n\r\n\u003Cimg width=\"2497\" height=\"1412\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fff0db57b-8d01-4ddb-89e3-8f4a7e05ef75\" \u002F>\r\n\r\nDeerFlow is a community-driven deep research framework that combines LLMs with web search, web crawling, and multi-agent workflows to generate comprehensive research reports.\r\n\r\n```bash\r\nharbor up deerflow\r\n```\r\n\r\n### [ActivePieces](https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fwiki\u002F2.3.64-Satellite-Activepieces)\r\n\r\n\u003Cimg width=\"2499\" height=\"1372\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fc9250210-78e6-444c-bfea-64a25eca35e3\" \u002F>\r\n\r\nactivepieces is an open-source workflow automation tool that allows you to connect apps and automate workflows with AI capabilities. It provides a visual workflow builder similar to Zapier or n8n, with 200+ app connectors and built-in AI features.\r\n\r\n```bash\r\nharbor up activepieces\r\n```\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.33...v0.3.34","2026-01-04T22:33:22",{"id":278,"version":279,"summary_zh":280,"released_at":281},352041,"v0.3.33","### AstrBot \r\n\r\n\u003Cimg width=\"2502\" height=\"1410\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fdfb409cf-95eb-44d8-839e-c97457ae411f\" \u002F>\r\n\r\nProbably the easiest way to bring your local LLM into Discord\u002FSlack and other messengers.\r\n\r\n```bash\r\nharbor up astrbot\r\n```\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.32...v0.3.33","2026-01-03T17:52:22",{"id":283,"version":284,"summary_zh":285,"released_at":286},352042,"v0.3.32","### MCP Forge\r\n\r\n\u003Cimg width=\"2501\" height=\"1409\" alt=\"image\" 
src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F92f1eaea-6886-401a-b0dd-4a7fcbb48e0c\" \u002F>\r\n\r\nA decent MCP Proxy \u002F Gateway for managing many MCPs at once, with extra functionality for agents and auth management.\r\n\r\n```bash\r\nharbor up mcpforge\r\n```\r\n\r\n### Misc\r\n\r\n- `harbor llamacpp models` - queries the `\u002Fv1\u002Fmodels` endpoint of the running `llamacpp` service via curl\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fav\u002Fharbor\u002Fcompare\u002Fv0.3.31...v0.3.32","2026-01-03T14:58:49"]