[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-eclaire-labs--eclaire":3,"tool-eclaire-labs--eclaire":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":110,"forks":111,"last_commit_at":112,"license":113,"difficulty_score":10,"env_os":114,"env_gpu":115,"env_ram":115,"env_deps":116,"category_tags":119,"github_topics":120,"view_count":140,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":141,"updated_at":142,"faqs":143,"releases":149},124,"eclaire-labs\u002Feclaire","eclaire","Local-first, open-source AI assistant for your data. Unify tasks, notes, docs, photos, and bookmarks. 
Private, self-hosted, and extensible via APIs.","Eclaire 是一个注重隐私的开源 AI 助手，专为本地优先（local-first）设计，帮助用户统一管理任务、笔记、文档、照片和书签等个人数据。它将各类信息集中在一个私有环境中，通过 AI 实现智能搜索、内容理解、分类、OCR 和自动化操作，避免依赖云端服务或封闭生态。\n\nEclaire 解决了当前主流 AI 工具普遍存在的隐私风险与数据分散问题，让用户在完全掌控数据的前提下享受 AI 带来的效率提升。它适合重视数据隐私的技术爱好者、开发者、研究人员以及希望自托管 AI 助手的个人用户。\n\n技术上，Eclaire 支持多种本地运行的 AI 模型（如 Qwen3-VL、llama.cpp），提供 SQLite 或 PostgreSQL 数据库选项，并可通过 API 扩展功能。最新版本简化了部署流程，支持单容器运行前后端及任务处理模块，同时推荐配合 Tailscale 或 Cloudflare Tunnel 等安全方案使用，确保私有环境下的稳定与安全。","\u003C!-- README.md -->\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_4b1ff112b780.png\" alt=\"Eclaire Logo\" width=\"400\" \u002F>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">ECLAIRE\u003C\u002Fh1>\n\n\u003Ch3 align=\"center\">\u003Cem>Privacy-focused AI assistant for your data\u003C\u002Fem>\u003C\u002Fh3>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\".\u002FLICENSE\">\u003Cimg alt=\"License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Feclaire-labs\u002Feclaire\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire\u002Freleases\">\u003Cimg alt=\"Release\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Feclaire-labs\u002Feclaire?sort=semver\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\u002Fdocs\">\u003Cimg alt=\"Docs\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-eclaire.co%2Fdocs-informational\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FJiBnoTmev0w\">\u003Cimg alt=\"Watch demo\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWatch%20demo-YouTube-red?logo=youtube\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" id=\"demo\">\n  \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FJiBnoTmev0w\" target=\"_blank\" rel=\"noopener\">\n    \u003Cimg\n      
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_2e8086169743.gif\"\n      alt=\"Eclaire demo preview (click to watch on YouTube)\"\n      width=\"900\"\n    \u002F>\n  \u003C\u002Fa>\n  \u003Cbr\u002F>\n  \u003Csub>\u003Cem>Click to watch on YouTube\u003C\u002Fem>\u003C\u002Fsub>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#features\">Features\u003C\u002Fa> •\n  \u003Ca href=\"#installation\">Installation\u003C\u002Fa> •\n  \u003Ca href=\"#selecting-models\">Selecting Models\u003C\u002Fa> •\n  \u003Ca href=\"#architecture\">Architecture\u003C\u002Fa> •\n  \u003Ca href=\"#development\">Development\u003C\u002Fa> •\n  \u003Ca href=\"#contributing\">Contributing\u003C\u002Fa> •\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\u002Fdocs\">Docs\u003C\u002Fa> •\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\u002Fdocs\u002Fapi\">API\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n## ⚠️ Important Notices\n\n> [!IMPORTANT]  \n> **Pre-release \u002F Development Status**  \n> Eclaire is currently in pre-release and under active development.  \nExpect frequent updates, breaking changes, and evolving APIs\u002Fconfiguration.  \n> If you deploy it, please **backup your data regularly** and review release notes carefully before upgrading.\n\n> [!WARNING]  \n> **Security Warning**  \n> Do **NOT** expose Eclaire directly to the public internet.  \n> This project is designed to be self-hosted with privacy and security in mind, but it is **not hardened for direct exposure**.  
\n>  \n> We strongly recommend placing it behind additional security layers such as:  \n> - [Tailscale](https:\u002F\u002Ftailscale.com\u002F) or other private networks\u002FVPNs  \n> - [Cloudflare Tunnels](https:\u002F\u002Fdevelopers.cloudflare.com\u002Fcloudflare-one\u002Fconnections\u002Fconnect-apps\u002F)  \n> - A reverse proxy with authentication\n\n---\n\n## Description\n\n**Eclaire** is a local-first, open-source AI that organizes, answers, and automates across tasks, notes, documents, photos, bookmarks and more.\n\nThere are a lot of existing frameworks and libraries enabling various AI capabilities; few deliver a complete product allowing users to get things done. Eclaire assembles proven building blocks into a cohesive, privacy-preserving solution you can run yourself.\n\nWith AI gaining rapid adoption, there is a growing need for alternatives to closed ecosystems and hosted models, especially for personal, private, or otherwise sensitive data.\n\n- **Self-hosted**  - runs entirely on your hardware with local models and data storage\n- **Unified data**  - one place where AI can see and connect everything\n- **AI-powered**  - content understanding, search, classification, OCR, and automation\n- **Open source**  - transparent, extensible, and community-driven\n\n### What's New in v0.6.0\n\n- **Unified deployment**: frontend + backend + workers can run in a single container\n- **Simplified Self-Hosting**  - new one-command `setup.sh` flow, plus a streamlined `compose.yaml`\n- **Better AI Support**  - New vision models (including Qwen3-VL), llama.cpp router, improved MLX support.\n- **Modern Frontend**  - Migrated from Next.js to Vite + TanStack Router\n- **SQLite Support**: Full SQLite database support alongside Postgres for simpler workloads\n- **Database Queue Mode**: Support Postgres or SQLite for job processing instead of Redis\u002FBullMQ\n- **New Admin CLI**  - Manage your instance from the command line\n\nSee the [CHANGELOG](.\u002FCHANGELOG.md) 
for full details.\n  \n## Features\n- **Cross-platform**: macOS, Linux and Windows. \n- **Private by default**: By default all AI models run locally, all data is stored locally.\n- **Unified data**: Manage across tasks, notes, documents, photos, bookmarks and more.\n- **AI conversations**: chat with context from your content; see sources for answers; supports streaming and thinking tokens.\n- **AI tool calling**: The assistant has tools to search data, open content, resolve tasks, add comments, create notes, and more.\n- **Flexible deployment**: Run as a single unified container or separate services. SQLite or Postgres. Database queue or Redis. *(See [Architecture](#architecture) section below.)*\n- **Full API**: OpenAI-compatible REST endpoints with session tokens or API keys. [API Docs](https:\u002F\u002Feclaire.co\u002Fdocs\u002Fapi)\n- **Model backends**: works with llama.cpp, vLLM, mlx-lm\u002Fmlx-vlm, LM Studio, Ollama, and more via the standard OpenAI-compatible API. *(See [Selecting Models](#selecting-models).)*\n- **Model support**: text and vision models from Qwen, Gemma, DeepSeek, Mistral, Kimi, and others. *(See [Selecting Models](#selecting-models).)*\n- **Storage**: all assets (uploaded or generated) live in Postgres or file\u002Fobject storage.\n- **Integrations**: Telegram (more channels coming).\n- **Documents**: PDF, DOC\u002FDOCX, PPT\u002FPPTX, XLS\u002FXLSX, ODT\u002FODP\u002FODS, MD, TXT, RTF, Pages, Numbers, Keynote, HTML, CSV, and more.\n- **Photos\u002FImages**: JPG\u002FJPEG, PNG, SVG, WebP, HEIC\u002FHEIF, AVIF, GIF, BMP, TIFF, and more.\n- **Tasks**: track user tasks or assign tasks for the AI assistant to complete; the assistant can add comments to tasks or write to separate docs.\n- **Notes**: plain text or Markdown format. Links to other assets.\n- **Bookmarks**: Fetches bookmarks and creates PDF, Readable and LLM-friendly versions. 
Special handling for GitHub and Reddit APIs and metadata.\n- **Organization**: Tags, pin, flag, due dates, etc. across all asset types.\n- **Hardware acceleration**: takes advantage of Apple MLX, NVIDIA CUDA, and other platform-specific optimizations.\n- **Mobile & PWA**: installable PWA; iOS & Apple Watch via Shortcuts; Android via Tasker\u002FMacroDroid.\n\n## Sample use cases\n- Dictate notes using Apple Watch (or other smartwatch).\n- Save bookmarks to read later; generate clean “readable” and PDF versions.\n- Create readable and PDF versions of websites.\n- Extract text from photos and document images (OCR).\n- Bulk-convert photos from HEIC to JPG.\n- Analyze, categorize, and search documents and photos with AI.\n- Create LLM-friendly text\u002FMarkdown versions of documents and bookmarks.\n- Save interesting content (web pages, photos, documents) from phone, tablet, or desktop.\n- Ask AI to find or summarize information across your data.\n- Schedule automations (e.g., “Every Monday morning, summarize my tasks for the week.”).\n- Chat with AI from web, mobile, Telegram, and other channels.\n- Process sensitive information (bank, health, etc.) 
privately on local models.\n- De-clutter your desktop by bulk-uploading and letting AI sort and tag.\n- Migrate data from Google\u002FApple and other vendors into an open, self-hosted platform under your control.\n\n## Screenshots\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_288f80eb199c.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_288f80eb199c.png\" alt=\"Dashboard View\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_f642753aa599.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_f642753aa599.png\" alt=\"Photo OCR\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_fd386ed1aa85.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_fd386ed1aa85.png\" alt=\"Main Dashboard\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_8acc35cd85db.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_8acc35cd85db.png\" alt=\"AI Assistant\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\n## Installation\n\n### Prerequisites\n\n- **Docker** and **Docker Compose**\n- **A local LLM server**  - [llama.cpp](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp) recommended\n\n### Quick Start\n\n```bash\nmkdir eclaire && cd eclaire\ncurl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002Feclaire-labs\u002Feclaire\u002Fmain\u002Fsetup.sh | sh\n```\n\nThe script 
will:\n1. Download configuration files\n2. Generate secrets automatically\n3. Initialize the database (PostgreSQL)\n\nAfter setup completes:\n\n```bash\n# 1. Start your LLM servers (in separate terminals)\n#    Models download automatically on first run if not already cached\nllama-server -hf unsloth\u002FQwen3-14B-GGUF:Q4_K_XL --ctx-size 16384 --port 11500\nllama-server -hf unsloth\u002Fgemma-3-4b-it-qat-GGUF:Q4_K_XL --ctx-size 16384 --port 11501\n\n# 2. Start Eclaire\ndocker compose up -d\n```\nOpen http:\u002F\u002Flocalhost:3000 and click \"Sign up\" to create your account.\n\nSee [AI Model Configuration](docs\u002Fai-models.md) to use other AI providers and models.\n\n\n\n### Configuration\n\nConfiguration lives in two places:\n- **`.env`**  - secrets, database settings, ports\n- **`config\u002Fai\u002F`**  - LLM provider URLs and model definitions\n\n\n### Stopping\n\n```bash\ndocker compose down\n```\n\n\n## Selecting Models\n\nEclaire uses AI models for two purposes:\n- **Backend**: Powers the chat assistant (requires good tool calling)\n- **Workers**: Processes documents and images (requires vision capability)\n\n> **Apple Silicon**: Mac users can leverage MLX for optimized local inference. See the [configuration guide](docs\u002Fai-models.md#mlx-on-apple-silicon) for details.\n\nUse the CLI to manage models:\n\n```bash\ndocker compose run --rm eclaire model list\n```\n\nSee [AI Model Configuration](docs\u002Fai-models.md) for detailed setup and model recommendations. 
\n\n## Architecture\n\nEclaire follows a modular architecture with clear separation between the frontend, backend API, background workers, and data layers.\n\n**📋 [View detailed architecture diagram →](docs\u002Farchitecture.md)**\n\n### Key Components\n- **Frontend**: Vite web application with React 19, TanStack Router, and Radix UI\n- **Backend API**: Node.js\u002FHono server with REST APIs\n- **Background Workers**: Job processing and scheduling (runs unified with backend by default)\n- **Data Layer**: PostgreSQL (recommended) or SQLite for persistence; database or Redis for job queue\n- **AI Services**: Local LLM backends (llama.cpp, MLX, LM Studio, etc.) for inference; Docling for document processing\n- **External Integrations**: GitHub and Reddit APIs for bookmark fetching\n\n## Roadmap\n- Support for more data sources and integrations\n- More robust full text indexing and search\n- Better extensibility and plugin system\n- Improved AI capabilities and model support\n- Evals for models and content pipelines\n- More hardening and security\n- Top requests from the community\n\n## Development\n\nFor contributors who want to build from source.\n\n### Additional Prerequisites\n\nBeyond Docker and an LLM server, you'll need:\n\n- **Node.js ≥ 24** with corepack enabled\n- **pnpm** (managed via corepack)\n\n**Document\u002Fimage processing tools:**\n\n**macOS:**\n```bash\nbrew install --cask libreoffice\nbrew install poppler graphicsmagick imagemagick ghostscript libheif\n```\n\n**Ubuntu\u002FDebian:**\n```bash\nsudo apt-get install libreoffice poppler-utils graphicsmagick imagemagick ghostscript libheif-examples\n```\n\n### Setup\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire.git\ncd eclaire\ncorepack enable\npnpm setup:dev\npnpm dev\n```\n\nAccess the application:\n- Frontend: http:\u002F\u002Flocalhost:3000\n- Backend: http:\u002F\u002Flocalhost:3001\u002Fhealth\n\n### Building Docker Locally\n\nTo build and test custom Docker 
images:\n\n```bash\n.\u002Fscripts\u002Fbuild.sh\ndocker compose -f compose.yaml -f compose.dev.yaml -f compose.local.yaml up -d\n```\n\n## Contributing\nWe 💙 contributions! Please read the Contributing Guide.\n\n## Security\nSee [SECURITY.md](.\u002FSECURITY.md) for our policy.\n\n## Telemetry\nThere should be no telemetry in the Eclaire code, although 3rd-party dependencies may include some. If you find an instance where that is the case, let us know.\n\n## Community & Support\nIssues: [GitHub Issues](https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire\u002Fissues)\n","\u003C!-- README.md -->\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_4b1ff112b780.png\" alt=\"Eclaire Logo\" width=\"400\" \u002F>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">ECLAIRE\u003C\u002Fh1>\n\n\u003Ch3 align=\"center\">\u003Cem>专注于隐私的 AI 助手，专为你的数据打造\u003C\u002Fem>\u003C\u002Fh3>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\".\u002FLICENSE\">\u003Cimg alt=\"License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Feclaire-labs\u002Feclaire\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire\u002Freleases\">\u003Cimg alt=\"Release\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Feclaire-labs\u002Feclaire?sort=semver\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\u002Fdocs\">\u003Cimg alt=\"Docs\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-eclaire.co%2Fdocs-informational\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FJiBnoTmev0w\">\u003Cimg alt=\"Watch demo\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWatch%20demo-YouTube-red?logo=youtube\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\" id=\"demo\">\n  \u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FJiBnoTmev0w\" 
target=\"_blank\" rel=\"noopener\">\n    \u003Cimg\n      src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_2e8086169743.gif\"\n      alt=\"Eclaire 演示预览（点击在 YouTube 上观看）\"\n      width=\"900\"\n    \u002F>\n  \u003C\u002Fa>\n  \u003Cbr\u002F>\n  \u003Csub>\u003Cem>点击在 YouTube 上观看\u003C\u002Fem>\u003C\u002Fsub>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#features\">功能\u003C\u002Fa> •\n  \u003Ca href=\"#installation\">安装\u003C\u002Fa> •\n  \u003Ca href=\"#selecting-models\">选择模型\u003C\u002Fa> •\n  \u003Ca href=\"#architecture\">架构\u003C\u002Fa> •\n  \u003Ca href=\"#development\">开发\u003C\u002Fa> •\n  \u003Ca href=\"#contributing\">贡献\u003C\u002Fa> •\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\u002Fdocs\">文档\u003C\u002Fa> •\n  \u003Ca href=\"https:\u002F\u002Feclaire.co\u002Fdocs\u002Fapi\">API\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n## ⚠️ 重要提示\n\n> [!IMPORTANT]  \n> **预发布 \u002F 开发状态**  \n> Eclaire 目前处于预发布阶段，并正在积极开发中。  \n> 请预期频繁更新、破坏性变更以及不断演进的 API 和配置。  \n> 如果你部署了它，请务必**定期备份你的数据**，并在升级前仔细阅读发布说明。\n\n> [!WARNING]  \n> **安全警告**  \n> **切勿**将 Eclaire 直接暴露在公共互联网上。  \n> 本项目设计为以隐私和安全为核心进行自托管，但**并未针对直接暴露进行加固**。  \n>  \n> 我们强烈建议将其置于额外的安全层之后，例如：  \n> - [Tailscale](https:\u002F\u002Ftailscale.com\u002F) 或其他私有网络\u002FVPN  \n> - [Cloudflare Tunnels](https:\u002F\u002Fdevelopers.cloudflare.com\u002Fcloudflare-one\u002Fconnections\u002Fconnect-apps\u002F)  \n> - 带身份验证的反向代理\n\n---\n\n## 描述\n\n**Eclaire** 是一个以本地优先（local-first）、开源的 AI 系统，可跨任务、笔记、文档、照片、书签等组织信息、回答问题并实现自动化。\n\n目前已有大量框架和库支持各种 AI 能力；但很少有能提供完整产品体验，让用户真正高效完成任务的解决方案。Eclaire 将经过验证的构建模块整合为一个连贯、注重隐私的解决方案，你可以自行运行。\n\n随着 AI 的快速普及，人们越来越需要替代封闭生态和托管模型的方案，尤其是处理个人、私密或敏感数据时。\n\n- **自托管（Self-hosted）** – 完全在你的硬件上运行，使用本地模型和本地数据存储  \n- **统一数据（Unified data）** – AI 可在此处查看并关联所有内容  \n- **AI 驱动（AI-powered）** – 支持内容理解、搜索、分类、OCR（光学字符识别）和自动化  \n- **开源（Open source）** – 透明、可扩展、社区驱动  \n\n### v0.6.0 新特性\n\n- **统一部署**：前端 + 后端 + 工作进程可在单个容器中运行  \n- **简化自托管** – 新增一键式 `setup.sh` 流程，以及简化的 
`compose.yaml`  \n- **更好的 AI 支持** – 新增视觉模型（包括 Qwen3-VL）、llama.cpp 路由器、改进的 MLX 支持  \n- **现代化前端** – 从 Next.js 迁移至 Vite + TanStack Router  \n- **SQLite 支持**：除 Postgres 外，新增对 SQLite 数据库的完整支持，适用于更轻量的工作负载  \n- **数据库队列模式**：支持使用 Postgres 或 SQLite 进行任务处理，无需 Redis\u002FBullMQ  \n- **全新管理 CLI** – 可通过命令行管理你的实例  \n\n完整详情请参阅 [CHANGELOG](.\u002FCHANGELOG.md)。\n\n## 功能\n- **跨平台**：支持 macOS、Linux 和 Windows。  \n- **默认私有**：默认情况下所有 AI 模型均在本地运行，所有数据均存储在本地。  \n- **统一数据**：跨任务、笔记、文档、照片、书签等内容统一管理。  \n- **AI 对话**：基于你的内容上下文进行聊天；答案附带来源；支持流式输出和“思考”标记（thinking tokens）。  \n- **AI 工具调用**：助手具备多种工具，可用于搜索数据、打开内容、完成任务、添加评论、创建笔记等。  \n- **灵活部署**：可作为单一统一容器运行，也可拆分为独立服务；支持 SQLite 或 Postgres；支持数据库队列或 Redis。（详见下方 [架构](#architecture) 部分。）  \n- **完整 API**：提供与 OpenAI 兼容的 REST 接口，支持会话令牌或 API 密钥。[API 文档](https:\u002F\u002Feclaire.co\u002Fdocs\u002Fapi)  \n- **模型后端**：通过标准 OpenAI 兼容 API，支持 llama.cpp、vLLM、mlx-lm\u002Fmlx-vlm、LM Studio、Ollama 等。（详见 [选择模型](#selecting-models)。）  \n- **模型支持**：支持来自 Qwen、Gemma、DeepSeek、Mistral、Kimi 等的文本和视觉模型。（详见 [选择模型](#selecting-models)。）  \n- **存储**：所有资产（上传或生成）均存储于 Postgres 或文件\u002F对象存储中。  \n- **集成**：支持 Telegram（更多渠道即将推出）。  \n- **文档格式**：PDF、DOC\u002FDOCX、PPT\u002FPPTX、XLS\u002FXLSX、ODT\u002FODP\u002FODS、MD、TXT、RTF、Pages、Numbers、Keynote、HTML、CSV 等。  \n- **照片\u002F图像格式**：JPG\u002FJPEG、PNG、SVG、WebP、HEIC\u002FHEIF、AVIF、GIF、BMP、TIFF 等。  \n- **任务管理**：可跟踪用户任务，或分配任务给 AI 助手完成；助手可为任务添加评论或将内容写入独立文档。  \n- **笔记**：支持纯文本或 Markdown 格式，可链接到其他资产。  \n- **书签**：自动抓取书签，并生成 PDF、可读版本及适合 LLM（大语言模型）处理的版本；对 GitHub 和 Reddit 提供特殊 API 和元数据处理。  \n- **组织管理**：支持标签、置顶、标记、截止日期等功能，适用于所有资产类型。  \n- **硬件加速**：充分利用 Apple MLX、NVIDIA CUDA 等平台特定优化。  \n- **移动端与 PWA**：支持安装为 PWA；通过 iOS 快捷指令（Shortcuts）支持 iPhone 和 Apple Watch；通过 Tasker\u002FMacroDroid 支持 Android。\n\n## 典型使用场景\n- 使用 Apple Watch（或其他智能手表）口述笔记。\n- 保存稍后阅读的书签；生成干净的“可读”版本和 PDF 版本。\n- 为网站创建可读版本和 PDF 版本。\n- 从照片和文档图像中提取文字（OCR，光学字符识别）。\n- 批量将 HEIC 格式的照片转换为 JPG。\n- 利用 AI 分析、分类并搜索文档和照片。\n- 将文档和书签转换为适合大语言模型（LLM）处理的文本\u002FMarkdown 格式。\n- 从手机、平板或桌面设备保存感兴趣的内容（网页、照片、文档等）。\n- 让 AI 
在你的数据中查找或总结信息。\n- 安排自动化任务（例如：“每周一早上，总结我本周的任务。”）。\n- 通过网页、移动端、Telegram 等渠道与 AI 聊天。\n- 在本地模型上私密处理敏感信息（如银行、健康等数据）。\n- 批量上传桌面文件，让 AI 自动整理和打标签，清理桌面杂乱。\n- 将 Google\u002FApple 等厂商的数据迁移到一个开放、可自托管且由你完全掌控的平台。\n\n## 截图\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_288f80eb199c.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_288f80eb199c.png\" alt=\"仪表盘视图\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_f642753aa599.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_f642753aa599.png\" alt=\"照片 OCR\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_fd386ed1aa85.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_fd386ed1aa85.png\" alt=\"主仪表盘\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_8acc35cd85db.png\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_readme_8acc35cd85db.png\" alt=\"AI 助手\" width=\"400\"\u002F>\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\n## 安装\n\n### 先决条件\n\n- **Docker** 和 **Docker Compose**\n- **本地 LLM（大语言模型）服务器** —— 推荐使用 [llama.cpp](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp)\n\n### 快速开始\n\n```bash\nmkdir eclaire && cd eclaire\ncurl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002Feclaire-labs\u002Feclaire\u002Fmain\u002Fsetup.sh | sh\n```\n\n该脚本将：\n1. 下载配置文件\n2. 自动生成密钥（secrets）\n3. 初始化数据库（PostgreSQL）\n\n设置完成后：\n\n```bash\n# 1. 
启动你的 LLM 服务器（在单独的终端中运行）\n#    如果模型尚未缓存，首次运行时会自动下载\nllama-server -hf unsloth\u002FQwen3-14B-GGUF:Q4_K_XL --ctx-size 16384 --port 11500\nllama-server -hf unsloth\u002Fgemma-3-4b-it-qat-GGUF:Q4_K_XL --ctx-size 16384 --port 11501\n\n# 2. 启动 Eclaire\ndocker compose up -d\n```\n打开 http:\u002F\u002Flocalhost:3000 并点击“注册”以创建账户。\n\n如需使用其他 AI 提供商和模型，请参阅 [AI 模型配置](docs\u002Fai-models.md)。\n\n### 配置\n\n配置文件位于两个位置：\n- **`.env`** —— 存放密钥、数据库设置、端口等\n- **`config\u002Fai\u002F`** —— LLM 提供商的 URL 和模型定义\n\n### 停止服务\n\n```bash\ndocker compose down\n```\n\n\n## 模型选择\n\nEclaire 使用 AI 模型实现两类功能：\n- **后端（Backend）**：驱动聊天助手（需要良好的工具调用能力）\n- **工作器（Workers）**：处理文档和图像（需要视觉能力）\n\n> **Apple Silicon**：Mac 用户可利用 MLX 实现优化的本地推理。详情请参阅 [配置指南](docs\u002Fai-models.md#mlx-on-apple-silicon)。\n\n使用 CLI 管理模型：\n\n```bash\ndocker compose run --rm eclaire model list\n```\n\n有关详细设置和模型推荐，请参阅 [AI 模型配置](docs\u002Fai-models.md)。\n\n## 架构\n\nEclaire 采用模块化架构，前端、后端 API、后台工作器和数据层之间职责清晰。\n\n**📋 [查看详细架构图 →](docs\u002Farchitecture.md)**\n\n### 核心组件\n- **前端**：基于 Vite 的 Web 应用，使用 React 19、TanStack Router 和 Radix UI\n- **后端 API**：基于 Node.js\u002FHono 的 REST API 服务器\n- **后台工作器**：负责任务处理和调度（默认与后端统一运行）\n- **数据层**：推荐使用 PostgreSQL，也可使用 SQLite 进行持久化；任务队列使用数据库或 Redis\n- **AI 服务**：本地 LLM 后端（如 llama.cpp、MLX、LM Studio 等）用于推理；Docling 用于文档处理\n- **外部集成**：通过 GitHub 和 Reddit API 获取书签内容\n\n## 路线图\n- 支持更多数据源和集成\n- 更强大的全文索引和搜索功能\n- 更好的可扩展性和插件系统\n- 增强 AI 能力和模型支持\n- 为模型和内容处理管道添加评估（Evals）\n- 加强安全性和系统加固\n- 实现社区最高票请求的功能\n\n## 开发\n\n适用于希望从源码构建的贡献者。\n\n### 额外先决条件\n\n除了 Docker 和 LLM 服务器外，你还需要：\n\n- **Node.js ≥ 24**，并启用 corepack\n- **pnpm**（通过 corepack 管理）\n\n**文档\u002F图像处理工具：**\n\n**macOS:**\n```bash\nbrew install --cask libreoffice\nbrew install poppler graphicsmagick imagemagick ghostscript libheif\n```\n\n**Ubuntu\u002FDebian:**\n```bash\nsudo apt-get install libreoffice poppler-utils graphicsmagick imagemagick ghostscript libheif-examples\n```\n\n### 设置\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire.git\ncd 
eclaire\ncorepack enable\npnpm setup:dev\npnpm dev\n```\n\n访问应用：\n- 前端：http:\u002F\u002Flocalhost:3000\n- 后端：http:\u002F\u002Flocalhost:3001\u002Fhealth\n\n### 本地构建 Docker 镜像\n\n如需构建并测试自定义 Docker 镜像：\n\n```bash\n.\u002Fscripts\u002Fbuild.sh\ndocker compose -f compose.yaml -f compose.dev.yaml -f compose.local.yaml up -d\n```\n\n## 贡献\n我们 💙 欢迎各种贡献！请阅读贡献指南。\n\n## 安全\n请参阅 [SECURITY.md](.\u002FSECURITY.md) 了解我们的安全策略。\n\n## 遥测\nEclaire 代码中不应包含任何遥测功能，尽管第三方依赖项可能包含。如发现此类情况，请告知我们。\n\n## 社区与支持\n问题反馈：[GitHub Issues](https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire\u002Fissues)","# Eclaire 快速上手指南\n\nEclaire 是一个注重隐私的开源 AI 助手，支持本地运行，可统一管理任务、笔记、文档、图片、书签等数据，并通过本地大模型实现智能问答与自动化。\n\n---\n\n## 环境准备\n\n### 系统要求\n- 支持 macOS、Linux 和 Windows\n- 推荐使用 Apple Silicon（M1\u002FM2\u002FM3）或 NVIDIA GPU 以获得硬件加速\n\n### 前置依赖\n- **Docker** 和 **Docker Compose**\n- **本地 LLM 服务**（推荐 [llama.cpp](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp)）\n- （可选）如需处理文档\u002F图片，还需安装以下工具：\n  - **macOS**:\n    ```bash\n    brew install --cask libreoffice\n    brew install poppler graphicsmagick imagemagick ghostscript libheif\n    ```\n  - **Ubuntu\u002FDebian**:\n    ```bash\n    sudo apt-get install libreoffice poppler-utils graphicsmagick imagemagick ghostscript libheif-examples\n    ```\n\n> 💡 国内用户建议配置 Docker 镜像加速器（如阿里云、中科大镜像源）以提升拉取速度。\n\n---\n\n## 安装步骤\n\n1. 创建项目目录并运行一键安装脚本：\n   ```bash\n   mkdir eclaire && cd eclaire\n   curl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002Feclaire-labs\u002Feclaire\u002Fmain\u002Fsetup.sh | sh\n   ```\n   脚本将自动：\n   - 下载配置文件\n   - 生成密钥\n   - 初始化 PostgreSQL 数据库\n\n2. 启动本地 LLM 服务（在单独终端中运行，模型首次使用时会自动下载）：\n   ```bash\n   # 示例：启动两个本地模型（文本 + 视觉）\n   llama-server -hf unsloth\u002FQwen3-14B-GGUF:Q4_K_XL --ctx-size 16384 --port 11500\n   llama-server -hf unsloth\u002Fgemma-3-4b-it-qat-GGUF:Q4_K_XL --ctx-size 16384 --port 11501\n   ```\n\n3. 启动 Eclaire 服务：\n   ```bash\n   docker compose up -d\n   ```\n\n4. 
打开浏览器访问 http:\u002F\u002Flocalhost:3000，点击 “Sign up” 创建账户即可开始使用。\n\n> ⚠️ 注意：请勿将 Eclaire 直接暴露在公网。建议通过 Tailscale、Cloudflare Tunnel 或带认证的反向代理访问。\n\n---\n\n## 基本使用\n\n### 最简使用流程\n1. **上传内容**：在 Web 界面上传文档（PDF、Word 等）、图片（JPG、PNG 等）或添加书签。\n2. **与 AI 对话**：在聊天界面提问，例如：\n   - “总结我上周的笔记”\n   - “找出所有包含发票的图片”\n   - “把这张 HEIC 照片转成 JPG”\n3. **查看结果**：AI 会基于你的私有数据回答，并标注信息来源。\n\n### 管理模型（可选）\n列出当前可用模型：\n```bash\ndocker compose run --rm eclaire model list\n```\n\n更多模型配置（如使用 Ollama、LM Studio 或 MLX）请参考官方文档：[AI Model Configuration](https:\u002F\u002Feclaire.co\u002Fdocs\u002Fai-models)\n\n---\n\n> 📌 提示：Eclaire 目前处于预发布阶段，升级前请务必备份数据并查阅 [CHANGELOG](.\u002FCHANGELOG.md)。","一位独立开发者正在整理自己过去三年积累的项目资料，包括零散的笔记、本地文档、截图、书签和待办任务，准备为新客户复用部分模块。\n\n### 没有 eclaire 时\n- 笔记存在 Obsidian，文档在本地文件夹，截图散落在 Downloads 和桌面，书签在浏览器里，任务又记在 Todoist，信息割裂严重。\n- 想找某次调试的截图或相关设计思路，只能靠模糊记忆手动翻找，效率极低。\n- 不同格式内容无法统一搜索，例如无法通过“支付失败”关键词同时命中笔记、截图中的错误日志和相关书签。\n- 担心使用云端 AI 工具会泄露未开源的项目代码或敏感配置，不敢上传数据。\n- 自动化能力弱，无法批量归类旧资料或生成摘要，全靠人工整理。\n\n### 使用 eclaire 后\n- 将所有笔记、文档、图片、书签和任务导入 eclaire，实现统一存储与索引，数据完全保留在本地。\n- 通过自然语言提问如“找出所有关于支付接口调试的记录”，eclaire 能跨类型检索并关联相关内容。\n- 内置 OCR 和多模态模型自动识别截图中的文字，并与文本笔记建立语义关联，提升搜索准确率。\n- 因为是自托管架构，所有处理都在本地完成，无需担心代码或隐私数据外泄。\n- 利用 eclaire 的 API 编写简单脚本，自动为旧项目打标签、生成概要，大幅减少手动整理时间。\n\neclaire 让个人知识资产真正“活”起来，在保障隐私的前提下实现高效连接与智能调用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Feclaire-labs_eclaire_2e808616.png","eclaire-labs","Eclaire Labs","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Feclaire-labs_e6f8f455.png","Privacy-Focused 
AI",null,"info@eclaire.co","eclaire_labs","https:\u002F\u002Feclaire.co","https:\u002F\u002Fgithub.com\u002Feclaire-labs",[85,89,92,96,100,103,107],{"name":86,"color":87,"percentage":88},"TypeScript","#3178c6",96.8,{"name":90,"color":91,"percentage":23},"JavaScript","#f1e05a",{"name":93,"color":94,"percentage":95},"Shell","#89e051",0.8,{"name":97,"color":98,"percentage":99},"Dockerfile","#384d54",0.2,{"name":101,"color":102,"percentage":99},"CSS","#663399",{"name":104,"color":105,"percentage":106},"HTML","#e34c26",0,{"name":108,"color":109,"percentage":106},"Procfile","#3B2F63",833,86,"2026-04-03T21:11:31","MIT","Linux, macOS, Windows","未说明",{"notes":117,"python":115,"dependencies":118},"需要 Docker 和 Docker Compose；需自行运行本地 LLM 服务（如 llama.cpp）；支持 SQLite 或 PostgreSQL 数据库；推荐通过 Tailscale、Cloudflare Tunnel 等私有网络访问，不要直接暴露到公网；Apple Silicon 用户可使用 MLX 进行优化推理；文档和图像处理依赖 LibreOffice、Poppler、ImageMagick 等系统工具。",[],[13,53,14,51,26,15],[121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139],"ai","ai-assistant","automation","bookmark-manager","bookmarks","data-extraction","document-processing","llm","local-first","note-taking","ocr","on-device-ai","open-source","personal-knowledge-management","privacy","rest-api","self-hosted","task-management","web-archiving",4,"2026-03-27T02:49:30.150509","2026-04-06T07:14:56.146568",[144],{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},126,"Eclaire 初始启动时的默认登录凭证是什么？","如果你使用 \"pnpm setup:dev\" 以开发模式初始化系统，数据库会自动填充演示账户。你可以在 apps\u002Fbackend\u002Fscripts\u002Fdb-seed.ts 文件中查看 DEMO_USER 相关代码，了解创建的默认用户和管理员账号及其凭证。生产模式（prod）下不会创建默认账户，但应仍可注册新用户。","https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire\u002Fissues\u002F2",[150,155,160,165,170,175,180,185,190,195,200,205],{"id":151,"version":152,"summary_zh":153,"released_at":154},109481,"v0.6.3","### Bug Fixes\r\n\r\n- **workers**: add tiered extraction with lightweight HTTP fetch before browser\r\n- **backend**: include missing `thumbnailUrl` and `faviconUrl` in 
bookmark list responses\r\n- **auth**: harden Better Auth integration — security fixes, fail-fast init, dead code removal\r\n- **frontend**: fix SSE connection leak on page refresh\r\n- **frontend**: improve SSE streaming robustness and eliminate unnecessary re-renders\r\n- **frontend**: fix re-render issues, remove debug logging, and improve data fetching\r\n- **frontend**: remove hardcoded manifest link from `index.html`\r\n- **frontend**: fix typecheck errors in `use-list-page-state` and test files\r\n- **backend**: correct case-sensitive filename in photos test\r\n- **backend**: add missing type declaration for `turndown-plugin-gfm`\r\n- **workers**: harden job processors with size guards, timeouts, SSRF protection, and dedup\r\n- **workers**: improve document processor extraction quality, fix bugs, reduce resource usage\r\n- **workers**: fix image processor bugs, improve robustness and format handling\r\n- **docker**: add missing `tsconfig.base.json` and `api-types` to build stages\r\n- **deps**: upgrade hono 4.12.5, @hono\u002Fnode-server 1.19.11, add overrides for rollup\u002Fserialize-javascript\u002Fminimatch\u002Ftar\r\n- **vitest**: fix broken frontend alias, clean up configs across monorepo\r\n\r\n### Refactoring\r\n\r\n- **backend**: standardize all list endpoints to unified pagination shape\r\n- **backend**: deduplicate schema definitions, review\u002Fflag\u002Fpin\u002Freprocess endpoints, and route handlers\r\n- **backend**: consolidate error classes, add `withAuth()` wrapper, extract error schemas\r\n- **backend**: clean up routes, fix validation, add lazy sessions\r\n- **frontend**: deduplicate list pages into shared abstractions, extract shared components\r\n- **frontend**: split API modules, add CRUD hook factory and tests\r\n- **api**: consolidate API types, fix pagination offset, and clean up schemas\r\n- **core**: export const arrays for enum types, eliminate cross-package duplication\r\n- **ai**: fix bugs and clean up for reusability\r\n- 
**queue**: remove dead code, deduplicate adapters, fix layering\r\n- **logger**: typed levels, explicit `contextKey`, extensible context\r\n- **db**: harden PostgreSQL client, type `Tx` interface, fix schema issues\r\n- **storage**: decouple adapters from domain key structure, clean up design\r\n- **workers**: harden bookmark processing, extract `BrowserPipeline`, improve content quality\r\n- **monorepo**: standardize TypeScript configuration; simplify and deduplicate utilities across all packages\r\n\r\n### Tests\r\n\r\n- **backend**: fix and expand photo, bookmark, document, notes, and task test suites\r\n- **frontend**: expand test suite with entity hooks, utilities, and API contract tests\r\n- **core**: add comprehensive test suite for `@eclaire\u002Fcore`\r\n- **queue**: add core unit tests and strengthen integration test coverage\r\n- **storage**: restructure tests with conformance suite and unit\u002Fintegration split\r\n- **ai**: expand MLX adapter and client test coverage\r\n\r\n### Maintenance\r\n\r\n- **deps**: upgrade patch and minor dependencies across all workspaces\r\n- **deps**: bump `actions\u002Fcheckout` from 4 to 6, `actions\u002Fsetup-node` from 4 to 6\r\n- **lint**: consolidate linting config with pnpm catalog","2026-03-06T03:38:36",{"id":156,"version":157,"summary_zh":158,"released_at":159},109482,"v0.6.2","### Security\r\n\r\n- **deps**: upgrade hono 4.11.4 → 4.12.2 for multiple CVEs (XSS, cache bypass, IP spoofing, timing attack)\r\n- **deps**: upgrade axios 1.13.2 → 1.13.5 for GHSA-43fc-jf86-j433 (DoS via `__proto__` in mergeConfig)\r\n- **deps**: upgrade node-gyp 12.1.0 → 12.2.0 and refresh transitive deps (tar, minimatch, ajv, qs, brace-expansion)\r\n- **deps**: resolve seroval and diff vulnerabilities via clean reinstall\r\n- **frontend**: fix XSS vulnerability from unsanitized `dangerouslySetInnerHTML` uncovered during lint audit\r\n\r\n### CI\u002FCD\r\n\r\n- **ci**: expand lint workflow into full CI with parallel lint, typecheck, 
build, and test jobs\r\n- **ci**: add PR lint workflow enforcing Biome and oxlint with zero warnings\r\n- **ci**: add Dependabot config for npm security alerts and GitHub Actions auto-updates\r\n\r\n### Bug Fixes\r\n\r\n- **frontend**: generate route tree before typecheck to fix CI build order\r\n- **packages**: resolve types from source for workspace packages so typecheck works without a prior build step\r\n- **packages**: fix 12 strict null check errors across core\u002Fencryption, queue, and storage packages\r\n- **lint**: resolve stale closure risks from missing `useCallback` dependencies in React hooks\r\n- **lint**: fix accessibility errors across frontend components\r\n\r\n### Refactoring\r\n\r\n- **lint**: adopt Biome and oxlint as dual linters with unified `pnpm lint` script\r\n- **lint**: resolve all Biome and oxlint errors and warnings across the monorepo (~340 files touched)\r\n- **lint**: replace non-null assertions with runtime guards to satisfy `noNonNullAssertion`\r\n- **lint**: use stable keys instead of array indices in React list components\r\n- **storage**: replace `as unknown as` casts with proper `NodeWebReadableStream` types\r\n- **backend**: use `Number.isFinite` over global `isFinite`, fix shadowed variables\r\n\r\n### Maintenance\r\n\r\n- **deps**: upgrade patch and minor dependencies across all workspaces (biome 2.4.4, bullmq 5.70.1, pg 8.18.0, pino 10.3.1, playwright 1.58.2, react 19.2.4, vite 7.3.1, vitest 4.0.18)\r\n- **deps**: upgrade better-auth 1.4.19, hono 4.12.2, TanStack Router 1.162.8, tailwindcss 4.2.1, pnpm 10.30.2\r\n- **cleanup**: remove unused imports, variables, and function parameters across backend and frontend\r\n- **cleanup**: adopt `node:` protocol for all Node.js builtin imports\r\n- **config**: exclude generated service worker files from Biome linting\r\n- **config**: expand root lint and format scripts to cover entire 
monorepo","2026-02-24T13:05:26",{"id":161,"version":162,"summary_zh":163,"released_at":164},109483,"v0.6.1","### Security\r\n\r\n- **backend**: upgrade hono to 4.11.4 for CVE-2026-22817 (JWT Algorithm Confusion)","2026-01-14T02:28:48",{"id":166,"version":167,"summary_zh":168,"released_at":169},109484,"v0.6.0","### Highlights\r\n\r\n- **Unified deployment**: frontend + backend + workers can run in a single container\r\n- **Simplified Self-Hosting**  - new one-command `setup.sh` flow, plus a streamlined `compose.yaml`\r\n- **Better AI Support**  - New vision models, llama.cpp router, expanded provider support\r\n- **Modern Frontend**  - Migrated from Next.js to Vite + TanStack Router\r\n- **New Admin CLI**  - Manage your instance from the command line\r\n\r\n### Features\r\n\r\n- **Unified Deployment**: Single container can serve as backend, workers, or both via `SERVICE_ROLE` environment variable\r\n- **SQLite Support**: Full SQLite database support alongside Postgres with comprehensive parity tests\r\n- **Database Queue Mode**: Use Postgres or SQLite for job processing instead of Redis\u002FBullMQ\r\n- **In-Memory Notifications**: Single-process deployments no longer require Redis\r\n- **Admin CLI**: New `admin-cli` integrated into Docker for instance management\r\n- **Auto-Upgrade System**: Database migrations run automatically at startup\r\n- **Qwen3-VL-8B**: Added support for Qwen3-VL-8B vision model\r\n- **llama.cpp Router**: Support for llama-server's new router endpoint\r\n- **Request ID Tracing**: Better observability and debugging across distributed components\r\n- **Version-Prefixed Encryption**: Enables future key rotation support\r\n\r\n### Improvements\r\n\r\n- **Frontend**: Migrated from Next.js to Vite with TanStack Router for faster builds and modern routing\r\n- **ES Modules**: Complete migration from CommonJS to ES modules\r\n- **Tailwind CSS v4**: Upgraded to latest Tailwind with tw-animate-css support\r\n- **Transaction Support**: 
Read-Modify-Write for Postgres, mutex serialization for SQLite\r\n- **Services Architecture**: Thin routes pattern with extracted services layer\r\n- **Modular Packages**: New @eclaire\u002Fai, @eclaire\u002Fstorage, @eclaire\u002Fqueue packages\r\n- **AI Tool Calling**: More robust native tool calling support\r\n- **AI Providers**: Improved support for llama.cpp, MLX-LM, MLX-VLM, and LM Studio backends\r\n- **Dependency Upgrades**: Vite 7, Vitest 4, Playwright 1.57, Pino 10, Recharts 3, Better Auth 1.4.5\r\n\r\n### Bug Fixes\r\n\r\n- User queue jobs now deleted when account is deleted\r\n- Fixed AI response truncation\u002Frepetition detection with improved JSON parsing\r\n- SQLite case-insensitive sorting for text columns\r\n- Docker layer caching optimized for faster builds\r\n- Queue callback race conditions resolved with timeout safety guards\r\n- Route detail views render correctly\r\n- Image URL fallback uses thumbnailUrl consistently\r\n- Auth session tests include Origin header for CSRF validation\r\n- Sharp\u002Fnative dependency handling with SHARP_IGNORE_GLOBAL_LIBVIPS\r\n\r\n### Breaking Changes\r\n\r\n- **TaskStatus**: Removed `cancelled` status from enum\r\n- **SERVICE_ROLE**: New environment variable for deployment mode configuration\r\n- **Data Directories**: Changed from `data\u002Fdb` to `data\u002Fpostgres`, `data\u002Fpglite`, `data\u002Fsqlite`\r\n- **SQLITE_DB_PATH**: Renamed from `SQLITE_DATA_DIR`\r\n- **Environment**: Simplified to single `.env` file configuration\r\n- **Admin CLI**: `model-cli` replaced with `admin-cli`\r\n- **Build Script**: `build.sh` now defaults to dev mode (use `--prod` for production)\r\n- **Production Port**: Standardized to port 3000\r\n\r\n### Migration from v0.5.x\r\n\r\n**There is no automated upgrade path from v0.5.x to v0.6.0.** Due to significant architectural changes, we recommend setting up a fresh v0.6.0 instance and transferring your data from your previous installation.\r\n\r\nIf you need help migrating, 
please [open an issue](https:\u002F\u002Fgithub.com\u002Feclaire-labs\u002Feclaire\u002Fissues) or reach out to us  - we're happy to assist.\r\n\r\n### Infrastructure\r\n\r\n- All dependencies pinned to exact versions for reproducible builds\r\n- Root biome config for consistent formatting\u002Flinting\r\n- Restructured tests directory (`__tests__` → `tests`)\r\n- TypeScript config standardized (typecheck includes tests, build excludes)\r\n- ioredis pinned to 5.8.2 for BullMQ compatibility\r\n\r\n### Documentation\r\n\r\n- AI Model Configuration guide with Qwen3-VL-8B examples\r\n- First login instructions added to README\r\n- Clarified WORKER_AI_LOCAL_PROVIDER_URL documentation","2026-01-09T05:52:40",{"id":171,"version":172,"summary_zh":173,"released_at":174},109485,"v0.5.2","### Security\r\n\r\n- **frontend**: upgrade Next.js to 15.5.7 for CVE-2025-55182\r\n","2025-12-04T05:22:21",{"id":176,"version":177,"summary_zh":178,"released_at":179},109486,"v0.5.1","### Features\r\n\r\n- **build**: configure pnpm workspace for Docker deployment with pnpm deploy\r\n\r\n### Bug Fixes\r\n\r\n- **docker**: migrate to pnpm deploy for proper dependency resolution in containers\r\n- **frontend**: invalidate asset list on processing status to show spinner\r\n\r\n### CI\u002FCD\r\n\r\n- **dx**: add --dev flag to build script and update contributor docs\r\n","2025-11-13T10:10:29",{"id":181,"version":182,"summary_zh":183,"released_at":184},109487,"v0.5.0","### ⚠️ Migration Notes\r\n\r\nThis release includes significant tooling changes:\r\n\r\n- **Node.js Version**: Upgraded requirement from v22 to v24 LTS\r\n  - Ensure you're running Node.js v24.x\r\n\r\n- **Package Manager**: Migrated from npm to pnpm\r\n  - Enable corepack (one-time setup): `corepack enable`\r\n  - Delete `node_modules` folders: `rm -rf node_modules apps\u002F*\u002Fnode_modules tools\u002F*\u002Fnode_modules`\r\n  - Install dependencies: `pnpm install`\r\n\r\n### CI\u002FCD\r\n\r\n- **deps**: migrate to pnpm and 
update package dependencies\r\n- **deps**: upgrade Node.js requirement from v22 to v24 LTS\r\n- **deps**: change Node.js engine from >=24.0.0 to ^24.0.0\r\n\r\n### Bug Fixes\r\n\r\n- **workers**: eliminate macOS keychain popup by using non-persistent browser contexts\r\n- **workers**: add null check for browser context in bookmark processor\r\n- **db**: correct key length in seed script\r\n- **docker**: use monorepo root as build context for pnpm workspace compatibility\r\n- **ci**: use repository root as Docker build context in GitHub Actions","2025-11-11T23:29:04",{"id":186,"version":187,"summary_zh":188,"released_at":189},109488,"v0.4.1","### Security\r\n- **deps**: bumped Hono to address security vulnerabilities (GHSA-m732-5p4w-x69g, GHSA-q7jf-gf43-6x6p)\r\n  - Upgraded to latest safe version to resolve Improper Authorization vulnerability (CVE-2025-62610)\r\n  - Fixed Vary Header Injection leading to potential CORS Bypass\r\n\r\n### Bug Fixes\r\n- **ai**: use json_schema { name, schema } envelope to align with OpenAI structured outputs\r\n","2025-10-30T17:37:59",{"id":191,"version":192,"summary_zh":193,"released_at":194},109489,"v0.4.0","## Quick upgrade (Docker)\r\n```bash\r\ngit fetch --tags\r\ngit checkout v0.4.0\r\ndocker compose pull\r\ndocker compose up -d\r\n```\r\n\r\n### Features\r\n- **ai**: Apple MLX integration with native support for Apple Silicon\r\n  - **mlx-lm**: text inference using MLX\r\n  - **mlx-vlm**: vision model support with multimodal capabilities using MLX\r\n- **ai**: LM Studio integration for local model inference\r\n- **model-cli**: enhanced import workflow with provider selection\r\n  - Interactive provider selection (MLX-LM, MLX-VLM, LM Studio, Ollama, LlamaCpp, and more)\r\n  - Automatic vision capability detection from model metadata\r\n  - Smart warnings for incompatible provider\u002Fmodel combinations\r\n  - Improved user experience with context-aware prompts\r\n\r\n### Bug Fixes\r\n- **config**: use 127.0.0.1 instead of 
localhost for service URLs to improve compatibility\r\n- **model-cli**: display modelFullName instead of name in list command\r\n\r\n### Documentation\r\n- **readme**: added upgrade section with instructions for updating between versions\r\n\r\n### CI\u002FCD\r\n- **docker**: bumped default image tags to 0.4\r\n- **workflows**: explicit semver values for Docker tags","2025-10-14T05:33:53",{"id":196,"version":197,"summary_zh":198,"released_at":199},109490,"v0.3.1","\r\n### Features\r\n- **ci\u002Fcd**: official GHCR image publishing system with Github Actions\r\n\r\n### CI\u002FCD\r\n- **workflows**: overhauled CI\u002FCD workflows and Docker build system\r\n- **automation**: bootstrap GitHub Actions UI on main branch\r\n\r\n### Security\r\n- **deps**: upgraded axios, hono, next.js to resolve security advisories\r\n- **deps**: bumped axios in tools\u002Fmodels-cli to address security advisory\r\n\r\n### Refactoring\r\n- **deps**: migrated to zod v4 and removed zod-openapi integration\r\n- **deps**: removed unused @hono\u002Fzod-validator dependency\r\n- **deps**: upgraded safe dependencies across frontend, backend, and workers\r\n\r\n### Documentation\r\n- **readme**: added demo video\r\n- **readme**: added comprehensive Quick Start guide for running official Docker images\r\n- **readme**: restructured setup options (Quick Start, Development, Building Docker Locally)","2025-10-08T06:17:12",{"id":201,"version":202,"summary_zh":203,"released_at":204},109491,"v0.3.0","\r\n### Features\r\n- **repo**: publish core application (backend, frontend, workers) to a public repository\r\n- **ui**: new logo and refreshed theme with light\u002Fdark support\r\n- **docs**: landing page, high-level architecture overview, updated README with quick start\r\n- **tooling**: setup scripts for local development and maintenance\r\n- **security**: signed release tags \u002F verified commits support\r\n\r\n### Refactoring\r\n- **repo**: standardized naming and layout for public 
distribution\r\n- **config**: aligned package metadata and repository information\r\n- **deps**: locked dependency versions for reproducible builds\r\n\r\n### Bug Fixes\r\n- **ui**: responsive layout tweaks; hover state fixes; thumbnail edge cases\r\n- **processing**: improved reliability of background jobs and streaming updates\r\n- **content**: fixed malformed character extraction in certain inputs\r\n- **docs**: improved typography and contrast\r\n\r\n### Styling\r\n- **ui**: dashboard polish; asset list counts and filters; detail view status links\r\n\r\n### Documentation\r\n- **api**: link API docs to GitHub repository\r\n- **contrib**: contribution guide and issue\u002FPR templates\r\n- **security**: clarified vulnerability reporting process in `SECURITY.md`\r\n\r\n### Security\r\n- **defaults**: safer API key handling and configuration defaults\r\n\r\n### Maintenance\r\n- **integrations**: disabled non-essential third-party APIs by default\r\n- **cleanup**: removed legacy share-target functionality","2025-09-29T12:47:25",{"id":206,"version":207,"summary_zh":79,"released_at":208},109492,"media","2025-10-01T12:25:42"]