[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jundot--omlx":3,"tool-jundot--omlx":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",154349,2,"2026-04-13T23:32:16",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 
token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":76,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":101,"forks":102,"last_commit_at":103,"license":104,"difficulty_score":32,"env_os":105,"env_gpu":106,"env_ram":107,"env_deps":108,"category_tags":116,"github_topics":117,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":124,"updated_at":125,"faqs":126,"releases":156},7355,"jundot\u002Fomlx","omlx","LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar","omlx 是一款专为 Apple Silicon Mac 打造的大语言模型（LLM）推理服务器，让用户能直接从 macOS 菜单栏轻松管理本地 AI 模型。它旨在解决现有工具在“便捷性”与“控制力”之间难以兼顾的痛点：用户既可以将常用模型常驻内存以实现秒级响应，又能让大型模型按需加载，同时灵活设定上下文限制。\n\n其核心亮点在于采用了创新的分级缓存技术，将热数据保留在内存中，而将历史对话上下文自动卸载至 SSD 存储。这意味着即使在长对话中切换模型或重启服务，之前的上下文依然可被快速复用，极大提升了本地大模型在编程辅助等实际场景中的实用性。\n\nomlx 非常适合希望在本地隐私安全环境下运行大模型的开发者、研究人员以及极客用户。无论是需要通过 OpenAI 兼容接口集成到开发工作流中，还是单纯想在菜单栏一键启停模型服务，omlx 都提供了流畅的体验。它支持自动发现各类模型，内置可视化聊天界面，并可通过 Homebrew 或直接安装 macOS 应用快速部署，让在 Mac 上运行高性能本地大模型变得像使用普通系统工具一样简单自然。","\u003Cp align=\"center\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"docs\u002Fimages\u002Ficon-rounded-dark.svg\" width=\"140\">\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"docs\u002Fimages\u002Ficon-rounded-light.svg\" width=\"140\">\n    \u003Cimg alt=\"oMLX\" src=\"docs\u002Fimages\u002Ficon-rounded-light.svg\" width=\"140\">\n  \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">oMLX\u003C\u002Fh1>\n\u003Cp align=\"center\">\u003Cb>LLM inference, optimized for your Mac\u003C\u002Fb>\u003Cbr>Continuous batching and tiered KV caching, managed directly from your menu bar.\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fwww.buymeacoffee.com\u002Fjundot\">\u003Cimg src=\"https:\u002F\u002Fcdn.buymeacoffee.com\u002Fbuttons\u002Fv2\u002Fdefault-yellow.png\" alt=\"Buy Me A Coffee\" height=\"40\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-blue\" alt=\"License\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10+-green\" alt=\"Python 3.10+\">\n  \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fplatform-Apple%20Silicon-black?logo=apple\" alt=\"Apple Silicon\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"mailto:junkim.dot@gmail.com\">junkim.dot@gmail.com\u003C\u002Fa> · \u003Ca href=\"https:\u002F\u002Fomlx.ai\u002Fme\">https:\u002F\u002Fomlx.ai\u002Fme\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#install\">Install\u003C\u002Fa> ·\n  \u003Ca href=\"#quickstart\">Quickstart\u003C\u002Fa> ·\n  \u003Ca href=\"#features\">Features\u003C\u002Fa> ·\n  \u003Ca href=\"#models\">Models\u003C\u002Fa> ·\n  \u003Ca href=\"#cli-configuration\">CLI Configuration\u003C\u002Fa> ·\n  \u003Ca href=\"https:\u002F\u002Fomlx.ai\u002Fbenchmarks\">Benchmarks\u003C\u002Fa> ·\n  \u003Ca href=\"https:\u002F\u002Fomlx.ai\">oMLX.ai\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cb>English\u003C\u002Fb> ·\n  \u003Ca href=\"README.zh.md\">中文\u003C\u002Fa> ·\n  \u003Ca href=\"README.ko.md\">한국어\u003C\u002Fa> ·\n  \u003Ca href=\"README.ja.md\">日本語\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_c88b4a81edeb.png\" alt=\"oMLX Admin Dashboard\" width=\"800\">\n\u003C\u002Fp>\n\n> *Every LLM server I tried made me choose between convenience and control. I wanted to pin everyday models in memory, auto-swap heavier ones on demand, set context limits - and manage it all from a menu bar.*\n>\n> *oMLX persists KV cache across a hot in-memory tier and cold SSD tier - even when context changes mid-conversation, all past context stays cached and reusable across requests, making local LLMs practical for real coding work with tools like Claude Code. That's why I built it.*\n\n## Install\n\n### macOS App\n\nDownload the `.dmg` from [Releases](https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Freleases), drag to Applications, done. The app includes in-app auto-update, so future upgrades are just one click. Note that the macOS app does not install the `omlx` CLI command. For terminal usage, install via Homebrew or from source.\n\n### Homebrew\n\n```bash\nbrew tap jundot\u002Fomlx https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\nbrew install omlx\n\n# Upgrade to the latest version\nbrew update && brew upgrade omlx\n\n# Run as a background service (auto-restarts on crash)\nbrew services start omlx\n\n# Optional: MCP (Model Context Protocol) support\n\u002Fopt\u002Fhomebrew\u002Fopt\u002Fomlx\u002Flibexec\u002Fbin\u002Fpip install mcp\n```\n\n### From Source\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx.git\ncd omlx\npip install -e .          # Core only\npip install -e \".[mcp]\"   # With MCP (Model Context Protocol) support\n```\n\nRequires macOS 15.0+ (Sequoia), Python 3.10+, and Apple Silicon (M1\u002FM2\u002FM3\u002FM4).\n\n## Quickstart\n\n### macOS App\n\nLaunch oMLX from your Applications folder. The Welcome screen guides you through three steps - model directory, server start, and first model download. That's it. 
To connect OpenClaw, OpenCode, or Codex, see [Integrations](#integrations).\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_3295440e814f.png\" alt=\"oMLX Welcome Screen\" width=\"360\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_d882fda447f8.png\" alt=\"oMLX Menubar\" width=\"240\">\n\u003C\u002Fp>\n\n### CLI\n\n```bash\nomlx serve --model-dir ~\u002Fmodels\n```\n\nThe server discovers LLMs, VLMs, embedding models, and rerankers from subdirectories automatically. Any OpenAI-compatible client can connect to `http:\u002F\u002Flocalhost:8000\u002Fv1`. A built-in chat UI is also available at `http:\u002F\u002Flocalhost:8000\u002Fadmin\u002Fchat`.\n\n### Homebrew Service\n\nIf you installed via Homebrew, you can run oMLX as a managed background service:\n\n```bash\nbrew services start omlx    # Start (auto-restarts on crash)\nbrew services stop omlx     # Stop\nbrew services restart omlx  # Restart\nbrew services info omlx     # Check status\n```\n\nThe service runs `omlx serve` with zero-config defaults (`~\u002F.omlx\u002Fmodels`, port 8000). To customize, either set environment variables (`OMLX_MODEL_DIR`, `OMLX_PORT`, etc.) or run `omlx serve --model-dir \u002Fyour\u002Fpath` once to persist settings to `~\u002F.omlx\u002Fsettings.json`.\n\nLogs are written to two locations:\n- **Service log**: `$(brew --prefix)\u002Fvar\u002Flog\u002Fomlx.log` (stdout\u002Fstderr)\n- **Server log**: `~\u002F.omlx\u002Flogs\u002Fserver.log` (structured application log)\n\n## Features\n\nSupports text LLMs, vision-language models (VLM), OCR models, embeddings, and rerankers on Apple Silicon.\n\n### Admin Dashboard\n\nWeb UI at `\u002Fadmin` for real-time monitoring, model management, chat, benchmark, and per-model settings. Supports English, Korean, Japanese, and Chinese. All CDN dependencies are vendored for fully offline operation.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_be0707d54159.png\" alt=\"oMLX Admin Dashboard\" width=\"720\">\n\u003C\u002Fp>\n\n### Vision-Language Models\n\nRun VLMs with the same continuous batching and tiered KV cache stack as text LLMs. Supports multi-image chat, base64\u002FURL\u002Ffile image inputs, and tool calling with vision context. OCR models (DeepSeek-OCR, DOTS-OCR, GLM-OCR) are auto-detected with optimized prompts.\n\n### Tiered KV Cache (Hot + Cold)\n\nBlock-based KV cache management inspired by vLLM, with prefix sharing and Copy-on-Write. The cache operates across two tiers:\n\n- **Hot tier (RAM)**: Frequently accessed blocks stay in memory for fast access.\n- **Cold tier (SSD)**: When the hot cache fills up, blocks are offloaded to SSD in safetensors format. On the next request with a matching prefix, they're restored from disk instead of recomputed from scratch - even after a server restart.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_37d004016cb0.png\" alt=\"oMLX Hot & Cold Cache\" width=\"720\">\n\u003C\u002Fp>\n\n### Continuous Batching\n\nHandles concurrent requests through mlx-lm's BatchGenerator. Max concurrent requests is configurable via CLI or admin panel.\n\n### Claude Code Optimization\n\nContext scaling support for running smaller context models with Claude Code. 
Scales reported token counts so that auto-compact triggers at the right timing, and SSE keep-alive prevents read timeouts during long prefill.\n\n### Multi-Model Serving\n\nLoad LLMs, VLMs, embedding models, and rerankers within the same server. Models are managed through a combination of automatic and manual controls:\n\n- **LRU eviction**: Least-recently-used models are evicted automatically when memory runs low.\n- **Manual load\u002Funload**: Interactive status badges in the admin panel let you load or unload models on demand.\n- **Model pinning**: Pin frequently used models to keep them always loaded.\n- **Per-model TTL**: Set an idle timeout per model to auto-unload after a period of inactivity.\n- **Process memory enforcement**: Total memory limit (default: system RAM - 8GB) prevents system-wide OOM.\n\n### Per-Model Settings\n\nConfigure sampling parameters, chat template kwargs, TTL, model alias, model type override, and more per model directly from the admin panel. Changes apply immediately without server restart.\n\n- **Model alias**: set a custom API-visible name. `\u002Fv1\u002Fmodels` returns the alias, and requests accept both the alias and directory name.\n- **Model type override**: manually set a model as LLM or VLM regardless of auto-detection.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_a0b53939cbab.png\" alt=\"oMLX Chat Template Kwargs\" width=\"480\">\n\u003C\u002Fp>\n\n### Built-in Chat\n\nChat directly with any loaded model from the admin panel. Supports conversation history, model switching, dark mode, reasoning model output, and image upload for VLM\u002FOCR models.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_5a0c50116e8e.png\" alt=\"oMLX Chat\" width=\"720\">\n\u003C\u002Fp>\n\n\n### Model Downloader\n\nSearch and download MLX models from HuggingFace directly in the admin dashboard. Browse model cards, check file sizes, and download with one click.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_3059296a163b.png\" alt=\"oMLX Model Downloader\" width=\"720\">\n\u003C\u002Fp>\n\n### Integrations\n\nSet up OpenClaw, OpenCode, and Codex directly from the admin dashboard with a single click. No manual config editing required.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_d71df9aba784.png\" alt=\"oMLX Integrations\" width=\"720\">\n\u003C\u002Fp>\n\n### Performance Benchmark\n\nOne-click benchmarking from the admin panel. Measures prefill (PP) and text generation (TG) tokens per second, with partial prefix cache hit testing for realistic performance numbers.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_79294026d766.png\" alt=\"oMLX Benchmark Tool\" width=\"720\">\n\u003C\u002Fp>\n\n### macOS Menubar App\n\nNative PyObjC menubar app (not Electron). Start, stop, and monitor the server without opening a terminal. Includes persistent serving stats (survives restarts), auto-restart on crash, and in-app auto-update.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_ef379ae9da31.png\" alt=\"oMLX Menubar Stats\" width=\"400\">\n\u003C\u002Fp>\n\n### API Compatibility\n\nDrop-in replacement for OpenAI and Anthropic APIs. 
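For example, a minimal chat completion through the documented `/v1/chat/completions` endpoint looks like the sketch below, assuming the default port 8000 and a loaded model whose directory name (or alias) is `Step-3.5-Flash-8bit`, as in the example layout further down:

```python
# Sketch: plain OpenAI SDK call against the local oMLX server.
# Replace the model name with the directory name (or alias) of a model you
# actually have loaded; the API key can be any string unless --api-key is set.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="omlx")

response = client.chat.completions.create(
    model="Step-3.5-Flash-8bit",
    messages=[{"role": "user", "content": "Hello, oMLX!"}],
)
print(response.choices[0].message.content)
```

Streaming works the same way with `stream=True`.
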
Supports streaming usage stats (`stream_options.include_usage`), Anthropic adaptive thinking, and vision inputs (base64, URL).\n\n| Endpoint | Description |\n|----------|-------------|\n| `POST \u002Fv1\u002Fchat\u002Fcompletions` | Chat completions (streaming) |\n| `POST \u002Fv1\u002Fcompletions` | Text completions (streaming) |\n| `POST \u002Fv1\u002Fmessages` | Anthropic Messages API |\n| `POST \u002Fv1\u002Fembeddings` | Text embeddings |\n| `POST \u002Fv1\u002Frerank` | Document reranking |\n| `GET \u002Fv1\u002Fmodels` | List available models |\n\n### Tool Calling & Structured Output\n\nSupports all function calling formats available in mlx-lm, JSON schema validation, and MCP tool integration. Tool calling requires the model's chat template to support the `tools` parameter. The following model families are auto-detected via mlx-lm's built-in tool parsers:\n\n| Model Family | Format |\n|---|---|\n| Llama, Qwen, DeepSeek, etc. | JSON `\u003Ctool_call>` |\n| Qwen3.5 Series | XML `\u003Cfunction=...>` |\n| Gemma | `\u003Cstart_function_call>` |\n| GLM (4.7, 5) | `\u003Carg_key>\u002F\u003Carg_value>` XML |\n| MiniMax | Namespaced `\u003Cminimax:tool_call>` |\n| Mistral | `[TOOL_CALLS]` |\n| Kimi K2 | `\u003C\\|tool_calls_section_begin\\|>` |\n| Longcat | `\u003Clongcat_tool_call>` |\n\nModels not listed above may still work if their chat template accepts `tools` and their output uses a recognized `\u003Ctool_call>` XML format. For tool-enabled streaming, assistant text is emitted incrementally while known tool-call control markup is suppressed from visible content; structured tool calls are emitted after parsing the completed turn.\n\n## Models\n\nPoint `--model-dir` at a directory containing MLX-format model subdirectories. Two-level organization folders (e.g., `mlx-community\u002Fmodel-name\u002F`) are also supported.\n\n```\n~\u002Fmodels\u002F\n├── Step-3.5-Flash-8bit\u002F\n├── Qwen3-Coder-Next-8bit\u002F\n├── gpt-oss-120b-MXFP4-Q8\u002F\n├── Qwen3.5-122B-A10B-4bit\u002F\n└── bge-m3\u002F\n```\n\nModels are auto-detected by type. 
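As a sketch of the tool-calling support described in the previous section, a request passes standard OpenAI-style `tools`. The `get_weather` schema below is purely illustrative, and the model must be one whose chat template accepts the `tools` parameter (here the `Qwen3-Coder-Next-8bit` directory from the example layout):

```python
# Sketch: OpenAI-style tool calling against oMLX. The get_weather schema is a
# hypothetical example; parsed tool calls are returned on the response once
# the completed turn has been parsed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="omlx")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen3-Coder-Next-8bit",  # any loaded model with a tools-aware chat template
    messages=[{"role": "user", "content": "What's the weather in Seoul?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```
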
You can also download models directly from the admin dashboard.\n\n| Type | Models |\n|------|--------|\n| LLM | Any model supported by [mlx-lm](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm) |\n| VLM | Qwen3.5 Series, GLM-4V, Pixtral, and other [mlx-vlm](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm) models |\n| OCR | DeepSeek-OCR, DOTS-OCR, GLM-OCR |\n| Embedding | BERT, BGE-M3, ModernBERT |\n| Reranker | ModernBERT, XLM-RoBERTa |\n\n## CLI Configuration\n\n```bash\n# Memory limit for loaded models\nomlx serve --model-dir ~\u002Fmodels --max-model-memory 32GB\n\n# Process-level memory limit (default: auto = RAM - 8GB)\nomlx serve --model-dir ~\u002Fmodels --max-process-memory 80%\n\n# Enable SSD cache for KV blocks\nomlx serve --model-dir ~\u002Fmodels --paged-ssd-cache-dir ~\u002F.omlx\u002Fcache\n\n# Set in-memory hot cache size\nomlx serve --model-dir ~\u002Fmodels --hot-cache-max-size 20%\n\n# Adjust max concurrent requests (default: 8)\nomlx serve --model-dir ~\u002Fmodels --max-concurrent-requests 16\n\n# With MCP tools\nomlx serve --model-dir ~\u002Fmodels --mcp-config mcp.json\n\n# HuggingFace mirror endpoint (for restricted regions)\nomlx serve --model-dir ~\u002Fmodels --hf-endpoint https:\u002F\u002Fhf-mirror.com\n\n# API key authentication\nomlx serve --model-dir ~\u002Fmodels --api-key your-secret-key\n# Localhost-only: skip verification via admin panel global settings\n```\n\nAll settings can also be configured from the web admin panel at `\u002Fadmin`. Settings are persisted to `~\u002F.omlx\u002Fsettings.json`, and CLI flags take precedence.\n\n\u003Cdetails>\n\u003Csummary>Architecture\u003C\u002Fsummary>\n\n```\nFastAPI Server (OpenAI \u002F Anthropic API)\n    │\n    ├── EnginePool (multi-model, LRU eviction, TTL, manual load\u002Funload)\n    │   ├── BatchedEngine (LLMs, continuous batching)\n    │   ├── VLMEngine (vision-language models)\n    │   ├── EmbeddingEngine\n    │   └── RerankerEngine\n    │\n    ├── ProcessMemoryEnforcer (total memory limit, TTL checks)\n    │\n    ├── Scheduler (FCFS, configurable concurrency)\n    │   └── mlx-lm BatchGenerator\n    │\n    └── Cache Stack\n        ├── PagedCacheManager (GPU, block-based, CoW, prefix sharing)\n        ├── Hot Cache (in-memory tier, write-back)\n        └── PagedSSDCacheManager (SSD cold tier, safetensors format)\n```\n\n\u003C\u002Fdetails>\n\n## Development\n\n### CLI Server\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx.git\ncd omlx\npip install -e \".[dev]\"\npytest -m \"not slow\"\n```\n\n### macOS App\n\nRequires Python 3.11+ and [venvstacks](https:\u002F\u002Fvenvstacks.lmstudio.ai) (`pip install venvstacks`).\n\n```bash\ncd packaging\n\n# Full build (venvstacks + app bundle + DMG)\npython build.py\n\n# Skip venvstacks (code changes only)\npython build.py --skip-venv\n\n# DMG only\npython build.py --dmg-only\n```\n\nSee [packaging\u002FREADME.md](packaging\u002FREADME.md) for details on the app bundle structure and layer configuration.\n\n## Contributing\n\nContributions are welcome! 
See [Contributing Guide](docs\u002FCONTRIBUTING.md) for details.\n\n- Bug fixes and improvements\n- Performance optimizations\n- Documentation improvements\n\n## License\n\n[Apache 2.0](LICENSE)\n\n## Acknowledgments\n\n- [MLX](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx) and [mlx-lm](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm) by Apple\n- [mlx-vlm](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm) - Vision-language model inference on Apple Silicon\n- [vllm-mlx](https:\u002F\u002Fgithub.com\u002Fwaybarrios\u002Fvllm-mlx) - oMLX started from vllm-mlx v0.1.0 and evolved significantly with multi-model serving, tiered KV caching, VLM with full paged cache support, an admin panel, and a macOS menu bar app\n- [venvstacks](https:\u002F\u002Fvenvstacks.lmstudio.ai) - Portable Python environment layering for the macOS app bundle\n- [mlx-embeddings](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-embeddings) - Embedding model support for Apple Silicon\n- [llm-compressor](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fllm-compressor) - Reference AWQ implementation for MoE models, used as design reference for oQ weight equalization\n","\u003Cp align=\"center\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"docs\u002Fimages\u002Ficon-rounded-dark.svg\" width=\"140\">\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"docs\u002Fimages\u002Ficon-rounded-light.svg\" width=\"140\">\n    \u003Cimg alt=\"oMLX\" src=\"docs\u002Fimages\u002Ficon-rounded-light.svg\" width=\"140\">\n  \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">oMLX\u003C\u002Fh1>\n\u003Cp align=\"center\">\u003Cb>LLM 推理，专为您的 Mac 优化\u003C\u002Fb>\u003Cbr>连续批处理与分层 KV 缓存，直接通过菜单栏管理。\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fwww.buymeacoffee.com\u002Fjundot\">\u003Cimg src=\"https:\u002F\u002Fcdn.buymeacoffee.com\u002Fbuttons\u002Fv2\u002Fdefault-yellow.png\" alt=\"Buy Me A Coffee\" height=\"40\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-blue\" alt=\"License\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10+-green\" alt=\"Python 3.10+\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fplatform-Apple%20Silicon-black?logo=apple\" alt=\"Apple Silicon\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"mailto:junkim.dot@gmail.com\">junkim.dot@gmail.com\u003C\u002Fa> · \u003Ca href=\"https:\u002F\u002Fomlx.ai\u002Fme\">https:\u002F\u002Fomlx.ai\u002Fme\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#install\">安装\u003C\u002Fa> ·\n  \u003Ca href=\"#quickstart\">快速入门\u003C\u002Fa> ·\n  \u003Ca href=\"#features\">功能\u003C\u002Fa> ·\n  \u003Ca href=\"#models\">模型\u003C\u002Fa> ·\n  \u003Ca href=\"#cli-configuration\">CLI 配置\u003C\u002Fa> ·\n  \u003Ca href=\"https:\u002F\u002Fomlx.ai\u002Fbenchmarks\">基准测试\u003C\u002Fa> ·\n  \u003Ca href=\"https:\u002F\u002Fomlx.ai\">oMLX.ai\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cb>English\u003C\u002Fb> ·\n  \u003Ca href=\"README.zh.md\">中文\u003C\u002Fa> ·\n  \u003Ca href=\"README.ko.md\">한국어\u003C\u002Fa> ·\n  \u003Ca href=\"README.ja.md\">日本語\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_c88b4a81edeb.png\" alt=\"oMLX 管理仪表盘\" 
width=\"800\">\n\u003C\u002Fp>\n\n> *我尝试过的每一种 LLM 服务器，都让我在便利性和控制权之间做出选择。我希望将常用模型常驻内存，按需自动交换较重的模型，设置上下文限制——并且这一切都能通过菜单栏来管理。*\n>\n> *oMLX 在热态内存层和冷态 SSD 层之间持久化 KV 缓存——即使对话过程中上下文发生变化，所有历史上下文仍会保留在缓存中并可在后续请求中重复使用，这使得本地 LLM 在实际编码工作中与 Claude Code 等工具结合使用时变得切实可行。这就是我构建它的原因。*\n\n## 安装\n\n### macOS 应用程序\n\n从 [Releases](https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Freleases) 下载 `.dmg` 文件，将其拖入 Applications 文件夹即可完成安装。该应用程序包含应用内自动更新功能，因此未来的升级只需点击一下。请注意，macOS 应用程序不会安装 `omlx` CLI 命令。如需在终端中使用，请通过 Homebrew 或源码进行安装。\n\n### Homebrew\n\n```bash\nbrew tap jundot\u002Fomlx https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\nbrew install omlx\n\n# 升级到最新版本\nbrew update && brew upgrade omlx\n\n# 以后台服务运行（崩溃后自动重启）\nbrew services start omlx\n\n# 可选：支持 MCP（模型上下文协议）\n\u002Fopt\u002Fhomebrew\u002Fopt\u002Fomlx\u002Flibexec\u002Fbin\u002Fpip install mcp\n```\n\n### 源码安装\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx.git\ncd omlx\npip install -e .          # 仅核心\npip install -e \".[mcp]\"   # 支持 MCP（模型上下文协议）\n```\n\n需要 macOS 15.0+（Sequoia）、Python 3.10+ 和 Apple Silicon（M1\u002FM2\u002FM3\u002FM4）。\n\n## 快速入门\n\n### macOS 应用程序\n\n从 Applications 文件夹启动 oMLX。欢迎界面会引导您完成三个步骤——模型目录、启动服务器以及首次下载模型。仅此而已。要连接 OpenClaw、OpenCode 或 Codex，请参阅 [集成](#integrations)。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_3295440e814f.png\" alt=\"oMLX 欢迎界面\" width=\"360\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_d882fda447f8.png\" alt=\"oMLX 菜单栏\" width=\"240\">\n\u003C\u002Fp>\n\n### CLI\n\n```bash\nomlx serve --model-dir ~\u002Fmodels\n```\n\n服务器会自动从子目录中发现 LLM、VLM、嵌入模型和重新排序器。任何兼容 OpenAI 的客户端都可以连接到 `http:\u002F\u002Flocalhost:8000\u002Fv1`。内置聊天界面也可在 `http:\u002F\u002Flocalhost:8000\u002Fadmin\u002Fchat` 访问。\n\n### Homebrew 服务\n\n如果您通过 Homebrew 安装了 oMLX，可以将其作为受管后台服务运行：\n\n```bash\nbrew services start omlx    # 启动（崩溃后自动重启）\nbrew services stop omlx     # 停止\nbrew services restart omlx  # 重启\nbrew services info omlx     # 查看状态\n```\n\n该服务会以零配置默认值运行 `omlx serve`（`~\u002F.omlx\u002Fmodels`, 端口 8000）。如需自定义，可设置环境变量（`OMLX_MODEL_DIR`、`OMLX_PORT` 等），或运行一次 `omlx serve --model-dir \u002Fyour\u002Fpath`，以将设置保存到 `~\u002F.omlx\u002Fsettings.json`。\n\n日志会写入两个位置：\n- **服务日志**：`$(brew --prefix)\u002Fvar\u002Flog\u002Fomlx.log`（stdout\u002Fstderr）\n- **服务器日志**：`~\u002F.omlx\u002Flogs\u002Fserver.log`（结构化应用日志）\n\n## 功能\n\n支持文本 LLM、视觉-语言模型（VLM）、OCR 模型、嵌入模型和重新排序器，运行于 Apple Silicon 平台。\n\n### 管理仪表盘\n\n位于 `\u002Fadmin` 的 Web UI，用于实时监控、模型管理、聊天、基准测试以及每个模型的设置。支持英语、韩语、日语和中文。所有 CDN 依赖项均已打包，可完全离线运行。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_be0707d54159.png\" alt=\"oMLX 管理仪表盘\" width=\"720\">\n\u003C\u002Fp>\n\n### 视觉-语言模型\n\nVLM 可以使用与文本 LLM 相同的连续批处理和分层 KV 缓存架构运行。支持多图像聊天、base64\u002FURL\u002F文件形式的图像输入，以及带有视觉上下文的工具调用。OCR 模型（DeepSeek-OCR、DOTS-OCR、GLM-OCR）会自动检测，并采用优化后的提示词。\n\n### 分层 KV 缓存（热 + 冷）\n\n基于 vLLM 的块级 KV 缓存管理，支持前缀共享和写时复制。缓存分为两层：\n\n- **热层（RAM）**：频繁访问的块会保留在内存中，以便快速访问。\n- **冷层（SSD）**：当热缓存满时，块会被卸载到 SSD，以 safetensors 格式存储。下次有匹配前缀的请求时，这些块会从磁盘恢复，而不是从头开始重新计算——即使在服务器重启后也是如此。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_37d004016cb0.png\" alt=\"oMLX 热冷缓存\" width=\"720\">\n\u003C\u002Fp>\n\n### 连续批处理\n\n通过 mlx-lm 的 BatchGenerator 处理并发请求。最大并发请求数可通过 CLI 或管理面板进行配置。\n\n### Claude Code 优化\n\n支持上下文缩放，以便使用较小上下文的模型运行 Claude Code。它会调整报告的 token 数量，使自动压缩触发时机恰当，并通过 SSE keep-alive 
防止长时间预填充期间出现读取超时。\n\n### 多模型服务\n\n在同一服务器中加载大语言模型（LLM）、视觉语言模型（VLM）、嵌入模型和重排序模型。模型通过自动与手动控制相结合的方式进行管理：\n\n- **LRU逐出机制**：当内存不足时，最近最少使用的模型会自动被逐出。\n- **手动加载\u002F卸载**：管理员面板中的交互式状态标记允许您按需加载或卸载模型。\n- **模型固定**：将常用模型固定以始终保持加载状态。\n- **每模型TTL**：为每个模型设置空闲超时，在一段时间无活动后自动卸载。\n- **进程内存限制**：总内存上限（默认为系统RAM减去8GB）可防止系统范围内的内存溢出。\n\n### 每模型设置\n\n直接从管理员面板为每个模型配置采样参数、聊天模板关键字参数、TTL、模型别名、模型类型覆盖等。更改立即生效，无需重启服务器。\n\n- **模型别名**：设置一个自定义的API可见名称。`\u002Fv1\u002Fmodels`返回该别名，请求既可以接受别名，也可以接受目录名。\n- **模型类型覆盖**：无论自动检测结果如何，均可手动将模型设置为LLM或VLM。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_a0b53939cbab.png\" alt=\"oMLX Chat Template Kwargs\" width=\"480\">\n\u003C\u002Fp>\n\n### 内置聊天功能\n\n可以直接从管理员面板与任何已加载的模型进行对话。支持对话历史、模型切换、暗模式、推理模型输出以及针对VLM\u002FOCR模型的图像上传。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_5a0c50116e8e.png\" alt=\"oMLX Chat\" width=\"720\">\n\u003C\u002Fp>\n\n\n### 模型下载器\n\n在管理员仪表盘中直接搜索并下载来自HuggingFace的MLX模型。浏览模型卡片，查看文件大小，并一键下载。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_3059296a163b.png\" alt=\"oMLX Model Downloader\" width=\"720\">\n\u003C\u002Fp>\n\n### 集成\n\n只需单击一下按钮，即可在管理员仪表盘中直接设置OpenClaw、OpenCode和Codex，无需手动编辑配置。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_d71df9aba784.png\" alt=\"oMLX Integrations\" width=\"720\">\n\u003C\u002Fp>\n\n### 性能基准测试\n\n管理员面板中的一键基准测试。测量预填充（PP）和文本生成（TG）的每秒令牌数，并进行部分前缀缓存命中测试，以获得更贴近实际的性能数据。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_79294026d766.png\" alt=\"oMLX Benchmark Tool\" width=\"720\">\n\u003C\u002Fp>\n\n### macOS菜单栏应用\n\n原生PyObjC菜单栏应用（非Electron）。无需打开终端即可启动、停止和监控服务器。包含持久化的服务统计信息（重启后仍保留）、崩溃后自动重启以及应用内自动更新。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_readme_ef379ae9da31.png\" alt=\"oMLX Menubar Stats\" width=\"400\">\n\u003C\u002Fp>\n\n### API兼容性\n\n可无缝替代OpenAI和Anthropic的API。支持流式使用统计（`stream_options.include_usage`）、Anthropic自适应思维以及视觉输入（base64、URL）。\n\n| 端点 | 描述 |\n|----------|-------------|\n| `POST \u002Fv1\u002Fchat\u002Fcompletions` | 聊天完成（流式） |\n| `POST \u002Fv1\u002Fcompletions` | 文本完成（流式） |\n| `POST \u002Fv1\u002Fmessages` | Anthropic Messages API |\n| `POST \u002Fv1\u002Fembeddings` | 文本嵌入 |\n| `POST \u002Fv1\u002Frerank` | 文档重排序 |\n| `GET \u002Fv1\u002Fmodels` | 列出可用模型 |\n\n### 工具调用与结构化输出\n\n支持mlx-lm中所有可用的函数调用格式、JSON模式验证以及MCP工具集成。工具调用要求模型的聊天模板支持`tools`参数。以下模型家族可通过mlx-lm内置的工具解析器自动检测：\n\n| 模型家族 | 格式 |\n|---|---|\n| Llama、Qwen、DeepSeek等 | JSON `\u003Ctool_call>` |\n| Qwen3.5系列 | XML `\u003Cfunction=...>` |\n| Gemma | `\u003Cstart_function_call>` |\n| GLM（4.7、5） | `\u003Carg_key>\u002F\u003Carg_value>` XML |\n| MiniMax | 命名空间下的 `\u003Cminimax:tool_call>` |\n| Mistral | `[TOOL_CALLS]` |\n| Kimi K2 | `\u003C\\|tool_calls_section_begin\\|>` |\n| Longcat | `\u003Clongcat_tool_call>` |\n\n未列出以上的模型如果其聊天模板接受`tools`且输出采用公认的`\u003C|num_start|>`XML格式，也可能正常工作。对于启用工具的流式响应，助手文本会逐步输出，同时隐藏已知的工具调用控制标记；完整的工具调用将在本轮对话结束后发出。\n\n## 模型\n\n将`--model-dir`指向包含MLX格式模型子目录的目录。也支持两层组织结构的文件夹（例如`mlx-community\u002Fmodel-name\u002F`）。\n\n```\n~\u002Fmodels\u002F\n├── Step-3.5-Flash-8bit\u002F\n├── Qwen3-Coder-Next-8bit\u002F\n├── gpt-oss-120b-MXFP4-Q8\u002F\n├── Qwen3.5-122B-A10B-4bit\u002F\n└── bge-m3\u002F\n```\n\n模型会根据类型自动检测。您还可以直接从管理员仪表盘下载模型。\n\n| 类型 | 模型 
|\n|------|--------|\n| LLM | [mlx-lm](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm)支持的任何模型 |\n| VLM | Qwen3.5系列、GLM-4V、Pixtral以及其他[mlx-vlm](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm)模型 |\n| OCR | DeepSeek-OCR、DOTS-OCR、GLM-OCR |\n| 嵌入 | BERT、BGE-M3、ModernBERT |\n| 重排序 | ModernBERT、XLM-RoBERTa |\n\n## CLI配置\n\n```bash\n# 已加载模型的内存限制\nomlx serve --model-dir ~\u002Fmodels --max-model-memory 32GB\n\n# 进程级内存限制（默认：自动 = RAM - 8GB）\nomlx serve --model-dir ~\u002Fmodels --max-process-memory 80%\n\n# 启用KV块的SSD缓存\nomlx serve --model-dir ~\u002Fmodels --paged-ssd-cache-dir ~\u002F.omlx\u002Fcache\n\n# 设置内存中的热缓存大小\nomlx serve --model-dir ~\u002Fmodels --hot-cache-max-size 20%\n\n# 调整最大并发请求数（默认：8）\nomlx serve --model-dir ~\u002Fmodels --max-concurrent-requests 16\n\n# 使用MCP工具\nomlx serve --model-dir ~\u002Fmodels --mcp-config mcp.json\n\n# HuggingFace镜像端点（适用于受限地区）\nomlx serve --model-dir ~\u002Fmodels --hf-endpoint https:\u002F\u002Fhf-mirror.com\n\n# API密钥认证\nomlx serve --model-dir ~\u002Fmodels --api-key your-secret-key\n# 仅限本地访问：可通过管理员面板全局设置跳过验证\n```\n\n所有设置也可通过网页管理员面板在`\u002Fadmin`处进行配置。设置会持久化到`~\u002F.omlx\u002Fsettings.json`，且CLI标志具有优先权。\n\n\u003Cdetails>\n\u003Csummary>架构\u003C\u002Fsummary>\n\n```\nFastAPI服务器（OpenAI \u002F Anthropic API）\n    │\n    ├── 引擎池（多模型、LRU逐出、TTL、手动加载\u002F卸载）\n    │   ├── 批处理引擎（LLMs、连续批处理）\n    │   ├── VLM引擎（视觉语言模型）\n    │   ├── 嵌入引擎\n    │   └── 重排序引擎\n    │\n    ├── 进程内存强制执行器（总内存限制、TTL检查）\n    │\n    ├── 调度器（先到先得、可配置并发度）\n    │   └── mlx-lm批处理生成器\n    │\n    └── 缓存堆栈\n        ├── 分页缓存管理器（GPU、基于块、写时复制、前缀共享）\n        ├── 热缓存（内存层级、写回策略）\n        └── 分页SSD缓存管理器（SSD冷层级、safetensors格式）\n```\n\n\u003C\u002Fdetails>\n\n## 开发\n\n### 命令行服务器\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx.git\ncd omlx\npip install -e \".[dev]\"\npytest -m \"not slow\"\n```\n\n### macOS 应用程序\n\n需要 Python 3.11 或更高版本以及 [venvstacks](https:\u002F\u002Fvenvstacks.lmstudio.ai)（运行 `pip install venvstacks` 安装）。\n\n```bash\ncd packaging\n\n# 完整构建（包含 venvstacks、应用包和 DMG）\npython build.py\n\n# 跳过 venvstacks（仅代码更改）\npython build.py --skip-venv\n\n# 仅生成 DMG\npython build.py --dmg-only\n```\n\n有关应用包结构和层配置的详细信息，请参阅 [packaging\u002FREADME.md](packaging\u002FREADME.md)。\n\n## 参与贡献\n\n欢迎各位参与贡献！详情请参阅 [贡献指南](docs\u002FCONTRIBUTING.md)。\n\n- 错误修复和功能改进\n- 性能优化\n- 文档改进\n\n## 许可证\n\n[Apache 2.0](LICENSE)\n\n## 致谢\n\n- Apple 的 [MLX](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx) 和 [mlx-lm](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm)\n- [mlx-vlm](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm) —— 在 Apple Silicon 上进行视觉语言模型推理\n- [vllm-mlx](https:\u002F\u002Fgithub.com\u002Fwaybarrios\u002Fvllm-mlx) —— oMLX 最初基于 vllm-mlx v0.1.0 开发，随后在多模型服务、分层 KV 缓存、支持完整分页缓存的 VLM、管理面板以及 macOS 菜单栏应用程序等方面得到了显著演进\n- [venvstacks](https:\u002F\u002Fvenvstacks.lmstudio.ai) —— 用于 macOS 应用包的便携式 Python 环境分层技术\n- [mlx-embeddings](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-embeddings) —— 对 Apple Silicon 的嵌入模型支持\n- [llm-compressor](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fllm-compressor) —— MoE 模型的 AWQ 参考实现，被用作 oQ 权重均衡设计的参考。","# oMLX 快速上手指南\n\noMLX 是一款专为 Apple Silicon Mac 优化的本地大语言模型（LLM）推理工具。它支持连续批处理（Continuous Batching）和分层 KV 缓存（内存 + SSD），可通过菜单栏或命令行轻松管理，是运行本地 Coding Agent（如 Claude Code）的理想选择。\n\n## 环境准备\n\n在开始之前，请确保您的设备满足以下要求：\n\n*   **操作系统**：macOS 15.0 (Sequoia) 或更高版本\n*   **硬件架构**：Apple Silicon (M1 \u002F M2 \u002F M3 \u002F M4 系列芯片)\n*   **Python 版本**：Python 3.10+\n*   **模型格式**：需准备 MLX 格式的模型（可从 HuggingFace 下载或通过内置下载器获取）\n\n## 安装步骤\n\n您可以选择通过 
**Homebrew**（推荐终端用户）或 **macOS App**（推荐图形界面用户）进行安装。\n\n### 方式一：通过 Homebrew 安装（含 CLI 工具）\n\n这是最灵活的安装方式，支持命令行服务和后台守护进程。\n\n```bash\n# 1. 添加 omlx 源\nbrew tap jundot\u002Fomlx https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\n\n# 2. 安装 omlx\nbrew install omlx\n\n# 3. (可选) 启动为后台服务（崩溃自动重启）\nbrew services start omlx\n\n# 4. (可选) 安装 MCP (Model Context Protocol) 支持\n\u002Fopt\u002Fhomebrew\u002Fopt\u002Fomlx\u002Flibexec\u002Fbin\u002Fpip install mcp\n```\n\n### 方式二：通过 macOS App 安装\n\n适合偏好图形界面管理的用户。\n\n1.  前往 [Releases 页面](https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Freleases) 下载最新的 `.dmg` 文件。\n2.  将 `oMLX` 拖入“应用程序”文件夹。\n3.  启动应用，按照欢迎界面的指引设置模型目录并下载首个模型。\n    *   *注意：此方式不包含 `omlx` 命令行工具，如需终端调用请配合方式一使用。*\n\n### 方式三：从源码安装\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx.git\ncd omlx\npip install -e .          # 仅安装核心功能\n# 或\npip install -e \".[mcp]\"   # 包含 MCP 支持\n```\n\n## 基本使用\n\n安装完成后，您可以通过命令行启动服务，或使用图形界面进行管理。\n\n### 1. 启动推理服务\n\n在终端中运行以下命令启动服务器（假设模型存放在 `~\u002Fmodels` 目录）：\n\n```bash\nomlx serve --model-dir ~\u002Fmodels\n```\n\n*   服务启动后，会自动扫描子目录中的 LLM、VLM、Embedding 和 Reranker 模型。\n*   **API 地址**：`http:\u002F\u002Flocalhost:8000\u002Fv1` (兼容 OpenAI 协议)\n*   **管理后台**：`http:\u002F\u002Flocalhost:8000\u002Fadmin` (内置 Web UI，支持模型下载、聊天测试、性能基准测试)\n\n### 2. 作为后台服务运行 (Homebrew 用户)\n\n如果您已通过 Homebrew 安装并希望开机自启或在后台运行：\n\n```bash\n# 启动服务\nbrew services start omlx\n\n# 查看状态\nbrew services info omlx\n\n# 停止服务\nbrew services stop omlx\n```\n\n*默认配置下，服务会读取 `~\u002F.omlx\u002Fmodels` 目录并在 8000 端口运行。*\n\n### 3. 连接客户端\n\noMLX 完全兼容 OpenAI API，您可以直接在代码或支持 OpenAI 协议的客户端（如 Cursor, OpenClaw, OpenCode 等）中使用：\n\n*   **Base URL**: `http:\u002F\u002Flocalhost:8000\u002Fv1`\n*   **API Key**: 任意字符串 (例如 `omlx`)\n\n**Python 调用示例：**\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(\n    base_url=\"http:\u002F\u002Flocalhost:8000\u002Fv1\",\n    api_key=\"omlx\" \n)\n\nresponse = client.chat.completions.create(\n    model=\"Step-3.5-Flash-8bit\", # 替换为您实际加载的模型目录名\n    messages=[{\"role\": \"user\", \"content\": \"Hello, oMLX!\"}]\n)\n\nprint(response.choices[0].message.content)\n```\n\n### 4. 核心特性提示\n\n*   **分层缓存**：oMLX 会自动将频繁访问的 KV 块保留在内存（Hot Tier），不常用的卸载到 SSD（Cold Tier），即使重启服务器也能复用之前的上下文，极大节省显存。\n*   **模型管理**：访问 `http:\u002F\u002Flocalhost:8000\u002Fadmin` 可一键下载 HuggingFace 上的 MLX 模型、固定常用模型到内存或设置自动卸载时间。\n*   **多模型并发**：支持同时加载文本、视觉 (VLM)、OCR 和 Embedding 模型，内存不足时会自动根据 LRU 策略卸载最少使用的模型。","一位 macOS 开发者需要在本地运行大型语言模型（如 Llama 3）辅助编写复杂代码，同时希望保持系统流畅并支持多任务并发处理。\n\n### 没有 omlx 时\n- **内存瓶颈严重**：加载大模型后占用全部统一内存，导致浏览器、IDE 等其他应用卡顿甚至崩溃，无法进行多任务操作。\n- **上下文切换低效**：每次开启新对话或切换模型时，必须重新加载权重和计算上下文，等待时间长且之前的缓存数据完全丢失。\n- **管理方式繁琐**：缺乏直观的控制界面，需通过复杂的命令行参数手动启停服务，难以实时监控显存状态或动态调整配置。\n- **长文本处理受限**：受限于纯内存架构，一旦代码上下文超出显存容量，推理速度急剧下降或直接报错，无法处理大型项目文件。\n\n### 使用 omlx 后\n- **智能分层缓存**：omlx 利用 SSD 作为二级缓存存储历史 KV 状态，将常用模型驻留内存，既释放了宝贵内存资源，又确保了系统整体流畅度。\n- **无缝上下文复用**：即使在长对话中动态切换任务，omlx 也能从 SSD 快速召回之前的上下文缓存，实现“秒级”响应，完美支持像 Claude Code 这样的编程助手。\n- **菜单栏一键管控**：通过 macOS 菜单栏即可直观地固定常用模型、按需热交换重型模型，并实时查看资源占用，无需触碰命令行。\n- **连续批处理优化**：内置的连续批处理技术允许同时处理多个开发者的请求或并行任务，显著提升了本地推理的吞吐量和实用性。\n\nomlx 通过将内存与 SSD 缓存智能结合，把原本仅能用于实验的本地大模型，转变为了开发者日常编码中稳定、高效且可管理的生产力工具。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjundot_omlx_c88b4a81.png","jundot","Jun Kim","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjundot_706d4016.jpg","Data engineer by day, AI dreamer by night. 
I build the tools I wish existed for my Mac — then open-source them.\r\njunkim.dot@gmail.com",null,"Seoul, Korea","https:\u002F\u002Fomlx.ai","https:\u002F\u002Fgithub.com\u002Fjundot",[81,85,89,93,97],{"name":82,"color":83,"percentage":84},"Python","#3572A5",83.7,{"name":86,"color":87,"percentage":88},"HTML","#e34c26",12.4,{"name":90,"color":91,"percentage":92},"JavaScript","#f1e05a",3.5,{"name":94,"color":95,"percentage":96},"CSS","#663399",0.3,{"name":98,"color":99,"percentage":100},"Ruby","#701516",0,9741,839,"2026-04-13T22:26:59","Apache-2.0","macOS","不需要独立显卡，必须使用 Apple Silicon (M1\u002FM2\u002FM3\u002FM4) 芯片，利用统一内存架构","最低未说明，推荐系统内存至少比模型需求多 8GB (默认进程内存限制为系统总内存 - 8GB)",{"notes":109,"python":110,"dependencies":111},"1. 仅支持 macOS 15.0+ (Sequoia) 及更高版本。\n2. 必须在 Apple Silicon 硬件上运行，不支持 Intel Mac 或其他平台。\n3. 提供分层 KV 缓存机制，可将不常用的缓存块卸载到 SSD 以节省内存。\n4. 支持通过 Homebrew 安装或直接下载 macOS .dmg 应用程序。","3.10+",[112,113,114,115],"mlx-lm","mlx-vlm","PyObjC","mcp (可选)",[14,35,52],[118,119,120,121,122,123],"apple-silicon","inference-server","llm","macos","mlx","openai-api","2026-03-27T02:49:30.150509","2026-04-14T12:30:14.588854",[127,132,137,142,147,152],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},33028,"模型在回答问题或执行任务时经常中断，是需要修改配置吗？","这通常是模型本身的限制，而非 oMLX 的 Bug。日志显示模型输出了自然语言（如“让我先查看一下这些改动”），但在生成具体的工具调用标记（如 `\u003Ctool_call>` 或 `\u003Cfunction=...>`）之前就停止了，并发送了结束序列（EOS）令牌。这意味着模型知道应该调用工具，但未能生成正确的 XML 格式标记就放弃了。建议尝试更换支持工具调用更完善的模型，或调整提示词以强化模型输出特定格式的能力。","https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fissues\u002F205",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},33029,"Gemma 4 模型加载后响应乱码或无法正常工作怎么办？","请升级到最新版本的 oMLX，新版本包含了对 Gemma 4 和 TurboQuant 支持的重大改进。如果更新后文本输入正常但图像输入仍有问题，可以尝试手动升级 mlx-vlm 库来解决：运行命令 `uv pip install git+https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm.git@main --upgrade --reinstall`。","https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fissues\u002F534",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},33030,"在 Open WebUI 中同一对话多次发消息时出现 'Response payload is not completed' 错误或服务器崩溃如何解决？","这是由 GPU 操作线程竞争引起的 Bug。当 Open WebUI 同时访问 VLM（接口模型）和 LLM（聊天模型）时，两个引擎会在同一个 Metal GPU 流上竞争，导致崩溃。该问题已在 v0.2.3.post4 版本中修复，修复方案是将所有 GPU 操作序列化到单个共享线程上。请下载并安装 v0.2.3.post4 或更高版本：https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Freleases\u002Ftag\u002Fv0.2.3.post4。","https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fissues\u002F80",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},33031,"服务器随机崩溃并报错 'Cannot load ... projected memory ... 
would exceed process limit' 是什么原因？","这通常发生在服务器试图重复加载同一个已加载的模型时，导致预估内存超过进程限制。例如，当新请求进来时，系统错误地尝试再次加载模型从而触发内存保护机制。该问题在后续版本中已通过增强后端鲁棒性得到修复，建议升级到最新版 oMLX 以避免此类因重复加载导致的内存溢出崩溃。","https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fissues\u002F62",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},33032,"执行复杂任务（如写文档）时 omlx 崩溃并报错缓存快照写入失败（No such file or directory）怎么办？","此错误通常与缓存目录路径无效、驱动器空间已满或发生缓存驱逐有关。日志显示后台快照写入失败是因为目标文件路径不存在。请检查您的缓存卷（如 \u002FVolumes\u002FCache）是否挂载正常且有足够空间。如果驱动器接近满载，系统可能会在写入过程中删除临时文件导致路径失效。清理磁盘空间或更改缓存设置到稳定的存储路径可缓解此问题。","https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fissues\u002F133",{"id":153,"question_zh":154,"answer_zh":155,"source_url":136},33033,"在 Claude Code 中使用 oMLX 的 'it' 系列模型时输出乱码或基准测试卡住怎么办？","即使升级到了 v0.3.2，部分用户反馈在 LiveCodeBench 基准测试中仍会卡住，且在 Claude Code 中使用 'it' 变体模型时输出乱码。这表明对特定模型变体或集成环境的兼容性仍在优化中。建议暂时避免在生产环境中对这些特定组合依赖过重，并关注官方发布的后续补丁，或在 Issue 中提供详细日志以便开发者进一步排查。",[157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247,252],{"id":158,"version":159,"summary_zh":160,"released_at":161},247758,"v0.3.5.dev1","> 开发版本，新增 Gemma 4 原生工具调用、UI 改进及多项 bug 修复。此为开发版本，可能包含未发现的 bug。如遇问题，请提交 issue。\n\n## 亮点\n\n### Gemma 4 原生工具调用\n\n将 mlx-lm 升级至 [dcbf6e3](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm\u002Fcommit\u002Fdcbf6e3)，mlx-vlm 升级至 [23e1dff](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm\u002Fcommit\u002F23e1dff)。mlx-lm 现在内置了 Gemma 4 的原生工具调用解析器（`\u003C|tool_call>` \u002F `\u003Ctool_call|>`）以及多标记思考\u002F工具标记支持。omlx 现有的 `parse_tool_calls()` 将自动使用新解析器，无需服务器端特殊处理。\n\n由于 mlx-vlm 现已原生支持不同尺寸的图像，因此移除了 Gemma 4 的多图像视觉补丁。批处理解码补丁暂时保留（mlx-vlm 仍使用基于 `cache.state` 的 KV 共享）。\n\n### 自动主题模式\n\n系统外观与管理界面中的自动\u002F浅色\u002F深色主题选择器同步。由 @Stv-X 贡献 (#621, #624)。\n\n### 新功能\n\n- 通过 `seed` 参数实现可复现的生成 (#640)\n- 新增 6 位和 8 位 TurboQuant 选项 (#594)\n- 在本地主机上启用 `skip_api_key_verification` 时跳过管理员身份验证 (#587)\n- 统一 `max_num_seqs` 和 `completion_batch_size` 为 `max_concurrent_requests`\n\n### Bug 修复\n\n- 修复 TurboQuant SSD 缓存重建崩溃问题 (#577)\n- 修复 oQ：跳过没有 `to_quantized()` 方法的量化模块 (#625)\n- 修复 oQ：在敏感性测量中解开元组层输出 (#627)\n- 修复聊天页面深色模式下标题可见性及移动端用户体验问题 (#586)\n- 修复导航栏主题选择器宽度裁剪问题\n- 修复 Homebrew formula v0.3.4 的 sha256 校验值 (#589)\n- 修正触发公式更新的时机，改为发布版本而非推送标签\n- 在 DMG 构建中捆绑 spacy `en_core_web_sm`，用于 Kokoro TTS (#590)\n- 向音频扩展中添加缺失的 TTS\u002FSTT\u002FSTS 依赖项 (#590)\n\n### 依赖项\n\n- mlx-lm 升级至 dcbf6e3（Gemma 4 工具调用解析器、多标记思考\u002F工具）\n- mlx-vlm 升级至 23e1dff（Gemma 4 多图像修复、嵌套工具解析器）\n- 新增 `regex` 依赖\n\n### 新贡献者\n\n- @Stv-X — 自动主题模式及设置 UI (#621, #624)\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fcompare\u002Fv0.3.4...v0.3.5.dev1","2026-04-07T10:14:50",{"id":163,"version":164,"summary_zh":165,"released_at":166},247759,"v0.3.4","> 此版本修复了 v0.3.3 中 mlx-vlm 批处理支持引入的内存增长问题。\n\n## 亮点\n\n### 改进的 Gemma 4 支持\n\n将 mlx-lm 升级至 [4469ad4](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm\u002Fcommit\u002F4469ad4)（BatchGenerator 重构），并将 mlx-vlm 升级至 [90732bd](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm\u002Fcommit\u002F90732bdee6a361ae5003b180dc3c1b17b9d01a80)。新增对 Gemma 4 视觉、音频及 MoE 模型的支持。修复了多张不同分辨率图像输入时视觉塔崩溃的问题。由 @TipKnuckle 实现的 Gemma 4 推理解析器和代理式工具调用功能（#565）。此外，还添加了 omlx 特有的自定义配置，以兼容 Gemma 4 VLM 的连续批处理。\n\n### TurboQuant：近乎零开销的解码\n\n将 `BatchTurboQuantKVCache` 重写为 `TurboQuantKVCache` 的子类，而非通过委托包装实现。结合 @Blaizzy 新推出的融合 Metal 内核（一次调度完成分数、softmax 和值计算），解码开销从基准测试的 43% 降至 8%。\n\n修复了混合注意力机制中的双重 softmax 错误（#556），该错误会导致模型失去焦点并陷入循环。同时修复了多个请求加入批处理时出现的连续批处理形状不匹配问题（#559）。\n\n**Qwen3.5-4B-MLX-4bit，8k 上下文，3-bit TQ：**\n\n| | 基准 | TurboQuant | 比例 |\n|---|---|---|---|\n| 解码 | 
117.9 tok\u002Fs | 109.6 tok\u002Fs | 0.93x |\n| 峰值内存 | 5.19 GB | 4.90 GB | -5.6% |\n| KV 缓存 | 0.30 GB | 0.10 GB | -67% |\n\n### 带有 TurboQuant 的连续批处理\n\nTurboQuant 现在可以处理多个并发请求。批处理操作（合并、提取、扩展、过滤）能够在批大小变化时正确处理量化状态。\n\n### 新特性\n\n- 视觉特征缓存，用于多轮图像复用\n- MCP 工具调用循环以及由 @rayone 实现的聊天界面中引擎状态时间线功能（#509）\n- Gemma 4 推理解析器和代理式工具调用功能，由 @TipKnuckle 完成（#565）\n- 标签推送时自动更新 Homebrew formula\n\n### Bug 修复\n\n- 修复 VLM 解码模型内存重复问题（#582）\n- 修复通过 mlx-lm 解码模型导致的 VLM 批量解码性能退化问题\n- 修复采用新 BatchGenerator 流水线时的语法约束生成问题\n- 修复 Gemma 4 多张图像输入时视觉塔因分辨率不同而崩溃的问题\n- 修复 Gemma 4 消息提取器中保留 image_url 部分的问题\n- 修复 Anthropic 处理器中应用 Gemma 4 消息提取器的问题\n- 修复 VLM 清理代理时缺少 audio_tower 属性的问题\n- 修复 SSD 恢复时 RotatingKVCache 容量不足导致合并崩溃的问题\n- 修复 Gemma 4 密集模型中 oQ 的 num_experts 为空的问题（#554）\n- 修复 TTS 波形转换中 bfloat16 音频的问题（#551）\n- 由 @MKuBMax 实现的本地 oMLX 健康检查绕过代理功能（#558）\n- 修复聊天界面：隐藏 `_ui:false` 消息，并在中断时移除多余的 `\u003C\u002Fthink>` 标签\n\n### 依赖项\n\n- 将 mlx-lm 升级至 4469ad4（BatchGenerator 重构 + Gemma 4 支持）\n- 将 mlx-vlm 升级至 90732bd（融合 TurboQuant Metal 内核）\n\n### 新贡献者\n\n- @MKuBMax — 实现本地健康检查的代理绕过功能（#558）\n- @rayone — 实现 MCP 工具调用循环、引擎状态时间线以及聊天界面优化功能（#509）\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fcompare\u002Fv0.3.2...v0.3.4","2026-04-05T03:23:42",{"id":168,"version":169,"summary_zh":170,"released_at":171},247760,"v0.3.2","## 亮点\n\n### Gemma 4 支持\n\n将 mlx-vlm 升级至 [43b9b20](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm\u002Fcommit\u002F43b9b2078b7027e7622f34a434b2d94aada7eaad)，新增对 Gemma 4 视觉、音频及 MoE 模型的支持。同时修复了共享 KV 缓存模型的分块预填充问题。\n\n### TurboQuant 回归\n\n基于 @Blaizzy 的 [mlx-vlm TurboQuant 集成](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-vlm\u002Fpull\u002F858)。直接引入 mlx-vlm 的多编解码引擎——Prod、MSE、Polar 和 Split 编解码器，并支持分数位宽（如 3.5 位）。\n\n我在 mlx-vlm 的单请求 `TurboQuantKVCache` 基础上构建了 `BatchTurboQuantKVCache`，以供 omlx 的连续批处理调度器使用。KV 缓存会在预填充阶段立即量化，从而降低峰值内存占用；解码阶段则每 32 个 token 进行一次批量量化，并采用混合注意力机制，结合缓冲的 fp16 状态与量化状态。\n\n可通过管理后台的模型设置或 `model_settings.json` 启用。\n\n**Qwen3.5-27B-4bit, 3-bit TQ：**\n\n| | 32k 上下文 | | 128k 上下文 | |\n|---|---|---|---|---|\n| | 基线 | TQ | 基线 | TQ |\n| KV 缓存内存 | 2.14 GB | 0.54 GB (**-75%**) | 8.14 GB | 1.70 GB (**-79%**) |\n| 峰值内存 | 22.47 GB | 21.11 GB (**-1.4 GB**) | 37.66 GB | 33.55 GB (**-4.1 GB**) |\n| 预填充 | 362 tok\u002Fs | 353 tok\u002Fs | 238 tok\u002Fs | 226 tok\u002Fs |\n| 解码 | 28.4 tok\u002Fs | 17.9 tok\u002Fs | 19.4 tok\u002Fs | 7.3 tok\u002Fs |\n\n峰值内存的节省程度会随着上下文长度的增加而提升。解码速度的权衡是量化 KV 注意力机制固有的——TQ 更适合内存受限的长上下文场景，而非追求速度。\n\n### 错误修复\n\n- 修复 VLM 引擎加载图像时未应用 EXIF 方向的问题\n- 修复 Gemma 4 共享 KV 缓存模型的分块预填充问题\n\n### 依赖项\n\n- 将 mlx-vlm 升级至 43b9b20（Gemma 4、TurboQuant）\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fcompare\u002Fv0.3.1...v0.3.2","2026-04-03T13:46:36",{"id":173,"version":174,"summary_zh":175,"released_at":176},247761,"v0.3.1","### Bug 修复\n\n- **修复**：TTL 到期时卸载存在活跃请求的模型 — 所有引擎类型（LLM、VLM、嵌入、重排序器、STT、TTS、STS）现在都会报告活跃请求数，以便 TTL 检查跳过繁忙的引擎 (#522)\n- **修复**：VLM mRoPE 位置状态在预填充过程中丢失 — Qwen2-VL\u002FQwen2.5-VL 的多轮对话可能会产生质量下降的输出 (#531)\n- **修复**：快照写入线程与清理之间的竞态条件\n- **修复**：思维回退工具调用提取过于贪婪 — 调整正则表达式以防止误匹配 (#484)\n- **修复**：音频端点中模型别名无法解析 (#525)\n- **修复**：TTS\u002FSTT\u002FSTS 的 mlx-audio 可选依赖缺失 (#515)\n- **修复**：仅包含 VLM 的模型在 `force_lm` 基准测试加载时失败 (#487)\n\n### 改进\n\n- 将 xgrammar 设为可选 — 自动检测安装方式（pip 或 uv），并显示正确的安装命令\n- 启用 faulthandler 以进行原生崩溃诊断 (#511, #520)\n- 在 HF 上传工具中添加重新下载提示切换开关\n- oQ：更新描述以反映当前实现，暂时禁用增强量化 UI\n- 依赖项：将 mlx-vlm 升级至 9db27b5\n\n### 新贡献者\n\n- @latent-variable 在 #517 中做出了首次贡献","2026-04-02T20:41:53",{"id":178,"version":179,"summary_zh":180,"released_at":181},247762,"v0.3.0","## 亮点\n\n## 
音频支持 — STT、TTS、STS\n\n通过集成 [mlx-audio](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-audio)，Apple Silicon 平台上的音频模型新增了三种引擎类型。作者：@ethannortharc\n\n- **STT**（语音转文本）：Whisper、Qwen3-ASR、Parakeet、Voxtral\n- **TTS**（文本转语音）：Qwen3-TTS、Kokoro、F5-TTS、Sesame CSM、Dia、Spark、CosyVoice\n- **STS**（语音到语音）：DeepFilterNet、MossFormer2、SAMAudio、LFM2.5-Audio\n\n新增三个与 OpenAI 兼容的端点：`\u002Fv1\u002Faudio\u002Ftranscriptions`、`\u002Fv1\u002Faudio\u002Fspeech`、`\u002Fv1\u002Faudio\u002Fprocess`（oMLX 特定）。音频模型会自动从 mlx-audio 注册表中检测，并在管理仪表板上与 LLM\u002FVLM 模型一同显示。\n\n![音频集成](https:\u002F\u002Fraw.githubusercontent.com\u002Fjundot\u002Fomlx\u002Fmain\u002Fdocs\u002Fimages\u002Fomlx_audio.png)\n\n可选依赖 — `pip install 'omlx[audio]'`。Homebrew 和 DMG 版本默认包含 mlx-audio。（#365）\n\n## XGrammar 约束解码\n\n由 @leuski 实现的基于 [xgrammar](https:\u002F\u002Fgithub.com\u002Fmlc-ai\u002Fxgrammar) 的结构化输出生成。该功能使用位掩码在 logits 层面强制执行语法约束，并通过 `mx.async_eval` 与模型前向传播并行运行。\n\n支持的语法类型：\n- `json` — JSON 模式验证\n- `regex` — 正则表达式模式\n- `grammar` — EBNF\u002FGBNF 文法\n- `choice` — 允许的字符串列表\n\n采用与 vLLM 兼容的 `structured_outputs` 字段（位于 `extra_body` 中）。每个模型的 `reasoning_parser` 配置将 xgrammar 的结构化标签映射到相应模型协议（Qwen、Harmony、DeepSeek、Llama 等）。解码时性能开销为 9–24%（TTFT 无影响），且随着模型规模增大而降低。\n\n可选依赖 — `pip install 'omlx[grammar]'`。Homebrew 和 DMG 版本默认包含 xgrammar。若未安装，`response_format` 将回退到提示注入方式，且 `structured_outputs` 会返回 400 错误，并附带安装说明。（#335）\n\n### 新特性\n\n- **XTC 采样器** — 支持 XTC（排除顶级选项）采样。可通过任意 API 端点传递 `xtc_probability` 和 `xtc_threshold` 参数。默认值为 0.0（禁用）。（#337，作者：@blightbow）\n- **MCP 可流式 HTTP** — MCP 现在除了支持 stdio 外，还支持可流式的 HTTP 传输。（#286，作者：@tianfeng98）\n- **多模态嵌入项** — `\u002Fv1\u002Fembeddings` 接受包含文本和图像输入的结构化 `items`。已使用 `Qwen3-VL-Embedding-2B-mxfp8` 进行测试。（#373，作者：@MasakiMu319）\n- **自定义处理器嵌入支持** — 嵌入请求会在存在自定义处理器钩子时通过这些钩子路由，从而修复了如 Qwen3-VL-Embedding 等在通用分词器路径下出现故障的模型。（#369，作者：@MasakiMu319）\n- **聊天界面中的系统提示支持** — 聊天界面现在可以接受系统提示。\n- **清除所有 SSD 缓存按钮** — 管理仪表板新增了一个用于清除所有 SSD 缓存块的按钮。\n- **SSD 缓存大小显示** — 即使没有加载任何模型，也会显示 SSD 缓存的大小。\n- **响应式管理仪表板** — 管理仪表板现已支持移动设备。\n- **实时菜单栏更新** — macOS 菜单栏状态（停止 → 启动中 → 运行中）及按钮状态","2026-03-30T19:20:56",{"id":183,"version":184,"summary_zh":185,"released_at":186},247763,"v0.3.0rc1","> 这是 oMLX 0.3.0 的预发布版本。在正式发布之前，它将进行一天的测试。如果您发现任何错误，请在 [问题页面](https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fissues) 上报告！\n\n## 亮点\n\n## 音频支持 — STT、TTS、STS\n\n通过集成 [mlx-audio](https:\u002F\u002Fgithub.com\u002FBlaizzy\u002Fmlx-audio)，Apple Silicon 平台上新增了三种音频模型引擎类型。由 @ethannortharc 实现。\n\n- **STT**（语音转文本）：Whisper、Qwen3-ASR、Parakeet、Voxtral\n- **TTS**（文本转语音）：Qwen3-TTS、Kokoro、F5-TTS、Sesame CSM、Dia、Spark、CosyVoice\n- **STS**（语音到语音）：DeepFilterNet、MossFormer2、SAMAudio、LFM2.5-Audio\n\n新增三个兼容 OpenAI 的 API 端点：`\u002Fv1\u002Faudio\u002Ftranscriptions`、`\u002Fv1\u002Faudio\u002Fspeech`、`\u002Fv1\u002Faudio\u002Fprocess`（oMLX 特定）。音频模型会自动从 mlx-audio 注册表中检测，并与 LLM\u002FVLM 模型一同显示在管理后台中。\n\n![音频集成](https:\u002F\u002Fraw.githubusercontent.com\u002Fjundot\u002Fomlx\u002Fmain\u002Fdocs\u002Fimages\u002Fomlx_audio.png)\n\n可选依赖项 — `pip install 'omlx[audio]'`。Homebrew 和 DMG 构建默认包含 mlx-audio。（#365）\n\n## XGrammar 约束解码\n\n由 @leuski 实现的基于 [xgrammar](https:\u002F\u002Fgithub.com\u002Fmlc-ai\u002Fxgrammar) 的结构化输出生成。该功能使用位掩码在 logits 层面强制执行语法约束，并通过 `mx.async_eval` 与模型前向传播并行运行。\n\n支持的语法类型：\n- `json` — JSON 模式验证\n- `regex` — 正则表达式模式\n- `grammar` — EBNF\u002FGBNF 文法\n- `choice` — 允许的字符串列表\n\n使用与 vLLM 兼容的 `structured_outputs` 字段（位于 `extra_body` 中）。每个模型的 `reasoning_parser` 配置会将 xgrammar 的结构化标签映射到相应模型协议（Qwen、Harmony、DeepSeek、Llama 等）。解码时性能开销为 9–24%，不会影响 TTFT，且随着模型规模增大而降低。\n\n可选依赖项 — `pip install 
'omlx[grammar]'`。Homebrew 和 DMG 构建默认包含 xgrammar。若未安装，`response_format` 将回退到提示注入方式，且 `structured_outputs` 会返回一个带有安装说明的 400 错误。（#335）\n\n### 新特性\n\n- **XTC 采样器** — 支持 XTC（eXclude Top Choices）采样。可通过任意 API 端点传递 `xtc_probability` 和 `xtc_threshold` 参数。默认值为 0.0（禁用）（#337，由 @blightbow 提供）\n- **MCP 流式 HTTP** — MCP 现在除了支持 stdio 外，还支持流式 HTTP 传输（#286，由 @tianfeng98 提供）\n- **多模态嵌入条目** — `\u002Fv1\u002Fembeddings` 端点现在接受包含文本和图像输入的结构化 `items`。已使用 `Qwen3-VL-Embedding-2B-mxfp8` 进行测试（#373，由 @MasakiMu319 提供）\n- **自定义处理器嵌入支持** — 嵌入请求会在有可用自定义处理器钩子时通过这些钩子路由，从而修复了像 Qwen3-VL-Embedding 这样的模型在通用分词器路径上出现的问题（#369，由 @MasakiMu319 提供）\n- **聊天界面中的系统提示支持** — 聊天界面现在可以接受系统提示\n- **清除所有 SSD 缓存按钮** — 管理后台新增了一个用于清除所有 SSD 缓存块的按钮\n- **SSD 缓存大小显示** — 即使在…时也能显示 SSD 缓存大小","2026-03-29T11:07:33",{"id":188,"version":189,"summary_zh":190,"released_at":191},247764,"v0.2.24","## v0.2.24 版本更新日志\n\n### 重大缺陷修复\n\n- **修复所有 Qwen3.5 模型的多模态模型加载失败问题** — `transformers` 5.4.0（于3月27日发布）将 `Qwen2VLImageProcessor` 的后端从 numpy\u002FPIL 重写为 torch\u002Ftorchvision，导致在没有安装 PyTorch 的环境中无法加载多模态模型。所有 Qwen3.5 模型在初始化多模态部分时都会失败，并回退到仅使用语言模型，从而造成模型被重复加载，峰值内存使用量增加约两倍。已将 `transformers` 锁定至 `>=5.0.0,\u003C5.4.0`。（#431）\n- **修复 IOKit 内核崩溃（completeMemory 准备计数下溢）问题** — 在请求完成后立即调用 `mx.clear_cache()` 会与 IOKit 的异步引用计数清理操作产生竞争，进而导致 M1\u002FM2\u002FM3 设备出现内核崩溃。现将 Metal 缓冲区的清理延迟 8 个生成步骤，以确保 IOKit 回调能够顺利完成。（#435）\n- **修复启用内存保护机制时模型加载过程中的交换问题** — `mx.set_memory_limit()` 会导致 MLX 在模型加载期间过度回收缓存缓冲区，引发频繁的内存分配与释放，最终使系统进入交换状态。现已完全移除 Metal 层面的内存限制，因为所有的内存保护机制都改用 `mx.get_active_memory()` 轮询来实现。（#429）\n\n### 其他缺陷修复\n\n- 修复大型 MoE 模型在 GPTQ 量化下的性能问题\n- 修复多模态模型分词器过早加载导致 oQ 量化过程中内存溢出的问题\n- 加强错误恢复机制，防止在清理过程中因次要 Metal 错误而触发 SIGABRT 信号（#429、#435）","2026-03-28T04:45:17",{"id":193,"version":194,"summary_zh":195,"released_at":196},247765,"v0.2.23","> 强烈建议升级到 0.2.23 版本。0.2.22 版本存在严重 bug，会导致在长上下文和并发请求场景下出现崩溃和内存问题。对此造成的不便，我们深表歉意。\n\n## v0.2.23 发行说明\n\n### 重大 Bug 修复\n\n- **修复预填充过程中 Metal 缓冲区累积导致的崩溃** — 0.2.22 版本禁用了预填充块之间的缓冲区清空操作，导致 GPU 内存会跨多个块不断累积，直至 Metal 驱动程序崩溃。此问题影响所有设备，但在内存较小的机器上尤为严重。（#410、#412、#421）\n- **修复因请求间残留的 Metal 缓冲区导致 TTFT 突增的问题** — 请求之间积累在 Metal 缓冲池中的已释放缓冲区，会在下一次预填充时触发昂贵的紧急垃圾回收。（#411）\n- **修复缓存重建时 KVCache 偏移量不匹配的问题** — 部分前缀匹配后，存储的 meta_state 偏移量可能超过实际张量长度，从而在并发数大于 1 的混合注意力模型（Qwen3.5）中引发 `broadcast_shapes` 错误。（#409）\n\n### 其他 Bug 修复\n\n- 修复 MoE 路由器门控量化导致模型加载失败的问题\n- 修复 TurboQuant KV 缓存转换在缓存合并预填充路径中缺失的问题（#422）\n- 在进一步优化之前，禁用实验性 TurboQuant 功能\n\n### 功能改进\n\n- oQ：优化位分配策略\n- oQ：为 Nemotron-H 模型启用增强量化","2026-03-27T16:09:12",{"id":198,"version":199,"summary_zh":200,"released_at":201},247766,"v0.2.22","## v0.2.22 版本更新日志\n\n### 错误修复\n\n- 修复请求中止路径中 `batch_generator.remove()` 之前的 GPU 同步问题\n- 修复因不必要的每块 `_sync_and_clear_cache()` 调用导致的预填充性能下降问题（#396）\n- 修复 VLM 模型在 Anthropic `tool_result` 内容中图像被移除的问题（#393）\n- 修复 GPTQ 轴不匹配问题——使去量化-量化分组与 `mx.quantize` 保持一致\n- 修复 GPTQ 在输出维度非 2 的幂时 `group_size` 回退崩溃问题\n- 修复准确率基准测试强制 LM 引擎避免 VLM 空响应的问题\n\n### 功能改进\n\n- 支持 `x-api-key` 头，以兼容 Anthropic SDK（#379）\n- oQ：密集模型的 MLP 非对称量化——在保护 `gate_proj` 和 `down_proj` 的同时减少 `up_proj` 的位数\n- oQ：GPTQ 性能和稳定性提升，并将增强后缀更名为 `e`","2026-03-26T16:03:43",{"id":203,"version":204,"summary_zh":205,"released_at":206},247767,"v0.2.21","## v0.2.21 发行说明\n\n### 亮点\n\n### TurboQuant KV 缓存（实验性）\n\n> 这是一项实验性功能，在某些场景下可能无法正常工作。\n\n![TurboQuant KV 缓存](docs\u002Fimages\u002Fomlx_turboquant.png)\n\n基于码本量化的 KV 缓存，在生成过程中对键值状态进行压缩。该技术基于 [TurboQuant](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19874)——随机正交旋转 + 贝塔分布码本 + 基于边界的标量量化。\n\n**工作原理**：预填充阶段以全精度 fp16 速度运行，不会造成质量损失。在第一个解码 token 时，累积的 KV 缓存会被量化为 3 位或 4 位的码本索引。解码注意力机制使用一个融合的两 pass Flash Attention Metal 
内核，直接从打包后的索引中读取数据——无需反量化，也无需 fp16 中间张量。\n\n#### 针尖对麦芒（Qwen3.5-35B-A3B，3 位 TurboQuant）\n\n| 上下文长度 | 基线 | TurboQuant | KV 内存占用 |\n|------------|--------|------------|-------------|\n| 32K        | ✅     | ✅         | 735MB → 195MB (73%) |\n| 64K        | ✅     | ✅         | 1407MB → 327MB (77%) |\n| 128K       | ✅     | ✅         | 2749MB → 589MB (79%) |\n\n#### 性能表现\n\n| 模型               | 预填充速度 | 解码速度 |\n|--------------------|------------|----------|\n| Qwen3.5-35B-A3B    | 95%        | 87%      |\n| Qwen3.5-27B        | 97%        | 95%      |\n\n*速度数值为相对于 fp16 基线性能的百分比。*\n\n可在管理界面 → 模型设置 → 实验性功能 → TurboQuant KV 缓存开关中启用。\n\n### oQe — 结合 GPTQ 权重优化的增强量化\n\noQe 在 oQ 的敏感性驱动混合精度系统基础上，加入了基于海森矩阵的 GPTQ 误差补偿。标准量化会独立地对每个权重进行四舍五入；而 GPTQ 则按列顺序处理，并根据校准输入的逆海森矩阵引导，调整剩余权重以补偿四舍五入误差。最终输出仍为 mlx-lm 兼容格式——结构相同、推理速度一致——但输出误差显著降低。\n\n对于 MoE 模型，路由专家（占总参数的 90% 以上）采用批处理算法进行处理：同一层的所有专家共享同一个海森矩阵，因此可以同时对所有专家进行逐列优化。以 Qwen3.5-35B-A3B（256 个专家 × 40 层）为例，批量处理耗时约 6 分钟，而逐列处理则需约 90 分钟，速度提升 15 倍，结果完全一致。\n\n**支持的架构**：Qwen3.5 MoE\u002F密集型、MiniMax-M2.5、GLM、Step-3.5、Nemotron-Cascade、Llama\u002FMistral，以及 VLM（视觉权重保持 fp16）。详情请参阅 [oQ 文档](https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fblob\u002Fmain\u002Fdocs\u002FoQ_Quantization.md)。\n\n### 错误修复\n\n- 修复 VLM 缓存代理使用权威 mx.array 偏移量，而非不可靠的 _offset 快捷方式\n- 修复 VLM 代理中 BatchRotatingKVCache 的单调 _offset 问题\n- 修复 SSD 缓存写入时磁盘满日志记录问题 (#342)\n- 修复 LoRA 适配器目录出现在模型发现和管理下载中的问题 (#356)\n- 修复 OOM 失败时 Metal 缓存清理的生成内存保护机制 (#372)\n- 修复 function_call_output 接受列表\u002F字典并序列化为 JSON 字符串的问题 (#367)\n- 修复下载弹窗","2026-03-25T23:18:32",{"id":208,"version":209,"summary_zh":210,"released_at":211},247768,"v0.2.20","## Highlights\r\n\r\n## oQ — oMLX universal dynamic quantization\r\n\r\nQuantization should not be exclusive to any particular inference server. oQ produces standard mlx-lm compatible models that work everywhere — oMLX, mlx-lm, and any app that supports MLX safetensors. No custom loader required.\r\n\r\n**oQ is a data-driven mixed-precision quantization system for Apple Silicon. Instead of assigning bits by fixed rules or tensor type, oQ measures each layer's actual quantization sensitivity through calibration and allocates bits where the data says they matter most.** See the [oQ documentation](https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fblob\u002Fmain\u002Fdocs\u002FoQ_Quantization.md) for details.\r\n\r\n#### Benchmarks (Qwen3.5-35B-A3B)\r\n\r\n| Benchmark | Samples | 2-bit mlx-lm | 2-bit oQ | 3-bit mlx-lm | 3-bit oQ | 4-bit mlx-lm | 4-bit oQ |\r\n|-----------|---------|-------------|----------|-------------|----------|-------------|----------|\r\n| MMLU | 300 | 14.0% | **64.0%** | 76.3% | **85.0%** | 79.7% | **83.3%** |\r\n| TRUTHFULQA | 300 | 17.0% | **80.0%** | 81.7% | **86.7%** | 87.7% | **88.0%** |\r\n| HUMANEVAL | 164 (full) | 0.0% | **78.0%** | 84.8% | **86.6%** | **87.2%** | 85.4% |\r\n| MBPP | 300 | 0.3% | **63.3%** | 69.0% | **72.0%** | 71.7% | **74.3%** |\r\n\r\n- **oQ2-oQ8 levels** with sensitivity-driven mixed-precision bit allocation\r\n- **oQ3.5** base 3-bit + routed expert down_proj 4-bit (Super Weights protection)\r\n- **AWQ weight equalization** rewritten from scratch following the llm-compressor reference implementation. fixed critical double-scaling bug on hybrid attention models (Qwen3.5) and added per-layer mask-aware calibration\r\n- **Sensitivity-driven budget plan.** mandatory lm_head 8-bit protection, then data-driven tier allocation (+4\u002F+2\u002F+1 bits) with greedy fallback. 
no hardcoded tensor-type priorities — calibration data decides which layers matter\r\n- **Proxy sensitivity model.** select a quantized version of the source model for layer sensitivity analysis with ~4x less memory. 90% top-10 overlap with full-precision measurement validated on Qwen3.5-35B\r\n- **New calibration dataset.** 600 samples from codeparrot\u002Fself-instruct-starcoder (real code), allenai\u002Fc4 (web text), Open-Orca (conversation), gsm8k (reasoning), and wikipedia multilingual. replaces the old HumanEval\u002FMBPP-only code samples\r\n- **VLM support.** quantize vision-language models with vision weight preservation (fp16)\r\n- **FP8 model support.** use native FP8 models (MiniMax, DeepSeek) as quantization source\r\n- **MiniMax M2.5 support.** block_sparse_moe architecture with SwitchGLU fused experts\r\n- **DeepSeek V3.2 support.** shared_experts (plural) + MLA projections. MLP AWQ works, MLA attention AWQ planned\r\n- **Nemotron support.** backbone.embeddings path detection for sensitivity measurement on hybrid Mamba+MoE+Attention architecture\r\n- **AWQ grid size setting.** configurable n_grid (10 fast \u002F 20 recommended) from the web UI\r\n- **HuggingFace Hub uploader.** upload quantized models directly from the dashboard\r\n- blocks inference requests during quantization to prevent conflicts\r\n\r\n## Intelligence benchmark suite\r\n\r\nEvaluate model intelligence across knowledge, reasoning, math, and coding benchmarks. All datasets bundled locally for offline use.\r\n\r\n![oMLX Benchmark Suite](https:\u002F\u002Fraw.githubusercontent.com\u002Fjundot\u002Fomlx\u002Fmain\u002Fdocs\u002Fimages\u002Fomlx_benchmark.png)\r\n\r\n- **Knowledge:** MMLU, ARC-Challenge, KMMLU (Korean), CMMLU (Chinese), JMMLU (Japanese)\r\n- **Reasoning:** HellaSwag, Winogrande, TruthfulQA, GSM8K\r\n- **Coding:** HumanEval (164 function completions, pass@1), MBPP\r\n- benchmark queue for sequential multi-model evaluation with persistent results\r\n- comparison table with mode\u002Fsample columns and text export\r\n- sample size options: 30\u002F50\u002F100\u002F200\u002F300\u002F500\u002F1000\u002F2000\u002FFull\r\n- batch processing: 1x\u002F2x\u002F4x\u002F8x\u002F16x\u002F32x\r\n- download raw results as JSON\r\n\r\n## New Features\r\n\r\n- **Prefill memory guard.** prevents kernel panics on large context by detecting head_dim>128 O(n^2) SDPA fallback and enforcing safe prefill chunk sizes\r\n- **Native BERT\u002FXLMRoBERTa embedding.** load BERT-family embedding models (bge-m3, mxbai-embed) without mlx-embeddings fallback (#330 by @yes999zc)\r\n- **Jina v3 reranker.** reranking via `\u003C|score_token|>` logits for jinaai\u002Fjina-reranker-v3-mlx (#331 by @yes999zc)\r\n- **Partial mode.** assistant message prefill support for Moonshot\u002FKimi K2 models (`partial` field + `name` field passthrough) (#306 by @blightbow)\r\n- **Codex smart config merging.** non-destructive config merge with reasoning model auto-detection (#249 by @JasonYeYuhe)\r\n- **i18n normalization.** normalize translation files against en.json with missing key detection (#247 by @xiaoran007)\r\n- **Web dashboard generating status.** show generating status for active requests after prefill completes\r\n\r\n## Experimental Features\r\n\r\n- **SpecPrefill.** attention-based sparse prefill for MoE models. reduces prefill compute by skipping low-attention tokens. 
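As a rough illustration of the sensitivity-driven budget plan in the oQ notes above (a base bit-width, a protected `lm_head`, and greedy tier promotion of the most sensitive layers), here is a toy allocator. It is not oMLX's actual implementation: the layer names, sizes, sensitivities, tier handling, and budget accounting below are all simplifying assumptions.

```python
# Toy sketch of sensitivity-driven mixed-precision bit allocation (illustrative only).
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    params: int         # number of weights in the layer
    sensitivity: float  # measured quantization sensitivity from calibration

def allocate_bits(layers, base_bits=3, avg_bits_budget=3.5, lm_head_bits=8):
    total_params = sum(l.params for l in layers)
    budget = avg_bits_budget * total_params  # total bit budget for the model
    # mandatory protection first, everything else starts at the base bit-width
    plan = {l.name: (lm_head_bits if l.name == "lm_head" else base_bits) for l in layers}
    spent = sum(plan[l.name] * l.params for l in layers)
    # greedy promotion: most sensitive layers first, larger tiers before smaller ones
    for extra in (4, 2, 1):
        for l in sorted(layers, key=lambda l: l.sensitivity, reverse=True):
            if l.name == "lm_head" or plan[l.name] != base_bits:
                continue
            cost = extra * l.params
            if spent + cost <= budget:
                plan[l.name] += extra
                spent += cost
    return plan

layers = [
    Layer("lm_head", 50_000_000, 0.9),
    Layer("layers.0.mlp.down_proj", 30_000_000, 0.8),
    Layer("layers.0.mlp.up_proj", 30_000_000, 0.2),
    Layer("layers.1.self_attn.o_proj", 10_000_000, 0.6),
]
print(allocate_bits(layers))
```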
system prompt is protected from token dropping to preserve instruction fol","2026-03-23T20:11:52",{"id":213,"version":214,"summary_zh":215,"released_at":216},247769,"v0.2.20rc1","> This is a release candidate for v0.2.20. Please test and report any issues before the final release.\n\n## Highlights\n\n### oQ — oMLX universal dynamic quantization\n\nQuantize any model directly from the web dashboard. **oQ produces standard mlx-lm compatible models that work everywhere, no custom loader required.** oMLX, mlx-lm, and any app that supports MLX safetensors format.\n\noQ analyzes weight distributions per-layer and applies the optimal quantization format (mxfp4, mxfp8, affine) automatically. See the [oQ documentation](https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fblob\u002Fmain\u002Fdocs\u002FoQ_Quantization.md) for details.\n\n- **oQ2-oQ8 levels** with calibration datasets and CLIP-based evaluation\n- **oQ3.5** base 3-bit + expert down_proj 4-bit (~3.9 bpw)\n- **Hybrid quantization** per-layer mxfp4\u002Fmxfp8\u002Faffine format selection for better quality-size tradeoffs\n- **AutoAWQ weight equalization** with per-expert scaling and data-driven sensitivity analysis for improved accuracy\n- **VLM support.** Quantize vision-language models with vision weight preservation\n- **FP8 model support.** Use native FP8 models (MiniMax, DeepSeek) as quantization source\n- **Clip optimization speedup** with GPU batch size setting for faster AWQ-style clipping\n- Blocks inference requests during quantization to prevent conflicts\n\n### Intelligence benchmark suite\n\nEvaluate model intelligence across knowledge, reasoning, math, and coding benchmarks. All datasets bundled locally for offline use.\n\n- **Knowledge:** MMLU, ARC-Challenge, KMMLU (Korean), CMMLU (Chinese), JMMLU (Japanese)\n- **Reasoning:** HellaSwag, Winogrande, TruthfulQA, GSM8K\n- **Coding:** HumanEval (164 function completions, pass@1), MBPP\n- Benchmark queue for sequential multi-model evaluation with persistent results\n- Comparison table with mode\u002Fsample columns and text export\n- Sample size options: 30\u002F50\u002F100\u002F200\u002F300\u002F500\u002F1000\u002F2000\u002FFull\n- Batch processing: 1x\u002F2x\u002F4x\u002F8x\u002F16x\u002F32x\n- Download raw results as JSON\n\n## New Features\n\n- **Prefill memory guard.** Prevents kernel panics on large context by detecting head_dim>128 O(n²) SDPA fallback and enforcing safe prefill chunk sizes\n- **Native BERT\u002FXLMRoBERTa embedding.** Load BERT-family embedding models (bge-m3, mxbai-embed) without mlx-embeddings fallback (#330 by @yes999zc)\n- **Jina v3 reranker.** Reranking via `\u003C|score_token|>` logits for jinaai\u002Fjina-reranker-v3-mlx (#331 by @yes999zc)\n- **Partial mode.** Assistant message prefill support for Moonshot\u002FKimi K2 models (`partial` field + `name` field passthrough) (#306 by @blightbow)\n- **Codex smart config merging.** Non-destructive config merge with reasoning model auto-detection (#249 by @JasonYeYuhe)\n- **i18n normalization.** Normalize translation files against en.json with missing key detection (#247 by @xiaoran007)\n- **Web dashboard generating status.** Show generating status for active requests after prefill completes\n\n## Experimental Features\n\n- **SpecPrefill.** Attention-based sparse prefill for MoE models. Reduces prefill compute by skipping low-attention tokens. 
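The SpecPrefill bullet above describes skipping low-attention prompt tokens while protecting the system prompt. The sketch below illustrates only that selection step; how the importance scores are produced, and everything else about SpecPrefill, is assumed here rather than taken from the oMLX code.

```python
# Toy illustration of attention-based token selection with a protected system prompt.
# Not oMLX's SpecPrefill implementation; scores are assumed to come from a cheap pass.
import numpy as np

def select_prefill_tokens(attn_scores: np.ndarray, system_len: int, keep_ratio: float = 0.5):
    """Return sorted indices of prompt tokens to keep during sparse prefill."""
    n = len(attn_scores)
    keep = set(range(system_len))                         # never drop system-prompt tokens
    rest = np.arange(system_len, n)
    k = int(np.ceil(keep_ratio * len(rest)))
    top = rest[np.argsort(attn_scores[rest])[::-1][:k]]   # highest-scoring remaining tokens
    keep.update(top.tolist())
    return sorted(keep)

scores = np.array([0.9, 0.8, 0.7, 0.05, 0.6, 0.01, 0.4, 0.02])
print(select_prefill_tokens(scores, system_len=3, keep_ratio=0.5))
```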
System prompt is protected from token dropping to preserve instruction following.\n\n## Bug Fixes\n\n- Fix chat streaming failure not sending error message to client (#342)\n- Fix TTL auto-unload during benchmark causing Metal GPU crash\n- Fix dtype normalization on enhanced path causing OOM on large models\n- Fix oQ bf16→fp16 weight conversion causing 41% quantized value corruption\n- Fix oQ mxfp4 uint8 scales being force-cast to fp16\n- Fix oQ clip optimization mask dtype and position_ids for Qwen3.5\n- Fix oQ streaming quantization accuracy and VLM support\n- Fix MC benchmarks (MMLU, HellaSwag, TruthfulQA) always scoring 0% due to max_tokens=1\n- Fix HumanEval scoring. Prepend prompt imports when model returns function only\n- Fix MBPP scoring. Include test cases in prompt so model uses correct function name\n- Fix benchmark code extraction. Extract last answer\u002Fcode block instead of first\n- Fix benchmark penalties. Force neutral presence_penalty=0 and repetition_penalty=1\n- Fix think prefix false positive for disabled thinking patterns (`\u003Cthink>\u003C\u002Fthink>`)\n- Fix responses API image support for VLM + missing prompt_tokens in completions usage\n- Fix SSE streaming behind nginx reverse proxy (X-Accel-Buffering header) (#309)\n- Fix CausalLM-based embedding model detection (Qwen3-Embedding) (#327)\n- Fix web dashboard unload tooltip clipping in active models box (#314)\n- Fix web dashboard 401 warning log spam from dashboard polling\n- Fix web dashboard model settings not showing for embedding\u002Freranker models\n- Fix PEP 735 dependency-groups for `uv sync --dev` (#305 by @blightbow)\n\n## New Contributors\n\n- @blightbow made their first contribution in #305\n- @yes999zc made their first contribution in #330\n- @JasonYeYuhe made their first contribution in #249\n- @xiaoran007 made their first contribution in #247\n\n**Full changelog**: https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fcompare\u002Fv0.2.19...v0.2.20rc1\n","2026-03-22T15:24:33",{"id":218,"version":219,"summary_zh":220,"released_at":221},247770,"v0.2.20.dev3","This is a pre-release build for testing purposes.\n\n## New Features\n\n- **Native BERT\u002FXLMRoBERTa embedding** — load BERT-family embedding models (bge-m3, mxbai-embed) without mlx-embeddings fallback (#330 by @yes999zc)\n- **Jina v3 reranker** — reranking via `\u003C|score_token|>` logits for jinaai\u002Fjina-reranker-v3-mlx (#331 by @yes999zc)\n- **oQ3.5 quantization level** — base 3-bit + expert down_proj 4-bit (~3.9 bpw)\n- **oQ VLM support** — quantize vision-language models with vision weight preservation\n- **oQ FP8 model support** — allow native FP8 models (MiniMax, DeepSeek) as quantization source\n- **Partial mode** — assistant message prefill support for Moonshot\u002FKimi K2 models (`partial` field + `name` field passthrough) (#306 by @blightbow)\n- **Benchmark sample sizes** — add 500\u002F1000\u002F2000 sample options for MMLU and HellaSwag\n- **Benchmark comparison columns** — show mode\u002Fsample and full dataset size in comparison table\n- **Codex smart config merging** — non-destructive config merge with reasoning model auto-detection (#249 by @JasonYeYuhe)\n- **i18n normalization** — normalize translation files against en.json with missing key detection (#247 by @xiaoran007)\n- **Admin generating status** — show generating status for active requests after prefill completes\n\n## Bug Fixes\n\n- fix oQ bf16→fp16 weight conversion causing 41% quantized value corruption\n- fix oQ mxfp4 uint8 scales being force-cast to 
fp16\n- fix oQ clip optimization mask dtype and position_ids for Qwen3.5\n- fix think prefix false positive for disabled thinking patterns (`\u003Cthink>\u003C\u002Fthink>`)\n- fix responses API image support for VLM + missing prompt_tokens in completions usage\n- fix SSE streaming behind nginx reverse proxy (X-Accel-Buffering header) (#309)\n- fix CausalLM-based embedding model detection (Qwen3-Embedding) (#327)\n- fix admin unload tooltip clipping in active models box (#314)\n- fix admin 401 warning log spam from dashboard polling\n- fix admin model settings not showing for embedding\u002Freranker models\n- fix PEP 735 dependency-groups for `uv sync --dev` (#305 by @blightbow)\n\n## New Contributors\n\n- @blightbow made their first contribution in #305\n- @yes999zc made their first contribution in #330\n- @JasonYeYuhe made their first contribution in #249\n- @xiaoran007 made their first contribution in #247","2026-03-21T20:42:47",{"id":223,"version":224,"summary_zh":225,"released_at":226},247771,"v0.2.20.dev2","This is a pre-release build for testing purposes.\n\n## New Features\n\n- **Hybrid quantization modes** — per-layer mxfp4\u002Fmxfp8\u002Faffine format selection for better quality-size tradeoffs\n- **Clip optimization speedup** — GPU batch size setting for faster AWQ-style clipping\n- **Block inference during quantization** — prevents request conflicts while oQ is running\n- **Download raw results** — export benchmark results as JSON\n- **Use model sampling settings** — benchmarks now respect per-model sampling parameters\n\n## Bug Fixes\n\n- fix MC benchmarks (MMLU, HellaSwag, TruthfulQA) always scoring 0%","2026-03-21T06:15:31",{"id":228,"version":229,"summary_zh":230,"released_at":231},247772,"v0.2.20.dev1","This is a pre-release build for testing purposes. Detailed feature breakdowns will be included in the official release notes. Please test extensively and report any issues.\n\n## New Features\n\n- **oQ Quantization** — oMLX Universal Dynamic Quantization with oQ2-oQ8 levels, calibration datasets, and CLIP support\n- **Accuracy Benchmark** — evaluate model intelligence with MMLU, HellaSwag, TruthfulQA, GSM8K, and LiveCodeBench. all datasets bundled locally for offline use. card-style grid UI with per-benchmark sample size selector (30\u002F50\u002F100\u002F200\u002F300\u002FFull) and batch processing (1x\u002F2x\u002F4x\u002F8x)\n- **Benchmark Queue** — queue multiple models for sequential benchmarking. results persist on the server until explicitly cleared. comparison table in text export for easy cross-model analysis\n- **SpecPrefill** — attention-based sparse prefill for MoE models. 
reduces prefill compute by skipping low-attention tokens while preserving output quality\n- **Prefill Memory Guard** — prevents kernel panics on large context by detecting head_dim>128 O(n²) SDPA fallback and enforcing safe prefill chunk sizes\n- **MLX Only filter** in model downloader — toggle to show only MLX-converted models\n- **Admin override for OCR model sampling params** — prevent repetition loops on OCR models\n\n## Bug Fixes\n\n- fix anthropic API temperature default (should be None, not 1.0)\n","2026-03-20T18:36:26",{"id":233,"version":234,"summary_zh":235,"released_at":236},247773,"v0.2.19","> Download the DMG that matches your macOS version (sequoia or tahoe).\n> If you're on an M5 Mac, you must use the `macos26-tahoe` DMG for M5 Neural Accelerator.\n\n## Highlights\n\n- **Fix sustained GPU spike on idle** — removed the keepalive warmup loop that caused unnecessary GPU usage (#292)\n- **Fix Metal buffer cache race condition** — GPU sync before clearing Metal buffer cache (#300)\n- **Per-model OCR generation defaults** — OCR models now use official recommended params to prevent repetition loops (#279)\n\n## Improvements\n\n- temperature and repetition_penalty for OCR models can now be customized from the admin dashboard\n- `marked.parse()` output in admin dashboard sanitized with DOMPurify to prevent XSS\n\n## Bug Fixes\n\n- **fix** sustained GPU spike when server is idle due to keepalive warmup loop (#292)\n- **fix** intermittent crash from Metal buffer cache race condition (#300)\n- **fix** OCR model repetition loop from missing `repetition_penalty` and `max_tokens` cap (#279)\n- **fix** server crash on startup when configured model directory is inaccessible\n- **fix** update notifications showing pre-release (dev\u002Frc) versions","2026-03-18T17:28:27",{"id":238,"version":239,"summary_zh":240,"released_at":241},247774,"v0.2.19.dev2","### What's in this build\n\nSecond pre-release for the Metal GPU crash fix (#300, #173). Includes additional fixes on top of dev1.\n\n### Changes since v0.2.19.dev1\n\n- **Fix: synchronize GPU before clearing Metal buffer cache** — Adds `mx.metal.clear_cache()` with a preceding `mx.eval()` fence to ensure all in-flight GPU work completes before the Metal buffer pool is cleared. Prevents a race where buffer deallocation overlaps with active command buffers, which was another path to the `completeMemory() prepare count underflow` kernel panic. (#300)\n- **Fix: skip pre-release versions in update notifications** — Update checker no longer prompts users to \"upgrade\" to dev\u002Fpre-release builds.\n\n### Testing needed\n\nSame as dev1. If you were experiencing kernel panics on Tahoe (especially M4), please try this build and report back.","2026-03-18T15:07:08",{"id":243,"version":244,"summary_zh":245,"released_at":246},247775,"v0.2.19.dev1","### What's in this build\n\nThis is a pre-release to test the per-stream-lock fix for Metal GPU crashes on macOS Tahoe (#300, #173).\n\n### Changes since v0.2.18\n\n- **Build: bundle per-stream-lock patched libmlx.dylib** — Replaces stock MLX's libmlx.dylib with a patched version that adds per-stream mutex protection to Metal command buffer\u002Fencoder access. This addresses the `completeMemory() prepare count underflow` kernel panic on M4 and SIGSEGV\u002FSIGABRT crashes on M3 Ultra caused by unsynchronized concurrent GPU stream access. Based on ml-explore\u002Fmlx#3247 by @rsnow. 
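The dev2 note above describes fencing in-flight GPU work before clearing the Metal buffer pool. A minimal sketch of that ordering pattern, using the MLX calls named in the note (the surrounding request-handling code is assumed):

```python
# Minimal sketch of the "sync, then clear" pattern described above (illustrative only).
import mlx.core as mx

def finish_request(outputs):
    mx.eval(outputs)          # fence: ensure all in-flight GPU work on these arrays completes
    mx.metal.clear_cache()    # only then release cached Metal buffers
```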
(#300, #173)\n- **Fix: remove keepalive warmup loop that caused sustained GPU spike** (#292)\n\n### Testing needed\n\nIf you were experiencing kernel panics or server crashes on Tahoe, please try this build and report back. SSD cache enabled + heavy workload (e.g. Claude Code with tool calling) is the scenario most likely to trigger the original bug.","2026-03-18T09:36:40",{"id":248,"version":249,"summary_zh":250,"released_at":251},247776,"v0.2.18","> Download the DMG that matches your macOS version (sequoia or tahoe).\n> If you're on an M5 Mac, you must use the `macos26-tahoe` DMG for M5 Neural Accelerator.\n\n## Highlights: thinking budget support\n\n- **Thinking budget for reasoning models.** you can now limit how many tokens a model spends on reasoning. set it per-model in the admin panel or per-request via the API. when the budget is exceeded, thinking is force-closed and the model transitions to the actual response.\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjundot\u002Fomlx\u002Fmain\u002Fdocs\u002Fimages\u002Fomlx_thinking_budget1.png\" width=\"960\">\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjundot\u002Fomlx\u002Fmain\u002Fdocs\u002Fimages\u002Fomlx_thinking_budget2.png\" width=\"960\">\n\n## New Features\n\n### Thinking budget (#285)\n- per-model thinking budget toggle + token count in admin panel (advanced settings)\n- per-request `thinking_budget` parameter for OpenAI API, `thinking.budget_tokens` for Anthropic API\n- uses logits processor to force close-think sequence when budget exceeded (same approach as vLLM\u002FSGLang)\n- auto-detects the correct `\u003C\u002Fthink>` transition pattern from each model's chat template (handles Qwen3, DeepSeek, GLM, MiniMax, Step etc.)\n- suppresses duplicate `\u003C\u002Fthink>` tokens after forced close\n- zero overhead when budget is not set. near-zero overhead when active\n- works for both LLM and VLM. no impact on embedding, reranker, or any cache system\n\n## Bug Fixes\n\n- **fix** disable `mx.compile` on runtime failure to prevent repeated warnings on every subsequent call\n\n## Notes\n\n- **tip for Qwen3.5-35B-A3B users:** if reasoning (`enable_thinking`) is true, the model may emit EOS during tool calling and stop generation mid-turn. if you're using Qwen3.5 for agentic coding, go to model settings → Chat Template Kwargs, set `enable_thinking` to `false` and check force.\n\n**full changelog**: https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fcompare\u002Fv0.2.17...v0.2.18","2026-03-17T15:48:12",{"id":253,"version":254,"summary_zh":255,"released_at":256},247777,"v0.2.17","> Download the DMG that matches your macOS version (sequoia or tahoe).\n> If you're on an M5 Mac, you must use the `macos26-tahoe` DMG for M5 Neural Accelerator.\n\n## Hotfix (v0.2.17)\n\n- **fix** `\u002Fv1\u002Fembeddings` and `\u002Fv1\u002Frerank` crash — `mx.compile` on container-returning models (#282, #283)\n\n> v0.2.16 has been removed due to this bug. all v0.2.16 changes are included below.\n\n---\n\n## New Models\n\n- **Nemotron Super** hybrid architecture support — `layers_block_type` config, MoE latent projections (upstream mlx-lm [73c8550](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm\u002Fcommit\u002F73c8550))\n\n## Bug Fixes\n\n- **fix** GatedDelta\u002FSSM state precision (Qwen3.5, Qwen3-next, Kimi-Linear) — state buffers now use float32 instead of input dtype for numerical correctness. 
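For reference, a minimal request-side example of the per-request thinking budget described above, via the OpenAI-compatible endpoint. The `base_url`, `api_key`, and model name are placeholders for a local oMLX setup; on the Anthropic-compatible API the equivalent field is `thinking.budget_tokens`, as noted above.

```python
# Example: cap reasoning tokens per request (address and model name are placeholders).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-placeholder")

resp = client.chat.completions.create(
    model="your-reasoning-model",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    extra_body={"thinking_budget": 512},  # thinking is force-closed once the budget is spent
)
print(resp.choices[0].message.content)
```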
upstream fix ([735a43b](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm\u002Fcommit\u002F735a43b))\n\n  > oMLX produces token-identical output to mlx-lm's BatchGenerator regardless of SSD cache on\u002Foff. (verified on KVCache, ArraysCache, ArraysCache MoE models)\n\n- **fix** BatchRotatingKVCache lazy evaluation ordering — `left_padding`\u002F`offset` could hold stale values due to MLX deferred evaluation. upstream fix ([89c430a](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm\u002Fcommit\u002F89c430a)) adds `mx.depends()` to enforce correct order\n\n- **fix** SuScaledRoPE\u002FYarnRoPE mutating input arrays in-place — upstream fix ([2146e4e](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm\u002Fcommit\u002F2146e4e)) shallow-copies before modification\n\n- **fix** Qwen3-Coder tool parser crashing on Python-style quoted dicts — falls back to `ast.literal_eval` when `json.loads` fails (upstream [ed69f83](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-lm\u002Fcommit\u002Fed69f83))\n\n- **fix** `api_key` exposed in stats response and integration CLI commands (#256)\n\n- **fix** redundant `\u002Fadmin\u002Fapi\u002Flogin` calls on every stats poll — reuses admin session\n\n- **fix** sub-block cache (`\u003Cblock_size`) now surfaced in runtime cache observability instead of showing misleading `0 indexed blocks` (#256)\n\n## Dependency Updates\n\n- mlx-lm `4a21ffd` (0.31.1) → `564281f` (0.31.2)\n\n## New Contributors\n\n- @yes999zc made their first contribution in #256 and fixed the compile crash in #283\n\nThanks to @yes999zc for the contributions!\n\n**full changelog**: https:\u002F\u002Fgithub.com\u002Fjundot\u002Fomlx\u002Fcompare\u002Fv0.2.15...v0.2.17","2026-03-17T12:32:55"]
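As a small illustration of the Qwen3-Coder tool-parser fix above (fall back to `ast.literal_eval` when strict JSON parsing fails on Python-style quoted dicts), here is the general pattern; `parse_tool_args` is a hypothetical helper name, not oMLX's actual function.

```python
# General pattern: accept Python-style dict literals when strict JSON parsing fails.
import ast
import json

def parse_tool_args(raw: str):
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return ast.literal_eval(raw)   # handles {'path': 'a.py', 'force': True}

print(parse_tool_args('{"path": "a.py", "force": true}'))
print(parse_tool_args("{'path': 'a.py', 'force': True}"))
```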