[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-foldl--chatllm.cpp":3,"tool-foldl--chatllm.cpp":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":78,"owner_url":79,"languages":80,"stars":121,"forks":122,"last_commit_at":123,"license":124,"difficulty_score":10,"env_os":125,"env_gpu":126,"env_ram":127,"env_deps":128,"category_tags":132,"github_topics":133,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":136,"updated_at":137,"faqs":138,"releases":174},1302,"foldl\u002Fchatllm.cpp","chatllm.cpp","Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)","chatllm.cpp 是一个纯 C++ 编写的轻量级大模型推理引擎，能在本地 CPU 或 GPU 上实时运行从 1B 到 300B 参数的各类开源模型，并支持文字、语音、图像等多模态对话与 RAG 检索增强。它解决了云端服务延迟高、隐私泄露和费用不可控的问题，让你把大模型装进自己的电脑或服务器，随时离线聊天、写作、编程或做研究。  \n开发者可把它嵌入应用、做二次开发；研究人员能快速验证模型效果；对隐私敏感的设计师、写作者或普通用户也能零门槛体验高性能 AI。项目基于 ggml，量化加载、GPU 加速、分布式推理、工具调用等功能一应俱全，且持续更新，力求在准确率和速度上优于同类实现。","# ChatLLM.cpp\n\n[中文版](README_zh.md)\n\n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue)](LICENSE) 
[![CI](https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Factions\u002Fworkflows\u002Fbuild.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Factions\u002Fworkflows\u002Fbuild.yml)\n\n\u003Ca href='https:\u002F\u002Fko-fi.com\u002FH2H21TKE4P' target='_blank'>\u003Cimg height='36' style='border:0px;height:36px;' src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffoldl_chatllm.cpp_readme_f3d2dc6e08b2.png' border='0' alt='Buy Me a Coffee at ko-fi.com' \u002F>\u003C\u002Fa>\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffoldl_chatllm.cpp_readme_a8cc1fda0230.gif)\n\nInference of a bunch of models from less than 1B to more than 300B, for real-time [multimodal](.\u002Fdocs\u002Fmultimodal.md) chat with [RAG](.\u002Fdocs\u002Frag.md) on your computer (CPU & GPU),\npure C++ implementation based on [@ggerganov](https:\u002F\u002Fgithub.com\u002Fggerganov)'s [ggml](https:\u002F\u002Fgithub.com\u002Fggerganov\u002Fggml).\n\nDeliver accurate or better results than other implementations [o](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F15600#issuecomment-3400860774)-[c](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F13694#issuecomment-3454877635)-[c](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F3377#issuecomment-2198554173)-[a](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F8183#issuecomment-2198348578)-[s](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fpull\u002F13760#issuecomment-2998476325)-[i](https:\u002F\u002Fgithub.com\u002Flmstudio-ai\u002Flmstudio-bug-tracker\u002Fissues\u002F798#issue-3266514944)-onally.\n\n| [Supported Models](.\u002Fdocs\u002Fmodels.md) | [Download Quantized Models](.\u002Fdocs\u002Fquick_start.md#download-quantized-models) |\n\n```mermaid\ngraph TD;\nggml --> chatllm.cpp\nchatllm.cpp --> AlphaGeometryRE\nchatllm.cpp --> WritingTools\nchatllm.cpp --> 
LittleAcademia\nsubgraph coding[ ]\n    AlphaGeometryRE\n    WritingTools\n    LittleAcademia\nend\nggml[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fggml\"                     style=\"text-decoration:none;\">ggml\u003C\u002Fa>            \u003Cbr>\u003Cspan style=\"font-size:10px;\">Machine learning library\u003C\u002Fspan>];\nchatllm.cpp[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\"          style=\"text-decoration:none;\">chatllm.cpp\u003C\u002Fa>     \u003Cbr>\u003Cspan style=\"font-size:10px;\">LLM inference\u003C\u002Fspan>];\nAlphaGeometryRE[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002Falphageometryre\"  style=\"text-decoration:none;\">AlphaGeometryRE\u003C\u002Fa> \u003Cbr>\u003Cspan style=\"font-size:10px;\">AlphaGeometry Re-engineered\u003C\u002Fspan>];\nWritingTools[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002FWritingTools\"        style=\"text-decoration:none;\">Writing Tools\u003C\u002Fa>   \u003Cbr>\u003Cspan style=\"font-size:10px;\">AI aided writing\u003C\u002Fspan>];\nLittleAcademia[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002Flittle-academia\"   style=\"text-decoration:none;\">Little Academia\u003C\u002Fa> \u003Cbr>\u003Cspan style=\"font-size:10px;\">Learn programming\u003C\u002Fspan>];\n```\n\n**What's New:**\n\n* 2026-03-28: InternVL3.5\n* 2026-03-27: Qianfan-OCR\n* 2026-03-22: Penguin-VL\n* 2026-03-06: Qwen3.5\n* 2026-03-03: GLM-OCR\n* 2026-02-22: Youtu-VL\n* 2026-02-18: Youtu-LLM\n* 2026-02-16: Voice Clone with Qwen3-TTS\n* 2026-02-12: Qwen3-TTS\n* 2026-02-01: Qwen3-ForceAligner\n* 2026-01-31: Qwen3-ASR\n* 2026-01-21: Step3-VL\n* 2026-01-20: GLM-4.7-Flash\n* 2026-01-19: TranslateGemma\n* 2026-01-13: WeDLM\n* 2026-01-09: QWen3-VL-Embedding\u002FReranker\n* 2026-01-05: HY-MT\n* 2026-01-04: GLM-ASR-Nano\n* 2025-12-31: Qwen3-VL\n* 2025-12-24: GLM-4.6V-Flash\n* 2025-12-15: Rnj-1\n* 2025-12-08: Ministral-3\n* 2025-11-06: Maya1\n* 2025-11-03: Ouro\n* 
2025-10-10: [I can draw](.\u002Fdocs\u002Fmultimodal.md): Janus-Pro\n* 2025-06-21: [I can hear](.\u002Fdocs\u002Fmultimodal.md): Qwen2-Audio\n* 2025-05-23: [I can see](.\u002Fdocs\u002Fmultimodal.md): Fuyu\n* 2025-05-21: Re-quantization when loading (e.g. `--re-quantize q4_k`)\n* 2025-05-17: [I can speak](.\u002Fdocs\u002Fmultimodal.md): Orpheus-TTS\n* 2025-03-24: [GGMM](.\u002Fdocs\u002Fggmm.md) file format\n* 2025-02-21: [Distributed inference](.\u002Fdocs\u002Frpc.md)\n* 2025-02-10: [GPU acceleration](.\u002Fdocs\u002Fgpu.md) 🔥\n* 2024-12-09: [Reversed role](.\u002Fdocs\u002Ffun.md#reversed-role)\n* 2024-11-21: [Continued generation](.\u002Fdocs\u002Ffun.md#continued-generation)\n* 2024-11-01: [generation steering](.\u002Fdocs\u002Ffun.md#generation-steering)\n* 2024-06-15: [Tool calling](.\u002Fdocs\u002Ftool_calling.md)\n* 2024-05-29: [ggml](https:\u002F\u002Fgithub.com\u002Fggerganov\u002Fggml) is forked instead of submodule\n* 2024-05-14: [OpenAI API](.\u002Fdocs\u002Fbinding.md#openai-compatible-api), CodeGemma Base & Instruct supported\n* 2024-05-08: [Layer shuffling](.\u002Fdocs\u002Ffun.md#layer-shuffling)\n\n## Features\n\n* [x] Accelerated memory-efficient CPU\u002FGPU inference with int4\u002Fint8 quantization, optimized KV cache and parallel computing;\n* [x] Use OOP to address the similarities between different _Transformer_ based models;\n* [x] Streaming generation with typewriter effect;\n* [x] Continuous chatting (content length is virtually unlimited)\n\n    Two methods are available: _Restart_ and _Shift_. See `--extending` options.\n\n* [x] [Retrieval Augmented Generation](.\u002Fdocs\u002Frag.md) (RAG) 🔥\n\n* [x] [LoRA](.\u002Fdocs\u002Fmodels.md#lora-models);\n* [x] Python\u002FJavaScript\u002FC\u002FNim [Bindings](.\u002Fdocs\u002Fbinding.md), web demo, and more possibilities.\n\n## Quick Start\n\nAs simple as `main_nim -i -m :model_id`. 
[Check it out](.\u002Fdocs\u002Fquick_start.md).\n\n## Usage\n\n### Preparation\n\nClone the ChatLLM.cpp repository onto your local machine:\n\n```sh\ngit clone --recursive https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp.git && cd chatllm.cpp\n```\n\nIf you forgot the `--recursive` flag when cloning the repository, run the following command in the `chatllm.cpp` folder:\n\n```sh\ngit submodule update --init --recursive\n```\n\n### Quantize Model\n\n**Some quantized models can be downloaded [on demand](.\u002Fdocs\u002Fquick_start.md#download-quantized-models).**\n\nInstall dependencies of `convert.py`:\n\n```sh\npip install -r requirements.txt\n```\n\nUse `convert.py` to transform models into quantized GGML format. For example, to convert an _fp16_ base model to a q8_0 (quantized int8) GGML model, run:\n\n```sh\n# For models such as ChatLLM2-6B, InternLM, LlaMA, LlaMA-2, Baichuan-2, etc\npython convert.py -i path\u002Fto\u002Fmodel -t q8_0 -o quantized.bin --name ModelName\n\n# For some models such as CodeLlaMA, model type should be provided by `-a`\n# Find the `-a ...` option for each model in `docs\u002Fmodels.md`.\npython convert.py -i path\u002Fto\u002Fmodel -t q8_0 -o quantized.bin -a CodeLlaMA --name ModelName\n```\n\nUse `--name` to specify the model's name in English. 
Optionally, use `--native_name` to specify the model's name in another language.\nUse `-l` to specify the path of the LoRA model to be merged, such as:\n\n```sh\npython convert.py -i path\u002Fto\u002Fmodel -l path\u002Fto\u002Flora\u002Fmodel -o quantized.bin --name ModelName\n```\n\nNote: At present, only the HF format is supported (with a few exceptions); the format of the generated `.bin` files differs from the GGUF format used by `llama.cpp`.\n\n### Build\n\nTo build this project, you have several options.\n\n- Using `CMake`:\n\n  ```sh\n  cmake -B build\n  cmake --build build -j --config Release\n  ```\n\n  The executable is `.\u002Fbuild\u002Fbin\u002Fmain`.\n\n  There are lots of `GGML_...` options to play with. Example: Vulkan acceleration together with RPC and backend dynamic loading:\n\n  ```sh\n  cmake -B build -DGGML_VULKAN=1 -DGGML_RPC=1 -DGGML_CPU_ALL_VARIANTS=1 -DGGML_BACKEND_DL=1\n  ```\n\n### Run\n\nNow you may chat with a quantized model by running:\n\n```sh\n.\u002Fbuild\u002Fbin\u002Fmain -m llama2.bin  --seed 100                      # Llama-2-Chat-7B\n# Hello! I'm here to help you with any questions or concerns ....\n```\n\nTo run the model in interactive mode, add the `-i` flag. 
For example:\n\n```sh\n# On Windows\n.\\build\\bin\\Release\\main -m model.bin -i\n\n# On Linux (or WSL)\nrlwrap .\u002Fbuild\u002Fbin\u002Fmain -m model.bin -i\n```\n\nIn interactive mode, your chat history will serve as the context for the next-round conversation.\n\nRun `.\u002Fbuild\u002Fbin\u002Fmain -h` to explore more options!\n\n## Plan Nim\n\nAll Python scripts are going to be rewritten in [Nim](https:\u002F\u002Fnim-lang.org\u002F), with the following exceptions:\n\n* when `pickle` is used\n\n## Acknowledgements\n\n* This project started as a refactoring of [ChatGLM.cpp](https:\u002F\u002Fgithub.com\u002Fli-plus\u002Fchatglm.cpp), without which it would not have been possible.\n\n* Thanks to those who have released their model sources and checkpoints.\n\n* `chat_ui.html` is adapted from [Ollama-Chat](https:\u002F\u002Fgithub.com\u002FOft3r\u002FOllama-Chat).\n\n## Note\n\nThis is my hobby project for learning DL & GGML, and it is under active development. PRs for new features won't\nbe accepted, while PRs for bug fixes are warmly welcome.\n","# ChatLLM.cpp\n\n[中文版](README_zh.md)\n\n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue)](LICENSE) [![CI](https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Factions\u002Fworkflows\u002Fbuild.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Factions\u002Fworkflows\u002Fbuild.yml)\n\n\u003Ca href='https:\u002F\u002Fko-fi.com\u002FH2H21TKE4P' target='_blank'>\u003Cimg height='36' style='border:0px;height:36px;' src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffoldl_chatllm.cpp_readme_f3d2dc6e08b2.png' border='0' alt='Buy Me a Coffee at ko-fi.com' \u002F>\u003C\u002Fa>\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffoldl_chatllm.cpp_readme_a8cc1fda0230.gif)\n\n在您的计算机上（CPU与GPU），基于[@ggerganov](https:\u002F\u002Fgithub.com\u002Fggerganov) 的 
[ggml](https:\u002F\u002Fgithub.com\u002Fggerganov\u002Fggml)，以纯C++实现，可对从不到10亿参数到超过3000亿参数的一系列模型进行实时[多模态](.\u002Fdocs\u002Fmultimodal.md)聊天，并支持[RAG](.\u002Fdocs\u002Frag.md)。\n其结果准确度甚至优于其他实现[o](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F15600#issuecomment-3400860774)-[c](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F13694#issuecomment-3454877635)-[c](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F3377#issuecomment-2198554173)-[a](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fissues\u002F8183#issuecomment-2198348578)-[s](https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp\u002Fpull\u002F13760#issuecomment-2998476325)-[i](https:\u002F\u002Fgithub.com\u002Flmstudio-ai\u002Flmstudio-bug-tracker\u002Fissues\u002F798#issue-3266514944)-onally。\n\n| [支持的模型](.\u002Fdocs\u002Fmodels.md) | [下载量化模型](.\u002Fdocs\u002Fquick_start.md#download-quantized-models) |\n\n```mermaid\ngraph TD;\nggml --> chatllm.cpp\nchatllm.cpp --> AlphaGeometryRE\nchatllm.cpp --> WritingTools\nchatllm.cpp --> LittleAcademia\nsubgraph coding[ ]\n    AlphaGeometryRE\n    WritingTools\n    LittleAcademia\nend\nggml[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fggml-org\u002Fggml\"                     style=\"text-decoration:none;\">ggml\u003C\u002Fa>            \u003Cbr>\u003Cspan style=\"font-size:10px;\">机器学习库\u003C\u002Fspan>];\nchatllm.cpp[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\"          style=\"text-decoration:none;\">chatllm.cpp\u003C\u002Fa>     \u003Cbr>\u003Cspan style=\"font-size:10px;\">LLM推理\u003C\u002Fspan>];\nAlphaGeometryRE[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002Falphageometryre\"  style=\"text-decoration:none;\">AlphaGeometryRE\u003C\u002Fa> \u003Cbr>\u003Cspan style=\"font-size:10px;\">AlphaGeometry重新设计\u003C\u002Fspan>];\nWritingTools[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002FWritingTools\"        
style=\"text-decoration:none;\">写作工具\u003C\u002Fa>   \u003Cbr>\u003Cspan style=\"font-size:10px;\">AI辅助写作\u003C\u002Fspan>];\nLittleAcademia[\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffoldl\u002Flittle-academia\"   style=\"text-decoration:none;\">小学院\u003C\u002Fa> \u003Cbr>\u003Cspan style=\"font-size:10px;\">学习编程\u003C\u002Fspan>];\n```\n\n**最新更新：**\n\n* 2026年3月28日：InternVL3.5\n* 2026年3月27日：Qianfan-OCR\n* 2026年3月22日：Penguin-VL\n* 2026年3月6日：Qwen3.5\n* 2026年3月3日：GLM-OCR\n* 2026年2月22日：Youtu-VL\n* 2026年2月18日：Youtu-LLM\n* 2026年2月16日：使用Qwen3-TTS进行语音克隆\n* 2026年2月12日：Qwen3-TTS\n* 2026年2月1日：Qwen3-ForceAligner\n* 2026年1月31日：Qwen3-ASR\n* 2026年1月21日：Step3-VL\n* 2026年1月20日：GLM-4.7-Flash\n* 2026年1月19日：TranslateGemma\n* 2026年1月13日：WeDLM\n* 2026年1月9日：QWen3-VL-嵌入\u002F重排序器\n* 2026年1月5日：HY-MT\n* 2026年1月4日：GLM-ASR-Nano\n* 2025年12月31日：Qwen3-VL\n* 2025年12月24日：GLM-4.6V-Flash\n* 2025年12月15日：Rnj-1\n* 2025年12月8日：Ministral-3\n* 2025年11月6日：Maya1\n* 2025年11月3日：Ouro\n* 2025年10月10日：[我会画画](.\u002Fdocs\u002Fmultimodal.md)：Janus-Pro\n* 2025年6月21日：[我能听](.\u002Fdocs\u002Fmultimodal.md)：Qwen2-Audio\n* 2025年5月23日：[我能看](.\u002Fdocs\u002Fmultimodal.md)：Fuyu\n* 2025年5月21日：加载时重新量化（例如`--re-quantize q4_k`）\n* 2025年5月17日：[我能说话](.\u002Fdocs\u002Fmultimodal.md)：Orpheus-TTS\n* 2025年3月24日：[GGMM](.\u002Fdocs\u002Fggmm.md)文件格式\n* 2025年2月21日：[分布式推理](.\u002Fdocs\u002Frpc.md)\n* 2025年2月10日：[GPU加速](.\u002Fdocs\u002Fgpu.md) 🔥\n* 2024年12月9日：[角色反转](.\u002Fdocs\u002Ffun.md#reversed-role)\n* 2024年11月21日：[持续生成](.\u002Fdocs\u002Ffun.md#continued-generation)\n* 2024年11月1日：[生成引导](.\u002Fdocs\u002Ffun.md#generation-steering)\n* 2024年6月15日：[工具调用](.\u002Fdocs\u002Ftool_calling.md)\n* 2024年5月29日：[ggml](https:\u002F\u002Fgithub.com\u002Fggerganov\u002Fggml)被作为独立项目而非子模块\n* 2024年5月14日：[OpenAI API](.\u002Fdocs\u002Fbinding.md#openai-compatible-api)，支持CodeGemma Base与Instruct\n* 2024年5月8日：[层混洗](.\u002Fdocs\u002Ffun.md#layer-shuffling)\n\n## 特性\n\n* [x] 通过int4\u002Fint8量化、优化KV缓存和并行计算，实现加速且内存高效的CPU\u002FGPU推理；\n* [x] 
使用面向对象编程处理不同基于 _Transformer_ 的模型之间的相似性；\n* [x] 支持打字机效果的流式生成；\n* [x] 持续聊天（内容长度几乎无限制）\n\n    提供两种方式：_重启（Restart）_与_移位（Shift）_。详见`--extending`选项。\n\n* [x] [检索增强生成](.\u002Fdocs\u002Frag.md) (RAG) 🔥\n\n* [x] [LoRA](.\u002Fdocs\u002Fmodels.md#lora-models);\n* [x] 支持Python\u002FJavaScript\u002FC\u002FNim [绑定](.\u002Fdocs\u002Fbinding.md)，提供Web演示及更多可能性。\n\n## 快速入门\n\n简单如`main_nim -i -m :model_id`。[请查看](.\u002Fdocs\u002Fquick_start.md)。\n\n## 使用方法\n\n### 准备工作\n\n将ChatLLM.cpp仓库克隆到本地：\n\n```sh\ngit clone --recursive https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp.git && cd chatllm.cpp\n```\n\n如果在克隆仓库时忘记添加`--recursive`标志，请在`chatllm.cpp`文件夹中运行以下命令：\n\n```sh\ngit submodule update --init --recursive\n```\n\n### 量化模型\n\n**部分量化模型可[按需下载](.\u002Fdocs\u002Fquick_start.md#download-quantized-models)。**\n\n安装`convert.py`的依赖项：\n\n```sh\npip install -r requirements.txt\n```\n\n使用`convert.py`将模型转换为量化GGML格式。例如，要将_fp16_基础模型转换为q8_0（量化int8）GGML模型，运行：\n\n```sh\n# 对于ChatLLM2-6B、InternLM、LlaMA、LlaMA-2、Baichuan-2等模型\npython convert.py -i path\u002Fto\u002Fmodel -t q8_0 -o quantized.bin --name ModelName\n\n# 对于CodeLlaMA等部分模型，需要通过`-a`指定模型类型\n# 各模型的`-a ...`选项可在`docs\u002Fmodels.md`中找到。\npython convert.py -i path\u002Fto\u002Fmodel -t q8_0 -o quantized.bin -a CodeLlaMA --name ModelName\n```\n\n使用`--name`指定模型的英文名称。也可选使用`--native_name`指定其他语言的名称。\n使用`-l`指定要合并的LoRA模型路径，例如：\n\n```sh\npython convert.py -i path\u002Fto\u002Fmodel -l path\u002Fto\u002Flora\u002Fmodel -o quantized.bin --name ModelName\n```\n\n注意：目前仅支持HF格式（少数例外）；生成的`.bin`文件格式与`llama.cpp`使用的GGUF格式不同。\n\n### 构建\n\n构建该项目有多种选择。\n\n- 使用`CMake`：\n\n  ```sh\n  cmake -B build\n  cmake --build build -j --config Release\n  ```\n\n  可执行文件为`.\u002Fbuild\u002Fbin\u002Fmain`。\n\n  有许多`GGML_...`选项可供调整。例如，结合Vulkan加速、RPC以及后端动态加载：\n\n  ```sh\n  cmake -B build -DGGML_VULKAN=1 -DGGML_RPC=1 -DGGML_CPU_ALL_VARIANTS=1 -DGGML_BACKEND_DL=1\n  ```\n\n### 运行\n\n现在，您可以通过运行以下命令与量化后的模型进行对话：\n\n```sh\n.\u002Fbuild\u002Fbin\u002Fmain -m llama2.bin  --seed 100                
      # Llama-2-Chat-7B\n# 你好！我在这里帮助您解答任何问题或疑虑……\n```\n\n若要以交互模式运行模型，只需添加 `-i` 标志。例如：\n\n```sh\n# 在 Windows 上\n.\\build\\bin\\Release\\main -m model.bin -i\n\n# 在 Linux（或 WSL）上\nrlwrap .\u002Fbuild\u002Fbin\u002Fmain -m model.bin -i\n```\n\n在交互模式下，您的聊天记录将作为下一轮对话的上下文。\n\n运行 `.\u002Fbuild\u002Fbin\u002Fmain -h` 可查看更多选项！\n\n## 计划 Nim\n\n所有 Python 脚本都将用 [Nim](https:\u002F\u002Fnim-lang.org\u002F) 重写，但以下情况除外：\n\n* 使用 `pickle` 时\n\n## 致谢\n\n* 本项目始于对 [ChatGLM.cpp](https:\u002F\u002Fgithub.com\u002Fli-plus\u002Fchatglm.cpp) 的重构，若没有它，本项目根本无法实现。\n\n* 感谢那些公开了模型源代码和检查点的开发者。\n\n* `chat_ui.html` 基于 [Ollama-Chat](https:\u002F\u002Fgithub.com\u002FOft3r\u002FOllama-Chat) 改编而来。\n\n## 注意\n\n本项目是我学习深度学习与 GGML 的业余爱好项目，目前仍在积极开发中。我们暂不接受功能相关的 Pull Request，但非常欢迎针对 Bug 的修复 Pull Request。","# chatllm.cpp 快速上手指南\n\n## 环境准备\n- **系统**：Linux \u002F macOS \u002F Windows（WSL 亦可）\n- **编译工具**：CMake ≥ 3.15、C++17 编译器（GCC\u002FClang\u002FMSVC）\n- **Python**：≥ 3.8（仅用于模型量化）\n- **可选**：Vulkan SDK（GPU 加速）、NVIDIA CUDA（≥ 11.3）\n\n## 安装步骤\n\n1. 克隆源码（含子模块）\n   ```sh\n   git clone --recursive https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp.git\n   cd chatllm.cpp\n   # 若忘记 --recursive，补执行：\n   # git submodule update --init --recursive\n   ```\n\n2. 安装 Python 依赖（量化用）\n   ```sh\n   pip install -r requirements.txt\n   ```\n\n3. 编译\n   ```sh\n   cmake -B build\n   cmake --build build -j --config Release\n   # 可执行文件：.\u002Fbuild\u002Fbin\u002Fmain\n   ```\n\n   GPU 加速示例（Vulkan + RPC）\n   ```sh\n   cmake -B build -DGGML_VULKAN=1 -DGGML_RPC=1 -DGGML_CPU_ALL_VARIANTS=1 -DGGML_BACKEND_DL=1\n   cmake --build build -j --config Release\n   ```\n\n## 基本使用\n\n1. **下载量化模型**（推荐直接获取社区已量化好的 `.bin`）\n   - 参考 [docs\u002Fquick_start.md#download-quantized-models](https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fblob\u002Fmain\u002Fdocs\u002Fquick_start.md#download-quantized-models)\n\n2. 
**自行量化模型**\n   ```sh\n   # 通用示例\n   python convert.py -i path\u002Fto\u002Fmodel -t q8_0 -o model.bin --name MyModel\n\n   # 带 LoRA\n   python convert.py -i path\u002Fto\u002Fmodel -l path\u002Fto\u002Flora -o model.bin --name MyModel\n   ```\n\n3. **运行对话**\n   ```sh\n   # 非交互\n   .\u002Fbuild\u002Fbin\u002Fmain -m model.bin --seed 42\n\n   # 交互模式\n   .\u002Fbuild\u002Fbin\u002Fmain -m model.bin -i\n   # Windows PowerShell\n   .\\build\\bin\\Release\\main.exe -m model.bin -i\n   ```\n\n4. **一键体验（Nim 版）**\n   ```sh\n   main_nim -i -m :qwen2-7b-chat-q4_k\n   ```\n\n至此即可在本地与 1B-300B 任意支持模型实时对话。更多参数请运行 `.\u002Fbuild\u002Fbin\u002Fmain -h`。","一家 5 人规模的独立游戏工作室正在为 Steam 新作做本地化 QA，需要在 3 天内把 2 万条英文对白批量翻译成中文并做语境校验，预算有限，无法调用云端大模型。\n\n### 没有 chatllm.cpp 时\n- 每人手动跑在线翻译 API，按 0.002 美元\u002F1K tokens 计算，2 万条对白约 40 万 tokens，光翻译费就 80 美元，超出预算。  \n- 网络延迟导致每条对白平均 2-3 秒才能返回，40 万 tokens 需 3-4 小时纯等待，QA 节奏被打断。  \n- 云端模型对游戏专有名词（NPC 名字、技能术语）识别不准，需人工二次校对，额外增加 1 天工作量。  \n- 多人共用同一 API key，触发并发限制，频繁 429 报错，进度被迫分批进行。  \n\n### 使用 chatllm.cpp 后\n- 本地 8 代 i7 + RTX 3060 部署 Qwen3-14B-q4_k 量化模型，0 元调用成本，预算全部留给美术。  \n- 纯 C++ 推理 + GPU 加速，单条对白 150-200 ms 出结果，2 万条 40 分钟跑完，QA 流程无缝衔接。  \n- 提前把游戏词典（角色名、技能、道具）写进 RAG 知识库，chatllm.cpp 在翻译时自动引用，专有名词准确率从 70% 提升到 95%，省去二次校对。  \n- 本地推理无并发限制，5 台开发机同时跑脚本，互不干扰，3 天任务 1 天完成。  \n\nchatllm.cpp 
让小型团队在本地就能享受云端级大模型能力，省钱、省时、保隐私。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffoldl_chatllm.cpp_2bf0b979.png","foldl","Judd","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffoldl_57988608.png",null,"https:\u002F\u002Fgithub.com\u002Ffoldl",[81,85,89,93,97,101,105,109,113,117],{"name":82,"color":83,"percentage":84},"C++","#f34b7d",59,{"name":86,"color":87,"percentage":88},"C","#555555",19.1,{"name":90,"color":91,"percentage":92},"Cuda","#3A4E3A",9.4,{"name":94,"color":95,"percentage":96},"Python","#3572A5",3.7,{"name":98,"color":99,"percentage":100},"Metal","#8f14e9",2.6,{"name":102,"color":103,"percentage":104},"GLSL","#5686a5",1.8,{"name":106,"color":107,"percentage":108},"CMake","#DA3434",1.1,{"name":110,"color":111,"percentage":112},"WGSL","#1a5e9a",1,{"name":114,"color":115,"percentage":116},"Objective-C","#438eff",0.6,{"name":118,"color":119,"percentage":120},"Go Template","#00ADD8",0.5,854,64,"2026-04-03T09:23:25","MIT","Linux, Windows","可选；支持 Vulkan 加速，未指定显卡型号\u002F显存\u002FCUDA 版本","未说明",{"notes":129,"python":127,"dependencies":130},"需安装 CMake 构建；支持 CPU 与 GPU（Vulkan）推理；模型需先量化为 GGML 格式；首次使用需 git clone --recursive 获取子模块",[131],"requirements.txt 中列出的 Python 依赖",[13,26],[134,135],"llm","llm-inference","2026-03-27T02:49:30.150509","2026-04-06T06:56:29.280080",[139,144,149,154,159,164,169],{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},5963,"转换模型时出现“unknown name”异常怎么办？","该问题通常由 SNAC 模型文件版本不一致或 PyTorch 版本差异引起。请确认：\n1. 使用 mlx-community 提供的 SNAC 模型（而非作者原始仓库）。\n2. 检查 PyTorch 版本是否与项目要求一致（可尝试升级或降级）。\n3. 
若仍报错，删除 `generation_config.json` 后重新执行转换命令。","https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fissues\u002F62",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},5964,"如何启用 GPU 加速运行模型？","在启动命令中加入 `-ngl 100,prolog,epilog` 即可将整个模型加载到 GPU：\n```bash\n.\u002Fmain -m model.bin -ngl 100,prolog,epilog\n```\n若显存不足，可通过 `-l 2048`（即 `--max_length`）减少最大长度以降低显存占用。","https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fissues\u002F13",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},5965,"Windows 11 下运行 chatllm.py 报错怎么办？","常见原因及解决：\n1. 内存不足：添加 `--max_length 8000` 限制上下文长度。\n2. 线程崩溃：检查 Python 绑定路径是否正确，确保 `chatllm.dll` 与 Python 位数一致（均为 64 位）。\n3. 若批量 OCR 时崩溃，建议逐张处理并捕获异常，避免多线程冲突。","https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fissues\u002F104",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},5966,"转换 reranker 模型时报 `TypeError: 'type' object is not subscriptable` 如何解决？","该错误由 Python 版本过低（\u003C3.9）导致类型注解不兼容。请升级至 Python 3.9+ 后重新运行：\n```bash\npython3 convert.py -i .\u002Fbce-reranker-base_v1\u002F -t q8_0 -o quantized.bin\n```\n项目已修复此问题，拉取最新代码即可。","https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fissues\u002F35",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},5967,"bge-reranker 推理速度极慢如何优化？","CPU 上 686 tokens 需 6 秒属正常，建议启用 GPU 加速：\n1. 参考 [gpu.md](https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fblob\u002Fmaster\u002Fdocs\u002Fgpu.md) 编译 CUDA 版本。\n2. 
使用 `-ngl 99` 将计算图迁移至 GPU，实测 RTX 4080 可提速 10 倍以上。","https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fissues\u002F24",{"id":165,"question_zh":166,"answer_zh":167,"source_url":168},5968,"转换 Megrez 模型后启动报错，提示 Python 版本问题？","删除模型目录下的 `generation_config.json` 后重新转换即可解决：\n```bash\nrm generation_config.json\npython convert.py -i .\u002Fmegrez\u002F -t q8_0 -o megrez.bin\n```\n此文件可能导致加载配置冲突，与 Python 版本无关。","https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fissues\u002F45",{"id":170,"question_zh":171,"answer_zh":172,"source_url":173},5969,"如何将 chatllm.cpp 作为服务器运行并通过 Python 调用？","1. 启动服务器：\n```bash\n.\u002Fserver -m model.bin --host 0.0.0.0 --port 8080\n```\n2. Python 客户端示例：\n```python\nimport requests\nresponse = requests.post('http:\u002F\u002Flocalhost:8080\u002Fv1\u002Fchat\u002Fcompletions', json={\n    \"messages\": [{\"role\": \"user\", \"content\": \"你好\"}]\n})\nprint(response.json()['choices'][0]['message']['content'])\n```\n详细文档见 [binding.md](https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fblob\u002Fmaster\u002Fdocs\u002Fbinding.md#web-demo)。","https:\u002F\u002Fgithub.com\u002Ffoldl\u002Fchatllm.cpp\u002Fissues\u002F83",[175,180,185,190,195,200,205,210,215,220,225,230,235,239,244,248,253,258,263,268],{"id":176,"version":177,"summary_zh":178,"released_at":179},105575,"v0.22","## New Models\r\n\r\n* **Qianfan-OCR**\r\n* Penguin-VL\r\n\r\n## Bug fixing\r\n\r\n* A nearly one-year old bug (Issue #65)","2026-03-27T10:34:41",{"id":181,"version":182,"summary_zh":183,"released_at":184},105576,"v0.21","New models:\r\n* Qwen3.5\r\n* GLM-OCR\r\n* Youtu-VL\r\n* Youtu-LLM\r\n* Qwen3-TTS (voice clone only support xvec)","2026-03-10T10:45:03",{"id":186,"version":187,"summary_zh":188,"released_at":189},105577,"v0.20","Support new models:\r\n* QWen3-ASR\r\n* QWen3-ForcedAligner","2026-02-10T02:03:59",{"id":191,"version":192,"summary_zh":193,"released_at":194},105578,"v0.19","* As always, more models are supported. 
Note that most of the new models are special in some way.\r\n\r\n    - Step3-VL: strong **vision** capability\r\n    - GLM-4.7-Flash: strong **coding** capability\r\n    - TranslateGemma: **translation**\r\n    - WeDLM: **diffusion** with AR\r\n    - QWen3-VL-Embedding\u002FReranker: **multimodal** embedding\r\n    - HY-MT: **translation**\r\n    - GLM-ASR-Nano: **ASR**\r\n    - Qwen3-VL: strong vision capability\r\n","2026-01-22T09:45:22",{"id":196,"version":197,"summary_zh":198,"released_at":199},105579,"v0.18","* As always, more models are supported.\r\n* Windows: prebuilt binary with Vulkan (1.4.335.0). Use `-ngl all` to run whole model on default GPU.\r\n* New `server.exe` with built-in llama.cpp WebUI\r\n\r\n\u003Cimg width=\"750\" height=\"825\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F5c045f49-3286-45fe-b1e2-17f9e1933020\" \u002F>\r\n","2025-12-27T02:02:19",{"id":201,"version":202,"summary_zh":203,"released_at":204},105580,"v0.17","* As always, more models are supported, notably LLaDA2.0.\r\n* Windows: prebuilt binary with Vulkan (1.4.321.1). Use `-ngl all` to run whole model on default GPU.","2025-10-27T02:04:11",{"id":206,"version":207,"summary_zh":208,"released_at":209},105581,"v0.16","* As always, more models are supported, notably Janus-Pro.\r\n* Windows: prebuilt binary with Vulkan (1.4.321.1). Use `-ngl all` to run whole model on default GPU.","2025-10-13T11:15:30",{"id":211,"version":212,"summary_zh":213,"released_at":214},105582,"v0.15","* As always, more models are supported.\r\n* Windows: prebuilt binary with Vulkan (1.4.321.1). 
Use `-ngl all` to run whole model on default GPU.","2025-09-07T10:47:27",{"id":216,"version":217,"summary_zh":218,"released_at":219},105583,"v0.14","* Fix `main_nim.exe`: could not download models that are > 2GB due to [this](https:\u002F\u002Fgithub.com\u002Fnim-lang\u002FNim\u002Fpull\u002F25105).","2025-08-18T08:32:09",{"id":221,"version":222,"summary_zh":223,"released_at":224},105584,"v0.13","* As always, more models are supported.\r\n* Windows: prebuilt binary with Vulkan (1.4.321.1). Use `-ngl all` to run whole model on default GPU.\r\n\r\nUpdate 2025-08-16: chatllm_win_x64.7z updated due to outdated `main.exe`.","2025-08-15T13:50:55",{"id":226,"version":227,"summary_zh":228,"released_at":229},105585,"v0.12","* As always, more models are supported.\r\n* Multimodal: vision & TTS.\r\n* Windows: prebuilt binary with Vulkan (1.4.304.1). Use `-ngl all` to run whole model on default GPU.","2025-06-12T10:59:29",{"id":231,"version":232,"summary_zh":233,"released_at":234},105586,"v0.11","* As always, more models are supported;\r\n* Windows: prebuilt binary with Vulkan (1.4.304.1). Use `-ngl all` to run whole model on default GPU.\r\n","2025-05-11T23:57:33",{"id":236,"version":237,"summary_zh":233,"released_at":238},105587,"v0.10","2025-04-20T23:29:14",{"id":240,"version":241,"summary_zh":242,"released_at":243},105588,"v0.9","* as always, more models are supported;\r\n* Windows: prebuilt binary with Vulkan. Use `-ngl all` to run whole model on default GPU.\r\n\r\nNote: Vulkan Runtime (included in SDK 1.4.304.1) is needed. 
Download it from here: https:\u002F\u002Fvulkan.lunarg.com\u002Fsdk\u002Fhome#windows","2025-03-14T22:52:06",{"id":245,"version":246,"summary_zh":242,"released_at":247},105589,"v0.8","2025-02-19T10:25:45",{"id":249,"version":250,"summary_zh":251,"released_at":252},105590,"v0.7","* As always, more models are supported.\r\n* Fix some issues (Llama3.2, etc).","2024-12-18T10:44:12",{"id":254,"version":255,"summary_zh":256,"released_at":257},105591,"v0.6","1. more models.\r\n2. fix RAG issues.\r\n3. misc. updates.","2024-11-29T10:34:19",{"id":259,"version":260,"summary_zh":261,"released_at":262},105592,"v0.5","* More models as always.\r\n* `main.exe` built from `main.nim`","2024-11-26T04:30:58",{"id":264,"version":265,"summary_zh":266,"released_at":267},105593,"v0.4","More models, of course.","2024-08-28T23:38:53",{"id":269,"version":270,"summary_zh":271,"released_at":272},105594,"v0.3","Support more models.","2024-07-06T07:48:02"]