[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-TinyLLaVA--TinyLLaVA_Factory":3,"tool-TinyLLaVA--TinyLLaVA_Factory":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",153609,2,"2026-04-13T11:34:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":73,"owner_company":73,"owner_location":73,"owner_email":73,"owner_twitter":73,"owner_website":73,"owner_url":75,"languages":76,"stars":85,"forks":86,"last_commit_at":87,"license":88,"difficulty_score":10,"env_os":89,"env_gpu":90,"env_ram":91,"env_deps":92,"category_tags":101,"github_topics":103,"view_count":32,"oss_zip_url":73,"oss_zip_packed_at":73,"status":17,"created_at":110,"updated_at":111,"faqs":112,"releases":147},7146,"TinyLLaVA\u002FTinyLLaVA_Factory","TinyLLaVA_Factory","A Framework of Small-scale Large Multimodal Models","TinyLLaVA_Factory 是一个专为构建小规模大型多模态模型（LMM）设计的开源框架。它旨在解决当前多模态模型开发中代码复杂、复现困难以及定制门槛高的问题，让开发者能够以更少的代码量和更低的出错风险，快速搭建并训练属于自己的多模态 AI 模型。\n\n该工具非常适合人工智能研究人员、算法工程师以及对多模态技术感兴趣的开发者使用。其核心优势在于高度模块化的架构设计，用户像搭积木一样灵活组合不同的组件：支持 OpenELM、Phi、Qwen 等多种轻量级大语言模型，兼容 CLIP、SigLIP 等视觉编码器，并提供 MLP、Qformer 等多种连接器选项。此外，它还集成了 LoRA、QLoRA 等前沿微调策略，显著降低了训练资源需求。\n\n值得一提的是，TinyLLaVA_Factory 在性能表现上颇具竞争力，其最佳模型仅在 3.1B 参数量级下，整体效果便超越了部分 7B 
规模的同类模型。","\u003Ch2 align=\"center\"> \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14289\">TinyLLaVA Factory\u003C\u002Fa>\u003Ch5 align=\"center\">\n\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-%20Open%20In%20HF-blue.svg)](https:\u002F\u002Fhuggingface.co\u002Ftinyllava) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2402.14289-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14289) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2405.11788-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11788)[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-yellow)](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fblob\u002Fmain\u002FLICENSE) [![Doc](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDoc-Document-logo=read%20the%20docs&logoColor=white&label=Doc)](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002F) [![Demo](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-Demo-red.svg)](http:\u002F\u002F8843843nmph5.vicp.fun\u002F#\u002F)\n\n![architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTinyLLaVA_TinyLLaVA_Factory_readme_9a7fcc50ef98.jpg)\n\n## &#x1F389; News\n* **[2025.01]**  Our new work [TinyLLaVA-Video](https:\u002F\u002Fgithub.com\u002FZhangXJ199\u002FTinyLLaVA-Video) is released.\n* **[2024.08.13]**  A simple [visualization tool](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Ftree\u002Fmain\u002Ftinyllava_visualizer) for interpreting the predictions of TinyLLaVA is added.\n* **[2024.05.21]**  Our paper: [TinyLLaVA Factory: A Modularized Codebase for Small-scale Large Multimodal Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11788) is released!\n* **[2024.05.15]** [TinyLLaVA Factory](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory), 
our new codebase, is released!  **Note that the old codebase, TinyLLaVABench, has been moved to the [tinyllava_bench](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Ftree\u002Ftinyllava_bench) branch.**\n* **[2024.05.04]**  [TinyLLaVA Demo](http:\u002F\u002F8843843nmph5.vicp.fun\u002F#\u002F) is released! (The password to access our demo is '1234'.)\n* **[2024.02.21]**  Our paper: [TinyLLaVA: A Framework of Small-scale Large Multimodal Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14289) is released!\n\n## &#x1F525; Takeaways\n- Our best model, [TinyLLaVA-Phi-2-SigLIP-3.1B](https:\u002F\u002Fhuggingface.co\u002Ftinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B), achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.\n\n- TinyLLaVA Factory is an open-source modular codebase for small-scale large multimodal models (LMMs), implemented in PyTorch and HuggingFace, with a focus on simplicity of code implementations, extensibility of new features, and reproducibility of training results.\n\n- With TinyLLaVA Factory, you can customize your own large multimodal models with less coding effort and fewer coding mistakes.\n\n- TinyLLaVA Factory integrates a suite of cutting-edge models and methods. \n\n  - LLM currently supports **OpenELM**, **TinyLlama**, **StableLM**, **Qwen**, **Gemma**, and **Phi**. \n\n  - Vision tower currently supports **CLIP**, **SigLIP**, **Dino**, and a **combination of CLIP and Dino**.\n    \n  - Connector currently supports **MLP**, **Qformer**, and **Resampler**.\n    \n  - Training Recipe currently supports **Frozen\u002FFully\u002FPartially tuning** and **LoRA\u002FQLoRA tuning**.\n\n## Contents\n\n- [🎉 News](#-news)\n- [🔥 Takeaways](#-takeaways)\n- [Contents](#contents)\n- [Installation and Requirements](#installation-and-requirements)\n    - [Upgrade to the latest code base](#upgrade-to-the-latest-code-base)\n- [Get Started](#get-started)\n    - [1. 
Data Preparation](#1-data-preparation)\n    - [2. Train](#2-train)\n    - [3. Evaluation](#3-evaluation)\n- [Model Zoo](#model-zoo)\n  - [Trained Models](#trained-models)\n    - [Model Performance](#model-performance)\n  - [Legacy Models](#legacy-models)\n- [Launch Demo Locally](#launch-demo-locally)\n  - [Gradio Web Demo](#gradio-web-demo)\n  - [CLI Inference](#cli-inference)\n  - [Quick Inference Scripts](#quick-inference-scripts)\n- [Custom Finetune](#custom-finetune)\n- [Customize Your Own Large Multimodel Models](#customize-your-own-large-multimodel-models)\n  - [LLM](#llm)\n  - [Vision Tower](#vision-tower)\n  - [Connector](#connector)\n- [Acknowledgement](#acknowledgement)\n- [Contact](#contact)\n- [✏ Citation](#-citation)\n- [❤️ Community efforts](#️-community-efforts)\n\n\n## Installation and Requirements\n\nPlease note that our environment requirements are different from LLaVA's. We strongly recommend you create the environment from scratch as follows.\n\n1. Clone this repository and navigate to the folder\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory.git\ncd TinyLLaVA_Factory\n```\n\n2. Create a conda environment, activate it, and install packages\n```Shell\nconda create -n tinyllava_factory python=3.10 -y\nconda activate tinyllava_factory\npip install --upgrade pip  # enable PEP 660 support\npip install -e .\n```\n\n3. Install additional packages\n```Shell\npip install flash-attn==2.5.7 --no-build-isolation\n```\n#### Upgrade to the latest code base\n\n```Shell\ngit pull\npip install -e .\n```\n\n## Get Started\n\n#### 1. Data Preparation\n\nPlease refer to the [Data Preparation](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002FPrepare%20Datasets.html) section in our [Documentation](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002F).\n\n#### 2. 
Train\n\nHere's an example of training an LMM using Phi-2.\n\n- Replace data paths with yours in `scripts\u002Ftrain\u002Ftrain_phi.sh`\n- Replace `output_dir` with yours in `scripts\u002Ftrain\u002Fpretrain.sh`\n- Replace `pretrained_model_path` and `output_dir` with yours in `scripts\u002Ftrain\u002Ffinetune.sh`\n- Adjust your GPU ids (localhost) and `per_device_train_batch_size` in `scripts\u002Ftrain\u002Fpretrain.sh` and `scripts\u002Ftrain\u002Ffinetune.sh`\n\n```bash\nbash scripts\u002Ftrain\u002Ftrain_phi.sh\n```\n\nImportant hyperparameters used in pretraining and finetuning are provided below.\n\n| Training Stage | Global Batch Size | Learning rate | conv_version |\n| -------------- | :---------------: | :-----------: | :----------: |\n| Pretraining    | 256               | 1e-3          | pretrain     |\n| Finetuning     | 128               | 2e-5          | phi          |\n\n**Tips:** \n\nGlobal Batch Size = number of GPUs * `per_device_train_batch_size` * `gradient_accumulation_steps`. We recommend keeping the global batch size and learning rate as above, except when LoRA-tuning your model.\n\n`conv_version` is a hyperparameter used for choosing different chat templates for different LLMs. In the pretraining stage, `conv_version` is the same for all LLMs, using `pretrain`. In the finetuning stage, we use\n\n`phi` for Phi-2, StableLM, Qwen-1.5\n\n`llama` for TinyLlama, OpenELM\n\n`gemma` for Gemma\n\n#### 3. 
Evaluation\n\nPlease refer to the [Evaluation](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002FEvaluation.html) section in our [Documentation](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002FEvaluation.html).\n\n## Model Zoo\n\n### Trained Models\n\nThe following models were trained using TinyLLaVA Factory.\n\n- [TinyLLaVA-Phi-2-SigLIP-3.1B](https:\u002F\u002Fhuggingface.co\u002Ftinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B)\n- [TinyLLaVA-Gemma-SigLIP-2.4B](https:\u002F\u002Fhuggingface.co\u002Ftinyllava\u002FTinyLLaVA-Gemma-SigLIP-2.4B)\n- [TinyLLaVA-OpenELM-450M-SigLIP-0.89B](https:\u002F\u002Fhuggingface.co\u002Fjiajunlong\u002FTinyLLaVA-0.89B)\n- [TinyLLaVA-Qwen2-0.5B-SigLIP](https:\u002F\u002Fhuggingface.co\u002FZhang199\u002FTinyLLaVA-Qwen2-0.5B-SigLIP)\n- [TinyLLaVA-Qwen2.5-3B-SigLIP](https:\u002F\u002Fhuggingface.co\u002FZhang199\u002FTinyLLaVA-Qwen2.5-3B-SigLIP)\n\n#### Model Performance\n\n| VT (HF Path)                      | LLM (HF Path)                      | Recipe    | VQA-v2 | GQA  | SQA-image | TextVQA | MM-Vet | POPE | MME    | MMMU-val |\n| --------------------------------- | ---------------------------------- | --------- | :----: | :--: | :-------: | :-----: | :----: | :--: | :----: | :------: |\n| openai\u002Fclip-vit-large-patch14-336 | apple\u002FOpenELM-450M-Instruct        | base      | 69.5   | 52.1 | 50.6      | 40.4    | 20.0   | 83.6 | 1052.9 | 23.9     |\n| google\u002Fsiglip-so400m-patch14-384  | apple\u002FOpenELM-450M-Instruct        | base      | 71.7   | 53.9 | 54.1      | 44.0    | 20.0   | 85.4 | 1118.8 | 24.0     |\n| google\u002Fsiglip-so400m-patch14-384  | Qwen\u002FQwen2-0.5B                    | base      | 72.3   | 55.8 | 60.1      | 45.2    | 19.5   | 86.6 | 1153.0 | 29.7     |\n| google\u002Fsiglip-so400m-patch14-384  | Qwen\u002FQwen2.5-0.5B                  | base      | 75.3   | 59.5 | 60.3      | 48.3    | 23.9   | 86.1 | 1253.0 | 33.3     |\n| 
google\u002Fsiglip-so400m-patch14-384  | Qwen\u002FQwen2.5-3B                    | base      | 79.4   | 62.5 | 74.1      | 58.3    | 34.8   | 87.4 | 1438.7 | 39.9     |\n| openai\u002Fclip-vit-large-patch14-336 | TinyLlama\u002FTinyLlama-1.1B-Chat-v1.0 | base      | 73.7   | 58.0 | 59.9      | 46.3    | 23.2   | 85.5 | 1284.6 | 27.9     |\n| google\u002Fsiglip-so400m-patch14-384  | TinyLlama\u002FTinyLlama-1.1B-Chat-v1.0 | base      | 75.5   | 58.6 | 64.0      | 49.6    | 23.5   | 86.3 | 1256.5 | 28.3     |\n| openai\u002Fclip-vit-large-patch14-336 | stabilityai\u002Fstablelm-2-zephyr-1_6b | base      | 75.9   | 59.5 | 64.6      | 50.5    | 27.3   | 86.1 | 1368.1 | 31.8     |\n| google\u002Fsiglip-so400m-patch14-384  | stabilityai\u002Fstablelm-2-zephyr-1_6b | base      | 78.2   | 60.7 | 66.7      | 56.0    | 29.4   | 86.3 | 1319.3 | 32.6     |\n| google\u002Fsiglip-so400m-patch14-384  | google\u002Fgemma-2b-it                 | base      | 78.4   | 61.6 | 64.4      | 53.6    | 26.9   | 86.4 | 1339.0 | 31.7     |\n| openai\u002Fclip-vit-large-patch14-336 | microsoft\u002Fphi-2                    | base      | 76.8   | 59.4 | 71.2      | 53.4    | 31.7   | 86.8 | 1448.6 | 36.3     |\n| google\u002Fsiglip-so400m-patch14-384  | microsoft\u002Fphi-2                    | base      | 79.2   | 61.6 | 71.9      | 57.4    | 35.0   | 87.2 | 1462.4 | 38.2     |\n| google\u002Fsiglip-so400m-patch14-384  | microsoft\u002Fphi-2                    | base&lora | 77.6   | 59.7 | 71.6      | 53.8    | 33.3   | 87.9 | 1413.2 | 35.6     |\n| google\u002Fsiglip-so400m-patch14-384  | microsoft\u002Fphi-2                    | share     | 80.1   | 62.1 | 73.0      | 60.3    | 37.5   | 87.2 | 1466.4 | 38.4     |\n\n### Legacy Models\n\nwhich are trained using the old codebase TinyLLaVABench.\n\n- [TinyLLaVA-3.1B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-3.1B)\n- [TinyLLaVA-2.0B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-2.0B)\n- 
[TinyLLaVA-1.5B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-1.5B)\n- [tiny-llava-hf](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002Ftiny-llava-v1-hf)\n\nIf you have models trained by our old codebase TinyLLaVABench and you still want to use them, we provide an example of [TinyLLaVA-3.1B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-3.1B) for how to use legacy models.\n\n\u003Cdetails>\n\u003Csummary>Example of using legacy models\u003C\u002Fsummary>\n\n\n```Python\nfrom tinyllava.eval.run_tiny_llava import eval_model\nfrom tinyllava.model.convert_legecy_weights_to_tinyllavafactory import *\n\nmodel = convert_legecy_weights_to_tinyllavafactory('bczhou\u002FTinyLLaVA-3.1B')\n\nprompt = \"What are the things I should be cautious about when I visit here?\"\nimage_file = \"https:\u002F\u002Fllava-vl.github.io\u002Fstatic\u002Fimages\u002Fview.jpg\"\n\nargs = type('Args', (), {\n    \"model_path\": None,\n    \"model\": model,\n    \"query\": prompt,\n    \"conv_mode\": \"phi\", # the same as conv_version in the training stage. Different LLMs have different conv_mode\u002Fconv_version, please replace it\n    \"image_file\": image_file,\n    \"sep\": \",\",\n    \"temperature\": 0,\n    \"top_p\": None,\n    \"num_beams\": 1,\n    \"max_new_tokens\": 512\n})()\n\neval_model(args)\n\n\"\"\"\nOutput: \nWhen visiting this serene lakeside location with a wooden dock, there are a few things to be cautious about. First, ensure that the dock is stable and secure before stepping onto it, as it might be slippery or wet, especially if it's a wooden structure. Second, be mindful of the surrounding water, as it can be deep or have hidden obstacles, such as rocks or debris, that could pose a risk. Additionally, be aware of the weather conditions, as sudden changes in weather can make the area more dangerous. 
Lastly, respect the natural environment and wildlife, and avoid littering or disturbing the ecosystem.\n\"\"\"\n```\n\n\u003C\u002Fdetails>\n\n\n\n## Launch Demo Locally\n\n### Gradio Web Demo\nLaunch a local web demo by running:\n```bash\npython tinyllava\u002Fserve\u002Fapp.py --model-path tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B\n```\n### CLI Inference\nWe also support running inference with CLI. To use our model, run:\n```bash\npython -m tinyllava.serve.cli \\\n   --model-path tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B \\\n   --image-file \".\u002Ftinyllava\u002Fserve\u002Fexamples\u002Fextreme_ironing.jpg\" \n```\n### Quick Inference Scripts\nIf you want to launch the model trained by yourself or us locally, here's an example.\n\u003Cdetails>\n\u003Csummary>Run inference with the model trained by yourself or downloaded from HuggingFace\u003C\u002Fsummary>\n\n```Python\nfrom tinyllava.eval.run_tiny_llava import eval_model\n\nmodel_path = \"\u002Fabsolute\u002Fpath\u002Fto\u002Fyour\u002Fmodel\u002F\"\nprompt = \"What are the things I should be cautious about when I visit here?\"\nimage_file = \"https:\u002F\u002Fllava-vl.github.io\u002Fstatic\u002Fimages\u002Fview.jpg\"\nconv_mode = \"phi\" # or llama, gemma, etc\n\nargs = type('Args', (), {\n    \"model_path\": model_path,\n    \"model\": None,\n    \"query\": prompt,\n    \"conv_mode\": conv_mode,\n    \"image_file\": image_file,\n    \"sep\": \",\",\n    \"temperature\": 0,\n    \"top_p\": None,\n    \"num_beams\": 1,\n    \"max_new_tokens\": 512\n})()\n\neval_model(args)\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Run inference with the model trained by us using huggingface transformers\u003C\u002Fsummary>\n\n```Python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\nhf_path = 'tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B'\nmodel = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)\nmodel.cuda()\nconfig = model.config\ntokenizer = 
AutoTokenizer.from_pretrained(hf_path, use_fast=False, model_max_length = config.tokenizer_model_max_length, padding_side = config.tokenizer_padding_side)\nprompt=\"What are these?\"\nimage_url=\"http:\u002F\u002Fimages.cocodataset.org\u002Fval2017\u002F000000039769.jpg\"\noutput_text, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)\n\nprint('model output:', output_text)\nprint('running time:', generation_time)\n```\n\u003C\u002Fdetails>\n\n## Custom Finetune\nIf you want to finetune TinyLLaVA with your custom datasets, please refer to [here](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fblob\u002Fmain\u002FCUSTOM_FINETUNE.md).\n\n## Customize Your Own Large Multimodel Models\n\n### LLM\n\nIf you want to add a new LLM by yourself, you need to create two files: one for the chat template and the other for the language model, under the folders `tinyllava\u002Fdata\u002Ftemplate\u002F` and `tinyllava\u002Fmodel\u002Fllm\u002F`.\n\nHere is an example of adding the Gemma model.\n\nFirstly, create `tinyllava\u002Fdata\u002Ftemplate\u002Fgemma_template.py`, which will be used for the finetuning stage.\n\n```python\nfrom dataclasses import dataclass\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union\nfrom packaging import version\n\nfrom .formatter import EmptyFormatter, StringFormatter\nfrom .base import Template\nfrom .formatter import Formatter\nfrom . import register_template\nfrom ...utils.constants import *\n\nfrom transformers import PreTrainedTokenizer\nimport torch\nimport tokenizers\n\n    \nsystem = \"A chat between a curious user and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions.\"\n\n@register_template('gemma') # Enable the TemplateFactory to obtain the added template by this string ('gemma').\n@dataclass\nclass GemmaTemplate(Template):\n    format_image_token: \"Formatter\" = StringFormatter(slot=\"\u003Cimage>\\n{{content}}\")\n    format_user: \"Formatter\" = StringFormatter(slot=\"USER\" + \": \" + \"{{content}}\" + \" \")\n    format_assistant: \"Formatter\" = StringFormatter(slot=\"ASSISTANT\" + \": \" + \"{{content}}\" + \"\u003Ceos>\") # to be modified according to the tokenizer you choose\n    system: \"Formatter\" = EmptyFormatter(slot=system+\" \")\n    separator: \"Formatter\" = EmptyFormatter(slot=[' ASSISTANT: ', '\u003Ceos>']) # to be modified according to the tokenizer you choose\n\n    def _make_masks(self, labels, tokenizer, sep, eos_token_length, rounds):\n        # your code here\n        return labels, cur_len\n```\n**Tips:**\n\nPlease ensure that the `labels` (returned by the `_make_masks` function) follows this format: answers and the eos token id are not masked, and the other tokens are masked with `-100`.\n\nSecondly, create `tinyllava\u002Fmodel\u002Fllm\u002Fgemma.py`.\n\n```python\nfrom transformers import GemmaForCausalLM, AutoTokenizer\n# The LLM you want to add along with its corresponding tokenizer.\n\nfrom . 
import register_llm\n\n# Add GemmaForCausalLM along with its corresponding tokenizer and handle special tokens.\n@register_llm('gemma') # Enable the LLMFactory to obtain the added LLM by this string ('gemma').\ndef return_gemmaclass(): \n    def tokenizer_and_post_load(tokenizer):\n        tokenizer.pad_token = tokenizer.unk_token\n        return tokenizer\n    return (GemmaForCausalLM, (AutoTokenizer, tokenizer_and_post_load))\n```\n\nFinally, create `scripts\u002Ftrain\u002Ftrain_gemma.sh` with the corresponding `LLM_VERSION` and `CONV_VERSION`.\n\n### Vision Tower\n\nIf you want to add a new vision tower, you need to implement a new vision tower class that inherits from the base class `VisionTower`. Here's an example of the MoF vision tower.\n\nFirst, create `tinyllava\u002Fmodel\u002Fvision_tower\u002Fmof.py`\n\n```python\n@register_vision_tower('mof')\nclass MoFVisionTower(VisionTower):\n    def __init__(self, cfg):\n        super().__init__(cfg)\n\n        self._vision_tower = MoF(cfg)\n        self._image_processor = ...  # your image processor\n  \n    def _load_model(self, vision_tower_name, **kwargs):\n        # your code here, make sure your model can be correctly loaded from pretrained parameters either by huggingface or pytorch loading\n        ...\n\n    def forward(self, x, **kwargs):\n        # your code here\n        ...\n```\n\nThen, modify your training scripts with the corresponding `VT_VERSION`.\n\n### Connector\n\nIf you want to add a new connector, you need to implement a new connector class that inherits from the base class `Connector`. Here's an example of the Linear connector.\n\nFirst, create `tinyllava\u002Fmodel\u002Fconnector\u002Flinear.py`\n\n\n```python\nimport torch.nn as nn\n\nfrom . import register_connector\nfrom .base import Connector\n    \n@register_connector('linear') # Enable the ConnectorFactory to obtain the added connector by this string ('linear').
\nclass LinearConnector(Connector):\n    def __init__(self, config):\n        super().__init__()\n        self._connector =  nn.Linear(config.vision_hidden_size, config.hidden_size) # define your connector model\n```\n\nThen, modify your training scripts with the corresponding `CN_VERSION`.\n\n## Acknowledgement\nWe give special thanks to Lei Zhao, Luche Wang, Kaijun Luo, and Junchen Wang for building the [Demo](http:\u002F\u002F8843843nmph5.vicp.fun\u002F#\u002F).\n\n## Contact\nIf you have any questions, feel free to either initiate an *Issue* or contact us by WeChat (WeChatID: *TinyLLaVA*).\n\n## &#x270F; Citation\n\nIf you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:.\n\n```BibTeX\n@misc{zhou2024tinyllava,\n      title={TinyLLaVA: A Framework of Small-scale Large Multimodal Models}, \n      author={Baichuan Zhou and Ying Hu and Xi Weng and Junlong Jia and Jie Luo and Xien Liu and Ji Wu and Lei Huang},\n      year={2024},\n      eprint={2402.14289},\n      archivePrefix={arXiv},\n      primaryClass={cs.LG}\n}\n```\n```BibTeX\n@article{jia2024tinyllava,\n  title={TinyLLaVA Factory: A Modularized Codebase for Small-scale Large Multimodal Models},\n  author={Jia, Junlong and Hu, Ying and Weng, Xi and Shi, Yiming and Li, Miao and Zhang, Xingjian and Zhou, Baichuan and Liu, Ziyu and Luo, Jie and Huang, Lei and Wu, Ji},\n  journal={arXiv preprint arXiv:2405.11788},\n  year={2024}\n}\n```\n\n\n## ❤️ Community efforts\n* Our codebase is built upon the [LLaVA](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA) project. Great work!\n* Our project uses data from the [ShareGPT4V](https:\u002F\u002Fgithub.com\u002FInternLM\u002FInternLM-XComposer\u002Ftree\u002Fmain\u002Fprojects\u002FShareGPT4V) project. 
Great work!\n","\u003Ch2 align=\"center\"> \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14289\">TinyLLaVA工厂\u003C\u002Fa>\u003Ch5 align=\"center\">\n\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-%20Open%20In%20HF-blue.svg)](https:\u002F\u002Fhuggingface.co\u002Ftinyllava) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2402.14289-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14289) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2405.11788-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11788)[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-yellow)](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fblob\u002Fmain\u002FLICENSE) [![Doc](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDoc-Document-logo=read%20the%20docs&logoColor=white&label=Doc)](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002F) [![Demo](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDemo-Demo-red.svg)](http:\u002F\u002F8843843nmph5.vicp.fun\u002F#\u002F)\n\n![architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTinyLLaVA_TinyLLaVA_Factory_readme_9a7fcc50ef98.jpg)\n\n## &#x1F389; 新闻\n* **[2025.01]** 我们的新工作 [TinyLLaVA-Video](https:\u002F\u002Fgithub.com\u002FZhangXJ199\u002FTinyLLaVA-Video) 发布。\n* **[2024.08.13]** 添加了一个用于解释 TinyLLaVA 预测的简单 [可视化工具](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Ftree\u002Fmain\u002Ftinyllava_visualizer)。\n* **[2024.05.21]** 我们的论文：[TinyLLaVA工厂：面向小型多模态模型的模块化代码库](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11788) 发表！\n* **[2024.05.15]** [TinyLLaVA工厂](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory)，我们的新代码库，发布！  **请注意，旧代码库 TinyLLaVABench 已移至 [tinyllava_bench](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Ftree\u002Ftinyllava_bench) 分支。**\n* **[2024.05.04]**  
[TinyLLaVA演示](http:\u002F\u002F8843843nmph5.vicp.fun\u002F#\u002F) 发布！（访问我们演示的密码是‘1234’。）\n* **[2024.02.21]** 我们的论文：[TinyLLaVA：小型多模态模型框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14289) 发表！\n\n## &#x1F525; 要点\n- 我们最好的模型，[TinyLLaVA-Phi-2-SigLIP-3.1B](https:\u002F\u002Fhuggingface.co\u002Ftinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B)，在整体性能上优于现有的7B模型，如 LLaVA-1.5 和 Qwen-VL。\n\n- TinyLLaVA工厂是一个开源的模块化代码库，专为小型多模态模型（LMMs）设计，基于 PyTorch 和 HuggingFace 实现，注重代码实现的简洁性、新功能的可扩展性以及训练结果的可重复性。\n\n- 使用 TinyLLaVA工厂，您可以以更少的编码工作和更少的错误来定制您自己的大型多模态模型。\n\n- TinyLLaVA工厂集成了多种前沿模型和方法。\n\n  - LLM 目前支持 **OpenELM**、**TinyLlama**、**StableLM**、**Qwen**、**Gemma** 和 **Phi**。\n\n  - 视觉塔目前支持 **CLIP**、**SigLIP**、**Dino** 以及 **CLIP 和 Dino 的组合**。\n\n  - 连接器目前支持 **MLP**、**Qformer** 和 **Resampler**。\n\n  - 训练配方目前支持 **冻结\u002F完全\u002F部分微调** 以及 **LoRA\u002FQLoRA 微调**。\n\n## 目录\n\n- [🎉 新闻](#-news)\n- [🔥 要点](#-takeaways)\n- [目录](#contents)\n- [安装与要求](#installation-and-requirements)\n    - [升级到最新代码库](#upgrade-to-the-latest-code-base)\n- [开始使用](#get-started)\n    - [1. 数据准备](#1-data-preparation)\n    - [2. 训练](#2-train)\n    - [3. 评估](#3-evaluation)\n- [模型库](#model-zoo)\n  - [已训练模型](#trained-models)\n    - [模型性能](#model-performance)\n  - [遗留模型](#legacy-models)\n- [本地启动演示](#launch-demo-locally)\n  - [Gradio 网页演示](#gradio-web-demo)\n  - [CLI 推理](#cli-inference)\n  - [快速推理脚本](#quick-inference-scripts)\n- [自定义微调](#custom-finetune)\n- [定制您自己的大型多模态模型](#customize-your-own-large-multimodel-models)\n  - [LLM](#llm)\n  - [视觉塔](#vision-tower)\n  - [连接器](#connector)\n- [致谢](#acknowledgement)\n- [联系方式](#contact)\n- [✏ 引用](#-citation)\n- [❤️ 社区努力](#️-community-efforts)\n\n\n## 安装与要求\n\n请注意，我们的环境要求与 LLaVA 的环境要求不同。我们强烈建议您按照以下步骤从头创建环境。\n\n1. 克隆此仓库并进入文件夹\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory.git\ncd TinyLLaVA_Factory\n```\n\n2. 
创建一个 conda 环境，激活它并安装包\n```Shell\nconda create -n tinyllava_factory python=3.10 -y\nconda activate tinyllava_factory\npip install --upgrade pip  # 启用 PEP 660 支持\npip install -e .\n```\n\n3. 安装额外的包\n```Shell\npip install flash-attn==2.5.7 --no-build-isolation\n```\n#### 升级到最新代码库\n\n```Shell\ngit pull\npip install -e .\n```\n\n## 开始使用\n\n#### 1. 数据准备\n\n请参阅我们 [文档](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002F) 中的 [数据准备](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002FPrepare%20Datasets.html) 部分。\n\n#### 2. 训练\n\n这里有一个使用 Phi-2 训练 LMM 的示例。\n\n- 在 `scripts\u002Ftrain\u002Ftrain_phi.sh` 中将数据路径替换为您自己的路径\n- 在 `scripts\u002Ftrain\u002Fpretrain.sh` 中将 `output_dir` 替换为您自己的路径\n- 在 `scripts\u002Ftrain\u002Ffinetune.sh` 中将 `pretrained_model_path` 和 `output_dir` 替换为您自己的路径\n- 在 `scripts\u002Ftrain\u002Fpretrain.sh` 和 `scripts\u002Ftrain\u002Ffinetune.sh` 中调整您的 GPU ID（localhost）以及 `per_device_train_batch_size`\n\n```bash\nbash scripts\u002Ftrain\u002Ftrain_phi.sh\n```\n\n以下是预训练和微调中使用的重要超参数。\n\n| 训练阶段 | 全局批量大小 | 学习率 | conv_version |\n| -------------- | :---------------: | :-----------: | :----------: |\n| 预训练    | 256               | 1e-3          | pretrain     |\n| 微调     | 128               | 2e-5          | phi          |\n\n**提示：**\n\n全局批量大小 = GPU 数量 * `per_device_train_batch_size` * `gradient_accumulation_steps`，我们建议您在对模型进行 LoRA 微调之外，始终将全局批量大小和学习率保持在上述水平。\n\n`conv_version` 是一个用于为不同 LLM 选择不同聊天模板的超参数。在预训练阶段，所有 LLM 的 `conv_version` 都相同，使用 `pretrain`。而在微调阶段，我们使用：\n\n`phi` 用于 Phi-2、StableLM、Qwen-1.5\n\n`llama` 用于 TinyLlama、OpenELM\n\n`gemma` 用于 Gemma\n\n#### 3. 
评估\n\n请参阅我们 [文档](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002FEvaluation.html) 中的 [评估](https:\u002F\u002Ftinyllava-factory.readthedocs.io\u002Fen\u002Flatest\u002FEvaluation.html) 部分。\n\n## 模型库\n\n### 训练好的模型\n\n这些模型使用 TinyLLaVA Factory 进行训练。\n\n- [TinyLLaVA-Phi-2-SigLIP-3.1B](https:\u002F\u002Fhuggingface.co\u002Ftinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B)\n- [TinyLLaVA-Gemma-SigLIP-2.4B](https:\u002F\u002Fhuggingface.co\u002Ftinyllava\u002FTinyLLaVA-Gemma-SigLIP-2.4B)\n- [TinyLLaVA-OpenELM-450M-SigLIP-0.89B](https:\u002F\u002Fhuggingface.co\u002Fjiajunlong\u002FTinyLLaVA-0.89B)\n- [TinyLLaVA-Qwen2-0.5B-SigLIP](https:\u002F\u002Fhuggingface.co\u002FZhang199\u002FTinyLLaVA-Qwen2-0.5B-SigLIP)\n- [TinyLLaVA-Qwen2.5-3B-SigLIP](https:\u002F\u002Fhuggingface.co\u002FZhang199\u002FTinyLLaVA-Qwen2.5-3B-SigLIP)\n\n#### 模型性能\n\n| VT（HF 路径）                      | LLM（HF 路径）                      | 配方    | VQA-v2 | GQA  | SQA-image | TextVQA | MM-Vet | POPE | MME    | MMMU-val |\n| --------------------------------- | ---------------------------------- | --------- | :----: | :--: | :-------: | :-----: | :----: | :--: | :----: | :------: |\n| openai\u002Fclip-vit-large-patch14-336 | apple\u002FOpenELM-450M-Instruct        | 基础      | 69.5   | 52.1 | 50.6      | 40.4    | 20.0   | 83.6 | 1052.9 | 23.9     |\n| google\u002Fsiglip-so400m-patch14-384  | apple\u002FOpenELM-450M-Instruct        | 基础      | 71.7   | 53.9 | 54.1      | 44.0    | 20.0   | 85.4 | 1118.8 | 24.0     |\n| google\u002Fsiglip-so400m-patch14-384  | Qwen\u002FQwen2-0.5B                    | 基础      | 72.3   | 55.8 | 60.1      | 45.2    | 19.5   | 86.6 | 1153.0 | 29.7     |\n| google\u002Fsiglip-so400m-patch14-384  | Qwen\u002FQwen2.5-0.5B                  | 基础      | 75.3   | 59.5 | 60.3      | 48.3    | 23.9   | 86.1 | 1253.0 | 33.3     |\n| google\u002Fsiglip-so400m-patch14-384  | Qwen\u002FQwen2.5-3B                    | 基础      | 79.4   | 62.5 | 74.1      | 58.3    | 34.8   | 
87.4 | 1438.7 | 39.9     |\n| openai\u002Fclip-vit-large-patch14-336 | TinyLlama\u002FTinyLlama-1.1B-Chat-v1.0 | 基础      | 73.7   | 58.0 | 59.9      | 46.3    | 23.2   | 85.5 | 1284.6 | 27.9     |\n| google\u002Fsiglip-so400m-patch14-384  | TinyLlama\u002FTinyLlama-1.1B-Chat-v1.0 | 基础      | 75.5   | 58.6 | 64.0      | 49.6    | 23.5   | 86.3 | 1256.5 | 28.3     |\n| openai\u002Fclip-vit-large-patch14-336 | stabilityai\u002Fstablelm-2-zephyr-1_6b | 基础      | 75.9   | 59.5 | 64.6      | 50.5    | 27.3   | 86.1 | 1368.1 | 31.8     |\n| google\u002Fsiglip-so400m-patch14-384  | stabilityai\u002Fstablelm-2-zephyr-1_6b | 基础      | 78.2   | 60.7 | 66.7      | 56.0    | 29.4   | 86.3 | 1319.3 | 32.6     |\n| google\u002Fsiglip-so400m-patch14-384  | google\u002Fgemma-2b-it                 | 基础      | 78.4   | 61.6 | 64.4      | 53.6    | 26.9   | 86.4 | 1339.0 | 31.7     |\n| openai\u002Fclip-vit-large-patch14-336 | microsoft\u002Fphi-2                    | 基础      | 76.8   | 59.4 | 71.2      | 53.4    | 31.7   | 86.8 | 1448.6 | 36.3     |\n| google\u002Fsiglip-so400m-patch14-384  | microsoft\u002Fphi-2                    | 基础      | 79.2   | 61.6 | 71.9      | 57.4    | 35.0   | 87.2 | 1462.4 | 38.2     |\n| google\u002Fsiglip-so400m-patch14-384  | microsoft\u002Fphi-2                    | 基础&lora | 77.6   | 59.7 | 71.6      | 53.8    | 33.3   | 87.9 | 1413.2 | 35.6     |\n| google\u002Fsiglip-so400m-patch14-384  | microsoft\u002Fphi-2                    | 共享      | 80.1   | 62.1 | 73.0      | 60.3    | 37.5   | 87.2 | 1466.4 | 38.4     |\n\n### 旧版模型\n\n这些模型使用旧代码库 TinyLLaVABench 进行训练。\n\n- [TinyLLaVA-3.1B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-3.1B)\n- [TinyLLaVA-2.0B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-2.0B)\n- [TinyLLaVA-1.5B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-1.5B)\n- [tiny-llava-hf](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002Ftiny-llava-v1-hf)\n\n如果您有使用我们旧代码库 TinyLLaVABench 
训练的模型，并且仍然希望使用它们，我们提供了一个 [TinyLLaVA-3.1B](https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-3.1B) 的示例，说明如何使用旧版模型。\n\n\u003Cdetails>\n\u003Csummary>旧版模型使用示例\u003C\u002Fsummary>\n\n\n```Python\nfrom tinyllava.eval.run_tiny_llava import eval_model\nfrom tinyllava.model.convert_legecy_weights_to_tinyllavafactory import *\n\nmodel = convert_legecy_weights_to_tinyllavafactory('bczhou\u002FTinyLLaVA-3.1B')\n\nprompt = \"What are the things I should be cautious about when I visit here?\"\nimage_file = \"https:\u002F\u002Fllava-vl.github.io\u002Fstatic\u002Fimages\u002Fview.jpg\"\n\nargs = type('Args', (), {\n    \"model_path\": None,\n    \"model\": model,\n    \"query\": prompt,\n    \"conv_mode\": \"phi\", # 与训练阶段的 conv_version 相同。不同的 LLM 有不同的 conv_mode\u002Fconv_version，请根据实际情况替换。\n    \"image_file\": image_file,\n    \"sep\": \",\",\n    \"temperature\": 0,\n    \"top_p\": None,\n    \"num_beams\": 1,\n    \"max_new_tokens\": 512\n})()\n\neval_model(args)\n\n\"\"\"\n输出： \nWhen visiting this serene lakeside location with a wooden dock, there are a few things to be cautious about. First, ensure that the dock is stable and secure before stepping onto it, as it might be slippery or wet, especially if it's a wooden structure. Second, be mindful of the surrounding water, as it can be deep or have hidden obstacles, such as rocks or debris, that could pose a risk. Additionally, be aware of the weather conditions, as sudden changes in weather can make the area more dangerous. 
Lastly, respect the natural environment and wildlife, and avoid littering or disturbing the ecosystem.\n\"\"\"\n```\n\n\u003C\u002Fdetails>\n\n\n\n## 在本地启动演示\n\n### Gradio Web 演示\n通过运行以下命令启动本地 Web 演示：\n```bash\npython tinyllava\u002Fserve\u002Fapp.py --model-path tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B\n```\n### CLI 推理\n我们还支持使用 CLI 进行推理。要使用我们的模型，请运行：\n```bash\npython -m tinyllava.serve.cli \\\n   --model-path tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B \\\n   --image-file \".\u002Ftinyllava\u002Fserve\u002Fexamples\u002Fextreme_ironing.jpg\" \n```\n\n### 快速推理脚本\n如果您想在本地运行自己训练或我们提供的模型，这里有一个示例。\n\u003Cdetails>\n\u003Csummary>使用您自己训练的模型或从 HuggingFace 下载的模型进行推理\u003C\u002Fsummary>\n\n```Python\nfrom tinyllava.eval.run_tiny_llava import eval_model\n\nmodel_path = \"\u002F绝对路径\u002F到\u002F您的\u002F模型\u002F\"\nprompt = \"当我访问这里时，有哪些事情需要特别注意？\"\nimage_file = \"https:\u002F\u002Fllava-vl.github.io\u002Fstatic\u002Fimages\u002Fview.jpg\"\nconv_mode = \"phi\" # 或 llama、gemma 等\n\nargs = type('Args', (), {\n    \"model_path\": model_path,\n    \"model\": None,\n    \"query\": prompt,\n    \"conv_mode\": conv_mode,\n    \"image_file\": image_file,\n    \"sep\": \",\",\n    \"temperature\": 0,\n    \"top_p\": None,\n    \"num_beams\": 1,\n    \"max_new_tokens\": 512\n})()\n\neval_model(args)\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>使用我们用 huggingface transformers 训练的模型进行推理\u003C\u002Fsummary>\n\n```Python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\nhf_path = 'tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B'\nmodel = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)\nmodel.cuda()\nconfig = model.config\ntokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False, model_max_length = config.tokenizer_model_max_length,padding_side = config.tokenizer_padding_side)\nprompt=\"这是什么？\"\nimage_url=\"http:\u002F\u002Fimages.cocodataset.org\u002Fval2017\u002F000000039769.jpg\"\noutput_text, genertaion_time = 
model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)\n\nprint('模型输出:', output_text)\nprint('运行时间:', genertaion_time)\n```\n\u003C\u002Fdetails>\n\n## 自定义微调\n如果您想使用自己的数据集对 TinyLLaVA 进行微调，请参阅[这里](https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fblob\u002Fmain\u002FCUSTOM_FINETUNE.md)。\n\n## 定制您自己的大型多模态模型\n\n### LLM\n\n如果您想自行添加一个新的 LLM，您需要在 `tinyllava\u002Fdata\u002Ftemplate\u002F` 和 `tinyllava\u002Fmodel\u002Fllm\u002F` 文件夹下创建两个文件：一个用于聊天模板，另一个用于语言模型。\n\n下面以添加 Gemma 模型为例。\n\n首先，创建 `tinyllava\u002Fdata\u002Ftemplate\u002Fgemma_template.py`，该文件将在微调阶段使用。\n\n```python\nfrom dataclasses import dataclass\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union\nfrom packaging import version\n\nfrom .formatter import EmptyFormatter, StringFormatter\nfrom .base import Template\nfrom .formatter import Formatter\nfrom . import register_template\nfrom ...utils.constants import *\n\nfrom transformers import PreTrainedTokenizer\nimport torch\nimport tokenizers\n\n# 系统提示词需与训练数据保持一致，请保留英文原文，勿作翻译。\nsystem = \"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\"\n\n@register_template('gemma') # 使 TemplateFactory 能够通过字符串 ('gemma') 获取所添加的模板。\n@dataclass\nclass GemmaTemplate(Template):\n    format_image_token: \"Formatter\" = StringFormatter(slot=\"\u003Cimage>\\n{{content}}\")\n    format_user: \"Formatter\" = StringFormatter(slot=\"USER\" + \": \" + \"{{content}}\" + \" \")\n    format_assistant: \"Formatter\" = StringFormatter(slot=\"ASSISTANT\" + \": \" + \"{{content}}\" + \"\u003Ceos>\") # 根据您选择的分词器进行修改\n    system: \"Formatter\" = EmptyFormatter(slot=system+\" \")\n    separator: \"Formatter\" = EmptyFormatter(slot=[' ASSISTANT: ', '\u003Ceos>']) # 根据您选择的分词器进行修改\n\n    def _make_masks(self, labels, tokenizer, sep, eos_token_length, rounds):\n        # 您的代码在这里\n        return labels, cur_len\n```\n\n**提示：**\n\n请确保 `_make_masks` 函数返回的 `labels` 遵循以下格式：答案和 EOS 标记 ID 不被掩码，其他标记则用 `-100` 掩码。\n\n其次，创建 
`tinyllava\u002Fmodel\u002Fllm\u002Fgemma.py`。\n\n```python\nfrom transformers import GemmaForCausalLM, AutoTokenizer\n# 您想要添加的 LLM 及其对应的分词器。\n\nfrom . import register_llm\n\n# 添加 GemmaForCausalLM 及其对应的分词器，并处理特殊标记。\n@register_llm('gemma') # 使 LLMFactory 能够通过字符串 ('gemma') 获取所添加的 LLM。\ndef return_gemmaclass(): \n    def tokenizer_and_post_load(tokenizer):\n        tokenizer.pad_token = tokenizer.unk_token\n        return tokenizer\n    return (GemmaForCausalLM, (AutoTokenizer, tokenizer_and_post_load))\n```\n\n最后，创建 `scripts\u002Ftrain\u002Ftrain_gemma.sh`，并指定相应的 `LLM_VERSION` 和 `CONV_VERSION`。\n\n### 视觉塔\n\n如果您想添加一个新的视觉塔，您需要实现一个新的视觉塔类，该类应继承自基类 `VisionTower`。以下是 MoF 视觉塔的示例。\n\n首先，创建 `tinyllava\u002Fmodel\u002Fvision_tower\u002Fmof.py`：\n\n```python\n@register_vision_tower('mof')      \nclass MoFVisionTower(VisionTower):\n    def __init__(self, cfg):\n        super().__init__(cfg)\n\n        self._vision_tower = MoF(cfg)\n        self._image_processor = # 您的图像处理器\n  \n    def _load_model(self, vision_tower_name, **kwargs):\n        # 您的代码在这里，确保您的模型能够通过 Hugging Face 或 PyTorch 加载方式正确地从预训练参数中加载\n\n    def forward(self, x, **kwargs):\n        # 您的代码在这里\n```\n\n然后，根据相应的 `VT_VERSION` 修改您的训练脚本。\n\n### 连接器\n\n如果您想添加一个新的连接器，您需要实现一个新的连接器类，该类应继承自基类 `Connector`。以下是线性连接器的示例。\n\n首先，创建 `tinyllava\u002Fmodel\u002Fconnector\u002Flinear.py`：\n\n```python\nimport torch.nn as nn\n\nfrom . 
import register_connector\nfrom .base import Connector\n\n@register_connector('linear') # 使 ConnectorFactory 能够通过字符串 ('linear') 获取所添加的连接器。\nclass LinearConnector(Connector):\n    def __init__(self, config):\n        super().__init__()\n        self._connector = nn.Linear(config.vision_hidden_size, config.hidden_size) # 定义您的连接器模型\n```\n\n然后，根据相应的 `CN_VERSION` 修改您的训练脚本。\n\n## 致谢\n我们特别感谢 Lei Zhao、Luche Wang、Kaijun Luo 和 Junchen Wang 构建了[演示](http:\u002F\u002F8843843nmph5.vicp.fun\u002F#\u002F)。\n\n## 联系方式\n如果您有任何问题，欢迎随时发起 *Issue* 或通过微信（微信号：*TinyLLaVA*）联系我们。\n\n## &#x270F; 引用\n\n如果您在研究中发现我们的论文和代码很有用，请考虑给项目点个赞 :star: 并引用我们 :pencil:。\n\n```BibTeX\n@misc{zhou2024tinyllava,\n      title={TinyLLaVA: A Framework of Small-scale Large Multimodal Models},\n      author={Zhou, Baichuan and Hu, Ying and Weng, Xi and Jia, Junlong and Luo, Jie and Liu, Xien and Wu, Ji and Huang, Lei},\n      year={2024},\n      eprint={2402.14289},\n      archivePrefix={arXiv},\n      primaryClass={cs.LG}\n}\n```\n```BibTeX\n@article{jia2024tinyllava,\n  title={TinyLLaVA Factory: A Modularized Codebase for Small-scale Large Multimodal Models},\n  author={Jia, Junlong and Hu, Ying and Weng, Xi and Shi, Yiming and Li, Miao and Zhang, Xingjian and Zhou, Baichuan and Liu, Ziyu and Luo, Jie and Huang, Lei and Wu, Ji},\n  journal={arXiv preprint arXiv:2405.11788},\n  year={2024}\n}\n```\n\n\n## ❤️ 社区贡献\n* 我们的代码库基于 [LLaVA](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA) 项目构建。非常出色的工作！\n* 我们项目使用了来自 [ShareGPT4V](https:\u002F\u002Fgithub.com\u002FInternLM\u002FInternLM-XComposer\u002Ftree\u002Fmain\u002Fprojects\u002FShareGPT4V) 项目的数据。非常出色的工作！","# TinyLLaVA_Factory 快速上手指南\n\nTinyLLaVA_Factory 是一个用于构建小规模大型多模态模型（LMM）的模块化开源代码库。它基于 PyTorch 和 HuggingFace，支持多种主流 LLM（如 Phi, Qwen, Gemma, OpenELM 等）和视觉编码器（如 CLIP, SigLIP），旨在以最小的代码量实现模型的自定义训练与推理。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n- **操作系统**: Linux (推荐)\n- **Python 版本**: 3.10\n- **GPU**: 支持 CUDA 的 NVIDIA GPU（建议显存 16GB 以上以进行训练）\n- **依赖管理**: Conda\n\n> **注意**：本项目的环境依赖与原版 LLaVA 不同，强烈建议从头创建独立的虚拟环境。\n\n## 安装步骤\n\n### 1. 克隆仓库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory.git\ncd TinyLLaVA_Factory\n```\n\n### 2. 
创建并激活 Conda 环境\n```bash\nconda create -n tinyllava_factory python=3.10 -y\nconda activate tinyllava_factory\n```\n\n### 3. 安装基础依赖\n启用 PEP 660 支持并安装项目包：\n```bash\npip install --upgrade pip\npip install -e .\n```\n\n### 4. 安装 Flash Attention\n为了获得最佳性能，需安装特定版本的 `flash-attn`：\n```bash\npip install flash-attn==2.5.7 --no-build-isolation\n```\n> **国内加速提示**：如果下载缓慢，可添加清华或阿里镜像源：\n> `pip install flash-attn==2.5.7 --no-build-isolation -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n### 5. 更新代码（可选）\n若需获取最新功能，可执行：\n```bash\ngit pull\npip install -e .\n```\n\n## 基本使用\n\n以下是使用 TinyLLaVA_Factory 进行模型推理的最简流程。\n\n### 启动本地 Web Demo\n您可以快速启动一个基于 Gradio 的网页界面来体验模型（以官方发布的 3.1B 模型为例）：\n\n```bash\npython tinyllava\u002Fserve\u002Fapp.py --model-path tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B\n```\n运行后，终端会显示本地访问地址（通常为 `http:\u002F\u002Flocalhost:7860`），在浏览器打开即可上传图片并进行对话。\n\n### 命令行推理 (CLI)\n如果您希望通过命令行直接获取结果，可以使用内置的评估脚本。以下是一个简单的 Python 调用示例：\n\n```python\nfrom tinyllava.eval.run_tiny_llava import eval_model\n\n# 配置参数\nargs = type('Args', (), {\n    \"model_path\": \"tinyllava\u002FTinyLLaVA-Phi-2-SigLIP-3.1B\",\n    \"model\": None,\n    \"query\": \"Describe this image in detail.\",\n    \"conv_mode\": \"phi\",  # 根据使用的 LLM 类型调整，如 llama, gemma, qwen 等\n    \"image_file\": \"https:\u002F\u002Fllava-vl.github.io\u002Fstatic\u002Fimages\u002Fview.jpg\",\n    \"sep\": \",\",\n    \"temperature\": 0,\n    \"top_p\": None,\n    \"num_beams\": 1,\n    \"max_new_tokens\": 512\n})()\n\n# 执行推理\neval_model(args)\n```\n\n### 快速训练示例\n若要训练自己的模型，需先准备数据（参考官方文档 Data Preparation 部分），然后修改脚本中的路径参数并运行：\n\n1. 编辑 `scripts\u002Ftrain\u002Ftrain_phi.sh`，替换数据路径和输出目录。\n2. 
执行训练脚本：\n```bash\nbash scripts\u002Ftrain\u002Ftrain_phi.sh\n```\n*注：预训练和微调的关键超参数（如 Global Batch Size 和学习率）已在脚本中预设，通常无需更改，除非使用 LoRA 微调。*","一家初创教育科技公司希望为视障学生开发一款能实时描述课本插图并回答相关问题的移动端辅助应用，但受限于服务器成本和延迟要求，必须使用轻量级模型。\n\n### 没有 TinyLLaVA_Factory 时\n- **选型试错成本极高**：团队需手动拼接不同规模的视觉编码器（如 CLIP）与语言模型（如 TinyLlama），编写大量胶水代码，稍有不慎就会导致维度不匹配或训练崩溃。\n- **性能与体积难以平衡**：直接套用现有的 7B 参数模型会导致移动端推理延迟过高，而自行裁剪小模型又往往造成图像理解能力断崖式下跌，无法准确识别复杂图表。\n- **微调策略实施困难**：想要针对教育场景数据进行高效微调（如 LoRA），需要从头搭建复杂的训练流水线，复现前沿论文中的冻结层或部分参数更新策略耗时数周。\n- **可解释性缺失**：当模型错误描述图片时，开发人员缺乏内置工具来可视化注意力机制，难以定位是视觉特征提取出错还是语言生成逻辑偏差。\n\n### 使用 TinyLLaVA_Factory 后\n- **模块化快速组装**：利用其预置的模块化代码库，开发者像搭积木一样迅速组合 Phi-2 语言模型与 SigLIP 视觉塔，几行配置即可构建出专有的 3.1B 多模态模型，大幅减少编码错误。\n- **小模型大智慧**：直接调用经过验证的架构，在仅 3.1B 参数量下实现了超越传统 7B 模型（如 LLaVA-1.5）的图像理解精度，完美适配移动端低延迟需求。\n- **灵活高效的训练**：内置支持 QLoRA 及部分参数微调策略，团队仅需少量显卡资源即可在短时间内完成针对教材插图的定制化训练，快速迭代模型效果。\n- **直观的错误诊断**：借助集成的可视化工具，开发人员能清晰看到模型关注图片的哪些区域，迅速修正了模型对几何图形识别不准的问题，提升了产品可靠性。\n\nTinyLLaVA_Factory 通过高度模块化的设计，让开发者能以极低的代码成本构建出兼具高性能与轻量级的多模态模型，真正实现了“小身材，大能量”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTinyLLaVA_TinyLLaVA_Factory_7cf1b967.png","TinyLLaVA",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FTinyLLaVA_9e8c2b5d.png","https:\u002F\u002Fgithub.com\u002FTinyLLaVA",[77,81],{"name":78,"color":79,"percentage":80},"Python","#3572A5",87.1,{"name":82,"color":83,"percentage":84},"Shell","#89e051",12.9,976,99,"2026-04-12T15:05:21","Apache-2.0","Linux","需要 NVIDIA GPU（依赖 flash-attn），具体显存需求未说明，但需支持 Flash Attention 2.5.7","未说明",{"notes":93,"python":94,"dependencies":95},"官方强烈建议从头创建 conda 环境；必须安装特定版本的 flash-attn (2.5.7) 且需禁用构建隔离 (--no-build-isolation)；该工具专注于小规模多模态大模型，支持多种 LLM（如 Phi, Qwen, Gemma）和视觉编码器（如 SigLIP, 
CLIP）的组合；训练时需注意全局批大小和学习率的超参数设置。","3.10",[96,97,98,99,100],"torch","flash-attn==2.5.7","transformers","accelerate","peft",[35,14,102],"其他",[104,105,106,107,108,98,109],"large-multimodal-models","llama","llava","nlp","tinyllama","vision-language","2026-03-27T02:49:30.150509","2026-04-13T22:51:00.264769",[113,118,123,128,133,138,143],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},32099,"如何对 TinyLLaVA-3.1B 模型进行推理？直接替换 model_id 报错怎么办？","不能简单地替换 `model_id`，因为模型类型不匹配（例如从 `tiny_llava_phi` 实例化为 `llava` 会报错）。请按照仓库中的 `Run Inference` 示例进行操作，并使用 Hugging Face 上对应的文件（如 https:\u002F\u002Fhuggingface.co\u002Fbczhou\u002FTinyLLaVA-3.1B\u002Ftree\u002Fmain）。如果遇到预处理配置错误，可能需要使用 `tiny-llava-v1-hf` 中的 `preprocessor_config.json` 文件。","https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fissues\u002F4",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},32100,"为什么复现 LLaVA-Phi 的结果与论文或图表中的数据不一致？","这通常是因为精度设置问题。Phi-2 的原始权重是以 fp16 发布的，如果在 VLM 框架中将其转换为 bf16 会导致意外行为（例如预训练时难以收敛）。实验验证表明，将训练精度从 bf16 改为 fp16 对 Phi-2 模型有显著影响，能复现出预期的结果。","https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fissues\u002F9",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},32101,"LoRA 微调后生成的目录结构（connector, language_model, vision_tower 分开）如何合并为官方格式？","使用 LoRA 进行微调后，模型权重默认会分开存储为 `language_model`、`vision_tower` 和 `connector` 文件夹，这与全量微调或非 LoRA 微调的目录结构不同。这是预期行为。如果需要合并为单一模型格式以便推理或上传，通常需要编写脚本将这些组件加载并合并保存，或者在推理时分别加载这些组件（具体取决于推理代码的实现）。目前的目录结构是 LoRA 微调后的标准结果。","https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fissues\u002F122",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},32102,"如何在自定义文本 VQA 数据集上微调 TinyLLaVA-1.5B？应该使用哪个脚本？","对于使用 LoRA 微调 TinyLLaVA-1.5B，请确保数据格式符合问答对的要求。关于脚本选择，如果在评估时传递了 `--model-base` 参数且模型名称中不包含 'LoRA'，加载函数会寻找 `mm_projector.bin`。如果你在微调时没有传递 `--tune_mm_mlp_adapter` 参数，那么在评估时也不需要传递 `model_base` 参数。请根据是否调整了 MLP 
适配器来决定评估命令的参数。","https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fissues\u002F31",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},32103,"TinyLLaVA 是否支持多轮对话和多图像输入？","1. 多轮对话：是的，TinyLLaVA 的训练数据集与 LLaVA1.5 或 ShareGPT4v 相同，因此在微调阶段是基于多轮对话数据进行训练的，支持多轮对话。\n2. 多图像输入：目前版本不支持同时接受多张图像输入，仅支持单张图像。开发团队计划在未来版本中添加此功能。","https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fissues\u002F62",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},32104,"微调时 Loss 始终为 0 是什么原因？","如果在使用非 LoRA 的微调脚本且只有一种模态数据时遇到 Loss 为 0 的情况，请检查是否错误地设置了 `group_by_modality_length` 参数。将其设置为 `False` 可能有助于解决此问题。此外，确保数据加载和损失计算逻辑正确，有时重新拉取最新代码修复潜在 bug 也能解决问题。","https:\u002F\u002Fgithub.com\u002FTinyLLaVA\u002FTinyLLaVA_Factory\u002Fissues\u002F38",{"id":144,"question_zh":145,"answer_zh":146,"source_url":137},32105,"如何微调最新的 TinyLLaVA-Phi-2-SigLIP-3.1B 模型？旧脚本不适用怎么办？","旧的微调脚本可能不适用于最新分支。维护者已修复相关 bug，请拉取仓库的最新版本（pull the latest version）即可解决兼容性问题。更新后，可以参考文档中的自定义微调指南进行操作。",[]]