[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-huggingface--nanoVLM":3,"tool-huggingface--nanoVLM":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":76,"owner_website":81,"owner_url":82,"languages":83,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":23,"env_os":96,"env_gpu":97,"env_ram":98,"env_deps":99,"category_tags":111,"github_topics":80,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":112,"updated_at":113,"faqs":114,"releases":140},946,"huggingface\u002FnanoVLM","nanoVLM","The simplest, fastest repository for training\u002Ffinetuning small-sized VLMs.","nanoVLM 是 Hugging Face 推出的一个极简视觉语言模型（VLM）训练框架，专为想要快速上手理解和定制小型多模态 AI 的开发者与研究人员设计。\n\n这个项目最大的特点是\"小而美\"——整个核心代码仅约 750 行纯 PyTorch 实现，包含视觉编码器、语言解码器、模态投影和训练循环等全部组件，代码清晰易读，没有复杂抽象。借鉴了 Andrej Karpathy nanoGPT 的教育理念，nanoVLM 不追求最新 SOTA 性能，而是让开发者能够真正\"看懂并改动\"每一行代码。\n\nnanoVLM 解决了传统 VLM 框架过于复杂、难以定制的问题。基于 SigLIP 视觉骨干和 SmolLM2 语言模型构建的 222M 参数版本，仅需单张 H100 GPU 训练约 6 小时即可在 MMStar 基准达到 35.3% 的准确率，证明了小模型也能具备实用能力。\n\n适合 AI 研究者、算法工程师以及希望深入理解多模态模型原理的技术爱好者。无论是想快速验证新想法、教学演示，还是为特定场景微调小型 VLM，nanoVLM 都是理想的起点。项目提供 Colab 一键运行和详细教程，大幅","nanoVLM 是 Hugging Face 推出的一个极简视觉语言模型（VLM）训练框架，专为想要快速上手理解和定制小型多模态 AI 的开发者与研究人员设计。\n\n这个项目最大的特点是\"小而美\"——整个核心代码仅约 750 行纯 PyTorch 实现，包含视觉编码器、语言解码器、模态投影和训练循环等全部组件，代码清晰易读，没有复杂抽象。借鉴了 Andrej Karpathy nanoGPT 的教育理念，nanoVLM 不追求最新 SOTA 性能，而是让开发者能够真正\"看懂并改动\"每一行代码。\n\nnanoVLM 解决了传统 VLM 框架过于复杂、难以定制的问题。基于 SigLIP 视觉骨干和 SmolLM2 语言模型构建的 222M 参数版本，仅需单张 H100 GPU 训练约 6 小时即可在 MMStar 基准达到 35.3% 的准确率，证明了小模型也能具备实用能力。\n\n适合 AI 研究者、算法工程师以及希望深入理解多模态模型原理的技术爱好者。无论是想快速验证新想法、教学演示，还是为特定场景微调小型 VLM，nanoVLM 都是理想的起点。项目提供 Colab 一键运行和详细教程，大幅降低入门门槛。","# nanoVLM\n\n![nanoVLM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_nanoVLM_readme_a7eda5829901.png)\n\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhuggingface\u002FnanoVLM\u002Fblob\u002Fmain\u002FnanoVLM.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\n\u003C\u002Fa>\n\n---\n\n> [!TIP]\n> We have written a [tutorial on nanoVLM](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fnanovlm) which will guide you through the repository and help you get started in no time.\n\n---\n\n> [!NOTE]\n> We have pushed some more breaking changes on September 9, 2025. These are all the updates to use image splitting and train on multiple nodes. 
This was used for the ablations of the FineVision release. Some things in the codebase regarding support scripts (e.g. the notebook, or memory evals) are probably not working anymore. The same applies to the older trained versions of nanoVLM (see the Note below). If you find something that doesn't work anymore, please let us know in the Issues or submit a PR!\n\n---\n\n> [!NOTE]\n> We have pushed some breaking changes to the repository on June 4, 2025. To enable us to do smarter packing, we refactored the way image and text embeddings are combined. To keep everything as smooth as possible, we have trained a new nanoVLM-450M with this new pipeline, while leaving the old nanoVLM-222M compatible with the old pipeline. If you clone this repository now or pull the updates to your local machine, the default will be the new 450M model. If you would like a simpler understanding and a simpler codebase, you can use the v0.1 release. This works out of the box with the old 222M model.\n\n---\n\nnanoVLM is the simplest repository for training\u002Ffinetuning a small-sized Vision-Language Model with a lightweight implementation in pure PyTorch. The code itself is very readable and approachable: the model consists of a Vision Backbone (`models\u002Fvision_transformer.py` ~150 lines), Language Decoder (`models\u002Flanguage_model.py` ~250 lines), Modality Projection (`models\u002Fmodality_projection.py` ~50 lines), the VLM itself (`models\u002Fvision_language_model.py` ~100 lines), and a simple training loop (`train.py` ~200 lines).\n\nSimilar to Andrej Karpathy's nanoGPT, we wanted to equip the community with a very simple implementation and training script for Vision Language Models. We do not claim this to be a new SOTA model, rather an educational effort that packs quite a bit of punch if you have the right hardware! You should be able to tweak and play around with the code in no time.\n\n\n## What can nanoVLM do?\n\nThe model definition and training logic of this repository fit in ~750 lines, with some more boilerplate logging and parameter loading. \nUsing the [`SigLIP-B\u002F16-224-85M`](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-base-patch16-224) and [`HuggingFaceTB\u002FSmolLM2-135M`](https:\u002F\u002Fhuggingface.co\u002FHuggingFaceTB\u002FSmolLM2-135M) as backbones results in a **222M** nanoVLM. Training this for ~6h on a single H100 GPU on ~1.7M samples of [the cauldron](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FHuggingFaceM4\u002Fthe_cauldron) results in an accuracy of 35.3% on MMStar.\n\n![loss](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_nanoVLM_readme_0a3296487610.png)\n\nIt is therefore a simple yet powerful platform to get started with VLMs. Perfect to tinker around with different setups and settings, to explore the capabilities and efficiencies of small VLMs!\n\n## Quick Start\n\nYou can either clone the repository, set up an environment and start with the scripts, or directly [open in Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhuggingface\u002FnanoVLM\u002Fblob\u002Fmain\u002FnanoVLM.ipynb). You can also use the [interactive notebook](.\u002FnanoVLM.ipynb) to get started!\n\n\n## Environment Setup\n\nWe really like `uv` and recommend using it as your package manager. 
But feel free to use whichever you prefer.\n\nLet's first clone the repository:\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM.git\ncd nanoVLM\n```\n\nIf you want to use `uv`:\n```bash\nuv init --bare --python 3.12\nuv sync --python 3.12\nsource .venv\u002Fbin\u002Factivate\nuv add torch numpy torchvision pillow datasets huggingface-hub transformers wandb\n# Optional: for lmms-eval integration you have to install it from source, see section 'Evaluation with lmms-eval'\n```\n\nIf you prefer another environment manager, simply install these packages:  \n```bash\npip install torch numpy torchvision pillow datasets huggingface-hub transformers wandb\n# Optional: for lmms-eval integration you have to install it from source, see section 'Evaluation with lmms-eval'\n\n```\nDependencies: \n- `torch` \u003C3\n- `numpy` \u003C3\n- `torchvision` for the image processors\n- `pillow` for image loading\n- `datasets` for the training datasets\n- `huggingface-hub` & `transformers` to load the pretrained backbones\n- `wandb` for logging\n\n## Training\n\nTo train nanoVLM, you can simply use the provided training script. After training, your model gets uploaded to the Hub!\n```bash\nwandb login --relogin\nhuggingface-cli login\npython train.py\n```\nwhich will use the default `models\u002Fconfig.py`.\n\n## Generate\n\nTo try a [trained model](https:\u002F\u002Fhuggingface.co\u002Flusxvr\u002FnanoVLM-450M), you can simply use the provided generate script\n```bash\npython generate.py\n```\nor, to use your own trained model, you can simply run:\n```bash\npython generate.py --checkpoint \u002Fyour\u002Fpath\u002Fto\u002Ftrained_models\n```\n\nIf we feed the example image in `assets\u002Fimage.png` with a question into the model, we get the following output. Even after only short training, the model can recognize the cat in the picture. \n```\nInput: \nImage + 'What is this?'\n\nOutputs:\nGeneration 1:  This is a cat sitting on the ground. I think this is a cat sitting on the ground.\nGeneration 2:  This picture is clicked outside. In the center there is a brown color cat seems to be sitting on\nGeneration 3:  This is a cat sitting on the ground, which is of white and brown in color. This cat\nGeneration 4:  This is a cat sitting on the ground. I think this is a cat sitting on the ground.\nGeneration 5:  This is a cat sitting on the ground, which is covered with a mat. 
I think this is\n```\n\n### Evaluation with lmms-eval\n\nnanoVLM now supports evaluation using the comprehensive [lmms-eval](https:\u002F\u002Fgithub.com\u002FEvolvingLMMs-Lab\u002Flmms-eval) toolkit:\n\n```bash\n# Install lmms-eval (has to be from source)\nuv pip install git+https:\u002F\u002Fgithub.com\u002FEvolvingLMMs-Lab\u002Flmms-eval.git\n\n# Make sure you have your environment variables set correctly and you are logged in to HF\nexport HF_HOME=\"\u003CPath to HF cache>\"\nhuggingface-cli login\n\n# Evaluate a trained model on multiple benchmarks\npython evaluation.py --model lusxvr\u002FnanoVLM-450M --tasks mmstar,mme\n\n# If you want to use it during training, simply import the module and call it just as you would from the command line.\n# You can pass all the arguments you can also pass in the command line.\n# The evaluation during training works in the full DDP setup.\nfrom evaluation import cli_evaluate\nargs = argparse.Namespace(\n    model='lusxvr\u002FnanoVLM-450M', # This can be either a checkpoint path or the model itself\n    tasks='mmstar,mmmu,ocrbench',\n    batch_size=128 # Adapt this to your GPU, needs to be passed to avoid an OOM Error\n)\nresults = cli_evaluate(args)\n```\n\n## Hub integration\n\n**nanoVLM** comes with handy methods to load and save the model from the Hugging Face Hub.\n\n### Pretrained weights\n\nHere is how to load from a repo on the Hugging Face Hub. This is the recommended way to start working with the pretrained weights.\n\n```python\n# Load pretrained weights from Hub\nfrom models.vision_language_model import VisionLanguageModel\n\nmodel = VisionLanguageModel.from_pretrained(\"lusxvr\u002FnanoVLM-450M\")\n```\n\n### Push to hub\n\nOnce you've trained a **nanoVLM** model, you might want to share it on the Hugging Face Hub. You can easily do that with:\n\n```python\n... # Load and train your model\n\n# Push it to `username\u002Fmy-awesome-nanovlm-model` repo\nmodel.push_to_hub(\"my-awesome-nanovlm-model\")\n```\n\nThe model will be saved on the Hub as a config file `config.json` and a weights file `model.safetensors`. A modelcard `README.md` will also be generated for you with some high-level information. Feel free to update it manually to explain your work.\n\nIf the repo does not exist, it will be created for you. By default the repo will be public. You can pass `private=True` if you don't want to share publicly.\n\n\n### Local save\u002Fload\n\nIf you don't want to host your model on the Hugging Face Hub, it is still possible to save it locally:\n\n```python\n... # Load and train your model\n\n# Save it to a local folder\nmodel.save_pretrained(\"path\u002Fto\u002Flocal\u002Fmodel\")\n```\n\nYou can then reload it from the local path:\n\n```python\n# Load pretrained weights from local path\nfrom models.vision_language_model import VisionLanguageModel\n\nmodel = VisionLanguageModel.from_pretrained(\"path\u002Fto\u002Flocal\u002Fmodel\")\n```\n\n## VRAM Usage\n\nUnderstanding the VRAM requirements for training is crucial for selecting the right hardware and batch sizes. We've benchmarked the default `nanoVLM` model (222M parameters) on a single NVIDIA H100 GPU. 
Below is a summary of the peak VRAM usage observed for different batch sizes during training (including model, gradients, and optimizer states):\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_nanoVLM_readme_e4d981fa20f4.png\" width=\"600\" alt=\"VRAM Usage vs Batch Size\">\n\nHere's a breakdown of the approximate peak VRAM usage:\n\n```\nVRAM allocated after loading model to device: 871.44 MB\n--- Summary of VRAM Usage ---\nBatch Size 1: 4448.58 MB\nBatch Size 2: 4465.39 MB\nBatch Size 4: 4532.29 MB\nBatch Size 8: 5373.46 MB\nBatch Size 16: 7604.36 MB\nBatch Size 32: 12074.31 MB\nBatch Size 64: 20995.06 MB\nBatch Size 128: 38834.19 MB\nBatch Size 256: 74561.08 MB\nBatch Size 512: OOM (Peak before OOM: 80247.67 MB)\n```\n\nNote that the VRAM measurement was performed on a small setup using 'SmolLM2-135M' with a maximum input sequence length of 128 tokens. This may differ from the current default configuration in the project.\n\n**Key Takeaways:**\n- You'll need at least ~4.5 GB of VRAM to train the default model even with a batch size of 1.\n- With approximately 8 GB of VRAM, you should be able to train with a batch size of up to 16.\n\n**Measure for Your Setup:**\n\nThe values above are for the default model configuration. If you modify the model architecture (e.g., change backbones, hidden sizes) or use different sequence lengths, your VRAM requirements will change. \n\nWe provide a script `measure_vram.py` that allows you to test VRAM requirements on your specific machine and for your chosen model configuration and batch sizes. \n\nTo use it:\n1. Ensure you have a CUDA-enabled GPU and PyTorch installed.\n2. Run the script with your desired batch sizes. You can also specify a model checkpoint if you have one, or let it initialize a new model based on the default `VLMConfig`.\n\n```bash\n# Example: Test batch sizes 1, 2, 4, 8 with a new default model\npython measure_vram.py --batch_sizes \"1 2 4 8\"\n\n# Example: Test with a specific checkpoint and different batch sizes\npython measure_vram.py --vlm_checkpoint_path path\u002Fto\u002Fyour\u002Fmodel.pth --batch_sizes \"16 32 64\"\n\n```\n\nThis script will output the peak VRAM allocated for each batch size tested, helping you determine feasible training configurations for your hardware.\n\n\n## Contributing\n\nWe welcome contributions to nanoVLM! However, to maintain the repository's focus on simplicity and pure PyTorch, we have a few guidelines:\n\n*   **Pure PyTorch:** We aim to keep nanoVLM as a lightweight implementation in pure PyTorch. Contributions that introduce dependencies like `transformers.Trainer`, `accelerate`, or `deepspeed` will not be accepted.\n*   **New Features:** If you have an idea for a new feature, please open an issue first to discuss the scope and implementation details. This helps ensure that your contribution aligns with the project's goals.\n*   **Bug Fixes:** Feel free to submit pull requests for bug fixes.\n\n### Roadmap\n\nHere are some areas we're looking to work on in the near future. 
Contributions in these areas are particularly welcome:\n\n*   **Evaluations:** Implementing more evaluations or improving our MMStar implementation (highly valued)\n*   **Data Packing:** Implementing a way to create packs of a given size from the input data to optimize training.\n*   **Multi-gpu training:** Training on several GPUs\n*   **Multi-image support:** Training with several images\n*   **Image-splitting:** Enabling higher resolutions through image-splitting as done in SmolVLM.\n*   **VLMEvalKit:** Integration into [VLMEvalKit](https:\u002F\u002Fgithub.com\u002Fopen-compass\u002FVLMEvalKit) to enable further benchmarks\n\n## Citation\n\nIf you like the project and want to use it somewhere, please use this citation:\n```\n@misc{wiedmann2025nanovlm,\n  author = {Luis Wiedmann and Aritra Roy Gosthipaty and Andrés Marafioti},\n  title = {nanoVLM},\n  year = {2025},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM}}\n}\n```\n","# nanoVLM\n\n![nanoVLM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_nanoVLM_readme_a7eda5829901.png)\n\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhuggingface\u002FnanoVLM\u002Fblob\u002Fmain\u002FnanoVLM.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\n\u003C\u002Fa>\n\n---\n\n> [!TIP]\n> 我们撰写了一篇 [nanoVLM 教程](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fnanovlm)，将引导你了解整个仓库并帮助你快速上手。\n\n---\n\n> [!NOTE]\n> 我们在 2025 年 9 月 9 日推送了一些重大变更（breaking changes）。这些更新包括使用图像分割（image splitting）以及在多节点上训练。这些功能用于 FineVision 发布的消融实验（ablations）。代码库中一些支持脚本（例如 notebook 或内存评估）可能已经无法正常工作。同样，旧版本的 nanoVLM 训练模型也可能受到影响（类似于下方的说明）。如果你发现某些功能无法正常工作，请在 Issues 中告知我们或提交 PR！\n\n---\n\n> [!NOTE]\n> 我们在 2025 年 6 月 4 日对仓库推送了一些重大变更（breaking changes）。为了实现更智能的打包（smarter packing），我们重构了图像和文本嵌入（embeddings）的组合方式。为了保持一切尽可能顺畅，我们使用新的流程训练了一个新的 nanoVLM-450M 模型，同时保留旧的 nanoVLM-222M 与旧流程兼容。如果你现在克隆此仓库或将更新拉取到本地，默认将是新的 450M 模型。如果你希望获得更简单的理解和更简洁的代码库，可以使用 v0.1 版本。该版本与旧的 222M 模型开箱即用。\n\n---\n\nnanoVLM 是用于训练\u002F微调小型视觉语言模型（Vision-Language Model, VLM）的最简仓库，采用纯 PyTorch 轻量级实现。代码本身非常易读且易于理解，模型由视觉骨干网络（Vision Backbone，`models\u002Fvision_transformer.py` 约 150 行）、语言解码器（Language Decoder，`models\u002Flanguage_model.py` 约 250 行）、模态投影（Modality Projection，`models\u002Fmodality_projection.py` 约 50 行）、VLM 本身（`models\u002Fvision_language_model.py` 约 100 行）以及简单的训练循环（`train.py` 约 200 行）组成。\n\n类似于 Andrej Karpathy 的 nanoGPT，我们希望为社区提供一个非常简单的视觉语言模型实现和训练脚本。我们并不声称这是一个新的 SOTA（State-of-the-Art，最先进）模型，而是一项教育性工作——如果你拥有合适的硬件，它能发挥相当大的威力！你应该能够在短时间内调整和尝试代码。\n\n\n## nanoVLM 能做什么？\n\n本仓库的模型定义和训练逻辑约 750 行，外加一些样板化的日志记录和参数加载代码。\n使用 [`SigLIP-B\u002F16-224-85M`](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-base-patch16-224) 和 [`HuggingFaceTB\u002FSmolLM2-135M`](https:\u002F\u002Fhuggingface.co\u002FHuggingFaceTB\u002FSmolLM2-135M) 作为骨干网络，可得到一个 **222M** 的 nanoVLM。在单张 H100 GPU 上训练约 6 小时，使用约 170 万条 [the cauldron](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FHuggingFaceM4\u002Fthe_cauldron) 样本，可在 MMStar 上达到 35.3% 的准确率。\n\n![loss](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_nanoVLM_readme_0a3296487610.png)\n\n因此，这是一个简单但强大的视觉语言模型入门平台。非常适合尝试不同的设置和配置，探索小型视觉语言模型的能力和效率！\n\n## 快速开始\n\n你可以克隆仓库、设置环境并使用脚本开始，或者直接 [在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhuggingface\u002FnanoVLM\u002Fblob\u002Fmain\u002FnanoVLM.ipynb)。你也可以使用 [交互式 
notebook](.\u002FnanoVLM.ipynb) 来上手！\n\n\n## 环境设置\n\n我们非常喜欢 `uv`，推荐将其作为包管理器。但你可以随意使用自己喜欢的工具。\n\n首先克隆仓库：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM.git\ncd nanoVLM\n```\n\n如果你想使用 `uv`：\n```bash\nuv init --bare --python 3.12\nuv sync --python 3.12\nsource .venv\u002Fbin\u002Factivate\nuv add torch numpy torchvision pillow datasets huggingface-hub transformers wandb\n# 可选：如需 lmms-eval 集成，需要从源码安装，参见\"使用 lmms-eval 评估\"部分\n```\n\n如果你更喜欢其他环境管理器，只需安装这些包：\n```bash\npip install torch numpy torchvision pillow datasets huggingface-hub transformers wandb\n# 可选：如需 lmms-eval 集成，需要从源码安装，参见\"使用 lmms-eval 评估\"部分\n\n```\n依赖项：\n- `torch` \u003C3\n- `numpy` \u003C3\n- `torchvision` 用于图像处理器（image processors）\n- `pillow` 用于图像加载\n- `datasets` 用于训练数据集\n- `huggingface-hub` 和 `transformers` 用于加载预训练骨干网络\n- `wandb` 用于日志记录\n\n## 训练\n\n要训练 nanoVLM，你可以直接使用提供的训练脚本。训练完成后，你的模型将被上传到 Hub！\n```bash\nwandb login --relogin\nhuggingface-cli login\npython train.py\n```\n这将使用默认的 `models\u002Fconfig.py`。\n\n## 生成\n\n要尝试 [训练好的模型](https:\u002F\u002Fhuggingface.co\u002Flusxvr\u002FnanoVLM-450M)，你可以直接使用提供的生成脚本\n```bash\npython generate.py\n```\n或者，要使用你自己训练的模型，你可以直接运行：\n```bash\npython generate.py --checkpoint \u002Fyour\u002Fpath\u002Fto\u002Ftrained_models\n```\n\n如果我们将 `assets\u002Fimage.png` 中的示例图像与问题一起输入模型，我们会得到以下输出。即使经过短期训练，模型也能识别出图片中的猫。\n```\n输入：\n图像 + 'What is this?'\n\n输出：\n生成 1:  This is a cat sitting on the ground. I think this is a cat sitting on the ground.\n生成 2:  This picture is clicked outside. In the center there is a brown color cat seems to be sitting on\n生成 3:  This is a cat sitting on the ground, which is of white and brown in color. This cat\n生成 4:  This is a cat sitting on the ground. I think this is a cat sitting on the ground.\n生成 5:  This is a cat sitting on the ground, which is covered with a mat. I think this is\n```\n\n### 使用 lmms-eval 评估\n\nnanoVLM 现在支持使用全面的 [lmms-eval](https:\u002F\u002Fgithub.com\u002FEvolvingLMMs-Lab\u002Flmms-eval) 工具包进行评估：\n\n```bash\n# 安装 lmms-eval（必须从源码安装）\nuv pip install git+https:\u002F\u002Fgithub.com\u002FEvolvingLMMs-Lab\u002Flmms-eval.git\n\n# 确保环境变量设置正确，并且已登录 HF\nexport HF_HOME=\"\u003CPath to HF cache>\"\nhuggingface-cli login\n\n# 在多个基准测试上评估训练好的模型\npython evaluation.py --model lusxvr\u002FnanoVLM-450M --tasks mmstar,mme\n\n# 如果你想在训练期间使用它，只需导入模块并像命令行一样调用它。\n# 你可以传递所有能在命令行中传递的参数。\n# 训练过程中的评估在完整的 DDP（Distributed Data Parallel，分布式数据并行）设置下运行。\nfrom evaluation import cli_evaluate\nargs = argparse.Namespace(\n    model='lusxvr\u002FnanoVLM-450M', # 这可以是检查点路径或模型本身\n    tasks='mmstar,mmmu,ocrbench',\n    batch_size=128 # 根据你的 GPU 进行调整，需要传递此参数以避免 OOM（Out of Memory，内存不足）错误\n)\nresults = cli_evaluate(args)\n```\n\n## Hub 集成\n\n**nanoVLM** 提供了便捷的方法，用于从 Hugging Face Hub 加载和保存模型。\n\n### 预训练权重\n\n以下是从 Hugging Face Hub 上的仓库加载的方法。这是开始使用预训练权重的推荐方式。\n\n```python\n# 从 Hub 加载预训练权重\nfrom models.vision_language_model import VisionLanguageModel\n\nmodel = VisionLanguageModel.from_pretrained(\"lusxvr\u002FnanoVLM-450M\")\n```\n\n### 推送到 Hub\n\n当你训练好一个 **nanoVLM** 模型后，你可能希望将其分享到 Hugging Face Hub。你可以通过以下方式轻松实现：\n\n```python\n... # 加载并训练你的模型\n\n# 推送到 `username\u002Fmy-awesome-nanovlm-model` 仓库\nmodel.push_to_hub(\"my-awesome-nanovlm-model\")\n```\n\n模型将以配置文件 `config.json` 和权重文件 `model.safetensors` 的形式保存在 Hub 上。同时还会为你生成一个模型卡片 `README.md`，其中包含一些概要信息。你可以手动更新它以说明你的工作。\n\n如果仓库不存在，系统会为你自动创建。默认情况下仓库是公开的。如果你不想公开分享，可以传递 `private=True` 参数。\n\n### 本地保存\u002F加载\n\n如果你不想将模型托管在 Hugging Face Hub 上，仍然可以将其保存在本地：\n\n```python\n... 
# 加载并训练你的模型\n\n# 保存到本地文件夹\nmodel.save_pretrained(\"path\u002Fto\u002Flocal\u002Fmodel\")\n```\n\n然后你可以从本地路径重新加载：\n\n```python\n# 从本地路径加载预训练权重\nfrom models.vision_language_model import VisionLanguageModel\n\nmodel = VisionLanguageModel.from_pretrained(\"path\u002Fto\u002Flocal\u002Fmodel\")\n```\n\n## VRAM 使用量\n\n了解训练时的 VRAM（Video Random Access Memory，显存）需求对于选择合适的硬件和 batch size（批量大小）至关重要。我们在单张 NVIDIA H100 GPU 上对默认的 `nanoVLM` 模型（222M 参数）进行了基准测试。以下是训练过程中不同 batch size 下观察到的峰值 VRAM 使用量摘要（包括模型、梯度和优化器状态）：\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_nanoVLM_readme_e4d981fa20f4.png\" width=\"600\" alt=\"VRAM Usage vs Batch Size\">\n\n以下是近似峰值 VRAM 使用量的详细分解：\n\n```\n模型加载到设备后的 VRAM 分配: 871.44 MB\n--- VRAM 使用量摘要 ---\nBatch Size 1: 4448.58 MB\nBatch Size 2: 4465.39 MB\nBatch Size 4: 4532.29 MB\nBatch Size 8: 5373.46 MB\nBatch Size 16: 7604.36 MB\nBatch Size 32: 12074.31 MB\nBatch Size 64: 20995.06 MB\nBatch Size 128: 38834.19 MB\nBatch Size 256: 74561.08 MB\nBatch Size 512: OOM (OOM 前峰值: 80247.67 MB)\n```\n\n请注意，VRAM 测量是在一个小型设置上进行的，使用的是 'SmolLM2-135M'，最大输入序列长度为 128 个 token。这可能与项目中当前的默认配置有所不同。\n\n**关键要点：**\n- 即使 batch size 为 1，你也需要至少约 4.5 GB 的 VRAM 来训练默认模型。\n- 拥有约 8 GB VRAM 时，你应该能够使用最大为 16 的 batch size 进行训练。\n\n**为你的设置测量：**\n\n上述数值是针对默认模型配置的。如果你修改了模型架构（例如，更改 backbone（主干网络）、hidden size（隐藏层大小））或使用不同的序列长度，你的 VRAM 需求将会发生变化。\n\n我们提供了一个脚本 `measure_vram.py`，允许你在特定机器上测试 VRAM 需求，以及针对你选择的模型配置和 batch size。\n\n使用方法：\n1. 确保你拥有支持 CUDA 的 GPU 并安装了 PyTorch。\n2. 使用你期望的 batch size 运行脚本。你也可以指定一个模型检查点（如果有的话），或者让它基于默认的 `VLMConfig` 初始化一个新模型。\n\n```bash\n# 示例：使用新的默认模型测试 batch size 1, 2, 4, 8\npython measure_vram.py --batch_sizes \"1 2 4 8\"\n\n# 示例：使用特定检查点和不同的 batch size 进行测试\npython measure_vram.py --vlm_checkpoint_path path\u002Fto\u002Fyour\u002Fmodel.pth --batch_sizes \"16 32 64\"\n\n```\n\n该脚本将输出每个测试 batch size 的峰值 VRAM 分配量，帮助你确定适合你硬件的可行训练配置。\n\n## 贡献\n\n我们欢迎对 nanoVLM 的贡献！然而，为了保持仓库对简洁性和纯 PyTorch 的关注，我们有以下指导原则：\n\n*   **纯 PyTorch：** 我们的目标是将 nanoVLM 保持为轻量级的纯 PyTorch 实现。引入 `transformers.Trainer`、`accelerate` 或 `deepspeed` 等依赖的贡献将不被接受。\n*   **新功能：** 如果你有新功能的想法，请先开启一个 issue 讨论范围和实现细节。这有助于确保你的贡献与项目目标保持一致。\n*   **Bug 修复：** 欢迎提交 bug 修复的 pull request。\n\n### 路线图\n\n以下是我们近期计划开展的一些领域。欢迎在这些领域做出贡献：\n\n*   **评估：** 实现更多评估方法或改进我们的 MMStar 实现（高度重视）\n*   **数据打包：** 实现一种从输入数据创建给定大小 pack（数据包）的方法，以优化训练。\n*   **多 GPU 训练：** 在多个 GPU 上进行训练\n*   **多图像支持：** 使用多张图像进行训练\n*   **图像分割：** 通过图像分割实现更高分辨率，如同 SmolVLM 中所做的那样。\n*   **VLMEvalKit：** 集成到 [VLMEvalKit](https:\u002F\u002Fgithub.com\u002Fopen-compass\u002FVLMEvalKit) 以启用更多基准测试\n\n## 引用\n\n如果你喜欢这个项目并希望在某些地方使用它，请使用以下引用：\n```\n@misc{wiedmann2025nanovlm,\n  author = {Luis Wiedmann and Aritra Roy Gosthipaty and Andrés Marafioti},\n  title = {nanoVLM},\n  year = {2025},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM}}\n}\n```","# nanoVLM 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- **Python**: 3.12 推荐\n- **GPU**: NVIDIA GPU，支持 CUDA（训练至少需要 ~4.5GB VRAM）\n- **操作系统**: Linux\u002FmacOS\u002FWindows（WSL）\n\n### 前置依赖\n- CUDA 环境（用于 GPU 训练）\n- Git\n\n## 安装步骤\n\n### 1. 克隆仓库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM.git\ncd nanoVLM\n```\n\n### 2. 创建环境（推荐 uv）\n```bash\n# 安装 uv（如未安装）\ncurl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh\n\n# 初始化环境\nuv init --bare --python 3.12\nuv sync --python 3.12\nsource .venv\u002Fbin\u002Factivate\n\n# 安装依赖\nuv add torch numpy torchvision pillow datasets huggingface-hub transformers wandb\n```\n\n### 3. 
或使用 pip\n```bash\npip install torch numpy torchvision pillow datasets huggingface-hub transformers wandb\n```\n\n### 4. 登录必要服务\n```bash\n# 登录 Weights & Biases（训练日志）\nwandb login --relogin\n\n# 登录 Hugging Face（模型下载与上传）\nhuggingface-cli login\n```\n\n> **国内加速**: 使用 Hugging Face 镜像\n> ```bash\n> export HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com\n> ```\n\n## 基本使用\n\n### 快速体验（Colab）\n直接打开 [nanoVLM.ipynb](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fhuggingface\u002FnanoVLM\u002Fblob\u002Fmain\u002FnanoVLM.ipynb) 无需本地配置。\n\n### 加载预训练模型\n```python\nfrom models.vision_language_model import VisionLanguageModel\n\n# 从 Hugging Face Hub 加载\nmodel = VisionLanguageModel.from_pretrained(\"lusxvr\u002FnanoVLM-450M\")\n```\n\n### 运行推理\n```bash\n# 使用默认预训练模型\npython generate.py\n\n# 使用自己的训练模型\npython generate.py --checkpoint \u002Fpath\u002Fto\u002Fyour\u002Fmodel\n```\n\n### 训练模型\n```bash\npython train.py\n```\n- 默认配置位于 `models\u002Fconfig.py`\n- 训练完成后自动上传至 Hugging Face Hub\n\n### 模型保存与加载\n\n```python\n# 保存到本地\nmodel.save_pretrained(\"path\u002Fto\u002Flocal\u002Fmodel\")\n\n# 从本地加载\nmodel = VisionLanguageModel.from_pretrained(\"path\u002Fto\u002Flocal\u002Fmodel\")\n\n# 推送到 Hub\nmodel.push_to_hub(\"my-awesome-nanovlm-model\")\n```\n\n### 评估模型\n```bash\n# 安装 lmms-eval（需从源码安装）\nuv pip install git+https:\u002F\u002Fgithub.com\u002FEvolvingLMMs-Lab\u002Flmms-eval.git\n\n# 运行评估\npython evaluation.py --model lusxvr\u002FnanoVLM-450M --tasks mmstar,mme\n```\n\n### 检测显存需求\n```bash\n# 测试不同 batch size 的显存占用\npython measure_vram.py --batch_sizes \"1 2 4 8 16\"\n```\n\n---\n\n**参考资源**\n- 详细教程：[Hugging Face Blog - nanoVLM](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fnanovlm)\n- 预训练模型：[lusxvr\u002FnanoVLM-450M](https:\u002F\u002Fhuggingface.co\u002Flusxvr\u002FnanoVLM-450M)","某高校计算机视觉实验室的研究生小李，需要为智能仓储机器人开发一个视觉问答模块，让机器人能够理解仓库场景图像并回答\"货架第三层还剩多少箱货物\"这类问题。\n\n### 没有 nanoVLM 时\n\n- **代码黑箱，无从下手**：尝试基于 LLaVA 或 Qwen-VL 做微调，但官方仓库动辄上万行代码，Vision Encoder、Projector、LLM 的交互逻辑层层嵌套，花了两周还没理清数据流\n- **算力门槛高不可攀**：实验室只有一台 4×A6000 的服务器，主流 VLM 微调需要 8×A100 起步，不得不申请校外云计算资源，预算和审批流程拖慢进度\n- **调试如同盲人摸象**：训练 loss 异常时，由于框架封装过深，无法快速定位是图像 token 化、投影层还是注意力机制的问题，只能盲目调参\n\n### 使用 nanoVLM 后\n\n- **750 行代码全透明**：SigLIP 视觉编码器、SmolLM 语言模型、模态投影层各自独立且精简，小李半天就梳理完数据流，直接在 `models\u002Fmodality_projection.py` 里实验新的融合策略\n- **单卡 H100 六小时出模型**：222M 参数规模配合高效 packing 策略，实验室现有硬件即可训练，当天完成原型验证，无需额外申请资源\n- **问题定位精准高效**：训练不稳定时，借助纯 PyTorch 实现和清晰的模块划分，快速发现是图像 patch 数量与文本长度不匹配，调整 `image_splitting` 参数后顺利解决\n\nnanoVLM 用极简代码架构打破了 VLM 研发的门槛，让研究者把精力从\"看懂框架\"转向\"创新算法\"本身。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_nanoVLM_a7eda582.png","huggingface","Hugging Face","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhuggingface_90da21a4.png","The AI community building the future.",null,"https:\u002F\u002Fhuggingface.co\u002F","https:\u002F\u002Fgithub.com\u002Fhuggingface",[84,88],{"name":85,"color":86,"percentage":87},"Python","#3572A5",84.6,{"name":89,"color":90,"percentage":91},"Shell","#89e051",15.4,4781,479,"2026-04-04T10:09:00","Apache-2.0","Linux, macOS, Windows","需要 NVIDIA GPU（CUDA-enabled），显存最低 4.5GB（batch size=1），推荐 8GB+（batch size≤16），支持 H100 等高端显卡进行大规模训练","未说明",{"notes":100,"python":101,"dependencies":102},"1. 推荐使用 uv 作为包管理器，也可使用 pip；2. 支持 Google Colab 直接运行；3. 可选依赖 lmms-eval 需从源码安装用于模型评估；4. 需要登录 Weights & Biases 和 Hugging Face Hub 账号进行训练和模型上传；5. 提供 VRAM 测量脚本 measure_vram.py 用于自定义硬件配置测试；6. 
代码设计为纯 PyTorch 实现，不接受 transformers.Trainer、accelerate 或 deepspeed 等依赖的提交","3.12",[103,104,105,106,107,108,109,110],"torch","numpy\u003C3","torchvision","pillow","datasets","huggingface-hub","transformers","wandb",[26,14,13,54],"2026-03-27T02:49:30.150509","2026-04-06T06:46:03.549157",[115,120,125,130,135],{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},4172,"微调 SmolLM2-360M 时出现 NaN 损失如何解决？","这是精度问题，需要将混合精度训练的数据类型从 float16 改为 bfloat16。\n\n修改以下代码：\n```python\n# 从\nwith torch.autocast(device_type='cuda', dtype=torch.float16):\n# 改为\nwith torch.autocast(device_type='cuda', dtype=torch.bfloat16):\n```\n\n注意：Google Colab 的免费 T4 GPU 不支持 bfloat16。替代方案：\n1. 租用支持 bfloat16 的 GPU（如 3090）\n2. 或在 Colab 上使用 float32 训练：删除 `with torch.autocast` 行，并将 batch_size 减小到 6","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM\u002Fissues\u002F50",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},4173,"语言模型在训练中是否真的在学习？参数是否会被更新？","是的，语言模型（LM）和视觉模型（ViT）的参数都会被更新，但学习率不同：\n\n- 模态投影器（modality projector）：lr = 1e-3\n- 语言模型和视觉模型骨干网络：lr = 5e-5\n\n在训练配置中可以看到两个学习率设置：\n```python\nlr_mp: float = 1e-3        # 模态投影器学习率\nlr_backbones: float = 5e-5 # 骨干网络学习率\n```\n\n虽然 forward 方法中没有显式的 LM 目标，但在 VLM 的端到端训练中，LM 参数会通过反向传播从最终输出损失中得到梯度并更新。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM\u002Fissues\u002F134",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},4174,"如何配置超小型的 nanoVLM（如 10M 参数级别）？","可以参考以下超小配置（Vision encoder 7.8M + LLM decoder 3.6M）：\n\n```python\nvit_n_heads: int = 1\nvit_n_blocks: int = 1\nlm_hidden_dim: int = 72\nlm_inter_dim: int = 192\nlm_n_heads: int = 1\nlm_n_kv_heads: int = 1\nlm_n_blocks: int = 1\n```\n\n注意事项：\n- 极小的模型容易过拟合，特别是在固定问题、变图像的数据集上\n- 如果目标答案较长（10-50 词），建议增加正则化或数据增强\n- 相关经验分享：https:\u002F\u002Fwww.gradients.zone\u002Fblog\u002Fa-super-small-vision-language-model\u002F","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM\u002Fissues\u002F141",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},4175,"图像分割（image-splitting）功能如何实现高分辨率支持？","图像分割功能已在 #82 中实现，核心设计如下：\n\n1. **在 collator 中处理分割逻辑**：将图像分割逻辑从 dataset 类移到 collator，支持批量处理\n2. **图像处理器接受图像列表**：添加 `do_image_splitting` 参数，在 resize 后（如 max_edge=224）批量分割图像为 patches\n3. **包含原图作为参考**：分割后的 batch 包含原始图像用于全局信息\n4. **在 collator 中扩展 prompt**：参考 smolVLM 的方式，使用 rows\u002Fcols 和 image tokens 扩展 prompt，插入 `\u003Cimage>`、`\u003Cfake>`、`\u003Cglobal>` 等特殊 token\n5. **dataset 级别的 prompt 需要包含占位符**：确保 collator 知道在哪里注入 image token\n\n实现要点：确保 dataset prompt 包含图像占位符，collator 根据实际 patches 数量替换为相应数量的 image token。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM\u002Fissues\u002F62",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},4176,"如何保持代码简洁，避免项目变得复杂冗余？","项目维护者已通过发布版本的方式来平衡功能迭代和代码简洁性：\n\n- **v0.1 版本**：预 DDP（分布式数据并行）的基础版本，保持最简框架\n- **v0.2 版本**：包含更多功能（如 DDP、图像分割等）\n\n建议：\n- 基础学习使用 v0.1 版本\n- 生产\u002F高级功能使用最新版本\n- 自定义功能建议用户自行维护，保持核心仓库简洁\n\n版本发布地址：https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM\u002Freleases","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FnanoVLM\u002Fissues\u002F57",[141,146],{"id":142,"version":143,"summary_zh":144,"released_at":145},103583,"v0.2","We changed the underlying handling of how image and text tokens are combined and trained a new nanoVLM-450M with this.","2025-06-04T10:17:05",{"id":147,"version":148,"summary_zh":149,"released_at":150},103584,"v0.1","This is the first release of nanoVLM. Since we will keep adding new features to the repo, this is the place you can go if you want a simpler version.","2025-05-20T09:46:34"]
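The FAQ entry on whether the language model actually learns states that nanoVLM trains the modality projector with a higher learning rate (`lr_mp = 1e-3`) than the vision and language backbones (`lr_backbones = 5e-5`), and that the backbones still receive gradients from the final output loss. Below is a minimal sketch of how such a two-group setup is typically expressed with plain PyTorch parameter groups; the module attributes `vision_encoder`, `decoder`, and `MP` are hypothetical placeholders (the real names in `models/vision_language_model.py` may differ), and AdamW is an assumption rather than the optimizer the repository necessarily uses.

```python
# Sketch only: two parameter groups with different learning rates, as described in the FAQ.
# `vision_encoder`, `decoder`, and `MP` are placeholder attribute names; AdamW is an assumption.
import torch


def build_optimizer(model, lr_mp: float = 1e-3, lr_backbones: float = 5e-5):
    backbone_params = list(model.vision_encoder.parameters()) + list(model.decoder.parameters())
    projector_params = list(model.MP.parameters())
    return torch.optim.AdamW(
        [
            {"params": projector_params, "lr": lr_mp},        # modality projector: trained faster
            {"params": backbone_params, "lr": lr_backbones},  # pretrained backbones: updated gently
        ]
    )
```

Both groups receive gradients through the single language-modelling loss during end-to-end training, which is why the backbones are updated even though `forward` has no separate LM objective.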