[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-deepseek-ai--DeepSeek-VL2":3,"tool-deepseek-ai--DeepSeek-VL2":64},[4,17,27,35,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[13,14,15,43],"视频",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":23,"last_commit_at":50,"category_tags":51,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 
Its core strength is broad model compatibility, with native support for mainstream models such as Claude, DeepSeek, GPT-4, and Gemini Pro, all switchable in a single interface. It was also an early adopter of the MCP (Model Context Protocol), strengthening context handling. For enterprises, a professional edition adds brand customization, fine-grained permissions, internal knowledge-base integration, and security auditing to meet strict data-privacy and management requirements. *Tags: dev framework, language model.*

### ML-For-Beginners (microsoft/ML-For-Beginners, ★ 84,991)

Microsoft's systematic introductory machine-learning curriculum, built so that complete beginners can master classic ML. The course lays out a 12-week path with 26 lessons and 52 quizzes, covering the full arc from fundamentals to practical application, and so solves the beginner's problem of facing a vast body of knowledge with no structured guidance.

Developers looking to change tracks, researchers who need to fill in algorithmic background, and curious hobbyists all benefit. Lessons pair clear theory with hands-on practice so learners build solid skills step by step. A standout feature is its multilingual support: an automated pipeline provides editions in more than 50 languages, including Simplified Chinese, greatly lowering the barrier for learners worldwide. The project is open source and community-driven, with active, continuously updated content; if you want a clear, friendly, professional entry into machine learning, it is an ideal starting point. *Tags: image, data tools, video, plugin, agent, other, language model, dev framework, audio.*

## DeepSeek-VL2 (deepseek-ai/DeepSeek-VL2)

**DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding**

DeepSeek-VL2 is DeepSeek's series of advanced Mixture-of-Experts (MoE) vision-language models, built to substantially improve machine understanding of combined image-and-text content. As the successor to DeepSeek-VL, it handles visual question answering fluently and excels at optical character recognition (OCR), complex document and table parsing, chart analysis, and visual grounding, addressing the accuracy and latency shortfalls of traditional models on complex multimodal scenes.

The series ships in three variants (Tiny, Small, and the standard model) with 1.0B, 2.8B, and 4.5B activated parameters respectively. Its core strength is the MoE architecture, which keeps compute cost low while matching or beating much larger dense models, putting it at the leading edge of open-source work.

DeepSeek-VL2 suits AI researchers exploring the multimodal frontier and developers integrating it into intelligent customer service, document automation, or educational tools; thanks to efficient inference, ordinary users can also try high-quality image-text interaction through the online demo. Whether you are an engineer optimizing existing algorithms or a team seeking an efficient multimodal solution, it is a strong candidate.

![DeepSeek AI](images/logo.svg)

[Homepage](https://www.deepseek.com/) | [🤖 Chat](https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small) | [🤗 Hugging Face](https://huggingface.co/deepseek-ai) | [Discord](https://discord.gg/Tc7c45Zzu5) | [WeChat](images/qr.jpeg) | [Twitter](https://twitter.com/deepseek_ai)

[Code License: MIT](LICENSE-CODE) | [Model License: Model Agreement](LICENSE-MODEL)

[📥 Model Download](#3-model-download) | [⚡ Quick Start](#4-quick-start) | [📜 License](#5-license) | [📖 Citation](#6-citation) | [📄 Paper](./DeepSeek_VL2_paper.pdf) | [📄 arXiv](https://arxiv.org/abs/2412.10302) | [👁️ Demo](https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small)
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da\" \u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"images\u002Fqr.jpeg\" target=\"_blank\">\n    \u003Cimg alt=\"Wechat\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white\" \u002F>\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Fdeepseek_ai\" target=\"_blank\">\n    \u003Cimg alt=\"Twitter Follow\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTwitter-deepseek_ai-white?logo=x&logoColor=white\" \u002F>\n  \u003C\u002Fa>\n\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n\n  \u003Ca href=\"LICENSE-CODE\">\n    \u003Cimg alt=\"Code License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode_License-MIT-f5de53?&color=f5de53\">\n  \u003C\u002Fa>\n  \u003Ca href=\"LICENSE-MODEL\">\n    \u003Cimg alt=\"Model License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FModel_License-Model_Agreement-f5de53?&color=f5de53\">\n  \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-VL2\u002Ftree\u002Fmain?tab=readme-ov-file#3-model-download\">\u003Cb>📥 Model Download\u003C\u002Fb>\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-VL2\u002Ftree\u002Fmain?tab=readme-ov-file#4-quick-start\">\u003Cb>⚡ Quick Start\u003C\u002Fb>\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-VL2\u002Ftree\u002Fmain?tab=readme-ov-file#5-license\">\u003Cb>📜 License\u003C\u002Fb>\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-VL2\u002Ftree\u002Fmain?tab=readme-ov-file#6-citation\">\u003Cb>📖 Citation\u003C\u002Fb>\u003C\u002Fa> \u003Cbr>\n  \u003Ca href=\".\u002FDeepSeek_VL2_paper.pdf\">\u003Cb>📄 Paper Link\u003C\u002Fb>\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10302\">\u003Cb>📄 Arxiv Paper Link\u003C\u002Fb>\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fdeepseek-ai\u002Fdeepseek-vl2-small\">\u003Cb>👁️ Demo\u003C\u002Fb>\u003C\u002Fa>\n\u003C\u002Fp>\n\n## 1. Introduction\n\nIntroducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical character recognition,  document\u002Ftable\u002Fchart understanding, and visual grounding. 
## 2. Release

✅ <b>2025-02-06</b>: Naive Gradio demo implemented on Hugging Face Space: [deepseek-vl2-small](https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small).

✅ <b>2024-12-25</b>: Gradio demo example, incremental prefilling, and VLMEvalKit support.

✅ <b>2024-12-13</b>: DeepSeek-VL2 family released, including <code>DeepSeek-VL2-tiny</code>, <code>DeepSeek-VL2-small</code>, <code>DeepSeek-VL2</code>.

## 3. Model Download

We release the DeepSeek-VL2 family, including <code>DeepSeek-VL2-tiny</code>, <code>DeepSeek-VL2-small</code>, and <code>DeepSeek-VL2</code>, to support a broader and more diverse range of research within both academic and commercial communities. Please note that the use of these models is subject to the terms outlined in the [License section](#5-license).

### Hugging Face

| Model              | Sequence Length | Download |
|--------------------|-----------------|----------|
| DeepSeek-VL2-tiny  | 4096            | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/deepseek-vl2-tiny) |
| DeepSeek-VL2-small | 4096            | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/deepseek-vl2-small) |
| DeepSeek-VL2       | 4096            | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/deepseek-vl2) |
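If you prefer to fetch the weights ahead of time rather than on first use, `huggingface_hub` can mirror a model repo locally. A minimal sketch; the target directory is an arbitrary choice, and the resulting path can be passed as `model_path` in the examples below:

```python
from huggingface_hub import snapshot_download

# Download the tiny variant's weights into a local directory
# (path is illustrative; any writable location works).
local_dir = snapshot_download(
    repo_id="deepseek-ai/deepseek-vl2-tiny",
    local_dir="./checkpoints/deepseek-vl2-tiny",
)
print(f"Model files downloaded to: {local_dir}")
```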
## 4. Quick Start

### Installation

Starting from a `Python >= 3.8` environment, install the necessary dependencies by running:

```shell
pip install -e .
```

### Simple Inference Example with One Image

**Note: You may need 80GB GPU memory to run this script with deepseek-vl2-small, and even more for deepseek-vl2.**

```python
import torch
from transformers import AutoModelForCausalLM

from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
from deepseek_vl2.utils.io import load_pil_images


# specify the path to the model
model_path = "deepseek-ai/deepseek-vl2-tiny"
vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

## single image conversation example
## Please note that <|ref|> and <|/ref|> are designed specifically for the object localization feature. These special tokens are not required for normal conversations.
## If you would like to experience the grounded captioning functionality (responses that include both object localization and reasoning), add the special token <|grounding|> at the beginning of the prompt. Examples can be found in Figure 9 of our paper.
conversation = [
    {
        "role": "<|User|>",
        "content": "<image>\n<|ref|>The giraffe at the back.<|/ref|>.",
        "images": ["./images/visual_grounding_1.jpeg"],
    },
    {"role": "<|Assistant|>", "content": ""},
]

# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True,
    system_prompt=""
).to(vl_gpt.device)

# run image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# run the model to get the response
outputs = vl_gpt.language.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True
)

answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=False)
print(f"{prepare_inputs['sft_format'][0]}", answer)
```

And the output is something like:
```
<|User|>: <image>
<|ref|>The giraffe at the back.<|/ref|>.

<|Assistant|>: <|ref|>The giraffe at the back.<|/ref|><|det|>[[580, 270, 999, 900]]<|/det|><｜end▁of▁sentence｜>
```
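The `<|det|>` span encodes the box as `[[x1, y1, x2, y2]]`. Judging by the 999 values in the sample output and the repo's `parse_ref_bbox` serve utility (linked further down), coordinates appear to be normalized to a 0-999 grid; a minimal parsing sketch under that assumption, with `parse_det_boxes` as a hypothetical helper name:

```python
import re

def parse_det_boxes(answer, width, height):
    """Extract <|det|>[[x1, y1, x2, y2], ...]<|/det|> spans from a decoded
    answer and rescale them from the assumed 0-999 grid to pixel coordinates."""
    boxes = []
    for span in re.findall(r"<\|det\|>(.*?)<\|/det\|>", answer):
        for coords in re.findall(r"\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\]", span):
            x1, y1, x2, y2 = (int(c) for c in coords)
            boxes.append((x1 * width // 999, y1 * height // 999,
                          x2 * width // 999, y2 * height // 999))
    return boxes

# e.g. parse_det_boxes(answer, *pil_images[0].size)
# -> [(x1, y1, x2, y2)] in pixels for the giraffe box above
```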
### Simple Inference Example with Multiple Images

**Note: You may need 80GB GPU memory to run this script with deepseek-vl2-small, and even more for deepseek-vl2.**

```python
import torch
from transformers import AutoModelForCausalLM

from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
from deepseek_vl2.utils.io import load_pil_images


# specify the path to the model
model_path = "deepseek-ai/deepseek-vl2-tiny"
vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

# multiple images / interleaved image-text
conversation = [
    {
        "role": "<|User|>",
        "content": "This is image_1: <image>\n"
                   "This is image_2: <image>\n"
                   "This is image_3: <image>\n Can you tell me what are in the images?",
        "images": [
            "images/multi_image_1.jpeg",
            "images/multi_image_2.jpeg",
            "images/multi_image_3.jpeg",
        ],
    },
    {"role": "<|Assistant|>", "content": ""}
]

# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True,
    system_prompt=""
).to(vl_gpt.device)

# run image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# run the model to get the response
outputs = vl_gpt.language.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True
)

answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=False)
print(f"{prepare_inputs['sft_format'][0]}", answer)
```

And the output is something like:
```
<|User|>: This is image_1: <image>
This is image_2: <image>
This is image_3: <image>
 Can you tell me what are in the images?

<|Assistant|>: The images show three different types of vegetables. Image_1 features carrots, which are orange with green tops. Image_2 displays corn cobs, which are yellow with green husks. Image_3 contains raw pork ribs, which are pinkish-red with some marbling.<｜end▁of▁sentence｜>
```
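Each `<image>` placeholder in `content` pairs positionally with one entry in `images`. A small sanity check can catch mismatches before they surface as shape errors downstream; this is an illustrative helper, not part of the library:

```python
def check_image_placeholders(conversation):
    """Assert every <image> tag in a message has a matching file entry.
    Illustrative helper; the processor expects a 1:1 positional pairing."""
    for message in conversation:
        tags = message.get("content", "").count("<image>")
        files = len(message.get("images", []))
        assert tags == files, (
            f"{message['role']}: {tags} <image> tags but {files} image paths"
        )

check_image_placeholders(conversation)  # raises AssertionError on mismatch
```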
### Simple Inference Example with Incremental Prefilling

**Note: We use incremental prefilling to run inference within 40GB GPU memory with deepseek-vl2-small.**

```python
import torch
from transformers import AutoModelForCausalLM

from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
from deepseek_vl2.utils.io import load_pil_images


# specify the path to the model
model_path = "deepseek-ai/deepseek-vl2-small"
vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

# multiple images / interleaved image-text
conversation = [
    {
        "role": "<|User|>",
        "content": "This is image_1: <image>\n"
                   "This is image_2: <image>\n"
                   "This is image_3: <image>\n Can you tell me what are in the images?",
        "images": [
            "images/multi_image_1.jpeg",
            "images/multi_image_2.jpeg",
            "images/multi_image_3.jpeg",
        ],
    },
    {"role": "<|Assistant|>", "content": ""}
]

# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True,
    system_prompt=""
).to(vl_gpt.device)

with torch.no_grad():
    # run image encoder to get the image embeddings
    inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

    # incremental prefilling when using a 40G GPU for vl2-small
    inputs_embeds, past_key_values = vl_gpt.incremental_prefilling(
        input_ids=prepare_inputs.input_ids,
        images=prepare_inputs.images,
        images_seq_mask=prepare_inputs.images_seq_mask,
        images_spatial_crop=prepare_inputs.images_spatial_crop,
        attention_mask=prepare_inputs.attention_mask,
        chunk_size=512  # prefilling chunk size
    )

    # run the model to get the response
    outputs = vl_gpt.generate(
        inputs_embeds=inputs_embeds,
        input_ids=prepare_inputs.input_ids,
        images=prepare_inputs.images,
        images_seq_mask=prepare_inputs.images_seq_mask,
        images_spatial_crop=prepare_inputs.images_spatial_crop,
        attention_mask=prepare_inputs.attention_mask,
        past_key_values=past_key_values,

        pad_token_id=tokenizer.eos_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=512,

        do_sample=False,
        use_cache=True,
    )

    answer = tokenizer.decode(outputs[0][len(prepare_inputs.input_ids[0]):].cpu().tolist(), skip_special_tokens=False)

print(f"{prepare_inputs['sft_format'][0]}", answer)
```

And the output is something like:
```
<|User|>: This is image_1: <image>
This is image_2: <image>
This is image_3: <image>
 Can you tell me what are in the images?

<|Assistant|>: The first image contains carrots. The second image contains corn. The third image contains meat.<｜end▁of▁sentence｜>
```
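To confirm that chunked prefilling actually keeps a run under the 40GB budget, you can read back PyTorch's allocator statistics after generation. A diagnostic sketch; the threshold is the README's figure, not a hard limit:

```python
import torch

# Peak GPU memory handed out by the CUDA caching allocator during this run.
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak allocated: {peak_gib:.1f} GiB")

# Reset before a run to measure a single inference in isolation.
torch.cuda.reset_peak_memory_stats()
```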
To parse the bounding-box coordinates, refer to [parse_ref_bbox](https://github.com/deepseek-ai/DeepSeek-VL2/blob/main/deepseek_vl2/serve/app_modules/utils.py#L270-L298).


### Full Inference Example
```shell
# without incremental prefilling
CUDA_VISIBLE_DEVICES=0 python inference.py --model_path "deepseek-ai/deepseek-vl2"

# with incremental prefilling, when using a 40G GPU for vl2-small
CUDA_VISIBLE_DEVICES=0 python inference.py --model_path "deepseek-ai/deepseek-vl2-small" --chunk_size 512
```


### Gradio Demo

* Install the necessary dependencies:
```shell
pip install -e .[gradio]
```

* Then run the following command:

```shell
# vl2-tiny, 3.37B-MoE in total, activated 1.0B; can run on a single GPU < 40GB
CUDA_VISIBLE_DEVICES=2 python web_demo.py \
--model_name "deepseek-ai/deepseek-vl2-tiny"  \
--port 37914


# vl2-small, 16.1B-MoE in total, activated 2.4B
# On an A100 40GB GPU, set `--chunk_size 512` to enable incremental prefilling and save memory; it may be slow.
# On a GPU with more than 40GB, omit `--chunk_size 512` for faster responses.
CUDA_VISIBLE_DEVICES=2 python web_demo.py \
--model_name "deepseek-ai/deepseek-vl2-small"  \
--port 37914 \
--chunk_size 512

# vl2, 27.5B-MoE in total, activated 4.2B
CUDA_VISIBLE_DEVICES=2 python web_demo.py \
--model_name "deepseek-ai/deepseek-vl2"  \
--port 37914
```

* **Important**: This is a basic, native demo implementation without any deployment optimizations, which may result in slower performance. For production environments, consider optimized deployment solutions such as vLLM, SGLang, or LMDeploy; these help achieve faster response times and better cost efficiency.

## 5. License

This code repository is licensed under the [MIT License](./LICENSE-CODE). The use of DeepSeek-VL2 models is subject to the [DeepSeek Model License](./LICENSE-MODEL). The DeepSeek-VL2 series supports commercial use.

## 6. Citation

```
@misc{wu2024deepseekvl2mixtureofexpertsvisionlanguagemodels,
      title={DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding},
      author={Zhiyu Wu and Xiaokang Chen and Zizheng Pan and Xingchao Liu and Wen Liu and Damai Dai and Huazuo Gao and Yiyang Ma and Chengyue Wu and Bingxuan Wang and Zhenda Xie and Yu Wu and Kai Hu and Jiawei Wang and Yaofeng Sun and Yukun Li and Yishi Piao and Kang Guan and Aixin Liu and Xin Xie and Yuxiang You and Kai Dong and Xingkai Yu and Haowei Zhang and Liang Zhao and Yisong Wang and Chong Ruan},
      year={2024},
      eprint={2412.10302},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.10302},
}
```
## 7. Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
## Quickstart Guide

DeepSeek-VL2 is an advanced Mixture-of-Experts (MoE) vision-language model series with three variants (Tiny, Small, and the standard model) that perform strongly on visual question answering, OCR, document understanding, and visual grounding.

### 1. Environment Preparation

Before starting, make sure your development environment meets the following requirements:

*   **Operating system**: Linux (recommended) or macOS
*   **Python version**: >= 3.8
*   **GPU memory requirements**:
    *   `deepseek-vl2-tiny`: runs with relatively little memory.
    *   `deepseek-vl2-small`: about **80GB** for regular inference; with **incremental prefilling** it can run on **40GB**.
    *   `deepseek-vl2`: needs even more memory.
*   **Dependencies**: PyTorch, Transformers, etc. (resolved automatically by the install command).

> **Tip**: Developers in mainland China may want a domestic mirror to speed up package downloads, e.g. append `-i https://pypi.tuna.tsinghua.edu.cn/simple` to pip commands.

### 2. Installation

Clone the repository and install the dependencies:

```shell
git clone https://github.com/deepseek-ai/DeepSeek-VL2.git
cd DeepSeek-VL2
pip install -e .
```

To install via the Tsinghua mirror:
```shell
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
```
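A quick way to confirm the GPU setup before loading any weights; the bf16 check matters because the examples cast the model to `torch.bfloat16`:

```python
import torch
import transformers

print("torch:", torch.__version__, "| transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # The examples cast the model to bfloat16; older GPUs may lack support.
    print("bf16 supported:", torch.cuda.is_bf16_supported())
```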
### 3. Basic Usage

The single-image conversation example (visual grounding with the `<|ref|>` and `<|/ref|>` tokens) and the multi-image conversation example are identical to those in the Quick Start section of the README above; a grounded-captioning variation is sketched below.
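Per the README's note on grounded captioning, prepending `<|grounding|>` to the prompt asks the model to include object localization alongside its reasoning. A sketch of such a conversation; the image path and question are illustrative:

```python
# Grounded captioning: <|grounding|> at the start of the prompt asks the model
# to mix object boxes into its answer (see Figure 9 of the paper).
conversation = [
    {
        "role": "<|User|>",
        "content": "<|grounding|><image>\nDescribe what the animals are doing.",
        "images": ["./images/visual_grounding_1.jpeg"],
    },
    {"role": "<|Assistant|>", "content": ""},
]
```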
### 4. Low-Memory Optimization (Incremental Prefilling)

If you are running `deepseek-vl2-small` with limited GPU memory (e.g. 40GB), use the `incremental_prefilling` method for inference. The core fragment, aligned with the full README example above:

```python
# ... (as in the examples above: load the model, build the conversation, prepare inputs)

with torch.no_grad():
    # run image encoder to get the image embeddings
    inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

    # incremental prefilling when using a 40G GPU for vl2-small
    inputs_embeds, past_key_values = vl_gpt.incremental_prefilling(
        input_ids=prepare_inputs.input_ids,
        images=prepare_inputs.images,
        images_seq_mask=prepare_inputs.images_seq_mask,
        images_spatial_crop=prepare_inputs.images_spatial_crop,
        attention_mask=prepare_inputs.attention_mask,
        chunk_size=512  # adjust the chunk size to your memory budget
    )

    # generate from the prefilled key/value states
    outputs = vl_gpt.generate(
        inputs_embeds=inputs_embeds,
        input_ids=prepare_inputs.input_ids,
        images=prepare_inputs.images,
        images_seq_mask=prepare_inputs.images_seq_mask,
        images_spatial_crop=prepare_inputs.images_spatial_crop,
        attention_mask=prepare_inputs.attention_mask,
        past_key_values=past_key_values,
        pad_token_id=tokenizer.eos_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=512,
        do_sample=False,
        use_cache=True,
    )

# ... (decode the output as in the examples above)
```
> **Note**: For the complete logic, refer to the full example in the README section above. This method significantly reduces peak GPU memory usage.

## Use Case

An e-commerce operations team processes thousands of supplier quotation sheets every day: images full of complex tables, handwritten notes, and product charts. The team needs to extract key data quickly to update its inventory system.

### Without DeepSeek-VL2
- **Poor recognition of complex charts**: traditional OCR tools cannot understand the nested tables and trend charts in quotation sheets, so large amounts of data must be keyed in by hand, which is very inefficient.
- **Handwriting fails completely**: handwritten discount notes or special terms that suppliers add at the edges of an image are simply ignored or garbled by existing models.
- **Weak multimodal association**: it is hard to connect visual elements (say, the heights of bars in a chart) with the surrounding text, so composite questions like "which product has the highest margin?" go unanswered.
- **High deployment cost**: reaching acceptable accuracy typically means calling several large closed-source APIs or deploying huge dense models, with slow inference and high compute bills.

### With DeepSeek-VL2
- **Accurate document-structure parsing**: with its advanced Mixture-of-Experts (MoE) architecture, DeepSeek-VL2 faithfully recovers complex table layouts in quotation sheets and emits structured row/column data automatically.
- **Seamless handwriting recognition**: strong optical character recognition reads and understands handwritten notes in images, so key details like promotion terms are not missed.
- **Deep visual reasoning**: the model doesn't just "see" charts, it "understands" them, answering comparative questions grounded in chart data to support decisions.
- **Efficient, low-cost deployment**: thanks to sparse activation, DeepSeek-VL2 reaches SOTA-level performance while activating only a small fraction of parameters (e.g. the 2.8B variant), sharply cutting inference latency and server cost.

Through its efficient Mixture-of-Experts architecture, DeepSeek-VL2 turns unstructured, image-heavy paperwork into immediately usable business insight, a jump from "manually shuttling data" to "understanding and deciding intelligently".

## Project Metadata

- **Owner**: DeepSeek (deepseek-ai), https://www.deepseek.com/, service@deepseek.com
- **Languages**: Python 93.5%, CSS 3.5%, JavaScript 1.9%, Makefile 1.2%
- **Stars / forks**: 5,260 / 1,818; **Last commit**: 2026-04-04
- **License**: MIT (code), DeepSeek Model License (weights)
- **OS**: unspecified
- **GPU**: NVIDIA GPU required. deepseek-vl2-small needs 80GB of memory (or 40GB with incremental prefilling); deepseek-vl2 needs more than 80GB. The code samples use torch.bfloat16 and CUDA.
- **Python**: 3.8+; **Dependencies**: torch, transformers, PIL (pillow)
- **Tags**: language model, image, other
- **Notes**: 1. The model has three variants: Tiny (1.0B), Small (2.8B), and the full version with 4.5B activated parameters. 2. Load the small model (deepseek-vl2-tiny) by default for testing; the larger models have very high memory requirements. 3. For memory-constrained setups (e.g. running the small model on a 40GB card), the project provides the "incremental prefilling" technique to reduce memory usage. 4. Model weights are downloaded from Hugging Face; the code must set trust_remote_code=True.

## FAQ

### Inference errors out or fails to generate a result. How do I fix it?

This is usually caused by an incompatible transformers version. If you would rather not downgrade, modify the `prepare_inputs_for_generation` function in `deepseek_vl2/models/modeling_deepseek.py`:
1. Add a new line `cache_length = 0` after the function's parameters.
2. Make sure the condition `if inputs_embeds is not None and (past_key_values is None or cache_length == 0):` incorporates the newly added variable.
This fix has been verified on transformers==4.46.3. (Source: https://github.com/deepseek-ai/DeepSeek-VL2/issues/4)
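A sketch of what that edit looks like in place. Only the two quoted changes come from the issue; the function signature and surrounding lines are assumptions about `modeling_deepseek.py`:

```python
# Hypothetical shape of the patch; surrounding logic is elided.
def prepare_inputs_for_generation(self, input_ids, past_key_values=None,
                                  inputs_embeds=None, **kwargs):
    cache_length = 0  # change 1: newly added line
    ...
    # change 2: the condition now short-circuits when nothing is cached yet
    if inputs_embeds is not None and (past_key_values is None or cache_length == 0):
        model_inputs = {"inputs_embeds": inputs_embeds}
    ...
```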
### How do I run the larger DeepSeek-VL2 models (16B or 27B) across multiple GPUs?

The maintainers provide a minimal model-sharding example. You configure a `device_map` by hand to assign layers to the different GPUs:

```python
def split_model(model_name):
    device_map = {}
    model_splits = {
        'deepseek-ai/deepseek-vl2-small': [13, 14],  # 16B model on 2 GPUs
        'deepseek-ai/deepseek-vl2': [10, 10, 10],    # 27B model on 3 GPUs
    }
    num_layers_per_gpu = model_splits[model_name]
    layer_cnt = 0
    for i, num_layer in enumerate(num_layers_per_gpu):
        for j in range(num_layer):
            device_map[f'language.model.layers.{layer_cnt}'] = i
            layer_cnt += 1
    # pin the vision-related modules to the first GPU
    device_map['vision'] = 0
    device_map['projector'] = 0
    device_map['image_newline'] = 0
    device_map['view_seperator'] = 0
    return device_map
```

Adjust the layer allocation in `model_splits` to match the number of GPUs you actually have. (Source: https://github.com/deepseek-ai/DeepSeek-VL2/issues/8)

### Accuracy on the RefCOCO benchmark is extremely low, or the bounding-box format is wrong. What should I do?

The default inference parameters do not suit this task. Set the following explicitly when calling the generation function: `temperature=0.4`, `top_p=0.9`, `repetition_penalty=1.1`. Also make sure the inference code you use matches `inference.py` in the repository; the sample code in the README may omit details. (Source: https://github.com/deepseek-ai/DeepSeek-VL2/issues/123)

### web_demo.py throws a CUDA out-of-memory (OOM) error on multiple GPUs and seems to use only one card?

When `web_demo.py` still reports OOM with multiple cards specified via `CUDA_VISIBLE_DEVICES`, the default loading strategy usually has not enabled model parallelism. Refer to the fix in PR #91 or a community-confirmed configuration: make sure the multi-GPU arguments are passed correctly in the launch script, and check that you are on a recent code version that supports automatic sharding. If the problem persists, specify a `device_map` manually or reduce the per-card load. (Source: https://github.com/deepseek-ai/DeepSeek-VL2/issues/86)
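Tying the multi-GPU FAQs together, the manual map from `split_model` is passed at load time through the standard `device_map` argument of `from_pretrained`. A sketch, assuming the layer counts above match your checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM

model_name = "deepseek-ai/deepseek-vl2"
# split_model is the helper from the multi-GPU FAQ above.
vl_gpt = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map=split_model(model_name),  # layers spread across 3 GPUs
).eval()
```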