[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-unum-cloud--UForm":3,"tool-unum-cloud--UForm":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 
全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":32,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":117,"github_topics":119,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":140,"updated_at":141,"faqs":142,"releases":173},9593,"unum-cloud\u002FUForm","UForm","Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️","UForm 是一款轻量级的多模态人工智能库，专为高效理解与生成跨语言文本、图像及未来视频内容而设计。它旨在解决传统多模态模型体积庞大、推理速度慢且难以在移动端部署的痛点，让开发者能在资源受限的设备上也能轻松运行高性能 AI。\n\n无论是需要构建快速检索系统的后端工程师，还是希望集成图像描述、视觉问答功能的移动应用开发者，UForm 都是理想选择。其核心亮点在于提供了“套娃式”微小嵌入模型，支持从 64 到 768 维度的灵活调整，搜索速度极快；同时拥有参数量小但功能强大的生成模型，不仅能进行多轮对话，还能快速完成图片配文。\n\n相比 OpenAI CLIP 和 LLaVA 等主流方案，UForm 的推理速度提升了 2 至 5 倍，且原生支持 ONNX、CoreML 和 PyTorch 等多种格式，可无缝部署于服务器、浏览器甚至智能手机。此外，它还具备量化感知能力，能在几乎不损失精度的前提下将数据压缩至整数格式，并支持超过 20 种语言的均衡处理。凭借小巧灵活的架构，UForm 让多模态 AI 真正变得触手可及。","\u003Ch1 align=\"center\">UForm\u003C\u002Fh1>\n\u003Ch3 align=\"center\">\nPocket-Sized Multimodal AI\u003Cbr\u002F>\nFor Content Understanding and Generation\u003Cbr\u002F>\n\u003C\u002Fh3>\n\u003Cbr\u002F>\n\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FjsMURnSFM2\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Fdiscord.svg\" alt=\"Discord\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Funum-cloud\u002F\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Flinkedin.svg\" alt=\"LinkedIn\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Funum_cloud\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Ftwitter.svg\" alt=\"Twitter\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Funum.cloud\u002Fpost\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Fblog.svg\" 
alt=\"Blog\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Fgithub.svg\" alt=\"GitHub\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\nMultimodal Embeddings from 64 to 768 Dimensions • 1B Parameter Chat\n\u003Cbr\u002F>\nShort Texts • Images • 🔜 Video Clips • 🔜 Long Documents\n\u003Cbr\u002F>\nONNX • CoreML • PyTorch\n\u003Cbr\u002F>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fpython\u002FREADME.md\">Python\u003C\u002Fa>\n • \n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fjavascript\u002FREADME.md\">JavaScript\u003C\u002Fa>\n • \n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fswift\u002FREADME.md\">Swift\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n![UForm Chat Preview](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Funum-cloud_UForm_readme_93ded15f1432.jpg)\n\nWelcome to UForm, a __multimodal__ AI library that's as versatile as it is efficient.\nUForm [tiny embedding models](#encoder) will help you understand and search visual and textual content across various languages.\nUForm [small generative models](#decoder), on the other hand, don't only support conversational and chat use-cases, but are great for fast image captioning and Visual Question Answering (VQA).\nWith compact __custom pre-trained transformer models__, this can run anywhere from your server farm down to your smartphone.\n\n## Features\n\n- __Tiny Embeddings__: 64-dimensional [Matryoshka][matryoshka]-style embeddings for extremely fast [search][usearch].\n- __Throughput__: Thanks to the small size, the inference speed is [2-4x faster](#speed) than competitors.\n- __Portable__: Models come with native ONNX support, making them easy to deploy on any platform.\n- __Quantization Aware__: Down-cast embeddings from `f32` to `i8` without losing much recall.\n- __Multilingual__: Trained on a balanced dataset, the recall is great across over 20 languages.\n\n[usearch]: https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fusearch\n[matryoshka]: https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13147\n\n## Models\n\nFor accuracy and speed benchmarks refer to the [evaluation page](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002FBENCHMARKS.md).\n\n### Embedding Models\n\n\u003Ctable style=\"width:100%; border-collapse:collapse;\">\n    \u003Cthead>\n        \u003Ctr>\n            \u003Cth>Model\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">Parameters\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">Languages\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">Architecture\u003C\u002Fth>\n        \u003C\u002Ftr>\n    \u003C\u002Fthead>\n    \u003Ctbody>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-english-large\u002F\">uform3-image-text-english-large\u003C\u002Fa>\u003C\u002Fcode>  🆕\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">365 M\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">12 layer BERT, ViT-L\u002F14\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca 
href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-english\u002F\">uform3-image-text-english-base\u003C\u002Fa>\u003C\u002Fcode>\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">143 M\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">4 layer BERT, ViT-B\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-english-small\u002F\">uform3-image-text-english-small\u003C\u002Fa>\u003C\u002Fcode>  🆕\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">79 M\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">4 layer BERT, ViT-S\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-multilingual-v2\u002F\">uform3-image-text-multilingual-base\u003C\u002Fa>\u003C\u002Fcode>\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">206M\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">21\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">12 layer BERT, ViT-B\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n    \u003C\u002Ftbody>\n\u003C\u002Ftable>\n\n### Generative Models\n\n\u003Ctable style=\"width:100%; border-collapse:collapse;\">\n    \u003Cthead>\n        \u003Ctr>\n            \u003Cth>Model\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">Parameters\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">Purpose\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">Architecture\u003C\u002Fth>\n        \u003C\u002Ftr>\n    \u003C\u002Fthead>\n    \u003Ctbody>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen2-dpo\u002F\">uform-gen2-dpo\u003C\u002Fa>\u003C\u002Fcode>  🆕\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1.2 B\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">Chat, Image Captioning, VQA\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">qwen1.5-0.5B, ViT-H\u002F14\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen2-qwen-500m\u002F\">uform-gen2-qwen-500m\u003C\u002Fa>\u003C\u002Fcode>\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1.2 B\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">Chat, Image Captioning, VQA\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">qwen1.5-0.5B, ViT-H\u002F14\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen\u002F\">uform-gen\u003C\u002Fa>\u003C\u002Fcode> ⚠️\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1.5 B\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">Image Captioning, VQA\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">llama-1.3B, ViT-B\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n    \u003C\u002Ftbody>\n\u003C\u002Ftable>\n\n## Quick Start Examples\n\n### Embedding Models\n\nFirst, `pip install uform`.\nThen, load the model:\n\n```py\nfrom uform import 
get_model, Modality\n\n# Defaults to `dtype='bfloat16'` for ~2x speedup with minimal accuracy loss\nprocessors, models = get_model('unum-cloud\u002Fuform3-image-text-english-small', device='cuda')\n\nmodel_text = models[Modality.TEXT_ENCODER]\nmodel_image = models[Modality.IMAGE_ENCODER]\nprocessor_text = processors[Modality.TEXT_ENCODER]\nprocessor_image = processors[Modality.IMAGE_ENCODER]\n```\n\nEmbed images:\n\n```py\nimport requests\nfrom io import BytesIO\nfrom PIL import Image\n\nimage_url = 'https:\u002F\u002Fmedia-cdn.tripadvisor.com\u002Fmedia\u002Fphoto-s\u002F1b\u002F28\u002F6b\u002F53\u002Flovely-armenia.jpg'\nimage = Image.open(BytesIO(requests.get(image_url).content))\nimage_data = processor_image(image)\nimage_features, image_embedding = model_image.encode(image_data, return_features=True)\n```\n\nEmbed queries:\n\n```py\ntext = 'a cityscape bathed in the warm glow of the sun, with varied architecture and a towering, snow-capped mountain rising majestically in the background'\ntext_data = processor_text(text)\ntext_features, text_embedding = model_text.encode(text_data, return_features=True)\n```\n\nFor more details check out:\n\n- Python docs on embedding models in [python\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fpython\u002FREADME.md#embedding-models)\n- JavaScript docs on embedding models in [javascript\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fjavascript\u002FREADME.md#embedding-models)\n- Swift docs on embedding models in [swift\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fswift\u002FREADME.md#embedding-models)\n\n### Generative Models\n\nThe generative models are natively compatible with Hugging Face `transformers`:\n\n```python\nfrom transformers import AutoModel, AutoProcessor\nfrom PIL import Image\nimport torch\n\nmodel = AutoModel.from_pretrained('unum-cloud\u002Fuform-gen2-dpo', trust_remote_code=True)\nprocessor = AutoProcessor.from_pretrained('unum-cloud\u002Fuform-gen2-dpo', trust_remote_code=True)\n\nprompt = 'Question or Instruction'\nimage = Image.open('image.jpg')\n\ninputs = processor(text=[prompt], images=[image], return_tensors='pt')\n\nwith torch.inference_mode():\n    output = model.generate(\n        **inputs,\n        do_sample=False,\n        use_cache=True,\n        max_new_tokens=256,\n        eos_token_id=151645,\n        pad_token_id=processor.tokenizer.pad_token_id\n    )\nprompt_len = inputs['input_ids'].shape[1]\ndecoded_text = processor.batch_decode(output[:, prompt_len:])[0]\n```\n\nFor more details check out:\n\n- Python docs on generative models in [python\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fpython\u002FREADME.md#generative-models)\n- JavaScript docs on generative models 🔜\n- Swift docs on generative models 🔜\n\n## Technical Details\n\n### Down-casting, Quantization, Matryoshka, and Slicing\n\nDepending on the application, the embeddings can be down-cast to smaller numeric representations without losing much recall.\nSwitching from `f32` to `f16` is recommended in almost all cases, unless you are running on very old hardware without half-precision support.\nSwitching to `i8` with linear scaling is also possible, but will be noticeable in the recall on larger collections with millions of searchable entries.\nSimilarly, for higher-dimensional embeddings (512 or 768), a common strategy is to quantize them into single-bit representations for faster search.\n\n```python\nimport 
numpy as np\n\nf32_embedding: np.ndarray = model.encode_text(text_data, return_features=False)\nf16_embedding: np.ndarray = f32_embedding.astype(np.float16)\ni8_embedding: np.ndarray = (f32_embedding * 127).astype(np.int8)\nb1_embedding: np.ndarray = np.packbits((f32_embedding > 0).astype(np.uint8))\n```\n\nAn alternative approach to quantization is to use Matryoshka embeddings, where the embeddings are sliced into smaller parts, and the search is performed in a hierarchical manner.\n\n```python\nimport numpy as np\n\nlarge_embedding: np.ndarray = model.encode_text(text_data, return_features=False)\nsmall_embedding: np.ndarray = large_embedding[:, :256]\ntiny_embedding: np.ndarray = large_embedding[:, :64]\n```\n\nBoth approaches are natively supported by the [USearch][github-usearch] vector-search engine and the [SimSIMD][github-simsimd] numerics library.\nWhen dealing with small collections (up to millions of entries) and looking for low-latency cosine distance calculations, you can [achieve a 5x-2500x performance improvement][report-simsimd] over Torch, NumPy, SciPy, and vanilla Python using SimSIMD.\n\n```python\nfrom simsimd import cosine, hamming\n\ndistance: float = cosine(f32_embedding, f32_embedding) # 32x SciPy performance on Apple M2 CPU\ndistance: float = cosine(f16_embedding, f16_embedding) # 79x SciPy performance on Apple M2 CPU\ndistance: float = cosine(i8_embedding, i8_embedding) # 133x SciPy performance on Apple M2 CPU\ndistance: float = hamming(b1_embedding, b1_embedding) # 17x SciPy performance on Apple M2 CPU\n```\n\nSimilarly, when dealing with large collections (up to billions of entries per server) and looking for high-throughput search, you can [achieve a 100x performance improvement][report-usearch] over FAISS and other vector-search solutions using USearch.\nHere are a couple of examples:\n\n```python\nfrom usearch.index import Index\n\nf32_index = Index(ndim=64, metric='cos', dtype='f32') # for Matryoshka embeddings\nf16_index = Index(ndim=64, metric='cos', dtype='f16') # for Matryoshka embeddings\ni8_index = Index(ndim=256, metric='cos', dtype='i8') # for quantized embeddings\nb1_index = Index(ndim=768, metric='hamming', dtype='b1') # for binary embeddings\n```
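\n\nOnce an index is built, ingestion and lookup follow the usual USearch pattern. The sketch below is an illustrative addition rather than part of the upstream README: it assumes the `add` and `search` methods of the USearch Python API, and uses a random stand-in vector in place of a real UForm embedding.\n\n```python\nimport numpy as np\nfrom usearch.index import Index\n\n# Index over 64-dimensional Matryoshka-style prefixes, cosine metric\nindex = Index(ndim=64, metric='cos', dtype='f32')\n\n# Stand-in for a real UForm embedding sliced to its first 64 dimensions\ntiny_embedding = np.random.rand(64).astype(np.float32)\nindex.add(42, tiny_embedding)  # key 42 maps to this vector\n\n# Query the 10 nearest neighbors; the closest match should be key 42\nmatches = index.search(tiny_embedding, 10)\nprint(matches.keys[0], matches.distances[0])\n```\n\n[github-usearch]: https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fusearch\n[github-simsimd]: https:\u002F\u002Fgithub.com\u002Fashvardanian\u002Fsimsimd\n[report-usearch]: https:\u002F\u002Fwww.unum.cloud\u002Fblog\u002F2023-11-07-scaling-vector-search-with-intel\n[report-simsimd]: https:\u002F\u002Fashvardanian.com\u002Fposts\u002Fpython-c-assembly-comparison\u002F\n\n### Compact Packaging\n\nPyTorch is a heavy dependency to carry, especially if you run on Edge or IoT devices.\nUsing vanilla ONNX runtime, one can significantly reduce memory consumption and deployment latency.\n\n```sh\n$ conda create -n uform_torch python=3.10 -y\n$ conda create -n uform_onnx python=3.10 -y\n$ conda activate uform_torch && pip install -e \".[torch]\" && conda deactivate\n$ conda activate uform_onnx && pip install -e \".[onnx]\" && conda deactivate\n$ du -sh $(conda info --envs | grep 'uform_torch' | awk '{print $2}')\n> 5.2G    ~\u002Fconda\u002Fenvs\u002Fuform_torch\n$ du -sh $(conda info --envs | grep 'uform_onnx' | awk '{print $2}')\n> 461M    ~\u002Fconda\u002Fenvs\u002Fuform_onnx\n```\n\nMost of that weight can be further reduced down to 100 MB for both the model and the runtime.\nYou can pick one of many supported [ONNX execution providers][onnx-providers], which includes XNNPACK, CUDA and TensorRT for Nvidia 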
GPUs, OpenVINO on Intel, DirectML on Windows, ROCm on AMD, CoreML on Apple devices, and more to come.\n\n[onnx-providers]: https:\u002F\u002Fonnxruntime.ai\u002Fdocs\u002Fexecution-providers\u002F\n\n### Multimodal Chat in CLI\n\nThe generative models can be used for chat-like experiences in the command line.\nFor that, you can use the `uform-chat` CLI tool, which is available in the UForm package.\n\n```bash\n$ pip install uform\n$ uform-chat --model unum-cloud\u002Fuform-gen2-dpo --image=zebra.jpg\n$ uform-chat --model unum-cloud\u002Fuform-gen2-dpo \\\n>     --image=\"https:\u002F\u002Fbit.ly\u002F3tIVg9M\" \\\n>     --device=\"cuda:0\" \\\n>     --fp16\n```\n","\u003Ch1 align=\"center\">UForm\u003C\u002Fh1>\n\u003Ch3 align=\"center\">\n袖珍型多模态人工智能\u003Cbr\u002F>\n用于内容理解和生成\u003Cbr\u002F>\n\u003C\u002Fh3>\n\u003Cbr\u002F>\n\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FjsMURnSFM2\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Fdiscord.svg\" alt=\"Discord\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Funum-cloud\u002F\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Flinkedin.svg\" alt=\"LinkedIn\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Funum_cloud\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Ftwitter.svg\" alt=\"Twitter\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Funum.cloud\u002Fpost\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Fblog.svg\" alt=\"Blog\">\u003C\u002Fa>\n&nbsp; &nbsp; &nbsp;\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\">\u003Cimg height=\"25\" src=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002F.github\u002Fraw\u002Fmain\u002Fassets\u002Fgithub.svg\" alt=\"GitHub\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n多模态嵌入，维度从64到768 • 10亿参数的聊天模型\n\u003Cbr\u002F>\n短文本 • 图片 • 🔜 视频片段 • 🔜 长文档\n\u003Cbr\u002F>\nONNX • CoreML • PyTorch\n\u003Cbr\u002F>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fpython\u002FREADME.md\">Python\u003C\u002Fa>\n • \n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fjavascript\u002FREADME.md\">JavaScript\u003C\u002Fa>\n • \n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fswift\u002FREADME.md\">Swift\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n![UForm 聊天预览](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Funum-cloud_UForm_readme_93ded15f1432.jpg)\n\n欢迎来到 UForm，一个既通用又高效的__多模态__ AI 库。UForm 的[小型嵌入模型](#encoder)可以帮助您理解并搜索多种语言的视觉和文本内容。而 UForm 的[小型生成模型](#decoder)不仅支持对话和聊天场景，还非常适合快速生成图片标题以及进行视觉问答（VQA）。借助紧凑的__自定义预训练 Transformer 模型__，它可以在从您的服务器集群到智能手机的任何设备上运行。\n\n## 特性\n\n- __小型嵌入__：64 维的 [套娃式][matryoshka] 嵌入，可用于极其快速的 [搜索][usearch]。\n- __吞吐量__：由于模型体积小，推理速度比竞争对手快 [2–4 倍](#speed)。\n- __便携性__：模型原生支持 ONNX，因此可以轻松部署在任何平台上。\n- __量化感知__：将嵌入从 `f32` 降为 `i8` 而几乎不损失召回率。\n- __多语言__：基于均衡的数据集训练，可在 20 多种语言中实现出色的召回效果。\n\n[usearch]: https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fusearch\n[matryoshka]: https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13147\n\n## 
模型\n\n有关准确性和速度的基准测试，请参阅[评估页面](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002FBENCHMARKS.md)。\n\n### 嵌入模型\n\n\u003Ctable style=\"width:100%; border-collapse:collapse;\">\n    \u003Cthead>\n        \u003Ctr>\n            \u003Cth>模型\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">参数\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">语言\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">架构\u003C\u002Fth>\n        \u003C\u002Ftr>\n    \u003C\u002Fthead>\n    \u003Ctbody>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-english-large\u002F\">uform3-image-text-english-large\u003C\u002Fa>\u003C\u002Fcode>  🆕\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">3.65 亿\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">12 层 BERT，ViT-L\u002F14\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-english\u002F\">uform3-image-text-english-base\u003C\u002Fa>\u003C\u002Fcode>\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1.43 亿\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">4 层 BERT，ViT-B\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-english-small\u002F\">uform3-image-text-english-small\u003C\u002Fa>\u003C\u002Fcode>  🆕\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">7900 万\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">1\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">4 层 BERT，ViT-S\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-vl-multilingual-v2\u002F\">uform3-image-text-multilingual-base\u003C\u002Fa>\u003C\u002Fcode>\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">2.06 亿\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">21\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">12 层 BERT，ViT-B\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n    \u003C\u002Ftbody>\n\u003C\u002Ftable>\n\n### 生成模型\n\n\u003Ctable style=\"width:100%; border-collapse:collapse;\">\n    \u003Cthead>\n        \u003Ctr>\n            \u003Cth>模型\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">参数\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">用途\u003C\u002Fth>\n            \u003Cth style=\"text-align:right;\">架构\u003C\u002Fth>\n        \u003C\u002Ftr>\n    \u003C\u002Fthead>\n    \u003Ctbody>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen2-dpo\u002F\">uform-gen2-dpo\u003C\u002Fa>\u003C\u002Fcode>  🆕\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">12 亿\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">聊天、图片描述、VQA\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">qwen1.5-0.5B，ViT-H\u002F14\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca 
href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen2-qwen-500m\u002F\">uform-gen2-qwen-500m\u003C\u002Fa>\u003C\u002Fcode>\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">12 亿\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">聊天、图片描述、VQA\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">qwen1.5-0.5B，ViT-H\u002F14\u003C\u002Ftd>\n        \u003C\u002Ftr>\n        \u003Ctr>\n            \u003Ctd>\u003Ccode>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen\u002F\">uform-gen\u003C\u002Fa>\u003C\u002Fcode> ⚠️\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">15 亿\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">图片描述、VQA\u003C\u002Ftd>\n            \u003Ctd style=\"text-align:right;\">llama-1.3B，ViT-B\u002F16\u003C\u002Ftd>\n        \u003C\u002Ftr>\n    \u003C\u002Ftbody>\n\u003C\u002Ftable>\n\n## 快速入门示例\n\n### 嵌入模型\n\n首先，运行 `pip install uform`。\n然后加载模型：\n\n```py\nfrom uform import get_model, Modality\n\n# 默认使用 `dtype='bfloat16'`，可在几乎不损失精度的情况下实现约2倍的加速\nprocessors, models = get_model('unum-cloud\u002Fuform3-image-text-english-small', device='cuda')\n\nmodel_text = models[Modality.TEXT_ENCODER]\nmodel_image = models[Modality.IMAGE_ENCODER]\nprocessor_text = processors[Modality.TEXT_ENCODER]\nprocessor_image = processors[Modality.IMAGE_ENCODER]\n```\n\n嵌入图像：\n\n```py\nimport requests\nfrom io import BytesIO\nfrom PIL import Image\n\nimage_url = 'https:\u002F\u002Fmedia-cdn.tripadvisor.com\u002Fmedia\u002Fphoto-s\u002F1b\u002F28\u002F6b\u002F53\u002Flovely-armenia.jpg'\nimage = Image.open(BytesIO(requests.get(image_url).content))\nimage_data = processor_image(image)\nimage_features, image_embedding = model_image.encode(image_data, return_features=True)\n```\n\n嵌入查询：\n\n```py\ntext = '沐浴在温暖阳光下的城市景观，拥有多种建筑风格，背景中巍峨耸立着一座白雪皑皑的高山'\ntext_data = processor_text(text)\ntext_features, text_embedding = model_text.encode(text_data, return_features=True)\n```\n\n更多详情请参阅：\n\n- Python 文档中的嵌入模型部分，位于 [python\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fpython\u002FREADME.md#embedding-models)\n- JavaScript 文档中的嵌入模型部分，位于 [javascript\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fjavascript\u002FREADME.md#embedding-models)\n- Swift 文档中的嵌入模型部分，位于 [swift\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fswift\u002FREADME.md#embedding-models)\n\n### 生成式模型\n\n生成式模型原生兼容以下代码：\n\n```python\nfrom transformers import AutoModel, AutoProcessor\n\nmodel = AutoModel.from_pretrained('unum-cloud\u002Fuform-gen2-dpo', trust_remote_code=True)\nprocessor = AutoProcessor.from_pretrained('unum-cloud\u002Fuform-gen2-dpo', trust_remote_code=True)\n\nprompt = '问题或指令'\nimage = Image.open('image.jpg')\n\ninputs = processor(text=[prompt], images=[image], return_tensors='pt')\n\nwith torch.inference_mode():\n     output = model.generate(\n        **inputs,\n        do_sample=False,\n        use_cache=True,\n        max_new_tokens=256,\n        eos_token_id=151645,\n        pad_token_id=processor.tokenizer.pad_token_id\n    )\nprompt_len = inputs['input_ids'].shape[1]\ndecoded_text = processor.batch_decode(output[:, prompt_len:])[0]\n```\n\n更多详情请参阅：\n\n- Python 文档中的生成式模型部分，位于 [python\u002FREADME.md](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fblob\u002Fmain\u002Fpython\u002FREADME.md#generative-models)\n- JavaScript 文档中的生成式模型部分 🔜\n- Swift 文档中的生成式模型部分 🔜\n\n## 
技术细节\n\n### 降精度、量化、套娃与切片\n\n根据具体应用场景，嵌入向量可以在不显著降低召回率的前提下，降为更小的数值表示形式。在几乎所有情况下，都建议从 `f32` 切换到 `f16`，除非您使用的硬件非常老旧且不支持半精度运算。此外，也可以采用线性缩放的方式将数据转换为 `i8` 格式，不过对于包含数百万条可搜索记录的大规模数据集而言，这种做法可能会对召回率产生明显影响。同样地，对于高维嵌入（如512或768维），一种常见的策略是将其量化为单比特表示，以提升检索速度。\n\n```python\nimport numpy as np\n\nf32_embedding: np.ndarray = model.encode_text(text_data, return_features=False)\nf16_embedding: np.ndarray = f32_embedding.astype(np.float16)\ni8_embedding: np.ndarray = (f32_embedding * 127).astype(np.int8)\nb1_embedding: np.ndarray = np.packbits((f32_embedding > 0).astype(np.uint8))\n```\n\n另一种量化方法是使用“套娃”嵌入，即将嵌入向量切分为多个较小的部分，并以层次化的方式进行检索。\n\n```python\nimport numpy as np\n\nlarge_embedding: np.ndarray = model.encode_text(text_data, return_features=False)\nsmall_embedding: np.ndarray = large_embedding[:, :256]\ntiny_embedding: np.ndarray = large_embedding[:, :64]\n```\n\n这两种方法均得到 [USearch][github-usearch] 向量搜索引擎和 [SimSIMD][github-simsimd] 数值计算库的原生支持。当处理小型数据集（最多数百万条记录）并需要低延迟的余弦距离计算时，借助 SimSIMD 库，您可以实现比 PyTorch、NumPy、SciPy 和原生 Python 快 5 至 2500 倍的性能提升[report-simsimd]。\n\n```python\nfrom simsimd import cosine, hamming\n\ndistance: float = cosine(f32_embedding, f32_embedding) # 在 Apple M2 CPU 上性能是 SciPy 的 32 倍\ndistance: float = cosine(f16_embedding, f16_embedding) # 在 Apple M2 CPU 上性能是 SciPy 的 79 倍\ndistance: float = cosine(i8_embedding, i8_embedding) # 在 Apple M2 CPU 上性能是 SciPy 的 133 倍\ndistance: float = hamming(b1_embedding, b1_embedding) # 在 Apple M2 CPU 上性能是 SciPy 的 17 倍\n```\n\n类似地，当处理大型数据集（每台服务器可达数十亿条记录）并追求高吞吐量检索时，使用 USearch 可以实现比 FAISS 和其他向量搜索引擎快 100 倍的性能提升[report-usearch]。以下是几个示例：\n\n```python\nfrom usearch.index import Index\n\nf32_index = Index(ndim=64, metric='cos', dtype='f32') # 用于套娃嵌入\nf16_index = Index(ndim=64, metric='cos', dtype='f16') # 用于套娃嵌入\ni8_index = Index(ndim=256, metric='cos', dtype='i8') # 用于量化后的嵌入\nb1_index = Index(ndim=768, metric='hamming', dtype='b1') # 用于二进制嵌入\n```\n\n[github-usearch]: https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fusearch\n[github-simsimd]: https:\u002F\u002Fgithub.com\u002Fashvardanian\u002Fsimsimd\n[report-usearch]: https:\u002F\u002Fwww.unum.cloud\u002Fblog\u002F2023-11-07-scaling-vector-search-with-intel\n[report-simsimd]: https:\u002F\u002Fashvardanian.com\u002Fposts\u002Fpython-c-assembly-comparison\u002F\n\n### 紧凑封装\n\nPyTorch 是一个较为庞大的依赖项，尤其是在边缘设备或物联网设备上运行时。通过使用原生 ONNX 运行时，可以显著降低内存占用和部署延迟。\n\n```sh\n$ conda create -n uform_torch python=3.10 -y\n$ conda create -n uform_onnx python=3.10 -y\n$ conda activate uform_torch && pip install -e \".[torch]\" && conda deactivate\n$ conda activate uform_onnx && pip install -e \".[onnx]\" && conda deactivate\n$ du -sh $(conda info --envs | grep 'uform_torch' | awk '{print $2}')\n> 5.2G    ~\u002Fconda\u002Fenvs\u002Fuform_torch\n$ du -sh $(conda info --envs | grep 'uform_onnx' | awk '{print $2}')\n> 461M    ~\u002Fconda\u002Fenvs\u002Fuform_onnx\n```\n\n其中大部分体积还可以进一步压缩至仅 100 MB，包括模型和运行时。您可以选择众多受支持的 [ONNX 执行提供者][onnx-providers]，其中包括适用于 Nvidia GPU 的 XNNPACK、CUDA 和 TensorRT，适用于 Intel 的 OpenVINO，适用于 Windows 的 DirectML，适用于 AMD 的 ROCm，适用于 Apple 设备的 CoreML，以及更多即将支持的选项。\n\n[onnx-providers]: https:\u002F\u002Fonnxruntime.ai\u002Fdocs\u002Fexecution-providers\u002F\n\n### 命令行中的多模态聊天\n\n生成式模型可以在命令行中用于类聊天的交互体验。\n为此，您可以使用 UForm 包中提供的 `uform-chat` 命令行工具。\n\n```bash\n$ pip install uform\n$ uform-chat --model unum-cloud\u002Fuform-gen2-dpo --image=zebra.jpg\n$ uform-chat --model unum-cloud\u002Fuform-gen2-dpo \\\n>     --image=\"https:\u002F\u002Fbit.ly\u002F3tIVg9M\" \\\n>     --device=\"cuda:0\" \\\n>     --fp16\n```","# UForm 
快速上手指南\n\nUForm 是一款轻量级多模态 AI 库，专为内容理解与生成设计。它提供从 64 维到 768 维的紧凑嵌入模型（Embedding）以及参数量约 1B 的生成式模型（Generative），支持文本、图像的多语言处理，并原生兼容 ONNX、CoreML 和 PyTorch，可轻松部署于服务器至移动端。\n\n## 环境准备\n\n*   **系统要求**：支持 Linux、macOS 和 Windows。若需 GPU 加速，请确保已安装对应的 CUDA 驱动。\n*   **前置依赖**：\n    *   Python 3.8+\n    *   PyTorch (用于加载和运行模型)\n    *   Pillow (用于图像处理)\n    *   Requests (用于示例中获取网络图片)\n*   **硬件建议**：虽然模型小巧可在 CPU 运行，但推荐使用 NVIDIA GPU 以获得最佳推理速度。\n\n## 安装步骤\n\n使用 pip 安装核心库及依赖：\n\n```bash\npip install uform torch pillow requests\n```\n\n> **提示**：国内开发者如遇下载缓慢，可指定清华或阿里镜像源加速安装：\n> ```bash\n> pip install uform torch pillow requests -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\n### 1. 嵌入模型 (Embedding Models)\n适用于跨模态搜索、以图搜图或文本检索。以下示例展示如何加载小型多模态模型并分别编码图像与文本。\n\n```py\nfrom uform import get_model, Modality\nimport requests\nfrom io import BytesIO\nfrom PIL import Image\n\n# 加载模型 (默认使用 bfloat16 以提升速度)\n# device='cuda' 可启用 GPU 加速，若无 GPU 可改为 'cpu'\nprocessors, models = get_model('unum-cloud\u002Fuform3-image-text-english-small', device='cuda')\n\nmodel_text = models[Modality.TEXT_ENCODER]\nmodel_image = models[Modality.IMAGE_ENCODER]\nprocessor_text = processors[Modality.TEXT_ENCODER]\nprocessor_image = processors[Modality.IMAGE_ENCODER]\n\n# 编码图像\nimage_url = 'https:\u002F\u002Fmedia-cdn.tripadvisor.com\u002Fmedia\u002Fphoto-s\u002F1b\u002F28\u002F6b\u002F53\u002Flovely-armenia.jpg'\nimage = Image.open(BytesIO(requests.get(image_url).content))\nimage_data = processor_image(image)\nimage_features, image_embedding = model_image.encode(image_data, return_features=True)\n\n# 编码文本查询\ntext = 'a cityscape bathed in the warm glow of the sun, with varied architecture and a towering, snow-capped mountain rising majestically in the background'\ntext_data = processor_text(text)\ntext_features, text_embedding = model_text.encode(text_data, return_features=True)\n\n# 此时 image_embedding 和 text_embedding 可用于计算相似度进行跨模态检索（见下方补充示例）\n```
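\n\n补充：跨模态相似度计算（推测性示例，非官方文档内容）。上例中的 image_embedding 与 text_embedding 即图文的向量表示；以下代码假设 encode 返回 PyTorch 张量（PyTorch 后端的常见情形），先转为 NumPy 数组，再计算余弦相似度，值越接近 1 表示图文越匹配。\n\n```py\nimport numpy as np\n\n# 将张量移回 CPU 并展平为一维向量（若本身已是 NumPy 数组，可跳过转换）\nimg_vec = image_embedding.detach().cpu().numpy().flatten()\ntxt_vec = text_embedding.detach().cpu().numpy().flatten()\n\n# 余弦相似度：点积除以两个向量范数的乘积\ncos_sim = float(np.dot(img_vec, txt_vec) \u002F (np.linalg.norm(img_vec) * np.linalg.norm(txt_vec)))\nprint(f'cosine similarity: {cos_sim:.4f}')\n```\n\n### 2. 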
生成式模型 (Generative Models)\n适用于视觉问答 (VQA)、图像描述生成及多模态对话。UForm 的生成模型原生兼容 Hugging Face `transformers` 接口。\n\n```python\nfrom transformers import AutoModel, AutoProcessor\nfrom PIL import Image\nimport torch\n\n# 加载生成式模型\nmodel = AutoModel.from_pretrained('unum-cloud\u002Fuform-gen2-dpo', trust_remote_code=True)\nprocessor = AutoProcessor.from_pretrained('unum-cloud\u002Fuform-gen2-dpo', trust_remote_code=True)\n\n# 准备输入\nprompt = 'What is in this image?'\nimage = Image.open('image.jpg') # 请替换为本地图片路径\n\ninputs = processor(text=[prompt], images=[image], return_tensors='pt')\n\n# 执行生成\nwith torch.inference_mode():\n     output = model.generate(\n        **inputs,\n        do_sample=False,\n        use_cache=True,\n        max_new_tokens=256,\n        eos_token_id=151645,\n        pad_token_id=processor.tokenizer.pad_token_id\n    )\n\n# 解码输出\nprompt_len = inputs['input_ids'].shape[1]\ndecoded_text = processor.batch_decode(output[:, prompt_len:])[0]\nprint(decoded_text)\n```\n\n### 进阶技巧：量化与降维\n为了极致性能，UForm 支持将嵌入向量从 `f32` 降级为 `f16`、`i8` 甚至二值化 (`b1`)，或使用 Matryoshka 风格切片（如截取前 64 维），配合 USearch 或 SimSIMD 库可实现数倍至数千倍的搜索加速。\n\n```python\nimport numpy as np\n\n# 假设已获得 f32 格式的嵌入向量\nf32_embedding = model.encode_text(text_data, return_features=False)\n\n# 方案 A: 数据类型转换 (量化)\nf16_embedding = f32_embedding.astype(np.float16)\ni8_embedding = (f32_embedding * 127).astype(np.int8)\n\n# 方案 B: Matryoshka 切片 (降维)\ntiny_embedding = f32_embedding[:, :64]  # 仅保留前 64 维\n```","一家跨境电商公司的技术团队正在构建一个支持全球 20 多种语言的移动端图片搜索功能，让用户能直接拍照查找商品。\n\n### 没有 UForm 时\n- **响应延迟高**：依赖大型多模态模型（如 CLIP 或 LLaVA），在用户手机端推理速度慢，导致搜索结果需等待数秒，严重影响购物体验。\n- **部署成本昂贵**：为了维持可接受的响应速度，必须将计算压力转移到云端 GPU 集群，服务器运维和流量成本居高不下。\n- **多语言支持弱**：现有模型主要针对英语优化，处理德语、日语等小语种商品描述时，图文匹配准确率大幅下降。\n- **带宽消耗大**：模型参数量巨大，每次更新或下发模型都需要消耗大量移动数据流量，阻碍了离线功能的实现。\n\n### 使用 UForm 后\n- **毫秒级响应**：利用 UForm 的轻量级架构，推理速度比竞品快 2-5 倍，用户在手机本地即可实现“拍照即搜”的流畅体验。\n- **端侧低成本部署**：凭借 ONNX 原生支持和极小的模型体积，UForm 可直接运行在普通智能手机上，大幅削减了云端算力开支。\n- **全球化精准匹配**：基于平衡数据集训练的多语言能力，确保在超过 20 种语言环境下，图文检索的召回率依然保持高水平。\n- **极致轻量化**：支持从 64 维开始的“套娃式”嵌入和量化技术，显著降低了存储占用和内存消耗，让离线搜索成为可能。\n\nUForm 通过将高性能多模态 AI 压缩至口袋尺寸，成功帮助企业在移动端实现了低成本、低延迟且覆盖全球的智能视觉搜索。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Funum-cloud_UForm_93ded15f.jpg","unum-cloud","Unum","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Funum-cloud_dead21fe.png","The IO Layer For AI",null,"info@unum.cloud","unum_cloud","https:\u002F\u002Funum.cloud","https:\u002F\u002Fgithub.com\u002Funum-cloud",[83,87,91,95],{"name":84,"color":85,"percentage":86},"Python","#3572A5",58.2,{"name":88,"color":89,"percentage":90},"Swift","#F05138",17.1,{"name":92,"color":93,"percentage":94},"JavaScript","#f1e05a",12.9,{"name":96,"color":97,"percentage":98},"Jupyter Notebook","#DA5B0B",11.9,1231,79,"2026-04-17T19:25:38","Apache-2.0","未说明","可选。支持 CUDA 加速（示例代码中 device='cuda'），也支持 CPU 运行。针对边缘设备或手机优化，无特定显存要求，但大模型推理建议具备现代 GPU 以利用半精度 (bfloat16\u002Ff16) 加速。","未说明。模型参数量小 (79M-1.5B)，旨在低资源环境运行，具体取决于所选模型大小。",{"notes":107,"python":103,"dependencies":108},"该工具主打轻量级和多平台部署，支持 ONNX、CoreML 和 PyTorch 后端，可运行于服务器至智能手机。默认使用 bfloat16 数据类型以获得约 2 倍速度提升。支持将嵌入向量量化为 f16、i8 甚至二值化 (b1) 以节省内存并加速搜索。生成式模型需设置 trust_remote_code=True。部分功能（如 JavaScript\u002FSwift 
的生成式模型支持）仍在开发中。",[109,110,111,112,113,114,115,116],"uform","torch","transformers","Pillow","requests","numpy","simsimd","usearch",[35,15,45,118,14],"其他",[120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139],"huggingface-transformers","language-vision","multimodal","pytorch","semantic-search","transformer","cross-attention","vector-search","bert","neural-network","pretrained-models","multi-lingual","clip","openai","openclip","contrastive-learning","representation-learning","clustering","image-search","llava","2026-03-27T02:49:30.150509","2026-04-20T04:06:14.116316",[143,148,153,158,163,168],{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},43063,"如何将 CoreML 模型转换为 FP16 格式以减小体积？","目前直接将多语言 V2 模型的文本编码器（text-encoder）转换为 FP16 会导致性能急剧下降（指标接近零），这可能是由于权重溢出问题。建议暂时保持文本编码器为 FP32，仅将图像编码器（image-encoder）转换为 FP16，这样可以在不损失性能的情况下减小部分体积。完整的 FP16 多语言模型支持仍在调查中。","https:\u002F\u002Fgithub.com\u002Funum-cloud\u002FUForm\u002Fissues\u002F50",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},43064,"运行 README 示例代码时出现 'KeyError: qwen2' 错误怎么办？","该错误通常是由于依赖版本过旧导致的。请确保通过 `pip install uform` 安装最新版本的库，这会自动触发 `pyproject.toml` 中的依赖更新。如果问题仍然存在，可能需要手动升级 transformers 库或检查 `pyproject.toml` 中的配置是否已包含对 qwen2 的支持。","https:\u002F\u002Fgithub.com\u002Funum-cloud\u002FUForm\u002Fissues\u002F69",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},43065,"导入时遇到 'No module named uform.models' 错误如何解决？","这是一个已知的导入路径问题。请将代码中的 `from uform.models import VisualEncoder` 修改为 `from uform.torch_models import VisualEncoder`。该修复已在社区贡献的 Pull Request 中被合并。","https:\u002F\u002Fgithub.com\u002Funum-cloud\u002FUForm\u002Fissues\u002F79",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},43066,"加载多语言模型时遇到 Hugging Face Transformers 库报错怎么办？","此问题已在 uform 版本 0.3.1 中修复。如果您遇到此错误，请尝试重新安装最新版本的 `transformers` 和 `uform` 库：\n`pip install --upgrade transformers uform`\n确保使用的是兼容的版本组合。","https:\u002F\u002Fgithub.com\u002Funum-cloud\u002FUForm\u002Fissues\u002F27",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},43067,"如何批量处理文本和图像以进行嵌入（Embedding）？","您可以按照以下步骤进行批量处理：\n1. 预处理文本列表：`batch_texts = model.preprocess_text(['cat', 'dog'])`\n2. 预处理图像列表并堆叠：\n```python\nimages = [Image.open('cat.jpg'), Image.open('dog.jpg')]\nbatch_images = [model.preprocess_image(img) for img in images]\nbatch_images = torch.stack(batch_images, dim=0)\n```\n3. 
使用编码函数：\n```python\ntext_embeddings = model.encode_text(batch_texts)\nimage_embeddings = model.encode_image(batch_images)\n```","https:\u002F\u002Fgithub.com\u002Funum-cloud\u002FUForm\u002Fissues\u002F7",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},43068,"无法加载 'uform-vl-multilingual-v2' 模型并报错 'Unable to load from type NoneType' 是什么原因？","v2 模型可能已不再作为默认推荐版本支持，或者其资源路径配置发生了变化。建议直接使用更新的 v3 模型，其地址为 `unum-cloud\u002Fuform3-image-text-multilingual-base`。使用 README 中的最新指令加载该 v3 模型通常可以解决此问题。","https:\u002F\u002Fgithub.com\u002Funum-cloud\u002FUForm\u002Fissues\u002F90",[174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254,259,264,269],{"id":175,"version":176,"summary_zh":177,"released_at":178},342772,"v3.1.4","发布：v3.1.4 [跳过 CI]\n### 修复\n\n- 文档：DGX-B200 基准测试 (284f9f7)\n- 改进：Torch 默认使用 `bfloat16` (0419b54)\n- 构建：为 Python 3.10+ 升级依赖项 (cef4bea)\n- 构建：使用 `uv` 升级 CI (0dff0dc)\n","2025-10-30T23:39:21",{"id":180,"version":181,"summary_zh":182,"released_at":183},342773,"v3.1.3","发布：v3.1.3 [跳过 CI]","2025-09-03T08:30:22",{"id":185,"version":186,"summary_zh":187,"released_at":188},342774,"v3.1.2","发布：v3.1.2 [跳过 CI]","2025-06-21T16:14:54",{"id":190,"version":191,"summary_zh":192,"released_at":193},342775,"v3.1.1","发布：v3.1.1 [跳过 CI]","2025-01-03T23:11:30",{"id":195,"version":196,"summary_zh":197,"released_at":198},342776,"v3.1.0","苹果芯片提供了多个能够进行高吞吐量矩阵乘法和AI推理的功能单元。这些“计算单元”包括CPU、GPU以及苹果神经网络引擎（__ANE__）。用户可能会天真地认为，任何常见的架构，比如BERT或ViT，都应该能够在这些芯片上以常见的量化形式顺利运行——例如从`f32`单精度浮点数切换到`bf16`和`f16`半精度浮点数，或者转换为`i8`和`u8`整数。然而事实并非如此。在UForm已测试过的所有后端中，针对CoreML对整个模型进行量化是最具挑战性的任务；最终，苹果成为我们唯一选择以原始精度分发模型的平台。这颇为遗憾，因为全球运行iOS系统的设备数量高达20亿台，其中绝大多数分布在UForm多模态多语言嵌入原生支持的国家和语种群体中。\n\n在Swift中使用@unum-cloud的UForm模型时，我们通常会传递`computeUnits: .all`，让苹果的调度器自行选择目标设备，并将其视为一个黑盒优化过程。不过，更好的做法是显式提供专为苹果神经网络引擎调优过的模型。因此，我们与@TheStageAI的伙伴们合作，将模型量化至能够完美映射到ANE支持的操作，同时将精度损失降至最低，从而__使模型体积缩小2至4倍__，并将__推理速度提升至5倍__：\n\n| 模型               | GPU 文本编码器 | ANE 文本编码器 | GPU 图像编码器 | ANE 图像编码器 |\n| :------------------ | ----------: | ----------: | -----------: | -----------: |\n| `english-small`     |     2.53 ms |     0.53 ms |      6.57 ms |      1.23 ms |\n| `english-base`      |     2.54 ms |     0.61 ms |     18.90 ms |      3.79 ms |\n| `english-large`     |     2.30 ms |     0.61 ms |     79.68 ms |     20.94 ms |\n| `multilingual-base` |     2.34 ms |     0.50 ms |     18.98 ms |      3.77 ms |\n\n> 测试设备：搭载iOS 18.2的Apple M4 iPad。批次大小为1，模型已预先加载至内存。原始编码器采用`f32`单精度浮点数以确保最大兼容性，主要依赖__GPU__进行计算。而量化后的编码器则混合使用`i8`、`f16`和`f32`数据类型，以实现最佳性能，且主要由苹果神经网络引擎（__ANE__）负责计算。此处报告的是延迟的中位数。\n\n---\n\n如需在Swift中使用这些模型，请参阅[unum-cloud.github.io\u002Fuform\u002Fswift\u002F](https:\u002F\u002Funum-cloud.github.io\u002Fuform\u002Fswift\u002F)上的文档，或查看[SwiftSemanticSearch](https:\u002F\u002Fgithub.com\u002Fashvardanian\u002FSwiftSemanticSearch)仓库中的集成示例，该示例结合了[USearch](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fusearch)。感谢来自[TheStage.ai](https:\u002F\u002FTheStage.ai)的@ArnoldMSU、@b1n0、@Aydarkhan以及@AndreyAgeev提供的帮助👏。","2024-12-20T12:31:23",{"id":200,"version":201,"summary_zh":202,"released_at":203},342777,"v3.0.3","发布：v3.0.3 [跳过 CI]","2024-10-01T18:33:19",{"id":205,"version":206,"summary_zh":207,"released_at":208},342778,"v3.0.2","## [3.0.2](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv3.0.1...v3.0.2) (2024-04-25)\n\n\n### 构建\n\n* 更改 NPM 包名 
([e97977e](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002Fe97977e1fd82669f47cb0972a61c2a58e0f928a4))\n\n\n\n","2024-04-25T03:40:04",{"id":210,"version":211,"summary_zh":212,"released_at":213},342779,"v3.0.1","## [3.0.1](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv3.0.0...v3.0.1) (2024-04-25)\n\n\n### 构建\n\n* 升级 CI ([83fc71a](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F83fc71a16250c0ae9a1e8bccd414aa746b6139f6))\n\n\n\n","2024-04-25T03:20:32",{"id":215,"version":216,"summary_zh":217,"released_at":218},342780,"v3.0.0","# 适用于 JavaScript、Swift 和 Python 的多模态嵌入\n\n有多少 AI 模型可以开箱即用地在设备端运行？UForm 多模态嵌入可以 🥳 \n\n| 模型                                               | 参数量 | 支持语言 |                                 架构 |\n| :-------------------------------------------------- | ---------: | --------: | -------------------------------------------: |\n| [`uform3-image-text-english-large`][model-e-l] 🆕    |       365M |         1 | 6 层文本编码器，ViT-L\u002F14，6 层多模态融合层 |\n| [`uform3-image-text-english-base`][model-e-b]         |       143M |         1 | 2 层文本编码器，ViT-B\u002F16，2 层多模态融合层 |\n| [`uform3-image-text-english-small`][model-e-s] 🆕    |        79M |         1 | 2 层文本编码器，ViT-S\u002F16，2 层多模态融合层 |\n| [`uform3-image-text-multilingual-base`][model-m] |       206M |        21 | 8 层文本编码器，ViT-B\u002F16，4 层多模态融合层 |\n\n[model-e-l]: https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform3-image-text-english-large\n[model-e-b]: https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform3-image-text-english-base\n[model-e-s]: https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform3-image-text-english-small\n[model-m]: https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform3-image-text-multilingual-base\n\n## JavaScript\n\n加载不同模态的模型和预处理组件：\n\n```js\nimport { getModel, Modality, TextProcessor, TextEncoder, ImageEncoder, ImageProcessor } from '@unum-cloud\u002Fuform';\n\nconst { configPath, modalityPaths, tokenizerPath } = await getModel({\n    modelId: 'unum-cloud\u002Fuform3-image-text-english-small',\n    modalities: [Modality.TextEncoder, Modality.ImageEncoder],\n});\n```\n\n嵌入图像：\n\n```js\nconst imageProcessor = new ImageProcessor(configPath);\nawait imageProcessor.init();\nconst processedImages = await imageProcessor.process(\"path\u002Fto\u002Fimage.png\");\n\nconst imageEncoder = new ImageEncoder(modalityPaths.image_encoder, imageProcessor);\nawait imageEncoder.init();\nconst imageOutput = await imageEncoder.encode(processedImages);\nassert(imageOutput.embeddings.dims.length === 2, \"输出应为二维\");\n```\n\n嵌入查询：\n\n```js\nconst textProcessor = new TextProcessor(configPath, tokenizerPath);\nawait textProcessor.init();\nconst processedTexts = await textProcessor.process(\"a small red panda in a zoo\");\n\nconst textEncoder = new TextEncoder(modalityPaths.text_encoder, textProcessor);\nawait textEncoder.init();\nconst textOutput = await textEncoder.encode(processedTexts);\nassert(textOutput.embeddings.dims.length === 2, \"输出应为二维\");\nawait textEncoder.dispose();\n```\n\n## Swift\n\n嵌入图像：\n\n```swift\nlet imageModel = try await ImageEncoder(modelName: \"unum-cloud\u002Fuform3-image-text-english-small\")\nlet imageURL = \"https:\u002F\u002Fgithub.com\u002Fashvardanian\u002Fashvardanian\u002Fblob\u002Fmaster\u002Fdemos\u002Fbbq-on-beach.jpg?raw=true\"\nguard let url = URL(string: imageURL),\n    let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil),\n    let cgImage = 
CGImageSourceCreateImageAtIndex(","2024-04-25T03:13:12",{"id":220,"version":221,"summary_zh":222,"released_at":223},342781,"v2.1.1","## [2.1.1](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv2.1.0...v2.1.1) (2024-04-16)\n\n\n### 修复\n\n* 在 `gen_model.py` 中导入 ViT (#80) ([21f49ba](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F21f49bab444e0761ab7fc7ed20b3c81fb7924d17)), 关闭 [#80](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fissues\u002F80)\n\n\n\n","2024-04-16T03:55:39",{"id":225,"version":226,"summary_zh":227,"released_at":228},342782,"v2.1.0","# [2.1.0](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv2.0.2...v2.1.0) (2024-04-14)\n\n\n### Add\n\n* Initial Swift support ([00bd84c](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F00bd84c59995c7b3daa0b4fa1597f77608806fdb))\n\n### Fix\n\n* Image preprocessing in Swift ([f2772d0](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002Ff2772d0d92317818c4d1c49166bc7ec3ee314f60))\n\n### Improve\n\n* Fetching nested configs ([729b9d9](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F729b9d9f73990f2689c22af593171184589a2b27))\n\n### Make\n\n* Formatting Swift code ([f6faf4c](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002Ff6faf4cd877f6034d8c66edb108bc07ec1735232))\n\n\n\n","2024-04-14T00:50:48",{"id":230,"version":231,"summary_zh":232,"released_at":233},342783,"v2.0.2","## [2.0.2](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv2.0.1...v2.0.2) (2024-03-28)\n\n\n### Make\n\n* Fix PyPi CI version with hash ([364afe6](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F364afe605540c5ca8649c6b798ce4d8e65540c7c))\n\n\n\n","2024-03-28T20:43:32",{"id":235,"version":236,"summary_zh":237,"released_at":238},342784,"v2.0.1","## [2.0.1](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv2.0.0...v2.0.1) (2024-03-28)\n\n\n### Make\n\n* PyPi upload version ([9453802](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F945380263c5a8d6bc7906afedd7728ed1ad0cfcb))\n\n\n\n","2024-03-28T20:38:22",{"id":240,"version":241,"summary_zh":242,"released_at":243},342785,"v2.0.0","![DPO Preview](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Freleases\u002Fdownload\u002Fv2.0.0\u002FUnum.UForm.Gen.jpeg)\r\n\r\nToday we are releasing a new batch of multimodal models trained with [Nebius](https:\u002F\u002Fnebius.ai) and already available on [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Funum-cloud) 🤗 \r\n\r\n1.  Matryoshka style multimodal embeddings ranging from 64 to 256 and 768 dimensions 🖼️ \r\n2. Improved multimodal chat in 1.2B parameters, tuned with Direct Preference Optimization 💬 \r\n3. ONNX backend, making PyTorch dependency optional for lightning fast deployments ⚡ ","2024-03-28T20:35:06",{"id":245,"version":246,"summary_zh":247,"released_at":248},342786,"v1.1.1","Great thanks to @lmmx, @blackforestboi, and @kapulkin for their patches to the project!\r\n\r\n---\r\n\r\n* Performance observations for M2 CPUs (#56) ([8374ef6](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F8374ef6a4c13ec6875a6a349aa5297ceee47d6d3)), closes [#56](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fissues\u002F56)\r\n* Passing labels to `text_decoder` to compute loss. 
(#65) ([f445a8b](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002Ff445a8b73faa8fd6a83b18cc547660d45eebfd5a)), closes [#65](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fissues\u002F65)\r\n* Larger batch benchmarks ([fdc8587](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002Ffdc85876162fa631715dd1643f0ffe53f92a04e2))\r\n* pre-commit config and linters (#62) ([0a3efac](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F0a3efac1f6b80295d14b2cde291b7b7f20a82284)), closes [#62](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fissues\u002F62)\r\n\r\n\r\n\r\n","2024-02-23T18:14:43",{"id":250,"version":251,"summary_zh":252,"released_at":253},342787,"v1.1.0","# [1.1.0](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv1.0.3...v1.1.0) (2024-02-15)\n\n\n### Add\n\n* gen2 model (#66) ([37c26bc](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F37c26bc7abf9d9dd83d8897a05ea8daf46cd2002)), closes [#66](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fissues\u002F66)\n\n\n\n","2024-02-15T18:08:56",{"id":255,"version":256,"summary_zh":257,"released_at":258},342788,"v1.0.3","## [1.0.3](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv1.0.2...v1.0.3) (2023-12-29)\n\n\n### Improve\n\n* basic benchmark ([042ae87](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F042ae87b4b04671c253604d7cc3a5ba73da210d5))\n\n\n\n","2023-12-29T01:45:40",{"id":260,"version":261,"summary_zh":262,"released_at":263},342789,"v1.0.2","## [1.0.2](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv1.0.1...v1.0.2) (2023-12-28)\n\n\n### Make\n\n* Deprecate Anaconda ([1ec8097](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002F1ec8097b8559b669a1e0417f5b40952d010ff53d))\n\n\n\n","2023-12-28T17:46:59",{"id":265,"version":266,"summary_zh":267,"released_at":268},342790,"v1.0.0","## UForm v1: Multimodal Chat in 1.5 Billion Parameters\r\n\r\nThe UForm family of tiny multimodal transformer models just got bigger! In addition to the existing CLIP-like embedding models, we now have a generative model useful for image captioning, visual question answering, and multimodal chats. All that is in just a billion parameters, small enough to fit even on mobile devices 🎉\r\n\r\nRepository: https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\r\nGenerative model: https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen\r\nChat model: https:\u002F\u002Fhuggingface.co\u002Funum-cloud\u002Fuform-gen-chat\r\n\r\n## Evaluation Metrics\r\n\r\n![](https:\u002F\u002Fgithub.com\u002Fashvardanian\u002Fusearch-images\u002Fblob\u002Fmain\u002Fassets\u002Fuform-gen-preview.jpg?raw=true)\r\n\r\nBeing the smallest model of its kind, `unum-cloud\u002Fuform-gen` is hard to compare to others. Next in size are the 5x larger LLaVAs and InstructBLIP, with 7 billion parameters. LLaVA performs noticeably better on VQAv2: 78.5 vs 66.5. 
On captioning, CLIPScore and RefCLIPScore are relatively close across all models.\r\n\r\n| Model                               | Size | Caption Length | CLIPScore | RefCLIPScore |\r\n| :---------------------------------- | ---: | -------------: | --------: | -----------: |\r\n| `llava-hf\u002Fllava-1.5-7b-hf`          |   7B |           Long |     0.878 |        0.529 |\r\n| `llava-hf\u002Fllava-1.5-7b-hf`          |   7B |          Short |     0.886 |        0.531 |\r\n|                                     |\r\n| `Salesforce\u002Finstructblip-vicuna-7b` |   7B |           Long |     0.902 |        0.534 |\r\n| `Salesforce\u002Finstructblip-vicuna-7b` |   7B |          Short |     0.848 |        0.523 |\r\n|                                     |\r\n| `unum-cloud\u002Fuform-gen`              | 1.5B |           Long |     0.847 |        0.523 |\r\n| `unum-cloud\u002Fuform-gen`              | 1.5B |          Short |     0.842 |        0.522 |\r\n|                                     |\r\n| `unum-cloud\u002Fuform-gen-chat`         | 1.5B |           Long |     0.860 |        0.525 |\r\n| `unum-cloud\u002Fuform-gen-chat`         | 1.5B |          Short |     0.858 |        0.525 |\r\n\r\n## Throughput\r\n\r\nOn RTX 3090, using vanilla PyTorch for inference, with `bfloat16` arithmetic and greedy decoding, one should expect the following numbers for throughput.\r\n\r\n| Model                               | Size |               Speed |   Speedup |\r\n| :---------------------------------- | ---: | ------------------: | --------: |\r\n| `llava-hf\u002Fllava-1.5-7b-hf`          |   7B |  ~ 40 tokens\u002Fsecond |           |\r\n| `Salesforce\u002Finstructblip-vicuna-7b` |   7B |  ~ 40 tokens\u002Fsecond |           |\r\n| `unum-cloud\u002Fuform-gen`              | 1.5B | ~ 140 tokens\u002Fsecond | __x 3.5__ |","2023-12-28T17:33:51",{"id":270,"version":271,"summary_zh":272,"released_at":273},342791,"v0.4.8","## [0.4.8](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcompare\u002Fv0.4.7...v0.4.8) (2023-10-13)\n\n\n### Make\n\n* pass `ANACONDA_API_TOKEN` as env. var. ([ed020d3](https:\u002F\u002Fgithub.com\u002Funum-cloud\u002Fuform\u002Fcommit\u002Fed020d3094fb9e06a4f006f08d6106b7f6d3ed45))\n\n\n\n","2023-10-13T05:07:36"]