[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-vercel--modelfusion":3,"tool-vercel--modelfusion":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160015,2,"2026-04-18T11:30:52",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":78,"owner_twitter":77,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":32,"env_os":98,"env_gpu":98,"env_ram":98,"env_deps":99,"category_tags":103,"github_topics":105,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":126,"updated_at":127,"faqs":128,"releases":158},8104,"vercel\u002Fmodelfusion","modelfusion","The TypeScript library for building AI applications.","ModelFusion 是一款专为构建 AI 应用而设计的 TypeScript 库，旨在为开发者提供统一、高效的模型集成方案。它主要解决了在开发过程中面对不同 AI 供应商时接口不一致、多模态支持分散以及生产环境稳定性难以保障等痛点。通过抽象层设计，ModelFusion 将文本生成、对象结构化输出、工具调用等常见操作标准化，让开发者无需反复适配各家厂商的 API 差异。\n\n这款工具非常适合使用 JavaScript 或 TypeScript 进行开发的软件工程师和全栈开发者，尤其是那些希望快速构建聊天机器人、智能代理或多模态应用的技术团队。其独特亮点在于强大的类型推断与验证机制，能充分利用 TypeScript 特性提升代码安全性；同时内置了可观测性框架、自动重试、限流及日志记录等功能，确保应用在生产环境中稳健运行。此外，ModelFusion 
坚持供应商中立原则，支持文本、图像、语音及嵌入等多种模型类型，且轻量无依赖，完美适配 Serverless（无服务器）环境。值得注意的是，该项目核心功能正逐步融入 Vercel AI SDK，持续为社区带来更先进的开发体验。","# ModelFusion\n\n> ### The TypeScript library for building AI applications.\n\n[![NPM Version](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fmodelfusion?color=33cd56&logo=npm)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fmodelfusion)\n[![MIT License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Flgrammel\u002Fmodelfusion)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n[![Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-modelfusion.dev-blue)](https:\u002F\u002Fmodelfusion.dev)\n[![Created by Lars Grammel](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcreated%20by-@lgrammel-4BBAAB.svg)](https:\u002F\u002Ftwitter.com\u002Flgrammel)\n\n[Introduction](#introduction) | [Quick Install](#quick-install) | [Usage](#usage-examples) | [Documentation](#documentation) | [Examples](#more-examples) | [Contributing](#contributing) | [modelfusion.dev](https:\u002F\u002Fmodelfusion.dev)\n\n## Introduction\n\n> [!IMPORTANT]\n> [ModelFusion has joined Vercel](https:\u002F\u002Fvercel.com\u002Fblog\u002Fvercel-ai-sdk-3-1-modelfusion-joins-the-team) and is being integrated into the [Vercel AI SDK](https:\u002F\u002Fsdk.vercel.ai\u002Fdocs\u002Fintroduction). We are bringing the best parts of modelfusion to the Vercel AI SDK, starting with text generation, structured object generation, and tool calls. Please check out the AI SDK for the latest developments.\n\n**ModelFusion** is an abstraction layer for integrating AI models into JavaScript and TypeScript applications, unifying the API for common operations such as **text streaming**, **object generation**, and **tool usage**. It provides features to support production environments, including observability hooks, logging, and automatic retries. 
You can use ModelFusion to build AI applications, chatbots, and agents.\n\n- **Vendor-neutral**: ModelFusion is a non-commercial open source project that is community-driven. You can use it with any supported provider.\n- **Multi-modal**: ModelFusion supports a wide range of models including text generation, image generation, vision, text-to-speech, speech-to-text, and embedding models.\n- **Type inference and validation**: ModelFusion infers TypeScript types wherever possible and validates model responses.\n- **Observability and logging**: ModelFusion provides an observer framework and logging support.\n- **Resilience and robustness**: ModelFusion ensures seamless operation through automatic retries, throttling, and error handling mechanisms.\n- **Built for production**: ModelFusion is fully tree-shakeable, can be used in serverless environments, and only uses a minimal set of dependencies.\n\n## Quick Install\n\n```sh\nnpm install modelfusion\n```\n\nOr use a starter template:\n\n- [ModelFusion terminal app starter](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion-terminal-app-starter)\n- [Next.js, Vercel AI SDK, Llama.cpp & ModelFusion starter](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion-llamacpp-nextjs-starter)\n- [Next.js, Vercel AI SDK, Ollama & ModelFusion starter](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion-ollama-nextjs-starter)\n\n## Usage Examples\n\n> [!TIP]\n> The basic examples are a great way to get started and to explore in parallel with the [documentation](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002F). 
You can find them in the [examples\u002Fbasic](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fbasic) folder.\n\nYou can provide API keys for the different [integrations](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002F) using environment variables (e.g., `OPENAI_API_KEY`) or pass them into the model constructors as options.\n\n### [Generate Text](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-text)\n\nGenerate text using a language model and a prompt. You can stream the text if it is supported by the model. You can use images for multi-modal prompting if the model supports it (e.g. with [llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp)).\nYou can use [prompt styles](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-text#prompt-styles) to use text, instruction, or chat prompts.\n\n#### generateText\n\n```ts\nimport { generateText, openai } from \"modelfusion\";\n\nconst text = await generateText({\n  model: openai.CompletionTextGenerator({ model: \"gpt-3.5-turbo-instruct\" }),\n  prompt: \"Write a short story about a robot learning to love:\\n\\n\",\n});\n```\n\nProviders: [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [OpenAI compatible](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenaicompatible), [Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp), [Ollama](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Follama), [Mistral](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fmistral), [Hugging Face](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fhuggingface), [Cohere](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fcohere)\n\n#### streamText\n\n```ts\nimport { 
streamText, openai } from \"modelfusion\";\n\nconst textStream = await streamText({\n  model: openai.CompletionTextGenerator({ model: \"gpt-3.5-turbo-instruct\" }),\n  prompt: \"Write a short story about a robot learning to love:\\n\\n\",\n});\n\nfor await (const textPart of textStream) {\n  process.stdout.write(textPart);\n}\n```\n\nProviders: [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [OpenAI compatible](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenaicompatible), [Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp), [Ollama](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Follama), [Mistral](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fmistral), [Cohere](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fcohere)\n\n#### streamText with multi-modal prompt\n\nMulti-modal vision models such as GPT 4 Vision can process images as part of the prompt.\n\n```ts\nimport { streamText, openai } from \"modelfusion\";\nimport { readFileSync } from \"fs\";\n\nconst image = readFileSync(\".\u002Fimage.png\");\n\nconst textStream = await streamText({\n  model: openai\n    .ChatTextGenerator({ model: \"gpt-4-vision-preview\" })\n    .withInstructionPrompt(),\n\n  prompt: {\n    instruction: [\n      { type: \"text\", text: \"Describe the image in detail.\" },\n      { type: \"image\", image, mimeType: \"image\u002Fpng\" },\n    ],\n  },\n});\n\nfor await (const textPart of textStream) {\n  process.stdout.write(textPart);\n}\n```\n\nProviders: [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [OpenAI compatible](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenaicompatible), [Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp), 
[Ollama](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Follama)\n\n### [Generate Object](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-object)\n\nGenerate typed objects using a language model and a schema.\n\n#### generateObject\n\nGenerate an object that matches a schema.\n\n```ts\nimport {\n  ollama,\n  zodSchema,\n  generateObject,\n  jsonObjectPrompt,\n} from \"modelfusion\";\nimport { z } from \"zod\";\n\nconst sentiment = await generateObject({\n  model: ollama\n    .ChatTextGenerator({\n      model: \"openhermes2.5-mistral\",\n      maxGenerationTokens: 1024,\n      temperature: 0,\n    })\n    .asObjectGenerationModel(jsonObjectPrompt.instruction()),\n\n  schema: zodSchema(\n    z.object({\n      sentiment: z\n        .enum([\"positive\", \"neutral\", \"negative\"])\n        .describe(\"Sentiment.\"),\n    })\n  ),\n\n  prompt: {\n    system:\n      \"You are a sentiment evaluator. \" +\n      \"Analyze the sentiment of the following product review:\",\n    instruction:\n      \"After I opened the package, I was met by a very unpleasant smell \" +\n      \"that did not disappear even after washing. Never again!\",\n  },\n});\n```\n\nProviders: [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [Ollama](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Follama), [Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp)\n\n#### streamObject\n\nStream an object that matches a schema. Partial objects before the final part are untyped JSON.\n\n```ts\nimport { zodSchema, openai, streamObject } from \"modelfusion\";\nimport { z } from \"zod\";\n\nconst objectStream = await streamObject({\n  model: openai\n    .ChatTextGenerator(\u002F* ... 
*\u002F)\n    .asFunctionCallObjectGenerationModel({\n      fnName: \"generateCharacter\",\n      fnDescription: \"Generate character descriptions.\",\n    })\n    .withTextPrompt(),\n\n  schema: zodSchema(\n    z.object({\n      characters: z.array(\n        z.object({\n          name: z.string(),\n          class: z\n            .string()\n            .describe(\"Character class, e.g. warrior, mage, or thief.\"),\n          description: z.string(),\n        })\n      ),\n    })\n  ),\n\n  prompt: \"Generate 3 character descriptions for a fantasy role playing game.\",\n});\n\nfor await (const { partialObject } of objectStream) {\n  console.clear();\n  console.log(partialObject);\n}\n```\n\nProviders: [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [Ollama](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Follama), [Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp)\n\n### [Generate Image](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-image)\n\nGenerate an image from a prompt.\n\n```ts\nimport { generateImage, openai } from \"modelfusion\";\n\nconst image = await generateImage({\n  model: openai.ImageGenerator({ model: \"dall-e-3\", size: \"1024x1024\" }),\n  prompt:\n    \"the wicked witch of the west in the style of early 19th century painting\",\n});\n```\n\nProviders: [OpenAI (Dall·E)](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [Stability AI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fstability), [Automatic1111](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fautomatic1111)\n\n### [Generate Speech](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-speech)\n\nSynthesize speech (audio) from text. 
Also called TTS (text-to-speech).\n\n#### generateSpeech\n\n`generateSpeech` synthesizes speech from text.\n\n```ts\nimport { generateSpeech, lmnt } from \"modelfusion\";\n\n\u002F\u002F `speech` is a Uint8Array with MP3 audio data\nconst speech = await generateSpeech({\n  model: lmnt.SpeechGenerator({\n    voice: \"034b632b-df71-46c8-b440-86a42ffc3cf3\", \u002F\u002F Henry\n  }),\n  text:\n    \"Good evening, ladies and gentlemen! Exciting news on the airwaves tonight \" +\n    \"as The Rolling Stones unveil 'Hackney Diamonds,' their first collection of \" +\n    \"fresh tunes in nearly twenty years, featuring the illustrious Lady Gaga, the \" +\n    \"magical Stevie Wonder, and the final beats from the late Charlie Watts.\",\n});\n```\n\nProviders: [Eleven Labs](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Felevenlabs), [LMNT](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Flmnt), [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai)\n\n#### streamSpeech\n\n`streamSpeech` generates a stream of speech chunks from text or from a text stream. 
Depending on the model, this can be fully duplex.\n\n```ts\nimport { streamSpeech, elevenlabs } from \"modelfusion\";\n\ndeclare const textStream: AsyncIterable\u003Cstring>;\n\nconst speechStream = await streamSpeech({\n  model: elevenlabs.SpeechGenerator({\n    model: \"eleven_turbo_v2\",\n    voice: \"pNInz6obpgDQGcFmaJgB\", \u002F\u002F Adam\n    optimizeStreamingLatency: 1,\n    voiceSettings: { stability: 1, similarityBoost: 0.35 },\n    generationConfig: {\n      chunkLengthSchedule: [50, 90, 120, 150, 200],\n    },\n  }),\n  text: textStream,\n});\n\nfor await (const part of speechStream) {\n  \u002F\u002F each part is a Uint8Array with MP3 audio data\n}\n```\n\nProviders: [Eleven Labs](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Felevenlabs)\n\n### [Generate Transcription](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-transcription)\n\nTranscribe speech (audio) data into text. Also called speech-to-text (STT).\n\n```ts\nimport { generateTranscription, openai } from \"modelfusion\";\nimport fs from \"node:fs\";\n\nconst transcription = await generateTranscription({\n  model: openai.Transcriber({ model: \"whisper-1\" }),\n  mimeType: \"audio\u002Fmp3\",\n  audioData: await fs.promises.readFile(\"data\u002Ftest.mp3\"),\n});\n```\n\nProviders: [OpenAI (Whisper)](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [Whisper.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fwhispercpp)\n\n### [Embed Value](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fembed)\n\nCreate embeddings for text and other values. 
Embeddings are vectors that represent the essence of the values in the context of the model.\n\n```ts\nimport { embed, embedMany, openai } from \"modelfusion\";\n\n\u002F\u002F embed single value:\nconst embedding = await embed({\n  model: openai.TextEmbedder({ model: \"text-embedding-ada-002\" }),\n  value: \"At first, Nox didn't know what to do with the pup.\",\n});\n\n\u002F\u002F embed many values:\nconst embeddings = await embedMany({\n  model: openai.TextEmbedder({ model: \"text-embedding-ada-002\" }),\n  values: [\n    \"At first, Nox didn't know what to do with the pup.\",\n    \"He keenly observed and absorbed everything around him, from the birds in the sky to the trees in the forest.\",\n  ],\n});\n```\n\nProviders: [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [OpenAI compatible](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenaicompatible), [Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp), [Ollama](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Follama), [Mistral](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fmistral), [Hugging Face](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fhuggingface), [Cohere](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fcohere)\n\n### [Classify Value](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fclassify)\n\nClassifies a value into a category.\n\n```ts\nimport { classify, EmbeddingSimilarityClassifier, openai } from \"modelfusion\";\n\nconst classifier = new EmbeddingSimilarityClassifier({\n  embeddingModel: openai.TextEmbedder({ model: \"text-embedding-ada-002\" }),\n  similarityThreshold: 0.82,\n  clusters: [\n    {\n      name: \"politics\" as const,\n      values: [\n        \"they will save the country!\",\n        \u002F\u002F ...\n      ],\n    },\n    {\n   
   name: \"chitchat\" as const,\n      values: [\n        \"how's the weather today?\",\n        \u002F\u002F ...\n      ],\n    },\n  ],\n});\n\n\u002F\u002F strongly typed result:\nconst result = await classify({\n  model: classifier,\n  value: \"don't you love politics?\",\n});\n```\n\nClassifiers: [EmbeddingSimilarityClassifier](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fclassify#embeddingsimilarityclassifier)\n\n### [Tokenize Text](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Ftokenize-text)\n\nSplit text into tokens and reconstruct the text from tokens.\n\n```ts\nimport { countTokens, openai } from \"modelfusion\";\n\nconst tokenizer = openai.Tokenizer({ model: \"gpt-4\" });\n\nconst text = \"At first, Nox didn't know what to do with the pup.\";\n\nconst tokenCount = await countTokens(tokenizer, text);\n\nconst tokens = await tokenizer.tokenize(text);\nconst tokensAndTokenTexts = await tokenizer.tokenizeWithTexts(text);\nconst reconstructedText = await tokenizer.detokenize(tokens);\n```\n\nProviders: [OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai), [Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp), [Cohere](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fcohere)\n\n### [Tools](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools)\n\nTools are functions (and associated metadata) that can be executed by an AI model. 
They are useful for building chatbots and agents.\n\nModelFusion offers several tools out-of-the-box: [Math.js](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fmathjs), [MediaWiki Search](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fmediawiki-search), [SerpAPI](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fserpapi), [Google Custom Search](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fgoogle-custom-search). You can also create [custom tools](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools).\n\n#### [runTool](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Frun-tool)\n\nWith `runTool`, you can ask a tool-compatible language model (e.g. OpenAI chat) to invoke a single tool. `runTool` first generates a tool call and then executes the tool with the arguments.\n\n```ts\nimport { openai, runTool } from \"modelfusion\";\n\n\u002F\u002F `calculator` is a custom tool (see the custom tools guide)\nconst { tool, toolCall, args, ok, result } = await runTool({\n  model: openai.ChatTextGenerator({ model: \"gpt-3.5-turbo\" }),\n  tool: calculator,\n  prompt: [openai.ChatMessage.user(\"What's fourteen times twelve?\")],\n});\n\nconsole.log(`Tool call:`, toolCall);\nconsole.log(`Tool:`, tool);\nconsole.log(`Arguments:`, args);\nconsole.log(`Ok:`, ok);\nconsole.log(`Result or Error:`, result);\n```\n\n#### [runTools](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Frun-tools)\n\nWith `runTools`, you can ask a language model to generate several tool calls as well as text. The model will choose which tools (if any) should be called with which arguments. Both the text and the tool calls are optional. This function executes the tools.\n\n```ts\nimport { openai, runTools } from \"modelfusion\";\n\nconst { text, toolResults } = await runTools({\n  model: openai.ChatTextGenerator({ model: \"gpt-3.5-turbo\" }),\n  tools: [calculator \u002F* ... 
*\u002F],\n  prompt: [openai.ChatMessage.user(\"What's fourteen times twelve?\")],\n});\n```\n\n#### [Agent Loop](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Fagent-loop)\n\nYou can use `runTools` to implement an agent loop that responds to user messages and executes tools. [Learn more](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Fagent-loop).\n\n### [Vector Indices](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fvector-index)\n\n```ts\nimport {\n  MemoryVectorIndex,\n  VectorIndexRetriever,\n  openai,\n  retrieve,\n  upsertIntoVectorIndex,\n} from \"modelfusion\";\n\nconst texts = [\n  \"A rainbow is an optical phenomenon that can occur under certain meteorological conditions.\",\n  \"It is caused by refraction, internal reflection and dispersion of light in water droplets resulting in a continuous spectrum of light appearing in the sky.\",\n  \u002F\u002F ...\n];\n\nconst vectorIndex = new MemoryVectorIndex\u003Cstring>();\nconst embeddingModel = openai.TextEmbedder({\n  model: \"text-embedding-ada-002\",\n});\n\n\u002F\u002F update an index - usually done as part of an ingestion process:\nawait upsertIntoVectorIndex({\n  vectorIndex,\n  embeddingModel,\n  objects: texts,\n  getValueToEmbed: (text) => text,\n});\n\n\u002F\u002F retrieve text chunks from the vector index - usually done at query time:\nconst retrievedTexts = await retrieve(\n  new VectorIndexRetriever({\n    vectorIndex,\n    embeddingModel,\n    maxResults: 3,\n    similarityThreshold: 0.8,\n  }),\n  \"rainbow and water droplets\"\n);\n```\n\nAvailable Vector Stores: [Memory](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fvector-index\u002Fmemory), [SQLite VSS](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fvector-index\u002Fsqlite-vss), [Pinecone](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fvector-index\u002Fpinecone)\n\n### [Text Generation Prompt Styles](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-text#prompt-styles)\n\nYou can use different prompt styles (such as text, instruction or chat prompts) with ModelFusion text generation 
models. These prompt styles can be accessed through the methods `.withTextPrompt()`, `.withChatPrompt()` and `.withInstructionPrompt()`:\n\n#### Text Prompt Style\n\n```ts\nimport { generateText, openai } from \"modelfusion\";\n\nconst text = await generateText({\n  model: openai\n    .ChatTextGenerator({\n      \u002F\u002F ...\n    })\n    .withTextPrompt(),\n\n  prompt: \"Write a short story about a robot learning to love\",\n});\n```\n\n#### Instruction Prompt Style\n\n```ts\nimport { generateText, llamacpp } from \"modelfusion\";\n\nconst text = await generateText({\n  model: llamacpp\n    .CompletionTextGenerator({\n      \u002F\u002F run https:\u002F\u002Fhuggingface.co\u002FTheBloke\u002FLlama-2-7B-Chat-GGUF with llama.cpp\n      promptTemplate: llamacpp.prompt.Llama2, \u002F\u002F Set prompt template\n      contextWindowSize: 4096, \u002F\u002F Llama 2 context window size\n      maxGenerationTokens: 512,\n    })\n    .withInstructionPrompt(),\n\n  prompt: {\n    system: \"You are a story writer.\",\n    instruction: \"Write a short story about a robot learning to love.\",\n  },\n});\n```\n\n#### Chat Prompt Style\n\n```ts\nimport { openai, streamText } from \"modelfusion\";\n\nconst textStream = await streamText({\n  model: openai\n    .ChatTextGenerator({\n      model: \"gpt-3.5-turbo\",\n    })\n    .withChatPrompt(),\n\n  prompt: {\n    system: \"You are a celebrated poet.\",\n    messages: [\n      {\n        role: \"user\",\n        content: \"Suggest a name for a robot.\",\n      },\n      {\n        role: \"assistant\",\n        content: \"I suggest the name Robbie\",\n      },\n      {\n        role: \"user\",\n        content: \"Write a short story about Robbie learning to love\",\n      },\n    ],\n  },\n});\n```\n\n### [Image Generation Prompt Templates](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-image\u002Fprompt-format)\n\nYou can use prompt templates with image models as well, e.g. to use a basic text prompt. 
It is available as a shorthand method:

```ts
const image = await generateImage({
  model: stability
    .ImageGenerator({
      //...
    })
    .withTextPrompt(),

  prompt:
    "the wicked witch of the west in the style of early 19th century painting",
});
```

| Prompt Template | Text Prompt |
| --------------- | ----------- |
| Automatic1111   | ✅          |
| Stability       | ✅          |

### Metadata and original responses

ModelFusion model functions return rich responses that include the raw (original) response and metadata when you set the `fullResponse` argument to `true`.

```ts
import { generateText, openai, OpenAICompletionResponse } from "modelfusion";

// access the raw response (needs to be typed) and the metadata:
const { text, rawResponse, metadata } = await generateText({
  model: openai.CompletionTextGenerator({
    model: "gpt-3.5-turbo-instruct",
    maxGenerationTokens: 1000,
    n: 2, // generate 2 completions
  }),
  prompt: "Write a short story about a robot learning to love:\n\n",
  fullResponse: true,
});

console.log(metadata);

// cast to the raw response type:
for (const choice of (rawResponse as OpenAICompletionResponse).choices) {
  console.log(choice.text);
}
```

### Logging and Observability

ModelFusion provides an [observer framework](https://modelfusion.dev/guide/util/observer) and [logging support](https://modelfusion.dev/guide/util/logging). You can easily trace runs and call hierarchies, and you can add your own observers.

#### Enabling Logging on a Function Call

```ts
import { generateText, openai } from "modelfusion";

const text = await generateText({
  model: openai.CompletionTextGenerator({ model: "gpt-3.5-turbo-instruct" }),
  prompt: "Write a short story about a robot learning to love:\n\n",
  logging: "detailed-object",
});
```

## Documentation

### [Guide](https://modelfusion.dev/guide)

- [Model Functions](https://modelfusion.dev/guide/function/)
  - [Generate text](https://modelfusion.dev/guide/function/generate-text)
  - [Generate object](https://modelfusion.dev/guide/function/generate-object)
  - [Generate image](https://modelfusion.dev/guide/function/generate-image)
  - [Generate speech](https://modelfusion.dev/guide/function/generate-speech)
  - [Generate transcription](https://modelfusion.dev/guide/function/generate-transcription)
  - [Tokenize Text](https://modelfusion.dev/guide/function/tokenize-text)
  - [Embed Value](https://modelfusion.dev/guide/function/embed)
  - [Classify Value](https://modelfusion.dev/guide/function/classify)
- [Tools](https://modelfusion.dev/guide/tools)
  - [Run Tool](https://modelfusion.dev/guide/tools/run-tool)
  - [Run Tools](https://modelfusion.dev/guide/tools/run-tools)
  - [Agent Loop](https://modelfusion.dev/guide/tools/agent-loop)
  - [Available Tools](https://modelfusion.dev/guide/tools/available-tools/)
  - [Custom Tools](https://modelfusion.dev/guide/tools/custom-tools)
  - [Advanced](https://modelfusion.dev/guide/tools/advanced)
- [Vector Indices](https://modelfusion.dev/guide/vector-index)
  - [Upsert](https://modelfusion.dev/guide/vector-index/upsert)
  - [Retrieve](https://modelfusion.dev/guide/vector-index/retrieve)
- [Text Chunks](https://modelfusion.dev/guide/text-chunk/)
  - [Split Text](https://modelfusion.dev/guide/text-chunk/split)
- [Utilities](https://modelfusion.dev/guide/util/)
  - [API Configuration](https://modelfusion.dev/guide/util/api-configuration)
    - [Base URL](https://modelfusion.dev/guide/util/api-configuration/base-url)
    - [Headers](https://modelfusion.dev/guide/util/api-configuration/headers)
    - [Retry strategies](https://modelfusion.dev/guide/util/api-configuration/retry)
    - [Throttling strategies](https://modelfusion.dev/guide/util/api-configuration/throttle)
  - [Logging](https://modelfusion.dev/guide/util/logging)
  - [Observers](https://modelfusion.dev/guide/util/observer)
  - [Runs](https://modelfusion.dev/guide/util/run)
  - [Abort signals](https://modelfusion.dev/guide/util/abort)
- [Experimental](https://modelfusion.dev/guide/experimental/)
  - [Guards](https://modelfusion.dev/guide/experimental/guard)
  - [Server](https://modelfusion.dev/guide/experimental/server/)
  - [Cost calculation](https://modelfusion.dev/guide/experimental/cost-calculation)
- [Troubleshooting](https://modelfusion.dev/guide/troubleshooting)
  - [Bundling](https://modelfusion.dev/guide/troubleshooting/bundling)

### [Integrations](https://modelfusion.dev/integration/model-provider)

### [Examples & Tutorials](https://modelfusion.dev/tutorial)

### [Showcase](https://modelfusion.dev/tutorial/showcase)

### [API Reference](https://modelfusion.dev/api/modules)

## More Examples

### [Basic Examples](https://github.com/lgrammel/modelfusion/tree/main/examples/basic)

Examples for almost all of the individual functions and objects. Highly recommended to get started.

### [StoryTeller](https://github.com/lgrammel/storyteller)

> _multi-modal_, _object streaming_, _image generation_, _text to speech_, _speech to text_, _text generation_, _object generation_, _embeddings_

StoryTeller is an exploratory web application that creates short audio stories for pre-school kids.

### [Chatbot (Next.JS)](https://github.com/lgrammel/modelfusion/tree/main/examples/chatbot-next-js)

> _Next.js app_, _OpenAI GPT-3.5-turbo_, _streaming_, _abort handling_

A web chat with an AI assistant, implemented as a Next.js app.

### [Chat with PDF](https://github.com/lgrammel/modelfusion/tree/main/examples/pdf-chat-terminal)

> _terminal app_, _PDF parsing_, _in memory vector indices_, _retrieval augmented generation_, _hypothetical document embedding_

Ask questions about a PDF document and get answers from the document.

### [Next.js / ModelFusion Demos](https://github.com/lgrammel/modelfusion/tree/main/examples/nextjs)

> _Next.js app_, _image generation_, _transcription_, _object streaming_, _OpenAI_, _Stability AI_, _Ollama_

Examples of using ModelFusion with Next.js 14 (App Router):

- image generation
- voice recording & transcription
- object streaming

### [Duplex Speech Streaming (using Vite/React & ModelFusion Server/Fastify)](https://github.com/lgrammel/modelfusion/tree/main/examples/speech-streaming-vite-react-fastify)

> _Speech Streaming_, _OpenAI_, _Elevenlabs streaming_, _Vite_, _Fastify_, _ModelFusion Server_

Given a prompt, the server returns both a text and a speech stream response.

### [BabyAGI Agent](https://github.com/lgrammel/modelfusion/tree/main/examples/babyagi-agent)

> _terminal app_, _agent_, _BabyAGI_

TypeScript implementation of the BabyAGI classic and BabyBeeAGI.

### [Wikipedia Agent](https://github.com/lgrammel/modelfusion/tree/main/examples/wikipedia-agent)

> _terminal app_, _ReAct agent_, _GPT-4_, _OpenAI functions_, _tools_

Get answers to questions from Wikipedia, e.g. "Who was born first, Einstein or Picasso?"

### [Middle school math agent](https://github.com/lgrammel/modelfusion/tree/main/examples/middle-school-math-agent)

> _terminal app_, _agent_, _tools_, _GPT-4_

Small agent that solves middle school math problems.
It uses a calculator tool to solve the problems.

### [PDF to Tweet](https://github.com/lgrammel/modelfusion/tree/main/examples/pdf-to-tweet)

> _terminal app_, _PDF parsing_, _recursive information extraction_, _in memory vector index_, _style example retrieval_, _OpenAI GPT-4_, _cost calculation_

Extracts information about a topic from a PDF and writes a tweet in your own style about it.

### [Cloudflare Workers](https://github.com/lgrammel/modelfusion/tree/main/examples/cloudflare-workers)

> _Cloudflare_, _OpenAI_

Generate text on a Cloudflare Worker using ModelFusion and OpenAI.

## Contributing

### [Contributing Guide](https://github.com/lgrammel/modelfusion/blob/main/CONTRIBUTING.md)

Read the [ModelFusion contributing guide](https://github.com/lgrammel/modelfusion/blob/main/CONTRIBUTING.md) to learn about the development process, how to propose bugfixes and improvements, and how to build and test your changes.

# ModelFusion

> ### The TypeScript library for building AI applications.

[![NPM version](https://img.shields.io/npm/v/modelfusion?color=33cd56&logo=npm)](https://www.npmjs.com/package/modelfusion)
[![MIT license](https://img.shields.io/github/license/lgrammel/modelfusion)](https://opensource.org/licenses/MIT)
[![Docs](https://img.shields.io/badge/docs-modelfusion.dev-blue)](https://modelfusion.dev)
[![Created by Lars Grammel](https://img.shields.io/badge/created%20by-@lgrammel-4BBAAB.svg)](https://twitter.com/lgrammel)

[Introduction](#introduction) | [Quick Install](#quick-install) | [Usage Examples](#usage-examples) | [Documentation](#documentation) | [More Examples](#more-examples) | [Contributing](#contributing) | [modelfusion.dev](https://modelfusion.dev)

## Introduction

> [!IMPORTANT]
> [ModelFusion has joined Vercel](https://vercel.com/blog/vercel-ai-sdk-3-1-modelfusion-joins-the-team) and is being integrated into the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction). We are bringing the best parts of ModelFusion to the Vercel AI SDK, starting with text generation, structured object generation, and tool calls. Check out the AI SDK for the latest developments.

**ModelFusion** is an abstraction layer for integrating AI models into JavaScript and TypeScript applications, unifying the API for common operations such as **text streaming**, **object generation**, and **tool usage**. It provides features to support production environments, including observability hooks, logging, and automatic retries. You can use ModelFusion to build AI applications, chatbots, and agents.

- **Vendor-neutral**: ModelFusion is a non-commercial open source project that is community-driven. You can use it with any supported provider.
- **Multi-modal**: ModelFusion supports a wide range of models, including text generation, image generation, vision, text-to-speech, speech-to-text, and embedding models.
- **Type inference and validation**: ModelFusion infers TypeScript types wherever possible and validates model responses.
- **Observability and logging**: ModelFusion provides an observer framework and logging support.
- **Resilient and robust**: ModelFusion ensures seamless operation through automatic retries, throttling, and error handling mechanisms.
- **Built for production**: ModelFusion is fully tree-shakeable, can be used in serverless environments, and only uses a minimal set of dependencies.

## Quick Install

```sh
npm install modelfusion
```

Or use a starter template:

- [ModelFusion terminal app starter](https://github.com/lgrammel/modelfusion-terminal-app-starter)
- [Next.js, Vercel AI SDK, Llama.cpp & ModelFusion starter](https://github.com/lgrammel/modelfusion-llamacpp-nextjs-starter)
- [Next.js, Vercel AI SDK, Ollama & ModelFusion starter](https://github.com/lgrammel/modelfusion-ollama-nextjs-starter)

## Usage Examples

> [!TIP]
> The basic examples are a great way to get started and to explore in parallel with the [documentation](https://modelfusion.dev/guide/function/). You can find them in the [examples/basic](https://github.com/lgrammel/modelfusion/tree/main/examples/basic) folder.

You can provide API keys for the different [integrations](https://modelfusion.dev/integration/model-provider/) using environment variables (e.g., `OPENAI_API_KEY`) or pass them into the model constructors as options.

### [Generate Text](https://modelfusion.dev/guide/function/generate-text)

Generate text using a language model and a prompt. You can stream the text if it is supported by the model. You can use images for multi-modal prompting if the model supports it (e.g. with [llama.cpp](https://modelfusion.dev/integration/model-provider/llamacpp)). You can use [prompt styles](https://modelfusion.dev/guide/function/generate-text#prompt-styles) to use text, instruction, or chat prompts.

#### generateText

```ts
import { generateText, openai } from "modelfusion";

const text = await generateText({
  model: openai.CompletionTextGenerator({ model: "gpt-3.5-turbo-instruct" }),
  prompt: "Write a short story about a robot learning to love:\n\n",
});
```

Providers: [OpenAI](https://modelfusion.dev/integration/model-provider/openai), [OpenAI compatible](https://modelfusion.dev/integration/model-provider/openaicompatible), [Llama.cpp](https://modelfusion.dev/integration/model-provider/llamacpp), [Ollama](https://modelfusion.dev/integration/model-provider/ollama), [Mistral](https://modelfusion.dev/integration/model-provider/mistral), [Hugging Face](https://modelfusion.dev/integration/model-provider/huggingface), [Cohere](https://modelfusion.dev/integration/model-provider/cohere)

#### streamText

```ts
import { streamText, openai } from "modelfusion";

const textStream = await streamText({
  model: openai.CompletionTextGenerator({ model: "gpt-3.5-turbo-instruct" }),
  prompt: "Write a short story about a robot learning to love:\n\n",
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```

Providers: [OpenAI](https://modelfusion.dev/integration/model-provider/openai), [OpenAI compatible](https://modelfusion.dev/integration/model-provider/openaicompatible), [Llama.cpp](https://modelfusion.dev/integration/model-provider/llamacpp), [Ollama](https://modelfusion.dev/integration/model-provider/ollama), [Mistral](https://modelfusion.dev/integration/model-provider/mistral), [Cohere](https://modelfusion.dev/integration/model-provider/cohere)

#### streamText with multi-modal prompt

Multi-modal vision models such as GPT 4 Vision can process images as part of the prompt.

```ts
import { streamText, openai } from "modelfusion";
import { readFileSync } from "fs";

const image = readFileSync("./image.png");

const textStream = await streamText({
  model: openai
    .ChatTextGenerator({ model: "gpt-4-vision-preview" })
    .withInstructionPrompt(),

  prompt: {
    instruction: [
      { type: "text", text: "Describe the image in detail." },
      { type: "image", image, mimeType: "image/png" },
    ],
  },
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```

Providers: [OpenAI](https://modelfusion.dev/integration/model-provider/openai), [OpenAI compatible](https://modelfusion.dev/integration/model-provider/openaicompatible), [Llama.cpp](https://modelfusion.dev/integration/model-provider/llamacpp), [Ollama](https://modelfusion.dev/integration/model-provider/ollama)

### [Generate Object](https://modelfusion.dev/guide/function/generate-object)

Generate typed objects using a language model and a schema.

#### generateObject

Generate an object that matches a schema.

```ts
import {
  ollama,
  zodSchema,
  generateObject,
  jsonObjectPrompt,
} from "modelfusion";
import { z } from "zod";

const sentiment = await generateObject({
  model: ollama
    .ChatTextGenerator({
      model: "openhermes2.5-mistral",
      maxGenerationTokens: 1024,
      temperature: 0,
    })
    .asObjectGenerationModel(jsonObjectPrompt.instruction()),

  schema: zodSchema(
    z.object({
      sentiment: z
        .enum(["positive", "neutral", "negative"])
        .describe("Sentiment."),
    })
  ),

  prompt: {
    system:
      "You are a sentiment evaluator. " +
      "Analyze the sentiment of the following product review:",
    instruction:
      "After opening the package, I was met by a very unpleasant smell " +
      "that did not disappear even after washing. Never again!",
  },
});
```

Providers: [OpenAI](https://modelfusion.dev/integration/model-provider/openai), [Ollama](https://modelfusion.dev/integration/model-provider/ollama), [Llama.cpp](https://modelfusion.dev/integration/model-provider/llamacpp)

#### streamObject

Stream an object that matches a schema. Partial objects before the final part are untyped JSON.

```ts
import { zodSchema, openai, streamObject } from "modelfusion";
import { z } from "zod";

const objectStream = await streamObject({
  model: openai
    .ChatTextGenerator(/* ... */)
    .asFunctionCallObjectGenerationModel({
      fnName: "generateCharacter",
      fnDescription: "Generate character descriptions.",
    })
    .withTextPrompt(),

  schema: zodSchema(
    z.object({
      characters: z.array(
        z.object({
          name: z.string(),
          class: z
            .string()
            .describe("Character class, e.g. warrior, mage, or thief."),
          description: z.string(),
        })
      ),
    })
  ),

  prompt: "Generate 3 character descriptions for a fantasy role playing game.",
});

for await (const { partialObject } of objectStream) {
  console.clear();
  console.log(partialObject);
}
```

Providers: [OpenAI](https://modelfusion.dev/integration/model-provider/openai), [Ollama](https://modelfusion.dev/integration/model-provider/ollama), [Llama.cpp](https://modelfusion.dev/integration/model-provider/llamacpp)

### [Generate Image](https://modelfusion.dev/guide/function/generate-image)

Generate an image from a prompt.

```ts
import { generateImage, openai } from "modelfusion";

const image = await generateImage({
  model: openai.ImageGenerator({ model: "dall-e-3", size: "1024x1024" }),
  prompt:
    "the wicked witch of the west in the style of early 19th century painting",
});
```

Providers: [OpenAI (Dall·E)](https://modelfusion.dev/integration/model-provider/openai), [Stability AI](https://modelfusion.dev/integration/model-provider/stability), [Automatic1111](https://modelfusion.dev/integration/model-provider/automatic1111)

### [Generate Speech](https://modelfusion.dev/guide/function/generate-speech)

Synthesize speech (audio) from text. Also called TTS (text-to-speech).

#### generateSpeech

`generateSpeech` synthesizes speech from text.

```ts
import { generateSpeech, lmnt } from "modelfusion";

// `speech` is a Uint8Array with MP3 audio data
const speech = await generateSpeech({
  model: lmnt.SpeechGenerator({
    voice: "034b632b-df71-46c8-b440-86a42ffc3cf3", // Henry
  }),
  text:
    "Good evening, ladies and gentlemen! Exciting news on the airwaves tonight: " +
    "The Rolling Stones have released 'Hackney Diamonds,' their first album of new material " +
    "in almost two decades, featuring the legendary Lady Gaga, the wonderful Stevie Wonder, " +
    "and the final notes of the late Charlie Watts.",
});
```

Providers: [Eleven Labs](https://modelfusion.dev/integration/model-provider/elevenlabs), [LMNT](https://modelfusion.dev/integration/model-provider/lmnt), [OpenAI](https://modelfusion.dev/integration/model-provider/openai)

#### streamSpeech

`streamSpeech` generates a stream of speech chunks from text or from a text stream. Depending on the model, this can be fully duplex.

```ts
import { streamSpeech, elevenlabs } from "modelfusion";

const textStream: AsyncIterable<string>; // e.g. from streamText

const speechStream = await streamSpeech({
  model: elevenlabs.SpeechGenerator({
    model: "eleven_turbo_v2",
    voice: "pNInz6obpgDQGcFmaJgB", // Adam
    optimizeStreamingLatency: 1,
    voiceSettings: { stability: 1, similarityBoost: 0.35 },
    generationConfig: {
      chunkLengthSchedule: [50, 90, 120, 150, 200],
    },
  }),
  text: textStream,
});

for await (const part of speechStream) {
  // each part is a Uint8Array with MP3 audio data
}
```

Providers: [Eleven Labs](https://modelfusion.dev/integration/model-provider/elevenlabs)
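The `text` input of `streamSpeech` shown above is any `AsyncIterable<string>`, so it can be fed from `streamText` or from a stream you build yourself. A minimal, self-contained sketch of producing and consuming such a stream with an async generator — the generator here is only a stand-in for a real model text stream:

```ts
// A stand-in for the text stream that streamText returns: any
// AsyncIterable<string> works as the `text` input of streamSpeech.
async function* fakeTextStream(): AsyncIterable<string> {
  for (const part of ["Good ", "evening, ", "everyone!"]) {
    yield part;
  }
}

// Collect all parts of a text stream into one string.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let result = "";
  for await (const part of stream) {
    result += part;
  }
  return result;
}

collect(fakeTextStream()).then((text) => console.log(text));
// prints "Good evening, everyone!"
```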
### [Generate Transcription](https://modelfusion.dev/guide/function/generate-transcription)

Transcribe speech (audio) data into text. Also called speech-to-text (STT).

```ts
import { generateTranscription, openai } from "modelfusion";
import fs from "node:fs";

const transcription = await generateTranscription({
  model: openai.Transcriber({ model: "whisper-1" }),
  mimeType: "audio/mp3",
  audioData: await fs.promises.readFile("data/test.mp3"),
});
```

Providers: [OpenAI (Whisper)](https://modelfusion.dev/integration/model-provider/openai), [Whisper.cpp](https://modelfusion.dev/integration/model-provider/whispercpp)

### [Embed Value](https://modelfusion.dev/guide/function/embed)

Create embeddings for text and other values. Embeddings are vectors that represent the essence of the values in the context of the model.

```ts
import { embed, embedMany, openai } from "modelfusion";

// embed a single value:
const embedding = await embed({
  model: openai.TextEmbedder({ model: "text-embedding-ada-002" }),
  value: "At first, Nox didn't know what to do with the pup.",
});

// embed many values:
const embeddings = await embedMany({
  model: openai.TextEmbedder({ model: "text-embedding-ada-002" }),
  values: [
    "At first, Nox didn't know what to do with the pup.",
    "He keenly observed and absorbed everything around him, from the birds in the sky to the trees in the forest.",
  ],
});
```

Providers: [OpenAI](https://modelfusion.dev/integration/model-provider/openai), [OpenAI compatible](https://modelfusion.dev/integration/model-provider/openaicompatible), [Llama.cpp](https://modelfusion.dev/integration/model-provider/llamacpp), [Ollama](https://modelfusion.dev/integration/model-provider/ollama), [Mistral](https://modelfusion.dev/integration/model-provider/mistral), [Hugging Face](https://modelfusion.dev/integration/model-provider/huggingface), [Cohere](https://modelfusion.dev/integration/model-provider/cohere)
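Embeddings are plain number vectors, so their closeness can be measured with cosine similarity — the same idea the embedding-similarity classifier below builds on. A self-contained sketch; the 3-dimensional vectors are made up for illustration (real embeddings from `embed`/`embedMany` have e.g. 1536 dimensions):

```ts
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1] (1 = same direction).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "embeddings": pup and dog point in similar directions, car does not.
const pup = [0.9, 0.1, 0.2];
const dog = [0.8, 0.2, 0.25];
const car = [0.1, 0.9, 0.0];

console.log(cosineSimilarity(pup, dog) > cosineSimilarity(pup, car));
// prints true
```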
[分类值](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fclassify)\n\n将一个值分类到某个类别中。\n\n```ts\nimport { classify, EmbeddingSimilarityClassifier, openai } from \"modelfusion\";\n\nconst classifier = new EmbeddingSimilarityClassifier({\n  embeddingModel: openai.TextEmbedder({ model: \"text-embedding-ada-002\" }),\n  similarityThreshold: 0.82,\n  clusters: [\n    {\n      name: \"politics\" as const,\n      values: [\n        \"they will save the country!\",\n        \u002F\u002F ...\n      ],\n    },\n    {\n      name: \"chitchat\" as const,\n      values: [\n        \"how's the weather today?\",\n        \u002F\u002F ...\n      ],\n    },\n  ],\n});\n\n\u002F\u002F 强类型结果：\nconst result = await classify({\n  model: classifier,\n  value: \"don't you love politics?\",\n});\n```\n\n分类器：[EmbeddingSimilarityClassifier](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fclassify#embeddingsimilarityclassifier)\n\n### [文本分词](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Ftokenize-text)\n\n将文本拆分为标记，并从标记中重建文本。\n\n```ts\nconst tokenizer = openai.Tokenizer({ model: \"gpt-4\" });\n\nconst text = \"At first, Nox didn't know what to do with the pup.\";\n\nconst tokenCount = await countTokens(tokenizer, text);\n\nconst tokens = await tokenizer.tokenize(text);\nconst tokensAndTokenTexts = await tokenizer.tokenizeWithTexts(text);\nconst reconstructedText = await tokenizer.detokenize(tokens);\n```\n\n提供商：[OpenAI](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenai)、[Llama.cpp](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fllamacpp)、[Cohere](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fcohere)\n\n### [工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools)\n\n工具是可由 AI 模型执行的函数（以及相关元数据）。它们对于构建聊天机器人和智能体非常有用。\n\nModelFusion 提供了多种开箱即用的工具：[Math.js](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fmathjs)、[MediaWiki 
搜索](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fmediawiki-search)、[SerpAPI](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fserpapi)、[Google 自定义搜索](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fgoogle-custom-search)。你也可以创建[自定义工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools)。\n\n#### [runTool](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Frun-tool)\n\n通过 `runTool`，你可以让兼容工具的语言模型（例如 OpenAI 聊天模型）调用单个工具。`runTool` 首先生成工具调用，然后使用参数执行该工具。\n\n```ts\nconst { tool, toolCall, args, ok, result } = await runTool({\n  model: openai.ChatTextGenerator({ model: \"gpt-3.5-turbo\" }),\n  tool: calculator,\n  prompt: [openai.ChatMessage.user(\"十四乘以十二等于多少？\")],\n});\n\nconsole.log(`工具调用：`, toolCall);\nconsole.log(`工具：`, tool);\nconsole.log(`参数：`, args);\nconsole.log(`是否成功：`, ok);\nconsole.log(`结果或错误：`, result);\n```\n\n#### [runTools](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Frun-tools)\n\n通过 `runTools`，你可以让语言模型生成多个工具调用以及文本。模型会决定需要调用哪些工具以及使用哪些参数。文本和工具调用都是可选的。此函数会执行这些工具。\n\n```ts\nconst { text, toolResults } = await runTools({\n  model: openai.ChatTextGenerator({ model: \"gpt-3.5-turbo\" }),\n  tools: [calculator \u002F* ... 
*\u002F],\n  prompt: [openai.ChatMessage.user(\"十四乘以十二等于多少？\")],\n});\n```\n\n#### [智能体循环](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Fagent-loop)\n\n你可以使用 `runTools` 来实现一个智能体循环，该循环能够响应用户消息并执行工具。[了解更多](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Fagent-loop)。\n\n### [向量索引](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fvector-index)\n\n```ts\nconst texts = [\n  \"彩虹是一种光学现象，在特定气象条件下可能会出现。\",\n  \"它是由水滴中的光折射、内反射和色散引起的，从而在天空中呈现出连续的光谱。\",\n  \u002F\u002F ...\n];\n\nconst vectorIndex = new MemoryVectorIndex\u003Cstring>();\nconst embeddingModel = openai.TextEmbedder({\n  model: \"text-embedding-ada-002\",\n});\n\n\u002F\u002F 更新索引——通常作为数据摄取流程的一部分进行：\nawait upsertIntoVectorIndex({\n  vectorIndex,\n  embeddingModel,\n  objects: texts,\n  getValueToEmbed: (text) => text,\n});\n\n\u002F\u002F 从向量索引中检索文本片段——通常在查询时进行：\nconst retrievedTexts = await retrieve(\n  new VectorIndexRetriever({\n    vectorIndex,\n    embeddingModel,\n    maxResults: 3,\n    similarityThreshold: 0.8,\n  }),\n  \"彩虹与水滴\"\n);\n```\n\n可用的向量存储：[Memory](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fvector-index\u002Fmemory)、[SQLite VSS](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fvector-index\u002Fsqlite-vss)、[Pinecone](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fvector-index\u002Fpinecone)\n\n### [文本生成提示风格](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-text#prompt-styles)\n\n你可以使用不同的提示风格（如文本提示、指令提示或聊天提示）与 ModelFusion 的文本生成模型一起使用。这些提示风格可以通过 `.withTextPrompt()`、`.withChatPrompt()` 和 `.withInstructionPrompt()` 方法访问：\n\n#### 文本提示风格\n\n```ts\nconst text = await generateText({\n  model: openai\n    .ChatTextGenerator({\n      \u002F\u002F ...\n    })\n    .withTextPrompt(),\n\n  prompt: \"写一篇关于机器人学会爱的短篇故事\",\n});\n```\n\n#### 指令提示风格\n\n```ts\nconst text = await generateText({\n  model: llamacpp\n    .CompletionTextGenerator({\n      \u002F\u002F 使用 llama.cpp 运行 
https:\u002F\u002Fhuggingface.co\u002FTheBloke\u002FLlama-2-7B-Chat-GGUF\n      promptTemplate: llamacpp.prompt.Llama2, \u002F\u002F 设置提示模板\n      contextWindowSize: 4096, \u002F\u002F Llama 2 的上下文窗口大小\n      maxGenerationTokens: 512,\n    })\n    .withInstructionPrompt(),\n\n  prompt: {\n    system: \"你是一位故事作家。\",\n    instruction: \"写一篇关于机器人学会爱的短篇故事。\",\n  },\n});\n```\n\n#### 聊天提示风格\n\n```ts\nconst textStream = await streamText({\n  model: openai\n    .ChatTextGenerator({\n      model: \"gpt-3.5-turbo\",\n    })\n    .withChatPrompt(),\n\n  prompt: {\n    system: \"你是一位著名的诗人。\",\n    messages: [\n      {\n        role: \"用户\",\n        content: \"给机器人起个名字吧。\",\n      },\n      {\n        role: \"助手\",\n        content: \"我建议叫罗比。\",\n      },\n      {\n        role: \"用户\",\n        content: \"写一篇关于罗比学会爱的短篇故事吧。\",\n      },\n    ],\n  },\n});\n```\n\n### [图像生成提示模板](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-image\u002Fprompt-format)\n\n你也可以将提示模板与图像模型一起使用，例如使用一个基础的文本提示。这提供了一种简化的用法：\n\n```ts\nconst image = await generateImage({\n  model: stability\n    .ImageGenerator({\n      \u002F\u002F...\n    })\n    .withTextPrompt(),\n\n  prompt:\n    \"西方邪恶女巫，采用19世纪早期绘画风格\",\n});\n```\n\n| 提示模板 | 文本提示 |\n| --------------- | ----------- |\n| Automatic1111   | ✅          |\n| Stability       | ✅          |\n\n### 元数据和原始响应\n\nModelFusion 模型函数会返回丰富的响应，其中包括原始响应和元数据，只需将 `fullResponse` 参数设置为 `true` 即可。\n\n```ts\n\u002F\u002F 访问原始响应（需要进行类型转换）和元数据：\nconst { text, rawResponse, metadata } = await generateText({\n  model: openai.CompletionTextGenerator({\n    model: \"gpt-3.5-turbo-instruct\",\n    maxGenerationTokens: 1000,\n    n: 2, \u002F\u002F 生成2个完成内容\n  }),\n  prompt: \"写一篇关于机器人学会爱的短篇故事：\\n\\n\",\n  fullResponse: true,\n});\n\nconsole.log(metadata);\n\n\u002F\u002F 将原始响应强制转换为 OpenAICompletionResponse 类型：\nfor (const choice of (rawResponse as OpenAICompletionResponse).choices) {\n  console.log(choice.text);\n}\n```\n\n### 日志记录与可观测性\n\nModelFusion 
提供了一个 [观察者框架](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fobserver) 和 [日志支持](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Flogging)。你可以轻松追踪运行和调用层次结构，并且可以添加自定义的观察者。\n\n#### 在函数调用中启用日志记录\n\n```ts\nimport { generateText, openai } from \"modelfusion\";\n\nconst text = await generateText({\n  model: openai.CompletionTextGenerator({ model: \"gpt-3.5-turbo-instruct\" }),\n  prompt: \"写一篇关于机器人学会爱的短篇故事：\\n\\n\",\n  logging: \"detailed-object\",\n});\n```\n\n## 文档\n\n### [指南](https:\u002F\u002Fmodelfusion.dev\u002Fguide)\n\n- [模型函数](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002F)\n  - [生成文本](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-text)\n  - [生成对象](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-object)\n  - [生成图像](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-image)\n  - [生成语音](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-speech)\n  - [生成转录](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-transcription)\n  - [文本分词](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Ftokenize-text)\n  - [嵌入值](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fembed)\n  - [分类值](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fclassify)\n- [工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools)\n  - [运行工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Frun-tool)\n  - [运行多工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Frun-tools)\n  - [智能体循环](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Fagent-loop)\n  - [可用工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002F)\n  - [自定义工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Fcustom-tools)\n  - [高级功能](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Fadvanced)\n- 
[向量索引](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fvector-index)\n  - [更新插入](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fvector-index\u002Fupsert)\n  - [检索](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fvector-index\u002Fretrieve)\n- [文本块](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftext-chunk\u002F)\n  - [拆分文本](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftext-chunk\u002Fsplit)\n- [实用工具](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002F)\n  - [API 配置](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fapi-configuration)\n    - [基础 URL](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fapi-configuration\u002Fbase-url)\n    - [请求头](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fapi-configuration\u002Fheaders)\n    - [重试策略](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fapi-configuration\u002Fretry)\n    - [限流策略](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fapi-configuration\u002Fthrottle)\n  - [日志记录](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Flogging)\n  - [观察者](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fobserver)\n  - [运行管理](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Frun)\n  - [取消信号](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Futil\u002Fabort)\n- [实验性功能](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fexperimental\u002F)\n  - [防护机制](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fexperimental\u002Fguard)\n  - [服务器](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fexperimental\u002Fserver\u002F)\n  - [成本计算](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Fexperimental\u002Fcost-calculation)\n- [故障排除](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftroubleshooting)\n  - [打包问题](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftroubleshooting\u002Fbundling)\n\n### [集成](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider)\n\n### 
[示例与教程](https:\u002F\u002Fmodelfusion.dev\u002Ftutorial)\n\n### [展示](https:\u002F\u002Fmodelfusion.dev\u002Ftutorial\u002Fshowcase)\n\n### [API 参考](https:\u002F\u002Fmodelfusion.dev\u002Fapi\u002Fmodules)\n\n## 更多示例\n\n### [基础示例](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fbasic)\n\n几乎涵盖了所有单独函数和对象的示例。强烈推荐作为入门参考。\n\n### [StoryTeller](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fstoryteller)\n\n> _多模态_, _对象流式传输_, _图像生成_, _文本转语音_, _语音转文本_, _文本生成_, _对象生成_, _嵌入_\n\nStoryTeller 是一款探索性的Web应用，用于为学龄前儿童创作简短的音频故事。\n\n### [聊天机器人（Next.js）](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fchatbot-next-js)\n\n> _Next.js 应用程序_, _OpenAI GPT-3.5-turbo_, _流式传输_, _取消处理_\n\n一个由 AI 助手驱动的 Web 聊天应用，基于 Next.js 实现。\n\n### [PDF 聊天](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fpdf-chat-terminal)\n\n> _终端应用_, _PDF 解析_, _内存中的向量索引_, _检索增强生成_, _假设文档嵌入_\n\n向 PDF 文档提问，即可获得来自该文档的答案。\n\n### [Next.js \u002F ModelFusion 演示](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fnextjs)\n\n> _Next.js 应用_, _图像生成_, _转录_, _对象流式传输_, _OpenAI_, _Stability AI_, _Ollama_\n\n使用 ModelFusion 结合 Next.js 14（App Router）的示例：\n\n- 图像生成\n- 语音录制与转录\n- 对象流式传输\n\n### [双工语音流（使用 Vite\u002FReact 和 ModelFusion Server\u002FFastify）](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fspeech-streaming-vite-react-fastify)\n\n> _语音流_, _OpenAI_, _Elevenlabs 流媒体_, _Vite_, _Fastify_, _ModelFusion 服务器_\n\n根据给定的提示，服务器会同时返回文本和语音流的响应。\n\n### [BabyAGI 智能体](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fbabyagi-agent)\n\n> _终端应用_, _智能体_, _BabyAGI_\n\nTypeScript 实现的经典 BabyAGI 和 BabyBeeAGI。\n\n### [维基百科智能体](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fwikipedia-agent)\n\n> _终端应用_, _ReAct 
智能体_, _GPT-4_, _OpenAI 函数_, _工具_\n\n从维基百科获取问题答案，例如：“爱因斯坦和毕加索谁先出生？”\n\n### [初中数学智能体](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fmiddle-school-math-agent)\n\n> _终端应用_, _智能体_, _工具_, _GPT-4_\n\n一个用于解答初中数学题的小型智能体。它使用计算器工具来求解题目。\n\n### [PDF转推文](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fpdf-to-tweet)\n\n> _终端应用_, _PDF解析_, _递归信息抽取_, _内存向量索引_, _风格示例检索_, _OpenAI GPT-4_, _成本计算_\n\n从PDF中提取关于某个主题的信息，并以你自己的风格撰写一条推文。\n\n### [Cloudflare Workers](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Ftree\u002Fmain\u002Fexamples\u002Fcloudflare-workers)\n\n> _Cloudflare_, _OpenAI_\n\n使用ModelFusion和OpenAI在Cloudflare Worker上生成文本。\n\n## 贡献\n\n### [贡献指南](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Fblob\u002Fmain\u002FCONTRIBUTING.md)\n\n请阅读[ModelFusion贡献指南](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion\u002Fblob\u002Fmain\u002FCONTRIBUTING.md)，了解开发流程、如何提出修复和改进建议，以及如何构建和测试你的更改。","# ModelFusion 快速上手指南\n\nModelFusion 是一个用于构建 AI 应用的 TypeScript 库，它统一了文本生成、对象提取、图像生成、语音合成等多种 AI 操作的 API，并提供了生产环境所需的可观测性、日志记录和自动重试等功能。\n\n> **重要提示**：ModelFusion 已加入 Vercel 团队，其核心功能正在整合进 [Vercel AI SDK](https:\u002F\u002Fsdk.vercel.ai\u002Fdocs\u002Fintroduction)。建议关注 Vercel AI SDK 以获取最新进展，但 ModelFusion 目前仍可正常使用。\n\n## 环境准备\n\n- **运行时环境**：Node.js (推荐 v18+) 或支持 TypeScript 的浏览器环境\n- **包管理器**：npm, yarn, pnpm 或 bun\n- **前置依赖**：无特殊系统依赖，只需安装 Node.js 即可\n- **API Key**：根据你选择的模型提供商（如 OpenAI, Ollama, LMNT 等），需提前准备好相应的 API Key，并通过环境变量配置或在代码中传入。\n\n## 安装步骤\n\n使用 npm 安装核心库：\n\n```sh\nnpm install modelfusion\n```\n\n如果你希望快速开始，可以使用官方提供的 Starter 模板：\n\n- [终端应用模板](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion-terminal-app-starter)\n- [Next.js + Llama.cpp 模板](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion-llamacpp-nextjs-starter)\n- [Next.js + Ollama 模板](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion-ollama-nextjs-starter)\n\n## 
基本使用\n\n以下示例展示如何使用 ModelFusion 调用不同模型。请确保已设置好对应服务商的环境变量（例如 `OPENAI_API_KEY`）。\n\n### 1. 生成文本 (Generate Text)\n\n调用大语言模型生成文本内容，支持流式输出。\n\n```ts\nimport { generateText, openai } from \"modelfusion\";\n\nconst text = await generateText({\n  model: openai.CompletionTextGenerator({ model: \"gpt-3.5-turbo-instruct\" }),\n  prompt: \"Write a short story about a robot learning to love:\\n\\n\",\n});\n\nconsole.log(text);\n```\n\n**流式输出示例：**\n\n```ts\nimport { streamText, openai } from \"modelfusion\";\n\nconst textStream = await streamText({\n  model: openai.CompletionTextGenerator({ model: \"gpt-3.5-turbo-instruct\" }),\n  prompt: \"Write a short story about a robot learning to love:\\n\\n\",\n});\n\nfor await (const textPart of textStream) {\n  process.stdout.write(textPart);\n}\n```\n\n### 2. 生成结构化对象 (Generate Object)\n\n利用 Schema 约束模型输出，直接获得类型安全的 JSON 对象。对于不原生支持函数调用的模型，可借助 `jsonObjectPrompt` 将聊天模型适配为对象生成模型。\n\n```ts\nimport { zodSchema, generateObject, jsonObjectPrompt, ollama } from \"modelfusion\";\nimport { z } from \"zod\";\n\nconst sentiment = await generateObject({\n  model: ollama\n    .ChatTextGenerator({\n      model: \"openhermes2.5-mistral\",\n      maxGenerationTokens: 1024,\n      temperature: 0,\n    })\n    .asObjectGenerationModel(jsonObjectPrompt.instruction()), \u002F\u002F 以 JSON 指令提示词驱动对象生成\n\n  schema: zodSchema(\n    z.object({\n      sentiment: z\n        .enum([\"positive\", \"neutral\", \"negative\"])\n        .describe(\"情感倾向\"),\n    })\n  ),\n\n  prompt: {\n    system: \"你是一个情感分析助手。\",\n    instruction: \"打开包裹后，我闻到了一股非常难闻的气味，即使清洗后也没有消失。再也不买了！\",\n  },\n});\n\nconsole.log(sentiment.object.sentiment);\n```\n\n### 3. 生成图像 (Generate Image)\n\n调用绘图模型生成图片。\n\n```ts\nimport { generateImage, openai } from \"modelfusion\";\n\nconst image = await generateImage({\n  model: openai.ImageGenerator({ model: \"dall-e-3\", size: \"1024x1024\" }),\n  prompt: \"the wicked witch of the west in the style of early 19th century painting\",\n});\n\n\u002F\u002F image 包含生成的图片数据（Uint8Array）\n```\n\n### 4. 
语音合成 (Text-to-Speech)\n\n将文本转换为音频。\n\n```ts\nimport { generateSpeech, lmnt } from \"modelfusion\";\n\nconst speech = await generateSpeech({\n  model: lmnt.SpeechGenerator({\n    voice: \"034b632b-df71-46c8-b440-86a42ffc3cf3\", \u002F\u002F Henry\n  }),\n  text: \"Good evening, ladies and gentlemen! Exciting news on the airwaves tonight.\",\n});\n\n\u002F\u002F speech 是包含 MP3 音频数据的 Uint8Array\n```\n\n### 5. 语音转文字 (Speech-to-Text)\n\n将音频文件转录为文本。\n\n```ts\nimport { generateTranscription, openai } from \"modelfusion\";\nimport fs from \"node:fs\";\n\nconst transcription = await generateTranscription({\n  model: openai.Transcriber({ model: \"whisper-1\" }),\n  mimeType: \"audio\u002Fmp3\",\n  audioData: await fs.promises.readFile(\"data\u002Ftest.mp3\"),\n});\n\nconsole.log(transcription.text);\n```\n\n更多高级用法（如多模态输入、工具调用、自定义 Prompt 风格等）请参考 [官方文档](https:\u002F\u002Fmodelfusion.dev)。","某电商初创团队正在开发一个智能客服系统，需要整合多家大模型供应商以实现文本生成、结构化数据提取及自动重试机制。\n\n### 没有 modelfusion 时\n- **厂商锁定严重**：代码中充斥着针对 OpenAI、Anthropic 等不同厂商的特有 API 调用逻辑，切换模型供应商需要重构大量底层代码。\n- **类型安全缺失**：模型返回的 JSON 数据缺乏严格的 TypeScript 类型推断与验证，运行时经常因字段缺失或格式错误导致程序崩溃。\n- **生产级功能匮乏**：缺乏内置的自动重试、请求节流和统一日志观测机制，开发者需手动编写大量样板代码来保障服务稳定性。\n- **多模态支持割裂**：若要同时支持文本、图像理解或语音功能，必须分别引入多个互不兼容的 SDK，导致项目依赖臃肿且维护困难。\n\n### 使用 modelfusion 后\n- **实现厂商中立**：通过统一的抽象层屏蔽底层差异，仅需修改配置即可在 OpenAI、Llama.cpp 等任意支持的提供商间无缝切换。\n- **端到端类型安全**：利用 TypeScript 特性自动推断响应类型并校验模型输出，将潜在的运行时错误提前至编译阶段发现。\n- **开箱即用的鲁棒性**：内置自动重试、节流控制及可观测性钩子，无需额外开发即可构建高可用、易监控的生产环境应用。\n- **多模态统一集成**：在一个轻量级库中原生支持文本、图像、语音及嵌入模型，大幅简化了复杂 AI 应用的架构与依赖管理。\n\nmodelfusion 通过提供标准化、类型安全且具备生产级韧性的抽象层，让开发者能专注于业务逻辑而非繁琐的模型适配工作。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvercel_modelfusion_27ea4c48.png","vercel","Vercel","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fvercel_d9ebfb29.png","Develop. Preview. Ship. 
Creators of Next.js.",null,"contactus@vercel.com","https:\u002F\u002Fvercel.com","https:\u002F\u002Fgithub.com\u002Fvercel",[82,86,90],{"name":83,"color":84,"percentage":85},"TypeScript","#3178c6",98.4,{"name":87,"color":88,"percentage":89},"JavaScript","#f1e05a",1.6,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0,1319,95,"2026-04-12T15:54:17","MIT","未说明",{"notes":100,"python":101,"dependencies":102},"该工具是一个 TypeScript 库，用于构建 AI 应用程序，通过 npm 安装。它本身对 GPU、Python 版本或系统内存没有特殊要求，而是作为抽象层调用外部 AI 服务（如 OpenAI, Ollama, Llama.cpp 等）。具体的硬件和运行环境需求取决于您选择集成的后端模型提供商（例如，若本地运行 Llama.cpp 或 Ollama，则需参考相应工具的硬件要求）。支持 Node.js 及无服务器（Serverless）环境。","不适用 (基于 TypeScript\u002FJavaScript)",[65],[104,14,35,15,13],"音频",[106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125],"chatbot","gpt-3","javascript","js","llm","openai","ts","typescript","whisper","ai","embedding","huggingface","dall-e","stable-diffusion","llamacpp","artificial-intelligence","claude","multi-modal","ollama","mistral","2026-03-27T02:49:30.150509","2026-04-18T20:51:08.353895",[129,134,139,144,149,154],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},36255,"ModelFusion 是否支持流式响应生成结构化数据（Streaming Function Calls）？","是的，从 v0.35.0 版本开始已支持该功能。您可以使用 `streamStructure` 方法来实现流式结构生成。具体用法请参考官方文档：https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fgenerate-structure#streamstructure","https:\u002F\u002Fgithub.com\u002Fvercel\u002Fmodelfusion\u002Fissues\u002F87",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},36256,"使用 llama.cpp server 进行嵌入（embeddings）时遇到 'Invalid JSON response' 错误怎么办？","该问题通常由 llama.cpp 的并行化调用导致，当没有空闲槽位时会返回 `{\"content\":\"slot unavailable\"}` 错误而非标准 JSON。此问题已在 ModelFusion v0.55.1 版本中修复，请升级您的依赖包至该版本或更高版本。","https:\u002F\u002Fgithub.com\u002Fvercel\u002Fmodelfusion\u002Fissues\u002F156",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},36257,"升级到 v0.134.0 或更高版本后，测试报错 'A dynamic import callback was invoked without --experimental-vm-modules' 
如何解决？","这是一个在 v0.134.0 引入但在 v0.135.1 中修复的问题。请将 ModelFusion 升级到 v0.135.1 或更高版本。注意：此次修复改变了事件对象（event object）的结构，原本通过 `event.input` 访问的消息数组，现在需要通过 `event.input.input` 来访问，请相应调整您的日志记录代码。","https:\u002F\u002Fgithub.com\u002Fvercel\u002Fmodelfusion\u002Fissues\u002F278",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},36258,"如何在 Nuxt 3 应用中解决 'Cannot find module ZodSchema' 的导入错误？","这是 ModelFusion 内部导入路径的一个 bug，已在 v0.68.1 版本中修复。如果您在 Nuxt 3 或其他环境中遇到此错误，请将 `modelfusion` 包升级到 v0.68.1 或更高版本。","https:\u002F\u002Fgithub.com\u002Fvercel\u002Fmodelfusion\u002Fissues\u002F174",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},36259,"ModelFusion 是否支持 Ollama 的多模态（Multimodal）模型？","是的，从 ModelFusion v0.97.0 版本开始，已经添加了对 Ollama 多模态模型的基础支持。请确保您的 Ollama 版本也支持多模态功能（如 v0.1.15 或主分支版本），并将 ModelFusion 升级至 v0.97.0 以上。","https:\u002F\u002Fgithub.com\u002Fvercel\u002Fmodelfusion\u002Fissues\u002F190",{"id":155,"question_zh":156,"answer_zh":157,"source_url":148},36260,"在 Node.js 21 环境下运行 pdf-chat-terminal 示例时遇到与 pdfjs 相关的错误怎么办？","pdfjs 库在 Node.js 21 中存在兼容性问题（例如访问 `navigator.platform` 时报错），而在 Node.js 20 中可以正常运行。建议暂时使用 Node.js 20 (LTS) 来运行涉及 pdfjs 的示例或项目，直到 pdfjs 完全适配新版本 Node.js。",[159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254],{"id":160,"version":161,"summary_zh":162,"released_at":163},289070,"v0.137.0","### 变更\r\n\r\n- 将成本计算逻辑移至 `@modelfusion\u002Fcost-calculation` 包中。感谢 [@jakedetels](https:\u002F\u002Fgithub.com\u002Fjakedetels) 的重构！","2024-02-24T10:45:24",{"id":165,"version":166,"summary_zh":167,"released_at":168},289071,"v0.136.0","### 新增\r\n\r\n- `FileCache`：用于将响应缓存到磁盘。感谢 [@jakedetels](https:\u002F\u002Fgithub.com\u002Fjakedetels) 提供此功能！示例如下：\r\n\r\n  ```ts\r\n  import { generateText, openai } from \"modelfusion\";\r\n  import { FileCache } from \"modelfusion\u002Fnode\";\r\n\r\n  const cache = new FileCache();\r\n\r\n  const text1 = await generateText({\r\n    model: openai\r\n      .ChatTextGenerator({ model: \"gpt-3.5-turbo\", 
temperature: 1 })\r\n      .withTextPrompt(),\r\n    prompt: \"写一个关于机器人学会爱的短篇故事\",\r\n    logging: \"basic-text\",\r\n    cache,\r\n  });\r\n\r\n  console.log({ text1 });\r\n\r\n  const text2 = await generateText({\r\n    model: openai\r\n      .ChatTextGenerator({ model: \"gpt-3.5-turbo\", temperature: 1 })\r\n      .withTextPrompt(),\r\n    prompt: \"写一个关于机器人学会爱的短篇故事\",\r\n    logging: \"basic-text\",\r\n    cache,\r\n  });\r\n\r\n  console.log({ text2 }); \u002F\u002F 相同的内容\r\n  ```\r\n","2024-02-07T19:09:11",{"id":170,"version":171,"summary_zh":172,"released_at":173},289072,"v0.135.1","### 已修复\r\n\r\n- 尝试同时使用动态导入和 `require` 来按需加载库。","2024-02-04T14:44:55",{"id":175,"version":176,"summary_zh":177,"released_at":178},289073,"v0.135.0","## v0.135.0 - 2024-01-29\n\n### 新增\n\n- `ObjectGeneratorTool`: 一个使用 `generateObject` 创建合成或虚构结构化数据的工具。[文档](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ftools\u002Favailable-tools\u002Fobject-generator)\n- `jsonToolCallPrompt.instruction()`: 创建用于工具调用、以 JSON 格式编写的指令提示。\n\n### 变更\n\n- `jsonToolCallPrompt` 在模型支持的情况下会自动启用 JSON 模式或语法。","2024-01-29T12:25:18",{"id":180,"version":181,"summary_zh":182,"released_at":183},289074,"v0.134.0","### 新增\r\n\r\n- 为 `generateText`、`streamText`、`generateObject` 和 `streamObject` 添加了提示函数支持。您可以使用 `createTextPrompt`、`createInstructionPrompt` 和 `createChatPrompt` 创建文本、指令和聊天提示的提示函数。提示函数允许您从外部源加载提示，并改进提示日志记录。示例：\r\n\r\n  ```ts\r\n  const storyPrompt = createInstructionPrompt(\r\n    async ({ protagonist }: { protagonist: string }) => ({\r\n      system: \"你是一位获奖作家。\",\r\n      instruction: `写一篇关于 ${protagonist} 学会爱的故事。`,\r\n    })\r\n  );\r\n\r\n  const text = await generateText({\r\n    model: openai\r\n      .ChatTextGenerator({ model: \"gpt-3.5-turbo\" })\r\n      .withInstructionPrompt(),\r\n\r\n    prompt: storyPrompt({\r\n      protagonist: \"一个机器人\",\r\n    }),\r\n  });\r\n  ```\r\n\r\n### 变更\r\n\r\n- 重构构建流程，改用 
`tsup`。","2024-01-28T10:37:23",{"id":185,"version":186,"summary_zh":187,"released_at":188},289075,"v0.133.0","### 新增\r\n\r\n- 支持 OpenAI 嵌入的自定义维度。\r\n\r\n### 变更\r\n\r\n- **破坏性变更**：将 `embeddingDimensions` 设置重命名为 `dimensions`。","2024-01-26T10:16:23",{"id":190,"version":191,"summary_zh":192,"released_at":193},289076,"v0.132.0","### 新增\r\n\r\n- 支持 OpenAI 的 `text-embedding-3-small` 和 `text-embedding-3-large` 嵌入模型。\r\n- 支持 OpenAI 的 `gpt-4-turbo-preview`、`gpt-4-0125-preview` 和 `gpt-3.5-turbo-0125` 对话模型。","2024-01-25T19:37:31",{"id":195,"version":196,"summary_zh":197,"released_at":198},289077,"v0.131.1","### 修复\r\n\r\n- 添加 `type-fest` 作为依赖，以修复类型推断错误。","2024-01-25T12:37:52",{"id":200,"version":201,"summary_zh":202,"released_at":203},289078,"v0.131.0","### 新增\n\n- `ObjectStreamResponse` 和 `ObjectStreamFromResponse` 序列化函数，用于在 Web 应用中使用由服务器生成的对象流。\n\n  服务器示例：\n\n  ```ts\n  export async function POST(req: Request) {\n    const { myArgs } = await req.json();\n\n    const objectStream = await streamObject({\n      \u002F\u002F ...\n    });\n\n    \u002F\u002F 将对象流序列化为响应：\n    return new ObjectStreamResponse(objectStream);\n  }\n  ```\n\n  客户端示例：\n\n  ```ts\n  const response = await fetch(\"\u002Fapi\u002Fstream-object-openai\", {\n    method: \"POST\",\n    body: JSON.stringify({ myArgs }),\n  });\n\n  \u002F\u002F 反序列化（结果对象比完整响应更简单）\n  const stream = ObjectStreamFromResponse({\n    schema: itinerarySchema,\n    response,\n  });\n\n  for await (const { partialObject } of stream) {\n    \u002F\u002F 执行某些操作，例如设置 React 状态\n  }\n  ```\n\n### 变更\n\n- **破坏性变更**：将 `generateStructure` 重命名为 `generateObject`，将 `streamStructure` 重命名为 `streamObject`。相关名称也已相应更改。\n- **破坏性变更**：`streamObject` 的结果流包含额外的数据。你需要使用 `stream.partialObject` 或解构来访问它：\n\n  ```ts\n  const objectStream = await streamObject({\n    \u002F\u002F ...\n  });\n\n  for await (const { partialObject } of objectStream) {\n    console.clear();\n    console.log(partialObject);\n  }\n  ```\n\n- **破坏性变更**：成功完成 `Schema` 验证后的结果现在存储在 `value` 
属性中（之前是 `data`）。","2024-01-23T10:46:33",{"id":205,"version":206,"summary_zh":207,"released_at":208},289079,"v0.130.1","### 修复\r\n\r\n- 双工语音流在 Vercel Edge Functions 中可用。","2024-01-22T17:43:19",{"id":210,"version":211,"summary_zh":212,"released_at":213},289080,"v0.130.0","### Changed\r\n\r\n- **breaking change**: updated `generateTranscription` interface. The function now takes a `mimeType` and `audioData` (base64-encoded string, `Uint8Array`, `Buffer` or `ArrayBuffer`). Example:\r\n\r\n  ```ts\r\n  import { generateTranscription, openai } from \"modelfusion\";\r\n  import fs from \"node:fs\";\r\n\r\n  const transcription = await generateTranscription({\r\n    model: openai.Transcriber({ model: \"whisper-1\" }),\r\n    mimeType: \"audio\u002Fmp3\",\r\n    audioData: await fs.promises.readFile(\"data\u002Ftest.mp3\"),\r\n  });\r\n  ```\r\n\r\n- Images in instruction and chat prompts can be `Buffer` or `ArrayBuffer` instances (in addition to base64-encoded strings and `Uint8Array` instances).","2024-01-21T15:36:25",{"id":215,"version":216,"summary_zh":217,"released_at":218},289081,"v0.129.1",".","2024-01-21T11:09:47",{"id":220,"version":221,"summary_zh":222,"released_at":223},289082,"v0.129.0","### Changed\r\n\r\n- **breaking change**: Usage of Node `async_hooks` has been renamed from `node:async_hooks` to `async_hooks` for easier Webpack configuration. To exclude the `async_hooks` from client-side bundling, you can use the following config for Next.js (`next.config.mjs` or `next.config.js`):\r\n\r\n  ```js\r\n  \u002F**\r\n   * @type {import('next').NextConfig}\r\n   *\u002F\r\n  const nextConfig = {\r\n    webpack: (config, { isServer }) => {\r\n      if (isServer) {\r\n        return config;\r\n      }\r\n\r\n      config.resolve = config.resolve ?? {};\r\n      config.resolve.fallback = config.resolve.fallback ?? 
{};\r\n\r\n      \u002F\u002F async hooks is not available in the browser:\r\n      config.resolve.fallback.async_hooks = false;\r\n\r\n      return config;\r\n    },\r\n  };\r\n  ```","2024-01-20T16:55:12",{"id":225,"version":226,"summary_zh":227,"released_at":228},289083,"v0.128.0","### Changed\r\n\r\n- **breaking change**: ModelFusion uses `Uint8Array` instead of `Buffer` for better cross-platform compatibility (see also [\"Goodbye, Node.js Buffer\"](https:\u002F\u002Fsindresorhus.com\u002Fblog\u002Fgoodbye-nodejs-buffer)). This can lead to breaking changes in your code if you use `Buffer`-specific methods.\r\n- **breaking change**: Image content in multi-modal instruction and chat inputs (e.g. for GPT Vision) is passed in the `image` property (instead of `base64Image`) and supports both base64 strings and `Uint8Array` inputs:\r\n\r\n  ```ts\r\n  const image = fs.readFileSync(path.join(\"data\", \"example-image.png\"), {\r\n    encoding: \"base64\",\r\n  });\r\n\r\n  const textStream = await streamText({\r\n    model: openai.ChatTextGenerator({\r\n      model: \"gpt-4-vision-preview\",\r\n      maxGenerationTokens: 1000,\r\n    }),\r\n\r\n    prompt: [\r\n      openai.ChatMessage.user([\r\n        { type: \"text\", text: \"Describe the image in detail:\\n\\n\" },\r\n        { type: \"image\", image, mimeType: \"image\u002Fpng\" },\r\n      ]),\r\n    ],\r\n  });\r\n  ```\r\n\r\n- OpenAI-compatible providers with predefined API configurations have a customized provider name that shows up in the events.","2024-01-20T09:51:24",{"id":230,"version":231,"summary_zh":232,"released_at":233},289084,"v0.127.0","### Changed\r\n\r\n- **breaking change**: `streamStructure` returns an async iterable over deep partial objects. If you need to get the fully validated final result, you can use the `fullResponse: true` option and await the `structurePromise` value. 
Example:\r\n\r\n  ```ts\r\n  const { structureStream, structurePromise } = await streamStructure({\r\n    model: ollama\r\n      .ChatTextGenerator({\r\n        model: \"openhermes2.5-mistral\",\r\n        maxGenerationTokens: 1024,\r\n        temperature: 0,\r\n      })\r\n      .asStructureGenerationModel(jsonStructurePrompt.text()),\r\n\r\n    schema: zodSchema(\r\n      z.object({\r\n        characters: z.array(\r\n          z.object({\r\n            name: z.string(),\r\n            class: z\r\n              .string()\r\n              .describe(\"Character class, e.g. warrior, mage, or thief.\"),\r\n            description: z.string(),\r\n          })\r\n        ),\r\n      })\r\n    ),\r\n\r\n    prompt:\r\n      \"Generate 3 character descriptions for a fantasy role playing game.\",\r\n\r\n    fullResponse: true,\r\n  });\r\n\r\n  for await (const partialStructure of structureStream) {\r\n    console.clear();\r\n    console.log(partialStructure);\r\n  }\r\n\r\n  const structure = await structurePromise;\r\n\r\n  console.clear();\r\n  console.log(\"FINAL STRUCTURE\");\r\n  console.log(structure);\r\n  ```\r\n\r\n- **breaking change**: Renamed `text` value in `streamText` with `fullResponse: true` to `textPromise`.\r\n\r\n### Fixed\r\n\r\n- Ollama streaming.\r\n- Ollama structure generation and streaming.","2024-01-15T07:17:32",{"id":235,"version":236,"summary_zh":237,"released_at":238},289085,"v0.126.0","### Changed\r\n\r\n- **breaking change**: rename `useTool` to `runTool` and `useTools` to `runTools` to avoid confusion with React hooks.","2024-01-15T05:44:30",{"id":240,"version":241,"summary_zh":242,"released_at":243},289086,"v0.125.0","### Added\r\n\r\n- Perplexity AI chat completion support. 
Example:\r\n\r\n  ```ts\r\n  import { openaicompatible, streamText } from \"modelfusion\";\r\n\r\n  const textStream = await streamText({\r\n    model: openaicompatible\r\n      .ChatTextGenerator({\r\n        api: openaicompatible.PerplexityApi(),\r\n        provider: \"openaicompatible-perplexity\",\r\n        model: \"pplx-70b-online\", \u002F\u002F online model with access to web search\r\n        maxGenerationTokens: 500,\r\n      })\r\n      .withTextPrompt(),\r\n\r\n    prompt: \"What is RAG in AI?\",\r\n  });\r\n  ```","2024-01-14T16:49:58",{"id":245,"version":246,"summary_zh":247,"released_at":248},289087,"v0.124.0","### Added\r\n\r\n- [Embedding-support for OpenAI-compatible providers](https:\u002F\u002Fmodelfusion.dev\u002Fintegration\u002Fmodel-provider\u002Fopenaicompatible\u002F#embed-text). You can for example use the Together AI embedding endpoint:\r\n\r\n  ```ts\r\n  import { embed, openaicompatible } from \"modelfusion\";\r\n\r\n  const embedding = await embed({\r\n    model: openaicompatible.TextEmbedder({\r\n      api: openaicompatible.TogetherAIApi(),\r\n      provider: \"openaicompatible-togetherai\",\r\n      model: \"togethercomputer\u002Fm2-bert-80M-8k-retrieval\",\r\n    }),\r\n    value: \"At first, Nox didn't know what to do with the pup.\",\r\n  });\r\n  ```","2024-01-13T20:08:55",{"id":250,"version":251,"summary_zh":252,"released_at":253},289088,"v0.123.0","### Added\r\n\r\n- `classify` model function ([docs](https:\u002F\u002Fmodelfusion.dev\u002Fguide\u002Ffunction\u002Fclassify)) for classifying values. 
The `SemanticClassifier` has been renamed to `EmbeddingSimilarityClassifier` and can be used in conjunction with `classify`:\r\n\r\n  ```ts\r\n  import { classify, EmbeddingSimilarityClassifier, openai } from \"modelfusion\";\r\n\r\n  const classifier = new EmbeddingSimilarityClassifier({\r\n    embeddingModel: openai.TextEmbedder({ model: \"text-embedding-ada-002\" }),\r\n    similarityThreshold: 0.82,\r\n    clusters: [\r\n      {\r\n        name: \"politics\" as const,\r\n        values: [\r\n          \"they will save the country!\",\r\n          \u002F\u002F ...\r\n        ],\r\n      },\r\n      {\r\n        name: \"chitchat\" as const,\r\n        values: [\r\n          \"how's the weather today?\",\r\n          \u002F\u002F ...\r\n        ],\r\n      },\r\n    ],\r\n  });\r\n\r\n  \u002F\u002F strongly typed result:\r\n  const result = await classify({\r\n    model: classifier,\r\n    value: \"don't you love politics?\",\r\n  });\r\n  ```\r\n","2024-01-13T18:15:20",{"id":255,"version":256,"summary_zh":257,"released_at":258},289089,"v0.122.0","### Changed\r\n\r\n- **breaking change**: Switch from positional parameters to named parameters (parameter object) for all model and tool functions. The parameter object is the first and only parameter of the function. Additional options (last parameter before) are now part of the parameter object. 
Example:\r\n\r\n  ```ts\r\n  \u002F\u002F old:\r\n  const text = await generateText(\r\n    openai\r\n      .ChatTextGenerator({\r\n        model: \"gpt-3.5-turbo\",\r\n        maxGenerationTokens: 1000,\r\n      })\r\n      .withTextPrompt(),\r\n\r\n    \"Write a short story about a robot learning to love\",\r\n\r\n    {\r\n      functionId: \"example-function\",\r\n    }\r\n  );\r\n\r\n  \u002F\u002F new:\r\n  const text = await generateText({\r\n    model: openai\r\n      .ChatTextGenerator({\r\n        model: \"gpt-3.5-turbo\",\r\n        maxGenerationTokens: 1000,\r\n      })\r\n      .withTextPrompt(),\r\n\r\n    prompt: \"Write a short story about a robot learning to love\",\r\n\r\n    functionId: \"example-function\",\r\n  });\r\n  ```\r\n\r\n  This change was made to make the API more flexible and to allow for future extensions.","2024-01-13T09:24:44"]