[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-u14app--gemini-next-chat":3,"tool-u14app--gemini-next-chat":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",156804,2,"2026-04-15T11:34:33",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":78,"languages":79,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":32,"env_os":103,"env_gpu":104,"env_ram":104,"env_deps":105,"category_tags":112,"github_topics":113,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":129,"updated_at":130,"faqs":131,"releases":171},7785,"u14app\u002Fgemini-next-chat","gemini-next-chat","Deploy your private Gemini application for free with one click, supporting Gemini 1.5, Gemini 2.0 models.","Gemini Next Chat 是一款开源的聊天机器人框架，旨在帮助用户轻松搭建私有的 Gemini AI 对话应用。它核心解决了用户希望免费、快速部署专属 AI 助手的需求，无需复杂的服务器配置，只需一键即可在 Vercel 或 Cloudflare 等平台上完成部署。\n\n这款工具非常适合希望拥有独立 AI 服务环境的普通用户、开发者以及技术爱好者使用。对于不想依赖第三方付费服务的个人，它提供了一个低门槛的私有化方案；对于开发者，它则是一个基于 Next.js、Tailwind CSS 和 shadcn\u002Fui 构建的高质量项目模板，便于二次开发。\n\n在技术亮点方面，Gemini Next Chat 全面支持谷歌最新的 Gemini 1.5 Pro、Gemini 1.5 
Flash 以及具备视觉识别能力的 Gemini Pro Vision 模型。它不仅提供了流畅的网页端体验，还支持打包为 Windows、macOS 和 Linux 的桌面应用程序，实现跨平台无缝使用。此外，项目还具备函数调用（Function Calling）扩展能力，为未来集成更多自动化任务留下了充足空间。无论是","Gemini Next Chat 是一款开源的聊天机器人框架，旨在帮助用户轻松搭建私有的 Gemini AI 对话应用。它核心解决了用户希望免费、快速部署专属 AI 助手的需求，无需复杂的服务器配置，只需一键即可在 Vercel 或 Cloudflare 等平台上完成部署。\n\n这款工具非常适合希望拥有独立 AI 服务环境的普通用户、开发者以及技术爱好者使用。对于不想依赖第三方付费服务的个人，它提供了一个低门槛的私有化方案；对于开发者，它则是一个基于 Next.js、Tailwind CSS 和 shadcn\u002Fui 构建的高质量项目模板，便于二次开发。\n\n在技术亮点方面，Gemini Next Chat 全面支持谷歌最新的 Gemini 1.5 Pro、Gemini 1.5 Flash 以及具备视觉识别能力的 Gemini Pro Vision 模型。它不仅提供了流畅的网页端体验，还支持打包为 Windows、macOS 和 Linux 的桌面应用程序，实现跨平台无缝使用。此外，项目还具备函数调用（Function Calling）扩展能力，为未来集成更多自动化任务留下了充足空间。无论是想体验最新多模态模型能力，还是构建团队内部的智能助手，Gemini Next Chat 都是一个高效且灵活的选择。","\u003Cdiv align=\"center\">\n\u003Ch1>Gemini Next Chat\u003C\u002Fh1>\n\n![GitHub deployments](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdeployments\u002Fu14app\u002Fgemini-next-chat\u002FProduction)\n![GitHub Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fu14app\u002Fgemini-next-chat)\n![Docker Image Size (tag)](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fimage-size\u002Fxiangfa\u002Ftalk-with-gemini\u002Flatest)\n![Docker Pulls](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fxiangfa\u002Ftalk-with-gemini)\n![GitHub License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fu14app\u002Fgemini-next-chat)\n\nDeploy your private Gemini application for free with one click, supporting Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini Pro and Gemini Pro Vision models.\n\n**English** · 
[简体中文](.\u002FREADME.zh-CN.md)\n\n[![Vercel](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVercel-111111?style=flat&logo=vercel&logoColor=white)](https:\u002F\u002Fvercel.com\u002Fnew\u002Fclone?repository-url=https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat&project-name=gemini-next-chat&env=GEMINI_API_KEY&env=ACCESS_PASSWORD&repository-name=gemini-next-chat)\n[![Cloudflare](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCloudflare-F69652?style=flat&logo=cloudflare&logoColor=white)](#deploy-to-cloudflare)\n\n[![Gemini](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGemini-8E75B2?style=flat&logo=googlegemini&logoColor=white)](https:\u002F\u002Fai.google.dev\u002F)\n[![Next](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNext.js-111111?style=flat&logo=nextdotjs&logoColor=white)](https:\u002F\u002Fnextjs.org\u002F)\n[![Tailwind CSS](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTailwind%20CSS-06B6D4?style=flat&logo=tailwindcss&logoColor=white)](https:\u002F\u002Ftailwindcss.com\u002F)\n[![shadcn\u002Fui](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fshadcn\u002Fui-111111?style=flat&logo=shadcnui&logoColor=white)](https:\u002F\u002Fui.shadcn.com\u002F)\n\n[![Web][Web-image]][web-url]\n[![MacOS][MacOS-image]][download-url]\n[![Windows][Windows-image]][download-url]\n[![Linux][Linux-image]][download-url]\n\n[Web App][web-url] \u002F [Desktop App][download-url] \u002F [Issues](https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues)\n\n[web-url]: https:\u002F\u002Fgemini.u14.app\u002F\n[download-url]: https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Freleases\n[Web-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeb-PWA-orange?logo=microsoftedge\n[Windows-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Windows-blue?logo=windows\n[MacOS-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-MacOS-black?logo=apple\n[Linux-image]: 
https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Linux-333?logo=ubuntu\n\n**Share GeminiNextChat Repository**\n\n[![][share-x-shield]][share-x-link]\n[![][share-telegram-shield]][share-telegram-link]\n[![][share-whatsapp-shield]][share-whatsapp-link]\n[![][share-reddit-shield]][share-reddit-link]\n[![][share-weibo-shield]][share-weibo-link]\n[![][share-mastodon-shield]][share-mastodon-link]\n\n[share-mastodon-link]: https:\u002F\u002Fmastodon.social\u002Fshare?text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat%20%23chatbot%20%23gemini\n[share-mastodon-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20mastodon-black?labelColor=black&logo=mastodon&logoColor=white&style=flat-square\n[share-reddit-link]: https:\u002F\u002Fwww.reddit.com\u002Fsubmit?title=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-reddit-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20reddit-black?labelColor=black&logo=reddit&logoColor=white&style=flat-square\n[share-telegram-link]: https:\u002F\u002Ft.me\u002Fshare\u002Furl\"?text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-telegram-shield]: 
https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20telegram-black?labelColor=black&logo=telegram&logoColor=white&style=flat-square\n[share-weibo-link]: http:\u002F\u002Fservice.weibo.com\u002Fshare\u002Fshare.php?sharesource=weibo&title=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-weibo-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20weibo-black?labelColor=black&logo=sinaweibo&logoColor=white&style=flat-square\n[share-whatsapp-link]: https:\u002F\u002Fapi.whatsapp.com\u002Fsend?text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat%20%23chatbot%20%23gemini\n[share-whatsapp-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20whatsapp-black?labelColor=black&logo=whatsapp&logoColor=white&style=flat-square\n[share-x-link]: https:\u002F\u002Fx.com\u002Fintent\u002Ftweet?hashtags=chatbot%2CchatGPT%2CopenAI&text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-x-shield]: 
https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20x-black?labelColor=black&logo=x&logoColor=white&style=flat-square\n\n![cover](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_f70e459d1ff7.png)\n\nSimple interface, supports image recognition and voice conversation\n\n![Gemini](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_cb543807e757.png)\n\nSupports Gemini 1.5 and Gemini 2.0 multimodal models\n\n![Support plugins](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_d122d232c573.jpg)\n\nSupport plugins, with built-in Web search, Web reader, Arxiv search, Weather and other practical plugins\n\n![Multimodal Live](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_16c69c81c7ef.jpg)\n\nSupport Multimodal Live API, smooth voice and video experience\n\n![Tray app](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_fb45bce5a96f.png)\n\nA cross-platform application client that supports a permanent menu bar, doubling your work efficiency\n\n\u003C\u002Fdiv>\n\n> Note: If you encounter problems during the use of the project, you can check the known problems and solutions of [FAQ](#FAQ).\n\n## TOC\n\n- [Features](#features)\n- [Roadmap](#️roadmap)\n- [Get Started](#get-started)\n  - [Updating Code](#updating-code)\n- [Environment Variables](#environment-variables)\n  - [Access Password](#access-password)\n  - [Custom model list](#️custom-model-list)\n- [Development](#development)\n  - [Requirements](#️requirements)\n- [Deployment](#deployment)\n  - [Docker (Recommended)](#docker-recommended)\n  - [Static Deployment](#static-deployment)\n- [FAQ](#faq)\n- [LICENSE](#license)\n- [Star History](#star-history)\n\n## Features\n\n- **Deploy for free with one-click** on Vercel in under 1 minute\n- Provides a very small (~4MB) cross-platform client (Windows\u002FMacOS\u002FLinux), can stay in the 
menu bar to improve office efficiency\n- Supports multi-modal models and can understand images, videos, audios and some text documents\n- Talk mode: Let you talk directly to Gemini, support Multimodal Live API\n- Visual recognition allows Gemini to understand the content of the picture\n- Assistant market with hundreds of selected system instruction\n- Support plugins, with built-in Web search, Web reader, Arxiv search, Weather and other practical plugins\n- Conversation list, so you can keep track of important conversations or discuss different topics with Gemini\n- Artifact support, allowing you to modify the conversation content more elegantly\n- Full Markdown support: KaTex formulas, code highlighting, Mermaid charts, etc.\n- Automatically compress contextual chat records to save Tokens while supporting very long conversations\n- Privacy and security, all data is saved locally in the user's browser\n- Support PWA, can run as an application\n- Well-designed UI, responsive design, supports dark mode\n- Extremely fast first screen loading speed, supporting streaming response\n- Static deployment supports deployment on any website service that supports static pages, such as Github Page, Cloudflare, Vercel, etc.\n- Multi-language support: English、简体中文、繁体中文、日本語、한국어、Español、Deutsch、Français、Português、Русский and العربية\n\n## Roadmap\n\n- [x] Reconstruct the topic square and introduce Prompt list\n- [x] Use tauri to package desktop applications\n- [x] Implementation based on functionCall plug-in\n- [x] Support conversation list\n- [x] Support conversation export features\n- [x] Enable Multimodal Live API\n- [ ] Support networked Deep Research mode\n- [ ] Support local knowledge base\n\n## Get Started\n\n1. Get [Gemini API Key](https:\u002F\u002Faistudio.google.com\u002Fapp\u002Fapikey)\n2. 
One-click deployment of the project, you can choose to deploy to Vercel\n\n   [![Deploy with Vercel](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_a4c0f8073a9c.png)](https:\u002F\u002Fvercel.com\u002Fnew\u002Fclone?repository-url=https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat&project-name=gemini-next-chat&env=GEMINI_API_KEY&env=ACCESS_PASSWORD&repository-name=gemini-next-chat)\n\n3. Start using\n\n### Deploy to Cloudflare\n\nCurrently the project supports deployment to Cloudflare, but you need to follow [How to deploy to Cloudflare Page](.\u002Fdocs\u002FHow-to-deploy-to-Cloudflare-Page.md) to do it.\n\n### Updating Code\n\nIf you want to update instantly, you can check out the [GitHub documentation](https:\u002F\u002Fdocs.github.com\u002Fen\u002Fpull-requests\u002Fcollaborating-with-pull-requests\u002Fworking-with-forks\u002Fsyncing-a-fork) to learn how to synchronize a forked project with upstream code.\n\nYou can star or watch this project or follow author to get release notifications in time.\n\n## Environment Variables\n\n#### `GEMINI_API_KEY` (optional)\n\nYour Gemini api key. This is required if you need to `enable` the server api. **This variable does not affect the value of the Gemini key on the frontend pages.**\nSupports multiple keys, each key is separated by `,`, i.e. `key1,key2,key3`\n\n#### `GEMINI_API_BASE_URL` (optional)\n\n> Default: `https:\u002F\u002Fgenerativelanguage.googleapis.com`\n\n> Examples: `http:\u002F\u002Fyour-gemini-proxy.com`\n\nOverride the Gemini api request base url. **In order to avoid server-side proxy url leakage, the value in the front-end page will not be overwritten and affected.**\n\n#### `NEXT_PUBLIC_GEMINI_MODEL_LIST` (optional)\n\nCustom model list, default: all.\n\n#### `NEXT_PUBLIC_UPLOAD_LIMIT` (optional)\n\nFile upload size limit. 
There is no file size limit by default.\n\n#### `ACCESS_PASSWORD` (optional)\n\nAccess password.\n\n#### `HEAD_SCRIPTS` (optional)\n\nInjected script code can be used for statistics or error tracking.\n\n#### `EXPORT_BASE_PATH` (optional)\n\nOnly used to set the page base path in [static deployment](#static-deployment) mode.\n\n### Access Password\n\nThis project provides limited access control. Please add an environment variable named `ACCESS_PASSWORD` on the Vercel environment variables page.\n\nAfter adding or modifying this environment variable, please redeploy the project for the changes to take effect.\n\n### Custom model list\n\nThis project supports custom model lists. Please add an environment variable named `NEXT_PUBLIC_GEMINI_MODEL_LIST` in the `.env` file or environment variables page.\n\nThe default model list is represented by `all`, and multiple models are separated by `,`.\n\nIf you need to add a new model, please directly write the model name `all,new-model-name`, or use the `+` symbol plus the model name to add, that is, `all,+new-model-name`.\n\nIf you want to remove a model from the model list, use the `-` symbol followed by the model name to indicate removal, i.e. `all,-existing-model-name`. If you want to remove the default model list, you can use `-all`.\n\nIf you want to set a default model, you can use the `@` symbol plus the model name to indicate the default model, that is, `all,@default-model-name`.\n\n## Development\n\nIf you have not installed pnpm:\n\n```shell\nnpm install -g pnpm\n```\n\n```shell\n# 1. install Node.js and pnpm first\n# 2. config local variables, please change `.env.example` to `.env` or `.env.local`\n# 3. 
run\npnpm install\npnpm dev\n```\n\n### Requirements\n\nNode.js >= 18, Docker >= 20\n\n## Deployment\n\n### Docker (Recommended)\n\n> The Docker version needs to be 20 or above, otherwise it will prompt that the image cannot be found.\n\n> ⚠️ Note: Most of the time, the Docker image version will lag behind the latest release by 1 to 2 days, so the \"update exists\" prompt will continue to appear after deployment, which is normal.\n\n```shell\ndocker pull xiangfa\u002Ftalk-with-gemini:latest\n\ndocker run -d --name talk-with-gemini -p 5481:3000 xiangfa\u002Ftalk-with-gemini\n```\n\nYou can also specify additional environment variables:\n\n```shell\ndocker run -d --name talk-with-gemini \\\n   -p 5481:3000 \\\n   -e GEMINI_API_KEY=AIzaSy... \\\n   -e ACCESS_PASSWORD=your-password \\\n   xiangfa\u002Ftalk-with-gemini\n```\n\nIf you need to specify other environment variables, please add `-e key=value` to the above command to specify it.\n\nDeploy using `docker-compose.yml`:\n\n```yaml\nversion: '3.9'\nservices:\n   talk-with-gemini:\n      image: xiangfa\u002Ftalk-with-gemini\n      container_name: talk-with-gemini\n      environment:\n         - GEMINI_API_KEY=AIzaSy...\n         - ACCESS_PASSWORD=your-password\n      ports:\n         - 5481:3000\n```\n\n### Static Deployment\n\nYou can also build a static page version directly, and then upload all files in the `out` directory to any website service that supports static pages, such as GitHub Pages, Cloudflare, Vercel, etc.\n\n```shell\npnpm build:export\n```\n\nIf you deploy the project in a subdirectory and encounter resource loading failures when accessing, please add `EXPORT_BASE_PATH=\u002Fpath\u002Fproject` in the `.env` file or variable setting page.\n\n## Acknowledgments\n\n### Technology Stack\n\n- [Next.js](https:\u002F\u002Fnextjs.org\u002F)\n- [Shadcn UI](https:\u002F\u002Fui.shadcn.com\u002F)\n- [Tailwindcss](https:\u002F\u002Ftailwindcss.com\u002F)\n- 
[Zustand](https:\u002F\u002Fzustand-demo.pmnd.rs\u002F)\n\n### Inspiration\n\n- [Lobe Chat](https:\u002F\u002Fgithub.com\u002Flobehub\u002Flobe-chat)\n- [Next Web](https:\u002F\u002Fgithub.com\u002FChatGPTNextWeb\u002FNextChat)\n- [Open Canvas](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fopen-canvas)\n\n## FAQ\n\n#### Solution for “User location is not supported for the API use”\n\n1. Use Cloudflare AI Gateway to forward APIs. Currently, Cloudflare AI Gateway already supports Google Vertex AI related APIs. For how to use it, please refer to [How to Use Cloudflare AI Gateway](.\u002Fdocs\u002FUse-Cloudflare-AI-Gateway.md). This solution is fast and stable, and is **recommended**.\n\n2. Use Cloudflare Worker for API proxy forwarding. For detailed settings, please refer to [How to Use Cloudflare Worker Proxy API](.\u002Fdocs\u002FHow-to-deploy-the-Cloudflare-Worker-api-proxy.md). Note that this solution may not work properly in some cases.\n\n#### Why can't I access the website in China after deploying it with one click using Vercel\n\nThe domain name generated after deploying Vercel was blocked by the Chinese network a few years ago, but the server's IP address was not blocked. You can customize the domain name and access it normally in China. Since Vercel does not have a server in China, it is normal to have some network fluctuations sometimes. For how to set the domain name, you can refer to the solution article [Vercel binds a custom domain name](https:\u002F\u002Fdocs.tangly1024.com\u002Farticle\u002Fvercel-domain) that I found online.\n\n#### Why can't I use Multimodal Live\n\nCurrently, the Multimodal Live API is only supported by the Gemini 2.0 Flash model, so you need to use the Gemini 2.0 Flash model to use it. Since the Gemini Multimodal Live API is not accessible in China, you may need to deploy a proxy forwarding API using Cloudflare Worker. 
For more information, refer to [Proxying the Multimodal Live API with Cloudflare Worker](.\u002Fdocs\u002FProxying-the-Multimodal-Live-API-with-Cloudflare-Worker.md).\n_Currently, Multimodal Live API does not support Chinese voice output._\n\n## Contributing\n\nContributions to this project are welcome! If you would like to contribute, please follow these steps:\n\n1. Fork the repository on GitHub.\n2. Clone your fork to your local machine.\n3. Create a new branch for your changes.\n4. Make your changes and commit them to your branch.\n5. Push your changes to your fork on GitHub.\n6. Open a pull request from your branch to the main repository.\n\nPlease ensure that your code follows the project's coding style and that all tests pass before submitting a pull request. If you find any bugs or have suggestions for improvements, feel free to open an issue on GitHub.\n\n## LICENSE\n\nThis project is licensed under the [MIT](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) License. 
See the LICENSE file for the full license text.\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_1aa60c779eb6.png)](https:\u002F\u002Fstar-history.com\u002F#u14app\u002Fgemini-next-chat&Date)\n","\u003Cdiv align=\"center\">\n\u003Ch1>Gemini Next Chat\u003C\u002Fh1>\n\n![GitHub deployments](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdeployments\u002Fu14app\u002Fgemini-next-chat\u002FProduction)\n![GitHub Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fu14app\u002Fgemini-next-chat)\n![Docker Image Size (tag)](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fimage-size\u002Fxiangfa\u002Ftalk-with-gemini\u002Flatest)\n![Docker Pulls](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fxiangfa\u002Ftalk-with-gemini)\n![GitHub License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fu14app\u002Fgemini-next-chat)\n\n一键免费部署您的私有 Gemini 应用程序，支持 Gemini 1.5 Pro、Gemini 1.5 Flash、Gemini Pro 和 Gemini Pro Vision 模型。\n\n**English** · [简体中文](.\u002FREADME.zh-CN.md)\n\n[![Vercel](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVercel-111111?style=flat&logo=vercel&logoColor=white)](https:\u002F\u002Fvercel.com\u002Fnew\u002Fclone?repository-url=https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat&project-name=gemini-next-chat&env=GEMINI_API_KEY&env=ACCESS_PASSWORD&repository-name=gemini-next-chat)\n[![Cloudflare](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCloudflare-F69652?style=flat&logo=cloudflare&logoColor=white)](#deploy-to-cloudflare)\n\n[![Gemini](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGemini-8E75B2?style=flat&logo=googlegemini&logoColor=white)](https:\u002F\u002Fai.google.dev\u002F)\n[![Next](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNext.js-111111?style=flat&logo=nextdotjs&logoColor=white)](https:\u002F\u002Fnextjs.org\u002F)\n[![Tailwind 
CSS](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTailwind%20CSS-06B6D4?style=flat&logo=tailwindcss&logoColor=white)](https:\u002F\u002Ftailwindcss.com\u002F)\n[![shadcn\u002Fui](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fshadcn\u002Fui-111111?style=flat&logo=shadcnui&logoColor=white)](https:\u002F\u002Fui.shadcn.com\u002F)\n\n[![Web][Web-image]][web-url]\n[![MacOS][MacOS-image]][download-url]\n[![Windows][Windows-image]][download-url]\n[![Linux][Linux-image]][download-url]\n\n[Web App][web-url] \u002F [Desktop App][download-url] \u002F [Issues](https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues)\n\n[web-url]: https:\u002F\u002Fgemini.u14.app\u002F\n[download-url]: https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Freleases\n[Web-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeb-PWA-orange?logo=microsoftedge\n[Windows-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Windows-blue?logo=windows\n[MacOS-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-MacOS-black?logo=apple\n[Linux-image]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Linux-333?logo=ubuntu\n\n**分享 GeminiNextChat 仓库**\n\n[![][share-x-shield]][share-x-link]\n[![][share-telegram-shield]][share-telegram-link]\n[![][share-whatsapp-shield]][share-whatsapp-link]\n[![][share-reddit-shield]][share-reddit-link]\n[![][share-weibo-shield]][share-weibo-link]\n[![][share-mastodon-shield]][share-mastodon-link]\n\n[share-mastodon-link]: https:\u002F\u002Fmastodon.social\u002Fshare?text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat%20%23chatbot%20%23gemini\n[share-mastodon-shield]: 
https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20mastodon-black?labelColor=black&logo=mastodon&logoColor=white&style=flat-square\n[share-reddit-link]: https:\u002F\u002Fwww.reddit.com\u002Fsubmit?title=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-reddit-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20reddit-black?labelColor=black&logo=reddit&logoColor=white&style=flat-square\n[share-telegram-link]: https:\u002F\u002Ft.me\u002Fshare\u002Furl\"?text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-telegram-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20telegram-black?labelColor=black&logo=telegram&logoColor=white&style=flat-square\n[share-weibo-link]: http:\u002F\u002Fservice.weibo.com\u002Fshare\u002Fshare.php?sharesource=weibo&title=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-weibo-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20weibo-black?labelColor=black&logo=sinaweibo&logoColor=white&style=flat-square\n[share-whatsapp-link]: 
https:\u002F\u002Fapi.whatsapp.com\u002Fsend?text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat%20%23chatbot%20%23gemini\n[share-whatsapp-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20whatsapp-black?labelColor=black&logo=whatsapp&logoColor=white&style=flat-square\n[share-x-link]: https:\u002F\u002Fx.com\u002Fintent\u002Ftweet?hashtags=chatbot%2CchatGPT%2CopenAI&text=Check%20this%20GitHub%20repository%20out%20GeminiNextChat%20-%20An%20open-source%2C%20extensible%20(Function%20Calling)%2C%20high-performance%20gemini%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20Gemini%20web%20application.%20https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat\n[share-x-shield]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-share%20on%20x-black?labelColor=black&logo=x&logoColor=white&style=flat-square\n\n![cover](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_f70e459d1ff7.png)\n\n界面简洁，支持图像识别和语音对话\n\n![Gemini](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_cb543807e757.png)\n\n支持 Gemini 1.5 和 Gemini 2.0 多模态模型\n\n![支持插件](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_d122d232c573.jpg)\n\n支持插件，内置网页搜索、网页阅读器、Arxiv 搜索、天气等实用插件\n\n![多模态直播](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_16c69c81c7ef.jpg)\n\n支持多模态 Live API，带来流畅的语音和视频体验\n\n![托盘应用](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_fb45bce5a96f.png)\n\n一款跨平台的应用客户端，支持常驻菜单栏，让您的工作效率翻倍\n\n\u003C\u002Fdiv>\n\n> 注意：如果您在使用本项目时遇到问题，可以查看 [FAQ](#FAQ) 中的已知问题及解决方案。\n\n## 目录\n\n- [功能](#features)\n- 
[路线图](#️roadmap)\n- [开始使用](#get-started)\n  - [更新代码](#updating-code)\n- [环境变量](#environment-variables)\n  - [访问密码](#access-password)\n  - [自定义模型列表](#️custom-model-list)\n- [开发](#development)\n  - [要求](#️requirements)\n- [部署](#deployment)\n  - [Docker（推荐）](#docker-recommended)\n  - [静态部署](#static-deployment)\n- [常见问题](#faq)\n- [许可证](#license)\n- [星标历史](#star-history)\n\n## 功能\n\n- **在 Vercel 上一键免费部署**，不到 1 分钟即可完成\n- 提供体积非常小（约 4MB）的跨平台客户端（Windows\u002FMacOS\u002FLinux），可常驻菜单栏以提升办公效率\n- 支持多模态模型，能够理解图片、视频、音频及部分文本文档\n- 对话模式：允许您直接与 Gemini 交流，支持多模态实时 API\n- 视觉识别功能使 Gemini 能够理解图片内容\n- 助手市场提供数百条精选系统指令\n- 支持插件，内置网页搜索、网页阅读器、Arxiv 搜索、天气等实用插件\n- 对话列表功能，方便您跟踪重要对话或与 Gemini 讨论不同话题\n- 文档支持，让您更优雅地编辑对话内容\n- 完全支持 Markdown：KaTex 公式、代码高亮、Mermaid 图表等\n- 自动压缩上下文聊天记录以节省 Token，同时支持超长对话\n- 注重隐私与安全，所有数据均保存在用户本地浏览器中\n- 支持 PWA，可作为应用程序运行\n- 界面设计精美，响应式布局，支持深色模式\n- 首屏加载速度极快，支持流式响应\n- 静态部署支持在任何支持静态页面的网站服务上部署，如 Github Page、Cloudflare、Vercel 等\n- 多语言支持：英语、简体中文、繁体中文、日语、韩语、西班牙语、德语、法语、葡萄牙语、俄语和阿拉伯语\n\n## 路线图\n\n- [x] 重构主题广场并引入提示词列表\n- [x] 使用 Tauri 打包桌面应用\n- [x] 基于 functionCall 插件实现功能\n- [x] 支持对话列表\n- [x] 支持对话导出功能\n- [x] 开启多模态实时 API\n- [ ] 支持联网深度研究模式\n- [ ] 支持本地知识库\n\n## 开始使用\n\n1. 获取 [Gemini API 密钥](https:\u002F\u002Faistudio.google.com\u002Fapp\u002Fapikey)\n2. 一键部署项目，您可以选择部署到 Vercel\n\n   [![Deploy with Vercel](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_a4c0f8073a9c.png)](https:\u002F\u002Fvercel.com\u002Fnew\u002Fclone?repository-url=https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat&project-name=gemini-next-chat&env=GEMINI_API_KEY&env=ACCESS_PASSWORD&repository-name=gemini-next-chat)\n\n3. 
开始使用\n\n### 部署到 Cloudflare\n\n目前该项目支持部署到 Cloudflare，但您需要按照 [如何部署到 Cloudflare Pages](.\u002Fdocs\u002FHow-to-deploy-to-Cloudflare-Page.md) 的说明进行操作。\n\n### 更新代码\n\n如果您想即时更新，可以查看 [GitHub 文档](https:\u002F\u002Fdocs.github.com\u002Fen\u002Fpull-requests\u002Fcollaborating-with-pull-requests\u002Fworking-with-forks\u002Fsyncing-a-fork) 了解如何将分叉项目与上游代码同步。\n\n您可以为该项目加星标或关注作者，以便及时获取发布通知。\n\n## 环境变量\n\n#### `GEMINI_API_KEY`（可选）\n\n您的 Gemini API 密钥。如果需要启用服务器 API，则此变量为必填项。**该变量不会影响前端页面上的 Gemini 密钥值。**\n支持多个密钥，各密钥之间用逗号分隔，例如 `key1,key2,key3`。\n\n#### `GEMINI_API_BASE_URL`（可选）\n\n> 默认值：`https:\u002F\u002Fgenerativelanguage.googleapis.com`\n\n> 示例：`http:\u002F\u002Fyour-gemini-proxy.com`\n\n覆盖 Gemini API 请求的基础 URL。**为避免服务器端代理 URL 泄露，前端页面中的值不会被覆盖或影响。**\n\n#### `NEXT_PUBLIC_GEMINI_MODEL_LIST`（可选）\n\n自定义模型列表，默认为全部。\n\n#### `NEXT_PUBLIC_UPLOAD_LIMIT`（可选）\n\n文件上传大小限制。默认情况下无文件大小限制。\n\n#### `ACCESS_PASSWORD`（可选）\n\n访问密码。\n\n#### `HEAD_SCRIPTS`（可选）\n\n注入的脚本代码可用于统计或错误追踪。\n\n#### `EXPORT_BASE_PATH`（可选）\n\n仅用于在[静态部署](#static-deployment)模式下设置页面基础路径。\n\n### 访问密码\n\n该项目提供有限的访问控制。请在 Vercel 的环境变量页面添加名为 `ACCESS_PASSWORD` 的环境变量。\n\n添加或修改此环境变量后，请重新部署项目以使更改生效。\n\n### 自定义模型列表\n\n该项目支持自定义模型列表。请在 `.env` 文件或环境变量页面添加名为 `NEXT_PUBLIC_GEMINI_MODEL_LIST` 的环境变量。\n\n默认模型列表表示为 `all`，多个模型之间用逗号分隔。\n若需添加新模型，可直接写入 `all,new-model-name`，或使用 `+` 符号加上模型名称来添加，即 `all,+new-model-name`。\n若要从模型列表中移除某个模型，可在模型名称前加 `-` 符号，例如 `all,-existing-model-name`。若要移除默认模型列表，可使用 `-all`。\n若要设置默认模型，可使用 `@` 符号加上模型名称来指定，即 `all,@default-model-name`。\n\n## 开发\n\n如果您尚未安装 pnpm：\n\n```shell\nnpm install -g pnpm\n```\n\n```shell\n# 1. 首先安装 Node.js 和 pnpm\n# 2. 配置本地变量，请将 `.env.example` 重命名为 `.env` 或 `.env.local`\n# 3. 
运行\npnpm install\npnpm dev\n```\n\n### 要求\n\nNode.js ≥ 18，Docker ≥ 20\n\n## 部署\n\n### Docker（推荐）\n\n> Docker 版本需为 20 或以上，否则会提示无法找到镜像。\n\n> ⚠️ 注意：通常情况下，Docker 版本会比最新版本落后 1 到 2 天，因此部署后可能会持续出现“存在更新”的提示，这是正常现象。\n\n```shell\ndocker pull xiangfa\u002Ftalk-with-gemini:latest\n\ndocker run -d --name talk-with-gemini -p 5481:3000 xiangfa\u002Ftalk-with-gemini\n```\n\n您还可以指定其他环境变量：\n\n```shell\ndocker run -d --name talk-with-gemini \\\n   -p 5481:3000 \\\n   -e GEMINI_API_KEY=AIzaSy... \\\n   -e ACCESS_PASSWORD=your-password \\\n   xiangfa\u002Ftalk-with-gemini\n```\n\n如需指定其他环境变量，请在上述命令中添加 `-e key=value` 来设置。\n\n使用 `docker-compose.yml` 部署：\n\n```yml\nversion: '3.9'\nservices:\n   talk-with-gemini:\n      image: xiangfa\u002Ftalk-with-gemini\n      container_name: talk-with-gemini\n      environment:\n         - GEMINI_API_KEY=AIzaSy...\n         - ACCESS_PASSWORD=your-password\n      ports:\n         - 5481:3000\n```\n\n### 静态部署\n\n你也可以直接构建一个静态页面版本，然后将 `out` 目录中的所有文件上传到任何支持静态页面的网站服务上，比如 GitHub Pages、Cloudflare、Vercel 等。\n\n```shell\npnpm build:export\n```\n\n如果你将项目部署在子目录中，并且访问时遇到资源加载失败的情况，请在 `.env` 文件或变量设置页面中添加 `EXPORT_BASE_PATH=\u002Fpath\u002Fproject`。\n\n## 致谢\n\n### 技术栈\n\n- [Next.js](https:\u002F\u002Fnextjs.org\u002F)\n- [Shadcn UI](https:\u002F\u002Fui.shadcn.com\u002F)\n- [Tailwindcss](https:\u002F\u002Ftailwindcss.com\u002F)\n- [Zustand](https:\u002F\u002Fzustand-demo.pmnd.rs\u002F)\n\n### 灵感来源\n\n- [Lobe Chat](https:\u002F\u002Fgithub.com\u002Flobehub\u002Flobe-chat)\n- [Next Web](https:\u002F\u002Fgithub.com\u002FChatGPTNextWeb\u002FNextChat)\n- [Open Canvas](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fopen-canvas)\n\n## 常见问题解答\n\n#### “API 不支持用户所在地区”的解决方案\n\n1. 使用 Cloudflare AI Gateway 转发 API。目前，Cloudflare AI Gateway 已经支持 Google Vertex AI 相关的 API。有关如何使用，请参阅 [如何使用 Cloudflare AI Gateway](.\u002Fdocs\u002FUse-Cloudflare-AI-Gateway.md)。此方案速度快、稳定性高，**推荐**使用。\n\n2. 
使用 Cloudflare Worker 进行 API 代理转发。详细设置请参考 [如何使用 Cloudflare Worker 代理 API](.\u002Fdocs\u002FHow-to-deploy-the-Cloudflare-Worker-api-proxy.md)。请注意，该方案在某些情况下可能无法正常工作。\n\n#### 为什么使用 Vercel 一键部署后在中国无法访问网站？\n\n几年前，Vercel 部署后生成的域名曾被中国网络屏蔽，但服务器的 IP 地址并未被屏蔽。你可以自定义域名，在中国即可正常访问。由于 Vercel 在中国没有服务器，因此有时会出现网络波动的情况，这属于正常现象。关于如何设置域名，可以参考我在网上找到的解决方案文章 [Vercel 绑定自定义域名](https:\u002F\u002Fdocs.tangly1024.com\u002Farticle\u002Fvercel-domain)。\n\n#### 为什么无法使用多模态直播功能？\n\n目前，多模态直播 API 仅由 Gemini 2.0 Flash 模型支持，因此你需要使用 Gemini 2.0 Flash 模型才能使用该功能。由于 Gemini 多模态直播 API 在中国无法访问，你可能需要使用 Cloudflare Worker 部署一个代理转发 API。更多信息请参阅 [使用 Cloudflare Worker 代理多模态直播 API](.\u002Fdocs\u002FProxying-the-Multimodal-Live-API-with-Cloudflare-Worker.md)。\n*目前，多模态直播 API 不支持中文语音输出。*\n\n## 参与贡献\n\n欢迎为本项目做出贡献！如果你想参与贡献，请按照以下步骤操作：\n\n1. 在 GitHub 上 fork 该项目。\n2. 将你的 fork 克隆到本地。\n3. 为你的更改创建一个新的分支。\n4. 进行修改并提交到你的分支。\n5. 将更改推送到你在 GitHub 上的 fork。\n6. 从你的分支向主仓库发起 Pull Request。\n\n请确保你的代码符合项目的编码规范，并在提交 Pull Request 前运行所有测试以确保通过。如果你发现任何 bug 或有改进建议，欢迎在 GitHub 上提交 issue。\n\n## 许可证\n\n本项目采用 [MIT](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) 许可证授权。完整的许可证文本请参阅 LICENSE 文件。\n\n## 星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_readme_1aa60c779eb6.png)](https:\u002F\u002Fstar-history.com\u002F#u14app\u002Fgemini-next-chat&Date)","# Gemini Next Chat 快速上手指南\n\nGemini Next Chat 是一个开源、高性能的 Gemini 聊天机器人框架，支持一键免费部署私有化 Web 应用。它支持 Gemini 1.5\u002F2.0 多模态模型（图像、视频、音频识别）、语音对话模式、插件系统（联网搜索、天气等）以及跨平台桌面客户端。\n\n## 环境准备\n\n在开始之前，请确保满足以下前置条件：\n\n*   **Gemini API Key**：你需要一个 Google AI Studio 的 API Key。\n    *   获取地址：[https:\u002F\u002Faistudio.google.com\u002Fapp\u002Fapikey](https:\u002F\u002Faistudio.google.com\u002Fapp\u002Fapikey)\n*   **部署方式选择**：\n    *   **方案 A（推荐新手）**：拥有 GitHub 和 Vercel 账号，无需本地环境，可一键云端部署。\n    *   **方案 B（开发者）**：本地安装了 Node.js (建议 v18+) 和 Git，用于本地开发或 Docker 部署。\n*   **网络环境**：由于涉及 Google 服务，部署环境需具备访问 Google API 的能力（如配置代理或使用反向代理）。\n\n## 安装与部署步骤\n\n你可以选择以下任意一种方式进行部署：\n\n### 
方式一：一键部署到 Vercel（最简单）\n\n适合希望快速拥有私有化网页版的用户。\n\n1.  点击下方的 Deploy 按钮，跳转到 Vercel 导入页面：\n    [![Deploy with Vercel](https:\u002F\u002Fvercel.com\u002Fbutton)](https:\u002F\u002Fvercel.com\u002Fnew\u002Fclone?repository-url=https%3A%2F%2Fgithub.com%2Fu14app%2Fgemini-next-chat&project-name=gemini-next-chat&env=GEMINI_API_KEY&env=ACCESS_PASSWORD&repository-name=gemini-next-chat)\n2.  在 Vercel 设置页面中，配置以下环境变量：\n    *   `GEMINI_API_KEY`: 填入你的 Gemini API Key（支持多个，用逗号分隔）。\n    *   `ACCESS_PASSWORD`: （可选）设置访问密码，保护你的应用不被他人使用。\n3.  点击 **Deploy**，等待构建完成即可获取访问域名。\n\n### 方式二：Docker 部署（推荐自用\u002F内网）\n\n适合拥有服务器或希望本地运行的用户。\n\n1.  拉取最新镜像：\n    ```bash\n    docker pull xiangfa\u002Ftalk-with-gemini:latest\n    ```\n2.  运行容器（请替换 `\u003CYOUR_API_KEY>` 和 `\u003CYOUR_PASSWORD>`）：\n    ```bash\n    docker run -d -p 3000:3000 \\\n      -e GEMINI_API_KEY=\u003CYOUR_API_KEY> \\\n      -e ACCESS_PASSWORD=\u003CYOUR_PASSWORD> \\\n      --name gemini-next-chat \\\n      xiangfa\u002Ftalk-with-gemini:latest\n    ```\n3.  访问 `http:\u002F\u002Flocalhost:3000` 即可使用。\n\n### 方式三：本地源码运行（开发模式）\n\n适合需要修改代码或贡献功能的开发者。\n\n1.  克隆项目代码：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat.git\n    cd gemini-next-chat\n    ```\n2.  安装依赖：\n    ```bash\n    npm install\n    # 或者使用国内镜像源加速\n    # npm install --registry=https:\u002F\u002Fregistry.npmmirror.com\n    ```\n3.  配置环境变量：\n    在项目根目录创建 `.env.local` 文件，填入以下内容：\n    ```env\n    GEMINI_API_KEY=你的 API_KEY\n    ACCESS_PASSWORD=你的访问密码\n    ```\n4.  启动开发服务器：\n    ```bash\n    npm run dev\n    ```\n5.  浏览器访问 `http:\u002F\u002Flocalhost:3000`。\n\n## 基本使用\n\n部署完成后，即可通过浏览器或桌面客户端开始使用：\n\n1.  **网页访问**：打开部署好的网址，如果设置了 `ACCESS_PASSWORD`，请输入密码进入。\n2.  **模型选择**：在设置或对话框顶部，选择支持的模型（如 `Gemini 1.5 Pro`, `Gemini 1.5 Flash` 等）。\n3.  **多模态交互**：\n    *   **图片识别**：直接拖拽图片到对话框，或点击上传按钮，询问图片内容。\n    *   **语音对话**：点击麦克风图标，开启 \"Talk Mode\"（需浏览器支持），可直接与 Gemini 进行语音交流。\n4.  
**使用插件**：\n    *   在对话界面启用内置插件，如 **Web Search**（联网搜索）、**Weather**（天气查询）或 **Arxiv Search**（论文搜索），以获取实时信息。\n5.  **桌面客户端（可选）**：\n    *   前往 [Releases 页面](https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Freleases) 下载对应系统（Windows\u002FMacOS\u002FLinux）的安装包。\n    *   安装后可作为独立应用运行，支持常驻菜单栏，提升办公效率。\n\n> **提示**：所有对话数据默认保存在用户本地浏览器中，保障隐私安全。如需更换 API Key 或模型列表，可在设置页面或通过环境变量 `NEXT_PUBLIC_GEMINI_MODEL_LIST` 进行自定义。","某初创团队希望为内部客服系统集成谷歌最新的 Gemini 1.5 Pro 模型，以处理复杂的长文档问答，但受限于预算和运维能力。\n\n### 没有 gemini-next-chat 时\n- **部署门槛高**：团队需手动配置 Next.js 环境、Tailwind CSS 及 shadcn\u002Fui 组件库，前端开发耗时数天才能搭建出基础聊天界面。\n- **多模态支持难**：原生 API 调用需自行编写代码处理图片上传与解析（Gemini Pro Vision），难以快速实现“看图说话”功能。\n- **数据隐私担忧**：直接使用第三方封装平台可能导致敏感客户数据外泄，而自建服务又缺乏现成的访问密码保护机制。\n- **维护成本大**：模型切换（如在 1.5 Flash 和 Pro 之间调整）需要修改后端代码并重新构建部署，响应业务需求迟缓。\n\n### 使用 gemini-next-chat 后\n- **一键私有化部署**：通过 Vercel 或 Cloudflare 按钮即可在几分钟内上线专属应用，自动集成所有前端依赖，零代码基础设施投入。\n- **原生多模态交互**：内置支持 Gemini 1.5 全系列模型，直接拖拽上传图片或 PDF 即可进行深度分析，无需额外开发解析逻辑。\n- **安全可控**：自带访问密码（ACCESS_PASSWORD）环境变量配置，确保只有授权客服人员能访问，数据完全留存于自有账户下。\n- **灵活模型调度**：在图形化设置中即可随时切换不同版本的 Gemini 模型，根据任务复杂度动态平衡成本与性能，无需重启服务。\n\ngemini-next-chat 让中小团队能以零成本、分钟级的速度，拥有安全且功能完备的私有化 Gemini 多模态对话系统。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fu14app_gemini-next-chat_f70e459d.png","u14app","U14 App","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fu14app_91ad72ec.png","We only develop interesting applications. 
u14 is a homophone of the word \"interesting\" in Chinese.",null,"https:\u002F\u002Fgithub.com\u002Fu14app",[80,84,88,92,96],{"name":81,"color":82,"percentage":83},"TypeScript","#3178c6",97.5,{"name":85,"color":86,"percentage":87},"CSS","#663399",1.3,{"name":89,"color":90,"percentage":91},"JavaScript","#f1e05a",0.6,{"name":93,"color":94,"percentage":95},"Rust","#dea584",0.3,{"name":97,"color":98,"percentage":95},"Dockerfile","#384d54",1606,588,"2026-04-13T23:53:31","MIT","Linux, macOS, Windows","未说明",{"notes":106,"python":104,"dependencies":107},"该项目为基于 Next.js 的前端应用，主要依赖 Google Gemini API 进行推理，无需本地 GPU 或大型模型文件。支持通过 Vercel、Cloudflare 一键部署，或通过 Docker 部署。提供基于 Tauri 的跨平台桌面客户端（约 4MB）。运行核心需求为有效的 GEMINI_API_KEY。",[108,109,110,111],"Next.js","Tailwind CSS","shadcn\u002Fui","Tauri (桌面客户端)",[15,13,14,52],[114,115,116,117,118,119,120,121,122,123,124,125,126,127,128],"ai","gemini","gemini-pro","gemini-pro-vision","gemini-15-flash","gemini-15-pro","gemini-ai","gemini-client","vercel-ai","gemini-app","google-gemini","google-gemini-ai","gemini-2-0-flash","gemini-flash","gemini-2","2026-03-27T02:49:30.150509","2026-04-16T01:45:07.103759",[132,137,142,147,152,157,162,167],{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},34859,"为什么思考型 AI 模型的思考过程会失败或被强行终止？","这可能是由以下原因导致的：1. 新模型本身存在错误；2. 代码在生成过程中产生错误；3. 
重构后的代码逻辑存在问题。目前该问题难以在短时间内彻底解决，建议暂时使用其他更稳定的模型。此外，模型的思考过程不一定使用英语，有时会使用与提问相同的语言进行思考，这是已知情况。","https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues\u002F69",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},34860,"Docker 部署后配置了密码环境变量，但界面上找不到输入密码的地方怎么办？","这通常是因为 Docker 镜像缓存了旧版本。请尝试拉取特定版本镜像重新部署，命令为：docker pull xiangfa\u002Ftalk-with-gemini:v0.12.1。如果使用的是 latest 标签，可能仍然指向旧版本，建议指定版本号拉取并重新部署容器。","https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues\u002F37",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},34861,"免费计划的 Google API Key 无法使用或报错怎么办？","如果免费计划的 API Key 无法使用，大概率是当前 Google 账号存在问题（如被限制或未开通相应服务）。建议尝试重新注册一个新的 Google 账号并生成新的 API Key 进行测试。","https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues\u002F21",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},34862,"项目是否支持配置多个 Gemini API Key 以增加并发量？","目前暂不支持多 Key 模式。原因是 Gemini 1.5 系列接口涉及文件上传功能时，文件是与特定的 API Key 绑定的。如果使用 Key A 上传文件，却在后续请求中使用 Key B，会导致无权限访问文件的错误。要支持多 Key 需要构建复杂的会话绑定和缓存系统，因此目前未纳入开发计划。","https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues\u002F5",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},34863,"模型选择列表显示异常、被遮挡或部分模型消失如何解决？","如果是无访问密码场景下无法获取模型列表，作者已修复该问题。对于小屏幕设备上长列表显示不友好的问题，目前尚无完美的 UI 解决方案。如果模型列表中出现模型离奇消失，可能是获取逻辑问题，建议检查网络连接或尝试刷新页面；若问题依旧，可能是后端接口返回数据异常。","https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues\u002F71",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},34864,"Gemini 模型输出出现内容重复、语句断层或瞬间涌出大量重复文本怎么办？","这是一个已知的输出渲染 Bug，作者已在代码层面进行了修复。如果您仍在使用最新版本（测试版或演示站）遇到此问题，特别是在“文本编辑→续写”功能的长文本测试中，请提供具体的复现参数（如发布平台、API 
接口类型是后端服务器、中转代理还是直连官方接口等），以便作者进一步排查。","https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues\u002F27",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},34865,"模型列表中模型名称重复或选中模型时下方蓝色下划线为空是什么原因？","模型名称重复可能是获取逻辑导致的显示问题。选中模型时蓝色下划线为空，通常是因为其他模型名称过长导致的样式错位问题，作者计划后续修复。对于移动端显示不全的问题，目前建议尽量使用名称较短的模型或在桌面端查看完整名称。","https:\u002F\u002Fgithub.com\u002Fu14app\u002Fgemini-next-chat\u002Fissues\u002F61",{"id":168,"question_zh":169,"answer_zh":170,"source_url":151},34866,"是否支持通过中转接口（OpenAI 格式）使用 Gemini？","可以通过同时设置 GEMINI_API_KEY 和 GEMINI_API_BASE_URL 环境变量来实现中转，但目前仅支持 Gemini 官方 API 形式的接口路径，无法直接兼容 OpenAI 格式（\u002Fv1\u002Fchat\u002Fcompletions）的接口。项目依赖的是 Gemini 官方 SDK，若要兼容 OneAPI 等将 Gemini 转为 OpenAI 格式的中转服务，目前尚不支持。",[172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247,252,257,262,267],{"id":173,"version":174,"summary_zh":175,"released_at":176},272168,"v1.10.1","feat: 页面加载时会自动拉取最新的模型列表\nfix: 修复重新生成功能无效问题\nfix: 修复搜索引用布局异常问题\nfix: 不支持插件的模型不再发送工具设置\n\n--- \n\n特性：页面加载时会自动拉取最新的模型列表\n修复：修复重新生成功能无效的问题\n修复：修复搜索引用布局异常的问题\n修复：不支持插件的模型不再发送工具设置","2025-03-18T03:25:23",{"id":178,"version":179,"summary_zh":180,"released_at":181},272169,"v1.10.0","本次更新聚焦于更强大的图文能力和性能优化：\n\n* **全新图文模型:** 新增支持 `gemini-2.0-flash-exp-image-generation` 模型，可生成包含图文混排的内容。\n* **图片体积优化:** 生成的图片将自动压缩，减小文件大小。\n* **图片加载优化:** 采用懒加载技术优化图片加载，减少图片渲染对文本生成速度的影响。\n\n现在，您可以体验更丰富的图文互动和更流畅的内容生成速度！\n\n![WeChatad8d91adbc3ef33ed8d2a22939bb88c7](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa4616fb6-eb9a-47e6-89ef-30b347929b9f)\n","2025-03-17T10:48:15",{"id":183,"version":184,"summary_zh":185,"released_at":186},272170,"v1.9.1","修复了客户端无法正常使用实时多模态视频和屏幕捕捉的问题，并适当优化了设置界面。","2025-02-21T11:28:01",{"id":188,"version":189,"summary_zh":190,"released_at":191},272171,"v1.9.0","本次更新带来了强大的多模态直播功能，并对性能和文档进行了优化：\n\n* **重磅推出：多模态直播！**\n    * 支持 Gemini Multimodal Live API。*注意：目前官方仅支持 gemini-2.0-flash 这类模型。*\n* **语音模式升级：** 将原语音模式重构为组件，并增加自动录音设置。\n* **性能优化：**\n    * 优化了Office文件解析代码，改为动态导入，减少落地页加载的文件数量。\n 
   * 将系统指令和附件区域组件改为动态加载，提升加载速度。\n    * 移除了 store 中的早期兼容代码。\n* **文档更新：**\n    * 改进了文档内容，并添加了新的路线图（Roadmap）。\n    * 新增了多模态直播 API 常见问题解答。\n    * 新增了使用 Cloudflare Worker 代理的多模态直播 API 文档。\n* **Bug修复：**\n    * 调整 defaultValue 为 value，防止表单状态被缓存。\n* **构建调整：** 调整了 wrangler.toml 配置。\n\n我们致力于不断优化产品体验，敬请期待更多精彩功能！","2025-02-21T04:47:39",{"id":193,"version":194,"summary_zh":195,"released_at":196},272172,"v1.8.1","本次更新主要集中在用户界面和模型优化：\n\n* **会话界面模型切换与标题显示:**  现在可以在会话界面切换模型，并显示会话标题，方便管理和使用。\n* **文件列表显示优化:** 文件列表中的文件名显示更加清晰易懂。\n* **移除过时模型:** 清理了不再使用的模型，保持模型列表的简洁。\n* **CSS 样式调整:**  优化了界面样式，提升视觉体验。\n","2025-02-15T14:51:19",{"id":198,"version":199,"summary_zh":200,"released_at":201},272173,"v1.8.0","本次更新重点提升了文件处理能力和对话体验：\n\n* **新增 Office 文件解析:**  现在可以解析 Office 文件（例如 .docx, .xlsx, .pptx）的内容。\n* **支持上传 Office 文件:** 文件上传功能支持 Office 文件类型。\n* **更智能的对话管理:** 空对话不再自动命名。\n* **优化问题输出体验:** 提升了问题呈现的流畅性和易读性。\n* **Bug 修复:**\n    * 修复了 Cloudflare 页面不支持 fetch cache 的问题。\n    * 修复了文本类型文件无法正常上传的问题。\n    * 修复了 functionCall 变量判断异常的问题。","2025-02-10T04:07:40",{"id":203,"version":204,"summary_zh":205,"released_at":206},272174,"v1.7.2","本次更新主要围绕 **Google Grounding Search** 功能进行了增强：\n\n* **新增 Google Grounding Search 结果UI:** 现在可以更直观地查看 Google Grounding Search 的搜索结果。\n* **文档导出适配 Grounding Search:** 优化了文档导出功能，使其更好地支持 Grounding Search。\n* **模型列表API增加错误信息:** 模型列表API现在会返回更详细的错误信息，方便问题排查。\n* **文档更新:**  更正了错误文档描述，并添加了更详细的说明。","2025-02-06T14:41:36",{"id":208,"version":209,"summary_zh":210,"released_at":211},272175,"v1.7.1","支持 Gemini 2.0\n\n---\n\n支持 Gemini 2.0","2025-02-06T01:32:38",{"id":213,"version":214,"summary_zh":215,"released_at":216},272176,"v1.7.0","本次更新带来以下改进：\n\n- **更自然流畅的对话体验:** 语音模式优化，对话更简洁自然。\n- **自动话题命名:**  系统将自动为对话命名，方便管理。\n- **改进的清除聊天内容功能:**  优化了清除聊天内容的操作，避免误操作导致内容丢失。\n- **更好的界面显示:**  长代码显示优化，小屏幕下代码框默认收缩；链接文字换行显示改进；修复了图像压缩比例问题。\n- **代码复制及高亮改进:** 复制代码时不再包含Markdown语法；修复了代码高亮在无法识别内容语言类型时的错误。\n- **其他优化:**  后台逻辑、Markdown排版、系统指令布局及默认模型设置均进行了优化。\n- **新增代码贡献指南:**  
我们新增了代码贡献指南，欢迎您的参与！\n\n我们致力于提供更好的用户体验，感谢您的支持！","2025-02-04T05:39:32",{"id":218,"version":219,"summary_zh":220,"released_at":221},272177,"v1.6.1","修复：修复了一些已知问题\n\n---\n\n修复：修复一些已知问题","2025-01-31T02:23:57",{"id":223,"version":224,"summary_zh":225,"released_at":226},272178,"v1.6.0","feat: Add multi-key support\r\nfeat: Add support for mermaid\r\nrefactor: Use `react-markdown` to replace `markdown-it` to improve rendering performance and optimize content layout\r\nrefactor: Optimize file upload logic, use inlineData for files smaller than 2MB to reduce file upload frequency\r\nrefactor: Optimize assistant role prompt generation logic\r\n\r\n---\r\n\r\nfeat: 增加多 key 支持\r\nfeat: 增加对 mermaid 的支持\r\nrefactor: 使用 `react-markdown` 替换 `markdown-it` 提升渲染性能以及优化内容排版\r\nrefactor: 优化文件上传逻辑，小于 2MB 的文件使用 inlineData 降低文件上传频率\r\nrefactor: 优化助手角色提示生成逻辑","2025-01-26T17:29:14",{"id":228,"version":229,"summary_zh":230,"released_at":231},272179,"v1.5.3","refactor: Optimize model list loading logic and add list refresh button\r\nfix: Fixed some known bugs\r\n\r\n---\r\n\r\nrefactor：优化模型列表加载逻辑，增加模型列表手动刷新按钮\r\nfix：修复部分已知bug","2025-01-21T13:21:33",{"id":233,"version":234,"summary_zh":235,"released_at":236},272180,"v1.5.2","fix: Fixed the issue that the project in docker deployment mode cannot upload files normally\r\nfix: Fixed the issue that recording could not be done normally in voice mode\r\n\r\n---\r\n\r\n修复：修复docker部署模式下项目无法正常上传文件的问题\r\n修复：修复语音模式下无法正常录音的问题","2025-01-20T14:44:22",{"id":238,"version":239,"summary_zh":240,"released_at":241},272181,"v1.5.0","feat: Added Artifact feature, supports AI writing, adjustment of reading level, content length, multi-language translation, continuation, and addition of emojis\r\nfeat: Added export conversation feature, support exporting to markdown file\r\nrefactor: Use local compressed scripts to solve the problem that compressed scripts cannot be loaded normally in some cases\r\nfix: Use the v1alpha api version to solve the problem of abnormal 
thinking model\r\nfix: Compatible with Gemini 2.0 version safetySettings parameter changes\r\nfix: Fixed the issue that the default search is not compatible with other plugins.\r\n\r\n---\r\n\r\nfeat: 新增 Artifact 功能，支持 AI 写作、调整阅读级别、内容长度、多语言翻译、续写、添加表情符号\r\nfeat: 新增导出对话功能，支持导出为 markdown 文件\r\nrefactor: 使用本地压缩脚本，解决某些情况下压缩脚本无法正常加载的问题\r\nfix: 使用 v1alpha api 版本，解决思维模型异常问题\r\nfix: 兼容 Gemini 2.0 版本 safetySettings 参数变更\r\nfix: 修复默认搜索与其他插件不兼容的问题","2025-01-20T07:29:24",{"id":243,"version":244,"summary_zh":245,"released_at":246},272182,"v1.4.0","feat: `gemini-2.0-flash-exp` enables the network search feature by default\r\n\r\n---\r\n\r\nfeat: `gemini-2.0-flash-exp` 默认启用网络搜索功能","2025-01-13T12:45:29",{"id":248,"version":249,"summary_zh":250,"released_at":251},272183,"v1.3.0","feat: Added a translation button to support translating the conversation content into the system language\r\nfeat: Added env api to and global env store\r\nrefactor: Refactoring file upload logic\r\nrefactor: Optimize PWA installation method\r\nfix: Fixed the issue that the backend api did not correctly verify the access password\r\nfix: Fixed the issue where the dialogue content might exceed the area\r\nfix: Fixed the layout problem of assistant recommendation module\r\nfix: Compatible with iPhone series phones with notch screen\r\nbuild: Optimize Dockerfile configuration file\r\nbuild: Optimize next.config.js configuration file\r\n\r\n---\r\n\r\nfeat: 增加翻译按钮，支持将对话内容翻译成系统语言\r\nfeat: 增加 env api 到全局 env store\r\nrefactor: 重构文件上传逻辑\r\nrefactor: 优化 PWA 安装方式\r\nfix: 修复后端 api 未正确验证访问密码的问题\r\nfix: 修复对话内容可能超出区域的问题\r\nfix: 修复助手推荐模块布局问题\r\nfix: 兼容刘海屏 iPhone 系列手机\r\nbuild: 优化 Dockerfile 配置文件\r\nbuild: 优化 next.config.js 配置文件","2025-01-03T05:45:25",{"id":253,"version":254,"summary_zh":255,"released_at":256},272184,"v1.2.1","feat: Adapt the thinking series models\r\nfeat: The model parameters are initialized according to the model\r\nfeat: Added reset settings button\r\nrefactor: Function calls are adjusted to run 
locally\r\nrefactor: Removed `GEMINI_UPLOAD_BASE_URL` global parameter and refactored related logic\r\nrefactor: Upgrade tauri version to 2.0 and refactor related logic\r\nrefactor: Optimize the rendering process of text stream\r\nfix: Fixed the issue with copying and deleting in the conversation list.\r\nfix: Fixed the issue where the plugin button could not be displayed properly when switching models\r\nfix: Fixed the rendering anomaly of code blocks when the code language type is missing\r\nbuild: Add cloudflare deployment configuration file\r\n\r\n---\r\n\r\nfeat: 适配 Thinking 系列模型\r\nfeat: 根据模型初始化模型参数\r\nfeat: 增加重置设置按钮\r\nrefactor: 调整函数调用为本地运行\r\nrefactor: 移除 `GEMINI_UPLOAD_BASE_URL` 全局参数并重构相关逻辑\r\nrefactor: 升级 tauri 版本到 2.0 并重构相关逻辑\r\nrefactor: 优化文本流渲染流程\r\nfix: 修复对话列表中复制和删除问题\r\nfix: 修复切换模型时插件按钮无法正常显示的问题\r\nfix: 修复代码语言类型缺失时代码块渲染异常问题\r\nbuild: 添加 cloudflare 部署配置文件","2024-12-31T14:22:42",{"id":258,"version":259,"summary_zh":260,"released_at":261},272185,"v1.0.0","- Support plugins, with built-in Web search, Web reader, Arxiv search, Weather and other practical plugins\r\n- Conversation list, so you can keep track of important conversations or discuss different topics with Gemini\r\n- Restructure the assistant market to support custom assistants\r\n----\r\n- 插件系统，内置网络搜索、网页解读、论文搜索、实时天气等多种实用插件\r\n- 会话列表，让您可以保持重要的会话内容或与 Gemini 讨论不同的话题\r\n- 重构助理市场，支持自定义助理\r\n\r\n![pc-screenshot-3](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd97cab30-1fa7-4769-88d4-dab64ca9e31a)\r\n","2024-12-09T06:59:18",{"id":263,"version":264,"summary_zh":265,"released_at":266},272186,"v0.12.3","feat: Get the list of available models for the user\r\nfeat: Add support for pdf files\r\nrefactor: Remove code logic related to compatibility with earlier data formats\r\n\r\n----\r\n\r\nfeat: 获取用户可用模型列表\r\nfeat: 添加对 pdf 文件的支持\r\nrefactor: 删除与早期数据格式兼容性相关的代码逻辑","2024-10-15T11:50:49",{"id":268,"version":269,"summary_zh":270,"released_at":271},272187,"v0.12.2","- fix: Fixed the issue 
of audio duration loss in recorded files\r\n- fix: Fixed the issue with low recording volume on Safari browser\r\n- fix: Fixed the issue where the access password box in docker mode was hidden by mistake\r\n---\r\n- fix: 修复录制文件中音频时长丢失的问题\r\n- fix: 修复 Safari 浏览器下录制音量低的问题\r\n- fix: 修复 docker 模式下访问密码框被误隐藏的问题\r\n","2024-06-26T16:55:20"]