[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-HanaokaYuzu--Gemini-API":3,"tool-HanaokaYuzu--Gemini-API":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",148568,2,"2026-04-09T23:34:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":78,"owner_website":78,"owner_url":79,"languages":80,"stars":85,"forks":86,"last_commit_at":87,"license":88,"difficulty_score":32,"env_os":89,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":96,"github_topics":97,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":146},6195,"HanaokaYuzu\u002FGemini-API","Gemini-API","✨ Reverse-engineered Python API for Google Gemini web app","Gemini-API 是一个基于 Python 开发的异步封装库，旨在通过逆向工程方式，让开发者能够直接在代码中调用谷歌 Gemini 网页版（原 Bard）的强大功能。它主要解决了官方 API 在某些高级特性（如原生视频\u002F音频生成、深度研究流程）或免费额度使用上的限制，让用户无需复杂配置即可利用现有的谷歌账号访问完整的网页端能力。\n\n这款工具特别适合需要灵活集成 AI 能力的开发者、研究人员以及希望自动化工作流的技术爱好者。其核心亮点在于支持“持久化 Cookie”自动刷新，非常适合构建需长期运行的后台服务；同时，它原生支持图像、视频及音频的生成与编辑，并能完整执行包含计划创建和状态轮询的“深度研究”任务。此外，Gemini-API 还兼容自定义系统提示词（Gems）、扩展插件（如 YouTube、Gmail），并提供分类输出和流式传输模式。整体接口设计简洁优雅，风格贴近谷歌官方生成式 AI 
SDK，结合异步架构，能高效处理多轮对话及大并发任务，是连接本地应用与 Gemini 网页智能的桥梁。","\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHanaokaYuzu_Gemini-API_readme_81ad5b0939ac.png\" width=\"55%\" alt=\"Gemini Banner\" align=\"center\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fgemini-webapi\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fgemini-webapi\" alt=\"PyPI\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fgemini-webapi\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHanaokaYuzu_Gemini-API_readme_6a7b4a2f1f5c.png\" alt=\"Downloads\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fnetwork\u002Fdependencies\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Flibrariesio\u002Fgithub\u002FHanaokaYuzu\u002FGemini-API\" alt=\"Dependencies\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fblob\u002Fmaster\u002FLICENSE\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FHanaokaYuzu\u002FGemini-API\" alt=\"License\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\" alt=\"Code style\">\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#HanaokaYuzu\u002FGemini-API\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FHanaokaYuzu\u002FGemini-API?style=social\" alt=\"GitHub stars\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\">\n        \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FHanaokaYuzu\u002FGemini-API?style=social&logo=github\" alt=\"GitHub issues\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Factions\u002Fworkflows\u002Fpypi-publish.yml\">\n        \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Factions\u002Fworkflows\u002Fpypi-publish.yml\u002Fbadge.svg\" alt=\"CI\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n# \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002FHanaokaYuzu\u002FGemini-API\u002Fmaster\u002Fassets\u002Flogo.svg\" width=\"35px\" alt=\"Gemini Icon\" \u002F> Gemini-API\n\nA reverse-engineered asynchronous Python wrapper for the [Google Gemini](https:\u002F\u002Fgemini.google.com) web app (formerly Bard).\n\n## Features\n\n- **Persistent Cookies** - Automatically refreshes cookies in background. Optimized for always-on services.\n- **Image Generation** - Natively supports generating and editing images with natural language.\n- **Video & Audio Generation** - Supports generating videos and audio\u002Fmusic content natively.\n- **Deep Research** - Full deep research workflow with plan creation, status polling, and result retrieval.\n- **System Prompt** - Supports customizing the model's system prompt with [Gemini Gems](https:\u002F\u002Fgemini.google.com\u002Fgems\u002Fview).\n- **Extension Support** - Supports generating content with [Gemini extensions](https:\u002F\u002Fgemini.google.com\u002Fextensions), such as YouTube and Gmail.\n- **Classified Outputs** - Categorizes text, thoughts, images, videos, and audio in the response.\n- **Streaming Mode** - Supports stream generation, yielding partial outputs as they are generated.\n- **CLI Tool** - Standalone command-line interface for quick interactions.\n- **Official Flavor** - Provides a simple and elegant interface inspired by [Google Generative 
AI](https:\u002F\u002Fai.google.dev\u002Ftutorials\u002Fpython_quickstart)'s official API.\n- **Asynchronous** - Utilizes `asyncio` to run generation tasks and return outputs efficiently.\n\n## Table of Contents\n\n- [Features](#features)\n- [Table of Contents](#table-of-contents)\n- [Installation](#installation)\n- [Authentication](#authentication)\n- [Usage](#usage)\n  - [Initialization](#initialization)\n  - [Generate Content](#generate-content)\n  - [Generate Content with Files](#generate-content-with-files)\n  - [Conversations Across Multiple Turns](#conversations-across-multiple-turns)\n  - [Continue Previous Conversations](#continue-previous-conversations)\n  - [Read Conversation History](#read-conversation-history)\n  - [Delete Previous Conversations from Gemini History](#delete-previous-conversations-from-gemini-history)\n  - [Temporary Mode](#temporary-mode)\n  - [Streaming Mode](#streaming-mode)\n  - [Select Language Model](#select-language-model)\n  - [List Available Models](#list-available-models)\n  - [Apply System Prompt with Gemini Gems](#apply-system-prompt-with-gemini-gems)\n  - [Manage Custom Gems](#manage-custom-gems)\n    - [Create a Custom Gem](#create-a-custom-gem)\n    - [Update an Existing Gem](#update-an-existing-gem)\n    - [Delete a Custom Gem](#delete-a-custom-gem)\n  - [Retrieve Model's Thought Process](#retrieve-models-thought-process)\n  - [Retrieve Images in Response](#retrieve-images-in-response)\n  - [Generate and Edit Images](#generate-and-edit-images)\n  - [Retrieve Videos and Audio](#retrieve-videos-and-audio)\n  - [Generate Content with Gemini Extensions](#generate-content-with-gemini-extensions)\n  - [Check and Switch to Other Reply Candidates](#check-and-switch-to-other-reply-candidates)\n  - [Deep Research](#deep-research)\n  - [Logging Configuration](#logging-configuration)\n- [CLI Tool](#cli-tool)\n  - [Cookie Setup](#cookie-setup)\n  - [CLI Commands](#cli-commands)\n  - [Deep Research 
Workflow](#deep-research-workflow)\n- [References](#references)\n- [Stargazers](#stargazers)\n\n## Installation\n\n> [!NOTE]\n>\n> This package requires Python 3.10 or higher.\n\nInstall or update the package with pip.\n\n```sh\npip install -U gemini_webapi\n```\n\nOptionally, the package offers a way to automatically import cookies from your local browser via optional dependency `browser-cookie3`. To enable this feature, install `gemini_webapi[browser]` instead. Supported platforms and browsers can be found [here](https:\u002F\u002Fgithub.com\u002Fborisbabic\u002Fbrowser_cookie3?tab=readme-ov-file#contribute).\n\n```sh\npip install -U gemini_webapi[browser]\n```\n\n## Authentication\n\n> [!TIP]\n>\n> If `browser-cookie3` is installed, you can skip this step and go directly to the [usage](#usage) section. Just make sure you are logged in to \u003Chttps:\u002F\u002Fgemini.google.com> in your browser.\n\n- Go to \u003Chttps:\u002F\u002Fgemini.google.com> and log in with your Google account\n- Press F12 to open the web inspector, go to the `Network` tab, and refresh the page\n- Click any request and copy the cookie values of `__Secure-1PSID` and `__Secure-1PSIDTS`\n\n> [!NOTE]\n>\n> If your application is deployed in a containerized environment (e.g. Docker), you may want to persist the cookies with a volume to avoid re-authentication every time the container rebuilds. You can set `GEMINI_COOKIE_PATH` environment variable to specify the path where auto-refreshed cookies are stored. Make sure the path is writable by the application.\n>\n> Here's part of a sample `docker-compose.yml` file:\n\n```yaml\nservices:\n    main:\n        environment:\n            GEMINI_COOKIE_PATH: \u002Ftmp\u002Fgemini_webapi\n        volumes:\n            - .\u002Fgemini_cookies:\u002Ftmp\u002Fgemini_webapi\n```\n\n> [!NOTE]\n>\n> The API's auto-cookie-refreshing feature doesn't require `browser-cookie3` and is enabled by default. 
It allows you to keep the API service running without worrying about cookie expiration.\n>\n> This feature may require you to log in to your Google account again in the browser. This is expected behavior and won't affect the API's functionality.\n>\n> To avoid this, it's recommended to get cookies from a separate browser session and close it as soon as possible for best utilization (e.g. a fresh login in the browser's private mode). More details can be found [here](https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F6).\n\n## Usage\n\n### Initialization\n\nImport the required packages and initialize a client with your cookies from the previous step. After successful initialization, the API will automatically refresh `__Secure-1PSIDTS` in the background as long as the process is alive.\n\n```python\nimport asyncio\nfrom gemini_webapi import GeminiClient\n\n# Replace \"COOKIE VALUE HERE\" with your actual cookie values.\n# Leave Secure_1PSIDTS empty if it's not available for your account.\nSecure_1PSID = \"COOKIE VALUE HERE\"\nSecure_1PSIDTS = \"COOKIE VALUE HERE\"\n\nasync def main():\n    # If browser-cookie3 is installed, simply use `client = GeminiClient()`\n    client = GeminiClient(Secure_1PSID, Secure_1PSIDTS, proxy=None)\n    await client.init(timeout=30, auto_close=False, close_delay=300, auto_refresh=True)\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> `auto_close` and `close_delay` are optional arguments for automatically closing the client after a certain period of inactivity. This feature is disabled by default. 
In an always-on service like a chatbot, it's recommended to set `auto_close` to `True` with a reasonable `close_delay` value for better resource management.\n\n### Generate Content\n\nAsk a single-turn question by calling `GeminiClient.generate_content`, which returns a `gemini_webapi.ModelOutput` object containing the generated text, images, thoughts, and conversation metadata.\n\n```python\nasync def main():\n    response = await client.generate_content(\"Hello World!\")\n    print(response.text)\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> Simply use `print(response)` to get the same output if you just want to see the response text.\n\n### Generate Content with Files\n\nGemini supports file input, including images and documents. Optionally, you can pass files as a list of paths in `str` or `pathlib.Path` to `GeminiClient.generate_content` together with a text prompt.\n\n```python\nasync def main():\n    response = await client.generate_content(\n            \"Introduce the contents of these two files. Is there any connection between them?\",\n            files=[\"assets\u002Fsample.pdf\", Path(\"assets\u002Fbanner.png\")],\n        )\n    print(response.text)\n\nasyncio.run(main())\n```\n\n### Conversations Across Multiple Turns\n\nIf you want to keep the conversation continuous, use `GeminiClient.start_chat` to create a `gemini_webapi.ChatSession` object and send messages through it. The conversation history will be handled automatically and updated after each turn.\n\n```python\nasync def main():\n    chat = client.start_chat()\n    response1 = await chat.send_message(\n        \"Introduce the contents of these two files. 
Is there any connection between them?\",\n        files=[\"assets\u002Fsample.pdf\", Path(\"assets\u002Fbanner.png\")],\n    )\n    print(response1.text)\n    response2 = await chat.send_message(\n        \"Use image generation tool to modify the banner with another font and design.\"\n    )\n    print(response2.text, response2.images, sep=\"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> Same as `GeminiClient.generate_content`, `ChatSession.send_message` also accepts `image` as an optional argument.\n\n### Continue Previous Conversations\n\nTo manually retrieve previous conversations, you can pass a previous `ChatSession`'s metadata to `GeminiClient.start_chat` when creating a new `ChatSession`. Alternatively, you can persist previous metadata to a file or database if you need to access it after the current Python process has exited.\n\n```python\nasync def main():\n    # Start a new chat session\n    chat = client.start_chat()\n    response = await chat.send_message(\"Fine weather today\")\n\n    # Save chat's metadata\n    previous_session = chat.metadata\n\n    # Load the previous conversation\n    previous_chat = client.start_chat(metadata=previous_session)\n    response = await previous_chat.send_message(\"What was my previous message?\")\n    print(response)\n\nasyncio.run(main())\n```\n\n### Read Conversation History\n\nYou can read the conversation history of a specific chat by calling `GeminiClient.read_chat` with the chat ID. 
It returns a `ChatHistory` object containing a list of `ChatTurn` objects ordered from newest to oldest.\n\n```python\nasync def main():\n    chat = client.start_chat()\n    await chat.send_message(\"What is the capital of France?\")\n\n    # Read the chat history\n    history = await client.read_chat(chat.cid)\n    if history:\n        for turn in history.turns:\n            print(f\"[{turn.role.upper()}] {turn.text}\")\n            print(\"\\n----------------------------------\\n\")\n\nasyncio.run(main())\n```\n\nTo list all recent chats, use `GeminiClient.list_chats`:\n\n```python\nasync def main():\n    chats = client.list_chats()\n    if chats:\n        for chat_info in chats:\n            print(f\"{chat_info.cid}: {chat_info.title}\")\n\nasyncio.run(main())\n```\n\n### Delete Previous Conversations from Gemini History\n\nYou can delete a specific chat from Gemini history on the server by calling `GeminiClient.delete_chat` with the chat ID.\n\n```python\nasync def main():\n    # Start a new chat session\n    chat = client.start_chat()\n    await chat.send_message(\"This is a temporary conversation.\")\n\n    # Delete the chat\n    await client.delete_chat(chat.cid)\n    print(f\"Chat deleted: {chat.cid}\")\n\nasyncio.run(main())\n```\n\n### Temporary Mode\n\nYou can start a temporary chat by passing `temporary=True` to `GeminiClient.generate_content` or `ChatSession.send_message`. 
Temporary chats won't be saved in Gemini history.\n\n```python\nasync def main():\n    response = await client.generate_content(\"Hello World!\", temporary=True)\n    print(response.text, \"\\n\\n----------------------------------\\n\\n\")\n\n    chat = client.start_chat()\n    await chat.send_message(\"Fine weather today\", temporary=False)\n    response2 = await chat.send_message(\"What's my last message?\", temporary=True)\n    print(response2.text)\n\nasyncio.run(main())\n```\n\n### Streaming Mode\n\nFor longer responses, you can use streaming mode to receive partial outputs as they are generated. This provides a more responsive user experience, especially for real-time applications like chatbots.\n\nThe `generate_content_stream` method yields `ModelOutput` objects where the `text_delta` attribute contains only the **new characters** received since the last yield, making it easy to display incremental updates.\n\n```python\nasync def main():\n    async for chunk in client.generate_content_stream(\n        \"What's the difference between 'await' and 'async for'?\"\n    ):\n        print(chunk.text_delta, end=\"\", flush=True)\n\n    print()\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> Streaming mode accepts the same arguments as `generate_content`. You can also use streaming mode in multi-turn conversations with `ChatSession.send_message_stream`.\n\n### Select Language Model\n\nYou can specify which language model to use by passing the `model` argument to `GeminiClient.generate_content` or `GeminiClient.start_chat`. The default value is `unspecified`.\n\nAvailable models are discovered **dynamically** at init time based on your account tier. The `Model` enum provides convenient shortcuts.\n\n```python\nfrom gemini_webapi.constants import Model\n\nasync def main():\n    response1 = await client.generate_content(\n        \"What's your language model version? 
Reply with the version number only.\",\n        model=Model.BASIC_FLASH,\n    )\n    print(f\"Model version ({Model.BASIC_FLASH.model_name}): {response1.text}\")\n\n    chat = client.start_chat(model=\"gemini-3-pro\")\n    response2 = await chat.send_message(\"What's your language model version? Reply with the version number only.\")\n    print(f\"Model version (gemini-3-pro): {response2.text}\")\n\nasyncio.run(main())\n```\n\nYou can also pass custom model header strings directly to access models that are not listed above.\n\n```python\n# \"model_name\" and \"model_header\" keys must be present\ncustom_model = {\n    \"model_name\": \"xxx\",\n    \"model_header\": {\n        \"x-goog-ext-525001261-jspb\": \"[1,null,null,null,'e6fa609c3fa255c0',null,null,null,[4]]\"\n    },\n}\n\nresponse = await client.generate_content(\n    \"What's your model version?\",\n    model=custom_model\n)\n```\n\n### List Available Models\n\nThe client dynamically discovers which models are available for your account at initialization. Use `GeminiClient.list_models` to see all available models and their details.\n\n```python\nasync def main():\n    await client.init()  # Make sure the client is initialized first\n    models = client.list_models()\n    if models:\n        for model in models:\n            print(f\"{model.display_name}: {model.model_name}\")\n\nasyncio.run(main())\n```\n\n### Apply System Prompt with Gemini Gems\n\nSystem prompts can be applied to conversations via [Gemini Gems](https:\u002F\u002Fgemini.google.com\u002Fgems\u002Fview). To use a gem, you can pass the `gem` argument to `GeminiClient.generate_content` or `GeminiClient.start_chat`. `gem` can be either a gem ID string or a `gemini_webapi.Gem` object. Only one gem can be applied to a single conversation.\n\n> [!TIP]\n>\n> There are some system predefined gems that are not shown to users by default (and therefore may not work properly). 
Use `client.fetch_gems(include_hidden=True)` to include them in the fetch result.\n\n```python\nasync def main():\n    # Fetch all gems for the current account, including both predefined and user-created ones\n    await client.fetch_gems(include_hidden=False)\n\n    # Once fetched, gems will be cached in `GeminiClient.gems`\n    gems = client.gems\n\n    # Get the gem you want to use\n    system_gems = gems.filter(predefined=True)\n    coding_partner = system_gems.get(id=\"coding-partner\")\n\n    response1 = await client.generate_content(\n        \"What's your system prompt?\",\n        gem=coding_partner,\n    )\n    print(response1.text)\n\n    # Another example with a user-created custom gem\n    # Gem ids are consistent strings. Store them somewhere to avoid fetching gems every time\n    your_gem = gems.get(name=\"Your Gem Name\")\n    your_gem_id = your_gem.id\n    chat = client.start_chat(gem=your_gem_id)\n    response2 = await chat.send_message(\"What's your system prompt?\")\n    print(response2)\n```\n\n### Manage Custom Gems\n\nYou can create, update, and delete your custom gems programmatically with the API. 
Note that predefined system gems cannot be modified or deleted.\n\n#### Create a Custom Gem\n\nCreate a new custom gem with a name, system prompt (instructions), and optional description:\n\n```python\nasync def main():\n    # Create a new custom gem\n    new_gem = await client.create_gem(\n        name=\"Python Tutor\",\n        prompt=\"You are a helpful Python programming tutor.\",\n        description=\"A specialized gem for Python programming\"\n    )\n\n    print(f\"Custom gem created: {new_gem}\")\n\n    # Use the newly created gem in a conversation\n    response = await client.generate_content(\n        \"Explain how list comprehensions work in Python\",\n        gem=new_gem\n    )\n    print(response.text)\n\nasyncio.run(main())\n```\n\n#### Update an Existing Gem\n\n> [!NOTE]\n>\n> When updating a gem, you must provide all parameters (name, prompt, description) even if you only want to change one of them.\n\n```python\nasync def main():\n    # Get a custom gem (assuming you have one named \"Python Tutor\")\n    await client.fetch_gems()\n    python_tutor = client.gems.get(name=\"Python Tutor\")\n\n    # Update the gem with new instructions\n    updated_gem = await client.update_gem(\n        gem=python_tutor,  # Can also pass gem ID string\n        name=\"Advanced Python Tutor\",\n        prompt=\"You are an expert Python programming tutor.\",\n        description=\"An advanced Python programming assistant\"\n    )\n\n    print(f\"Custom gem updated: {updated_gem}\")\n\nasyncio.run(main())\n```\n\n#### Delete a Custom Gem\n\n```python\nasync def main():\n    # Get the gem to delete\n    await client.fetch_gems()\n    gem_to_delete = client.gems.get(name=\"Advanced Python Tutor\")\n\n    # Delete the gem\n    await client.delete_gem(gem_to_delete)  # Can also pass gem ID string\n    print(f\"Custom gem deleted: {gem_to_delete.name}\")\n\nasyncio.run(main())\n```\n\n### Retrieve Model's Thought Process\n\nWhen using models with thinking capabilities, the 
model's thought process will be populated in `ModelOutput.thoughts`.\n\n```python\nasync def main():\n    response = await client.generate_content(\n            \"What's 1+1?\", model=\"gemini-3-pro\"\n        )\n    print(response.thoughts)\n    print(response.text)\n\nasyncio.run(main())\n```\n\n### Retrieve Images in Response\n\nImages in the API's output are stored as a list of `gemini_webapi.Image` objects. You can access the image title, URL, and description by calling `Image.title`, `Image.url` and `Image.alt` respectively.\n\n```python\nasync def main():\n    response = await client.generate_content(\"Send me some pictures of cats\")\n    for image in response.images:\n        print(image, \"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n### Generate and Edit Images\n\nYou can ask Gemini to generate and edit images with Nano Banana, Google's latest image model, using natural language.\n\n> [!IMPORTANT]\n>\n> Google has some limitations on Gemini's image generation feature, so availability may vary by region\u002Faccount. Here's a summary copied from [official documentation](https:\u002F\u002Fsupport.google.com\u002Fgemini\u002Fanswer\u002F14286560) (as of Sep 10, 2025):\n>\n> > This feature's availability in any specific Gemini app is also limited to the supported languages and countries of that app.\n> >\n> > For now, this feature isn't available to users under 18.\n> >\n> > To use this feature, you must be signed in to Gemini Apps.\n\nYou can save images returned from Gemini locally by calling `Image.save()`. Optionally, you can specify the file path and file name by passing `path` and `filename` arguments to the function. 
This works for both `WebImage` and `GeneratedImage`.\n\n```python\nasync def main():\n    response = await client.generate_content(\"Generate some pictures of cats\")\n    for i, image in enumerate(response.images):\n        await image.save(path=\"temp\u002F\", filename=f\"cat_{i}.png\", verbose=True)\n        print(image, \"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n> [!NOTE]\n>\n> By default, when asked to send images (like in the previous example), Gemini will send images fetched from the web instead of generating images with an AI model, unless you specifically ask it to \"generate\" images in your prompt. In this package, web images and generated images are treated differently as `WebImage` and `GeneratedImage`, and are automatically categorized in the output.\n\n### Retrieve Videos and Audio\n\nGemini can generate videos and audio\u002Fmusic content. These are returned as `GeneratedVideo` and `GeneratedMedia` objects in `ModelOutput.videos` and `ModelOutput.media` respectively. You can save them to disk just like images.\n\n> [!NOTE]\n>\n> You may need an active subscription to access Gemini's video and audio generation features.\n\n```python\nasync def main():\n    response = await client.generate_content(\"Generate a short video of a cat playing\")\n\n    # Save generated videos\n    for video in response.videos:\n        result = await video.save(path=\"temp\u002F\", verbose=True)\n        print(f\"Video saved: {result}\")\n\n    # Save generated media (audio\u002Fmusic)\n    for media in response.media:\n        result = await media.save(path=\"temp\u002F\", verbose=True)\n        print(f\"Media saved: {result}\")\n\nasyncio.run(main())\n```\n\n> [!NOTE]\n>\n> `GeneratedMedia.save()` accepts a `download_type` parameter: `\"audio\"`, `\"video\"`, or `\"both\"` (default). 
Generated video\u002Faudio may take time to render — the save method will poll automatically until the content is ready.\n\n### Generate Content with Gemini Extensions\n\n> [!IMPORTANT]\n>\n> To access Gemini extensions in the API, you must activate them on the [Gemini website](https:\u002F\u002Fgemini.google.com\u002Fextensions) first. As with image generation, Google also has limitations on the availability of Gemini extensions. Here's a summary copied from [official documentation](https:\u002F\u002Fsupport.google.com\u002Fgemini\u002Fanswer\u002F13695044) (as of March 19, 2025):\n>\n> > To connect apps to Gemini, you must have​​​​ Gemini Apps Activity on.\n> >\n> > To use this feature, you must be signed in to Gemini Apps.\n> >\n> > Important: If you're under 18, Google Workspace and Maps apps currently only work with English prompts in Gemini.\n\nAfter activating extensions for your account, you can access them in your prompts either in natural language or by starting your prompt with \"@\" followed by the extension keyword.\n\n```python\nasync def main():\n    response1 = await client.generate_content(\"@Gmail What's the latest message in my mailbox?\")\n    print(response1, \"\\n\\n----------------------------------\\n\\n\")\n\n    response2 = await client.generate_content(\"@Youtube What's the latest activity of Taylor Swift?\")\n    print(response2, \"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n> [!NOTE]\n>\n> For region availability, your Google account's **preferred language** only needs to be set to one of the three supported languages listed above. You can change your language settings [here](https:\u002F\u002Fmyaccount.google.com\u002Flanguage).\n\n### Check and Switch to Other Reply Candidates\n\nA response from Gemini sometimes contains multiple reply candidates with different generated content. You can check all candidates and choose one to continue the conversation. 
By default, the first candidate is chosen.\n\n```python\nasync def main():\n    # Start a conversation and list all reply candidates\n    chat = client.start_chat()\n    response = await chat.send_message(\"Recommend a science fiction book for me.\")\n    for candidate in response.candidates:\n        print(candidate, \"\\n\\n----------------------------------\\n\\n\")\n\n    if len(response.candidates) > 1:\n        # Control the ongoing conversation flow by choosing candidate manually\n        new_candidate = chat.choose_candidate(index=1)  # Choose the second candidate here\n        followup_response = await chat.send_message(\"Tell me more about it.\")  # Will generate content based on the chosen candidate\n        print(new_candidate, followup_response, sep=\"\\n\\n----------------------------------\\n\\n\")\n    else:\n        print(\"Only one candidate available.\")\n\nasyncio.run(main())\n```\n\n### Deep Research\n\nGemini's deep research feature is an autonomous research agent that browses the web, analyzes sources, and produces a comprehensive report. 
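Under the hood this is a submit-then-poll workflow. The generic shape of such polling can be sketched with a stubbed status source (a standalone illustration only — `wait_until_done` is hypothetical, not this package's API; the real methods are shown below):

```python
import asyncio
import time

async def wait_until_done(check_status, poll_interval=0.01, timeout=5.0):
    # Poll check_status() until it reports "done" or the timeout elapses.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = check_status()
        if state == "done":
            return state
        await asyncio.sleep(poll_interval)
    raise TimeoutError("research did not finish in time")

# Stubbed status source standing in for a real progress check
states = iter(["planning", "running", "done"])
print(asyncio.run(wait_until_done(lambda: next(states))))  # done
```

The package's own `deep_research` and `wait_for_deep_research` calls below expose the same knobs as `poll_interval` and `timeout` parameters.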
You can access it programmatically through the API.\n\n> [!NOTE]\n>\n> You may need an active subscription to access Gemini's deep research feature.\n\n**Quick one-call method:**\n\n```python\nasync def main():\n    result = await client.deep_research(\n        \"Compare the top 3 cloud providers and their AI offerings\",\n        poll_interval=10.0,\n        timeout=600.0,\n    )\n    print(f\"Done: {result.done}\")\n    print(result.text)\n\nasyncio.run(main())\n```\n\n**Step-by-step workflow** for more control:\n\n```python\nasync def main():\n    # Step 1: Create a research plan\n    plan = await client.create_deep_research_plan(\n        \"What are the latest advancements in quantum computing?\"\n    )\n    print(f\"Title: {plan.title}\")\n    print(f\"ETA: {plan.eta_text}\")\n    for step in plan.steps:\n        print(f\"  - {step}\")\n\n    # Step 2: Start the research\n    await client.start_deep_research(plan)\n\n    # Step 3: Poll for completion\n    result = await client.wait_for_deep_research(\n        plan,\n        poll_interval=10.0,\n        timeout=600.0,\n        on_status=lambda s: print(f\"Status: {s.state}\"),\n    )\n\n    print(result.text)\n\nasyncio.run(main())\n```\n\n### Logging Configuration\n\nThis package uses [loguru](https:\u002F\u002Floguru.readthedocs.io\u002Fen\u002Fstable\u002F) for logging and exposes a function `set_log_level` to control the log level. You can set the log level to one of the following values: `DEBUG`, `INFO`, `WARNING`, `ERROR`, and `CRITICAL`. The default value is `INFO`.\n\n```python\nfrom gemini_webapi import set_log_level\n\nset_log_level(\"DEBUG\")\n```\n\n> [!NOTE]\n>\n> Calling `set_log_level` for the first time will **globally** remove all existing loguru handlers. You may want to configure logging directly with loguru to avoid this issue and have more advanced control over logging behaviors.\n\n## CLI Tool\n\nA standalone CLI (`cli.py`) is included for interacting with Gemini from the terminal. 
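Its `--cookies-json` input accepts either a plain name-to-value mapping or an array-of-objects browser export (see Cookie Setup below). Purely as an illustration of what normalizing the latter involves — the CLI already handles both shapes itself, and the entry keys used here (`name`, `value`) are an assumption about typical extension exports:

```python
import json

def normalize_cookies(raw: str) -> dict:
    # Accept {"name": "value", ...} as-is, or flatten
    # [{"name": ..., "value": ...}, ...] into the same mapping.
    data = json.loads(raw)
    if isinstance(data, dict):
        return data
    return {entry["name"]: entry["value"] for entry in data}

exported = '[{"name": "__Secure-1PSID", "value": "abc"}]'
print(normalize_cookies(exported))  # {'__Secure-1PSID': 'abc'}
```

Returning to the tool itself: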
It supports single-turn questions, multi-turn chat, deep research, image download, and account diagnostics.\n\n### Cookie Setup\n\nExport your cookies from [gemini.google.com](https:\u002F\u002Fgemini.google.com) and save them as a JSON file. The CLI supports multiple formats:\n\n```json\n{ \"__Secure-1PSID\": \"value...\", \"__Secure-1PSIDTS\": \"value...\" }\n```\n\nYou can also use a browser cookie extension export (array-of-objects format is supported).\n\n> [!NOTE]\n>\n> The CLI automatically persists updated cookies back to the JSON file after each run. Use `--no-persist` to disable this behavior.\n\n### CLI Commands\n\n**Global options** (placed before the subcommand):\n\n```sh\n--cookies-json PATH    Path to cookies JSON file (required)\n--proxy URL            Proxy URL (or uses HTTPS_PROXY env)\n--model NAME           Model name (see 'models' command)\n--verbose              Enable debug logging\n--no-persist           Don't update cookies file after run\n--request-timeout SEC  HTTP timeout in seconds (default: 300)\n```\n\n**Available commands:**\n\n```sh\n# Ask a single question (streams by default)\npython cli.py --cookies-json cookies.json ask \"What is quantum computing?\"\n\n# Ask with image input\npython cli.py --cookies-json cookies.json ask --image photo.jpg \"Describe this\"\n\n# Non-streaming mode\npython cli.py --cookies-json cookies.json ask --no-stream \"Hello\"\n\n# Continue a conversation (chat ID from previous output)\npython cli.py --cookies-json cookies.json reply c_abc123 \"Tell me more\"\n\n# List your chat history\npython cli.py --cookies-json cookies.json list\n\n# Read a specific chat conversation\npython cli.py --cookies-json cookies.json read c_abc123\n\n# List available models\npython cli.py --cookies-json cookies.json models\n\n# Download a generated image\npython cli.py --cookies-json cookies.json download \"https:\u002F\u002F...\" -o output.png\n\n# Account diagnostics (check feature availability)\npython cli.py --cookies-json 
cookies.json inspect\n```\n\n### Deep Research Workflow\n\nThe CLI supports Gemini's Deep Research feature — an autonomous research agent that browses the web, analyzes sources, and produces a comprehensive report.\n\n```sh\n# 1. Submit a research task\npython cli.py --cookies-json cookies.json research send --prompt \"AI chip competition 2025\"\n\n# 2. Check progress (use the chat ID from step 1)\npython cli.py --cookies-json cookies.json research check c_abc123\n\n# 3. Fetch the full result\npython cli.py --cookies-json cookies.json research get c_abc123\n\n# 4. Save result to a file\npython cli.py --cookies-json cookies.json research get c_abc123 --output report.md\n```\n\n## References\n\n[Google AI Studio](https:\u002F\u002Fai.google.dev\u002Ftutorials\u002Fai-studio_quickstart)\n\n[acheong08\u002FBard](https:\u002F\u002Fgithub.com\u002Facheong08\u002FBard)\n\n## Stargazers\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#HanaokaYuzu\u002FGemini-API\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHanaokaYuzu_Gemini-API_readme_5ba79ee1eead.png\" width=\"75%\" alt=\"Star History Chart\">\u003C\u002Fa>\n\u003C\u002Fp>\n","\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHanaokaYuzu_Gemini-API_readme_81ad5b0939ac.png\" width=\"55%\" alt=\"Gemini Banner\" align=\"center\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fgemini-webapi\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fgemini-webapi\" alt=\"PyPI\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fgemini-webapi\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHanaokaYuzu_Gemini-API_readme_6a7b4a2f1f5c.png\" alt=\"Downloads\">\u003C\u002Fa>\n    \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fnetwork\u002Fdependencies\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Flibrariesio\u002Fgithub\u002FHanaokaYuzu\u002FGemini-API\" alt=\"Dependencies\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fblob\u002Fmaster\u002FLICENSE\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FHanaokaYuzu\u002FGemini-API\" alt=\"License\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\" alt=\"Code style\">\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#HanaokaYuzu\u002FGemini-API\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FHanaokaYuzu\u002FGemini-API?style=social\" alt=\"GitHub stars\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FHanaokaYuzu\u002FGemini-API?style=social&logo=github\" alt=\"GitHub issues\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Factions\u002Fworkflows\u002Fpypi-publish.yml\">\n        \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Factions\u002Fworkflows\u002Fpypi-publish.yml\u002Fbadge.svg\" alt=\"CI\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n# \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002FHanaokaYuzu\u002FGemini-API\u002Fmaster\u002Fassets\u002Flogo.svg\" width=\"35px\" alt=\"Gemini Icon\" \u002F> Gemini-API\n\n一个针对 [Google Gemini](https:\u002F\u002Fgemini.google.com) 网页应用（前身为 Bard）的逆向工程异步 Python 封装库。\n\n## 特性\n\n- **持久化 Cookie** - 自动在后台刷新 Cookie。专为常开服务优化。\n- 
**图像生成** - 原生支持通过自然语言生成和编辑图像。\n- **视频与音频生成** - 原生支持生成视频以及音频\u002F音乐内容。\n- **深度研究** - 提供完整的深度研究工作流程，包括计划创建、状态轮询和结果获取。\n- **系统提示词** - 支持使用 [Gemini Gems](https:\u002F\u002Fgemini.google.com\u002Fgems\u002Fview) 自定义模型的系统提示词。\n- **扩展支持** - 支持使用 [Gemini 扩展](https:\u002F\u002Fgemini.google.com\u002Fextensions)，如 YouTube 和 Gmail，生成内容。\n- **分类输出** - 对响应中的文本、思考、图片、视频和音频进行分类。\n- **流式模式** - 支持流式生成，可在生成过程中逐步返回部分输出。\n- **CLI 工具** - 独立的命令行界面，方便快速交互。\n- **官方风格** - 提供简洁优雅的接口，灵感源自 [Google Generative AI](https:\u002F\u002Fai.google.dev\u002Ftutorials\u002Fpython_quickstart) 的官方 API。\n- **异步支持** - 使用 `asyncio` 运行生成任务，并高效地返回结果。\n\n## 目录\n\n- [特性](#features)\n- [目录](#table-of-contents)\n- [安装](#installation)\n- [认证](#authentication)\n- [使用方法](#usage)\n  - [初始化](#initialization)\n  - [生成内容](#generate-content)\n  - [使用文件生成内容](#generate-content-with-files)\n  - [多轮对话](#conversations-across-multiple-turns)\n  - [继续之前的对话](#continue-previous-conversations)\n  - [读取对话历史](#read-conversation-history)\n  - [从 Gemini 历史中删除之前的对话](#delete-previous-conversations-from-gemini-history)\n  - [临时模式](#temporary-mode)\n  - [流式模式](#streaming-mode)\n  - [选择语言模型](#select-language-model)\n  - [列出可用模型](#list-available-models)\n  - [使用 Gemini Gems 应用系统提示词](#apply-system-prompt-with-gemini-gems)\n  - [管理自定义 Gems](#manage-custom-gems)\n    - [创建自定义 Gem](#create-a-custom-gem)\n    - [更新现有 Gem](#update-an-existing-gem)\n    - [删除自定义 Gem](#delete-a-custom-gem)\n  - [获取模型的思考过程](#retrieve-models-thought-process)\n  - [获取响应中的图片](#retrieve-images-in-response)\n  - [生成和编辑图像](#generate-and-edit-images)\n  - [获取视频和音频](#retrieve-videos-and-audio)\n  - [使用 Gemini 扩展生成内容](#generate-content-with-gemini-extensions)\n  - [检查并切换其他回复候选](#check-and-switch-to-other-reply-candidates)\n  - [深度研究](#deep-research)\n  - [日志配置](#logging-configuration)\n- [CLI 工具](#cli-tool)\n  - [Cookie 设置](#cookie-setup)\n  - [CLI 命令](#cli-commands)\n  - [深度研究工作流程](#deep-research-workflow)\n- [参考文献](#references)\n- [星标用户](#stargazers)\n\n## 安装\n\n> 
[!NOTE]\n>\n> 本包需要 Python 3.10 或更高版本。\n\n使用 pip 安装或更新该包。\n\n```sh\npip install -U gemini_webapi\n```\n\n此外，该包可通过可选依赖 `browser-cookie3` 实现从本地浏览器自动导入 Cookie 的功能。若要启用此功能，请安装 `gemini_webapi[browser]`。支持的平台和浏览器请参阅 [此处](https:\u002F\u002Fgithub.com\u002Fborisbabic\u002Fbrowser_cookie3?tab=readme-ov-file#contribute)。\n\n```sh\npip install -U gemini_webapi[browser]\n```\n\n## 身份验证\n\n> [!TIP]\n>\n> 如果已安装 `browser-cookie3`，您可以跳过此步骤，直接前往[使用方法](#usage)部分。只需确保您已在浏览器中登录 \u003Chttps:\u002F\u002Fgemini.google.com>。\n\n- 访问 \u003Chttps:\u002F\u002Fgemini.google.com> 并使用您的 Google 帐户登录\n- 按 F12 打开开发者工具，切换到 `Network` 选项卡，并刷新页面\n- 点击任意请求，复制 `__Secure-1PSID` 和 `__Secure-1PSIDTS` 的 Cookie 值\n\n> [!NOTE]\n>\n> 如果您的应用部署在容器化环境中（例如 Docker），建议将 Cookie 存储在卷中以持久化保存，从而避免每次容器重建时都需要重新进行身份验证。您可以设置 `GEMINI_COOKIE_PATH` 环境变量来指定自动刷新的 Cookie 存储路径。请确保该路径对应用程序可写。\n>\n> 以下是一个示例 `docker-compose.yml` 文件的部分内容：\n\n```yaml\nservices:\n    main:\n        environment:\n            GEMINI_COOKIE_PATH: \u002Ftmp\u002Fgemini_webapi\n        volumes:\n            - .\u002Fgemini_cookies:\u002Ftmp\u002Fgemini_webapi\n```\n\n> [!NOTE]\n>\n> API 的自动刷新 Cookie 功能无需依赖 `browser-cookie3`，并且默认已启用。它允许您持续运行 API 服务，而无需担心 Cookie 过期的问题。\n>\n> 使用此功能时，可能需要您再次在浏览器中登录 Google 帐户。这是预期行为，不会影响 API 的正常功能。\n>\n> 为了避免这种情况，建议从一个独立的浏览器会话中获取 Cookie，并在获取完成后尽快关闭该会话，以达到最佳效果（例如，在浏览器的隐私模式下进行全新登录）。更多详情请参阅 [此处](https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F6)。\n\n## 使用方法\n\n### 初始化\n\n导入所需的包，并使用上一步获取的 Cookie 初始化客户端。初始化成功后，只要进程保持运行，API 就会在后台自动刷新 `__Secure-1PSIDTS`。\n\n```python\nimport asyncio\nfrom gemini_webapi import GeminiClient\n\n# 将“COOKIE VALUE HERE”替换为您的实际 Cookie 值。\n# 如果您的帐户没有 Secure_1PSIDTS，请将其留空。\nSecure_1PSID = \"COOKIE VALUE HERE\"\nSecure_1PSIDTS = \"COOKIE VALUE HERE\"\n\nasync def main():\n    # 如果已安装 browser-cookie3，可以直接使用 `client = GeminiClient()`\n    client = GeminiClient(Secure_1PSID, Secure_1PSIDTS, proxy=None)\n    await client.init(timeout=30, auto_close=False, close_delay=300, 
auto_refresh=True)\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> `auto_close` 和 `close_delay` 是可选参数，用于在一定时间无活动后自动关闭客户端。此功能默认关闭。对于聊天机器人等始终在线的服务，建议将 `auto_close` 设置为 `True`，并合理设置 `close_delay` 值，以更好地管理资源。\n\n### 生成内容\n\n通过调用 `GeminiClient.generate_content` 发送单轮问题，该方法会返回一个 `gemini_webapi.ModelOutput` 对象，其中包含生成的文本、图片、思考内容以及对话元数据。\n\n```python\nasync def main():\n    response = await client.generate_content(\"Hello World!\")\n    print(response.text)\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> 如果您只想查看响应文本，可以直接使用 `print(response)` 来获得相同输出。\n\n### 使用文件生成内容\n\nGemini 支持文件输入，包括图片和文档。您可以选择将文件路径列表（以 `str` 或 `pathlib.Path` 格式）与文本提示一起传递给 `GeminiClient.generate_content`。\n\n```python\nasync def main():\n    response = await client.generate_content(\n            \"请介绍这两份文件的内容。它们之间有什么联系吗？\",\n            files=[\"assets\u002Fsample.pdf\", Path(\"assets\u002Fbanner.png\")],\n        )\n    print(response.text)\n\nasyncio.run(main())\n```\n\n### 多轮对话\n\n如果您希望保持对话的连续性，可以使用 `GeminiClient.start_chat` 创建一个 `gemini_webapi.ChatSession` 对象，并通过该对象发送消息。对话历史将自动处理并在每一轮结束后更新。\n\n```python\nasync def main():\n    chat = client.start_chat()\n    response1 = await chat.send_message(\n        \"请介绍这两份文件的内容。它们之间有什么联系吗？\",\n        files=[\"assets\u002Fsample.pdf\", Path(\"assets\u002Fbanner.png\")],\n    )\n    print(response1.text)\n    response2 = await chat.send_message(\n        \"请使用图像生成工具，用另一种字体和设计修改横幅。\"\n    )\n    print(response2.text, response2.images, sep=\"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> 与 `GeminiClient.generate_content` 类似，`ChatSession.send_message` 也接受 `files` 作为可选参数。\n\n### 继续之前的对话\n\n要手动检索之前的对话，可以在创建新的 `ChatSession` 时，将先前 `ChatSession` 的元数据传递给 `GeminiClient.start_chat`。或者，您也可以将之前的元数据持久化到文件或数据库中，以便在当前 Python 进程退出后仍能访问这些信息。\n\n```python\nasync def main():\n    # 开始一个新的聊天会话\n    chat = client.start_chat()\n    await chat.send_message(\"今天天气真好\")\n\n    # 保存聊天的元数据\n    previous_session = chat.metadata\n\n    # 加载之前的对话\n    
previous_chat = client.start_chat(metadata=previous_session)\n    print(await previous_chat.send_message(\"我之前发了什么消息？\"))\n\nasyncio.run(main())\n```\n\n### 读取对话历史\n\n您可以通过调用 `GeminiClient.read_chat` 并传入聊天 ID 来读取特定聊天的对话历史。该方法会返回一个 `ChatHistory` 对象，其中包含按时间顺序从最新到最旧排列的 `ChatTurn` 对象列表。\n\n```python\nasync def main():\n    chat = client.start_chat()\n    await chat.send_message(\"法国的首都是哪里？\")\n\n    # 读取聊天历史\n    history = await client.read_chat(chat.cid)\n    if history:\n        for turn in history.turns:\n            print(f\"[{turn.role.upper()}] {turn.text}\")\n            print(\"\\n----------------------------------\\n\")\n\nasyncio.run(main())\n```\n\n要列出所有最近的聊天，可以使用 `GeminiClient.list_chats`：\n\n```python\nasync def main():\n    chats = client.list_chats()\n    if chats:\n        for chat_info in chats:\n            print(f\"{chat_info.cid}: {chat_info.title}\")\n\nasyncio.run(main())\n```\n\n### 从 Gemini 历史中删除之前的对话\n\n您可以通过调用 `GeminiClient.delete_chat` 并传入聊天 ID，从服务器端删除特定的聊天记录。\n\n```python\nasync def main():\n    # 开始一个新的聊天会话\n    chat = client.start_chat()\n    await chat.send_message(\"这是一个临时对话。\")\n\n    # 删除该聊天\n    await client.delete_chat(chat.cid)\n    print(f\"聊天已删除：{chat.cid}\")\n\nasyncio.run(main())\n```\n\n### 临时模式\n\n您可以通过将 `temporary=True` 传递给 `GeminiClient.generate_content` 或 `ChatSession.send_message` 来开始一个临时聊天。临时聊天不会保存在 Gemini 的历史记录中。\n\n```python\nasync def main():\n    response = await client.generate_content(\"Hello World!\", temporary=True)\n    print(response.text, \"\\n\\n----------------------------------\\n\\n\")\n\n    chat = client.start_chat()\n    await chat.send_message(\"Fine weather today\", temporary=False)\n    response2 = await chat.send_message(\"What's my last message?\", temporary=True)\n    print(response2.text)\n\nasyncio.run(main())\n```\n\n### 流式模式\n\n对于较长的回复，您可以使用流式模式，在生成时逐步接收部分输出。这能提供更流畅的用户体验，尤其适用于聊天机器人等实时应用。\n\n`generate_content_stream` 方法会生成 `ModelOutput` 对象，其中的 
`text_delta` 属性仅包含自上次生成以来接收到的新字符，从而便于显示增量更新。\n\n```python\nasync def main():\n    async for chunk in client.generate_content_stream(\n        \"What's the difference between 'await' and 'async for'?\"\n    ):\n        print(chunk.text_delta, end=\"\", flush=True)\n\n    print()\n\nasyncio.run(main())\n```\n\n> [!TIP]\n>\n> 流式模式接受与 `generate_content` 相同的参数。您还可以在多轮对话中使用 `ChatSession.send_message_stream` 进行流式交互。\n\n### 选择语言模型\n\n您可以通过将 `model` 参数传递给 `GeminiClient.generate_content` 或 `GeminiClient.start_chat` 来指定要使用的语言模型。默认值为 `unspecified`。\n\n可用模型会在初始化时根据您的账户层级**动态**发现。`Model` 枚举提供了便捷的快捷方式。\n\n```python\nfrom gemini_webapi.constants import Model\n\nasync def main():\n    response1 = await client.generate_content(\n        \"What's your language model version? Reply with the version number only.\",\n        model=Model.BASIC_FLASH,\n    )\n    print(f\"Model version ({Model.BASIC_FLASH.model_name}): {response1.text}\")\n\n    chat = client.start_chat(model=\"gemini-3-pro\")\n    response2 = await chat.send_message(\"What's your language model version? 
Reply with the version number only.\")\n    print(f\"Model version (gemini-3-pro): {response2.text}\")\n\nasyncio.run(main())\n```\n\n您也可以直接传递自定义的模型头信息字符串，以访问未列出的模型。\n\n```python\n# 必须包含 \"model_name\" 和 \"model_header\" 键\ncustom_model = {\n    \"model_name\": \"xxx\",\n    \"model_header\": {\n        \"x-goog-ext-525001261-jspb\": \"[1,null,null,null,'e6fa609c3fa255c0',null,null,null,[4]]\"\n    },\n}\n\nresponse = await client.generate_content(\n    \"What's your model version?\",\n    model=custom_model\n)\n```\n\n### 列出可用模型\n\n客户端会在初始化时动态发现您账户可用的模型。使用 `GeminiClient.list_models` 可查看所有可用模型及其详细信息。\n\n```python\nasync def main():\n    await client.init()  # 确保客户端已初始化\n    models = client.list_models()\n    if models:\n        for model in models:\n            print(f\"{model.display_name}: {model.model_name}\")\n\nasyncio.run(main())\n```\n\n### 使用 Gemini Gems 应用系统提示\n\n系统提示可以通过 [Gemini Gems](https:\u002F\u002Fgemini.google.com\u002Fgems\u002Fview) 应用于对话中。要使用某个宝石，您可以将 `gem` 参数传递给 `GeminiClient.generate_content` 或 `GeminiClient.start_chat`。`gem` 可以是宝石 ID 字符串或 `gemini_webapi.Gem` 对象。单次对话中只能应用一个宝石。\n\n> [!TIP]\n>\n> 存在一些系统预设的宝石，默认情况下不会对用户显示（因此可能无法正常工作）。使用 `client.fetch_gems(include_hidden=True)` 可将其包含在获取结果中。\n\n```python\nasync def main():\n    # 获取当前账户的所有宝石，包括预设和用户创建的\n    await client.fetch_gems(include_hidden=False)\n\n    # 获取后，宝石会被缓存在 `GeminiClient.gems` 中\n    gems = client.gems\n\n    # 获取您想要使用的宝石\n    system_gems = gems.filter(predefined=True)\n    coding_partner = system_gems.get(id=\"coding-partner\")\n\n    response1 = await client.generate_content(\n        \"What's your system prompt?\",\n        gem=coding_partner,\n    )\n    print(response1.text)\n\n    # 另一个使用用户自定义宝石的例子\n    # 宝石 ID 是一致的字符串，可将其存储起来，避免每次都重新获取宝石\n    your_gem = gems.get(name=\"Your Gem Name\")\n    your_gem_id = your_gem.id\n    chat = client.start_chat(gem=your_gem_id)\n    response2 = await chat.send_message(\"What's your system prompt?\")\n    print(response2)\n```\n\n### 
管理自定义宝石\n\n您可以通过 API 以编程方式创建、更新和删除自定义宝石。请注意，预设的系统宝石无法修改或删除。\n\n#### 创建自定义宝石\n\n使用名称、系统提示（指令）以及可选的描述来创建一个新的自定义宝石：\n\n```python\nasync def main():\n    # 创建一个新的自定义宝石\n    new_gem = await client.create_gem(\n        name=\"Python Tutor\",\n        prompt=\"You are a helpful Python programming tutor.\",\n        description=\"A specialized gem for Python programming\"\n    )\n\n    print(f\"Custom gem created: {new_gem}\")\n\n    # 在对话中使用新创建的宝石\n    response = await client.generate_content(\n        \"Explain how list comprehensions work in Python\",\n        gem=new_gem\n    )\n    print(response.text)\n\nasyncio.run(main())\n```\n\n#### 更新现有宝石\n\n> [!NOTE]\n>\n> 更新宝石时，必须提供所有参数（名称、提示、描述），即使您只想更改其中一个。\n\n```python\nasync def main():\n    # 获取一个自定义宝石（假设您有一个名为“Python Tutor”的宝石）\n    await client.fetch_gems()\n    python_tutor = client.gems.get(name=\"Python Tutor\")\n\n    # 用新的指令更新宝石\n    updated_gem = await client.update_gem(\n        gem=python_tutor,  # 也可以传入宝石 ID 字符串\n        name=\"Advanced Python Tutor\",\n        prompt=\"You are an expert Python programming tutor.\",\n        description=\"An advanced Python programming assistant\"\n    )\n\n    print(f\"Custom gem updated: {updated_gem}\")\n\nasyncio.run(main())\n```\n\n#### 删除自定义宝石\n\n```python\nasync def main():\n    # 获取要删除的宝石\n    await client.fetch_gems()\n    gem_to_delete = client.gems.get(name=\"Advanced Python Tutor\")\n\n    # 删除宝石\n    await client.delete_gem(gem_to_delete)  # 也可以传入宝石 ID 字符串\n    print(f\"Custom gem deleted: {gem_to_delete.name}\")\n\nasyncio.run(main())\n```\n\n### 获取模型的思考过程\n\n当使用具备思考能力的模型时，模型的思考过程会被填充到 `ModelOutput.thoughts` 中。\n\n```python\nasync def main():\n    response = await client.generate_content(\n            \"1加1等于多少？\", model=\"gemini-3-pro\"\n        )\n    print(response.thoughts)\n    print(response.text)\n\nasyncio.run(main())\n```\n\n### 获取响应中的图片\n\nAPI 输出中的图片以 `gemini_webapi.Image` 对象列表的形式存储。你可以通过调用 `Image.title`、`Image.url` 和 `Image.alt` 
分别获取图片的标题、URL 和描述。\n\n```python\nasync def main():\n    response = await client.generate_content(\"给我一些猫的图片\")\n    for image in response.images:\n        print(image, \"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n### 生成和编辑图片\n\n你可以使用自然语言请求 Gemini 利用 Google 最新的图像模型 Nano Banana 来生成和编辑图片。\n\n> [!IMPORTANT]\n>\n> Google 对 Gemini 的图像生成功能有一些限制，因此其可用性可能因地区或账户而异。以下是摘自[官方文档](https:\u002F\u002Fsupport.google.com\u002Fgemini\u002Fanswer\u002F14286560)（截至 2025 年 9 月 10 日）的摘要：\n>\n> > 此功能在任何特定 Gemini 应用程序中的可用性也仅限于该应用程序支持的语言和国家。\n> >\n> > 目前，此功能对 18 岁以下的用户不可用。\n> >\n> > 要使用此功能，你必须登录 Gemini 应用程序。\n\n你可以通过调用 `Image.save()` 将 Gemini 返回的图片保存到本地。可选地，你可以通过向该函数传递 `path` 和 `filename` 参数来指定文件路径和文件名。这适用于 `WebImage` 和 `GeneratedImage`。\n\n```python\nasync def main():\n    response = await client.generate_content(\"生成一些猫的图片\")\n    for i, image in enumerate(response.images):\n        await image.save(path=\"temp\u002F\", filename=f\"cat_{i}.png\", verbose=True)\n        print(image, \"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n> [!NOTE]\n>\n> 默认情况下，当被要求发送图片时（如上例所示），Gemini 会发送从网上抓取的图片，而不是使用 AI 模型生成图片，除非你在提示中明确要求它“生成”图片。在此软件包中，网络图片和生成图片分别被视为 `WebImage` 和 `GeneratedImage`，并在输出中自动分类。\n\n### 获取视频和音频\n\nGemini 可以生成视频以及音频\u002F音乐内容。这些内容分别以 `GeneratedVideo` 和 `GeneratedMedia` 对象的形式返回在 `ModelOutput.videos` 和 `ModelOutput.media` 中。你可以像处理图片一样将它们保存到磁盘。\n\n> [!NOTE]\n>\n> 你可能需要有效的订阅才能访问 Gemini 的视频和音频生成功能。\n\n```python\nasync def main():\n    response = await client.generate_content(\"生成一段猫咪玩耍的短视频\")\n\n    # 保存生成的视频\n    for video in response.videos:\n        result = await video.save(path=\"temp\u002F\", verbose=True)\n        print(f\"视频已保存：{result}\")\n\n    # 保存生成的媒体（音频\u002F音乐）\n    for media in response.media:\n        result = await media.save(path=\"temp\u002F\", verbose=True)\n        print(f\"媒体已保存：{result}\")\n\nasyncio.run(main())\n```\n\n> [!NOTE]\n>\n> `GeneratedMedia.save()` 接受一个 `download_type` 
参数：`\"audio\"`、`\"video\"` 或 `\"both\"`（默认）。生成的视频\u002F音频可能需要一些时间来渲染——保存方法会自动轮询，直到内容准备就绪。\n\n### 使用 Gemini 扩展生成内容\n\n> [!IMPORTANT]\n>\n> 要在 API 中访问 Gemini 扩展，你必须先在[Gemini 网站](https:\u002F\u002Fgemini.google.com\u002Fextensions)上启用它们。与图像生成一样，Google 对 Gemini 扩展的可用性也有一定的限制。以下是摘自[官方文档](https:\u002F\u002Fsupport.google.com\u002Fgemini\u002Fanswer\u002F13695044)（截至 2025 年 3 月 19 日）的摘要：\n>\n> > 要将应用连接到 Gemini，你必须开启 Gemini 应用活动功能。\n> >\n> > 要使用此功能，你必须登录 Gemini 应用程序。\n> >\n> > 重要提示：如果你未满 18 岁，Google Workspace 和 Maps 应用目前仅支持使用英语提示与 Gemini 交互。\n\n在为你的账户启用扩展后，你可以在提示中以自然语言形式或以“@”加上扩展关键词的方式访问它们。\n\n```python\nasync def main():\n    response1 = await client.generate_content(\"@Gmail 我的邮箱里最新的一条消息是什么？\")\n    print(response1, \"\\n\\n----------------------------------\\n\\n\")\n\n    response2 = await client.generate_content(\"@Youtube 泰勒·斯威夫特最近在做什么？\")\n    print(response2, \"\\n\\n----------------------------------\\n\\n\")\n\nasyncio.run(main())\n```\n\n> [!NOTE]\n>\n> 关于区域可用性，你的 Google 账户的**首选语言**只需设置为受支持的语言之一即可。你可以在[此处](https:\u002F\u002Fmyaccount.google.com\u002Flanguage)更改语言设置。\n\n### 检查并切换其他回复候选\n\nGemini 的响应有时会包含多个具有不同生成内容的回复候选。你可以查看所有候选，并选择其中一个继续对话。默认情况下，会选择第一个候选。\n\n```python\nasync def main():\n    # 开始一次对话并列出所有回复候选\n    chat = client.start_chat()\n    response = await chat.send_message(\"给我推荐一本科幻小说。\")\n    for candidate in response.candidates:\n        print(candidate, \"\\n\\n----------------------------------\\n\\n\")\n\n    if len(response.candidates) > 1:\n        # 通过手动选择候选来控制正在进行的对话流程\n        new_candidate = chat.choose_candidate(index=1)  # 在这里选择第二个候选\n        followup_response = await chat.send_message(\"请多告诉我一些关于它的信息。\")  # 将基于所选候选生成内容\n        print(new_candidate, followup_response, sep=\"\\n\\n----------------------------------\\n\\n\")\n    else:\n        print(\"只有一个候选可用。\")\n\nasyncio.run(main())\n```\n\n### 深度研究\n\nGemini 的深度研究功能是一个自主的研究代理，它会浏览网页、分析资料来源，并生成一份全面的报告。您可以通过 API 以编程方式访问此功能。\n\n> [!NOTE]\n>\n> 您可能需要有效的订阅才能使用 Gemini 
的深度研究功能。\n\n**快速单次调用方法：**\n\n```python\nasync def main():\n    result = await client.deep_research(\n        \"比较前三大云服务提供商及其 AI 产品\",\n        poll_interval=10.0,\n        timeout=600.0,\n    )\n    print(f\"完成：{result.done}\")\n    print(result.text)\n\nasyncio.run(main())\n```\n\n**分步工作流**，以便获得更精细的控制：\n\n```python\nasync def main():\n    # 第一步：创建研究计划\n    plan = await client.create_deep_research_plan(\n        \"量子计算领域的最新进展有哪些？\"\n    )\n    print(f\"标题：{plan.title}\")\n    print(f\"预计完成时间：{plan.eta_text}\")\n    for step in plan.steps:\n        print(f\"  - {step}\")\n\n    # 第二步：开始研究\n    await client.start_deep_research(plan)\n\n    # 第三步：轮询检查是否完成\n    result = await client.wait_for_deep_research(\n        plan,\n        poll_interval=10.0,\n        timeout=600.0,\n        on_status=lambda s: print(f\"状态：{s.state}\"),\n    )\n\n    print(result.text)\n\nasyncio.run(main())\n```\n\n### 日志配置\n\n本包使用 [loguru](https:\u002F\u002Floguru.readthedocs.io\u002Fen\u002Fstable\u002F) 进行日志记录，并提供 `set_log_level` 函数来控制日志级别。您可以将日志级别设置为以下值之一：`DEBUG`、`INFO`、`WARNING`、`ERROR` 和 `CRITICAL`。默认值为 `INFO`。\n\n```python\nfrom gemini_webapi import set_log_level\n\nset_log_level(\"DEBUG\")\n```\n\n> [!NOTE]\n>\n> 首次调用 `set_log_level` 将会 **全局地** 移除所有现有的 loguru 处理器。为了避免此问题并获得对日志行为更高级的控制，您可能希望直接使用 loguru 来配置日志记录。\n\n## 命令行工具\n\n为了方便在终端中与 Gemini 交互，我们提供了一个独立的命令行工具 (`cli.py`)。它支持单轮提问、多轮对话、深度研究、图片下载以及账户诊断等功能。\n\n### Cookie 设置\n\n请从 [gemini.google.com](https:\u002F\u002Fgemini.google.com) 导出您的 Cookie，并将其保存为 JSON 文件。CLI 支持多种格式：\n\n```json\n{ \"__Secure-1PSID\": \"value...\", \"__Secure-1PSIDTS\": \"value...\" }\n```\n\n您也可以使用浏览器 Cookie 扩展程序导出的文件（支持对象数组格式）。\n\n> [!NOTE]\n>\n> CLI 在每次运行后会自动将更新后的 Cookie 持久化回 JSON 文件。如果您不想启用此行为，可以使用 `--no-persist` 参数。\n\n### CLI 命令\n\n**全局选项**（置于子命令之前）：\n\n```sh\n--cookies-json PATH    Cookie JSON 文件路径（必填）\n--proxy URL            代理 URL（或使用 HTTPS_PROXY 环境变量）\n--model NAME           模型名称（参见 'models' 命令）\n--verbose              启用调试日志\n--no-persist           不在运行后更新 
Cookie 文件\n--request-timeout SEC  HTTP 请求超时时间（默认：300 秒）\n```\n\n**可用命令：**\n\n```sh\n# 提问（默认为流式输出）\npython cli.py --cookies-json cookies.json ask \"什么是量子计算？\"\n\n# 带图片输入的提问\npython cli.py --cookies-json cookies.json ask --image photo.jpg \"描述这张照片\"\n\n# 非流式输出模式\npython cli.py --cookies-json cookies.json ask --no-stream \"你好\"\n\n# 继续对话（使用上一次输出的聊天 ID）\npython cli.py --cookies-json cookies.json reply c_abc123 \"再告诉我一些吧\"\n\n# 列出聊天历史\npython cli.py --cookies-json cookies.json list\n\n# 查看特定聊天记录\npython cli.py --cookies-json cookies.json read c_abc123\n\n# 列出可用模型\npython cli.py --cookies-json cookies.json models\n\n# 下载生成的图片\npython cli.py --cookies-json cookies.json download \"https:\u002F\u002F...\" -o output.png\n\n# 账户诊断（检查功能可用性）\npython cli.py --cookies-json cookies.json inspect\n```\n\n### 深度研究工作流\n\nCLI 支持 Gemini 的深度研究功能——一个能够自主浏览网页、分析资料来源并生成综合报告的研究代理。\n\n```sh\n# 1. 提交研究任务\npython cli.py --cookies-json cookies.json research send --prompt \"2025 年 AI 芯片竞争\"\n\n# 2. 检查进度（使用步骤 1 中的聊天 ID）\npython cli.py --cookies-json cookies.json research check c_abc123\n\n# 3. 获取完整结果\npython cli.py --cookies-json cookies.json research get c_abc123\n\n# 4. 
将结果保存到文件\npython cli.py --cookies-json cookies.json research get c_abc123 --output report.md\n```\n\n## 参考文献\n\n[Google AI Studio](https:\u002F\u002Fai.google.dev\u002Ftutorials\u002Fai-studio_quickstart)\n\n[acheong08\u002FBard](https:\u002F\u002Fgithub.com\u002Facheong08\u002FBard)\n\n## 星标用户\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#HanaokaYuzu\u002FGemini-API\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHanaokaYuzu_Gemini-API_readme_5ba79ee1eead.png\" width=\"75%\" alt=\"星标历史图\">\u003C\u002Fa>\n\u003C\u002Fp>","# Gemini-API 快速上手指南\n\nGemini-API 是一个逆向工程的异步 Python 封装库，用于访问 Google Gemini (原 Bard) 网页版功能。它支持持久化 Cookie、多模态生成（文本、图片、音视频）、深度研究以及流式输出等高级特性。\n\n## 环境准备\n\n- **操作系统**：Windows, macOS, Linux\n- **Python 版本**：3.10 或更高\n- **前置依赖**：\n  - 基础使用：无额外依赖\n  - 自动获取浏览器 Cookie（可选）：需安装 `browser-cookie3` 扩展包\n- **网络要求**：需能访问 `https:\u002F\u002Fgemini.google.com`\n\n## 安装步骤\n\n使用 pip 安装基础版本：\n\n```sh\npip install -U gemini_webapi\n```\n\n若希望自动从本地浏览器读取 Cookie（需确保浏览器已登录 Gemini），可安装带浏览器支持的版本：\n\n```sh\npip install -U gemini_webapi[browser]\n```\n\n> **提示**：国内用户如遇下载缓慢，可指定清华或阿里云镜像源：\n> ```sh\n> pip install -U gemini_webapi -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\n### 1. 获取认证 Cookie\n\n若未安装 `browser-cookie3`，需手动获取 Cookie：\n\n1. 在浏览器中登录 [https:\u002F\u002Fgemini.google.com](https:\u002F\u002Fgemini.google.com)\n2. 按 `F12` 打开开发者工具，切换至 **Network** 标签页\n3. 刷新页面，点击任意请求\n4. 在请求头中找到并复制以下两个 Cookie 值：\n   - `__Secure-1PSID`\n   - `__Secure-1PSIDTS`（若不存在可留空）\n\n### 2. 
初始化客户端并生成内容\n\n以下是最简单的单轮对话示例：\n\n```python\nimport asyncio\nfrom gemini_webapi import GeminiClient\n\n# 替换为你的实际 Cookie 值\nSecure_1PSID = \"YOUR_1PSID_COOKIE\"\nSecure_1PSIDTS = \"YOUR_1PSIDTS_COOKIE\"\n\nasync def main():\n    # 初始化客户端\n    client = GeminiClient(Secure_1PSID, Secure_1PSIDTS)\n    await client.init(timeout=30, auto_refresh=True)\n\n    # 发送消息并打印回复\n    response = await client.generate_content(\"你好，请介绍一下你自己！\")\n    print(response.text)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n> **说明**：\n> - `auto_refresh=True` 启用后台自动刷新 Cookie，适合长期运行的服务。\n> - 若已安装 `gemini_webapi[browser]` 且浏览器已登录，可直接使用 `client = GeminiClient()` 省略 Cookie 参数。","某初创公司的内容运营团队需要构建一个 7x24 小时运行的自动化新闻摘要与多媒体报告系统，旨在将全球科技资讯转化为包含图文、音频的每日简报。\n\n### 没有 Gemini-API 时\n- **多模态能力割裂**：官方 API 对免费用户限制较多，团队需分别调用不同接口生成文本、图片和音频，导致代码逻辑复杂且维护成本极高。\n- **会话状态难维持**：缺乏原生的持久化 Cookie 管理，长时间运行的服务经常因会话过期而中断，需人工频繁干预重新登录。\n- **深度研究缺失**：无法直接触发 Google 原生的“深度研究”工作流，只能编写复杂的爬虫脚本模拟搜索，效率低且容易触发反爬机制。\n- **实时流式输出困难**：难以实现类似网页端的打字机效果，用户生成报告时需等待全部内容处理完毕才能查看，体验延迟严重。\n- **扩展功能受限**：无法便捷地调用 YouTube 或 Gmail 等原生扩展插件，导致简报中缺少最新的视频素材和邮件上下文信息。\n\n### 使用 Gemini-API 后\n- **原生多模态整合**：利用其原生支持的视频、音频及图像生成功能，通过统一的异步 Python 接口即可一次性产出丰富的多媒体报告。\n- **自动会话保活**：借助后台自动刷新 Cookie 的特性，系统实现了真正的无人值守运行，彻底解决了会话中断问题。\n- **一键深度调研**：直接调用内置的深度研究工作流，自动完成计划制定、状态轮询和结果获取，大幅提升了资讯挖掘的深度与准确性。\n- **流畅流式响应**：开启流模式后，系统能实时_yield_部分生成内容，让用户在报告生成过程中即可预览进度，显著降低感知延迟。\n- **生态插件无缝对接**：轻松集成 YouTube 和 Gmail 等扩展插件，自动抓取最新视频链接和邮件关键信息，丰富了简报的内容维度。\n\nGemini-API 通过逆向工程完美复刻了网页端的全部高级特性，让开发者能以极低的成本构建出具备官方完整能力的企业级自动化应用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHanaokaYuzu_Gemini-API_81ad5b09.png","HanaokaYuzu","UZQueen","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FHanaokaYuzu_0da1be2d.png","ゲーム開発部、部長……ユズです","@GMDevDept","Inside a locker",null,"https:\u002F\u002Fgithub.com\u002FHanaokaYuzu",[81],{"name":82,"color":83,"percentage":84},"Python","#3572A5",100,2610,382,"2026-04-10T05:16:47","AGPL-3.0","Linux, macOS, 
Windows","未说明",{"notes":92,"python":93,"dependencies":94},"该工具是 Google Gemini 网页版的逆向工程封装，通过 Cookie 进行身份验证，无需本地 GPU 或大模型文件。若在容器化环境（如 Docker）部署，建议挂载卷以持久化存储 Cookie 文件避免重复认证。支持自动刷新 Cookie 以保持服务长期运行。","3.10+",[95],"browser-cookie3 (可选)",[13,52,14,15,16,35],[98,99,100,101,102,103,104,105,106,107,108,109,110,111,112],"ai","api","async","bard","chatbot","gemini","generative-ai","google","llm","python","imagefx","google-gemini","reverse-engineering","image-generation","nano-banana","2026-03-27T02:49:30.150509","2026-04-10T19:13:50.018509",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},28067,"为什么会出现 'Stream interrupted' 或模型一直处于等待\u002F排队状态？","这通常是因为您的 Google 账号或 IP 地址因可疑活动被暂时限制。频繁重试可能会加剧问题。此外，免费和 Pro 账号都有明确的使用限制，并非无限使用。\n解决方案：\n1. 检查账号是否被封锁，尝试更换账号。\n2. 更换 IP 地址或使用代理（如 WARP）。\n3. 避免短时间内高频请求。\n4. 确保使用的是最新版本的代码补丁。","https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F235",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},28068,"在不同 IP 地址（如代理或云服务器）上使用 Cookie 时无法登录或报错 429 怎么办？","Google 会检测异常流量，特别是来自热门云服务提供商的 IP，导致返回 429 错误并重定向到验证页面。\n解决方案：\n1. 尽量在与生成 Cookie 相同的 IP 环境下使用。\n2. 如果必须在不同 IP 使用，可以在 Python 环境变量中设置 `HTTP_PROXY`，通过代理（如 WARP）发送请求，这通常是免费且有效的。\n3. 避免在容易被标记的公共云 IP 上直接运行。","https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F48",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},28069,"账号被标记为可疑活动并弹出验证码或自动登出是什么原因？","主要原因是发送请求的频率过高，导致 IP 地址被暂时封锁。目前没有一个具体的频率阈值标准。\n建议：降低请求频率，避免短时间内大量并发请求。如果问题持续，请更换 IP 地址。","https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F156",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},28070,"Pro 版本无法正常生成图片或无响应如何解决？","该问题已在项目的 #244 号补丁中修复。\n解决方案：\n1. 更新项目代码到最新版本以应用修复。\n2. 
如果仍然有问题，可以尝试使用社区维护的分支版本（如 https:\u002F\u002Fgithub.com\u002Fxob0t\u002FGemini-API），但需注意生成的图片分辨率可能与网页版不一致。","https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F250",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},28071,"RotateCookies API 响应中缺少 '__Secure-1PSIDTS' 导致 Cookie 快速过期怎么办？","如果 API 响应中没有返回 '__Secure-1PSIDTS'，可以通过以下步骤手动获取：\n1. 访问 https:\u002F\u002Fmyaccount.google.com\u002F。\n2. 打开浏览器开发者工具（按 F12），切换到 Network（网络）标签页。\n3. 删除浏览器中现有的 '__Secure-1PSIDTS' 和 'SID' Cookie。\n4. 刷新页面（F5）。\n5. 在网络请求中观察对 `google.com\u002FRotateCookies` 的调用，新的 '__Secure-1PSIDTS' 将会出现在响应或新的 Cookie 中。","https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F153",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},28072,"某些 Google 账号在浏览器中完全找不到 '__Secure-1PSIDTS' Cookie 该如何处理？","部分账号可能不会直接显示该 Cookie，或者其有效期极短。核心问题往往在于 '1PSID' 本身而非 '1PSIDTS'。\n解决方案：\n1. 尝试按照 Issue #153 中的方法，访问 Google 账户页面并刷新以触发 Cookie 生成。\n2. 如果依然无法获取，可能需要更换一个能正常生成该 Cookie 的 Google 账号，因为并非所有账号都支持此非官方 API 的认证方式。","https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fissues\u002F6",[147,152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242],{"id":148,"version":149,"summary_zh":150,"released_at":151},188955,"v2.0.0","**v2.0.0 版本发布：**\n\n特性：新增对视频和音频生成的支持，以及以完整尺寸下载生成图像的功能  \n特性：新增深度研究功能支持  \n特性：新增 CLI 工具  \n特性：新增获取最近聊天列表的功能  \n特性：动态模型检索与账户诊断功能  \n特性：客户端现在无需 Cookie 即可运行  \n修复：默认使用可写临时目录作为 Cookie 缓存  \n修复：图像到图像输出的发现机制  \n构建：从 httpx 迁移到 curl-cffi\n\n**重要提示：** 由于进行了大规模重构，从 v1 迁移可能会出现兼容性问题。如果在迁移过程中遇到任何问题，请查看最新的 [README](https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fblob\u002Fmaster\u002FREADME.md)。\n\n## 变更内容\n* 构建（依赖项）：将 actions\u002Fdownload-artifact 从 8.0.0 升级至 8.0.1，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F278 中完成  \n* 从 httpx 迁移到 curl-cffi，由 @luuquangvu 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F244 中完成  \n* 
根据特定账户设置或会员身份动态检索模型、语言及自定义请求头，由 @luuquangvu 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F280 中完成  \n* 修复：为 Cookie 缓存使用可写临时目录，由 @onuraycicek 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F273 中完成  \n* 特性：深度研究工作流及 CLI 工具，由 @michaelGuo1204 和 @Leechael 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F282 中实现  \n\n## 新贡献者\n* @onuraycicek 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F273 中完成了首次贡献  \n* @michaelGuo1204 和 @Leechael 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F282 中完成了首次贡献  \n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.21.0...v2.0.0","2026-04-06T21:18:46",{"id":153,"version":154,"summary_zh":155,"released_at":156},188956,"v1.21.0","特性：新增按聊天 ID 读取聊天记录的功能\n特性：引入 `gemini_webapi[browser]` 安装选项，将 `browser-cookie3` 一同打包\n修复：将 `gemini-3.0-pro` 更新为 `gemini-3.1-pro`，并修正模型请求头\n重构：更新请求负载，以更好地模拟浏览器行为\n## 变更内容\n* 修复 Pro 模型的流式传输和恢复功能，由 @ww2283 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F255 中完成\n\n## 新贡献者\n* @ww2283 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F255 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.20.0...v1.21.0","2026-03-06T23:14:40",{"id":158,"version":159,"summary_zh":160,"released_at":161},188957,"v1.20.0","功能：添加临时单次调用模式 (#215)\n修复：允许在没有 SNlM0e 令牌的情况下初始化 (#247)\n## 变更内容\n* 功能：由 @JidaDiao 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F215 中添加临时单次调用模式\n* 构建（依赖）：由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F259 中将 actions\u002Fupload-artifact 从 6.0.0 升级到 7.0.0\n* 构建（依赖）：由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F258 中将 actions\u002Fdownload-artifact 从 7.0.0 升级到 8.0.0\n* 修复：由 
@getCurrentThread 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F247 中实现允许在没有 SNlM0e 令牌的情况下初始化\n\n## 新贡献者\n* @JidaDiao 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F215 中做出了首次贡献\n* @getCurrentThread 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F247 中做出了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.19.2...v1.20.0","2026-03-03T22:47:43",{"id":163,"version":164,"summary_zh":165,"released_at":166},188958,"v1.19.2","合并拉取请求 #237，来自 luuquangvu 的分支 Fix-the-logic-for-queueing\n\n修复僵尸流的重试逻辑。\n## 变更内容\n* 修复僵尸流的重试逻辑。由 @luuquangvu 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F237 中完成。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.19.1...v1.19.2","2026-02-14T05:25:33",{"id":168,"version":169,"summary_zh":170,"released_at":171},188959,"v1.19.1","修复未保存对话的重试逻辑。（#233）\n## 变更内容\n* 修复未保存对话的重试逻辑。由 @luuquangvu 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F233 中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.19.0...v1.19.1","2026-02-10T05:44:04",{"id":173,"version":174,"summary_zh":175,"released_at":176},188960,"v1.19.0","修复：解决了代码块重复的问题。（#228）\n\n---------\n\n## 变更内容\n* 由 @luuquangvu 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F228 中修复了代码块重复的问题。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.18.1...v1.19.0","2026-02-09T23:16:15",{"id":178,"version":179,"summary_zh":180,"released_at":181},188961,"v1.18.1","合并拉取请求 #188，来自 CodeNebulaRex 的 delete_chat 分支\n## 变更内容\n* 添加了 client.delete_chat 方法，由 @CodeNebulaRex 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F188 中实现。\n* 修复：更新获取宝石的请求负载，并允许在 JSON 解析中使用负索引。\n\n## 新贡献者\n* @CodeNebulaRex 在 
https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F188 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.18.0...v1.18.1","2026-02-04T22:18:36",{"id":183,"version":184,"summary_zh":185,"released_at":186},188962,"v1.18.0","特性：添加对流式模式的支持\n特性：更新模型头信息，新增 `gemini-3.0-flash` 和 `gemini-3.0-flash-thinking`，停止维护 gemini 2.5 系列\n重构：优化了 JSON 解析、文件上传、令牌刷新等工具函数\n\n## 变更内容\n* build(deps)：由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F196 中将 actions\u002Fdownload-artifact 从 6.0.0 升级至 7.0.0\n* build(deps)：由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F195 中将 actions\u002Fupload-artifact 从 5.0.0 升级至 6.0.0\n* 由 @luuquangvu 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F198 中实现文件路径校验、MIME 类型判断及正确文件元数据的发送\n* 由 @luuquangvu、@faithleysath 和 @ww2283 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F220 中添加流式响应、提升解析效率，并实现更完善的重试方案\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.17.3...v1.18.0","2026-02-03T01:18:04",{"id":188,"version":189,"summary_zh":190,"released_at":191},188963,"v1.17.3","修复：禁用 HTTP\u002F2 以避免 httpx.RemoteProtocolError 异常\n关闭 #165\n**完整更新日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.17.2...v1.17.3","2025-12-05T22:38:20",{"id":193,"version":194,"summary_zh":195,"released_at":196},188964,"v1.17.2","修复：解决无效的错误码问题，并使调试日志更加清晰。\n\n解决无效的错误码问题，同时使调试日志更加清晰，以避免误解。\n## 变更内容\n* 重构：由 @faithleysath 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F169 中改进代码质量和类型安全性\n* 构建（依赖项）：由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F174 中将 actions\u002Fcheckout 从版本 5 升级到版本 6\n* 解决无效的错误码问题，并使调试日志更加清晰，由 @luuquangvu 在 
https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F182 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.17.1...v1.17.2","2025-12-02T00:16:22",{"id":198,"version":199,"summary_zh":200,"released_at":201},188965,"v1.17.1","feat: allow passing custom model headers\n\nref #161\n## What's Changed\n* build(deps): bump actions\u002Fdownload-artifact from 5.0.0 to 6.0.0 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F150\n* build(deps): bump actions\u002Fupload-artifact from 4.6.2 to 5.0.0 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F149\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.17.0...v1.17.1","2025-11-21T01:03:11",{"id":203,"version":204,"summary_zh":205,"released_at":206},188966,"v1.17.0","feat: add support for 3.0 pro model and retire 2.0 model series\r\n\r\nUse a dedicated JSON parsing function to fix index errors caused by unsteady response structure.\r\nDepreciate 2.0 models which have been redirected to 2.5 flash on backend.\r\n\r\nclose #160\r\n## What's Changed\r\n* Fix some issues by @luuquangvu in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F158\r\n* Add Gemini 3.0 Pro model support by @faithleysath in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F161\r\n\r\n## New Contributors\r\n* @luuquangvu made their first contribution in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F158\r\n* @faithleysath made their first contribution in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F161\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.16.0...v1.17.0","2025-11-20T04:46:16",{"id":208,"version":209,"summary_zh":210,"released_at":211},188967,"v1.16.0","feat: customizable cookie caching path; improved browser cookie loading logic\n\nclose #129\n## What's Changed\n* build(deps): bump pypa\u002Fgh-action-pypi-publish from 1.12.4 to 1.13.0 in \u002F.github\u002Fworkflows in the github_actions group across 1 directory by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F130\n* build(deps): bump actions\u002Fsetup-python from 5 to 6 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F132\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.15.2...v1.16.0","2025-10-15T22:20:16",{"id":213,"version":214,"summary_zh":215,"released_at":216},188968,"v1.15.2","feat: add error code 1050\n\nref #127\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.15.1...v1.15.2","2025-09-03T20:27:14",{"id":218,"version":219,"summary_zh":220,"released_at":221},188969,"v1.15.1","fix: update model ids\n\nref #126, #128\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.15.0...v1.15.1","2025-09-02T22:31:32",{"id":223,"version":224,"summary_zh":225,"released_at":226},188970,"v1.15.0","feat: add support to create, update, and delete custom gems\r\n\r\n## What's Changed\r\n* build(deps): bump actions\u002Fcheckout from 4 to 5 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F124\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.14.4...v1.15.0","2025-08-27T22:59:56",{"id":228,"version":229,"summary_zh":230,"released_at":231},188971,"v1.14.4","fix: image to image response parsing; bump orjson 
to v3.11.1\n\nref #119\n\n## What's Changed\n* build(deps): bump actions\u002Fdownload-artifact from 4.3.0 to 5.0.0 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fpull\u002F122\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.14.3...v1.14.4","2025-08-12T00:56:19",{"id":233,"version":234,"summary_zh":235,"released_at":236},188972,"v1.14.3","fix: remove language constants when fetching gems\n\nref #114\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.14.2...v1.14.3","2025-07-18T22:45:57",{"id":238,"version":239,"summary_zh":240,"released_at":241},188973,"v1.14.2","feat: save generated images in full size by default\n\nref #112, #111, #102\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.14.1...v1.14.2","2025-07-04T22:00:44",{"id":243,"version":244,"summary_zh":245,"released_at":246},188974,"v1.14.1","fix: gemini.google.com 502 error\r\n\r\nrefactor: increase default timeout to 300 seconds\r\n\r\nref #110, #78\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHanaokaYuzu\u002FGemini-API\u002Fcompare\u002Fv1.14.0...v1.14.1","2025-07-04T04:16:40"]