[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-frankroeder--parrot.nvim":3,"tool-frankroeder--parrot.nvim":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":23,"env_os":98,"env_gpu":99,"env_ram":100,"env_deps":101,"category_tags":109,"github_topics":110,"view_count":23,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":131,"updated_at":132,"faqs":133,"releases":164},3141,"frankroeder\u002Fparrot.nvim","parrot.nvim","parrot.nvim 🦜 - the plugin that brings stochastic parrots to Neovim.","parrot.nvim 是一款专为 Neovim 打造的 AI 辅助插件，旨在将大语言模型（LLM）的能力无缝融入你的文本编辑工作流。它专注于提供按需的代码补全、内容生成及对话式交互，所有操作均在原生的 Neovim 缓冲区中进行，让开发者能像在聊天一样与 AI 协作编写和修改代码。\n\n这款工具主要解决了传统 AI 编程助手“黑盒化”的问题。不同于自动扫描文件或在后台静默发送数据的智能体，parrot.nvim 坚持透明与隐私优先的原则：用户完全掌控发送给 API 的内容，绝无隐藏的后台请求或自动分析行为。这意味着你可以放心地在敏感项目中使用它，而无需担心代码泄露风险。\n\nparrot.nvim 特别适合注重隐私、希望精确控制 AI 交互过程的资深开发者和技术研究人员。其技术亮点在于强大的兼容性，支持 OpenAI、Anthropic、Google Gemini 等主流云服务，也能通过 Ollama 连接本地离线模型；同时提供灵活的凭证管理和基于 Markdown 的持久化对话存储。无论是修复 Bug、重写代码段还是添加注释，它都能让你在不离开编辑器的情况下，安全高效地完成各类编程任务","parrot.nvim 是一款专为 Neovim 打造的 AI 辅助插件，旨在将大语言模型（LLM）的能力无缝融入你的文本编辑工作流。它专注于提供按需的代码补全、内容生成及对话式交互，所有操作均在原生的 Neovim 缓冲区中进行，让开发者能像在聊天一样与 AI 协作编写和修改代码。\n\n这款工具主要解决了传统 AI 
编程助手“黑盒化”的问题。不同于自动扫描文件或在后台静默发送数据的智能体，parrot.nvim 坚持透明与隐私优先的原则：用户完全掌控发送给 API 的内容，绝无隐藏的后台请求或自动分析行为。这意味着你可以放心地在敏感项目中使用它，而无需担心代码泄露风险。\n\nparrot.nvim 特别适合注重隐私、希望精确控制 AI 交互过程的资深开发者和技术研究人员。其技术亮点在于强大的兼容性，支持 OpenAI、Anthropic、Google Gemini 等主流云服务，也能通过 Ollama 连接本地离线模型；同时提供灵活的凭证管理和基于 Markdown 的持久化对话存储。无论是修复 Bug、重写代码段还是添加注释，它都能让你在不离开编辑器的情况下，安全高效地完成各类编程任务。","\u003Cdiv align=\"center\">\n\n# parrot.nvim 🦜\n\n\nThis is [parrot.nvim](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim), the ultimate [stochastic parrot](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FStochastic_parrot) to support your text editing inside Neovim.\n\n[Features](#features) • [Demo](#demo) • [Getting Started](#getting-started) • [Commands](#commands) • [Configuration](#configuration) • [Roadmap](#roadmap) • [FAQ](#faq)\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrankroeder_parrot.nvim_readme_49efdf5e9782.png\" alt=\"parrot.nvim logo\" width=\"50%\">\n\u003C\u002Fdiv>\n\n## Features\n\n[parrot.nvim](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim) offers a seamless out-of-the-box experience, providing tight integration of current LLM APIs into your Neovim workflows, with a focus solely on text generation.\nThe selected core features include **on-demand text completion and editing**, as well as **chat-like sessions** within native **Neovim buffers**.\n\nThis plugin is intended for people who actually know what they are doing and people who care for **privacy and transparency**.\nThe user is always under **full control** of what will be sent to the LLM API endpoint, hence this plugin fully **excludes** the whole notion of agents provided by tools such as [codex](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex), [claude-code](https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fclaude-code), and the [gemini-cli](https:\u002F\u002Fgithub.com\u002Fgoogle-gemini\u002Fgemini-cli).\n\nA substantial part of the code is based on an early fork of the 
brilliant work, Tibor Schmidt's [gp.nvim](https:\u002F\u002Fgithub.com\u002FRobitx\u002Fgp.nvim).\n\n- Persistent conversations stored as markdown files within Neovim's standard path or a user-defined location\n- Custom hooks for inline text editing based on user instructions and chats with predefined system prompts\n- Unified provider system supporting any OpenAI-compatible API:\n    + [OpenAI API](https:\u002F\u002Fplatform.openai.com\u002F)\n    + [Anthropic API](https:\u002F\u002Fwww.anthropic.com\u002Fapi)\n    + [Google Gemini API](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs)\n    + [xAI API](https:\u002F\u002Fconsole.x.ai)\n    + Local and offline serving via [ollama](https:\u002F\u002Fgithub.com\u002Follama\u002Follama)\n    + Any custom OpenAI-compatible endpoint with configurable functions; also supports [Perplexity.ai API](https:\u002F\u002Fblog.perplexity.ai\u002Fblog\u002Fintroducing-pplx-api), [Mistral API](https:\u002F\u002Fdocs.mistral.ai\u002Fapi\u002F), [Groq API](https:\u002F\u002Fconsole.groq.com), [DeepSeek API](https:\u002F\u002Fplatform.deepseek.com), [GitHub Models](https:\u002F\u002Fgithub.com\u002Fmarketplace\u002Fmodels), and [NVIDIA API](https:\u002F\u002Fdocs.api.nvidia.com)\n- Flexible API credential management from various sources:\n    + Environment variables\n    + Bash commands\n    + Password manager CLIs (lazy evaluation)\n- Repository-specific instructions via `.parrot.md` file using the `PrtContext` command\n- **No** autocompletion and **no** hidden requests in the background to analyze your files\n\n\n## Demo\n\nSeamlessly switch between providers and models.\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F0df0348f-85c0-4a2d-ba1f-ede2738c6d02\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n---\n\nTrigger code completions based on comments.\n\u003Cdiv align=\"left\">\n    
\u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F197f99ac-9854-4fe9-bddb-394c1b64f6b6\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n---\n\nLet the parrot fix your bugs.\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd3a0b261-a9dd-45e6-b508-dc5280594b06\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n---\n\n\u003Cdetails>\n\u003Csummary>Rewrite a visual selection with `PrtRewrite`.\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fc3d38702-7558-4e9e-96a3-c43312a543d0\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n---\n\n\u003Cdetails>\n\u003Csummary>Append code with the visual selection as context with `PrtAppend`.\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F80af02fa-cd88-4023-8a55-f2d3c0a2f28e\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n---\n\n\u003Cdetails>\n\u003Csummary>Add comments to a function with `PrtPrepend`.\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F9a6bfe66-4bc7-4b63-8694-67bf9c23c064\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n---\n\n\u003Cdetails>\n\u003Csummary>Retry your latest rewrite, append or prepend with `PrtRetry`.\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F03442f34-687b-482e-b7f1-7812f70739cc\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n## Getting Started\n\n### Dependencies\n\nThis plugin requires the latest version of Neovim and relies on a carefully selected set of established plugins.\n\n- [`neovim 0.10+`](https:\u002F\u002Fgithub.com\u002Fneovim\u002Fneovim\u002Freleases)\n- [`plenary`](https:\u002F\u002Fgithub.com\u002Fnvim-lua\u002Fplenary.nvim)\n- 
[`ripgrep`](https:\u002F\u002Fgithub.com\u002FBurntSushi\u002Fripgrep) (optional)\n- [`fzf`](https:\u002F\u002Fgithub.com\u002Fjunegunn\u002Ffzf) (optional, requires ripgrep)\n- [`telescope`](https:\u002F\u002Fgithub.com\u002Fnvim-telescope\u002Ftelescope.nvim) (optional)\n\n### Installation\n\n\u003Cdetails>\n  \u003Csummary>lazy.nvim\u003C\u002Fsummary>\n\n\n```lua\n{\n  \"frankroeder\u002Fparrot.nvim\",\n  dependencies = { \"ibhagwan\u002Ffzf-lua\", \"nvim-lua\u002Fplenary.nvim\" },\n  opts = {}\n}\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n  \u003Csummary>Packer\u003C\u002Fsummary>\n\n```lua\nrequire(\"packer\").startup(function()\n  use({\n    \"frankroeder\u002Fparrot.nvim\",\n    requires = { 'ibhagwan\u002Ffzf-lua', 'nvim-lua\u002Fplenary.nvim'},\n    config = function()\n      require(\"parrot\").setup()\n    end,\n  })\nend)\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n  \u003Csummary>Neovim native package\u003C\u002Fsummary>\n\n```sh\ngit clone --depth=1 https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim.git \\\n  \"${XDG_DATA_HOME:-$HOME\u002F.local\u002Fshare}\"\u002Fnvim\u002Fsite\u002Fpack\u002Fparrot\u002Fstart\u002Fparrot.nvim\n```\n\n\u003C\u002Fdetails>\n\n### Setup\n\nThe minimal requirement is to at least set up one provider, such as the one provided below or one from the [provider configuration examples](#provider-configuration-examples).\n\n```lua\n{\n  \"frankroeder\u002Fparrot.nvim\",\n  dependencies = { 'ibhagwan\u002Ffzf-lua', 'nvim-lua\u002Fplenary.nvim' },\n  -- optionally include \"folke\u002Fnoice.nvim\" or \"rcarriga\u002Fnvim-notify\" for beautiful notifications\n  config = function()\n    require(\"parrot\").setup {\n      -- Providers must be explicitly set up to make them available.\n      providers = {\n        openai = {\n          name = \"openai\",\n          api_key = os.getenv \"OPENAI_API_KEY\",\n          endpoint = \"https:\u002F\u002Fapi.openai.com\u002Fv1\u002Fchat\u002Fcompletions\",\n          
params = {\n            chat = { temperature = 1.1, top_p = 1 },\n            command = { temperature = 1.1, top_p = 1 },\n          },\n          topic = {\n            model = \"gpt-4.1-nano\",\n            params = { max_completion_tokens = 64 },\n          },\n          models ={\n            \"gpt-4o\",\n            \"o4-mini\",\n            \"gpt-4.1-nano\",\n          }\n        },\n      },\n    }\n  end,\n}\n```\n\n## Usage\n\n### Chat Basics\n\nChats in `parrot.nvim` are essentially standard Markdown buffers.\n\n**How it works:**\n1. **Open a Chat**: Use `:PrtChatNew` to open a fresh chat buffer (or `:PrtChatToggle` to toggle the last one).\n2. **Type your prompt**: Just write your question or instruction in the buffer after the user prefix `🗨:`.\n3. **Trigger the LLM**: Press the trigger keymap (default `\u003CC-g>\u003CC-g>` in insert mode) or use the `:PrtChatRespond` command.\n4. **Receive Response**: The LLM streams its response directly into the buffer at your cursor position.\n5. **Stop Generation**: Press `\u003CC-g>s` to stop the generation at any time.\n\n**Key Concepts:**\n- **Context**: The entire buffer content is sent as context (unless hidden comments are used).\n- **System Prompts**: You can set unique system prompts per chat or globally.\n- **Persistence**: Chats are saved as `.md` files in your configured directory.\n\n### Command Mode (Interactive Commands)\n\nCommand mode allows you to interact with LLMs directly on your code without leaving your current buffer.\n\n**Available Commands:**\n- `:PrtRewrite` – Rewrite the visual selection based on your prompt.\n- `:PrtAppend` – Append generated text after the selection.\n- `:PrtPrepend` – Prepend generated text before the selection.\n- `:PrtRetry` – Retry the last rewrite\u002Fappend\u002Fprepend operation.\n- `:PrtEdit` – Edit and re-run the last command with a modified prompt.\n\n**Workflow:**\n1. Select the code you want to modify (visual mode).\n2. 
Run one of the commands above (e.g., `:PrtRewrite fix the bug`).\n3. The LLM processes your selection and streams the result or presents you with a `diff view`.\n\n**Separate Model Selection:**\n`parrot.nvim` maintains **two independent model selections**:\n- **Chat Model**: Used for chat buffers. Change it from within a chat buffer using `:PrtModel`.\n- **Command Model**: Used for interactive commands (`PrtRewrite`, etc.). Change it from any non-chat buffer using `:PrtModel`.\n\nThis allows you to use a fast\u002Fcheap model for quick inline edits while keeping a more capable model for in-depth chat conversations.\n\n## Commands\n\nBelow are the available commands that can be configured as keybindings.\nThese commands are included in the default setup.\nAdditional useful commands are implemented through hooks (see below).\n\n### General\n| Command                   | Description                                   |\n| ------------------------- | ----------------------------------------------|\n| `PrtChatNew \u003Ctarget>`     | Open a new chat                               |\n| `PrtChatToggle \u003Ctarget>`  | Toggle chat (open last chat or new one)       |\n| `PrtChatPaste \u003Ctarget>`   | Paste visual selection into the latest chat   |\n| `PrtInfo`                 | Print plugin config                           |\n| `PrtContext \u003Ctarget>`     | Edits the local context file                  |\n| `PrtChatFinder`           | Fuzzy search chat files using fzf             |\n| `PrtChatDelete`           | Delete the current chat file                  |\n| `PrtChatRespond`          | Trigger chat respond (in chat file)           |\n| `PrtStop`                 | Interrupt any ongoing Parrot generation (works everywhere) |\n| `PrtProvider \u003Cprovider>`  | Switch the provider (empty arg triggers fzf)  |\n| `PrtModel \u003Cmodel>`        | Switch the interactive command model (empty arg triggers fzf). Note: Chat model must be changed from within the chat buffer. 
|\n| `PrtStatus`               | Prints current provider and model selection   |\n| `PrtReloadCache \u003Coptional provider>` | Reload cached models for all or specific provider |\n| `PrtCmd \u003Coptional prompt>` | Directly generate executable Neovim commands (requires explicit Return to execute) |\n|  __Interactive__          | |\n| `PrtRewrite \u003Coptional prompt>` | Rewrites the visual selection based on a provided prompt (direct input, input dialog or from collection) |\n| `PrtEdit`                 | Like `PrtRewrite` but you can change the last prompt |\n| `PrtAppend \u003Coptional prompt>` | Append text to the visual selection based on a provided prompt (direct input, input dialog or from collection) |\n| `PrtPrepend \u003Coptional prompt>` | Prepend text to the visual selection based on a provided prompt (direct input, input dialog or from collection) |\n| `PrtRetry`                | Repeats the last rewrite\u002Fappend\u002Fprepend       |\n|  __Example Hooks__        | |\n| `PrtImplement`            | Takes the visual selection as prompt to generate code |\n| `PrtAsk`                  | Ask the model a question                      |\n\nWith `\u003Ctarget>`, we indicate the command to open the chat within one of the following target locations (defaults to `toggle_target`):\n\n- `popup`: open a popup window which can be configured via the options provided below\n- `split`: open the chat in a horizontal split\n- `vsplit`: open the chat in a vertical split\n- `tabnew`: open the chat in a new tab\n\nAll chat commands (`PrtChatNew, PrtChatToggle`) and custom hooks support the\nvisual selection to appear in the chat when triggered.\nInteractive commands require the user to make use of the [template placeholders](#template-placeholders)\nto consider a visual selection within an API request.\n\n## Configuration\n\n### Options\n\n```lua\n{\n    -- The provider definitions include endpoints, API keys, default parameters,\n    -- and topic model arguments for 
chat summarization. You can use any name\n    -- for your providers and configure them with custom functions.\n    providers = {\n      openai = {\n        name = \"openai\",\n        endpoint = \"https:\u002F\u002Fapi.openai.com\u002Fv1\u002Fchat\u002Fcompletions\",\n        -- endpoint to query the available models online\n        model_endpoint = \"https:\u002F\u002Fapi.openai.com\u002Fv1\u002Fmodels\",\n        api_key = os.getenv(\"OPENAI_API_KEY\"),\n        -- OPTIONAL: Alternative methods to retrieve API key\n        -- Using GPG for decryption:\n        -- api_key = { \"gpg\", \"--decrypt\", vim.fn.expand(\"$HOME\") .. \"\u002Fmy_api_key.txt.gpg\" },\n        -- Using macOS Keychain:\n        -- api_key = { \"\u002Fusr\u002Fbin\u002Fsecurity\", \"find-generic-password\", \"-s my-api-key\", \"-w\" },\n        --- default model parameters used for chat and interactive commands\n        params = {\n          chat = { temperature = 1.1, top_p = 1 },\n          command = { temperature = 1.1, top_p = 1 },\n        },\n        -- topic model parameters to summarize chats\n        topic = {\n          model = \"gpt-4.1-nano\",\n          params = { max_completion_tokens = 64 },\n        },\n        --  a selection of models that parrot can remember across sessions\n        --  NOTE: This will be handled more intelligently in a future version\n        models = {\n          \"gpt-4.1\",\n          \"o4-mini\",\n          \"gpt-4.1-mini\",\n          \"gpt-4.1-nano\",\n        },\n      },\n      ...\n    }\n\n    -- default system prompts used for the chat sessions and the command routines\n    system_prompt = {\n      chat = ...,\n      command = ...\n    },\n\n    -- the prefix used for all commands\n    cmd_prefix = \"Prt\",\n\n    -- optional parameters for curl\n    curl_params = {},\n\n    -- The directory to store persisted state information like the\n    -- current provider and the selected models\n    state_dir = vim.fn.stdpath(\"data\"):gsub(\"\u002F$\", 
\"\") .. \"\u002Fparrot\u002Fpersisted\",\n\n    -- The directory to store the chats (searched with PrtChatFinder)\n    chat_dir = vim.fn.stdpath(\"data\"):gsub(\"\u002F$\", \"\") .. \"\u002Fparrot\u002Fchats\",\n\n    -- Chat user prompt prefix\n    chat_user_prefix = \"🗨:\",\n\n    -- llm prompt prefix\n    llm_prefix = \"🦜:\",\n\n    -- Explicitly confirm deletion of a chat file\n    chat_confirm_delete = true,\n\n    -- Local chat buffer shortcuts\n    chat_shortcut_respond = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>\u003CC-g>\" },\n    chat_shortcut_delete = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>d\" },\n    chat_shortcut_stop = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>s\" },\n    chat_shortcut_new = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>c\" },\n\n    -- Option to move the cursor to the end of the file after the response has finished\n    chat_free_cursor = false,\n\n    -- Default target for PrtChatToggle, PrtChatNew, PrtContext and the chats opened from the ChatFinder\n    -- values: popup \u002F split \u002F vsplit \u002F tabnew\n    toggle_target = \"vsplit\",\n\n    -- The interactive user input UI can be \"native\" for\n    -- vim.ui.input or \"buffer\" to query the input within a native nvim buffer\n    -- (see video demonstrations below)\n    user_input_ui = \"native\",\n\n    -- Popup window layout\n    -- border: \"single\", \"double\", \"rounded\", \"solid\", \"shadow\", \"none\"\n    style_popup_border = \"single\",\n\n    -- margins are number of characters or lines\n    style_popup_margin_bottom = 8,\n    style_popup_margin_left = 1,\n    style_popup_margin_right = 2,\n    style_popup_margin_top = 2,\n    style_popup_max_width = 160,\n\n    -- Prompt used for interactive LLM calls like PrtRewrite where {{llm}} is\n    -- a placeholder for the llm name\n    command_prompt_prefix_template = \"🤖 {{llm}} ~ \",\n\n    -- auto select command response 
(easier chaining of commands)\n    -- if false it also frees up the buffer cursor for further editing elsewhere\n    command_auto_select_response = true,\n\n    -- Time in hours until the model cache is refreshed\n    -- Set to 0 to deactivate model caching\n    model_cache_expiry_hours = 48,\n\n    -- fzf_lua options for PrtModel and PrtChatFinder when plugin is installed\n    fzf_lua_opts = {\n        [\"--ansi\"] = true,\n        [\"--sort\"] = \"\",\n        [\"--info\"] = \"inline\",\n        [\"--layout\"] = \"reverse\",\n        [\"--preview-window\"] = \"nohidden:right:75%\",\n    },\n\n    -- Enables the query spinner animation\n    enable_spinner = true,\n    -- Type of spinner animation to display while loading\n    -- Available options: \"dots\", \"line\", \"star\", \"bouncing_bar\", \"bouncing_ball\"\n    spinner_type = \"star\",\n    -- Show hints for context added through completion with @file, @buffer or @directory\n    show_context_hints = true,\n\n    -- Show diff preview before applying changes from rewrite\u002Fappend\u002Fprepend\n    enable_preview_mode = true,\n    preview_auto_apply = false, -- If true, applies changes automatically after preview timeout\n    preview_timeout = 10000, -- Time in ms before auto-apply (if enabled)\n    preview_border = \"rounded\",\n    preview_max_width = 120,\n    preview_max_height = 30,\n}\n```\n\n#### Demonstrations\n\n\u003Cdetails>\n\u003Csummary>With \u003Ccode>user_input_ui = \"native\"\u003C\u002Fcode>, use \u003Ccode>vim.ui.input\u003C\u002Fcode> as a slim input interface.\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fc2fe3bde-a35a-4f2a-957b-687e4f6f2e5c\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>With \u003Ccode>user_input_ui = \"buffer\"\u003C\u002Fcode>, your input is simply a buffer. 
All of the content is passed to the API when closed.\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F63e6e1c4-a2ab-4c60-9b43-332e4b581360\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>The spinner is a useful indicator for providers that take longer to respond.\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Febcd27cb-da00-4150-a0f8-1d2e1afa0acb\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n\n### Key Bindings\n\nThis plugin provides the following default key mappings:\n\n| Keymap       | Description                                                 |\n|--------------|-------------------------------------------------------------|\n| `\u003CC-g>c`     | Opens a new chat via `PrtChatNew`                           |\n| `\u003CC-g>\u003CC-g>` | Trigger the API to generate a response via `PrtChatRespond` |\n| `\u003CC-g>s`     | Stop any ongoing Parrot generation via `PrtStop`            |\n| `\u003CC-g>d`     | Delete the current chat file via `PrtChatDelete`            |\n\n### Provider Configuration Examples\n\nThe unified provider system allows you to configure any OpenAI-compatible API provider. 
Below are examples for popular providers:\n\n\u003Cdetails>\n\u003Csummary>Anthropic Claude\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  anthropic = {\n    name = \"anthropic\",\n    endpoint = \"https:\u002F\u002Fapi.anthropic.com\u002Fv1\u002Fmessages\",\n    model_endpoint = \"https:\u002F\u002Fapi.anthropic.com\u002Fv1\u002Fmodels\",\n    api_key = utils.get_api_key(\"anthropic-api-key\", \"ANTHROPIC_API_KEY\"),\n    params = {\n      chat = { max_tokens = 4096 },\n      command = { max_tokens = 4096 },\n    },\n    topic = {\n      model = \"claude-3-5-haiku-latest\",\n      params = { max_tokens = 32 },\n    },\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"x-api-key\"] = self.api_key,\n        [\"anthropic-version\"] = \"2023-06-01\",\n      }\n    end,\n    models = {\n      \"claude-sonnet-4-20250514\",\n      \"claude-3-7-sonnet-20250219\",\n      \"claude-3-5-sonnet-20241022\",\n      \"claude-3-5-haiku-20241022\",\n    },\n    preprocess_payload = function(payload)\n      for _, message in ipairs(payload.messages) do\n        message.content = message.content:gsub(\"^%s*(.-)%s*$\", \"%1\")\n      end\n      if payload.messages[1] and payload.messages[1].role == \"system\" then\n        -- remove the first message that serves as the system prompt as anthropic\n        -- expects the system prompt to be part of the API call body and not the messages\n        payload.system = payload.messages[1].content\n        table.remove(payload.messages, 1)\n      end\n      return payload\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Google Gemini\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  gemini = {\n    name = \"gemini\",\n    endpoint = function(self)\n      return \"https:\u002F\u002Fgenerativelanguage.googleapis.com\u002Fv1beta\u002Fmodels\u002F\"\n        .. self._model\n        .. 
\":streamGenerateContent?alt=sse\"\n    end,\n    model_endpoint = function(self)\n      return { \"https:\u002F\u002Fgenerativelanguage.googleapis.com\u002Fv1beta\u002Fmodels?key=\" .. self.api_key }\n    end,\n    api_key = os.getenv \"GEMINI_API_KEY\",\n    params = {\n      chat = { temperature = 1.1, topP = 1, topK = 10, maxOutputTokens = 8192 },\n      command = { temperature = 0.8, topP = 1, topK = 10, maxOutputTokens = 8192 },\n    },\n    topic = {\n      model = \"gemini-1.5-flash\",\n      params = { maxOutputTokens = 64 },\n    },\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"x-goog-api-key\"] = self.api_key,\n      }\n    end,\n    models = {\n      \"gemini-2.5-flash-preview-05-20\",\n      \"gemini-2.5-pro-preview-05-06\",\n      \"gemini-1.5-pro-latest\",\n      \"gemini-1.5-flash-latest\",\n      \"gemini-2.5-pro-exp-03-25\",\n      \"gemini-2.0-flash-lite\",\n      \"gemini-2.0-flash-thinking-exp\",\n      \"gemma-3-27b-it\",\n    },\n    preprocess_payload = function(payload)\n      local contents = {}\n      local system_instruction = nil\n      for _, message in ipairs(payload.messages) do\n        if message.role == \"system\" then\n          system_instruction = { parts = { { text = message.content } } }\n        else\n          local role = message.role == \"assistant\" and \"model\" or \"user\"\n          table.insert(\n            contents,\n            { role = role, parts = { { text = message.content:gsub(\"^%s*(.-)%s*$\", \"%1\") } } }\n          )\n        end\n      end\n      local gemini_payload = {\n        contents = contents,\n        generationConfig = {\n          temperature = payload.temperature,\n          topP = payload.topP or payload.top_p,\n          maxOutputTokens = payload.max_tokens or payload.maxOutputTokens,\n        },\n      }\n      if system_instruction then\n        gemini_payload.systemInstruction = system_instruction\n      end\n      
return gemini_payload\n    end,\n    process_stdout = function(response)\n      if not response or response == \"\" then\n        return nil\n      end\n      local success, decoded = pcall(vim.json.decode, response)\n      if\n        success\n        and decoded.candidates\n        and decoded.candidates[1]\n        and decoded.candidates[1].content\n        and decoded.candidates[1].content.parts\n        and decoded.candidates[1].content.parts[1]\n      then\n        return decoded.candidates[1].content.parts[1].text\n      end\n      return nil\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>xAI\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  xai = {\n    name = \"xai\",\n    endpoint = \"https:\u002F\u002Fapi.x.ai\u002Fv1\u002Fchat\u002Fcompletions\",\n    model_endpoint = \"https:\u002F\u002Fapi.x.ai\u002Fv1\u002Flanguage-models\",\n    api_key = os.getenv \"XAI_API_KEY\",\n    params = {\n      chat = { temperature = 1.1, top_p = 1 },\n      command = { temperature = 1.1, top_p = 1 },\n    },\n    topic = {\n      model = \"grok-3-mini-beta\",\n      params = { max_completion_tokens = 64 },\n    },\n    models = {\n      \"grok-3-beta\",\n      \"grok-3-mini-beta\",\n    },\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Ollama\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  ollama = {\n    name = \"ollama\",\n    endpoint = \"http:\u002F\u002Flocalhost:11434\u002Fapi\u002Fchat\",\n    api_key = \"\", -- not required for local Ollama\n    params = {\n      chat = { temperature = 1.5, top_p = 1, num_ctx = 8192, min_p = 0.05 },\n      command = { temperature = 1.5, top_p = 1, num_ctx = 8192, min_p = 0.05 },\n    },\n    topic_prompt = [[\n    Summarize the chat above and only provide a short headline of 2 to 3\n    words without any opening phrase like \"Sure, here is the summary\",\n    \"Sure! 
Here's a short headline summarizing the chat\" or anything similar.\n    ]],\n    topic = {\n      model = \"llama3.2\",\n      params = { max_tokens = 32 },\n    },\n    headers = {\n      [\"Content-Type\"] = \"application\u002Fjson\",\n    },\n    models = {\n      \"codestral\",\n      \"llama3.2\",\n      \"gemma3\",\n    },\n    resolve_api_key = function()\n      return true\n    end,\n    process_stdout = function(response)\n      if response:match \"message\" and response:match \"content\" then\n        local ok, data = pcall(vim.json.decode, response)\n        if ok and data.message and data.message.content then\n          return data.message.content\n        end\n      end\n    end,\n    get_available_models = function(self)\n      local Job = require \"plenary.job\"\n      local url = self.endpoint:gsub(\"chat\", \"\")\n      local logger = require \"parrot.logger\"\n      local job = Job:new({\n        command = \"curl\",\n        args = { \"-H\", \"Content-Type: application\u002Fjson\", url .. \"tags\" },\n      }):sync()\n      local parsed_response = require(\"parrot.utils\").parse_raw_response(job)\n      self:process_onexit(parsed_response)\n      if parsed_response == \"\" then\n        logger.debug(\"Ollama server not running on \" .. url)\n        return {}\n      end\n\n      local success, parsed_data = pcall(vim.json.decode, parsed_response)\n      if not success then\n        logger.error(\"Ollama - Error parsing JSON: \" .. vim.inspect(parsed_data))\n        return {}\n      end\n\n      if not parsed_data.models then\n        logger.error \"Ollama - No models found. 
Please use 'ollama pull' to download one.\"\n        return {}\n      end\n\n      local names = {}\n      for _, model in ipairs(parsed_data.models) do\n        table.insert(names, model.name)\n      end\n\n      return names\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Perplexity\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  perplexity = {\n    name = \"perplexity\",\n    api_key = os.getenv(\"PERPLEXITY_API_KEY\"),\n    endpoint = \"https:\u002F\u002Fapi.perplexity.ai\u002Fchat\u002Fcompletions\",\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"Accept\"] = \"application\u002Fjson\",\n        [\"Authorization\"] = \"Bearer \" .. self.api_key,\n      }\n    end,\n    topic = {\n      model = \"r1-1776\",\n      params = {\n        max_tokens = 64,\n      },\n    },\n    models = {\n      \"sonar\",\n      \"sonar-pro\",\n      \"sonar-deep-research\",\n      \"sonar-reasoning\",\n      \"sonar-reasoning-pro\",\n      \"r1-1776\",\n    },\n  }\n}\n```\n\u003C\u002Fdetails>\n\n### Adding a new command\n\n#### Ask a single-turn question and receive the answer in a popup window\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    hooks = {\n      Ask = function(parrot, params)\n        local template = [[\n          In light of your existing knowledge base, please generate a response that\n          is succinct and directly addresses the question posed. Prioritize accuracy\n          and relevance in your answer, drawing upon the most recent information\n          available to you. Aim to deliver your response in a concise manner,\n          focusing on the essence of the inquiry.\n          Question: {{command}}\n        ]]\n        local model_obj = parrot.get_model(\"command\")\n        parrot.logger.info(\"Asking model: \" .. 
model_obj.name)\n        parrot.Prompt(params, parrot.ui.Target.popup, model_obj, \"🤖 Ask ~ \", template)\n      end,\n    }\n    -- ...\n}\n```\n\n#### Start a chat with a predefined chat prompt to check your spelling\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    hooks = {\n      SpellCheck = function(prt, params)\n        local chat_prompt = [[\n          Your task is to take the text provided and rewrite it into a clear,\n          grammatically correct version while preserving the original meaning\n          as closely as possible. Correct any spelling mistakes, punctuation\n          errors, verb tense issues, word choice problems, and other\n          grammatical mistakes.\n        ]]\n        prt.ChatNew(params, chat_prompt)\n      end,\n    }\n    -- ...\n}\n```\n\nRefer to my [personal lazy.nvim setup](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fdotfiles\u002Fblob\u002Fmaster\u002Fnvim\u002Flua\u002Fplugins\u002Fparrot.lua) or\nthose of [other users](https:\u002F\u002Fgithub.com\u002Fsearch?utf8=%E2%9C%93&q=frankroeder%2Fparrot.nvim+language%3ALua&type=code&l=Lua) for further hooks and key bindings.\n\n### Prompt Collection\n\nIf you're repeatedly typing the same prompts into the input fields when using\n`PrtRewrite`, `PrtAppend`, or `PrtPrepend`, a more lightweight alternative to\nuser commands (also known as hooks) is to define prompts as follows:\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    prompts = {\n        [\"Spell\"] = \"I want you to proofread the provided text and fix the errors.\", -- e.g., :'\u003C,'>PrtRewrite Spell\n        [\"Comment\"] = \"Provide a comment that explains what the snippet is doing.\", -- e.g., :'\u003C,'>PrtPrepend Comment\n        [\"Complete\"] = \"Continue the implementation of the provided snippet in the file {{filename}}.\", -- e.g., :'\u003C,'>PrtAppend Complete\n    }\n    -- ...\n}\n```\nThey will appear as arguments for the aforementioned interactive commands and\ncan also be used with the 
[template placeholders](#template-placeholders).\n\n### Template Placeholders\n\nUsers can utilize the following placeholders in their hook and system templates to inject\nadditional context:\n\n| Placeholder             | Content                              |\n|-------------------------|--------------------------------------|\n| `{{selection}}`         | Current visual selection             |\n| `{{filetype}}`          | Filetype of the current buffer       |\n| `{{filename}}`          | Name of the current file             |\n| `{{filepath}}`          | Full path of the current file        |\n| `{{filecontent}}`       | Full content of the current buffer   |\n| `{{multifilecontent}}`  | Full content of all open buffers     |\n\nBelow is an example of how to use these placeholders in a completion hook, which\nreceives the full file context and the selected code snippet as input.\n\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    hooks = {\n      CompleteFullContext = function(prt, params)\n        local template = [[\n        I have the following code from {{filename}}:\n\n        ```{{filetype}}\n        {{filecontent}}\n        ```\n\n        Please look at the following section specifically:\n        ```{{filetype}}\n        {{selection}}\n        ```\n\n        Please finish the code above carefully and logically.\n        Respond just with the snippet of code that should be inserted.\n        ]]\n        local model_obj = prt.get_model(\"command\")\n        prt.Prompt(params, prt.ui.Target.append, model_obj, nil, template)\n      end,\n    }\n    -- ...\n}\n```\n\nThe placeholders `{{filetype}}` and `{{filecontent}}` can also be used in the `chat_prompt` when\ncreating custom hooks calling `prt.ChatNew(params, chat_prompt)` to directly inject the whole file content.\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    hooks = {\n      CodeConsultant = function(prt, params)\n        local chat_prompt = [[\n          Your task is to analyze the provided {{filetype}} code and suggest\n          improvements to optimize its 
performance. Identify areas where the\n          code can be made more efficient, faster, or less resource-intensive.\n          Provide specific suggestions for optimization, along with explanations\n          of how these changes can enhance the code's performance. The optimized\n          code should maintain the same functionality as the original code while\n          demonstrating improved efficiency.\n\n          Here is the code\n          ```{{filetype}}\n          {{filecontent}}\n          ```\n        ]]\n        prt.ChatNew(params, chat_prompt)\n      end,\n    }\n    -- ...\n}\n```\n\n## Completion\n\nInstead of using the [template placeholders](#template-placeholders),\n`parrot.nvim` supports inline completion via [nvim-cmp](https:\u002F\u002Fgithub.com\u002Fhrsh7th\u002Fnvim-cmp)\nand [blink.cmp](https:\u002F\u002Fgithub.com\u002FSaghen\u002Fblink.cmp\u002F) for additional contexts:\n\n- `@buffer:foo.txt` - Includes the content of the open buffer `foo.txt`\n- `@file:test.lua` - Includes the content of the file `test.lua`\n- `@directory:src\u002F` - Includes all file contents from the directory `src\u002F`\n\n> Hint: The option `show_context_hints` allows you to transparently see notifications about the\nactual file contents considered by the request. 
The completion keywords (e.g., `@file`) need to be placed\non a **new line**!\n\n### Setup nvim-cmp\n\nTo enable `parrot.nvim` completions, add the source to your `nvim-cmp` configuration:\n\n```lua\n...\nsources = cmp.config.sources({\n  { name = \"parrot\" },\n}),\n...\n```\n\n### Setup blink.cmp\n\nFor `blink.cmp` you need to add `\"parrot\"` to the default sources and configure\nthe provider as follows:\n```lua\n...\nparrot = {\n    module = \"parrot.completion.blink\",\n    name = \"parrot\",\n    score_offset = 20,\n    opts = {\n        show_hidden_files = false,\n        max_items = 50,\n    }\n},\n...\n```\n\n\n## Statusline Support\n\nThe current chat or command model can be shown using your favorite statusline plugin.\nBelow, we provide an example for [lualine](https:\u002F\u002Fgithub.com\u002Fnvim-lualine\u002Flualine.nvim):\n\n```lua\n  -- define function and formatting of the information\n  local function parrot_status()\n    local status_info = require(\"parrot.config\").get_status_info()\n    local status = \"\"\n    if status_info.is_chat then\n      status = status_info.prov.chat.name\n    else\n      status = status_info.prov.command.name\n    end\n    return string.format(\"%s(%s)\", status, status_info.model)\n  end\n\n  -- add the component to a lualine section\n  require('lualine').setup {\n    sections = {\n      lualine_a = { parrot_status },\n    },\n  }\n\n```\n\n## Adding a custom provider\nIf the default providers do not cover your use case, you may define as many additional custom\nproviders as you need. This allows you to customize various aspects such as\nendpoints, available models, default parameters, headers, and functions for\nprocessing the LLM responses.\nPlease note that configuring providers in this manner is intended for advanced\nusers. 
I encourage you to open an issue or a discussion if you require assistance\nor have suggestions for improving provider support.\n```lua\n  providers = {\n    my_custom_provider = {\n      name = \"my_custom_provider\",\n      api_key = os.getenv(\"MY_API_KEY\"),\n      endpoint = \"https:\u002F\u002Fapi.example.com\u002Fv1\u002Fchat\u002Fcompletions\",\n      models = { \"model-1\", \"model-2\" },\n      -- Provider-specific curl parameters (optional)\n      curl_params = { \"--insecure\", \"--max-time\", \"30\", \"--proxy\", \"http:\u002F\u002Fproxy:8080\" },\n      -- Custom headers function\n      headers = function(self)\n        return {\n          [\"Content-Type\"] = \"application\u002Fjson\",\n          [\"Authorization\"] = \"Bearer \" .. self.api_key,\n          [\"X-Custom-Header\"] = \"custom-value\",\n        }\n      end,\n      -- Custom payload preprocessing\n      preprocess_payload = function(payload)\n        -- Modify payload for your API format\n        return payload\n      end,\n      -- Custom response processing\n      process_stdout = function(response)\n        -- Parse streaming response from your API\n        local success, decoded = pcall(vim.json.decode, response)\n        if success and decoded.content then\n          return decoded.content\n        end\n      end,\n    },\n  }\n```\n\n## Cancellation\n\nYou can stop any ongoing Parrot generation at any time using one of the following methods:\n\n### Methods\n\n1. **Keybinding**: `\u003CC-g>s` (configurable via `chat_shortcut_stop`)\n2. 
**Command**: `:PrtStop` (works everywhere)\n\n### Behavior\n\nWhen you cancel a generation:\n\n- **Immediate Termination**: The API request is stopped immediately\n- **Preserves Generated Text**: The text generated so far remains in the buffer\n- **Visual Feedback**: You receive a notification confirming the cancellation\n- **Preview Mode**: If cancelled during streaming, the preview won't be shown\n- **Multiple Jobs**: If multiple generations are running, all are stopped\n\n### Autocommand Event\n\nA `User PrtCancelled` event is fired when generation is cancelled, allowing you to create custom hooks:\n\n```lua\nvim.api.nvim_create_autocmd(\"User\", {\n  pattern = \"PrtCancelled\",\n  callback = function()\n    -- Your custom logic here\n    print(\"Parrot generation was cancelled\")\n  end,\n})\n```\n\n### Advanced Usage\n\nFor buffer-specific cancellation in custom code:\n\n```lua\n-- Stop only jobs for current buffer\nlocal chat_handler = require(\"parrot\").chat_handler\nchat_handler:stop({ buffer = vim.api.nvim_get_current_buf() })\n\n-- Stop without notification\nchat_handler:stop({ notify = false })\n```\n\n## Bonus\n\nAccess parrot.nvim directly from your terminal:\n\n```bash\ncommand nvim -c \"PrtChatNew\"\n```\n\nAlso works by piping content directly into the chat:\n\n```bash\nls -l | command nvim - -c \"normal ggVGy\" -c \":PrtChatNew\" -c \"normal p\"\n```\n\n## Roadmap\n\n- Add status line integration\u002Fnotifications for a summary of tokens used or money spent\n- Improve the documentation\n- Create a tutorial video\n- Reduce overall code complexity and improve robustness\n\n## FAQ\n\n- I am encountering errors related to the state.\n    > If the state is corrupted, simply delete the file `~\u002F.local\u002Fshare\u002Fnvim\u002Fparrot\u002Fpersisted\u002Fstate.json`.\n- The completion feature is not functioning, and I am receiving errors.\n    > Ensure that you have an adequate amount of API credits and examine the log file 
`~\u002F.local\u002Fstate\u002Fnvim\u002Fparrot.nvim.log` for any errors.\n- How do model selections work for chat vs. interactive commands?\n    > Model selection is separate for chat and interactive commands. To change the chat model, you must be inside a chat window started with `PrtChatNew`. Switching the model outside of a chat window only affects the interactive command model (e.g., `PrtRewrite`, `PrtAppend`). The selections are persistent after being set.\n- I have discovered a bug, have a feature suggestion, or possess a general idea to enhance this project.\n    > Everyone is invited to contribute to this project! If you have any suggestions, ideas, or bug reports, please feel free to submit an issue.\n\n## Related Projects\n\n- [parrot.nvim](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim) is a fork of an earlier version of [robitx\u002Fgp.nvim](https:\u002F\u002Fgithub.com\u002FRobitx\u002Fgp.nvim), branching off the commit `607f94d361f36b8eabb148d95993604fdd74d901` in January 2024. Since then, a significant portion of the original code has been removed or rewritten, and this effort will continue until `parrot.nvim` evolves into its own independent version. 
The original `MIT` license has been retained and will be maintained.\n- [huynle\u002Fogpt.nvim](https:\u002F\u002Fgithub.com\u002Fhuynle\u002Fogpt.nvim)\n- The idea for `PrtCmd` was inspired by [exit.nvim](https:\u002F\u002Fgithub.com\u002F3v0k4\u002Fexit.nvim).\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrankroeder_parrot.nvim_readme_28152ac59f10.png)](https:\u002F\u002Fstar-history.com\u002F#frankroeder\u002Fparrot.nvim&Date)\n","\u003Cdiv align=\"center\">\n\n# parrot.nvim 🦜\n\n\n这是 [parrot.nvim](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim)，一款终极的 [随机鹦鹉模型](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FStochastic_parrot)，旨在支持你在 Neovim 中进行文本编辑。\n\n[功能](#features) • [演示](#demo) • [快速入门](#getting-started) • [命令](#commands) • [配置](#configuration) • [路线图](#roadmap) • [常见问题](#faq)\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrankroeder_parrot.nvim_readme_49efdf5e9782.png\" alt=\"parrot.nvim logo\" width=\"50%\">\n\u003C\u002Fdiv>\n\n## 功能\n\n[parrot.nvim](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim) 提供开箱即用的无缝体验，将当前的 LLM API 与你的 Neovim 工作流紧密集成，专注于文本生成。精选的核心功能包括 **按需文本补全和编辑**，以及在原生 **Neovim 缓冲区** 中进行的 **类聊天会话**。\n\n此插件面向那些真正知道自己在做什么，并且重视 **隐私与透明度** 的用户。用户始终完全掌控将发送到 LLM API 端点的内容，因此该插件完全 **排除** 了诸如 [codex](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex)、[claude-code](https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fclaude-code) 和 [gemini-cli](https:\u002F\u002Fgithub.com\u002Fgoogle-gemini\u002Fgemini-cli) 等工具所提供的代理概念。\n\n代码的很大一部分基于 Tibor Schmidt 华丽作品 [gp.nvim](https:\u002F\u002Fgithub.com\u002FRobitx\u002Fgp.nvim) 的早期分支。\n\n- 持久对话以 Markdown 文件形式存储在 Neovim 的标准路径或用户定义的位置\n- 基于用户指令和带有预设系统提示的聊天的自定义钩子，用于内联文本编辑\n- 统一的提供者系统，支持任何兼容 OpenAI 的 API：\n    + [OpenAI API](https:\u002F\u002Fplatform.openai.com\u002F)\n    + [Anthropic API](https:\u002F\u002Fwww.anthropic.com\u002Fapi)\n    + [Google Gemini 
API](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs)\n    + [xAI API](https:\u002F\u002Fconsole.x.ai)\n    + 通过 [ollama](https:\u002F\u002Fgithub.com\u002Follama\u002Follama) 进行本地和离线服务\n    + 任何可配置函数的自定义 OpenAI 兼容端点；还支持 [Perplexity.ai API](https:\u002F\u002Fblog.perplexity.ai\u002Fblog\u002Fintroducing-pplx-api)、[Mistral API](https:\u002F\u002Fdocs.mistral.ai\u002Fapi\u002F)、[Groq API](https:\u002F\u002Fconsole.groq.com)、[DeepSeek API](https:\u002F\u002Fplatform.deepseek.com)、[GitHub Models](https:\u002F\u002Fgithub.com\u002Fmarketplace\u002Fmodels) 和 [NVIDIA API](https:\u002F\u002Fdocs.api.nvidia.com)\n- 来自不同来源的灵活 API 凭证管理：\n    + 环境变量\n    + Bash 命令\n    + 密码管理器 CLI（惰性求值）\n- 使用 `.parrot.md` 文件通过 `PrtContext` 命令实现仓库特定的指令\n- **无** 自动补全，**无** 在后台隐藏请求来分析你的文件\n\n\n## 演示\n\n无缝切换提供商和模型。\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F0df0348f-85c0-4a2d-ba1f-ede2738c6d02\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n---\n\n根据注释触发代码补全。\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F197f99ac-9854-4fe9-bddb-394c1b64f6b6\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n---\n\n让鹦鹉帮你修复 bug。\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd3a0b261-a9dd-45e6-b508-dc5280594b06\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n---\n\n\u003Cdetails>\n\u003Csummary>使用 `PrtRewrite` 重写可视选区。\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fc3d38702-7558-4e9e-96a3-c43312a543d0\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n---\n\n\u003Cdetails>\n\u003Csummary>使用 `PrtAppend` 将可视选区作为上下文追加到代码中。\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F80af02fa-cd88-4023-8a55-f2d3c0a2f28e\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n---\n\n\u003Cdetails>\n\u003Csummary>使用 
`PrtPrepend` 向函数添加注释。\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F9a6bfe66-4bc7-4b63-8694-67bf9c23c064\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n---\n\n\u003Cdetails>\n\u003Csummary>使用 `PrtRetry` 重试你最近的一次重写、追加或插入操作。\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F03442f34-687b-482e-b7f1-7812f70739cc\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n## 快速入门\n\n### 依赖项\n\n此插件需要最新版本的 Neovim，并依赖于一组精心挑选的成熟插件。\n\n- [`neovim 0.10+`](https:\u002F\u002Fgithub.com\u002Fneovim\u002Fneovim\u002Freleases)\n- [`plenary`](https:\u002F\u002Fgithub.com\u002Fnvim-lua\u002Fplenary.nvim)\n- [`ripgrep`](https:\u002F\u002Fgithub.com\u002FBurntSushi\u002Fripgrep)（可选）\n- [`fzf`](https:\u002F\u002Fgithub.com\u002Fjunegunn\u002Ffzf)（可选，需要 ripgrep）\n- [`telescope`](https:\u002F\u002Fgithub.com\u002Fnvim-telescope\u002Ftelescope.nvim)（可选）\n\n### 安装\n\n\u003Cdetails>\n  \u003Csummary>lazy.nvim\u003C\u002Fsummary>\n\n\n```lua\n{\n  \"frankroeder\u002Fparrot.nvim\",\n  dependencies = { \"ibhagwan\u002Ffzf-lua\", \"nvim-lua\u002Fplenary.nvim\" },\n  opts = {}\n}\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n  \u003Csummary>Packer\u003C\u002Fsummary>\n\n```lua\nrequire(\"packer\").startup(function()\n  use({\n    \"frankroeder\u002Fparrot.nvim\",\n    requires = { 'ibhagwan\u002Ffzf-lua', 'nvim-lua\u002Fplenary.nvim'},\n    config = function()\n      require(\"parrot\").setup()\n    end,\n  })\nend)\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n  \u003Csummary>Neovim 原生包\u003C\u002Fsummary>\n\n```sh\ngit clone --depth=1 https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim.git \\\n  \"${XDG_DATA_HOME:-$HOME\u002F.local\u002Fshare}\"\u002Fnvim\u002Fsite\u002Fpack\u002Fparrot\u002Fstart\u002Fparrot.nvim\n```\n\n\u003C\u002Fdetails>\n\n### 设置\n\n最低要求是至少设置一个提供者，例如下面提供的那个，或者从 
[提供者配置示例](#provider-configuration-examples) 中选择一个。\n\n```lua\n{\n  \"frankroeder\u002Fparrot.nvim\",\n  dependencies = { 'ibhagwan\u002Ffzf-lua', 'nvim-lua\u002Fplenary.nvim' },\n  -- 可选地包含 \"folke\u002Fnoice.nvim\" 或 \"rcarriga\u002Fnvim-notify\" 以获得精美的通知\n  config = function()\n    require(\"parrot\").setup {\n      -- 必须显式设置提供者才能使其可用。\n      providers = {\n        openai = {\n          name = \"openai\",\n          api_key = os.getenv \"OPENAI_API_KEY\",\n          endpoint = \"https:\u002F\u002Fapi.openai.com\u002Fv1\u002Fchat\u002Fcompletions\",\n          params = {\n            chat = { temperature = 1.1, top_p = 1 },\n            command = { temperature = 1.1, top_p = 1 },\n          },\n          topic = {\n            model = \"gpt-4.1-nano\",\n            params = { max_completion_tokens = 64 },\n          },\n          models ={\n            \"gpt-4o\",\n            \"o4-mini\",\n            \"gpt-4.1-nano\",\n          }\n        },\n      },\n    }\n  end,\n}\n```\n\n## 使用\n\n### 聊天基础\n\n`parrot.nvim` 中的聊天本质上是标准的 Markdown 缓冲区。\n\n**工作原理：**\n1. **打开聊天**：使用 `:PrtChatNew` 打开一个新的聊天缓冲区（或使用 `:PrtChatToggle` 切换最近的一个）。\n2. **输入提示**：在用户前缀 `🗨:` 后直接在缓冲区中写下你的问题或指令。\n3. **触发 LLM**：按下触发键映射（默认为插入模式下的 `\u003CC-g>\u003CC-g>`）或使用 `:PrtChatRespond` 命令。\n4. **接收响应**：LLM 会将响应流式传输到光标所在位置。\n5. **停止生成**：随时按下 `\u003CC-g>s` 可以停止生成。\n\n**关键概念：**\n- **上下文**：整个缓冲区内容都会作为上下文发送（除非使用了隐藏注释）。\n- **系统提示**：你可以为每个聊天或全局设置独特的系统提示。\n- **持久化**：聊天会以 `.md` 文件的形式保存在你配置的目录中。\n\n### 命令模式（交互式命令）\n\n命令模式允许你在不离开当前缓冲区的情况下，直接对代码与 LLM 进行交互。\n\n**可用命令：**\n- `:PrtRewrite` – 根据你的提示重写可视选区。\n- `:PrtAppend` – 在选区后追加生成的文本。\n- `:PrtPrepend` – 在选区前插入生成的文本。\n- `:PrtRetry` – 重试上次的重写\u002F追加\u002F插入操作。\n- `:PrtEdit` – 编辑并重新运行上一条命令，同时修改提示。\n\n**工作流程：**\n1. 使用可视化模式选择需要修改的代码。\n2. 运行上述命令之一（例如 `:PrtRewrite fix the bug`）。\n3. 
LLM 处理你的选区，并将结果流式传输过来，或者显示一个差异视图。\n\n**独立的模型选择：**\n`parrot.nvim` 维持 **两套独立的模型选择**：\n- **聊天模型**：用于聊天缓冲区。可在聊天缓冲区内部使用 `:PrtModel` 进行切换。\n- **命令模型**：用于交互式命令（如 `PrtRewrite` 等）。可在任何非聊天缓冲区中使用 `:PrtModel` 进行切换。\n\n这使得你可以为快速的内联编辑使用速度快、成本低的模型，而为深入的对话交流则使用功能更强大的模型。\n\n## 命令\n\n以下是可配置为快捷键的可用命令。这些命令已包含在默认配置中。此外，还有一些有用的命令通过钩子实现（见下文）。\n\n### 通用\n| 命令                   | 描述                                   |\n| ------------------------- | ----------------------------------------------|\n| `PrtChatNew \u003Ctarget>`     | 打开一个新的聊天                               |\n| `PrtChatToggle \u003Ctarget>`  | 切换聊天（打开最近一次聊天或新建一个）       |\n| `PrtChatPaste \u003Ctarget>`   | 将可视选区粘贴到最近的聊天中                 |\n| `PrtInfo`                 | 打印插件配置                                 |\n| `PrtContext \u003Ctarget>`     | 编辑本地上下文文件                           |\n| `PrtChatFinder`           | 使用 fzf 模糊搜索聊天文件                    |\n| `PrtChatDelete`           | 删除当前的聊天文件                           |\n| `PrtChatRespond`          | 触发聊天响应（在聊天文件中）                 |\n| `PrtStop`                 | 中断正在进行的 Parrot 生成任务（在任何地方都有效） |\n| `PrtProvider \u003Cprovider>`  | 切换提供商（参数为空时会触发 fzf）           |\n| `PrtModel \u003Cmodel>`        | 切换交互式命令使用的模型（参数为空时会触发 fzf）。注意：聊天模型必须在聊天缓冲区内部更改。 |\n| `PrtStatus`               | 打印当前的提供商和模型选择                   |\n| `PrtReloadCache \u003Coptional provider>` | 为所有或特定提供商重新加载缓存的模型 |\n| `PrtCmd \u003Coptional prompt>` | 直接生成可执行的 Neovim 命令（需明确确认后执行） |\n|  __交互式__              | |\n| `PrtRewrite \u003Coptional prompt>` | 根据提供的提示重写可视选区（直接输入、输入对话框或从集合中选择） |\n| `PrtEdit`                 | 类似于 `PrtRewrite`，但可以修改上次的提示     |\n| `PrtAppend \u003Coptional prompt>` | 根据提供的提示将文本追加到可视选区（直接输入、输入对话框或从集合中选择） |\n| `PrtPrepend \u003Coptional prompt>` | 根据提供的提示将文本插入到可视选区之前（直接输入、输入对话框或从集合中选择） |\n| `PrtRetry`                | 重复上次的重写\u002F追加\u002F插入操作                 |\n|  __示例钩子__            | |\n| `PrtImplement`            | 将可视选区作为提示来生成代码                 |\n| `PrtAsk`                  | 向模型提问               
                      |\n\n其中 `\u003Ctarget>` 表示在以下目标位置之一打开聊天的命令（默认为 `toggle_target`）：\n\n- `popup`：打开一个弹出窗口，可通过下方提供的选项进行配置。\n- `split`：在水平分割窗口中打开聊天。\n- `vsplit`：在垂直分割窗口中打开聊天。\n- `tabnew`：在新标签页中打开聊天。\n\n所有聊天相关命令（`PrtChatNew`, `PrtChatToggle`）以及自定义钩子都支持在触发时将可视选区显示在聊天中。而交互式命令则要求用户利用 [模板占位符](#template-placeholders)，以便在 API 请求中考虑可视选区。\n\n## 配置\n\n### 选项\n\n```lua\n{\n    -- 提供商定义包括端点、API密钥、默认参数，\n    -- 以及用于聊天摘要的主题模型参数。你可以为你的提供商使用任意名称，\n    -- 并通过自定义函数进行配置。\n    providers = {\n      openai = {\n        name = \"openai\",\n        endpoint = \"https:\u002F\u002Fapi.openai.com\u002Fv1\u002Fchat\u002Fcompletions\",\n        -- 用于在线查询可用模型的端点\n        model_endpoint = \"https:\u002F\u002Fapi.openai.com\u002Fv1\u002Fmodels\",\n        api_key = os.getenv(\"OPENAI_API_KEY\"),\n        -- 可选：获取API密钥的替代方法\n        -- 使用GPG解密：\n        -- api_key = { \"gpg\", \"--decrypt\", vim.fn.expand(\"$HOME\") .. \"\u002Fmy_api_key.txt.gpg\" },\n        -- 使用macOS Keychain：\n        -- api_key = { \"\u002Fusr\u002Fbin\u002Fsecurity\", \"find-generic-password\", \"-s my-api-key\", \"-w\" },\n        --- 用于聊天和交互式命令的默认模型参数\n        params = {\n          chat = { temperature = 1.1, top_p = 1 },\n          command = { temperature = 1.1, top_p = 1 },\n        },\n        -- 用于总结聊天的主题模型参数\n        topic = {\n          model = \"gpt-4.1-nano\",\n          params = { max_completion_tokens = 64 },\n        },\n        -- Parrot可以在会话间记住的一组模型\n        -- 注意：在未来的版本中，这将得到更智能的处理\n        models = {\n          \"gpt-4.1\",\n          \"o4-mini\",\n          \"gpt-4.1-mini\",\n          \"gpt-4.1-nano\",\n        },\n      },\n      ...\n    }\n\n    -- 聊天会话和命令流程中使用的默认系统提示\n    system_prompt = {\n      chat = ...,\n      command = ...\n    },\n\n    -- 所有命令使用的前缀\n    cmd_prefix = \"Prt\",\n\n    -- curl 的可选参数\n    curl_params = {},\n\n    -- 存储持久化状态信息的目录，例如\n    -- 当前提供商和所选模型\n    state_dir = vim.fn.stdpath(\"data\"):gsub(\"\u002F$\", \"\") .. 
\"\u002Fparrot\u002Fpersisted\",\n\n    -- 存储聊天记录的目录（可通过 PrtChatFinder 搜索）\n    chat_dir = vim.fn.stdpath(\"data\"):gsub(\"\u002F$\", \"\") .. \"\u002Fparrot\u002Fchats\",\n\n    -- 聊天用户提示前缀\n    chat_user_prefix = \"🗨:\",\n\n    -- LLM 提示前缀\n    llm_prefix = \"🦜:\",\n\n    -- 明确确认删除聊天文件\n    chat_confirm_delete = true,\n\n    -- 本地聊天缓冲区快捷键\n    chat_shortcut_respond = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>\u003CC-g>\" },\n    chat_shortcut_delete = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>d\" },\n    chat_shortcut_stop = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>s\" },\n    chat_shortcut_new = { modes = { \"n\", \"i\", \"v\", \"x\" }, shortcut = \"\u003CC-g>c\" },\n\n    -- 完成回复后将光标移动到文件末尾的选项\n    chat_free_cursor = false,\n\n    -- PrtChatToggle、PrtChatNew、PrtContext 以及从 ChatFinder 打开的聊天的默认目标\n    -- 取值：popup \u002F split \u002F vsplit \u002F tabnew\n    toggle_target = \"vsplit\",\n\n    -- 交互式用户输入的方式可以是“原生”的\n    -- 对于 vim.ui.input，也可以是“缓冲区”，即在原生 nvim 缓冲区中查询输入\n    -- （见下方视频演示）\n    user_input_ui = \"native\",\n\n    -- 弹出窗口布局\n    -- 边框样式：single、double、rounded、solid、shadow、none\n    style_popup_border = \"single\",\n\n    -- 边距以字符或行数表示\n    style_popup_margin_bottom = 8,\n    style_popup_margin_left = 1,\n    style_popup_margin_right = 2,\n    style_popup_margin_top = 2,\n    style_popup_max_width = 160\n\n    -- 用于交互式 LLM 调用的提示模板，例如 PrtRewrite，其中 {{llm}} 是\n    -- LLM 名称的占位符\n    command_prompt_prefix_template = \"🤖 {{llm}} ~ \",\n\n    -- 自动选择命令响应（便于命令链式调用）\n    -- 如果设置为 false，则会释放缓冲区光标以便在其他地方继续编辑\n    command_auto_select_response = true,\n\n    -- 模型缓存刷新的时间间隔（小时）\n    -- 设置为 0 可禁用模型缓存\n    model_cache_expiry_hours = 48,\n\n    -- 安装插件时，PrtModel 和 PrtChatFinder 使用的 fzf_lua 选项\n    fzf_lua_opts = {\n        [\"--ansi\"] = true,\n        [\"--sort\"] = \"\",\n        [\"--info\"] = \"inline\",\n        [\"--layout\"] = \"reverse\",\n        [\"--preview-window\"] = 
\"nohidden:right:75%\",\n    },\n\n    -- 启用查询加载动画\n    enable_spinner = true,\n    -- 加载时显示的动画类型\n    -- 可选：dots、line、star、bouncing_bar、bouncing_ball\n    spinner_type = \"star\",\n    -- 显示通过 @file、@buffer 或 @directory 补全添加的上下文提示\n    show_context_hints = true\n\n    -- 在应用重写\u002F追加\u002F前置更改之前显示差异预览\n    enable_preview_mode = true,\n    preview_auto_apply = false, -- 如果为真，预览超时后会自动应用更改\n    preview_timeout = 10000, -- 自动应用前的等待时间（毫秒）\n    preview_border = \"rounded\",\n    preview_max_width = 120,\n    preview_max_height = 30,\n}\n```\n\n#### 演示\n\n\u003Cdetails>\n\u003Csummary>当 \u003Ccode>user_input_ui = \"native\"\u003C\u002Fcode> 时，使用 \u003Ccode>vim.ui.input\u003C\u002Fcode> 作为简洁的输入界面。\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fc2fe3bde-a35a-4f2a-957b-687e4f6f2e5c\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>当 \u003Ccode>user_input_ui = \"buffer\"\u003C\u002Fcode> 时，你的输入就是一个普通的缓冲区。关闭时，所有内容都会传递给 API。\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F63e6e1c4-a2ab-4c60-9b43-332e4b581360\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>加载动画对于响应时间较长的提供商来说是一个有用的指示器。\u003C\u002Fsummary>\n\u003Cdiv align=\"left\">\n    \u003Cp>https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Febcd27cb-da00-4150-a0f8-1d2e1afa0acb\u003C\u002Fp>\n\u003C\u002Fdiv>\n\u003C\u002Fdetails>\n\n\n### 键位绑定\n\n该插件提供了以下默认键映射：\n\n| 键位       | 描述                                                 |\n|--------------|-------------------------------------------------------------|\n| `\u003CC-g>c`     | 通过 `PrtChatNew` 打开一个新的聊天                           |\n| `\u003CC-g>\u003CC-g>` | 通过 `PrtChatRespond` 触发 API 生成回复                       |\n| `\u003CC-g>s`     | 通过 `PrtStop` 停止任何正在进行的 Parrot 生成过程            |\n| `\u003CC-g>d`     | 通过 `PrtChatDelete` 
删除当前的聊天文件                    |\n\n### 提供商配置示例\n\n统一的提供商系统允许你配置任何兼容 OpenAI 的 API 提供商。以下是几个流行提供商的示例：\n\n\u003Cdetails>\n\u003Csummary>Anthropic Claude\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  anthropic = {\n    name = \"anthropic\",\n    endpoint = \"https:\u002F\u002Fapi.anthropic.com\u002Fv1\u002Fmessages\",\n    model_endpoint = \"https:\u002F\u002Fapi.anthropic.com\u002Fv1\u002Fmodels\",\n    api_key = os.getenv \"ANTHROPIC_API_KEY\",\n    params = {\n      chat = { max_tokens = 4096 },\n      command = { max_tokens = 4096 },\n    },\n    topic = {\n      model = \"claude-3-5-haiku-latest\",\n      params = { max_tokens = 32 },\n    },\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"x-api-key\"] = self.api_key,\n        [\"anthropic-version\"] = \"2023-06-01\",\n      }\n    end,\n    models = {\n      \"claude-sonnet-4-20250514\",\n      \"claude-3-7-sonnet-20250219\",\n      \"claude-3-5-haiku-latest\",\n    },\n    preprocess_payload = function(payload)\n      for _, message in ipairs(payload.messages) do\n        message.content = message.content:gsub(\"^%s*(.-)%s*$\", \"%1\")\n      end\n      if payload.messages[1] and payload.messages[1].role == \"system\" then\n        -- 移除作为系统提示的第一条消息，因为 Anthropic 要求系统提示作为请求体中的顶级 system 字段，而非消息列表中的消息\n        payload.system = payload.messages[1].content\n        table.remove(payload.messages, 1)\n      end\n      return payload\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Meta Llama\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  llama = {\n    name = \"llama\",\n    endpoint = \"https:\u002F\u002Fapi.llama-api.com\u002Fv1\u002Fchat\u002Fcompletions\",\n    model_endpoint = \"https:\u002F\u002Fapi.llama-api.com\u002Fv1\u002Fmodels\",\n    api_key = os.getenv \"LLAMA_API_KEY\",\n    params = {\n      chat = { temperature = 1.0, top_p = 1 },\n      command = { temperature = 1.0, top_p = 1 
},\n    },\n    topic = {\n      model = \"Llama-3.1-8B-Instruct\",\n      params = { max_tokens = 32 },\n    },\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"Authorization\"] = \"Bearer \" .. self.api_key,\n      }\n    end,\n    models = {\n      \"Llama-3.1-8B-Instruct\",\n      \"Llama-3.1-70B-Instruct\",\n      \"Llama-3.1-405B-Instruct\",\n    },\n    preprocess_payload = function(payload)\n      for _, message in ipairs(payload.messages) do\n        message.content = message.content:gsub(\"^%s*(.-)%s*$\", \"%1\")\n      end\n      if payload.messages[1] and payload.messages[1].role == \"system\" then\n        -- 移除作为系统提示的第一条消息，因为 Llama API 要求系统提示应包含在请求体中\n        payload.system = payload.messages[1].content\n        table.remove(payload.messages, 1)\n      end\n      return payload\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Qwen\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  qwen = {\n    name = \"qwen\",\n    endpoint = \"https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible\u002Foss\u002Fapi\u002Fv1\u002Fchat\u002Fcompletions\",\n    model_endpoint = \"https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible\u002Foss\u002Fapi\u002Fv1\u002Fmodels\",\n    api_key = os.getenv \"QWEN_API_KEY\",\n    params = {\n      chat = { temperature = 1.0, top_p = 1 },\n      command = { temperature = 1.0, top_p = 1 },\n    },\n    topic = {\n      model = \"Qwen-Max\",\n      params = { max_tokens = 32 },\n    },\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"Authorization\"] = \"Bearer \" .. 
self.api_key,\n      }\n    end,\n    models = {\n      \"Qwen-Max\",\n      \"Qwen-Turbo\",\n      \"Qwen-Long\",\n      \"Qwen-Plus\",\n    },\n    preprocess_payload = function(payload)\n      for _, message in ipairs(payload.messages) do\n        message.content = message.content:gsub(\"^%s*(.-)%s*$\", \"%1\")\n      end\n      if payload.messages[1] and payload.messages[1].role == \"system\" then\n        -- 移除作为系统提示的第一条消息，因为 Qwen API 要求系统提示应包含在请求体中\n        payload.system = payload.messages[1].content\n        table.remove(payload.messages, 1)\n      end\n      return payload\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>DeepSeek\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  deepseek = {\n    name = \"deepseek\",\n    endpoint = \"https:\u002F\u002Fapi.deepseek.com\u002Fv1\u002Fchat\u002Fcompletions\",\n    model_endpoint = \"https:\u002F\u002Fapi.deepseek.com\u002Fv1\u002Fmodels\",\n    api_key = os.getenv \"DEEPSEEK_API_KEY\",\n    params = {\n      chat = { temperature = 1.0, top_p = 1 },\n      command = { temperature = 1.0, top_p = 1 },\n    },\n    topic = {\n      model = \"DeepSeek-V2-Lite\",\n      params = { max_tokens = 32 },\n    },\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"Authorization\"] = \"Bearer \" .. 
self.api_key,\n      }\n    end,\n    models = {\n      \"DeepSeek-V2-Lite\",\n      \"DeepSeek-V2-Pro\",\n      \"DeepSeek-V2.5-Lite\",\n      \"DeepSeek-V2.5-Pro\",\n    },\n    preprocess_payload = function(payload)\n      for _, message in ipairs(payload.messages) do\n        message.content = message.content:gsub(\"^%s*(.-)%s*$\", \"%1\")\n      end\n      if payload.messages[1] and payload.messages[1].role == \"system\" then\n        -- 移除作为系统提示的第一条消息，因为 DeepSeek API 要求系统提示应包含在请求体中\n        payload.system = payload.messages[1].content\n        table.remove(payload.messages, 1)\n      end\n      return payload\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>GigaChat（Sber）\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  gigachat = {\n    name = \"gigachat\",\n    endpoint = \"https:\u002F\u002Fgigachat.app\u002Fapi\u002Fv1\u002Fchat\u002Fcompletions\",\n    model_endpoint = \"https:\u002F\u002Fgigachat.app\u002Fapi\u002Fv1\u002Fmodels\",\n    api_key = os.getenv \"GIGACHAT_API_KEY\",\n    params = {\n      chat = { temperature = 1.0, top_p = 1 },\n      command = { temperature = 1.0, top_p = 1 },\n    },\n    topic = {\n      model = \"GigaChat Pro\",\n      params = { max_tokens = 32 },\n    },\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"Authorization\"] = \"Bearer \" .. 
self.api_key,\n      }\n    end,\n    models = {\n      \"GigaChat Pro\",\n      \"GigaChat Lite\",\n      \"GigaChat Turbo\",\n    },\n    preprocess_payload = function(payload)\n      for _, message in ipairs(payload.messages) do\n        message.content = message.content:gsub(\"^%s*(.-)%s*$\", \"%1\")\n      end\n      if payload.messages[1] and payload.messages[1].role == \"system\" then\n        -- 移除作为系统提示的第一条消息，因为 GigaChat 要求系统提示应包含在请求体中\n        payload.system = payload.messages[1].content\n        table.remove(payload.messages, 1)\n      end\n      return payload\n    end,\n  },\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Perplexity（新版 sonar 系列模型）\u003C\u002Fsummary>\n\n```lua\nproviders = {\n  perplexity = {\n    name = \"perplexity\",\n    api_key = os.getenv(\"PERPLEXITY_API_KEY\"),\n    endpoint = \"https:\u002F\u002Fapi.perplexity.ai\u002Fchat\u002Fcompletions\",\n    headers = function(self)\n      return {\n        [\"Content-Type\"] = \"application\u002Fjson\",\n        [\"Accept\"] = \"application\u002Fjson\",\n        [\"Authorization\"] = \"Bearer \" .. self.api_key,\n      }\n    end,\n    topic = {\n      model = \"r1-1776\",\n      params = {\n        max_tokens = 64,\n      },\n    },\n    models = {\n      \"sonar\",\n      \"sonar-pro\",\n      \"sonar-deep-research\",\n      \"sonar-reasoning\",\n      \"sonar-reasoning-pro\",\n      \"r1-1776\",\n    },\n  }\n}\n```\n\u003C\u002Fdetails>\n\n### 添加新命令\n\n#### 提问并以弹出窗口形式接收答案\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    hooks = {\n      Ask = function(parrot, params)\n        local template = [[\n          根据你现有的知识库，请生成一个简洁明了、直接回答问题的回复。在回答中优先考虑准确性和相关性，尽量利用你所掌握的最新信息。力求用简练的语言，抓住问题的核心要点。\n          问题: {{command}}\n        ]]\n        local model_obj = parrot.get_model(\"command\")\n        parrot.logger.info(\"正在调用模型: \" .. 
model_obj.name)\n        parrot.Prompt(params, parrot.ui.Target.popup, model_obj, \"🤖 询问 ~ \", template)\n      end,\n    }\n    -- ...\n}\n```\n\n#### 使用预设聊天提示开始对话，检查拼写\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    hooks = {\n      SpellCheck = function(prt, params)\n        local chat_prompt = [[\n          你的任务是将提供的文本改写成清晰、语法正确的版本，同时尽可能保留原文的意思。请纠正其中的拼写错误、标点符号错误、动词时态问题、用词不当以及其他语法错误。\n        ]]\n        prt.ChatNew(params, chat_prompt)\n      end,\n    }\n    -- ...\n}\n```\n\n更多钩子和键绑定可以参考我的 [个人 lazy.nvim 配置](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fdotfiles\u002Fblob\u002Fmaster\u002Fnvim\u002Flua\u002Fplugins\u002Fparrot.lua) 或者 [其他用户的配置](https:\u002F\u002Fgithub.com\u002Fsearch?utf8=%E2%9C%93&q=frankroeder%2Fparrot.nvim+language%3ALua&type=code&l=Lua)。\n\n### 提示语集合\n\n如果你在使用 `PrtRewrite`、`PrtAppend` 或 `PrtPrepend` 时，反复输入相同的提示语，那么可以考虑一种更轻量级的方式——直接定义提示语，而不是通过用户命令（即钩子）来实现：\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    prompts = {\n        [\"Spell\"] = \"我希望你校对所提供的文本，并修正其中的错误。\", -- 例如：:'\u003C,'>PrtRewrite Spell\n        [\"Comment\"] = \"请添加注释，解释这段代码的作用。\", -- 例如：:'\u003C,'>PrtPrepend Comment\n        [\"Complete\"] = \"请继续完成文件 {{filepath}} 中提供的代码片段。\", -- 例如：:'\u003C,'>PrtAppend Complete\n    }\n    -- ...\n}\n```\n\n这些提示语可以直接作为上述交互式命令的参数使用，也可以与 [模板占位符](#template-placeholders) 结合使用。\n\n### 模板占位符\n\n用户可以在钩子和系统模板中使用以下占位符，以注入额外的上下文信息：\n\n| 占位符             | 含义                              |\n|-------------------------|--------------------------------------|\n| `{{selection}}`         | 当前可视选区             |\n| `{{filetype}}`          | 当前缓冲区的文件类型       |\n| `{{filepath}}`          | 当前文件的完整路径        |\n| `{{filecontent}}`       | 当前缓冲区的全部内容     |\n| `{{multifilecontent}}`  | 所有打开缓冲区的全部内容 |\n\n下面是一个如何在补全钩子中使用这些占位符的示例，该钩子会接收整个文件上下文和选中的代码片段作为输入。\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n    hooks = {\n      CompleteFullContext = function(prt, params)\n        local template = [[\n        我有来自 {{filepath}} 的如下代码：\n\n        
```{{filetype}}\n        {{filecontent}}\n        ```\n\n        请特别关注以下部分：\n        ```{{filetype}}\n        {{selection}}\n        ```\n\n        请仔细且逻辑严谨地完成上述代码。请仅回复应插入的代码片段。\n        ]]\n        local model_obj = prt.get_model(\"command\")\n        prt.Prompt(params, prt.ui.Target.append, model_obj, nil, template)\n      end,\n    }\n    -- ...\n}\n```\n\n此外，`{{filetype}}` 和 `{{filecontent}}` 占位符也可以用于自定义钩子中调用 `prt.ChatNew(params, chat_prompt)` 时，直接注入整个文件的内容。\n\n```lua\nrequire(\"parrot\").setup {\n    -- ...\n      CodeConsultant = function(prt, params)\n        local chat_prompt = [[\n          你的任务是分析提供的 {{filetype}} 代码，并提出优化建议以提升其性能。找出代码中可以变得更高效、更快或更节省资源的地方。提供具体的优化方案，并说明这些改动将如何提升代码性能。优化后的代码应保持与原代码相同的功能，同时展现出更高的效率。\n\n          这里是代码：\n          ```{{filetype}}\n          {{filecontent}}\n          ```\n        ]]\n        prt.ChatNew(params, chat_prompt)\n      end,\n    }\n    -- ...\n}\n```\n\n## 补全功能\n\n除了使用 [模板占位符](#template-placeholders) 外，`parrot.nvim` 还支持通过 [nvim-cmp](https:\u002F\u002Fgithub.com\u002Fhrsh7th\u002Fnvim-cmp) 和 [blink.cmp](https:\u002F\u002Fgithub.com\u002FSaghen\u002Fblink.cmp\u002F) 实现内联补全，从而提供更多上下文信息：\n\n- `@buffer:foo.txt` - 包含名为 `foo.txt` 的打开缓冲区的内容\n- `@file:test.lua` - 包含名为 `test.lua` 的文件内容\n- `@directory:src\u002F` - 包含目录 `src\u002F` 下的所有文件内容\n\n> 提示：启用 `show_context_hints` 选项后，你可以直观地看到请求所考虑的实际文件内容提示。补全文本（如 `@file`）需要放在 **新行** 上！\n\n### 配置 nvim-cmp\n\n要启用 `parrot.nvim` 的补全功能，只需将该源添加到你的 `nvim-cmp` 配置中：\n\n```lua\n...\nsources = cmp.config.sources({\n  { name = \"parrot\" },\n}),\n...\n```\n\n### 配置 blink.cmp\n\n对于 `blink.cmp`，你需要将 `\"parrot\"` 添加到默认的源列表中，并按以下方式配置提供者：\n\n```lua\n...\nparrot = {\n    module = \"parrot.completion.blink\",\n    name = \"parrot\",\n    score_offset = 20,\n    opts = {\n        show_hidden_files = false,\n        max_items = 50,\n    }\n},\n...\n```\n\n## 状态栏支持\n\n使用您最喜欢的状态栏插件，可以显示当前的聊天或命令模式。下面我们提供一个针对 [lualine](https:\u002F\u002Fgithub.com\u002Fnvim-lualine\u002Flualine.nvim) 的示例：\n\n```lua\n  -- 
定义函数及信息格式化\n  local function parrot_status()\n    local status_info = require(\"parrot.config\").get_status_info()\n    local status = \"\"\n    if status_info.is_chat then\n      status = status_info.prov.chat.name\n    else\n      status = status_info.prov.command.name\n    end\n    return string.format(\"%s(%s)\", status, status_info.model)\n  end\n\n  -- 添加到 lualine 区域\n  require('lualine').setup {\n    sections = {\n      lualine_a = { parrot_status },\n    },\n  }\n```\n\n## 添加自定义提供者\n\n如果默认提供者不可用，您可以根据需要定义任意数量的自定义提供者。这使您可以自定义各种方面，例如端点、可用模型、默认参数、请求头以及处理 LLM 响应的函数。\n请注意，以这种方式配置提供者是为高级用户设计的。如果您需要帮助或对改进提供者支持有任何建议，请随时提交问题或讨论。\n\n```lua\n  providers = {\n    my_custom_provider = {\n      name = \"my_custom_provider\",\n      api_key = os.getenv(\"MY_API_KEY\"),\n      endpoint = \"https:\u002F\u002Fapi.example.com\u002Fv1\u002Fchat\u002Fcompletions\",\n      models = { \"model-1\", \"model-2\" },\n      -- 提供者特定的 curl 参数（可选）\n      curl_params = { \"--insecure\", \"--max-time\", \"30\", \"--proxy\", \"http:\u002F\u002Fproxy:8080\" },\n      -- 自定义请求头函数\n      headers = function(api_key)\n        return {\n          [\"Content-Type\"] = \"application\u002Fjson\",\n          [\"Authorization\"] = \"Bearer \" .. api_key,\n          [\"X-Custom-Header\"] = \"custom-value\",\n        }\n      end,\n      -- 自定义负载预处理\n      preprocess_payload = function(payload)\n        -- 根据您的 API 格式修改负载\n        return payload\n      end,\n      -- 自定义响应处理\n      process_stdout = function(response)\n        -- 解析来自您的 API 的流式响应\n        local success, decoded = pcall(vim.json.decode, response)\n        if success and decoded.content then\n          return decoded.content\n        end\n      end,\n    },\n  }\n```\n\n## 取消操作\n\n您可以随时使用多种方法停止正在进行的 Parrot 生成：\n\n### 方法\n\n1. **快捷键**：`\u003CC-g>s`（可通过 `chat_shortcut_stop` 配置）\n2. 
**命令**：`:PrtStop`（在任何地方都有效）\n\n### 行为\n\n当您取消生成时：\n\n- **立即终止**：API 请求会立即停止\n- **保留已生成文本**：目前为止生成的文本仍保留在缓冲区中\n- **视觉反馈**：您会收到确认取消的通知\n- **预览模式**：如果在流式传输过程中取消，则不会显示预览\n- **多任务处理**：如果有多个生成任务正在运行，所有任务都会被停止\n\n### 自动命令事件\n\n当生成被取消时，会触发 `User PrtCancelled` 事件，允许您创建自定义钩子：\n\n```lua\nvim.api.nvim_create_autocmd(\"User\", {\n  pattern = \"PrtCancelled\",\n  callback = function()\n    -- 您的自定义逻辑在此处\n    print(\"Parrot 生成已被取消\")\n  end,\n})\n```\n\n### 高级用法\n\n对于自定义代码中的特定缓冲区取消操作：\n\n```lua\n-- 仅停止当前缓冲区的任务\nlocal chat_handler = require(\"parrot\").chat_handler\nchat_handler:stop({ buffer = vim.api.nvim_get_current_buf() })\n\n-- 不发送通知地停止\nchat_handler:stop({ notify = false })\n```\n\n## 附赠\n\n可以直接从终端访问 parrot.nvim：\n\n```bash\ncommand nvim -c \"PrtChatNew\"\n```\n\n也可以将内容直接通过管道输入到聊天中：\n\n```bash\nls -l | command nvim - -c \"normal ggVGy\" -c \":PrtChatNew\" -c \"normal p\"\n```\n\n## 路线图\n\n- 添加状态栏集成\u002F通知功能，用于总结使用的 token 数量或花费金额\n- 改进文档\n- 制作教程视频\n- 降低整体代码复杂度并提高健壮性\n\n## 常见问题解答\n\n- 我遇到了与状态相关的错误。\n    > 如果状态损坏，只需删除文件 `~\u002F.local\u002Fshare\u002Fnvim\u002Fparrot\u002Fpersisted\u002Fstate.json` 即可。\n- 补全功能无法正常工作，并且出现了错误。\n    > 请确保您有足够的 API 余额，并检查日志文件 `~\u002F.local\u002Fstate\u002Fnvim\u002Fparrot.nvim.log` 中是否有任何错误。\n- 聊天和交互式命令的模型选择是如何工作的？\n    > 聊天和交互式命令的模型选择是分开的。要更改聊天模型，您必须位于使用 `PrtChatNew` 启动的聊天窗口中。在聊天窗口之外切换模型只会影响交互式命令的模型（例如 `PrtRewrite`、`PrtAppend`）。一旦设置，这些选择就会保持不变。\n- 我发现了一个 bug，有一个功能建议，或者有一个总体的想法来改进这个项目。\n    > 欢迎所有人参与该项目！如果您有任何建议、想法或 bug 报告，请随时提交问题。\n\n## 相关项目\n\n- [parrot.nvim](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim) 是早期版本 [robitx\u002Fgp.nvim](https:\u002F\u002Fgithub.com\u002FRobitx\u002Fgp.nvim) 的分支，于 2024 年 1 月从提交 `607f94d361f36b8eabb148d95993604fdd74d901` 分支出来。此后，原始代码的很大一部分已被移除或重写，这一努力将持续进行，直到 `parrot.nvim` 发展成为其独立的版本。原始的 `MIT` 许可证已被保留并将继续维持。\n- [huynle\u002Fogpt.nvim](https:\u002F\u002Fgithub.com\u002Fhuynle\u002Fogpt.nvim)\n- `PrtCmd` 的想法受到 [exit.nvim](https:\u002F\u002Fgithub.com\u002F3v0k4\u002Fexit.nvim) 的启发。\n\n## 
星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrankroeder_parrot.nvim_readme_28152ac59f10.png)](https:\u002F\u002Fstar-history.com\u002F#frankroeder\u002Fparrot.nvim&Date)","# parrot.nvim 快速上手指南\n\n`parrot.nvim` 是一款专为 Neovim 设计的 AI 辅助插件，主打**隐私透明**与**完全可控**。它不支持后台自动分析或代理模式，所有发送给 LLM 的内容均由用户明确指令触发。支持 OpenAI、Anthropic、Google Gemini、Ollama 本地模型及任何兼容 OpenAI 格式的接口。\n\n## 环境准备\n\n### 系统要求\n- **Neovim**: 版本 `0.10+` (必须)\n- **操作系统**: Linux, macOS, Windows\n\n### 前置依赖\n插件核心依赖以下工具，建议提前安装：\n- **[plenary.nvim](https:\u002F\u002Fgithub.com\u002Fnvim-lua\u002Fplenary.nvim)**: Neovim Lua 通用库 (必须)\n- **[fzf-lua](https:\u002F\u002Fgithub.com\u002Fibhagwan\u002Ffzf-lua)**: 用于模糊搜索和模型选择 (推荐，替代原 fzf\u002Fripgrep 组合以获得更好体验)\n- **ripgrep**: 可选，用于增强搜索功能\n- **Telescope**: 可选，若偏好 Telescope 而非 fzf\n\n> **国内加速建议**：\n> - 安装 Neovim 或 Lua 插件时，若遇到网络问题，可配置 GitHub 镜像源或使用代理。\n> - 使用 Ollama 等本地模型时，可直接从国内镜像站下载模型文件。\n\n## 安装步骤\n\n推荐使用 **lazy.nvim** 进行包管理。\n\n### 1. 使用 lazy.nvim 安装\n\n在你的插件配置文件中（如 `lua\u002Fplugins\u002Fparrot.lua` 或 `init.lua`）添加以下内容：\n\n```lua\n{\n  \"frankroeder\u002Fparrot.nvim\",\n  dependencies = { \"ibhagwan\u002Ffzf-lua\", \"nvim-lua\u002Fplenary.nvim\" },\n  -- 可选：添加通知插件以获得更美观的提示\n  -- dependencies = { \"ibhagwan\u002Ffzf-lua\", \"nvim-lua\u002Fplenary.nvim\", \"rcarriga\u002Fnvim-notify\" },\n  opts = {}\n}\n```\n\n### 2. 
手动安装 (不推荐)\n\n```sh\ngit clone --depth=1 https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim.git \\\n  \"${XDG_DATA_HOME:-$HOME\u002F.local\u002Fshare}\"\u002Fnvim\u002Fsite\u002Fpack\u002Fparrot\u002Fstart\u002Fparrot.nvim\n```\n\n## 基本配置\n\n`parrot.nvim` 必须至少配置一个 Provider（服务提供商）才能使用。以下是一个基于 OpenAI 的最小化配置示例。\n\n请在插件配置中添加 `config` 部分：\n\n```lua\n{\n  \"frankroeder\u002Fparrot.nvim\",\n  dependencies = { \"ibhagwan\u002Ffzf-lua\", \"nvim-lua\u002Fplenary.nvim\" },\n  config = function()\n    require(\"parrot\").setup {\n      providers = {\n        openai = {\n          name = \"openai\",\n          -- 请确保环境变量 OPENAI_API_KEY 已设置，或直接填入字符串\n          api_key = os.getenv \"OPENAI_API_KEY\", \n          endpoint = \"https:\u002F\u002Fapi.openai.com\u002Fv1\u002Fchat\u002Fcompletions\",\n          params = {\n            chat = { temperature = 1.1, top_p = 1 },\n            command = { temperature = 1.1, top_p = 1 },\n          },\n          topic = {\n            model = \"gpt-4.1-nano\", -- 用于生成聊天标题的小模型\n            params = { max_completion_tokens = 64 },\n          },\n          models = {\n            \"gpt-4o\",\n            \"o4-mini\",\n            \"gpt-4.1-nano\",\n          }\n        },\n        -- 如需使用 Ollama 本地模型，可取消注释并配置：\n        -- ollama = {\n        --   name = \"ollama\",\n        --   endpoint = \"http:\u002F\u002Flocalhost:11434\u002Fv1\u002Fchat\u002Fcompletions\",\n        --   api_key = \"ollama\", \n        --   models = { \"llama3\", \"qwen2.5-coder\" }\n        -- }\n      },\n    }\n  end,\n}\n```\n\n> **注意**：如果你使用的是国内大模型（如 DeepSeek、Moonshot 等），只需将 `endpoint` 改为对应的 API 地址，并在 `models` 列表中填入可用模型名称即可，因为它们通常兼容 OpenAI 格式。\n\n## 基本使用\n\n### 1. 开启对话聊天 (Chat Mode)\n\n在任意 Neovim 缓冲区中执行以下命令打开聊天窗口：\n\n```vim\n:PrtChatNew\n```\n*或者使用 `:PrtChatToggle` 切换上一个聊天窗口。*\n\n**操作流程**：\n1. 在打开的 Markdown 缓冲区中，找到用户前缀 `🗨:`。\n2. 输入你的问题或指令（例如：\"解释这段代码\" 或 \"写一个 Python 快速排序\"）。\n3. 按下触发键 **`\u003CC-g>\u003CC-g>`** (在插入模式下) 或运行命令 `:PrtChatRespond`。\n4. 
AI 的回答将直接流式输出到缓冲区中。\n5. 如需停止生成，按 **`\u003CC-g>s`**。\n\n### 2. 交互式代码编辑 (Command Mode)\n\n无需离开当前代码文件，直接对选中的代码进行操作。\n\n**操作流程**：\n1. 进入可视模式 (`v`, `V`, 或 `Ctrl-v`) 选中一段代码。\n2. 执行以下任一命令（可在命令后追加具体指令）：\n\n```vim\n\" 重写选中的代码\n:PrtRewrite 优化这段代码的性能\n\n\" 在选中内容后追加代码\n:PrtAppend 为这个函数添加错误处理\n\n\" 在选中内容前插入代码（如注释）\n:PrtPrepend 为这个函数添加文档注释\n\n\" 重试上一次操作\n:PrtRetry\n```\n\n3. 插件会调用 LLM 处理选中内容，并以 Diff 视图或直接替换的方式展示结果。\n\n### 3. 切换模型与提供商\n\n`parrot.nvim` 允许为“聊天”和“命令行交互”分别设置不同的模型（例如：聊天用强大的 `gpt-4o`，行内编辑用快速的 `haiku` 或本地模型）。\n\n- **切换提供商**：`:PrtProvider` (留空参数可触发 fzf 选择)\n- **切换命令行模型**：在非聊天缓冲区使用 `:PrtModel`\n- **切换聊天模型**：需在**聊天缓冲区内部**使用 `:PrtModel`\n\n### 常用命令速查\n\n| 命令 | 描述 |\n| :--- | :--- |\n| `:PrtChatNew` | 新建聊天窗口 |\n| `:PrtChatToggle` | 切换\u002F打开最近一次的聊天 |\n| `:PrtRewrite \u003Cprompt>` | 重写可视选区代码 |\n| `:PrtAppend \u003Cprompt>` | 在选区后追加内容 |\n| `:PrtPrepend \u003Cprompt>` | 在选区前插入内容 |\n| `:PrtRetry` | 重试上一次编辑操作 |\n| `:PrtStop` | 强制停止当前的 AI 生成 |\n| `:PrtContext` | 编辑当前项目的上下文配置文件 (`.parrot.md`) |","资深后端工程师小李正在 Neovim 中重构一个遗留的 Python 微服务模块，需要快速修复逻辑漏洞并补充缺失的文档注释。\n\n### 没有 parrot.nvim 时\n- **上下文切换频繁**：遇到复杂 Bug 或需要生成代码时，必须手动复制代码片段切换到浏览器，在多个 AI 网页标签页间粘贴提问，打断心流。\n- **隐私与安全顾虑**：担心将包含内部业务逻辑或敏感配置的代码上传至不明云端服务，缺乏对发送内容的完全掌控权。\n- **工作流割裂**：AI 返回的代码需要手动复制回编辑器，再仔细对齐缩进和格式，容易引入人为错误且效率低下。\n- **模型切换困难**：想对比不同大模型（如本地 Ollama 与云端 Claude）的输出效果时，需重新配置环境变量或更换工具，过程繁琐。\n\n### 使用 parrot.nvim 后\n- **原生无缝交互**：直接在 Neovim 缓冲区选中代码，通过 `PrtRewrite` 或 `PrtAppend` 命令即可让 AI 基于当前上下文修复 Bug 或续写代码，无需离开编辑器。\n- **数据主权在握**：所有发送给 LLM 的内容均由用户显式触发且完全可控，支持调用本地部署的 Ollama 模型，确保敏感代码绝不流出内网。\n- **指令驱动编辑**：利用预定义的系统提示词（System Prompts），一键为函数添加符合团队规范的文档注释（`PrtPrepend`），自动保持代码风格一致。\n- **灵活模型调度**：通过统一接口随时在 OpenAI、Anthropic 或本地模型间切换，针对不同任务选择最优模型，且凭据管理安全透明。\n\nparrot.nvim 通过将大模型能力深度融入 Neovim 原生工作流，让开发者在享有极致编码效率的同时，牢牢掌握数据隐私与操作控制权。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffrankroeder_parrot.nvim_49efdf5e.png","frankroeder","Frank Röder","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffrankroeder_4cef276d.jpg","PhD 
candidate at the Hamburg University of Technology - Working on cognitive robotics, reinforcement learning, and natural language processing.","TUHH","Hamburg",null,"https:\u002F\u002Ffrankroeder.github.io","https:\u002F\u002Fgithub.com\u002Ffrankroeder",[86,90],{"name":87,"color":88,"percentage":89},"Lua","#000080",99.9,{"name":91,"color":92,"percentage":93},"Makefile","#427819",0.1,784,51,"2026-04-01T13:39:37","NOASSERTION","Linux, macOS, Windows","未说明 (插件本身无 GPU 需求，若使用本地 Ollama 部署模型则取决于所选模型)","未说明",{"notes":102,"python":100,"dependencies":103},"该工具是 Neovim 插件而非独立 Python 应用，因此无 Python 或特定 GPU 驱动要求。核心依赖为 Neovim 0.10+ 版本。支持通过 Ollama 进行本地离线部署，此时硬件需求取决于用户选择运行的具体大语言模型。需配置 API Key（支持环境变量、Bash 命令或密码管理器）才能使用云端服务。",[104,105,106,107,108],"Neovim 0.10+","plenary.nvim","fzf-lua (可选)","ripgrep (可选)","telescope.nvim (可选)",[26,15,53,13],[111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130],"gpt","large-language-models","llm","neovim","nvim","plugin","prompting","chatgpt","openai","openai-api","ollama","anthropic","gemini","claude-3-5-sonnet","gpt-4o","perplexity","nvidia-api","gemini-api","grok","xai","2026-03-27T02:49:30.150509","2026-04-06T11:31:10.406715",[134,139,144,149,154,159],{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},14471,"遇到 'Error parsing JSON: Expected value but found T_END' 错误怎么办？","这通常是因为 Ollama 服务器未运行导致的。请确保在运行 Neovim 之前已启动 Ollama 服务。维护者已在最近的提交中修复了此问题，现在如果 Ollama 服务器未运行，将不再返回错误提示。","https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F42",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},14472,"为什么使用 @buffer 或 @file 命令时无法读取文件内容？","@file 标记必须独占一行才能正常工作。正确的用法是将标记放在单独的一行，然后在下一行输入问题。例如：\n@buffer:Chat.ts\n\n这个 Chat 模型是做什么的？\n不要将标记和问题写在同一行。","https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F126",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},14473,"执行 :PrtChatRespond 时提示 'does not look like a chat file' 
错误如何解决？","这通常发生在聊天目录被符号链接（symlink）指向其他位置时。解决方法是在 Parrot 的配置中显式设置 chat_dir 为真实路径。请在 setup 配置中添加以下代码：\nchat_dir = vim.loop.fs_realpath(vim.fn.stdpath(\"data\"):gsub(\"\u002F$\", \"\") .. \"\u002Fparrot\u002Fchats\")","https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F32",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},14474,"如何自定义助手的前缀图标（例如将鹦鹉表情改为眼镜）？","该功能已通过 commit 2b29eb3ca6bad8c82fe3cb8406a3ab82670a6643 添加到主分支，并包含在后续版本中。你可以像配置 chat_user_prefix 一样配置 agent_prefix。注意：如果你的终端不支持位图表情符号，可能会显示为方块或不显示，此时可以通过执行 :set wrap 命令来改善显示效果。","https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F27",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},14475,"如何在 Nixvim 或特定环境下配置 Perplexity 和 Gemini 提供商？","在 Nixvim 中配置时，确保正确声明原始 Lua 代码。对于 Perplexity，请参考官方文档中的研究模型卡片 (https:\u002F\u002Fdocs.perplexity.ai\u002Fmodels\u002Fmodel-cards)。如果你使用的是 1.8.0 版本，配置应该比较直接；如果是最新版本 (HEAD)，请检查配置语法是否符合最新要求。有用户反馈在修正 Nix 配置中的 Lua 代码声明后，Perplexity 即可正常工作。","https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F156",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},14476,"如何让 LLM 获取当前打开的其他缓冲区（文件）作为上下文？","该功能建议已被采纳并通过 Issue #112 实现。现在用户可以通过提示输入额外的缓冲区内容，将其作为上下文提供给 LLM。具体实现可以参考相关合并请求或查看更新后的文档以了解如何使用此功能。","https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F19",[165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250,255,260],{"id":166,"version":167,"summary_zh":168,"released_at":169},81389,"v2.5.1","## [2.5.1](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv2.5.0...v2.5.1) (2025-11-06)\n\n\n### Bug 修复\n\n* 关闭模型缓存现在能够正确地保持其不变 ([ccb35e9](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Fccb35e9fea98c1cc14ef5f8e8d0829bfbd1c103e))","2025-11-17T11:28:26",{"id":171,"version":172,"summary_zh":173,"released_at":174},81390,"v2.5.0","## 
[2.5.0](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv2.4.0...v2.5.0) (2025-10-05)\n\n\n### 功能\n\n* **provider:** 添加对 provider 特定的 curl 参数的支持 ([1959be7](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F1959be7f889ed585b12a966e6e41fafe18d5352a))\n* **provider:** 添加对 provider 特定的 curl 参数的支持 ([54a6d92](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F54a6d92110e173a6a13e7c803536d5e1464eada7))\n\n\n### 错误修复\n\n* **providers:** 在参数为空时重新加载所有缓存 ([a2b869d](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Fa2b869d6ea37675f32100ff72c212e6c812472be))\n* **providers:** 在获取模型时使用 provider 特定的 curl 参数 ([35155c6](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F35155c68149961d02a86db19617472e2763f7570))\n* **providers:** 在获取模型时使用 provider 特定的 curl 参数 ([43b0d91](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F43b0d916c0d28a01bbe41fa8bcc7ff2fcee708b2))","2025-10-05T19:04:48",{"id":176,"version":177,"summary_zh":178,"released_at":179},81391,"v2.4.0","## [2.4.0](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv2.3.0...v2.4.0) (2025-09-30)\n\n\n### 功能\n\n* 添加 PrtReloadCache 命令，用于重新加载缓存的模型 ([10d146f](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F10d146fba858b2e0c858ebbcddcd601ae6cb5e25))\n\n\n### 错误修复\n\n* 修复问题 [#177](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F177)，在删除聊天条目后重新加载列表 ([708fccf](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F708fccf259aab04cb5304802d00d03f437d74f4c))\n* **日志记录器：** 停止覆盖全局 vim.notify 函数 ([55676bc](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F55676bce578969fce606bac04fefc27fd16e3a8a))\n* 移除聊天提示缓冲区，聊天应为功能完整的缓冲区 
([da1eb03](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Fda1eb031ccb9d5822978d06f181b89e30cb66b81))\n* 重写 append 和 prepend，使其现在尊重 chat_free_cursor 选项 ([6ad76a1](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F6ad76a1b170b3fa49851504ab17cb39075b93b03))\n* 修正拼写错误，并正确检测聊天\u002F命令模式 ([8f97191](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F8f9719188a5ce294c045a3973f83d7ee3a106277))","2025-09-30T13:36:16",{"id":181,"version":182,"summary_zh":183,"released_at":184},81392,"v2.3.0","## [2.3.0](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv2.2.0...v2.3.0) (2025-07-15)\n\n\n### 功能\n\n* 将 `\u003CC-c>` 设置为取消交互式重写\u002F追加\u002F前置命令 ([7bc2dc1](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F7bc2dc116e7541e5572e96010bd2b73b1c75dc34))\n* 预览 `(r)` eject 现在会跳回编辑提示并再次调用 API ([621bd76](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F621bd76108bfe83431612aed1abccca6fe1dcaea))\n\n\n### 错误修复\n\n* 使预览 `(q)` quit 取消整个流程 ([1a8e3de](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F1a8e3de0e4fd5d97766f2c8d99e76744334196ca))","2025-07-15T07:43:48",{"id":186,"version":187,"summary_zh":188,"released_at":189},81393,"v2.2.0","## [2.2.0](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv2.1.0...v2.2.0) (2025-07-11)\n\n\n### 功能\n\n* 为交互式命令（如 rewrite\u002Fappend\u002Fprepend）添加预览模式 ([#163](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F163)) ([dcd58f9](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Fdcd58f9b1cff7890712760ad0b72a358a42d1a22))\n* 改进加载动画 ([#165](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fissues\u002F165)) 
([66afa9c](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F66afa9c460ddaa6f0bfe972f3795535d98911f35))","2025-07-11T20:12:08",{"id":191,"version":192,"summary_zh":193,"released_at":194},81394,"v2.1.0","## [2.1.0](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv2.0.0...v2.1.0) (2025-05-31)\n\n\n### 功能特性\n\n* **provider:** 增强型模型缓存，避免每次调用都重新获取模型。([07e22e2](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F07e22e23203c81fd8100c8e630557436070a89fc))\n\n\n### 错误修复\n\n* **provider:** 处理 `api_key` 命令的关闭逻辑 ([97dbbe1](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002F97dbbe1f90637c1cd895c07aff0cfd588f5a5e51))\n* **provider:** 修复 `model\u002Fmodels` 参数问题 ([bcfb227](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Fbcfb227ffe3f9512fa198f12cf8fe38984a665cc))","2025-05-31T18:29:46",{"id":196,"version":197,"summary_zh":198,"released_at":199},81395,"v2.0.0","## [2.0.0](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv1.8.0...v2.0.0) (2025-05-29)\n\n\n### ⚠ 重大变更\n\n* 添加高级且灵活的提供者配置\n* 添加高级且灵活的提供者配置\n\n### 功能\n\n* 添加高级且灵活的提供者配置 ([aabd7a5](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Faabd7a5d629b26f765e84e06897bed861ba4a1c0))\n* 添加高级且灵活的提供者配置 ([bc70212](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Fbc702128e29985be9f85e73de4f7478ba95f52b8))","2025-05-29T18:56:58",{"id":201,"version":202,"summary_zh":203,"released_at":204},81396,"v1.8.0","## [1.8.0](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcompare\u002Fv1.7.0...v1.8.0) (2025-05-27)\n\n\n### 功能\n\n* 添加 PrtCmd，可直接生成可执行的 Vim 命令 ([da3d5aa](https:\u002F\u002Fgithub.com\u002Ffrankroeder\u002Fparrot.nvim\u002Fcommit\u002Fda3d5aa98f2077246bb36a3572a6d89825cd6cb8))\n\n\n### Bug 修复\n\n* 直接将提示传递给 PrtCmd，并添加 README 提示，移除仓库上下文 
([e1a5f86](https://github.com/frankroeder/parrot.nvim/commit/e1a5f86a985f2e38ef0e1953565fe0308fcc3f7b))
* use Telescope for model selection in normal mode ([acfec2b](https://github.com/frankroeder/parrot.nvim/commit/acfec2bab4b6e5ab2c12f736fe74041b769be6c3))
* use Telescope for model selection in normal mode ([7dbf314](https://github.com/frankroeder/parrot.nvim/commit/7dbf314f40d9556974d8f5a7ab0cf3f015806696))

## [1.7.0](https://github.com/frankroeder/parrot.nvim/compare/v1.6.0...v1.7.0) (2025-04-10)

### Features

* add an option to disable the thinking window popup ([afb8aab](https://github.com/frankroeder/parrot.nvim/commit/afb8aab69ef9b2b96dc89a198a49254db8a1909a))
* add predefined prompts for interactive commands ([42f3a8e](https://github.com/frankroeder/parrot.nvim/commit/42f3a8e5b72139f555a7698ecb5dc13a50afbbd2))
* **completion:** add glob matching support (@file:*.lua) ([c08b7e3](https://github.com/frankroeder/parrot.nvim/commit/c08b7e3aa9f6379c5d2462d9949bdaebcc2294d2))
* **provider:** add support for custom model selection ([55bd3f5](https://github.com/frankroeder/parrot.nvim/commit/55bd3f5b7c8d47fd45fd9b1d8a79f7e5d4b1872e))

### Bug Fixes

* add an extra thinking-state check ([161e75e](https://github.com/frankroeder/parrot.nvim/commit/161e75e84019a63604944816d100d5601cf109bf))
* **context:** correct where provided paths are expanded ([074df4b](https://github.com/frankroeder/parrot.nvim/commit/074df4b580fc2de94712657d8a622d941170935b))
* remove silly State parameter ([2a5cdaf](https://github.com/frankroeder/parrot.nvim/commit/2a5cdaf3fd6db1a520b6b3e2bc44c33d0154c7e2))

## [1.6.0](https://github.com/frankroeder/parrot.nvim/compare/v1.5.0...v1.6.0) (2025-03-31)

### Features

* **finder:** improve file search ([3158788](https://github.com/frankroeder/parrot.nvim/commit/3158788f52745310bee3ec5a53dd0012f17f34d0))

### Bug Fixes

* **openai:** check for reasoning models to adjust curl API parameters ([a2c9585](https://github.com/frankroeder/parrot.nvim/commit/a2c9585ba06a5f4794ca1ae14918b505700592f0))

## [1.5.0](https://github.com/frankroeder/parrot.nvim/compare/v1.4.0...v1.5.0) (2025-03-15)

### Features

* Add Claude thinking functionality ([f5a6057](https://github.com/frankroeder/parrot.nvim/commit/f5a6057a1a883fa979aacc6c04ecb8ea4dd2b128))
* **anthropic:** add auto scroll for thinking ([078e2eb](https://github.com/frankroeder/parrot.nvim/commit/078e2ebe5df88e6ffb2db6a9b592b4c4a4c72d96))
* Persist thinking config to state ([40597f9](https://github.com/frankroeder/parrot.nvim/commit/40597f9a605b35c984890677646fde29c4b83cec))
* Remember thinking config when toggling ([c92bdb9](https://github.com/frankroeder/parrot.nvim/commit/c92bdb93d3936f9b72fd80cbd3f94dededfebdfe))

### Bug Fixes

* pass payload to curl through stdin ([79446e2](https://github.com/frankroeder/parrot.nvim/commit/79446e2416fb81bf5cc478417c552ea17814d576))
* pass payload to curl through stdin ([5dd932e](https://github.com/frankroeder/parrot.nvim/commit/5dd932eb1146cf400880abb9bba437fe3dd2a1b7))

## [1.4.0](https://github.com/frankroeder/parrot.nvim/compare/v1.3.0...v1.4.0) (2025-03-11)

### Features

* add nvim-cmp context completion support ([8f9adc6](https://github.com/frankroeder/parrot.nvim/commit/8f9adc6096099da4a6290648457363a4e8bb13a6))

### Bug Fixes

* improve completion feature robustness and add tests ([9ad2ebc](https://github.com/frankroeder/parrot.nvim/commit/9ad2ebc9d93806b7195c27c01e480b01f8410ff6))

## [1.3.0](https://github.com/frankroeder/parrot.nvim/compare/v1.2.2...v1.3.0) (2025-03-06)

### Features

* add deepseek provider ([086ec4e](https://github.com/frankroeder/parrot.nvim/commit/086ec4e1f7bdf569f8e5f20104038ee80f9d5e75))
* add support to deepseek as a provider ([66da297](https://github.com/frankroeder/parrot.nvim/commit/66da297d328a90bbecfcb7c6302cce5246d60502))
* **anthropic:** add model request and update default model selection ([28113b9](https://github.com/frankroeder/parrot.nvim/commit/28113b9c7d23cebe54cfc9adac36aa613096e718))

### Bug Fixes

* **perplexity:** add new reasoning model ([340e195](https://github.com/frankroeder/parrot.nvim/commit/340e195fad6ae32576a2947d2af152b89bfc5344))
* **perplexity:** add new reasoning model ([7a87bab](https://github.com/frankroeder/parrot.nvim/commit/7a87bab9d9d37d00ff244bcd56cd1a9739692e30))
* **perplexity:** add the new reasoning pro model ([b5f37a0](https://github.com/frankroeder/parrot.nvim/commit/b5f37a07c76dba8ac1c8a34981af297067e69f64))
* **perplexity:** add the new reasoning pro model ([e5c2d94](https://github.com/frankroeder/parrot.nvim/commit/e5c2d9403fcdc6e9cb587eca099f442f109c9399))
* use new release-please repo ([4ac5422](https://github.com/frankroeder/parrot.nvim/commit/4ac542290c7b328e4a7916e7f6773d1a60c68957))

## [1.2.2](https://github.com/frankroeder/parrot.nvim/compare/v1.2.1...v1.2.2) (2025-01-26)

### Bug Fixes

* add missing module in mistral provider ([b8f52da](https://github.com/frankroeder/parrot.nvim/commit/b8f52dab988a2c21d18aff3ba7806ddc36c2fe8d))
* change current_provider default value to nil ([a8468be](https://github.com/frankroeder/parrot.nvim/commit/a8468be7311ac04b86bf08a05ea480f444b7c1ea))
* **perplexity:** update available models ([bdb30c2](https://github.com/frankroeder/parrot.nvim/commit/bdb30c2007f523e97911185ec97a55486adbecab))

## [1.2.1](https://github.com/frankroeder/parrot.nvim/compare/v1.2.0...v1.2.1) (2024-11-19)

### Bug Fixes

* Set filetype to markdown for parrot responses ([119828b](https://github.com/frankroeder/parrot.nvim/commit/119828b016c07c547a093fb31bf60272d518e033))
* use literal string compare before file deletion ([4d43901](https://github.com/frankroeder/parrot.nvim/commit/4d439010e6abf7bcb3e70761a3ccadaed19135ad))
* xAI API change of model listing request ([c992483](https://github.com/frankroeder/parrot.nvim/commit/c992483dd0cf9d7481b55714d52365d1f7a66f91))

## [1.2.0](https://github.com/frankroeder/parrot.nvim/compare/v1.1.0...v1.2.0) (2024-10-21)

### Features

* add xAI as provider for Grok ([ef0149d](https://github.com/frankroeder/parrot.nvim/commit/ef0149d4b335d83d79deacae2f4bbf10e78314f5))

## [1.1.0](https://github.com/frankroeder/parrot.nvim/compare/v1.0.0...v1.1.0) (2024-10-17)

### Features

* add nvidia api support ([94e218d](https://github.com/frankroeder/parrot.nvim/commit/94e218dee56344d065c9d0cf37d89225d03ae5f5))

### Bug Fixes

* resolve issue with toggle_target ([51e7d1c](https://github.com/frankroeder/parrot.nvim/commit/51e7d1c2820fb4333bdcfc9751abfa74e9d90329))

## [1.0.0](https://github.com/frankroeder/parrot.nvim/compare/v0.7.0...v1.0.0) (2024-10-15)

### ⚠ BREAKING CHANGES

* ChatNew now follows toggle_target option

### Features

* ChatNew now follows toggle_target option ([345fb4e](https://github.com/frankroeder/parrot.nvim/commit/345fb4e3bed17c1822c1cd40ccec158be13d3f7e))

### Bug Fixes

* **ollama:** additional guard if server is not running ([fdcaa6c](https://github.com/frankroeder/parrot.nvim/commit/fdcaa6ccc368b69f0b0cdd8d5998e53ac2812aeb))
* **provider:** remove pplx event-stream header ([b347a1c](https://github.com/frankroeder/parrot.nvim/commit/b347a1ce80336a519634df3668c8b940acf83653))
* resolve history bug with custom hooks ([0db1e3b](https://github.com/frankroeder/parrot.nvim/commit/0db1e3beff0c434fec13c809bd105a4485946ece))

## [0.7.0](https://github.com/frankroeder/parrot.nvim/compare/v0.6.0...v0.7.0) (2024-09-11)

### Features

* add github model beta support ([6f36955](https://github.com/frankroeder/parrot.nvim/commit/6f36955a2174af95c3cf98165e907cdf60f289bb))
* **provider:** add gemini online model support ([be975ee](https://github.com/frankroeder/parrot.nvim/commit/be975ee542c8c24ebb90f154e25e2c89633b5d2d))

### Bug Fixes

* add missing import ([4a50b58](https://github.com/frankroeder/parrot.nvim/commit/4a50b58ce0036009ffc7419df2c2619e8a09496e))
* **responsehandler:** address bug ([075294c](https://github.com/frankroeder/parrot.nvim/commit/075294c1a9da6e35727007c4105590b8768d3681))
* **responsehandler:** window handling ([f2cbfc5](https://github.com/frankroeder/parrot.nvim/commit/f2cbfc592e1a5c470a840abdba5abc4940911f55))

## [0.6.0](https://github.com/frankroeder/parrot.nvim/compare/v0.5.0...v0.6.0) (2024-08-22)

### Features

* add status line support ([3ac1d28](https://github.com/frankroeder/parrot.nvim/commit/3ac1d2885428a573b4851bbc07735465a2019351))
* **commands:** implement the retry command ([29f7701](https://github.com/frankroeder/parrot.nvim/commit/29f7701585e02abc363df0691c37f6699494bd03))

### Bug Fixes

* add `PrtStatus` command ([322a45e](https://github.com/frankroeder/parrot.nvim/commit/322a45ead223c4698f52ba5d03e745fe330a7ab5))
* add missing multifilecontent support for chat prompts ([b8f221e](https://github.com/frankroeder/parrot.nvim/commit/b8f221efdde7c0294917ecb96829e1e1fe6986b2))
* Neovim version check ([a4fd3f3](https://github.com/frankroeder/parrot.nvim/commit/a4fd3f3a55a258c689cd97f0b85a0f267bc239e3))
* revert wrong license change ([1a7192c](https://github.com/frankroeder/parrot.nvim/commit/1a7192c3842f55578f787ff08766d7d4e713f701))
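
Several of the releases above add new providers (deepseek in v1.3.0, xAI in v1.2.0, NVIDIA in v1.1.0). A minimal sketch of enabling some of them via a lazy.nvim plugin spec, following the `require("parrot").setup` shape from the project's README; the provider table keys and `api_key` field here are an assumption based on that pattern, so check the current README for the exact option names in your installed version:

```lua
-- Hypothetical lazy.nvim spec for parrot.nvim.
-- Provider keys (openai, anthropic, xai, deepseek) follow the
-- README's setup shape; verify against the docs for your version.
{
  "frankroeder/parrot.nvim",
  dependencies = { "ibhagwan/fzf-lua", "nvim-lua/plenary.nvim" },
  config = function()
    require("parrot").setup({
      providers = {
        -- API keys are read from the environment, never hard-coded.
        openai = { api_key = os.getenv("OPENAI_API_KEY") },
        anthropic = { api_key = os.getenv("ANTHROPIC_API_KEY") },
        xai = { api_key = os.getenv("XAI_API_KEY") }, -- provider added in v1.2.0
        deepseek = { api_key = os.getenv("DEEPSEEK_API_KEY") }, -- provider added in v1.3.0
      },
    })
  end,
}
```

Keeping keys in environment variables also matches the v1.5.0 fix that passes the request payload to curl through stdin: in both cases, secrets stay out of command lines and version-controlled files.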