[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-PleasePrompto--notebooklm-mcp":3,"tool-PleasePrompto--notebooklm-mcp":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":79,"owner_twitter":75,"owner_website":81,"owner_url":82,"languages":83,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":23,"env_os":96,"env_gpu":97,"env_ram":97,"env_deps":98,"category_tags":101,"github_topics":79,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":102,"updated_at":103,"faqs":104,"releases":133},1060,"PleasePrompto\u002Fnotebooklm-mcp","notebooklm-mcp","MCP server for NotebookLM - Let your AI agents (Claude Code, Codex) research documentation directly with grounded, citation-backed answers from Gemini. Persistent auth, library management, cross-client sharing. Zero hallucinations, just your knowledge base.","notebooklm-mcp 是一个连接本地AI代理与NotebookLM知识库的服务器工具，让开发者能直接通过命令行与Gemini生成的零幻觉答案交互。它解决了传统文档检索的痛点：避免反复读取文档导致的高token消耗、关键词搜索的不准确性、以及代理自行编造信息的幻觉问题。用户通过本地代理（如Claude Code、Codex）直接向NotebookLM提问，系统会自动关联多份文档并引用来源，确保答案精准且有依据。\n\n适合需要高效处理技术文档的开发者和研究人员，尤其在编写代码、调试问题时，可快速获取基于知识库的深度解答。其核心优势在于：利用Gemini预处理文档生成专家级知识、支持自然语言多轮对话、自动关联多源信息，并通过引用标注增强可信度。无需搭建复杂基础设施，简化了本地RAG方案的实现流程。","\u003Cdiv align=\"center\">\n\n# NotebookLM MCP Server\n\n**Let your CLI agents (Claude, Cursor, Codex...) 
chat directly with NotebookLM for zero-hallucination answers based on your own notebooks**\n\n[![TypeScript](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTypeScript-5.x-blue.svg)](https:\u002F\u002Fwww.typescriptlang.org\u002F)\n[![MCP](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMCP-2025-green.svg)](https:\u002F\u002Fmodelcontextprotocol.io\u002F)\n[![npm](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fnotebooklm-mcp.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fnotebooklm-mcp)\n[![Claude Code Skill](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FClaude%20Code-Skill-purple.svg)](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-skill)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FPleasePrompto\u002Fnotebooklm-mcp?style=social)](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-mcp)\n\n[Installation](#installation) • [Quick Start](#quick-start) • [Why NotebookLM](#why-notebooklm-not-local-rag) • [Examples](#real-world-example) • [Claude Code Skill](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-skill) • [Documentation](.\u002Fdocs\u002F)\n\n\u003C\u002Fdiv>\n\n---\n\n## The Problem\n\nWhen you tell Claude Code or Cursor to \"search through my local documentation\", here's what happens:\n- **Massive token consumption**: Searching through documentation means reading multiple files repeatedly\n- **Inaccurate retrieval**: Searches for keywords, misses context and connections between docs\n- **Hallucinations**: When it can't find something, it invents plausible-sounding APIs\n- **Expensive & slow**: Each question requires re-reading multiple files\n\n## The Solution\n\nLet your local agents chat directly with [**NotebookLM**](https:\u002F\u002Fnotebooklm.google\u002F) — Google's **zero-hallucination knowledge base** powered by Gemini 2.5 that provides intelligent, synthesized answers from your docs.\n\n```\nYour Task → Local Agent asks NotebookLM → Gemini synthesizes 
answer → Agent writes correct code\n```\n\n**The real advantage**: No more manual copy-paste between NotebookLM and your editor. Your agent asks NotebookLM directly and gets answers straight back in the CLI. It builds deep understanding through automatic follow-ups — Claude asks multiple questions in sequence, each building on the last, getting specific implementation details, edge cases, and best practices. You can save NotebookLM links to your local library with tags and descriptions, and Claude automatically selects the relevant notebook based on your current task.\n\n---\n\n## Why NotebookLM, Not Local RAG?\n\n| Approach | Token Cost | Setup Time | Hallucinations | Answer Quality |\n|----------|------------|------------|----------------|----------------|\n| **Feed docs to Claude** | 🔴 Very high (multiple file reads) | Instant | Yes - fills gaps | Variable retrieval |\n| **Web search** | 🟡 Medium | Instant | High - unreliable sources | Hit or miss |\n| **Local RAG** | 🟡 Medium-High | Hours (embeddings, chunking) | Medium - retrieval gaps | Depends on setup |\n| **NotebookLM MCP** | 🟢 Minimal | 5 minutes | **Zero** - refuses if unknown | Expert synthesis |\n\n### What Makes NotebookLM Superior?\n\n1. **Pre-processed by Gemini**: Upload docs once, get instant expert knowledge\n2. **Natural language Q&A**: Not just retrieval — actual understanding and synthesis\n3. **Multi-source correlation**: Connects information across 50+ documents\n4. **Citation-backed**: Every answer includes source references\n5. 
**No infrastructure**: No vector DBs, embeddings, or chunking strategies needed\n\n---\n\n## Installation\n\n### Claude Code\n```bash\nclaude mcp add notebooklm npx notebooklm-mcp@latest\n```\n\n### Codex\n```bash\ncodex mcp add notebooklm -- npx notebooklm-mcp@latest\n```\n\n\u003Cdetails>\n\u003Csummary>Gemini\u003C\u002Fsummary>\n\n```bash\ngemini mcp add notebooklm npx notebooklm-mcp@latest\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Cursor\u003C\u002Fsummary>\n\nAdd to `~\u002F.cursor\u002Fmcp.json`:\n```json\n{\n  \"mcpServers\": {\n    \"notebooklm\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"notebooklm-mcp@latest\"]\n    }\n  }\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>amp\u003C\u002Fsummary>\n\n```bash\namp mcp add notebooklm -- npx notebooklm-mcp@latest\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>VS Code\u003C\u002Fsummary>\n\n```bash\ncode --add-mcp '{\"name\":\"notebooklm\",\"command\":\"npx\",\"args\":[\"notebooklm-mcp@latest\"]}'\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Other MCP clients\u003C\u002Fsummary>\n\n**Generic MCP config:**\n```json\n{\n  \"mcpServers\": {\n    \"notebooklm\": {\n      \"command\": \"npx\",\n      \"args\": [\"notebooklm-mcp@latest\"]\n    }\n  }\n}\n```\n\u003C\u002Fdetails>\n\n---\n\n## Alternative: Claude Code Skill\n\n**Prefer Claude Code Skills over MCP?** This server is now also available as a native Claude Code Skill with a simpler setup:\n\n**[NotebookLM Claude Code Skill](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-skill)** - Clone to `~\u002F.claude\u002Fskills` and start using immediately\n\n**Key differences:**\n- **MCP Server** (this repo): Persistent sessions, works with Claude Code, Codex, Cursor, and other MCP clients\n- **Claude Code Skill**: Simpler setup, Python-based, stateless queries, works only with local Claude Code\n\nBoth use the same browser automation technology and provide 
zero-hallucination answers from your NotebookLM notebooks.\n\n---\n\n## Quick Start\n\n### 1. Install the MCP server (see [Installation](#installation) above)\n\n### 2. Authenticate (one-time)\n\nSay in your chat (Claude\u002FCodex):\n```\n\"Log me in to NotebookLM\"\n```\n*A Chrome window opens → log in with Google*\n\n### 3. Create your knowledge base\nGo to [notebooklm.google.com](https:\u002F\u002Fnotebooklm.google.com) → Create notebook → Upload your docs:\n- 📄 PDFs, Google Docs, markdown files\n- 🔗 Websites, GitHub repos\n- 🎥 YouTube videos\n- 📚 Multiple sources per notebook\n\nShare: **⚙️ Share → Anyone with link → Copy**\n\n### 4. Let Claude use it\n```\n\"I'm building with [library]. Here's my NotebookLM: [link]\"\n```\n\n**That's it.** Claude now asks NotebookLM whatever it needs, building expertise before writing code.\n\n---\n\n## Real-World Example\n\n### Building an n8n Workflow Without Hallucinations\n\n**Challenge**: n8n's API is new — Claude hallucinates node names and functions.\n\n**Solution**:\n1. Downloaded complete n8n documentation → merged into manageable chunks\n2. Uploaded to NotebookLM\n3. Told Claude: *\"Build me a Gmail spam filter workflow. Use this NotebookLM: [link]\"*\n\n**Watch the AI-to-AI conversation:**\n\n```\nClaude → \"How does Gmail integration work in n8n?\"\nNotebookLM → \"Use Gmail Trigger with polling, or Gmail node with Get Many...\"\n\nClaude → \"How to decode base64 email body?\"\nNotebookLM → \"Body is base64url encoded in payload.parts, use Function node...\"\n\nClaude → \"How to parse OpenAI response as JSON?\"\nNotebookLM → \"Set responseFormat to json, use {{ $json.spam }} in IF node...\"\n\nClaude → \"What about error handling if the API fails?\"\nNotebookLM → \"Use Error Trigger node with Continue On Fail enabled...\"\n\nClaude → ✅ \"Here's your complete workflow JSON...\"\n```\n\n**Result**: Perfect workflow on first try. 
No debugging hallucinated APIs.\n\n---\n\n## Core Features\n\n### **Zero Hallucinations**\nNotebookLM refuses to answer if information isn't in your docs. No invented APIs.\n\n### **Autonomous Research**\nClaude asks follow-up questions automatically, building complete understanding before coding.\n\n### **Smart Library Management**\nSave NotebookLM links with tags and descriptions. Claude auto-selects the right notebook for your task.\n```\n\"Add [link] to library tagged 'frontend, react, components'\"\n```\n\n### **Deep, Iterative Research**\n- Claude automatically asks follow-up questions to build complete understanding\n- Each answer triggers deeper questions until Claude has all the details\n- Example: For n8n workflow, Claude asked multiple sequential questions about Gmail integration, error handling, and data transformation\n\n### **Cross-Tool Sharing**\nSet up once, use everywhere. Claude Code, Codex, Cursor — all share the same library.\n\n### **Deep Cleanup Tool**\nFresh start anytime. Scans entire system for NotebookLM data with categorized preview.\n\n---\n\n## Tool Profiles\n\nReduce token usage by loading only the tools you need. 
Each tool consumes context tokens — fewer tools = faster responses and lower costs.\n\n### Available Profiles\n\n| Profile | Tools | Use Case |\n|---------|-------|----------|\n| **minimal** | 5 | Query-only: `ask_question`, `get_health`, `list_notebooks`, `select_notebook`, `get_notebook` |\n| **standard** | 10 | + Library management: `setup_auth`, `list_sessions`, `add_notebook`, `update_notebook`, `search_notebooks` |\n| **full** | 16 | All tools including `cleanup_data`, `re_auth`, `remove_notebook`, `reset_session`, `close_session`, `get_library_stats` |\n\n### Configure via CLI\n\n```bash\n# Check current settings\nnpx notebooklm-mcp config get\n\n# Set a profile\nnpx notebooklm-mcp config set profile minimal\nnpx notebooklm-mcp config set profile standard\nnpx notebooklm-mcp config set profile full\n\n# Disable specific tools (comma-separated)\nnpx notebooklm-mcp config set disabled-tools \"cleanup_data,re_auth\"\n\n# Reset to defaults\nnpx notebooklm-mcp config reset\n```\n\n### Configure via Environment Variables\n\n```bash\n# Set profile\nexport NOTEBOOKLM_PROFILE=minimal\n\n# Disable specific tools\nexport NOTEBOOKLM_DISABLED_TOOLS=\"cleanup_data,re_auth,remove_notebook\"\n```\n\nSettings are saved to `~\u002F.config\u002Fnotebooklm-mcp\u002Fsettings.json` and persist across sessions. 
Environment variables override file settings.\n\n---\n\n## Architecture\n\n```mermaid\ngraph LR\n    A[Your Task] --> B[Claude\u002FCodex]\n    B --> C[MCP Server]\n    C --> D[Chrome Automation]\n    D --> E[NotebookLM]\n    E --> F[Gemini 2.5]\n    F --> G[Your Docs]\n    G --> F\n    F --> E\n    E --> D\n    D --> C\n    C --> B\n    B --> H[Accurate Code]\n```\n\n---\n\n## Common Commands\n\n| Intent | Say | Result |\n|--------|-----|--------|\n| Authenticate | *\"Open NotebookLM auth setup\"* or *\"Log me in to NotebookLM\"* | Chrome opens for login |\n| Add notebook | *\"Add [link] to library\"* | Saves notebook with metadata |\n| List notebooks | *\"Show our notebooks\"* | Lists all saved notebooks |\n| Research first | *\"Research this in NotebookLM before coding\"* | Multi-question session |\n| Select notebook | *\"Use the React notebook\"* | Sets active notebook |\n| Update notebook | *\"Update notebook tags\"* | Modify metadata |\n| Remove notebook | *\"Remove [notebook] from library\"* | Deletes from library |\n| View browser | *\"Show me the browser\"* | Watch live NotebookLM chat |\n| Fix auth | *\"Repair NotebookLM authentication\"* | Clears and re-authenticates |\n| Switch account | *\"Re-authenticate with different Google account\"* | Changes account |\n| Clean restart | *\"Run NotebookLM cleanup\"* | Removes all data for fresh start |\n| Keep library | *\"Cleanup but keep my library\"* | Preserves notebooks |\n| Delete all data | *\"Delete all NotebookLM data\"* | Complete removal |\n\n---\n\n## Comparison to Alternatives\n\n### vs. Downloading docs locally\n- **You**: Download docs → Claude: \"search through these files\"\n- **Problem**: Claude reads thousands of files → massive token usage, often misses connections\n- **NotebookLM**: Pre-indexed by Gemini, semantic understanding across all docs\n\n### vs. 
Web search\n- **You**: \"Research X online\"\n- **Problem**: Outdated info, hallucinated examples, unreliable sources\n- **NotebookLM**: Only your trusted docs, always current, with citations\n\n### vs. Local RAG setup\n- **You**: Set up embeddings, vector DB, chunking strategy, retrieval pipeline\n- **Problem**: Hours of setup, tuning retrieval, still gets \"creative\" with gaps\n- **NotebookLM**: Upload docs → done. Google handles everything.\n\n---\n\n## FAQ\n\n**Is it really zero hallucinations?**\nYes. NotebookLM is specifically designed to only answer from uploaded sources. If it doesn't know, it says so.\n\n**What about rate limits?**\nFree tier has daily query limits per Google account. Quick account switching supported for continued research.\n\n**How secure is this?**\nChrome runs locally. Your credentials never leave your machine. Use a dedicated Google account if concerned.\n\n**Can I see what's happening?**\nYes! Say *\"Show me the browser\"* to watch the live NotebookLM conversation.\n\n**What makes this better than Claude's built-in knowledge?**\nYour docs are always current. No training cutoff. No hallucinations. Perfect for new libraries, internal APIs, or fast-moving projects.\n\n---\n\n## Advanced Usage\n\n- 📖 [**Usage Guide**](.\u002Fdocs\u002Fusage-guide.md) — Patterns, workflows, tips\n- 🛠️ [**Tool Reference**](.\u002Fdocs\u002Ftools.md) — Complete MCP API\n- 🔧 [**Configuration**](.\u002Fdocs\u002Fconfiguration.md) — Environment variables\n- 🐛 [**Troubleshooting**](.\u002Fdocs\u002Ftroubleshooting.md) — Common issues\n\n---\n\n## The Bottom Line\n\n**Without NotebookLM MCP**: Write code → Find it's wrong → Debug hallucinated APIs → Repeat\n\n**With NotebookLM MCP**: Claude researches first → Writes correct code → Ship faster\n\nStop debugging hallucinations. 
Start shipping accurate code.\n\n```bash\n# Get started in 30 seconds\nclaude mcp add notebooklm npx notebooklm-mcp@latest\n```\n\n---\n\n## Disclaimer\n\nThis tool automates browser interactions with NotebookLM to make your workflow more efficient. However, a few friendly reminders:\n\n**About browser automation:**\nWhile I've built in humanization features (realistic typing speeds, natural delays, mouse movements) to make the automation behave more naturally, I can't guarantee Google won't detect or flag automated usage. I recommend using a dedicated Google account for automation rather than your primary account—think of it like web scraping: probably fine, but better safe than sorry!\n\n**About CLI tools and AI agents:**\nCLI tools like Claude Code, Codex, and similar AI-powered assistants are incredibly powerful, but they can make mistakes. Please use them with care and awareness:\n- Always review changes before committing or deploying\n- Test in safe environments first\n- Keep backups of important work\n- Remember: AI agents are assistants, not infallible oracles\n\nI built this tool for myself because I was tired of the copy-paste dance between NotebookLM and my editor. I'm sharing it in the hope it helps others too, but I can't take responsibility for any issues, data loss, or account problems that might occur. Use at your own discretion and judgment.\n\nThat said, if you run into problems or have questions, feel free to open an issue on GitHub. I'm happy to help troubleshoot!\n\n---\n\n## Contributing\n\nFound a bug? Have a feature idea? 
[Open an issue](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-mcp\u002Fissues) or submit a PR!\n\n## License\n\nMIT — Use freely in your projects.\n\n---\n\n\u003Cdiv align=\"center\">\n\nBuilt with frustration about hallucinated APIs, powered by Google's NotebookLM\n\n⭐ [Star on GitHub](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-mcp) if this saves you debugging time!\n\n\u003C\u002Fdiv>\n","\u003Cdiv align=\"center\">\n\n# NotebookLM MCP 服务器\n\n**让你的 CLI 代理（Claude、Cursor、Codex 等）直接与 NotebookLM 聊天，基于你的笔记获得零幻觉答案**\n\n[![TypeScript](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTypeScript-5.x-blue.svg)](https:\u002F\u002Fwww.typescriptlang.org\u002F)\n[![MCP](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMCP-2025-green.svg)](https:\u002F\u002Fmodelcontextprotocol.io\u002F)\n[![npm](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fnotebooklm-mcp.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fnotebooklm-mcp)\n[![Claude Code 技能](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FClaude%20Code-Skill-purple.svg)](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-skill)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FPleasePrompto\u002Fnotebooklm-mcp?style=social)](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-mcp)\n\n[安装](#安装) • [快速开始](#快速开始) • [为何选择 NotebookLM](#为何-notebooklm-而非本地-rag) • [示例](#真实示例) • [Claude Code 技能](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-skill) • [文档](.\u002Fdocs\u002F)\n\n\u003C\u002Fdiv>\n\n---\n\n## 问题\n\n当你让 Claude Code 或 Cursor \"搜索我的本地文档\"时，会发生以下情况：\n- **大量 token 消耗**：搜索文档意味着反复读取多个文件\n- **检索不准确**：仅基于关键词搜索，忽略上下文和文档间的关联\n- **幻觉问题**：找不到信息时会编造看似合理的 API\n- **成本高且缓慢**：每个问题都需要重新读取多个文件\n\n## 解决方案\n\n让你的本地代理直接与 [**NotebookLM**](https:\u002F\u002Fnotebooklm.google\u002F)（谷歌基于 Gemini 2.5 的**零幻觉知识库**）聊天，从你的文档中获取智能合成答案。\n\n```\n你的任务 → 本地代理询问 NotebookLM → Gemini 合成答案 → 代理编写正确代码\n```\n\n**真正优势**：无需在 NotebookLM 
和编辑器之间手动复制粘贴。你的代理可直接向 NotebookLM 提问并从 CLI 获取答案。通过自动追问建立深度理解——Claude 会连续提出多个问题，逐步获取具体实现细节、边界情况和最佳实践。你可以将 NotebookLM 链接保存到本地库并添加标签描述，Claude 会根据当前任务自动选择相关笔记本。\n\n---\n\n## 为何选择 NotebookLM 而非本地 RAG？\n\n| 方法 | token 成本 | 部署时间 | 幻觉问题 | 答案质量 |\n|----------|------------|------------|----------------|----------------|\n| **直接喂文档给 Claude** | 🔴 很高（多文件读取） | 立即可用 | 有 - 会填补空白 | 检索结果波动大 |\n| **网络搜索** | 🟡 中等 | 立即可用 | 高 - 来源不可靠 | 随机性大 |\n| **本地 RAG** | 🟡 中等-高 | 数小时（需要嵌入、分块） | 中等 - 检索空白 | 依赖配置 |\n| **NotebookLM MCP** | 🟢 极低 | 5 分钟 | **零幻觉** - 未知时会拒绝回答 | 专家级合成 |\n\n### NotebookLM 的优势\n\n1. **Gemini 预处理**：文档只需上传一次，即可获得即时专家知识\n2. **自然语言问答**：不仅是检索，更是真正的理解和合成\n3. **多源关联**：可关联 50+ 文档中的信息\n4. **引用支持**：每个答案都包含来源引用\n5. **无需基础设施**：无需向量数据库、嵌入或分块策略\n\n---\n\n## 安装\n\n### Claude Code\n```bash\nclaude mcp add notebooklm npx notebooklm-mcp@latest\n```\n\n### Codex\n```bash\ncodex mcp add notebooklm -- npx notebooklm-mcp@latest\n```\n\n\u003Cdetails>\n\u003Csummary>Gemini\u003C\u002Fsummary>\n\n```bash\ngemini mcp add notebooklm npx notebooklm-mcp@latest\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Cursor\u003C\u002Fsummary>\n\n添加到 `~\u002F.cursor\u002Fmcp.json`:\n```json\n{\n  \"mcpServers\": {\n    \"notebooklm\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"notebooklm-mcp@latest\"]\n    }\n  }\n}\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>amp\u003C\u002Fsummary>\n\n```bash\namp mcp add notebooklm -- npx notebooklm-mcp@latest\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>VS Code\u003C\u002Fsummary>\n\n```bash\ncode --add-mcp '{\"name\":\"notebooklm\",\"command\":\"npx\",\"args\":[\"notebooklm-mcp@latest\"]}'\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>其他 MCP 客户端\u003C\u002Fsummary>\n\n**通用 MCP 配置：**\n```json\n{\n  \"mcpServers\": {\n    \"notebooklm\": {\n      \"command\": \"npx\",\n      \"args\": [\"notebooklm-mcp@latest\"]\n    }\n  }\n}\n```\n\u003C\u002Fdetails>\n\n---\n\n## 替代方案：Claude Code 技能\n\n**偏好 Claude Code 
技能而非 MCP？** 本服务器现也提供原生 Claude Code 技能版本，配置更简单：\n\n**[NotebookLM Claude Code 技能](https:\u002F\u002Fgithub.com\u002FPleasePrompto\u002Fnotebooklm-skill)** - 克隆到 `~\u002F.claude\u002Fskills` 即可使用\n\n**主要区别：**\n- **MCP 服务器**（本仓库）：持久会话，支持 Claude Code、Codex、Cursor 和其他 MCP 客户端\n- **Claude Code 技能**：配置简单，基于 Python，无状态查询，仅支持本地 Claude Code\n\n两者均使用相同的浏览器自动化技术，从你的 NotebookLM 笔记中提供零幻觉答案。\n\n---\n\n## 快速开始\n\n### 1. 安装 MCP 服务器（见上方[安装](#安装)说明）\n\n### 2. 认证（一次性操作）\n\n在聊天窗口（Claude\u002FCodex）中输入：\n```\n\"登录 NotebookLM\"\n```\n*会打开 Chrome 窗口 → 使用谷歌账号登录*\n\n### 3. 创建知识库\n访问 [notebooklm.google.com](https:\u002F\u002Fnotebooklm.google.com) → 创建笔记本 → 上传你的文档：\n- 📄 PDF、Google 文档、markdown 文件\n- 🔗 网站、GitHub 仓库\n- 🎥 YouTube 视频\n- 📚 每个笔记本可包含多个来源\n\n分享：**⚙️ 分享 → 任何有链接者 → 复制链接**\n\n### 4. 让 Claude 使用它\n```\n\"我正在使用[库名]开发。这是我的 NotebookLM：[链接]\"\n```\n\n**完成**。Claude 现在会向 NotebookLM 提问所需信息，在编写代码前建立专业知识。\n\n---\n\n## 真实示例\n\n### 无幻觉构建 n8n 工作流\n\n**挑战**：n8n 的 API 是新的 —— Claude 会编造节点名称和函数\n\n**解决方案**：\n1. 下载完整 n8n 文档 → 合并为可管理的块\n2. 上传到 NotebookLM\n3. 
告诉 Claude：\"为我构建 Gmail 垃圾过滤工作流。使用这个 NotebookLM：[链接]\"\n\n**观看 AI 对话：**\n\n```\nClaude → \"n8n 中 Gmail 集成如何工作？\"\nNotebookLM → \"使用 Gmail Trigger 的轮询，或 Gmail 节点的 Get Many...\"\n\nClaude → \"如何解码 base64 邮件正文？\"\nNotebookLM → \"正文在 payload.parts 中为 base64url 编码，使用 Function 节点...\"\n\nClaude → \"如何将 OpenAI 响应解析为 JSON？\"\nNotebookLM → \"设置 responseFormat 为 json，在 IF 节点中使用 {{ $json.spam }}...\"\n\nClaude → \"API 失败时如何处理错误？\"\nNotebookLM → \"使用 Error Trigger 节点并启用 Continue On Fail...\"\n\nClaude → ✅ \"这是你的完整工作流 JSON...\"\n```\n\n**结果**：首次尝试即获得完美工作流。无需调试编造的 API。\n\n---\n\n## 核心功能\n\n### **零幻觉**\nNotebookLM 若在你的文档中找不到信息，会拒绝回答。不会编造 API。\n\n### **自主研究**\nClaude 会自动提出后续问题，在编码前构建完整理解。\n\n### **智能库管理**\n保存带标签和描述的 NotebookLM 链接。Claude 会根据任务自动选择合适的笔记本。\n```\n\"Add [link] to library tagged 'frontend, react, components'\"\n```\n\n### **深度迭代研究**\n- Claude 自动提出后续问题以构建完整理解\n- 每个答案会触发更深入的问题，直到 Claude 掌握所有细节\n- 示例：针对 n8n 工作流，Claude 会连续提问 Gmail 集成、错误处理和数据转换等问题\n\n### **跨工具共享**\n一次配置，多处使用。Claude Code、Codex、Cursor 均共享同一库。\n\n### **深度清理工具**\n随时重置系统。扫描整个系统中的 NotebookLM 数据并提供分类预览。\n\n---\n\n## 工具配置\n\n通过仅加载所需工具减少 token 使用量。每个工具都会消耗上下文 token —— 工具越少，响应越快成本越低。\n\n### 可用配置\n\n| 配置 | 工具数 | 使用场景 |\n|---------|-------|----------|\n| **minimal** | 5 | 仅查询: `ask_question`, `get_health`, `list_notebooks`, `select_notebook`, `get_notebook` |\n| **standard** | 10 | + 库管理: `setup_auth`, `list_sessions`, `add_notebook`, `update_notebook`, `search_notebooks` |\n| **full** | 16 | 所有工具包括 `cleanup_data`, `re_auth`, `remove_notebook`, `reset_session`, `close_session`, `get_library_stats` |\n\n### 通过 CLI 配置\n\n```bash\n# 查看当前设置\nnpx notebooklm-mcp config get\n\n# 设置配置\nnpx notebooklm-mcp config set profile minimal\nnpx notebooklm-mcp config set profile standard\nnpx notebooklm-mcp config set profile full\n\n# 禁用特定工具（逗号分隔）\nnpx notebooklm-mcp config set disabled-tools \"cleanup_data,re_auth\"\n\n# 重置为默认值\nnpx notebooklm-mcp config reset\n```\n\n### 通过环境变量配置\n\n```bash\n# 设置配置\nexport 
NOTEBOOKLM_PROFILE=minimal\n\n# 禁用特定工具\nexport NOTEBOOKLM_DISABLED_TOOLS=\"cleanup_data,re_auth,remove_notebook\"\n```\n\n设置保存在 `~\u002F.config\u002Fnotebooklm-mcp\u002Fsettings.json` 中并跨会话保留。环境变量会覆盖文件设置。\n\n---\n\n## 架构\n\n```mermaid\ngraph LR\n    A[Your Task] --> B[Claude\u002FCodex]\n    B --> C[MCP Server]\n    C --> D[Chrome Automation]\n    D --> E[NotebookLM]\n    E --> F[Gemini 2.5]\n    F --> G[Your Docs]\n    G --> F\n    F --> E\n    E --> D\n    D --> C\n    C --> B\n    B --> H[Accurate Code]\n```\n\n---\n\n## 常用命令\n\n| 意图 | 说 | 结果 |\n|--------|-----|--------|\n| 认证 | *\"打开 NotebookLM 认证设置\"* 或 *\"登录 NotebookLM\"* | 打开 Chrome 登录页面 |\n| 添加笔记本 | *\"添加 [link] 到库\"* | 保存带元数据的笔记本 |\n| 列出笔记本 | *\"显示我们的笔记本\"* | 列出所有已保存笔记本 |\n| 研究优先 | *\"在编码前用 NotebookLM 研究这个\"* | 多问题会话 |\n| 选择笔记本 | *\"使用 React 笔记本\"* | 设置活动笔记本 |\n| 更新笔记本 | *\"更新笔记本标签\"* | 修改元数据 |\n| 移除笔记本 | *\"从库中移除 [notebook]\"* | 从库中删除 |\n| 查看浏览器 | *\"显示浏览器\"* | 实时观看 NotebookLM 聊天 |\n| 修复认证 | *\"修复 NotebookLM 认证\"* | 清除并重新认证 |\n| 切换账号 | *\"用其他 Google 账号重新认证\"* | 更换账号 |\n| 清理重启 | *\"运行 NotebookLM 清理\"* | 移除所有数据重置 |\n| 保留库 | *\"清理但保留我的库\"* | 保留笔记本 |\n| 删除所有数据 | *\"删除所有 NotebookLM 数据\"* | 完全移除 |\n\n---\n\n## 与替代方案对比\n\n### 与本地下载文档对比\n- **你**: 下载文档 → Claude: \"搜索这些文件\"\n- **问题**: Claude 阅读数千文件 → 巨大 token 消耗，常遗漏关联\n- **NotebookLM**: 由 Gemini 预索引，跨所有文档语义理解\n\n### 与网络搜索对比\n- **你**: \"在线研究 X\"\n- **问题**: 过时信息、幻觉示例、不可靠来源\n- **NotebookLM**: 仅使用你的可信文档，始终最新且带引用\n\n### 与本地 RAG 设置对比\n- **你**: 设置嵌入、向量数据库、分块策略、检索流水线\n- **问题**: 数小时配置，调优检索，仍会在空白处\"创造\"\n- **NotebookLM**: 上传文档 → 完成。Google 处理一切。\n\n---\n\n## 常见问题\n\n**真的完全不会幻觉吗？**\n是的。NotebookLM 仅回答来自上传源的内容。如果不知道答案，会明确说明。\n\n**有速率限制吗？**\n免费版每个 Google 账号有每日查询限制。支持快速切换账号以持续研究。\n\n**安全性如何？**\nChrome 本地运行。你的凭证不会离开本机。如有顾虑，可使用专用 Google 账号。\n\n**能查看操作过程吗？**\n当然！说 *\"显示浏览器\"* 即可观看实时 NotebookLM 对话。\n\n**相比 Claude 内置知识有何优势？**\n你的文档始终最新。无训练截止日期。无幻觉。特别适合新库、内部 API 或快速迭代项目。\n\n---\n\n## 高级用法\n\n- 📖 [**使用指南**](.\u002Fdocs\u002Fusage-guide.md) — 模式、工作流、技巧\n- 🛠️ 
[**Tool Reference**](./docs/tools.md) — Complete MCP API
- 🔧 [**Configuration**](./docs/configuration.md) — Environment variables
- 🐛 [**Troubleshooting**](./docs/troubleshooting.md) — Common issues

---

## Core Value

**Without NotebookLM MCP**: write code → hit errors → debug hallucinated APIs → repeat

**With NotebookLM MCP**: Claude researches first → writes correct code → ships faster

Stop debugging hallucinations. Start shipping accurate code.

```bash
# Get started in 30 seconds
claude mcp add notebooklm npx notebooklm-mcp@latest
```

## Disclaimer

This tool automates browser interaction with NotebookLM to speed up your workflow. Please note the following:

**On browser automation:**
Although humanization features (realistic typing speed, natural delays, mouse movements) make the automation behave more like a real user, there is no guarantee that Google will not detect or flag automated use. Use a dedicated Google account for automation rather than your main one — think of it like web scraping: no known risk, but safety first!

**On CLI tools and AI agents:**
AI-driven CLI tools such as Claude Code and Codex are powerful, but they make mistakes. Use them carefully and stay alert:
- Always review changes before committing or deploying
- Prefer testing in a safe environment
- Back up important work regularly
- Remember: AI agents are assistants, not infallible oracles

I built this tool out of frustration with the endless copy-pasting between NotebookLM and my editor. I'm sharing it in the hope that it helps others, but I accept no liability for any problems, data loss, or account issues. Use at your own risk.

If you run into problems or have questions, [open an issue](https://github.com/PleasePrompto/notebooklm-mcp/issues) on GitHub. I'm happy to help troubleshoot!

---

## Contributing

Found a bug? Have a feature idea? [Open an issue](https://github.com/PleasePrompto/notebooklm-mcp/issues) or send a PR!

## License

MIT — free to use in your own projects.

---

<div align="center">

Born from hallucinated APIs, powered by Google NotebookLM

⭐ [Star it on GitHub](https://github.com/PleasePrompto/notebooklm-mcp) to support the project — and save yourself the debugging time!

</div>

---

# NotebookLM MCP Quick Start Guide

## Prerequisites
- **System**: any environment that runs Node.js (Windows/macOS/Linux)
- **Dependencies**:
  - [Node.js](https://nodejs.org) 14+ (with npm)
  - Google Chrome
  - Network access (for NotebookLM authentication and document uploads)

---

## Installation

### 1. Install the MCP server
```bash
npm install -g notebooklm-mcp
# Or use a China npm mirror for faster installs (optional)
npm install -g cnpm --registry=https://registry.npmmirror.com
cnpm install -g notebooklm-mcp
```

### 2. Configure your client
**Claude Code**
```bash
claude mcp add notebooklm npx notebooklm-mcp@latest
```

**Codex**
```bash
codex mcp add notebooklm -- npx notebooklm-mcp@latest
```

**Cursor**
Add to `~/.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "notebooklm": {
      "command": "npx",
      "args": ["notebooklm-mcp@latest"]
    }
  }
}
```

**VS Code**
```bash
code --add-mcp '{"name":"notebooklm","command":"npx","args":["notebooklm-mcp@latest"]}'
```

---

## Basic Usage

### 1. Start the server
```bash
npx notebooklm-mcp
```

### 2. Authenticate
In Claude/Codex, say:
```
"Log me in to NotebookLM"
```
A Chrome window opens automatically; log in to NotebookLM with your Google account.

### 3. Create a knowledge base
1. Go to [NotebookLM](https://notebooklm.google.com)
2. Create a notebook and upload documents:
   - PDFs / Markdown / Google Docs
   - GitHub repository links
   - Website / YouTube video links
3. Get the share link

### 4. Use the knowledge base
Example prompt in your client:
```
"I'm building with React. Here's my NotebookLM: [your notebook link]"
```
The AI then pulls accurate answers through NotebookLM and generates code from them.

---

## Use Case

An e-commerce development team integrating a third-party payment API had to consult several technical documents at once. Developer Li, implementing asynchronous notification handling for the payment interface, needed to cross-reference the payment gateway, risk-control system, and logging framework docs simultaneously.

### Without notebooklm-mcp
- Frequent AI-agent searches over local docs burned hundreds of tokens per query
- Key interface parameters were missed across documents and had to be cross-checked by hand
- The agent repeatedly generated nonexistent callback method signatures, delaying debugging
- Every requirements change meant reloading the document set; rebuilding the knowledge index took 10+ minutes

### With notebooklm-mcp
- Querying the NotebookLM knowledge base directly from the CLI cut per-query token usage by 80%
- The system automatically linked payment-protocol and risk-control rules, pinpointing edge cases in the async flow
- Every returned answer carried source citations, preventing fabricated methods
- Document updates synced to the knowledge base instantly; no index rebuild needed for the latest information

Core value: a zero-hallucination knowledge base lets the AI agent draw on structured documentation directly instead of coarse text retrieval — a step change in development efficiency.

---

## Project Information

- **Author**: PleasePrompto (Please Prompto!) — "AI tooling & integration | Claude, NotebookLM, MCP | RAG systems & agentic architectures" — Germany
- **GitHub**: https://github.com/PleasePrompto
- **Languages**: TypeScript 94.5%, JavaScript 5.5%
- **Stars**: 1777 · **Forks**: 228 · **License**: MIT
- **Platforms**: Linux, macOS, Windows · **Last commit**: 2026-04-05
- **Notes**: requires a Node.js environment; the first run authenticates through a Chrome browser window; exact system compatibility is not specified.

---

## FAQ

**Q: How do I fix the "Server does not support completions" error on startup?**
A: The server did not declare the completions capability. Fixes: 1. update to v1.2.1 (which resolves it); 2. patch manually: open `resource-handlers.js` and comment out the `server.setRequestHandler(CompleteRequestSchema, ...` block at line 174.
Source: https://github.com/PleasePrompto/notebooklm-mcp/issues/6

**Q: How do I apply that patch by hand?**
A: In VS Code, open `C:\Users\%USERNAME%\AppData\Roaming\npm\node_modules\notebooklm-mcp\dist\resources\resource-handlers.js`, go to line 174, and wrap the entire `server.setRequestHandler(CompleteRequestSchema, ...` block in `/*` and `*/`.
Source: https://github.com/PleasePrompto/notebooklm-mcp/issues/6

**Q: Why can't it connect to Claude Code?**
A: Same missing-completions issue. Update to v1.2.1, or apply the manual patch above. The maintainer fixed this via PR #10.
Source: https://github.com/PleasePrompto/notebooklm-mcp/issues/9

**Q: How do I use NotebookLM MCP with LibreChat?**
A: Current versions may have compatibility issues. Wait for the v1.2.1 update, or try adding `completions: {}` to the server configuration. The maintainer has said issues like this get priority.
Source: https://github.com/PleasePrompto/notebooklm-mcp/issues/3

**Q: What about "Fatal error starting server" on startup?**
A: Also caused by the missing completions capability. Make sure you are on the latest version, or patch `resource-handlers.js` as a stopgap. The maintainer confirmed the fix ships in v1.2.1.
Source: https://github.com/PleasePrompto/notebooklm-mcp/issues/5

**Q: How do I verify that NotebookLM MCP initialized correctly?**
A: Check the startup log for `✅ [time] 💾 Library saved (0 notebooks)` and `✅ [time] 📚 NotebookLibrary initialized`; if both appear, initialization succeeded.
Source: https://github.com/PleasePrompto/notebooklm-mcp/issues/3

---

## Releases

### v1.2.1 — 2025-12-27

#### Bug Fix

- Added missing `completions: {}` capability to server configuration (PR #10 by @joaocarlos)

This fixes the startup crash on Claude Desktop and other MCP clients:
```
Error: Server does not support completions (required for completion/complete)
```

**Closes:** #5, #6, #9, #12

Thanks to @joaocarlos for the fix!

### v1.2.0 — 2025-11-21

#### What's New

**Tool Profiles System** — reduce token usage by loading only the tools you need:
- **minimal** (5 tools) — query-only operations
- **standard** (10 tools) — + library management
- **full** (16 tools) — all tools

**CLI Configuration**
```bash
npx notebooklm-mcp config get
npx notebooklm-mcp config set profile minimal
npx notebooklm-mcp config set disabled-tools "cleanup_data,re_auth"
npx notebooklm-mcp config reset
```

Environment variables: `NOTEBOOKLM_PROFILE`, `NOTEBOOKLM_DISABLED_TOOLS`

**Bug Fixes**
- **LibreChat compatibility** — fixed the "Server does not support completions" error (#3)
- **Thinking detection** — fixed incomplete answers showing placeholder text

**Improvements**
- Modularized codebase for better maintainability
- Removed unreliable text-based placeholder detection

**Full Changelog:** see CHANGELOG.md

### v1.1.2 — 2025-10-19

NotebookLM MCP Server — first stable release. Let your CLI agents (Claude Code, Codex, Cursor) chat directly with NotebookLM for zero-hallucination answers based on your own notebooks.

This is the first stable release after thorough testing and feature refinement. The server is production-ready and actively used in real-world projects.

#### Installation

**Claude Code:**
```bash
claude mcp add notebooklm npx notebooklm-mcp@latest
```

**Codex:**
```bash
codex mcp add notebooklm -- npx notebooklm-mcp@latest
```

**Cursor:**
Add to `~/.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "notebooklm": {
      "command": "npx",
      "args": ["-y", "notebooklm-mcp@latest"]
    }
  }
}
```

See the [README](https://github.com/PleasePrompto/notebooklm-mcp#installation) for more installation options (Gemini, amp, VS Code).

#### What This Solves

When you tell Claude or Cursor to "search through my documentation", here's what happens:
- **Massive token consumption**: reading multiple files repeatedly
- **Inaccurate retrieval**: keyword search misses context and connections
- **Hallucinations**: it invents plausible-sounding APIs when it can't find something
- **Expensive and slow**: each question requires re-reading files

**Solution**: let your local agents chat directly with NotebookLM — Google's zero-hallucination knowledge base powered by Gemini 2.5 that provides intelligent, synthesized answers from your docs.

```
Your Task → Local Agent asks NotebookLM → Gemini synthesizes answer → Agent writes correct code
```

No more manual copy-paste between NotebookLM and your editor. Your agent asks NotebookLM directly and gets answers straight back in the CLI.

#### Core Features

- **Zero hallucinations** — NotebookLM refuses to answer if information isn't in your docs. No invented APIs.
- **Autonomous research** — Claude asks follow-up questions automatically, building complete understanding before coding.
- **Smart library management** — save NotebookLM links with tags and descriptions; Claude auto-selects the right notebook for your task.
- **Deep cleanup tool** — fresh start anytime; scans the entire system for NotebookLM data with a categorized preview.
- **Persistent sessions** — browser sessions stay alive across multiple queries for faster responses.
- **Human-like automation** — realistic typing speeds, natural delays, and mouse movements to avoid detection.

#### What's New in v1.1.2

**Version History Overview**

v1.1.x — feature additions:
- Added reference to the companion [Claude Code Skill](https://github.com/PleasePrompto/notebooklm-skill)
- Fixed binary execution permissions across platforms
- Expanded installation guides (Cursor, Gemini, amp, VS Code)
- Major library management improvements

v1.0.x — stability refinements:
- Initial public release with core functionality
- Multiple bug fixes and documentation improvements
- Authentication flow hardening
- Browser automation reliability improvements

**Full Changelog**

- **v1.1.2** (latest): add Claude Code Skill reference to README; include package-lock.json in the repository; clean up gitignore for CLAUDE.md
- **v1.1.1**: fix binary permissions across npm installations; add postbuild script for executable setup
- **v1.1.0**: major feature update with enhanced library management; improved session handling
- **v1.0.5**: add installation guides for multiple clients (Cursor, Gemini, amp, VS Code)
- **v1.0.1 – v1.0.4**: documentation improvements; authentication flow fixes; browser automation stability enhancements
- **v1.0.0**: initial public release; core MCP server functionality; browser automation with Patchright; smart library management; deep cleanup tool

#### Quick Start Guide

1. **Install the MCP server** — see the Installation section above for your specific client.
2. **Authenticate (one-time)** — say in your chat:
   ```
   "Log me in to NotebookLM"
   ```
   A Chrome window opens → log in with Google → done.
3. **Create your knowledge base** — go to [notebooklm.google.com](https://notebooklm.google.com): create a notebook; upload your docs (PDFs, Google Docs, markdown, websites, YouTube videos); share via **⚙️ Share → Anyone with link → Copy**.
4. **Let your agent use it**:
   ```
   "I'm building with [library]. Here's my NotebookLM: [link]"
   ```
   That's it. Your agent now asks NotebookLM whatever it needs, building expertise before writing code.

#### Real-World Example

**Building an n8n workflow without hallucinations.**

**Challenge**: n8n's API is new — Claude hallucinates node names and functions.

**Solution**:
1. Downloaded the complete n8n documentation → merged into manageable chunks
2. Uploaded to NotebookLM
3. Told Claude: "Build me a Gmail spam filter workflow. Use this NotebookLM: [link]"

**Watch the AI-to-AI conversation:**

```
Claude → "How does Gmail integration work in n8n?"
NotebookLM → "Use Gmail Trigger with polling, or Gmail node with Get Many..."

Claude → "How to decode base64 email body?"
NotebookLM →
```