[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-watercrawl--WaterCrawl":3,"tool-watercrawl--WaterCrawl":65},[4,18,28,36,44,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":24,"last_commit_at":25,"category_tags":26,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",158594,2,"2026-04-16T23:34:05",[14,13,27],"语言模型",{"id":29,"name":30,"github_repo":31,"description_zh":32,"stars":33,"difficulty_score":10,"last_commit_at":34,"category_tags":35,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 
的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[27,15,13,14],{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":24,"last_commit_at":42,"category_tags":43,"status":17},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[14,27],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":24,"last_commit_at":50,"category_tags":51,"status":17},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 
将是理想的起点。",85092,"2026-04-10T11:13:16",[15,16,52,53,13,54,27,14,55],"视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":62,"last_commit_at":63,"category_tags":64,"status":17},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[27,16,54],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":68,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":120,"forks":121,"last_commit_at":122,"license":123,"difficulty_score":10,"env_os":124,"env_gpu":124,"env_ram":124,"env_deps":125,"category_tags":133,"github_topics":134,"view_count":24,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":143,"updated_at":144,"faqs":145,"releases":166},8328,"watercrawl\u002FWaterCrawl","WaterCrawl","Transform Web Content into LLM-Ready Data","WaterCrawl 是一款强大的开源网络爬虫应用，专为将复杂的网页内容转化为大语言模型（LLM）可直接使用的高质量数据而设计。它基于 Python、Django、Scrapy 和 Celery 构建，能够有效解决从海量互联网信息中精准提取、清洗并结构化数据的技术难题，让用户无需手动处理杂乱的 HTML 代码即可获取纯净的训练素材。\n\n这款工具特别适合开发者、AI 研究人员以及需要构建私有知识库的企业团队使用。无论是希望为 AI 模型准备特定领域数据集的研究者，还是需要在自动化工作流中集成实时网络数据的工程师，WaterCrawl 都能提供灵活的支持。其核心技术亮点包括高度可定制的爬取策略（深度、速度及目标内容控制）、支持多语言与国家定位的搜索引擎，以及基于服务器发送事件（SSE）的异步实时进度监控。此外，WaterCrawl 提供了完善的 REST API 和 OpenAPI 文档，并能无缝集成 
Dify、N8N 等主流 AI 自动化平台。作为支持自托管的开源项目，它在保障数据隐私与安全的同时，赋予了用户对数据处理流程的完全掌控权，是连接互联网公开信息与本地 AI 应用的理想桥梁。","![Water Crawl](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwatercrawl_WaterCrawl_readme_2e48bdca51cb.png)\n\n\u003Cdiv align=\"center\">\n\n[![WaterCrawl](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProduct-F04438)](https:\u002F\u002Fwatercrawl.dev)\n[![Pricing](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Ffree-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff)](https:\u002F\u002Fwatercrawl.dev\u002Fpricing)\n[![GitHub release (latest by date)](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fwatercrawl\u002Fwatercrawl)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Freleases)\n[![GitHub Workflow Status](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fwatercrawl\u002Fwatercrawl\u002Flint-pr.yml?label=tests)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Factions)\n[![Docker Image Version](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fv\u002Fwatercrawl\u002Fwatercrawl?label=docker)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fwatercrawl\u002Fwatercrawl)\n[![GitHub stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fwatercrawl\u002Fwatercrawl)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Fstargazers)\n[![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fwatercrawl\u002Fwatercrawl)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Fissues)\n[![Python Version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.13-blue)](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002F)\n\n\u003C\u002Fdiv>\n\n\n🕷️ WaterCrawl is a powerful web application that uses Python, Django, Scrapy, and Celery to crawl web pages and extract relevant data.\n\n## 🚀 Quick Start\n\n1. 🐳 [Quick start](#-quick-start)\n2. 
💻 [Development **(For Contributing)**](.\u002FCONTRIBUTING.md)\n\n### 🐳 Quick start\n\nTo build and run WaterCrawl on Docker locally, please follow these steps:\n\n1. Clone the repository:\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl.git\n    cd watercrawl\n    ```\n\n2. Build and run the Docker containers:\n    ```bash\n    cd docker\n    cp .env.example .env\n    docker compose up -d\n    ```\n\n3. Access the application with open [http:\u002F\u002Flocalhost](http:\u002F\u002Flocalhost)\n\n> **⚠️ IMPORTANT**: If you're deploying on a domain or IP address other than localhost, you MUST update the MinIO configuration in your .env file:\n> ```bash\n> # Change this from 'localhost' to your actual domain or IP\n> MINIO_EXTERNAL_ENDPOINT=your-domain.com\n> \n> # Also update these URLs accordingly\n> MINIO_BROWSER_REDIRECT_URL=http:\u002F\u002Fyour-domain.com\u002Fminio-console\u002F\n> MINIO_SERVER_URL=http:\u002F\u002Fyour-domain.com\u002F\n> ```\n> Failure to update these settings will result in broken file uploads and downloads. For more details, see [DEPLOYMENT.md](.\u002FDEPLOYMENT.md).\n\n> **Important:** Before deploying to production, ensure that you update the `.env` file with the appropriate configuration values. Additionally, make sure to set up and configure the database, MinIO, and any other required services. 
for more information, please read the [Deployment Guide](.\u002FDEPLOYMENT.md).\n\n\n### 💻 Development (For Contributing)\n\nFor local development and contribution, please follow our [Contributing Guide](.\u002FCONTRIBUTING.md) 🤝\n\n\u003Cdiv align=\"\">\n   \u003Ca href=\"https:\u002F\u002Fwatercrawl.dev\u002Fjobs\">\n      \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🚀_We're_Hiring!-Join_Our_Team-F59E0B?style=for-the-badge\" alt=\"We're Hiring\" \u002F>\n   \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n## ✨ Features\n\n- **🕸️ Advanced Web Crawling & Scraping** - Crawl websites with highly customizable options for depth, speed, and targeting specific content\n- **🔍 Powerful Search Engine** - Find relevant content across the web with multiple search depths (basic, advanced, ultimate)\n- **🌐 Multi-language Support** - Search and crawl content in different languages with country-specific targeting\n- **⚡ Asynchronous Processing** - Monitor real-time progress of crawls and searches via Server-Sent Events (SSE)\n- **🔄 REST API with OpenAPI** - Comprehensive API with detailed documentation and client libraries\n- **🔌 Rich Ecosystem** - Integrations with Dify, N8N, and other AI\u002Fautomation platforms\n- **🏠 Self-hosted & Open Source** - Full control over your data with easy deployment options\n- **📊 Advanced Results Handling** - Download and process search results with customizable parameters\n\nCheck our [API Overview](https:\u002F\u002Fdocs.watercrawl.dev\u002Fintro) to learn more about these features.\n\n## 🛠️ Client SDKs\n\n- ✅ [**Python Client**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fpython) - Full-featured SDK with support for all API endpoints\n- ✅ [**Node.js Client**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fnodejs) - Complete JavaScript\u002FTypeScript integration\n- ✅ [**Go Client**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fgo) - Full-featured SDK with support for all API endpoints\n- ✅ [**PHP 
Client**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fphp) - Full-featured SDK with support for all API endpoints\n- 🔜 [**Rust Client**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Frust) - Coming soon\n\n## 🔌 Integrations\n\n- ✅ [Dify Plugin](https:\u002F\u002Fmarketplace.dify.ai\u002Fplugins\u002Fwatercrawl\u002Fwatercrawl) ([source code](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl-dify-plugin))\n- ✅ [N8N workflow node](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@watercrawl\u002Fn8n-nodes-watercrawl) ([source code](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fn8n-nodes-watercrawl))\n- ✅ [Dify Knowledge Base](https:\u002F\u002Fdify.ai\u002F)\n- 🔄 Langflow (Pull Request - Not Merged yet)\n- 🔜 Flowise (Coming soon)\n\n## 🔧 Plugins\n\n- ✅ WaterCrawl plugin\n- ✅ OpenAI Plugin\n\n## ⭐ Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwatercrawl_WaterCrawl_readme_60326c27780f.png)](https:\u002F\u002Fstar-history.com\u002F#watercrawl\u002Fwatercrawl&Date)\n\n## 🔒 Security Disclosure\n\n⚠️ Please avoid posting security issues on GitHub. 
Instead, send your questions to support@watercrawl.dev and we will provide you with a more detailed answer.\n\n## 📄 License\n\nThis repository is available under the [WaterCrawl License](LICENSE), which is essentially MIT with a few additional restrictions.\n\n---\n\u003Cdiv align=\"center\">\nMade with ❤️ by the WaterCrawl Team\n\u003C\u002Fdiv>","![Water Crawl](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwatercrawl_WaterCrawl_readme_2e48bdca51cb.png)\n\n\u003Cdiv align=\"center\">\n\n[![WaterCrawl](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProduct-F04438)](https:\u002F\u002Fwatercrawl.dev)\n[![定价](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Ffree-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff)](https:\u002F\u002Fwatercrawl.dev\u002Fpricing)\n[![GitHub 发布（按日期最新）](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fwatercrawl\u002Fwatercrawl)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Freleases)\n[![GitHub 工作流状态](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fwatercrawl\u002Fwatercrawl\u002Flint-pr.yml?label=tests)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Factions)\n[![Docker 镜像版本](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fv\u002Fwatercrawl\u002Fwatercrawl?label=docker)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fwatercrawl\u002Fwatercrawl)\n[![GitHub 星标数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fwatercrawl\u002Fwatercrawl)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Fstargazers)\n[![GitHub 问题数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fwatercrawl\u002Fwatercrawl)](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl\u002Fissues)\n[![Python 版本](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.13-blue)](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002F)\n\n\u003C\u002Fdiv>\n\n\n🕷️ WaterCrawl 是一款功能强大的 Web 应用程序，使用 
Python、Django、Scrapy 和 Celery 来抓取网页并提取相关数据。\n\n## 🚀 快速入门\n\n1. 🐳 [快速入门](#-quick-start)\n2. 💻 [开发 **（用于贡献）**](.\u002FCONTRIBUTING.md)\n\n### 🐳 快速入门\n\n要在本地 Docker 上构建并运行 WaterCrawl，请按照以下步骤操作：\n\n1. 克隆仓库：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl.git\n    cd watercrawl\n    ```\n\n2. 构建并运行 Docker 容器：\n    ```bash\n    cd docker\n    cp .env.example .env\n    docker compose up -d\n    ```\n\n3. 打开 [http:\u002F\u002Flocalhost](http:\u002F\u002Flocalhost) 访问应用\n\n> **⚠️ 重要提示**：如果您是在除 localhost 之外的域名或 IP 地址上部署，务必更新 .env 文件中的 MinIO 配置：\n> ```bash\n> # 将此处的 'localhost' 更改为您的实际域名或 IP\n> MINIO_EXTERNAL_ENDPOINT=your-domain.com\n> \n> # 同时相应地更新这些 URL\n> MINIO_BROWSER_REDIRECT_URL=http:\u002F\u002Fyour-domain.com\u002Fminio-console\u002F\n> MINIO_SERVER_URL=http:\u002F\u002Fyour-domain.com\u002F\n> ```\n> 如果未更新这些设置，文件上传和下载将无法正常工作。有关详细信息，请参阅 [DEPLOYMENT.md](.\u002FDEPLOYMENT.md)。\n\n> **重要提示**：在部署到生产环境之前，请确保更新 `.env` 文件以包含适当的配置值。此外，还需设置并配置数据库、MinIO 以及其他所需的服务。更多信息请参阅 [部署指南](.\u002FDEPLOYMENT.md)。\n\n\n### 💻 开发（用于贡献）\n\n如需进行本地开发和贡献，请遵循我们的 [贡献指南](.\u002FCONTRIBUTING.md) 🤝\n\n\u003Cdiv align=\"\">\n   \u003Ca href=\"https:\u002F\u002Fwatercrawl.dev\u002Fjobs\">\n      \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🚀_We're_Hiring!-Join_Our_Team-F59E0B?style=for-the-badge\" alt=\"我们正在招聘\" \u002F>\n   \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n## ✨ 功能\n\n- **🕸️ 高级网页爬取与抓取** - 可高度自定义深度、速度及目标内容的网站爬取\n- **🔍 强大的搜索引擎** - 通过多种搜索深度（基础、高级、终极）查找全网相关内容\n- **🌐 多语言支持** - 支持不同语言的内容搜索与爬取，并可按国家\u002F地区定向\n- **⚡ 异步处理** - 通过服务器发送事件 (SSE) 实时监控爬取和搜索进度\n- **🔄 带 OpenAPI 的 REST API** - 功能全面的 API，附有详细文档和客户端库\n- **🔌 丰富的生态系统** - 可与 Dify、N8N 等 AI\u002F自动化平台集成\n- **🏠 自托管且开源** - 完全掌控您的数据，部署方式灵活\n- **📊 高级结果处理** - 可按自定义参数下载和处理搜索结果\n\n请查看我们的 [API 概览](https:\u002F\u002Fdocs.watercrawl.dev\u002Fintro)，了解更多功能详情。\n\n## 🛠️ 客户端 SDK\n\n- ✅ [**Python 客户端**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fpython) - 功能齐全的 SDK，支持所有 API 端点\n- ✅ [**Node.js 
客户端**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fnodejs) - 完整的 JavaScript\u002FTypeScript 集成\n- ✅ [**Go 客户端**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fgo) - 功能齐全的 SDK，支持所有 API 端点\n- ✅ [**PHP 客户端**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Fphp) - 功能齐全的 SDK，支持所有 API 端点\n- 🔜 [**Rust 客户端**](https:\u002F\u002Fdocs.watercrawl.dev\u002Fclients\u002Frust) - 即将推出\n\n## 🔌 集成\n\n- ✅ [Dify 插件](https:\u002F\u002Fmarketplace.dify.ai\u002Fplugins\u002Fwatercrawl\u002Fwatercrawl) ([源代码](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fwatercrawl-dify-plugin))\n- ✅ [N8N 工作流节点](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@watercrawl\u002Fn8n-nodes-watercrawl) ([源代码](https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002Fn8n-nodes-watercrawl))\n- ✅ [Dify 知识库](https:\u002F\u002Fdify.ai\u002F)\n- 🔄 Langflow（尚未合并的拉取请求）\n- 🔜 Flowise（即将推出）\n\n## 🔧 插件\n\n- ✅ WaterCrawl 插件\n- ✅ OpenAI 插件\n\n## ⭐ 星标历史\n\n[![星标历史图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwatercrawl_WaterCrawl_readme_60326c27780f.png)](https:\u002F\u002Fstar-history.com\u002F#watercrawl\u002Fwatercrawl&Date)\n\n## 🔒 安全披露\n\n⚠️ 请勿在 GitHub 上发布安全问题。如有疑问，请发送至 support@watercrawl.dev，我们将为您提供更详细的解答。\n\n## 📄 许可证\n\n本仓库采用 [WaterCrawl 许可证](LICENSE) 开放，该许可证本质上是 MIT 许可证，但附加了一些限制。\n\n---\n\u003Cdiv align=\"center\">\n由 WaterCrawl 团队用心打造\n\u003C\u002Fdiv>","# WaterCrawl 快速上手指南\n\nWaterCrawl 是一个基于 Python、Django、Scrapy 和 Celery 构建的强大网络爬虫应用，支持高度自定义的网页抓取、数据提取及异步处理。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux、macOS 或 Windows（需安装 Docker Desktop）\n*   **核心依赖**：\n    *   [Docker](https:\u002F\u002Fwww.docker.com\u002F) (推荐版本 20.10+)\n    *   [Docker Compose](https:\u002F\u002Fdocs.docker.com\u002Fcompose\u002F) (通常随 Docker 桌面版自带)\n    *   Git\n*   **网络环境**：由于项目托管于 GitHub 且镜像拉取自 Docker Hub，国内用户建议配置 Docker 镜像加速器以确保拉取速度。\n\n## 安装步骤\n\nWaterCrawl 推荐使用 Docker 进行部署，以下是本地快速启动的具体步骤：\n\n1.  
**克隆项目仓库**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl.git\n    cd WaterCrawl\n    ```\n\n2.  **配置环境变量**\n    进入 docker 目录并复制示例配置文件：\n    ```bash\n    cd docker\n    cp .env.example .env\n    ```\n\n3.  **启动服务**\n    使用 Docker Compose 后台构建并运行所有容器：\n    ```bash\n    docker compose up -d\n    ```\n\n> **⚠️ 重要提示（非本地部署）**\n> 如果您计划在域名或非 `localhost` 的 IP 地址上部署，**必须**修改 `.env` 文件中的 MinIO 配置，否则文件上传和下载功能将失效：\n> ```bash\n> # 将 'localhost' 替换为您的实际域名或 IP\n> MINIO_EXTERNAL_ENDPOINT=your-domain.com\n> \n> # 同步更新以下 URL\n> MINIO_BROWSER_REDIRECT_URL=http:\u002F\u002Fyour-domain.com\u002Fminio-console\u002F\n> MINIO_SERVER_URL=http:\u002F\u002Fyour-domain.com\u002F\n> ```\n\n## 基本使用\n\n服务启动成功后，您可以通过浏览器访问主界面。\n\n1.  **访问应用**\n    打开浏览器访问：\n    [http:\u002F\u002Flocalhost](http:\u002F\u002Flocalhost)\n\n2.  **开始抓取**\n    *   在 Web 界面中配置目标 URL、爬取深度、语言偏好等参数。\n    *   提交任务后，系统将通过 Server-Sent Events (SSE) 实时展示爬取进度。\n    *   任务完成后，可在线预览或直接下载结构化数据结果。\n\n3.  **API 调用（可选）**\n    WaterCrawl 提供完整的 REST API 及多语言 SDK（Python, Node.js, Go, PHP）。您可以参考 [API 文档](https:\u002F\u002Fdocs.watercrawl.dev\u002Fintro) 集成到您的自动化工作流或 AI 应用中。","某电商数据团队需要每日监控全球竞品网站的动态价格、新品上架信息及用户评论，以训练内部的大语言模型进行市场趋势预测。\n\n### 没有 WaterCrawl 时\n- **数据清洗成本极高**：抓取到的网页包含大量导航栏、广告脚本和 CSS 样式，开发人员需编写复杂的正则表达式手动清洗，耗时且容易出错。\n- **大模型处理效率低**：未经优化的原始 HTML 令牌数（Tokens）过多，直接喂给 LLM 不仅成本高昂，还常因上下文噪音导致分析结果不准确。\n- **多语言支持困难**：面对不同国家的竞品站点，团队需针对每种语言单独配置爬虫规则，难以实现统一的多语言内容提取。\n- **实时性差**：缺乏异步监控机制，无法实时掌握爬取进度，一旦任务失败往往要等到第二天才能发现并重新执行。\n\n### 使用 WaterCrawl 后\n- **一键生成纯净数据**：WaterCrawl 自动剥离无关元素，直接将网页转化为结构清晰、LLM 就绪的 Markdown 或 JSON 格式，清洗工作减少 90%。\n- **显著提升模型效能**：输入数据精简且聚焦核心内容，大幅降低了 Token 消耗，同时让大模型的市场分析报告准确率明显提升。\n- **全球化轻松覆盖**：利用其内置的多语言与区域定向功能，团队可一次性配置任务，自动抓取并处理英、日、德等多国站点内容。\n- **全流程可视可控**：通过 SSE 技术实时监控爬取状态，结合 REST API 轻松集成到现有的 N8N 自动化流程中，实现数据更新的即时响应。\n\nWaterCrawl 
将繁琐的非结构化网页抓取转变为高效的标准化数据流水线，让团队能专注于高价值的模型训练而非数据预处理。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwatercrawl_WaterCrawl_2e48bdca.png","watercrawl","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fwatercrawl_3747c7fe.png","WaterCrawl is a sophisticated web application designed to bridge the gap between web content and Large Language Models (LLMs)",null,"info@watercrawl.io","WaterCrawl_dev","https:\u002F\u002Fwatercrawl.dev","https:\u002F\u002Fgithub.com\u002Fwatercrawl",[85,89,93,97,101,105,109,113,116],{"name":86,"color":87,"percentage":88},"TypeScript","#3178c6",46.5,{"name":90,"color":91,"percentage":92},"Python","#3572A5",37.3,{"name":94,"color":95,"percentage":96},"Jupyter Notebook","#DA5B0B",11.7,{"name":98,"color":99,"percentage":100},"HTML","#e34c26",3.2,{"name":102,"color":103,"percentage":104},"Shell","#89e051",0.5,{"name":106,"color":107,"percentage":108},"Makefile","#427819",0.4,{"name":110,"color":111,"percentage":112},"JavaScript","#f1e05a",0.2,{"name":114,"color":115,"percentage":112},"Dockerfile","#384d54",{"name":117,"color":118,"percentage":119},"CSS","#663399",0.1,1818,220,"2026-04-15T22:38:54","NOASSERTION","未说明",{"notes":126,"python":127,"dependencies":128},"该工具主要基于 Docker 部署（推荐使用 docker compose）。若非本地 localhost 部署，必须配置 MinIO 的外部端点域名或 IP 地址，否则文件上传下载功能将失效。生产环境需手动配置数据库、MinIO 及相关服务。","3.13",[129,130,131,132],"Django","Scrapy","Celery","MinIO",[27,16],[135,136,137,138,139,140,141,142],"crawl4ai","crawler","crawling-python","html2markdown","llm-crawler","llm-scraper","scraper","aicrawler","2026-03-27T02:49:30.150509","2026-04-17T09:54:54.859844",[146,151,156,161],{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},37260,"生成的 PDF 文件内容被截断或缺失部分网页元素，如何解决？","这通常是因为动态内容加载时间不足或 CSS 选择器配置不当导致的。请尝试以下解决方案：\n1. 增加等待时间：在 `page_options` 配置中增大 `wait_time` 的值，确保动态内容在生成 PDF 前完全加载。\n2. 调整内容筛选：检查并修改 `include_tags` 和 `exclude_tags` 中的 CSS 选择器（如 article, main, nav 等），确保所需内容未被排除。\n3. 
检查主内容选项：确认 `only_main_content` 设置，如果设为 True 可能会限制只提取主要区域，导致侧边栏或页脚等内容丢失。","https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fissues\u002F59",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},37261,"是否支持本地大模型（如 Ollama）？","目前 WaterCrawl 正在规划更广泛的模型支持（包括 Ollama），但这将在 v1.0.0 版本发布后以新的架构形式推出，而非通过当前的插件系统。维护者建议用户关注后续版本更新，届时将原生支持更多本地和远程模型，无需依赖第三方 SaaS 订阅。","https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fissues\u002F26",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},37262,"是否计划支持结构化数据提取（如提取商品价格、库存等特定字段）？","是的，团队已计划在未来版本中实现上下文相关的结构化结果提取功能。用户可以定义结构化 Schema 来指定需要提取的字段（如价格、标题、库存等）。该功能预计在未来一个月内上线，同时还将包含代理（Agentic）功能和知识库检索支持。","https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fissues\u002F99",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},37263,"生成的 Markdown 文件中图片相对路径导致链接失效，能否转换为绝对路径？","维护者已确认该功能已被列入计划，将在下一个版本中实现。届时，WaterCrawl 会自动将 HTML 中的相对图片路径（如 `..\u002Fimages\u002Fgk1.jpg`）解析并转换为基于源网页基础 URL 的绝对路径（如 `https:\u002F\u002Fexample.com\u002Fimages\u002Fgk1.jpg`），以确保导入知识库或 RAG 系统时图片链接的有效性。","https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fissues\u002F180",[167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247,252,257,262],{"id":168,"version":169,"summary_zh":170,"released_at":171},297786,"v0.12.2","## 变更内容\n* 修复（安全）：@amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F205 中修复了多个安全漏洞，并改进了错误处理。\n* 修复（安全）：@amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F207 中升级了 minimatch 和 webpack，以解决安全漏洞。\n* 修复（安全）：@amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F209 中更新了依赖项覆盖。\n* 发布 v0.12.2：@amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F210 
中完成。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.12.1...v0.12.2","2026-02-22T12:48:48",{"id":173,"version":174,"summary_zh":175,"released_at":176},297787,"v0.12.1","## 变更内容\n* 修复：由 @tomo-v 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F189 中实现，通过 POSTGRES_USER 用户执行 pg_isready 健康检查\n* 修复（安全）：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F190 中解决安全漏洞并改进开发环境配置\n* 修复（安全）：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F191 中将 Scrapy 升级至 2.13.4 版本\n* 功能（工作流）：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F192 中增强手动版本号更新功能，以同步更新文档引用\n* 发布 v0.12.1 版本：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F194 中完成\n\n## 新贡献者\n* @tomo-v 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F189 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.12.0...v0.12.1","2025-12-11T00:06:13",{"id":178,"version":179,"summary_zh":180,"released_at":181},297788,"v0.12.0","## 变更内容\n* 修复：在社交登录中处理缺失的姓名字段及 GitHub API 错误，由 @seer-by-sentry[bot] 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F173 中完成\n* 新特性：为站点地图请求添加代理服务器支持，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F183 中完成\n* 修复：在爬虫管道中实现基于 URL 哈希的去重功能，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F186 中完成\n* 发布 v0.12.0 版本，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F187 中完成\n\n## 新贡献者\n* @seer-by-sentry[bot] 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F173 
中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.11.1...v0.12.0","2025-11-16T14:40:23",{"id":183,"version":184,"summary_zh":185,"released_at":186},297789,"v0.11.1","## 变更内容\n* 修复：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F168 中实现，跨应用入口点初始化 Sentry 监控\n* 杂项：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F170 中将 Django 更新至 5.2.8 版本\n* 发布 v0.11.1：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F172 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.11.0...v0.11.1","2025-11-06T20:26:48",{"id":188,"version":189,"summary_zh":190,"released_at":191},297790,"v0.11.0","## 变更内容\n* chore：更新依赖和 docker-compose 配置，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F154 中完成\n* feat：改进 Docker 运行时配置和环境变量处理，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F156 中完成\n* refactor：优化 Docker 构建流程和前端缓存机制，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F157 中完成\n* feat：将动态生成的 config.js 版本替换为构建时版本，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F158 中完成\n* feat：集成 Sentry 错误监控与跟踪功能，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F155 中完成\n* fix：统一 API 基础 URL 环境变量命名规范，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F161 中完成\n* 发布 v0.11.0 版本，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F164 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.10.2...v0.11.0","2025-11-04T21:45:26",{"id":193,"version":194,"summary_zh":195,"released_at":196},297791,"v0.10.2","## 变更内容\n* chore: 由 @amirasaran 在 
https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F122 中更新了 0.10.1 版本的 CHANGELOG.md\n* feat: 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F124 中添加了支持动态主题和 Schema 展示的 API 参考页面\n* feat: 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F126 中实现了用于订阅管理的 PlansDisplay 和 PlansModal 组件\n* fix: 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F128 中为站点地图处理添加了针对允许 URL 的路径校验\n* feat: 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F129 中实现了基于 BM25 算法的 URL 路径相关性评分，用于站点地图搜索过滤\n* refactor(dashboard): 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F131 中重构了仪表板布局，并新增了 MCP 连接功能\n* chore(docker): 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F132 中将 mcp 镜像升级至 v1.2.0，并更新了本地 Docker Compose 配置\n* Fix: 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F134 中添加了光标 SVG 资源，并提升了 DashboardPage 标题的响应式效果\n* fix: 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F136 中优化了 Docker 和反向代理配置\n* 升级 watercrawl-openai 和 openai 依赖；修复中间件中的 robots.txt 检查问题，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F137 中完成\n* 由 @amirasaran 发布 v0.10.2 版本，详见 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F138\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.10.1...v0.10.2","2025-09-02T00:14:57",{"id":198,"version":199,"summary_zh":200,"released_at":201},297792,"v0.10.1","## 变更内容\n* 修复：增强站点地图处理，增加最大请求数限制，并支持嵌套站点地图检测，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F120 中实现。\n* 发布 v0.10.1 版本，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F121 
中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.10.0...v0.10.1","2025-08-26T20:08:10",{"id":203,"version":204,"summary_zh":205,"released_at":206},297793,"v0.10.0","## 变更内容\n* chore：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F80 中更新文档中的版本号\n* fix：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F81 中更新 perform_destroy 方法，以处理搜索请求的 API 视图\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F82 中添加测试代理服务器端点的文档\n* 由 @alireza1992 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F88 中更新 Poetry-shell 文档\n* feat：由 @alireza1992 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F85 中添加 ignore_rendering 选项，用于在…过程中跳过 JavaScript 渲染\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F92 中更新 GitHub Actions 工作流，使用改进的 fetch 选项安全检出 PR 分支\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F93 中更新 GitHub Actions 工作流，通过禁用凭据持久化来优化检出流程\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F94 中增强 GitHub Actions 工作流，自动修复后端和前端的代码风格问题\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F95 中改进 GitHub Actions 工作流，以更安全地从本地及外部复刻仓库检出 PR 分支\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F96 中更新代码风格检查工作流，允许在出现错误时继续执行\n* 新教程由 @alexmofidi 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F89 中添加\n* chore：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F100 中更改仓库横幅\n* chore：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F101 中将仓库横幅图片 URL 更新为指向主分支\n* docs：由 @amirasaran 在 
https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F103 中更新 CONTRIBUTING.md，明确开发服务器的搭建说明\n* chore：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F112 中更新软件包依赖项及覆盖配置，以确保兼容性\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F113 中添加静态文件和媒体文件的存储选项配置\n* feat：由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F114 中添加可配置的并发请求数设置，用于爬虫任务\n* 由 @amirasaran 发布 v0.10.0 版本，详见 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F115\n\n## 新贡献者\n* @alireza1992 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F88 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.9.2...v0.10.0","2025-08-21T18:18:16",{"id":208,"version":209,"summary_zh":210,"released_at":211},297794,"v0.9.2","## 变更内容\n* chore(发布): 版本号升级至 v0.9.1，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F77 中完成\n* fix: 添加抓取请求的批量创建接口，并更新文档和序列化器，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F78 中完成\n* 发布\u002Fv0.9.2，由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F79 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.9.1...v0.9.2","2025-06-28T20:23:10",{"id":213,"version":214,"summary_zh":215,"released_at":216},297795,"v0.9.1","## 变更内容\n* 修复：更新 TeamSelector 组件，使用新的 Headless UI 组件 … 由 @amirasaran 在 https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F76 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.9.0...v0.9.1","2025-06-27T20:50:52",{"id":218,"version":219,"summary_zh":220,"released_at":221},297796,"v0.9.0","## What's Changed\r\n* feat(sitemap): implement sitemap generation and crawling functionality by 
@amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F71\r\n* fix: enhance API header handling and improve sitemap result filtering by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F72\r\n* fix: improve URL handling in SSE subscription to support empty baseURL by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F73\r\n* Feature\u002Fsitemap by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F74\r\n* Release\u002Fv0.9.0 by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F75\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.8.0...v0.9.0","2025-06-27T20:02:29",{"id":223,"version":224,"summary_zh":225,"released_at":226},297797,"v0.8.0","## What's Changed\r\n* docs(api): add comprehensive search API documentation by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F55\r\n* feat(proxy): Implement proxy server management and integration by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F61\r\n* docs(proxy): update proxy documentation by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F63\r\n* feat: upgrade outdated dependencies by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F64\r\n* feat: upgrade outdated dependencies by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F65\r\n* refactor(ui): improve settings loading experience by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F67\r\n* feat: Update Docker configuration for backend plugins by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F46\r\n* Updating the 
tutorials adding a new webpage crawl and chat tutorial by @alexmofidi in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F62\r\n* chore(release): bump version to v0.8.0 by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F68\r\n\r\n## 🔒 Notice: Important Security Update\r\nWith this new release, you **must set** the `API_ENCRYPTION_KEY` environment variable.\r\nThis key is used to **encrypt proxy passwords** stored in the database. For your security, please generate a new key using the command below:\r\n\r\n```bash\r\npython -c \"from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())\"\r\n```\r\nBe sure to update your environment with the new key. Without this update, encrypted data may become inaccessible.\r\n\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.7.1...v0.8.0","2025-05-20T18:27:29",{"id":228,"version":229,"summary_zh":230,"released_at":231},297798,"v0.7.1","## What's Changed\r\n* fix(search): credit calculation and validation by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F52\r\n* Release v0.7.1 by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F53\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.7.0...v0.7.1","2025-04-29T19:20:46",{"id":233,"version":234,"summary_zh":235,"released_at":236},297799,"v0.7.0","## What's Changed\r\n* flare, langchain, watercrawl tutorial added by @alexmofidi in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F36\r\n* stock analysis tutorial by @alexmofidi in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F37\r\n* feat: Add search functionality to WaterCrawl by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F47\r\n* 
feat(search): replace scraping with Google Custom Search API by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F48\r\n* feat(search): replace scraping with Google Custom Search API by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F49\r\n* fix: add missing envs by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F50\r\n\r\n## New Contributors\r\n* @alexmofidi made their first contribution in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F36\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.6.0...v0.7.0","2025-04-29T06:08:15",{"id":238,"version":239,"summary_zh":240,"released_at":241},297800,"v0.6.0","## What's Changed\r\n* feat(docker): improve deployment infrastructure with Nginx and environment management by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F22\r\n* docs: update CONTRIBUTING.md with correct file paths and commands by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F23\r\n* feat(docker): implement dynamic Nginx configuration for MinIO buckets by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F24\r\n* fix(docs): resolve API documentation rendering exception by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F25\r\n* feat: Add Sitemap, enhance API docs UI, add Go\u002FNode\u002FPython examples, and refactor hooks by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F27\r\n* fix: fix build issue by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F28\r\n* ci(github): merge backend, frontend, and docs Docker build workflows into unified docker-publish.yml by @amirasaran in 
https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F29\r\n* ci(github): add manual version bump and release PR workflow by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F30\r\n* chore: minor fixes by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F31\r\n* security: fix audit issues by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F32\r\n* fix: remove doc auto-build by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F33\r\n* Release v0.6.0 by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F35\r\n\r\n## 🔧 Upgrade Notice (v0.6.0)\r\n\r\nStarting from this release, **WaterCrawl requires `REDIS_URL` to be set in the backend environment.**\r\n\r\nPlease update your environment configuration with the following:\r\n\r\nIf you are using the self-hosted version, update `app.env`:\r\n\r\n```env\r\nREDIS_URL=redis:\u002F\u002Fredis:6379\u002F1\r\n```\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.5.0...v0.6.0","2025-04-19T14:26:30",{"id":243,"version":244,"summary_zh":245,"released_at":246},297801,"v0.5.0","## What's Changed\r\n* Feature\u002Fmono repo by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F5\r\n* Add PR linting workflow and improve Docker configurations by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F6\r\n* fix: add packageManager to package json by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F7\r\n* fix: fix docker build warning by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F8\r\n* docs: move documentations from another repo to this repo by @amirasaran in 
https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F9\r\n* docs: add contribution guidelines and GitHub templates by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F10\r\n* docs: enhance README with badges and emojis 🎨 by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F11\r\n* chore: update project configuration and documentation 🔧 by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F12\r\n* Feature: Mono-repo Enhancements by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F13\r\n* docs(docker): improve docker setup and documentation by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F16\r\n* chore: Enable MinIO consistency check on startup by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F17\r\n* fix(security): update dependencies to address multiple vulnerabilities by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F18\r\n* feat: implement invitation-based user registration by @amirasaran in https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fpull\u002F21\r\n\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fwatercrawl\u002FWaterCrawl\u002Fcompare\u002Fv0.4.0...v0.5.0","2025-04-06T21:44:11",{"id":248,"version":249,"summary_zh":250,"released_at":251},297802,"v0.4.0","### Added\r\n- Pagination support with customizable page size (25 default, max 100)\r\n- Added filtering capabilities for crawl requests by UUID, URL, status, and date\r\n- Added filtering for crawl results by URL and creation date\r\n- Support for multiple output formats in download endpoint (markdown, json)\r\n- New serializer for full crawl results with prefetching option\r\n- New common pagination module for consistent pagination across the API\r\n\r\n### 
Changed\r\n- Enhanced API documentation with detailed query parameters\r\n- Improved filtering capabilities with advanced filters (contains, startswith, greater than, less than)\r\n- Updated API endpoints to support prefetched result data\r\n- Modified download endpoint to return zip files with formatted content\r\n- Updated crawl status checking to support prefetched results","2025-03-20T22:40:31",{"id":253,"version":254,"summary_zh":255,"released_at":256},297803,"v0.3.3","### Changed\r\n- Updated user models, serializers, and services\r\n- Modified common serializers and services\r\n- Updated project settings and version\r\n\r\n### Added\r\n- New user migration for newsletter and privacy confirmation\r\n\r\n","2025-02-11T19:48:36",{"id":258,"version":259,"summary_zh":260,"released_at":261},297804,"v0.3.2","## What's New\n\n### Added\n- Automated daily page credit reset for active subscriptions\n- Celery beat schedule for running daily tasks\n\nThis release adds automated page credit management:\n1. New Celery task for resetting daily page credits for active subscriptions\n2. Configured Celery beat to run the reset task daily at 00:01\n3. Improved subscription management with automated credit handling","2025-02-09T22:55:25",{"id":263,"version":264,"summary_zh":265,"released_at":266},297805,"v0.3.1","## What's Changed\n\n### Fixed\n- Removed custom billing cycle anchor from Stripe checkout to fix subscription timing issues\n- Simplified Stripe checkout session configuration for better compatibility\n- Fixed Stripe webhook handling for default plan subscriptions\n\nThis release includes improvements to the Stripe integration:\n1. Simplified checkout process by removing custom billing cycle configuration\n2. Fixed webhook handling for default plan subscriptions to prevent processing issues\n3. General improvements to subscription handling","2025-02-09T17:42:44"]
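The v0.4.0 pagination rule above (page size defaults to 25, capped at 100) amounts to a simple clamp. A minimal, self-contained sketch of that behavior (`resolve_page_size` is a hypothetical helper name for illustration, not WaterCrawl's actual code):

```python
# Limits taken from the v0.4.0 release notes: 25 default, 100 maximum.
DEFAULT_PAGE_SIZE = 25
MAX_PAGE_SIZE = 100


def resolve_page_size(requested=None):
    """Clamp a client-requested page size to the API's limits.

    A missing or non-positive value falls back to the default;
    anything above the cap is reduced to the cap.
    """
    if requested is None or requested <= 0:
        return DEFAULT_PAGE_SIZE
    return min(requested, MAX_PAGE_SIZE)
```

Under this scheme a request for 250 items per page is served 100, and a request with no page size gets 25.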