[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-wzdnzd--harvester":3,"similar-wzdnzd--harvester":73},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":15,"owner_location":15,"owner_email":15,"owner_twitter":15,"owner_website":15,"owner_url":18,"languages":19,"stars":32,"forks":33,"last_commit_at":34,"license":35,"difficulty_score":36,"env_os":37,"env_gpu":37,"env_ram":37,"env_deps":38,"category_tags":41,"github_topics":46,"view_count":36,"oss_zip_url":15,"oss_zip_packed_at":15,"status":53,"created_at":54,"updated_at":55,"faqs":56,"releases":72},9753,"wzdnzd\u002Fharvester","harvester","Intelligent data acquisition framework for GitHub and web sources","Harvester 是一款智能且通用的数据采集框架，专为从 GitHub、网络空间测绘平台（如 FOFA、Shodan）及各类自定义 Web 端点高效获取信息而设计。它主要解决了在多源异构环境下进行大规模数据收集时面临的架构复杂、扩展困难及解析效率低等痛点，帮助用户轻松构建自动化的情报搜集流程。\n\n虽然当前版本以发现 AI 服务密钥为典型应用场景，但其核心架构具有极强的延展性，能够灵活适配代码仓库监控、物联网设备枚举等多种数据采集需求。Harvester 特别适合开发者、安全研究人员及数据分析师使用，尤其是那些需要定制爬虫或整合多源数据的专业技术人群。\n\n在技术亮点方面，Harvester 采用了清晰的分层架构与插件化设计，将任务调度、流水线管理、查询优化及依赖解析等模块解耦，使得新增数据源变得简单快捷。框架内置了智能查询优化引擎和自适应解析机制，不仅能有效应对 API 速率限制，还能通过模块化组合快速响应不同的采集策略。无论是用于学术研究中的数据集构建，还是企业级的安全资产探测，Harvester 都能提供稳定可靠的基础设施支持。","# Harvester - Universal Data Acquisition Framework\n\n**📖 [中文文档](README.zh-CN.md) | English | 🔗 [More Tools](https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fai-collector)**\n\nA universal, adaptive data acquisition framework designed for comprehensive information acquisition from multiple sources including GitHub, network mapping platforms (FOFA, Shodan), and arbitrary web endpoints. 
While the current implementation focuses on AI service provider key discovery as a practical example, the framework is architected for extensibility to support diverse data acquisition scenarios.\n\n---\n\n⭐⭐⭐ **If this project helps you, please give it a star!** Your support motivates us to keep improving and adding new features.\n\n---\n\n## Table of Contents\n\n- [Key Features](#key-features)\n- [Quick Start](#quick-start)\n- [Architecture](#architecture)\n- [Directory Structure](#directory-structure)\n- [Troubleshooting](#troubleshooting)\n- [Contributing](#contributing)\n\n## Project Goals\n\nThe system aims to build a **universal data acquisition framework** primarily targeting:\n\n- **GitHub**: Code repositories, issues, commits, and API endpoints\n- **Network Mapping Platforms**: \n  - [FOFA](https:\u002F\u002Ffofa.info) - Cyberspace mapping and asset discovery\n  - [Shodan](https:\u002F\u002Fwww.shodan.io\u002F) - Internet-connected device search engine\n- **Arbitrary Web Endpoints**: Custom APIs, web services, and data sources\n- **Extensible Architecture**: Plugin-based system for easy integration of new data sources\n\n## Current Data Source Support\n\n| Data Source | Status        | Description                             |\n| ----------- | ------------- | --------------------------------------- |\n| GitHub API  | ✅ Implemented | Full API integration with rate limiting |\n| GitHub Web  | ✅ Implemented | Web scraping with intelligent parsing   |\n| FOFA        | 🚧 Planned     | Cyberspace asset discovery integration  |\n| Shodan      | 🚧 Planned     | IoT and network device enumeration      |\n| Custom APIs | 🚧 Planned     | Generic REST\u002FGraphQL API adapter        |\n\n## Architecture\n\n### Layered Architecture\n\n```mermaid\ngraph TB\n    %% Entry Layer\n    subgraph Entry[\"Entry Layer\"]\n        CLI[\"CLI Interface\u003Cbr\u002F>(main.py)\"]\n        App[\"Application Core\u003Cbr\u002F>(main.py)\"]\n    end\n\n    %% Management Layer\n    
subgraph Management[\"Management Layer\"]\n        TaskMgr[\"Task Manager\u003Cbr\u002F>(manager\u002Ftask.py)\"]\n        Pipeline[\"Pipeline Manager\u003Cbr\u002F>(manager\u002Fpipeline.py)\"]\n        WorkerMgr[\"Worker Manager\u003Cbr\u002F>(manager\u002Fworker.py)\"]\n        QueueMgr[\"Queue Manager\u003Cbr\u002F>(manager\u002Fqueue.py)\"]\n        StatusMgr[\"Status Manager\u003Cbr\u002F>(manager\u002Fstatus.py)\"]\n        Shutdown[\"Shutdown Coordinator\u003Cbr\u002F>(manager\u002Fshutdown.py)\"]\n    end\n\n    %% Processing Layer\n    subgraph Processing[\"Processing Layer\"]\n        StageBase[\"Stage Framework\u003Cbr\u002F>(stage\u002Fbase.py)\"]\n        StageImpl[\"Stage Implementations\u003Cbr\u002F>(stage\u002Fdefinition.py)\"]\n        StageReg[\"Stage Registry\u003Cbr\u002F>(stage\u002Fregistry.py)\"]\n        StageFactory[\"Stage Factory\u003Cbr\u002F>(stage\u002Ffactory.py)\"]\n        StageResolver[\"Dependency Resolver\u003Cbr\u002F>(stage\u002Fresolver.py)\"]\n    end\n\n    %% Service Layer\n    subgraph Service[\"Service Layer\"]\n        SearchSvc[\"Search Service\u003Cbr\u002F>(search\u002Fclient.py)\"]\n        SearchProviders[\"Search Providers\u003Cbr\u002F>(search\u002Fprovider\u002F)\"]\n        RefineSvc[\"Query Refinement\u003Cbr\u002F>(refine\u002F)\"]\n        RefineEngine[\"Refine Engine\u003Cbr\u002F>(refine\u002Fengine.py)\"]\n        RefineOptimizer[\"Query Optimizer\u003Cbr\u002F>(refine\u002Foptimizer.py)\"]\n    end\n\n    %% Core Domain Layer\n    subgraph Core[\"Core Domain Layer\"]\n        Models[\"Domain Models & Tasks\u003Cbr\u002F>(core\u002Fmodels.py)\"]\n        Types[\"Type System\u003Cbr\u002F>(core\u002Ftypes.py)\"]\n        Enums[\"Enumerations\u003Cbr\u002F>(core\u002Fenums.py)\"]\n        Metrics[\"Metrics\u003Cbr\u002F>(core\u002Fmetrics.py)\"]\n        Auth[\"Authentication\u003Cbr\u002F>(core\u002Fauth.py)\"]\n    end\n\n    %% Infrastructure Layer\n    subgraph Infrastructure[\"Infrastructure 
Layer\"]\n        Config[\"Configuration\u003Cbr\u002F>(config\u002F)\"]\n        Tools[\"Tools & Utilities\u003Cbr\u002F>(tools\u002F)\"]\n        Constants[\"Constants\u003Cbr\u002F>(constant\u002F)\"]\n        Storage[\"Storage & Persistence\u003Cbr\u002F>(storage\u002F)\"]\n    end\n\n    %% State Management Layer\n    subgraph StateLayer[\"State Management Layer\"]\n        StateCollector[\"State Collector\u003Cbr\u002F>(state\u002Fcollector.py)\"]\n        StateDisplay[\"Display Engine\u003Cbr\u002F>(state\u002Fdisplay.py)\"]\n        StateBuilder[\"Status Builder\u003Cbr\u002F>(state\u002Fbuilder.py)\"]\n        StateModels[\"State Models\u003Cbr\u002F>(state\u002Fmodels.py)\"]\n        StateMonitor[\"State Monitor\u003Cbr\u002F>(state\u002Fmonitor.py)\"]\n        StateEnums[\"State Enums\u003Cbr\u002F>(state\u002Fenums.py)\"]\n        StateTypes[\"State Types\u003Cbr\u002F>(state\u002Ftypes.py)\"]\n    end\n\n    %% External Systems\n    subgraph External[\"External Systems\"]\n        GitHub[\"GitHub\u003Cbr\u002F>(API + Web)\"]\n        AIServices[\"AI Service\u003Cbr\u002F>Providers\"]\n        FileSystem[\"File System\u003Cbr\u002F>(Local Storage)\"]\n    end\n\n    %% Dependencies (Top-down)\n    Entry --> Management\n    Management --> Processing\n    Processing --> Service\n    Service --> Core\n\n    %% Infrastructure dependencies\n    Entry -.-> Infrastructure\n    Management -.-> Infrastructure\n    Processing -.-> Infrastructure\n    Service -.-> Infrastructure\n    Core -.-> Infrastructure\n\n    %% State management dependencies\n    Entry -.-> StateLayer\n    Management -.-> StateLayer\n\n    %% External dependencies\n    Service --> External\n    Infrastructure --> External\n```\n\n### System Architecture Overview\n\n```mermaid\ngraph TB\n    %% User Interface Layer\n    subgraph UserLayer[\"User Interface Layer\"]\n        User[User]\n        CLI[Command Line Interface]\n        ConfigMgmt[Configuration Management]\n    end\n\n    %% 
Application Management Layer\n    subgraph AppLayer[\"Application Management Layer\"]\n        MainApp[Main Application]\n        TaskManager[Task Manager]\n        StatusManager[Status Manager]\n        ResourceManager[Resource Manager]\n        ShutdownManager[Shutdown Manager]\n    end\n\n    %% Core Pipeline Engine\n    subgraph PipelineCore[\"Pipeline Engine\"]\n        %% Stage Management System\n        subgraph StageSystem[\"Stage Management System\"]\n            StageRegistry[Stage Registry]\n            DependencyResolver[Dependency Resolver]\n            StageFactory[Stage Factory]\n        end\n\n        %% Queue Management System\n        subgraph QueueSystem[\"Queue Management System\"]\n            QueueManager[Queue Manager]\n            WorkerManager[Worker Manager]\n            MonitoringSystem[System Monitor]\n        end\n\n        %% Processing Stages\n        subgraph ProcessingStages[\"Processing Stages\"]\n            SearchStage[Search Stage]\n            GatherStage[Gather Stage]\n            CheckStage[Check Stage]\n            InspectStage[Inspect Stage]\n        end\n    end\n\n    %% Search Provider Ecosystem\n    subgraph ProviderEcosystem[\"Search Provider Ecosystem\"]\n        ProviderRegistry[Provider Registry]\n        BaseProvider[Base Provider]\n        OpenAIProvider[OpenAI-like Provider]\n        CustomProviders[Custom Providers]\n    end\n\n    %% Advanced Processing Engines\n    subgraph ProcessingEngines[\"Processing Engines\"]\n        SearchClient[Search Client]\n\n        %% Query Optimization Engine\n        subgraph QueryOptimizer[\"Query Optimization Engine\"]\n            RefineEngine[Refine Engine]\n            RegexParser[Regex Parser]\n            SplittabilityAnalyzer[Splittability Analyzer]\n            EnumerationOptimizer[Enumeration Optimizer]\n            QueryGenerator[Query Generator]\n            OptimizationStrategies[Optimization Strategies]\n\n            %% Internal Flow\n            RefineEngine --> 
RegexParser\n            RegexParser --> SplittabilityAnalyzer\n            SplittabilityAnalyzer --> EnumerationOptimizer\n            EnumerationOptimizer --> OptimizationStrategies\n            OptimizationStrategies --> QueryGenerator\n        end\n\n        ValidationEngine[API Key Validation]\n        RecoveryEngine[Task Recovery]\n    end\n\n    %% State & Data Management\n    subgraph StateManagement[\"State & Data Management\"]\n        StateCollector[State Collector]\n        DisplayEngine[Display Engine]\n        StatusBuilder[Status Builder]\n        StateMonitor[State Monitor]\n        PersistenceLayer[Persistence Layer]\n        SnapshotManager[Snapshot Manager]\n        ResultManager[Result Manager]\n    end\n\n    %% Infrastructure Services\n    subgraph Infrastructure[\"Infrastructure Services\"]\n        RateLimiting[Rate Limiting]\n        CredentialMgmt[Credential Management]\n        AgentRotation[User Agent Rotation]\n        LoggingSystem[Logging System]\n        RetryFramework[Retry Framework]\n        ResourcePool[Resource Pool]\n    end\n\n    %% External Systems\n    subgraph External[\"External Systems\"]\n        GitHubAPI[GitHub API]\n        GitHubWeb[GitHub Web Interface]\n        AIServiceAPIs[AI Service APIs]\n        FileSystem[Local File System]\n    end\n\n    %% User Interactions\n    User --> CLI\n    User --> ConfigMgmt\n    CLI --> MainApp\n    ConfigMgmt --> MainApp\n\n    %% Application Flow\n    MainApp --> TaskManager\n    MainApp --> StatusManager\n    MainApp --> ResourceManager\n    MainApp --> ShutdownManager\n    TaskManager --> StageRegistry\n    TaskManager --> QueueManager\n\n    %% Stage Management Flow\n    StageRegistry --> DependencyResolver\n    StageRegistry --> StageFactory\n    DependencyResolver --> ProcessingStages\n    StageFactory --> ProcessingStages\n\n    %% Queue Management Flow\n    QueueManager --> WorkerManager\n    QueueManager --> MonitoringSystem\n    WorkerManager --> ProcessingStages\n\n   
 %% Stage Dependencies (Pipeline)\n    SearchStage --> GatherStage\n    GatherStage --> CheckStage\n    CheckStage --> InspectStage\n\n    %% Processing Engine Integration\n    SearchStage --> SearchClient\n    SearchStage --> QueryOptimizer\n    CheckStage --> ValidationEngine\n    ProcessingStages --> RecoveryEngine\n\n    %% Provider Integration\n    SearchClient --> ProviderRegistry\n    ProviderRegistry --> BaseProvider\n    BaseProvider --> OpenAIProvider\n    BaseProvider --> CustomProviders\n\n    %% State Management Integration\n    ProcessingStages --> StateCollector\n    QueueManager --> StateCollector\n    StateCollector --> DisplayEngine\n    StateCollector --> StatusBuilder\n    StateMonitor --> DisplayEngine\n    ProcessingStages --> PersistenceLayer\n    PersistenceLayer --> SnapshotManager\n    PersistenceLayer --> ResultManager\n\n    %% Infrastructure Integration\n    SearchClient -.-> RateLimiting\n    ResourceManager -.-> CredentialMgmt\n    ResourceManager -.-> AgentRotation\n    MainApp -.-> LoggingSystem\n    ProcessingStages -.-> RetryFramework\n    Infrastructure -.-> ResourcePool\n\n    %% External Connections\n    SearchClient --> GitHubAPI\n    SearchClient --> GitHubWeb\n    ValidationEngine --> AIServiceAPIs\n    PersistenceLayer --> FileSystem\n\n    %% Styling\n    classDef userClass fill:#e3f2fd,stroke:#1976d2,stroke-width:2px\n    classDef appClass fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px\n    classDef coreClass fill:#e8f5e8,stroke:#388e3c,stroke-width:3px\n    classDef providerClass fill:#fff3e0,stroke:#f57c00,stroke-width:2px\n    classDef engineClass fill:#fce4ec,stroke:#c2185b,stroke-width:2px\n    classDef stateClass fill:#f1f8e9,stroke:#689f38,stroke-width:2px\n    classDef infraClass fill:#f5f5f5,stroke:#616161,stroke-width:2px\n    classDef externalClass fill:#ffebee,stroke:#d32f2f,stroke-width:2px\n\n    class User,CLI,ConfigMgmt userClass\n    class MainApp,TaskManager,StatusManager,ResourceManager,ShutdownManager 
appClass\n    class StageRegistry,DependencyResolver,StageFactory,QueueManager,WorkerManager,MonitoringSystem,SearchStage,GatherStage,CheckStage,InspectStage coreClass\n    class ProviderRegistry,BaseProvider,OpenAIProvider,CustomProviders providerClass\n    class SearchClient,QueryOptimizer,ValidationEngine,RecoveryEngine engineClass\n    class StateCollector,StateMonitor,DisplayEngine,StatusBuilder,PersistenceLayer,SnapshotManager,ResultManager stateClass\n    class RateLimiting,CredentialMgmt,AgentRotation,LoggingSystem,RetryFramework,ResourcePool infraClass\n    class GitHubAPI,GitHubWeb,AIServiceAPIs,FileSystem externalClass\n```\n\nThe project follows a layered architecture with the following core components:\n\n### Multi-Stage Processing Flow\n\n```mermaid\nsequenceDiagram\n    participant CLI as CLI\n    participant App as Application\n    participant TM as TaskManager\n    participant Pipeline as Pipeline\n    participant Search as SearchStage\n    participant Gather as GatherStage\n    participant Check as CheckStage\n    participant Inspect as InspectStage\n    participant Storage as Storage\n    participant Monitor as StatusManager\n\n    %% Initialization Phase\n    CLI->>App: 1. Start Application\n    App->>App: 2. Load Configuration\n    App->>TM: 3. Create TaskManager\n    TM->>TM: 4. Initialize Providers\n    TM->>Pipeline: 5. Create Pipeline\n    Pipeline->>Search: 6. Register SearchStage\n    Pipeline->>Gather: 7. Register GatherStage\n    Pipeline->>Check: 8. Register CheckStage\n    Pipeline->>Inspect: 9. Register InspectStage\n    App->>Monitor: 10. Start Status Manager\n\n    %% Processing Phase\n    loop Multi-Stage Processing\n        TM->>Search: 11. Submit Search Tasks\n        Search->>Search: 12. Query GitHub with Optimization\n        Search->>Gather: 13. Forward Search Results\n\n        Gather->>Gather: 14. Acquire Detailed Information\n        Gather->>Check: 15. Forward Extracted Keys\n\n        Check->>Check: 16. 
Validate API Keys\n        Check->>Inspect: 17. Forward Valid Keys\n\n        Inspect->>Inspect: 18. Inspect API Capabilities\n        Inspect->>Storage: 19. Save Results\n\n        Pipeline->>Monitor: 20. Update Status\n        Monitor->>App: 21. Display Progress\n    end\n\n    %% Recovery and Persistence\n    loop Background Operations\n        Storage->>Storage: Auto-save Results\n        Storage->>Storage: Create Snapshots\n        Pipeline->>Pipeline: Task Recovery\n        Monitor->>Monitor: Collect Metrics\n    end\n\n    %% Completion Phase\n    Pipeline->>Pipeline: 22. Check Completion\n    Pipeline->>Storage: 23. Final Persistence\n    Pipeline->>Monitor: 24. Final Status Report\n    App->>TM: 25. Graceful Shutdown\n    TM->>Storage: 26. Save State\n```\n\n## Architecture Layers\n\n### 1. **Presentation Layer**\n   - **CLI Interface** (`main.py`): Command-line entry point with argument parsing and application lifecycle\n   - **Configuration System** (`config\u002F`): YAML-based configuration management with validation and schemas\n\n### 2. **Application Layer**\n   - **Application Core** (`main.py`): Main application lifecycle and orchestration\n   - **Task Management** (`manager\u002Ftask.py`): Provider coordination and task distribution\n   - **Resource Coordination** (`tools\u002Fcoordinator.py`): Global resource management and coordination\n   - **Shutdown Management** (`manager\u002Fshutdown.py`): Graceful shutdown coordination\n   - **Status Management** (`manager\u002Fstatus.py`): Application status management and coordination\n   - **Worker Management** (`manager\u002Fworker.py`): Worker thread management and scaling\n   - **Queue Management** (`manager\u002Fqueue.py`): Multi-queue coordination and management\n\n### 3. 
**Business Service Layer**\n   - **Pipeline Engine** (`manager\u002Fpipeline.py`): Multi-stage processing orchestration with DAG execution\n   - **Stage System** (`stage\u002F`): Pluggable processing stages with dependency resolution and factory pattern\n   - **Search Service** (`search\u002F`): GitHub code search with provider abstraction and optimization\n   - **Query Refinement** (`refine\u002F`): Intelligent query optimization with strategy pattern and mathematical foundations\n\n### 4. **Domain Layer**\n   - **Core Models & Tasks** (`core\u002Fmodels.py`): Business domain objects, data structures, and task definitions\n   - **Type System** (`core\u002Ftypes.py`): Interface definitions and contracts\n   - **Business Enums** (`core\u002Fenums.py`): Domain enumerations and constants\n   - **Metrics & Analytics** (`core\u002Fmetrics.py`): Performance measurement and KPI tracking\n   - **Authentication** (`core\u002Fauth.py`): Authentication and authorization logic\n   - **Custom Exceptions** (`core\u002Fexceptions.py`): Domain-specific exception handling\n\n### 5. 
**Infrastructure Layer**\n   - **Storage & Persistence** (`storage\u002F`): Result storage, recovery, and snapshot management\n     - **Atomic Operations** (`storage\u002Fatomic.py`): Atomic file operations with fsync\n     - **Result Management** (`storage\u002Fpersistence.py`): Multi-format result persistence\n     - **Task Recovery** (`storage\u002Frecovery.py`): Task recovery mechanisms\n     - **Shard Management** (`storage\u002Fshard.py`): NDJSON shard management with rotation\n     - **Snapshot Management** (`storage\u002Fsnapshot.py`): Backup and restore functionality\n   - **Tools & Utilities** (`tools\u002F`): Infrastructure tools and utilities\n     - **Logging System** (`tools\u002Flogger.py`): Structured logging with API key redaction\n     - **Rate Limiting** (`tools\u002Fratelimit.py`): Adaptive rate control with token bucket algorithm\n     - **Load Balancing** (`tools\u002Fbalancer.py`): Resource distribution strategies\n     - **Credential Management** (`tools\u002Fcredential.py`): Secure credential rotation and management\n     - **Agent Management** (`tools\u002Fagent.py`): User-agent rotation for web scraping\n     - **Pattern Matching** (`tools\u002Fpatterns.py`): Pattern matching utilities and helpers\n     - **Retry Framework** (`tools\u002Fretry.py`): Unified retry mechanisms with backoff strategies\n     - **Resource Pooling** (`tools\u002Fresources.py`): Resource pool management and optimization\n\n### 6. 
**State Management Layer**\n   - **State Collection** (`state\u002Fcollector.py`): System metrics gathering and aggregation\n   - **Display Engine** (`state\u002Fdisplay.py`): User-friendly progress visualization and formatting\n   - **Status Builder** (`state\u002Fbuilder.py`): Status data construction and transformation\n   - **State Models** (`state\u002Fmodels.py`): Monitoring data structures and metrics\n   - **State Monitoring** (`state\u002Fmonitor.py`): Real-time state monitoring and tracking\n   - **State Enumerations** (`state\u002Fenums.py`): State-related enumerations and constants\n   - **State Types** (`state\u002Ftypes.py`): State type definitions and interfaces\n\n\n## Processing Stages\n\nThe system implements a **4-stage pipeline** for comprehensive data acquisition and validation:\n\n1. **Search Stage** (`stage\u002Fdefinition.py:SearchStage`):\n   - Intelligent GitHub code search with advanced query optimization\n   - Multi-provider search support (API + Web)\n   - Query refinement using mathematical optimization algorithms\n   - Rate-limited search execution with adaptive throttling\n\n2. **Gather Stage** (`stage\u002Fdefinition.py:GatherStage`):\n   - Detailed information acquisition from search results\n   - Content extraction and parsing\n   - Pattern matching for key identification\n   - Structured data collection and normalization\n\n3. **Check Stage** (`stage\u002Fdefinition.py:CheckStage`):\n   - API key validation against actual service endpoints\n   - Authentication verification and capability testing\n   - Service availability and response validation\n   - Error handling and retry mechanisms\n\n4. 
**Inspect Stage** (`stage\u002Fdefinition.py:InspectStage`):\n   - API capability inspection for validated keys\n   - Model enumeration and feature detection\n   - Service limits and quota analysis\n   - Comprehensive capability profiling\n\n## Advanced Query Optimization Engine\n\nThe system features a sophisticated **Query Optimization Engine** with mathematical foundations:\n\n### Core Components\n\n1. **Regex Parser**\n   - Advanced regex pattern parsing with support for complex syntax\n   - Handles escaped characters, character classes, and quantifiers\n   - Converts patterns into analyzable segment structures\n\n2. **Splittability Analyzer**\n   - Mathematical analysis of pattern divisibility\n   - Recursive depth limiting for safety\n   - Value threshold analysis for optimization feasibility\n   - Resource cost estimation for performance control\n\n3. **Enumeration Optimizer**\n   - Intelligent enumeration strategy selection\n   - Multi-dimensional optimization (depth, breadth, value)\n   - Combinatorial analysis for optimal segment selection\n   - Topological sorting for dependency resolution\n\n4. 
**Query Generator**\n   - Generates optimized query variants from enumeration strategies\n   - Supports configurable enumeration depth\n   - Produces mathematically optimal query distributions\n   - Maintains query semantic equivalence\n\n### Optimization Algorithms\n\n- **Mathematical Modeling**: Uses mathematical principles to analyze regex patterns\n- **Enumeration Strategy**: Intelligent selection of optimal enumeration depth and combinations\n- **Resource Management**: Prevents resource exhaustion through intelligent limiting\n- **Performance Optimization**: Singleton pattern ensures optimal memory usage\n\n## Supported Data Sources & Use Cases\n\n### 🔍 Current Implementation (AI Service Discovery)\n- **OpenAI and compatible interfaces**\n- **Anthropic Claude**\n- **Azure OpenAI**\n- **Google Gemini**\n- **AWS Bedrock**\n- **GooeyAI**\n- **Stability AI**\n- **百度文心一言**\n- **智谱AI**\n- **Custom providers**\n\n### 🌐 Planned Data Sources\n- **[FOFA](https:\u002F\u002Ffofa.info)**: Cyberspace asset discovery and network mapping\n- **[Shodan](https:\u002F\u002Fwww.shodan.io\u002F)**: Internet-connected device enumeration\n- **Custom REST APIs**: Generic API integration framework\n- **GraphQL Endpoints**: Flexible query-based data acquisition\n- **Web Scraping**: JavaScript-rendered content and dynamic sites\n- **Database Connectors**: Direct database query capabilities\n\n### 📊 Potential Use Cases\n- **Data Mining**: Large-scale information extraction and analysis\n\n## Key Features\n\n### 🌐 Universal Data Acquisition\n- **Multi-Source Support**: GitHub, FOFA, Shodan, and custom endpoints\n- **Adaptive Query Engine**: Intelligent optimization for different data sources\n- **Protocol Agnostic**: REST, GraphQL, WebSocket, and web scraping support\n- **Rate Limiting**: Per-source intelligent rate control and quota management\n\n### 🏗️ Advanced Architecture\n- **Dynamic Stage System**: Configurable processing pipelines with DAG execution\n- **Plugin Architecture**: 
Extensible framework for custom data sources and processors\n- **Dependency Resolution**: Automatic stage ordering and dependency management\n- **Handler Registration**: Pluggable processors for flexible data transformation\n\n### ⚡ High Performance\n- **Asynchronous Processing**: Multi-threaded task execution with intelligent queuing\n- **Adaptive Load Balancing**: Dynamic resource allocation based on workload\n- **Query Optimization**: Mathematical modeling for optimal search strategies\n- **Resource Monitoring**: Real-time performance tracking and bottleneck detection\n\n### 🛡️ Enterprise Ready\n- **Fault Tolerance**: Comprehensive error handling, retry mechanisms, and recovery\n- **State Persistence**: Queue state recovery and graceful shutdown capabilities\n- **Security**: Credential management, API key redaction, and secure storage\n- **Monitoring**: Real-time analytics, alerting, and performance visualization\n\n## System Requirements\n\n### **Dependencies**\n- **Python**: 3.10+\n- **Libraries**: `PyYAML`\n- **Optional**: `uvloop` (Linux\u002FmacOS performance boost)\n- **Development**: `pytest`, `black`, `mypy` (for contributors)\n\n## Quick Start\n\n> 📚 For comprehensive documentation, tutorials, and advanced usage guides, please visit [DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fwzdnzd\u002Fharvester)\n\n1. **Installation**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fharvester.git\n   cd harvester\n   pip install -r requirements.txt\n   ```\n\n2. 
**Configuration**\n\n  > Choose one of the following methods to create your configuration\n\n   **Method 1: Generate default configuration**\n   ```bash\n   python main.py --create-config\n   ```\n\n   **Method 2: Copy from examples**\n   ```bash\n   # For basic configuration\n   cp examples\u002Fconfig-simple.yaml config.yaml\n\n   # For full configuration with all options\n   cp examples\u002Fconfig-full.yaml config.yaml\n   ```\n\n   Edit the configuration file:\n   - Set your GitHub session token or API key\n   - Configure provider search patterns\n   - Adjust rate limits and thread counts\n\n   ### Configuration Guide\n\n   The system provides two configuration templates:\n\n   1. **Basic Configuration** - Suitable for quick start:\n      ```yaml\n      # Global application settings\n      global:\n        workspace: \".\u002Fdata\"  # Working directory\n        github_credentials:\n          sessions:\n            - \"your_github_session_here\"  # GitHub session token\n          strategy: \"round_robin\"  # Load balancing strategy\n\n      # Pipeline stage configuration\n      pipeline:\n        threads:\n          search: 1    # Search threads (keep low)\n          gather: 4    # Acquisition threads\n          check: 2     # Validation threads\n          inspect: 1   # API capability inspection threads\n\n      # System monitoring settings\n      monitoring:\n        update_interval: 2.0    # Monitoring update interval\n        error_threshold: 0.1    # Error rate threshold\n\n      # Data persistence configuration\n      persistence:\n        auto_restore: true      # Auto restore state on startup\n        shutdown_timeout: 30    # Shutdown timeout in seconds\n\n      # Global rate limiting configuration\n      ratelimits:\n        github_web:\n          base_rate: 0.5       # Base rate in requests per second\n          burst_limit: 2       # Maximum burst size\n          adaptive: true       # Enable adaptive rate limiting\n\n      # Provider task 
configurations\n      tasks:\n        - name: \"openai\"         # Provider name\n          enabled: true          # Enable\u002Fdisable provider\n          provider_type: \"openai\"\n          use_api: false         # Use GitHub API for searching\n          \n          # Pipeline stage settings\n          stages:\n            search: true         # Enable search stage\n            gather: true         # Enable acquisition stage\n            check: true          # Enable validation stage\n            inspect: true        # Enable API capability inspection\n          \n          # Pattern matching configuration\n          patterns:\n            key_pattern: \"sk(?:-proj)?-[a-zA-Z0-9]{20}T3BlbkFJ[a-zA-Z0-9]{20}\"\n          \n          # Search conditions\n          conditions:\n            - query: '\"T3BlbkFJ\"'\n      ```\n\n   2. **Full Configuration** - Includes all advanced options:\n      - `display`: Display and monitoring settings\n      - `global`: Global system configuration\n      - `pipeline`: Pipeline stage configuration\n      - `monitoring`: System monitoring parameters\n      - `persistence`: Data persistence settings\n      - `worker`: Worker pool configuration\n      - `ratelimits`: Rate limiting settings\n      - `tasks`: Provider task configurations\n\n   ### Advanced Task Configuration\n\n   > 📋 **For complete configuration examples, please refer to:**\n   > - [`examples\u002Fconfig-full.yaml`](examples\u002Fconfig-full.yaml) - Comprehensive configuration with all available options\n   > - [`examples\u002Fconfig-simple.yaml`](examples\u002Fconfig-simple.yaml) - Basic configuration for quick start\n\n   The `tasks` section is the core of the configuration, defining what providers to search and how to process them. 
Refer to the basic configuration example above for a complete tasks configuration.\n\n   #### Key Configuration Options\n\n   - **`name`**: Unique identifier for the task\n   - **`provider_type`**: Determines validation method (`openai`, `openai_like`, `anthropic`, `gemini`, etc.)\n   - **`api`**: API endpoint configuration for key validation\n   - **`patterns.key_pattern`**: Regex pattern to identify valid API keys\n   - **`conditions`**: Search queries to find potential keys\n   - **`stages`**: Enable\u002Fdisable specific processing stages\n   - **`extras.directory`**: Custom output directory for results\n\n3. **Running**\n   ```bash\n   python main.py                  # Use default config\n   python main.py -c custom.yaml   # Use custom config\n   python main.py --validate       # Validate config\n   python main.py --log-level DEBUG # Enable debug logging\n   ```\n\n## Directory Structure\n\n```\nharvester\u002F\n├── config\u002F           # Configuration management\n│   ├── accessor.py   # Configuration access utilities\n│   ├── defaults.py   # Default configuration values\n│   ├── loader.py     # Configuration loading\n│   ├── schemas.py    # Configuration schemas\n│   ├── validator.py  # Configuration validation\n│   └── __init__.py   # Package initialization\n├── constant\u002F         # System constants\n│   ├── monitoring.py # Monitoring constants\n│   ├── runtime.py    # Runtime constants\n│   ├── search.py     # Search constants\n│   ├── system.py     # System constants\n│   └── __init__.py   # Package initialization\n├── core\u002F             # Core domain models\n│   ├── auth.py       # Authentication\n│   ├── enums.py      # System enumerations\n│   ├── exceptions.py # Custom exceptions\n│   ├── metrics.py    # Performance metrics\n│   ├── models.py     # Core data models & task definitions\n│   ├── types.py      # Core type definitions\n│   └── __init__.py   # Package initialization\n├── examples\u002F         # Configuration examples\n│   ├── 
config-full.yaml    # Complete configuration template\n│   └── config-simple.yaml  # Basic configuration template\n├── manager\u002F          # Task and resource management\n│   ├── base.py       # Base management classes\n│   ├── pipeline.py   # Pipeline management\n│   ├── queue.py      # Queue management\n│   ├── shutdown.py   # Shutdown coordination\n│   ├── status.py     # Status management\n│   ├── task.py       # Task management\n│   ├── worker.py     # Worker thread management\n│   └── __init__.py   # Package initialization\n├── refine\u002F           # Query optimization\n│   ├── config.py     # Refine configuration\n│   ├── engine.py     # Optimization engine\n│   ├── generator.py  # Query generation\n│   ├── optimizer.py  # Query optimization\n│   ├── parser.py     # Query parsing\n│   ├── segment.py    # Pattern segmentation\n│   ├── splittability.py # Splittability analysis\n│   ├── strategies.py # Optimization strategies\n│   ├── types.py      # Refine type definitions\n│   └── __init__.py   # Package initialization\n├── search\u002F           # Search engines\n│   ├── client.py     # Search client\n│   ├── provider\u002F     # Provider implementations\n│   │   ├── anthropic.py    # Anthropic provider\n│   │   ├── azure.py        # Azure OpenAI provider\n│   │   ├── base.py         # Base provider class\n│   │   ├── bedrock.py      # AWS Bedrock provider\n│   │   ├── doubao.py       # ByteDance Doubao provider\n│   │   ├── gemini.py       # Google Gemini provider\n│   │   ├── gooeyai.py      # GooeyAI provider\n│   │   ├── openai.py       # OpenAI provider\n│   │   ├── openai_like.py  # OpenAI-compatible provider\n│   │   ├── qianfan.py      # Baidu Qianfan provider\n│   │   ├── registry.py     # Provider registry\n│   │   ├── stabilityai.py  # Stability AI provider\n│   │   ├── vertex.py       # Google Vertex AI provider\n│   │   └── __init__.py     # Package initialization\n│   └── __init__.py   # Package initialization\n├── stage\u002F            # 
Pipeline stages\n│   ├── base.py       # Base stage classes\n│   ├── definition.py # Stage implementations\n│   ├── factory.py    # Stage factory\n│   ├── registry.py   # Stage registry\n│   ├── resolver.py   # Dependency resolver\n│   └── __init__.py   # Package initialization\n├── state\u002F            # State management\n│   ├── builder.py    # Status builder\n│   ├── collector.py  # State collection\n│   ├── display.py    # Display engine\n│   ├── enums.py      # State enumerations\n│   ├── models.py     # State data models\n│   ├── monitor.py    # State monitoring\n│   ├── types.py      # State type definitions\n│   └── __init__.py   # Package initialization\n├── storage\u002F          # Storage and persistence\n│   ├── atomic.py     # Atomic file operations\n│   ├── persistence.py # Result persistence\n│   ├── recovery.py   # Task recovery\n│   ├── shard.py      # NDJSON shard management\n│   ├── snapshot.py   # Snapshot management\n│   └── __init__.py   # Package initialization\n├── tools\u002F            # Tools and utilities\n│   ├── agent.py      # User agent management\n│   ├── balancer.py   # Load balancing\n│   ├── coordinator.py # Resource coordination\n│   ├── credential.py # Credential management\n│   ├── logger.py     # Logging system\n│   ├── patterns.py   # Pattern matching utilities\n│   ├── ratelimit.py  # Rate limiting\n│   ├── resources.py  # Resource pooling\n│   ├── retry.py      # Retry framework\n│   ├── utils.py      # General utilities\n│   └── __init__.py   # Package initialization\n├── .dockerignore     # Docker ignore rules\n├── .gitignore        # Git ignore rules\n├── Dockerfile        # Docker container configuration\n├── entrypoint.sh     # Docker entrypoint script\n├── LICENSE           # License file\n├── main.py           # Entry point and application core\n├── README.md         # English documentation\n├── README.zh-CN.md   # Chinese documentation\n├── requirements.txt  # Python dependencies\n└── __init__.py       # Root 
package initialization\n```\n\n## Advanced Features\n\n1. **Real-time Monitoring**\n   - Task status tracking\n   - Performance metrics collection\n   - Resource usage monitoring\n   - Alert system\n\n2. **Configuration Flexibility**\n   - Multi-provider configuration\n   - Custom search patterns\n   - Adjustable performance parameters\n   - Dynamic resource allocation\n\n3. **Extensibility**\n   - Plugin-style providers\n   - Custom pipeline stages\n   - Configurable monitoring system\n   - Flexible recovery strategies\n\n## Troubleshooting\n\n### **Common Issues**\n\n#### **1. Installation Problems**\n```bash\n# Issue: pip install fails\n# Solution: Upgrade pip and use virtual environment\npython -m pip install --upgrade pip\npython -m venv venv\n\n# Linux\u002FmacOS\nsource venv\u002Fbin\u002Factivate\n\n# Windows\nvenv\\Scripts\\activate\n\npip install -r requirements.txt\n```\n\n#### **2. Configuration Errors**\n```bash\n# Issue: Configuration validation fails\n# Solution: Validate configuration file\npython main.py --validate\n\n# Issue: Missing configuration file\n# Solution: Create from example\ncp examples\u002Fconfig-simple.yaml config.yaml\n```\n\n#### **3. Rate Limiting Issues**\n```bash\n# Issue: Too many API requests\n# Solution: Adjust rate limits in config\nrate_limits:\n  github_api:\n    base_rate: 0.1  # Reduce rate\n    adaptive: true  # Enable adaptive limiting\n```\n\n#### **4. Memory Issues**\n```bash\n# Issue: High memory usage\n# Solution: Reduce batch sizes and thread counts\npipeline:\n  threads:\n    search: 1\n    gather: 2  # Reduce from default\npersistence:\n  batch_size: 25  # Reduce from default 50\n```\n\n#### **5. 
Network Connectivity**\n```yaml\n# Issue: Connection timeouts\n# Solution: Increase timeout values\napi:\n  timeout: 60  # Increase from default 30\n  retries: 5   # Increase retry attempts\n```\n\n### **Debug Mode**\n```bash\n# Enable debug logging\npython main.py --log-level DEBUG\n\n# Save debug output to file\npython main.py --log-level DEBUG > debug.log 2>&1\n```\n\n## Security Considerations\n\n### **Credential Management**\n- **Never commit credentials** to version control\n- **Use environment variables** for sensitive configuration\n- **Rotate credentials regularly** to minimize exposure risk\n- **Implement least privilege** access for API keys\n\n### **Data Protection**\n```yaml\n# Example: Secure credential configuration\nglobal:\n  github_credentials:\n    sessions:\n      - \"${GITHUB_SESSION_1}\"  # Use environment variables\n      - \"${GITHUB_SESSION_2}\"\n    tokens:\n      - \"${GITHUB_TOKEN_1}\"\n```\n\n### **Privacy Considerations**\n- **Respect robots.txt** and website terms of service\n- **Implement rate limiting** to avoid overwhelming target services\n- **Log redaction** automatically removes sensitive data from logs\n- **Data retention policies** should comply with applicable regulations\n\n### **Compliance Guidelines**\n- **Review legal requirements** before using in production\n- **Obtain necessary permissions** for data collection\n- **Implement data anonymization** where required\n- **Document data processing** activities for compliance\n\n## Important Notes\n\n1. **Limitations**\n   - Respect GitHub API usage limits\n   - Configure rate limits appropriately\n   - Mind memory usage\n   - Handle sensitive data carefully\n\n2. 
**Best Practices**\n   - Use appropriate thread counts\n   - Backup results regularly\n   - Monitor error rates\n   - Handle alerts promptly\n\n## TODO & Roadmap\n\n### 🏗️ Core Architecture Improvements\n\n#### Data Source Abstraction\n- [ ] **Abstract Data Source Interface**: Create a unified interface for all data sources\n  - [ ] Define `DataSourceProvider` base class with standard methods (`search`, `gather`, `validate`)\n  - [ ] Implement adapter pattern for different API formats (REST, GraphQL, WebSocket)\n  - [ ] Add configuration schema for data source registration\n  - [ ] Support dynamic data source loading and hot-swapping\n\n#### Stage System Enhancement\n- [ ] **Flexible Stage Definition**: Move beyond the current 4-stage limitation\n  - [ ] Create `StageDefinition` configuration format (YAML\u002FJSON)\n  - [ ] Implement dynamic stage loading from configuration files\n  - [ ] Add stage composition and conditional execution\n  - [ ] Support user-defined stage workflows and DAG customization\n\n#### Handler\u002FProcessor Registration System\n- [ ] **Pluggable Processing Architecture**: Replace fixed function calls with configurable handlers\n  - [ ] Implement `HandlerRegistry` for stage-specific processors\n  - [ ] Create `ProcessorInterface` with standardized input\u002Foutput contracts\n  - [ ] Add handler discovery mechanism (annotation-based or configuration-driven)\n  - [ ] Support middleware chains for request\u002Fresponse processing\n\n### 🌐 Data Source Integrations\n\n#### Network Mapping Platforms\n- [ ] **FOFA Integration**\n  - [ ] Implement FOFA API client with authentication\n  - [ ] Add FOFA-specific query optimization\n\n- [ ] **Shodan Integration**\n  - [ ] Support data querying and extraction from Shodan\n\n#### Generic Web Sources\n- [ ] **Universal Web Scraper**\n  - [ ] Build configurable web scraping engine\n  - [ ] Add support for JavaScript-rendered content (Selenium\u002FPlaywright)\n  - [ ] Implement anti-bot detection bypass 
mechanisms\n  - [ ] Create content extraction rule engine\n\n### 🔧 Framework Enhancements\n\n#### Configuration & Extensibility\n- [ ] **Plugin System**\n  - [ ] Design plugin architecture with lifecycle management\n  - [ ] Create plugin marketplace and discovery mechanism\n  - [ ] Add plugin sandboxing and security validation\n  - [ ] Implement plugin dependency resolution\n\n#### Performance & Scalability\n- [ ] **Distributed Processing**\n  - [ ] Add support for distributed task execution (Celery\u002FRQ)\n  - [ ] Implement horizontal scaling with load balancing\n  - [ ] Create cluster management and node discovery\n  - [ ] Add distributed state synchronization\n\n#### Security\n- [ ] **Enhanced Security Features**\n  - [ ] Implement credential encryption and secure storage\n  - [ ] Create rate limiting policies per data source\n\n### 📊 Monitoring & Analytics\n\n#### Advanced Monitoring\n- [ ] **Real-time Analytics Dashboard**\n  - [ ] Build web-based monitoring interface\n  - [ ] Add real-time metrics visualization\n  - [ ] Implement alerting and notification system\n  - [ ] Create performance profiling and bottleneck analysis\n\n\n\n### 🚀 Advanced Features\n\n#### API & Integration\n- [ ] **RESTful API Server**\n  - [ ] Build comprehensive REST API for external integration\n  - [ ] Implement webhook support for real-time notifications\n  - [ ] Create SDK libraries for popular programming languages\n\n## Contributing\n\nContributions are welcome! Before submitting a pull request, please ensure:\n\n1. Tests are updated\n2. Code follows style guidelines\n3. Documentation is added where necessary\n4. 
All tests pass\n\n### Priority Areas for Contributors\n\n- 🔥 **High Priority**: Data source abstraction and FOFA\u002FShodan integration\n- 🔥 **High Priority**: Stage system flexibility and handler registration\n- 🔥 **High Priority**: Plugin architecture and extensibility framework\n- 🔥 **Medium Priority**: Performance optimization and distributed processing\n- 🔥 **Medium Priority**: Web-based monitoring dashboard\n\n## License\n\nThis project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). See the [LICENSE](LICENSE) file for details.\n\n## Disclaimer\n\n**⚠️ IMPORTANT NOTICE**\n\nThis project is developed **solely for educational and technical research purposes**. Users should exercise caution and responsibility when using this software.\n\n**Key Points:**\n- This software is intended for learning, research, and educational use only\n- Users must comply with all applicable laws and regulations in their jurisdiction\n- Users are responsible for ensuring their usage complies with the terms of service of any third-party platforms or APIs\n- **The project authors do not recommend, encourage, or endorse the use of this software for illegally obtaining others' API keys or credentials**\n- The project authors assume **no responsibility** for any disputes, legal issues, or damages arising from the use of this software\n- Commercial use is strictly prohibited without explicit written permission\n- Users should respect the intellectual property rights and privacy of others\n\n**By using this software, you acknowledge that you have read, understood, and agree to these terms. 
Use at your own risk.**\n\n## Contact\n\nFor questions or other inquiries during usage, please contact the project maintainers through GitHub Issues.","# Harvester - 通用数据采集框架\n\n**📖 中文 | [English](README.md) | 🔗 [更多工具](https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fai-collector)**\n\n一个通用、自适应的数据采集框架，旨在从多个来源全面获取信息，包括 GitHub、网络测绘平台（FOFA、Shodan）以及任意 Web 端点。尽管当前实现以 AI 服务提供商密钥发现作为实际示例，但该框架的架构设计具有高度可扩展性，能够支持多样化的数据采集场景。\n\n---\n\n⭐⭐⭐ **如果这个项目对您有帮助，请给它点个 Star！** 您的支持将激励我们不断改进并添加新功能。\n\n---\n\n## 目录\n\n- [关键特性](#关键特性)\n- [快速入门](#快速入门)\n- [架构](#架构)\n- [目录结构](#目录结构)\n- [故障排除](#故障排除)\n- [贡献](#贡献)\n\n## 项目目标\n\n该系统旨在构建一个**通用数据采集框架**，主要针对以下目标：\n\n- **GitHub**：代码仓库、问题、提交记录及 API 端点\n- **网络测绘平台**：\n  - [FOFA](https:\u002F\u002Ffofa.info) - 网络空间测绘与资产发现\n  - [Shodan](https:\u002F\u002Fwww.shodan.io\u002F) - 互联网设备搜索引擎\n- **任意 Web 端点**：自定义 API、Web 服务及其他数据源\n- **可扩展架构**：基于插件的系统，便于集成新的数据源\n\n## 当前数据源支持情况\n\n| 数据源     | 状态        | 描述                             |\n| ---------- | ------------- | --------------------------------------- |\n| GitHub API | ✅ 已实现     | 完整的 API 集成，并支持限流       |\n| GitHub Web | ✅ 已实现     | 带智能解析的网页爬取              |\n| FOFA       | 🚧 计划中     | 网络空间资产发现集成              |\n| Shodan     | 🚧 计划中     | 物联网及网络设备枚举              |\n| 自定义 API | 🚧 计划中     | 通用 REST\u002FGraphQL API 适配器        |\n\n## 架构\n\n### 分层架构\n\n```mermaid\ngraph TB\n    %% 入口层\n    subgraph Entry[\"入口层\"]\n        CLI[\"CLI 界面\u003Cbr\u002F>(main.py)\"]\n        App[\"应用核心\u003Cbr\u002F>(main.py)\"]\n    end\n\n    %% 管理层\n    subgraph Management[\"管理层\"]\n        TaskMgr[\"任务管理器\u003Cbr\u002F>(manager\u002Ftask.py)\"]\n        Pipeline[\"流水线管理器\u003Cbr\u002F>(manager\u002Fpipeline.py)\"]\n        WorkerMgr[\"工作者管理器\u003Cbr\u002F>(manager\u002Fworker.py)\"]\n        QueueMgr[\"队列管理器\u003Cbr\u002F>(manager\u002Fqueue.py)\"]\n        StatusMgr[\"状态管理器\u003Cbr\u002F>(manager\u002Fstatus.py)\"]\n        
Shutdown[\"关机协调器\u003Cbr\u002F>(manager\u002Fshutdown.py)\"]\n    end\n\n    %% 处理层\n    subgraph Processing[\"处理层\"]\n        StageBase[\"阶段框架\u003Cbr\u002F>(stage\u002Fbase.py)\"]\n        StageImpl[\"阶段实现\u003Cbr\u002F>(stage\u002Fdefinition.py)\"]\n        StageReg[\"阶段注册表\u003Cbr\u002F>(stage\u002Fregistry.py)\"]\n        StageFactory[\"阶段工厂\u003Cbr\u002F>(stage\u002Ffactory.py)\"]\n        StageResolver[\"依赖解析器\u003Cbr\u002F>(stage\u002Fresolver.py)\"]\n    end\n\n    %% 服务层\n    subgraph Service[\"服务层\"]\n        SearchSvc[\"搜索服务\u003Cbr\u002F>(search\u002Fclient.py)\"]\n        SearchProviders[\"搜索提供者\u003Cbr\u002F>(search\u002Fprovider\u002F)\"]\n        RefineSvc[\"查询优化\u003Cbr\u002F>(refine\u002F)\"]\n        RefineEngine[\"优化引擎\u003Cbr\u002F>(refine\u002Fengine.py)\"]\n        RefineOptimizer[\"查询优化器\u003Cbr\u002F>(refine\u002Foptimizer.py)\"]\n    end\n\n    %% 核心领域层\n    subgraph Core[\"核心领域层\"]\n        Models[\"领域模型与任务\u003Cbr\u002F>(core\u002Fmodels.py)\"]\n        Types[\"类型系统\u003Cbr\u002F>(core\u002Ftypes.py)\"]\n        Enums[\"枚举类型\u003Cbr\u002F>(core\u002Fenums.py)\"]\n        Metrics[\"指标\u003Cbr\u002F>(core\u002Fmetrics.py)\"]\n        Auth[\"认证模块\u003Cbr\u002F>(core\u002Fauth.py)\"]\n    end\n\n    %% 基础设施层\n    subgraph Infrastructure[\"基础设施层\"]\n        Config[\"配置\u003Cbr\u002F>(config\u002F)\"]\n        Tools[\"工具与实用程序\u003Cbr\u002F>(tools\u002F)\"]\n        Constants[\"常量\u003Cbr\u002F>(constant\u002F)\"]\n        Storage[\"存储与持久化\u003Cbr\u002F>(storage\u002F)\"]\n    end\n\n    %% 状态管理层\n    subgraph StateLayer[\"状态管理层\"]\n        StateCollector[\"状态收集器\u003Cbr\u002F>(state\u002Fcollector.py)\"]\n        StateDisplay[\"显示引擎\u003Cbr\u002F>(state\u002Fdisplay.py)\"]\n        StateBuilder[\"状态构建器\u003Cbr\u002F>(state\u002Fbuilder.py)\"]\n        StateModels[\"状态模型\u003Cbr\u002F>(state\u002Fmodels.py)\"]\n        StateMonitor[\"状态监控器\u003Cbr\u002F>(state\u002Fmonitor.py)\"]\n        
StateEnums[\"状态枚举\u003Cbr\u002F>(state\u002Fenums.py)\"]\n        StateTypes[\"状态类型\u003Cbr\u002F>(state\u002Ftypes.py)\"]\n    end\n\n    %% 外部系统\n    subgraph External[\"外部系统\"]\n        GitHub[\"GitHub\u003Cbr\u002F>(API + Web)\"]\n        AIServices[\"AI 服务提供商\"]\n        FileSystem[\"文件系统\u003Cbr\u002F>(本地存储)\"]\n    end\n\n    %% 依赖关系（自上而下）\n    Entry --> Management\n    Management --> Processing\n    Processing --> Service\n    Service --> Core\n\n    %% 基础设施依赖\n    Entry -.-> Infrastructure\n    Management -.-> Infrastructure\n    Processing -.-> Infrastructure\n    Service -.-> Infrastructure\n    Core -.-> Infrastructure\n\n    %% 状态管理依赖\n    Entry -.-> StateLayer\n    Management -.-> StateLayer\n\n    %% 外部依赖\n    Service --> External\n    Infrastructure --> External\n```\n\n### 系统架构概览\n\n```mermaid\ngraph TB\n    %% 用户界面层\n    subgraph UserLayer[\"用户界面层\"]\n        User[用户]\n        CLI[命令行界面]\n        ConfigMgmt[配置管理]\n    end\n\n    %% 应用管理层\n    subgraph AppLayer[\"应用管理层\"]\n        MainApp[主应用]\n        TaskManager[任务管理器]\n        StatusManager[状态管理器]\n        ResourceManager[资源管理器]\n        ShutdownManager[关闭管理器]\n    end\n\n    %% 核心管道引擎\n    subgraph PipelineCore[\"管道引擎\"]\n        %% 阶段管理系统\n        subgraph StageSystem[\"阶段管理系统\"]\n            StageRegistry[阶段注册表]\n            DependencyResolver[依赖解析器]\n            StageFactory[阶段工厂]\n        end\n\n        %% 队列管理系统\n        subgraph QueueSystem[\"队列管理系统\"]\n            QueueManager[队列管理器]\n            WorkerManager[工作线程管理器]\n            MonitoringSystem[系统监控]\n        end\n\n        %% 处理阶段\n        subgraph ProcessingStages[\"处理阶段\"]\n            SearchStage[搜索阶段]\n            GatherStage[收集阶段]\n            CheckStage[检查阶段]\n            InspectStage[检测阶段]\n        end\n    end\n\n    %% 搜索提供商生态系统\n    subgraph ProviderEcosystem[\"搜索提供商生态系统\"]\n        ProviderRegistry[提供商注册表]\n        BaseProvider[基础提供商]\n        OpenAIProvider[OpenAI 兼容提供商]\n        CustomProviders[自定义提供商]\n    end\n\n    
%% 高级处理引擎\n    subgraph ProcessingEngines[\"处理引擎\"]\n        SearchClient[搜索客户端]\n\n        %% 查询优化引擎\n        subgraph QueryOptimizer[\"查询优化引擎\"]\n            RefineEngine[细化引擎]\n            RegexParser[正则表达式解析器]\n            SplittabilityAnalyzer[可拆分性分析器]\n            EnumerationOptimizer[枚举优化器]\n            QueryGenerator[查询生成器]\n            OptimizationStrategies[优化策略]\n\n            %% 内部流程\n            RefineEngine --> RegexParser\n            RegexParser --> SplittabilityAnalyzer\n            SplittabilityAnalyzer --> EnumerationOptimizer\n            EnumerationOptimizer --> OptimizationStrategies\n            OptimizationStrategies --> QueryGenerator\n        end\n\n        ValidationEngine[API密钥验证]\n        RecoveryEngine[任务恢复]\n    end\n\n    %% 状态与数据管理\n    subgraph StateManagement[\"状态与数据管理\"]\n        StateCollector[状态收集器]\n        DisplayEngine[显示引擎]\n        StatusBuilder[状态构建器]\n        StateMonitor[状态监视器]\n        PersistenceLayer[持久化层]\n        SnapshotManager[快照管理器]\n        ResultManager[结果管理器]\n    end\n\n    %% 基础设施服务\n    subgraph Infrastructure[\"基础设施服务\"]\n        RateLimiting[速率限制]\n        CredentialMgmt[凭证管理]\n        AgentRotation[用户代理轮换]\n        LoggingSystem[日志系统]\n        RetryFramework[重试框架]\n        ResourcePool[资源池]\n    end\n\n    %% 外部系统\n    subgraph External[\"外部系统\"]\n        GitHubAPI[GitHub API]\n        GitHubWeb[GitHub Web界面]\n        AIServiceAPIs[AI服务APIs]\n        FileSystem[本地文件系统]\n    end\n\n    %% 用户交互\n    User --> CLI\n    User --> ConfigMgmt\n    CLI --> MainApp\n    ConfigMgmt --> MainApp\n\n    %% 应用流程\n    MainApp --> TaskManager\n    MainApp --> StatusManager\n    MainApp --> ResourceManager\n    MainApp --> ShutdownManager\n    TaskManager --> StageRegistry\n    TaskManager --> QueueManager\n\n    %% 阶段管理流程\n    StageRegistry --> DependencyResolver\n    StageRegistry --> StageFactory\n    DependencyResolver --> ProcessingStages\n    StageFactory --> ProcessingStages\n\n    %% 队列管理流程\n    QueueManager --> 
WorkerManager\n    QueueManager --> MonitoringSystem\n    WorkerManager --> ProcessingStages\n\n    %% 阶段依赖关系（管道）\n    SearchStage --> GatherStage\n    GatherStage --> CheckStage\n    CheckStage --> InspectStage\n\n    %% 处理引擎集成\n    SearchStage --> SearchClient\n    SearchStage --> QueryOptimizer\n    CheckStage --> ValidationEngine\n    ProcessingStages --> RecoveryEngine\n\n    %% 提供商集成\n    SearchClient --> ProviderRegistry\n    ProviderRegistry --> BaseProvider\n    BaseProvider --> OpenAIProvider\n    BaseProvider --> CustomProviders\n\n    %% 状态管理集成\n    ProcessingStages --> StateCollector\n    QueueManager --> StateCollector\n    StateCollector --> DisplayEngine\n    StateCollector --> StatusBuilder\n    StateMonitor --> DisplayEngine\n    ProcessingStages --> PersistenceLayer\n    PersistenceLayer --> SnapshotManager\n    PersistenceLayer --> ResultManager\n\n    %% 基础设施集成\n    SearchClient -.-> RateLimiting\n    ResourceManager -.-> CredentialMgmt\n    ResourceManager -.-> AgentRotation\n    MainApp -.-> LoggingSystem\n    ProcessingStages -.-> RetryFramework\n    Infrastructure -.-> ResourcePool\n\n    %% 外部连接\n    SearchClient --> GitHubAPI\n    SearchClient --> GitHubWeb\n    ValidationEngine --> AIServiceAPIs\n    PersistenceLayer --> FileSystem\n\n    %% 样式\n    classDef userClass fill:#e3f2fd,stroke:#1976d2,stroke-width:2px\n    classDef appClass fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px\n    classDef coreClass fill:#e8f5e8,stroke:#388e3c,stroke-width:3px\n    classDef providerClass fill:#fff3e0,stroke:#f57c00,stroke-width:2px\n    classDef engineClass fill:#fce4ec,stroke:#c2185b,stroke-width:2px\n    classDef stateClass fill:#f1f8e9,stroke:#689f38,stroke-width:2px\n    classDef infraClass fill:#f5f5f5,stroke:#616161,stroke-width:2px\n    classDef externalClass fill:#ffebee,stroke:#d32f2f,stroke-width:2px\n\n    class User,CLI,ConfigMgmt userClass\n    class MainApp,TaskManager,StatusManager,ResourceManager,ShutdownManager appClass\n    class 
StageRegistry,DependencyResolver,StageFactory,QueueManager,WorkerManager,MonitoringSystem,SearchStage,GatherStage,CheckStage,InspectStage coreClass\n    class ProviderRegistry,BaseProvider,OpenAIProvider,CustomProviders providerClass\n    class SearchClient,QueryOptimizer,ValidationEngine,RecoveryEngine engineClass\n    class StateCollector,StateMonitor,DisplayEngine,StatusBuilder,PersistenceLayer,SnapshotManager,ResultManager stateClass\n    class RateLimiting,CredentialMgmt,AgentRotation,LoggingSystem,RetryFramework,ResourcePool infraClass\n    class GitHubAPI,GitHubWeb,AIServiceAPIs,FileSystem externalClass\n```\n\n该项目采用分层架构，核心组件如下：\n\n### 多阶段处理流程\n\n```mermaid\nsequenceDiagram\n    participant CLI as CLI\n    participant App as Application\n    participant TM as TaskManager\n    participant Pipeline as Pipeline\n    participant Search as SearchStage\n    participant Gather as GatherStage\n    participant Check as CheckStage\n    participant Inspect as InspectStage\n    participant Storage as Storage\n    participant Monitor as StatusManager\n\n    %% 初始化阶段\n    CLI->>App: 1. 启动应用\n    App->>App: 2. 加载配置\n    App->>TM: 3. 创建任务管理器\n    TM->>TM: 4. 初始化服务提供者\n    TM->>Pipeline: 5. 创建流水线\n    Pipeline->>Search: 6. 注册搜索阶段\n    Pipeline->>Gather: 7. 注册收集阶段\n    Pipeline->>Check: 8. 注册检查阶段\n    Pipeline->>Inspect: 9. 注册检测阶段\n    App->>Monitor: 10. 启动状态管理器\n\n    %% 处理阶段\n    loop 多阶段处理\n        TM->>Search: 11. 提交搜索任务\n        Search->>Search: 12. 带优化的 GitHub 查询\n        Search->>Gather: 13. 转发搜索结果\n\n        Gather->>Gather: 14. 获取详细信息\n        Gather->>Check: 15. 转发提取的密钥\n\n        Check->>Check: 16. 验证 API 密钥\n        Check->>Inspect: 17. 转发有效密钥\n\n        Inspect->>Inspect: 18. 检测 API 能力\n        Inspect->>Storage: 19. 保存结果\n\n        Pipeline->>Monitor: 20. 更新状态\n        Monitor->>App: 21. 
显示进度\n    end\n\n    %% 恢复与持久化\n    loop 后台操作\n        Storage->>Storage: 自动保存结果\n        Storage->>Storage: 创建快照\n        Pipeline->>Pipeline: 任务恢复\n        Monitor->>Monitor: 收集指标\n    end\n\n    %% 完成阶段\n    Pipeline->>Pipeline: 22. 检查是否完成\n    Pipeline->>Storage: 23. 最终持久化\n    Pipeline->>Monitor: 24. 最终状态报告\n    App->>TM: 25. 优雅关闭\n    TM->>Storage: 26. 保存状态\n```\n\n## 架构层次\n\n### 1. **表现层**\n   - **CLI 界面** (`main.py`)：命令行入口，包含参数解析和应用生命周期管理\n   - **配置系统** (`config\u002F`)：基于 YAML 的配置管理，支持验证和模式定义\n\n### 2. **应用层**\n   - **应用核心** (`main.py`)：主应用生命周期及编排逻辑\n   - **任务管理** (`manager\u002Ftask.py`)：服务提供者协调与任务分发\n   - **资源协调** (`tools\u002Fcoordinator.py`)：全局资源管理和协调\n   - **关闭管理** (`manager\u002Fshutdown.py`)：优雅关闭协调\n   - **状态管理** (`manager\u002Fstatus.py`)：应用状态管理和协调\n   - **工作线程管理** (`manager\u002Fworker.py`)：工作线程管理和扩展\n   - **队列管理** (`manager\u002Fqueue.py`)：多队列协调与管理\n\n### 3. **业务服务层**\n   - **流水线引擎** (`manager\u002Fpipeline.py`)：基于 DAG 执行的多阶段处理编排\n   - **阶段系统** (`stage\u002F`)：可插拔的处理阶段，支持依赖解析和工厂模式\n   - **搜索服务** (`search\u002F`)：GitHub 代码搜索，提供抽象层和优化功能\n   - **查询优化** (`refine\u002F`)：基于策略模式和数学基础的智能查询优化\n\n### 4. **领域层**\n   - **核心模型与任务** (`core\u002Fmodels.py`)：业务领域对象、数据结构和任务定义\n   - **类型系统** (`core\u002Ftypes.py`)：接口定义和契约\n   - **业务枚举** (`core\u002Fenums.py`)：领域枚举和常量\n   - **度量与分析** (`core\u002Fmetrics.py`)：性能测量和 KPI 跟踪\n   - **认证** (`core\u002Fauth.py`)：认证和授权逻辑\n   - **自定义异常** (`core\u002Fexceptions.py`)：领域特定的异常处理\n\n### 5. 
**基础设施层**\n   - **存储与持久化** (`storage\u002F`)：结果存储、恢复和快照管理\n     - **原子操作** (`storage\u002Fatomic.py`)：带 fsync 的原子文件操作\n     - **结果管理** (`storage\u002Fpersistence.py`)：多格式结果持久化\n     - **任务恢复** (`storage\u002Frecovery.py`)：任务恢复机制\n     - **分片管理** (`storage\u002Fshard.py`)：带轮转的 NDJSON 分片管理\n     - **快照管理** (`storage\u002Fsnapshot.py`)：备份和恢复功能\n   - **工具与实用程序** (`tools\u002F`)：基础设施工具和实用程序\n     - **日志系统** (`tools\u002Flogger.py`)：带 API 密钥脱敏的结构化日志\n     - **限流** (`tools\u002Fratelimit.py`)：基于令牌桶算法的自适应速率控制\n     - **负载均衡** (`tools\u002Fbalancer.py`)：资源分配策略\n     - **凭证管理** (`tools\u002Fcredential.py`)：安全的凭证轮换和管理\n     - **代理管理** (`tools\u002Fagent.py`)：用于网页抓取的用户代理轮换\n     - **模式匹配** (`tools\u002Fpatterns.py`)：模式匹配工具和辅助函数\n     - **重试框架** (`tools\u002Fretry.py`)：统一的重试机制，支持退避策略\n     - **资源池** (`tools\u002Fresources.py`)：资源池管理和优化\n\n### 6. **状态管理层**\n   - **状态采集** (`state\u002Fcollector.py`)：系统指标收集和聚合\n   - **显示引擎** (`state\u002Fdisplay.py`)：用户友好的进度可视化和格式化\n   - **状态构建器** (`state\u002Fbuilder.py`)：状态数据的构建和转换\n   - **状态模型** (`state\u002Fmodels.py`)：监控数据结构和指标\n   - **状态监控** (`state\u002Fmonitor.py`)：实时状态监控和跟踪\n   - **状态枚举** (`state\u002Fenums.py`)：状态相关枚举和常量\n   - **状态类型** (`state\u002Ftypes.py`)：状态类型定义和接口\n\n## 处理阶段\n\n系统实现了**4阶段流水线**，用于全面的数据采集与验证：\n\n1. **搜索阶段**（`stage\u002Fdefinition.py:SearchStage`）：\n   - 带有高级查询优化的智能GitHub代码搜索\n   - 多提供商搜索支持（API + Web）\n   - 使用数学优化算法进行查询精炼\n   - 具有自适应限流功能的速率限制搜索执行\n\n2. **收集阶段**（`stage\u002Fdefinition.py:GatherStage`）：\n   - 从搜索结果中获取详细信息\n   - 内容提取与解析\n   - 模式匹配以识别关键信息\n   - 结构化数据的收集与归一化\n\n3. **检查阶段**（`stage\u002Fdefinition.py:CheckStage`）：\n   - 根据实际服务端点验证API密钥\n   - 身份验证与能力测试\n   - 服务可用性及响应验证\n   - 错误处理与重试机制\n\n4. **检测阶段**（`stage\u002Fdefinition.py:InspectStage`）：\n   - 针对已验证密钥的API能力检查\n   - 模型枚举与功能检测\n   - 服务限制与配额分析\n   - 全面的能力画像生成\n\n## 高级查询优化引擎\n\n系统配备了一个基于数学原理的复杂**查询优化引擎**：\n\n### 核心组件\n\n1. **正则表达式解析器**：\n   - 支持复杂语法的高级正则模式解析\n   - 处理转义字符、字符类和量词\n   - 将模式转换为可分析的段结构\n\n2. 
**可拆分性分析器**：\n   - 对模式可分割性的数学分析\n   - 递归深度限制以确保安全\n   - 值阈分析以判断优化可行性\n   - 资源成本估算以控制性能\n\n3. **枚举优化器**：\n   - 智能选择枚举策略\n   - 多维度优化（深度、广度、值）\n   - 组合分析以选择最优段\n   - 拓扑排序以解决依赖关系\n\n4. **查询生成器**：\n   - 根据枚举策略生成优化后的查询变体\n   - 支持可配置的枚举深度\n   - 产生数学上最优的查询分布\n   - 保持查询语义等价性\n\n### 优化算法\n\n- **数学建模**：利用数学原理分析正则模式\n- **枚举策略**：智能选择最优的枚举深度和组合\n- **资源管理**：通过智能限制避免资源耗尽\n- **性能优化**：单例模式确保最佳内存使用\n\n## 支持的数据来源与用例\n\n### 🔍 当前实现（AI服务发现）\n- **OpenAI及其兼容接口**\n- **Anthropic Claude**\n- **Azure OpenAI**\n- **Google Gemini**\n- **AWS Bedrock**\n- **GooeyAI**\n- **Stability AI**\n- **百度千帆（Qianfan）**\n- **智谱AI**\n- **自定义提供商**\n\n### 🌐 计划中的数据来源\n- **[FOFA](https:\u002F\u002Ffofa.info)**：网络空间资产发现与网络映射\n- **[Shodan](https:\u002F\u002Fwww.shodan.io\u002F)**：互联网连接设备枚举\n- **自定义REST API**：通用API集成框架\n- **GraphQL端点**：灵活的基于查询的数据获取\n- **网页爬取**：JavaScript渲染内容及动态站点\n- **数据库连接器**：直接查询数据库的能力\n\n### 📊 潜在用例\n- **数据挖掘**：大规模信息提取与分析\n\n## 关键特性\n\n### 🌐 通用数据采集\n- **多源支持**：GitHub、FOFA、Shodan及自定义端点\n- **自适应查询引擎**：针对不同数据源的智能优化\n- **协议无关**：支持REST、GraphQL、WebSocket及网页爬取\n- **速率限制**：按来源的智能速率控制与配额管理\n\n### 🏗️ 先进架构\n- **动态阶段系统**：可配置的处理流水线，采用DAG执行\n- **插件架构**：可扩展的框架，支持自定义数据源和处理器\n- **依赖关系解析**：自动阶段排序与依赖管理\n- **处理器注册**：可插拔处理器，实现灵活的数据转换\n\n### ⚡ 高性能\n- **异步处理**：多线程任务执行，配合智能队列\n- **自适应负载均衡**：根据工作负载动态分配资源\n- **查询优化**：通过数学建模实现最优搜索策略\n- **资源监控**：实时性能跟踪与瓶颈检测\n\n### 🛡️ 企业级就绪\n- **容错性**：全面的错误处理、重试机制及恢复能力\n- **状态持久化**：队列状态恢复与优雅关机能力\n- **安全性**：凭据管理、API密钥脱敏及安全存储\n- **监控**：实时分析、告警及性能可视化\n\n## 系统要求\n\n### **依赖项**\n- **Python**：3.10及以上\n- **库**：`PyYAML`\n- **可选**：`uvloop`（Linux\u002FmacOS性能提升）\n- **开发**：`pytest`、`black`、`mypy`（供贡献者使用）\n\n## 快速入门\n\n> 📚 如需全面的文档、教程和高级使用指南，请访问 [DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fwzdnzd\u002Fharvester)\n\n1. **安装**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fharvester.git\n   cd harvester\n   pip install -r requirements.txt\n   ```\n\n2. 
**配置**\n\n  > 请选择以下任一方法来创建您的配置文件\n\n   **方法一：生成默认配置**\n   ```bash\n   python main.py --create-config\n   ```\n\n   **方法二：从示例复制**\n   ```bash\n   # 对于基础配置\n   cp examples\u002Fconfig-simple.yaml config.yaml\n\n   # 对于包含所有选项的完整配置\n   cp examples\u002Fconfig-full.yaml config.yaml\n   ```\n\n   编辑配置文件：\n   - 设置您的 GitHub 会话令牌或 API 密钥\n   - 配置提供商搜索模式\n   - 调整速率限制和线程数\n\n   ### 配置指南\n\n   系统提供了两种配置模板：\n\n   1. **基础配置** - 适合快速入门：\n      ```yaml\n      # 全局应用设置\n      global:\n        workspace: \".\u002Fdata\"  # 工作目录\n        github_credentials:\n          sessions:\n            - \"your_github_session_here\"  # GitHub 会话令牌\n          strategy: \"round_robin\"  # 负载均衡策略\n\n      # 流水线阶段配置\n      pipeline:\n        threads:\n          search: 1    # 搜索线程（保持较低）\n          gather: 4   # 获取线程\n          check: 2     # 验证线程\n          inspect: 1    # API 能力检测线程\n\n      # 系统监控设置\n      monitoring:\n        update_interval: 2.0    # 监控更新间隔\n        error_threshold: 0.1    # 错误率阈值\n\n      # 数据持久化配置\n      persistence:\n        auto_restore: true      # 启动时自动恢复状态\n        shutdown_timeout: 30    # 关机超时时间（秒）\n\n      # 全局速率限制配置\n      ratelimits:\n        github_web:\n          base_rate: 0.5       # 基础速率（每秒请求数）\n          burst_limit: 2       # 最大突发数量\n          adaptive: true       # 启用自适应速率限制\n\n      # 提供商任务配置\n      tasks:\n        - name: \"openai\"         # 提供商名称\n          enabled: true          # 是否启用该提供商\n          provider_type: \"openai\"\n          use_api: false         # 是否使用 GitHub API 进行搜索\n\n          # 流水线阶段设置\n          stages:\n            search: true         # 启用搜索阶段\n            gather: true         # 启用获取阶段\n            check: true          # 启用验证阶段\n            inspect: true        # 启用 API 能力检测\n\n          # 模式匹配配置\n          patterns:\n            key_pattern: \"sk(?:-proj)?-[a-zA-Z0-9]{20}T3BlbkFJ[a-zA-Z0-9]{20}\"\n\n          # 搜索条件\n          conditions:\n            - query: '\"T3BlbkFJ\"'\n      ```\n\n   2. 
**完整配置** - 包含所有高级选项：\n      - `display`: 显示与监控设置\n      - `global`: 全局系统配置\n      - `pipeline`: 流水线阶段配置\n      - `monitoring`: 系统监控参数\n      - `persistence`: 数据持久化设置\n      - `worker`: 工作线程池配置\n      - `ratelimits`: 速率限制设置\n      - `tasks`: 提供商任务配置\n\n   ### 高级任务配置\n\n   > 📋 **如需完整的配置示例，请参考：**\n   > - [`examples\u002Fconfig-full.yaml`](examples\u002Fconfig-full.yaml) - 包含所有可用选项的综合配置\n   > - [`examples\u002Fconfig-simple.yaml`](examples\u002Fconfig-simple.yaml) - 适用于快速入门的基础配置\n\n   `tasks` 部分是配置的核心，定义了要搜索哪些提供商以及如何处理它们。有关完整的任务配置，请参阅上述基础配置示例。\n\n   #### 主要配置选项\n\n   - **`name`**: 任务的唯一标识符\n   - **`provider_type`**: 决定验证方式（`openai`、`openai_like`、`anthropic`、`gemini` 等）\n   - **`api`**: 用于密钥验证的 API 端点配置\n   - **`patterns.key_pattern`**: 用于识别有效 API 密钥的正则表达式模式\n   - **`conditions`**: 用于查找潜在密钥的搜索查询\n   - **`stages`**: 启用或禁用特定的处理阶段\n   - **`extras.directory`**: 自定义结果输出目录\n\n3. **运行**\n   ```bash\n   python main.py                  # 使用默认配置\n   python main.py -c custom.yaml   # 使用自定义配置\n   python main.py --validate       # 验证配置\n   python main.py --log-level DEBUG # 开启调试日志\n   ```\n\n## 目录结构\n\n```\nharvester\u002F\n├── config\u002F           # 配置管理\n│   ├── accessor.py   # 配置访问工具\n│   ├── defaults.py   # 默认配置值\n│   ├── loader.py     # 配置加载\n│   ├── schemas.py    # 配置模式\n│   ├── validator.py  # 配置验证\n│   └── __init__.py   # 包初始化\n├── constant\u002F         # 系统常量\n│   ├── monitoring.py # 监控常量\n│   ├── runtime.py    # 运行时常量\n│   ├── search.py     # 搜索常量\n│   ├── system.py     # 系统常量\n│   └── __init__.py   # 包初始化\n├── core\u002F             # 核心领域模型\n│   ├── auth.py       # 认证\n│   ├── enums.py      # 系统枚举\n│   ├── exceptions.py # 自定义异常\n│   ├── metrics.py    # 性能指标\n│   ├── models.py     # 核心数据模型及任务定义\n│   ├── types.py      # 核心类型定义\n│   └── __init__.py   # 包初始化\n├── examples\u002F         # 配置示例\n│   ├── config-full.yaml    # 完整配置模板\n│   └── config-simple.yaml  # 基本配置模板\n├── manager\u002F          # 任务与资源管理\n│   ├── base.py       # 基础管理类\n│   ├── pipeline.py   # 管道管理\n│   
├── queue.py      # 队列管理\n│   ├── shutdown.py   # 关机协调\n│   ├── status.py     # 状态管理\n│   ├── task.py       # 任务管理\n│   ├── worker.py     # 工作线程管理\n│   └── __init__.py   # 包初始化\n├── refine\u002F           # 查询优化\n│   ├── config.py     # Refine 配置\n│   ├── engine.py     # 优化引擎\n│   ├── generator.py  # 查询生成\n│   ├── optimizer.py  # 查询优化\n│   ├── parser.py     # 查询解析\n│   ├── segment.py    # 模式分割\n│   ├── splittability.py # 可拆分性分析\n│   ├── strategies.py # 优化策略\n│   ├── types.py      # Refine 类型定义\n│   └── __init__.py   # 包初始化\n├── search\u002F           # 搜索引擎\n│   ├── client.py     # 搜索客户端\n│   ├── provider\u002F     # 提供商实现\n│   │   ├── anthropic.py    # Anthropic 提供商\n│   │   ├── azure.py        # Azure OpenAI 提供商\n│   │   ├── base.py         # 基础提供商类\n│   │   ├── bedrock.py      # AWS Bedrock 提供商\n│   │   ├── doubao.py       # ByteDance Doubao 提供商\n│   │   ├── gemini.py       # Google Gemini 提供商\n│   │   ├── gooeyai.py      # GooeyAI 提供商\n│   │   ├── openai.py       # OpenAI 提供商\n│   │   ├── openai_like.py  # 兼容 OpenAI 的提供商\n│   │   ├── qianfan.py      # Baidu Qianfan 提供商\n│   │   ├── registry.py     # 提供商注册表\n│   │   ├── stabilityai.py  # Stability AI 提供商\n│   │   ├── vertex.py       # Google Vertex AI 提供商\n│   │   └── __init__.py     # 包初始化\n│   └── __init__.py   # 包初始化\n├── stage\u002F            # 管道阶段\n│   ├── base.py       # 基础阶段类\n│   ├── definition.py # 阶段实现\n│   ├── factory.py    # 阶段工厂\n│   ├── registry.py   # 阶段注册表\n│   ├── resolver.py   # 依赖项解析器\n│   └── __init__.py   # 包初始化\n├── state\u002F            # 状态管理\n│   ├── builder.py    # 状态构建器\n│   ├── collector.py  # 状态收集\n│   ├── display.py    # 显示引擎\n│   ├── enums.py      # 状态枚举\n│   ├── models.py     # 状态数据模型\n│   ├── monitor.py    # 状态监控\n│   ├── types.py      # 状态类型定义\n│   └── __init__.py   # 包初始化\n├── storage\u002F          # 存储与持久化\n│   ├── atomic.py     # 原子文件操作\n│   ├── persistence.py # 结果持久化\n│   ├── recovery.py   # 任务恢复\n│   ├── shard.py      # NDJSON 分片管理\n│   ├── snapshot.py   # 快照管理\n│   └── 
__init__.py   # 包初始化\n├── tools\u002F            # 工具与实用程序\n│   ├── agent.py      # 用户代理管理\n│   ├── balancer.py   # 负载均衡\n│   ├── coordinator.py # 资源协调\n│   ├── credential.py # 凭证管理\n│   ├── logger.py     # 日志系统\n│   ├── patterns.py   # 模式匹配工具\n│   ├── ratelimit.py  # 速率限制\n│   ├── resources.py  # 资源池\n│   ├── retry.py      # 重试框架\n│   ├── utils.py      # 通用工具\n│   └── __init__.py   # 包初始化\n├── .dockerignore     # Docker 忽略规则\n├── .gitignore        # Git 忽略规则\n├── Dockerfile        # Docker 容器配置\n├── entrypoint.sh     # Docker 入口脚本\n├── LICENSE           # 许可证文件\n├── main.py           # 入口点及应用核心\n├── README.md         # 英文文档\n├── README.zh-CN.md   # 中文文档\n├── requirements.txt  # Python 依赖项\n└── __init__.py       # 根包初始化\n```\n\n## 高级功能\n\n1. **实时监控**\n   - 任务状态跟踪\n   - 性能指标收集\n   - 资源使用情况监控\n   - 警报系统\n\n2. **配置灵活性**\n   - 多提供商配置\n   - 自定义搜索模式\n   - 可调整的性能参数\n   - 动态资源分配\n\n3. **可扩展性**\n   - 插件式提供商\n   - 自定义管道阶段\n   - 可配置的监控系统\n   - 灵活的恢复策略\n\n## 故障排除\n\n### **常见问题**\n\n#### **1. 安装问题**\n```bash\n# 问题：pip install 失败\n# 解决方案：升级 pip 并使用虚拟环境\npython -m pip install --upgrade pip\npython -m venv venv\n\n# Linux\u002FmacOS\nsource venv\u002Fbin\u002Factivate\n\n# Windows\nvenv\\Scripts\\activate\n\npip install -r requirements.txt\n```\n\n#### **2. 配置错误**\n```bash\n# 问题：配置验证失败\n# 解决方案：验证配置文件\npython main.py --validate\n\n# 问题：缺少配置文件\n# 解决方案：从示例创建\ncp examples\u002Fconfig-simple.yaml config.yaml\n```\n\n#### **3. 速率限制问题**\n```yaml\n# 问题：API 请求过多\n# 解决方案：在配置中调整速率限制\nratelimits:\n  github_api:\n    base_rate: 0.1  # 降低速率\n    adaptive: true  # 启用自适应限制\n```\n\n#### **4. 内存问题**\n```yaml\n# 问题：内存占用过高\n# 解决方案：减少批处理大小和线程数\npipeline:\n  threads:\n    search: 1\n    gather: 2  # 从默认值减少\npersistence:\n  batch_size: 25  # 从默认值 50 减少\n```\n\n#### **5. 
网络连接**\n```yaml\n# 问题：连接超时\n# 解决方案：增加超时时间\napi:\n  timeout: 60  # 从默认值 30 增加\n  retries: 5   # 增加重试次数\n```\n\n### **调试模式**\n```bash\n# 启用调试日志记录\npython main.py --log-level DEBUG\n\n# 将调试输出保存到文件\npython main.py --log-level DEBUG > debug.log 2>&1\n```\n\n## 安全考虑\n\n### **凭证管理**\n- **切勿将凭证提交到版本控制系统**\n- **使用环境变量**存储敏感配置\n- **定期轮换凭证**以降低暴露风险\n- **为 API 密钥实施最小权限原则**\n\n### **数据保护**\n```yaml\n# 示例：安全的凭证配置\nglobal:\n  github_credentials:\n    sessions:\n      - \"${GITHUB_SESSION_1}\"  # 使用环境变量\n      - \"${GITHUB_SESSION_2}\"\n    tokens:\n      - \"${GITHUB_TOKEN_1}\"\n```\n\n### **隐私考量**\n- **尊重 robots.txt 文件**及网站的服务条款\n- **实施速率限制**以避免对目标服务造成过大压力\n- **日志脱敏**可自动从日志中移除敏感数据\n- **数据保留策略**应符合相关法律法规\n\n### **合规指南**\n- **在生产环境中使用前审查法律要求**\n- **获取数据收集所需的必要许可**\n- **在必要时实施数据匿名化**\n- **记录数据处理活动以确保合规性**\n\n## 重要提示\n\n1. **局限性**\n   - 遵守 GitHub API 的使用限制\n   - 合理配置速率限制\n   - 注意内存使用情况\n   - 小心处理敏感数据\n\n2. **最佳实践**\n   - 使用合适的线程数\n   - 定期备份结果\n   - 监控错误率\n   - 及时处理告警\n\n## 待办事项与路线图\n\n### 🏗️ 核心架构改进\n\n#### 数据源抽象\n- [ ] **抽象数据源接口**：为所有数据源创建统一接口\n  - [ ] 定义包含标准方法（`search`、`gather`、`validate`）的 `DataSourceProvider` 基类\n  - [ ] 实现针对不同 API 格式（REST、GraphQL、WebSocket）的适配器模式\n  - [ ] 添加用于数据源注册的配置模式\n  - [ ] 支持动态加载和热插拔数据源\n\n#### 阶段系统增强\n- [ ] **灵活的阶段定义**：突破当前 4 阶段的限制\n  - [ ] 创建 `StageDefinition` 配置格式（YAML\u002FJSON）\n  - [ ] 实现从配置文件中动态加载阶段\n  - [ ] 增加阶段组合与条件执行功能\n  - [ ] 支持用户自定义阶段工作流及 DAG 定制\n\n#### 处理器注册系统\n- [ ] **可插拔的处理架构**：用可配置处理器取代固定函数调用\n  - [ ] 实施适用于各阶段的 `HandlerRegistry`\n  - [ ] 创建具有标准化输入输出契约的 `ProcessorInterface`\n  - [ ] 添加处理器发现机制（基于注解或配置驱动）\n  - [ ] 支持请求\u002F响应处理的中间件链\n\n### 🌐 数据源集成\n\n#### 网络测绘平台\n- [ ] **FOFA 集成**\n  - [ ] 实现带有认证的 FOFA API 客户端\n  - [ ] 添加针对 FOFA 的查询优化功能\n\n- [ ] **Shodan 集成**\n  - [ ] 支持从 Shodan 查询和提取数据\n\n#### 通用网络来源\n- [ ] **通用网页爬虫**\n  - [ ] 构建可配置的网页抓取引擎\n  - [ ] 增加对 JavaScript 渲染内容的支持（Selenium\u002FPlaywright）\n  - [ ] 实现反机器人检测绕过机制\n  - [ ] 创建内容提取规则引擎\n\n### 🔧 框架增强\n\n#### 配置与扩展性\n- [ ] **插件系统**\n  - [ ] 设计具备生命周期管理的插件架构\n  - [ ] 创建插件市场及发现机制\n  - [ ] 
增加插件沙箱隔离与安全验证\n  - [ ] 实现插件依赖关系解析\n\n#### 性能与可扩展性\n- [ ] **分布式处理**\n  - [ ] 增加对分布式任务执行的支持（Celery\u002FRQ）\n  - [ ] 实现负载均衡的水平扩展\n  - [ ] 创建集群管理和节点发现功能\n  - [ ] 增加分布式状态同步机制\n\n#### 安全性\n- [ ] **增强的安全特性**\n  - [ ] 实现凭证加密与安全存储\n  - [ ] 为每个数据源制定速率限制策略\n\n### 📊 监控与分析\n\n#### 高级监控\n- [ ] **实时分析仪表盘**\n  - [ ] 构建基于 Web 的监控界面\n  - [ ] 增加实时指标可视化功能\n  - [ ] 实现告警与通知系统\n  - [ ] 创建性能剖析与瓶颈分析功能\n\n### 🚀 高级功能\n\n#### API 与集成\n- [ ] **RESTful API 服务器**\n  - [ ] 构建全面的 REST API 以供外部集成\n  - [ ] 实现支持实时通知的 Webhook 功能\n  - [ ] 创建适用于主流编程语言的 SDK 库\n\n## 贡献说明\n\n欢迎贡献！在提交拉取请求之前，请确保：\n\n1. 更新测试用例\n2. 代码符合编码规范\n3. 在必要时添加文档\n4. 所有测试通过\n\n### 贡献者优先领域\n\n- 🔥 **高优先级**：数据源抽象及 FOFA\u002FShodan 集成\n- 🔥 **高优先级**：阶段系统灵活性与处理器注册\n- 🔥 **高优先级**：插件架构与扩展性框架\n- 🔥 **中优先级**：性能优化与分布式处理\n- 🔥 **中优先级**：基于 Web 的监控仪表盘\n\n## 许可证\n\n本项目采用知识共享署名-非商业性使用 4.0 国际许可协议（CC BY-NC 4.0）。详细信息请参阅 [LICENSE](LICENSE) 文件。\n\n## 免责声明\n\n**⚠️ 重要提示**\n\n本项目仅**用于教育和技术研究目的**。用户在使用该软件时应谨慎并承担相应责任。\n\n**关键点：**\n- 本软件仅供学习、研究和教育用途\n- 用户必须遵守其所在司法管辖区的所有适用法律和法规\n- 用户有责任确保其使用行为符合任何第三方平台或 API 的服务条款\n- **项目作者不建议、鼓励或认可使用本软件非法获取他人的 API 密钥或凭证**\n- 项目作者对因使用本软件而产生的任何争议、法律问题或损害概不负责\n- 未经明确书面许可，严禁商业使用\n- 用户应尊重他人的知识产权和隐私权\n\n**使用本软件即表示您已阅读、理解并同意上述条款。请自行承担使用风险。**\n\n## 联系方式\n\n如在使用过程中有任何疑问或其他咨询，请通过 GitHub Issues 联系项目维护人员。","# Harvester 快速上手指南\n\nHarvester 是一个通用且自适应的数据采集框架，专为从 GitHub、网络空间测绘平台（如 FOFA、Shodan）及任意 Web 端点获取综合信息而设计。虽然当前实现以发现 AI 服务提供商密钥为例，但其架构具有高度可扩展性，支持多样化的数据采集场景。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux、macOS 或 Windows (推荐 WSL2)\n*   **Python 版本**：Python 3.8 或更高版本\n*   **依赖管理**：pip 或 poetry\n*   **网络环境**：能够访问 GitHub API 及目标数据源（若在国内使用，建议配置代理或使用加速镜像）\n\n## 安装步骤\n\n### 1. 克隆项目\n从 GitHub 克隆仓库到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fharvester.git\ncd harvester\n```\n\n> **提示**：如果直接克隆速度较慢，可为 Git 配置代理，或使用您信任的镜像服务加速。\n\n### 2. 
创建虚拟环境（推荐）\n为了避免依赖冲突，建议创建独立的 Python 虚拟环境：\n\n```bash\npython -m venv venv\nsource venv\u002Fbin\u002Factivate  # Linux\u002FmacOS\n# 或在 Windows 上执行: venv\\Scripts\\activate\n```\n\n### 3. 安装依赖\n安装项目所需的 Python 包。如果在国内遇到下载慢的问题，推荐使用清华或阿里镜像源：\n\n```bash\n# 使用默认源\npip install -r requirements.txt\n\n# 推荐使用清华镜像源（国内加速）\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\nHarvester 主要通过命令行界面 (CLI) 进行操作。以下是最简单的使用示例，演示如何启动数据采集任务。\n\n### 配置凭证（可选但推荐）\n对于需要认证的数据源（如 GitHub API），建议先配置凭证。可先运行 `python main.py --create-config` 生成默认的 `config.yaml`，或从 `examples\u002F` 目录复制模板，然后在 `global.github_credentials` 中填入会话令牌或 API 密钥；敏感值也可通过环境变量注入（配置文件中以 `${VAR}` 形式引用）：\n\n```bash\nexport GITHUB_TOKEN_1=\"your_github_token_here\"\n# export FOFA_KEY=\"your_fofa_key_here\"  # 未来支持\n```\n\n### 运行采集任务\n使用 `main.py` 启动框架。以下命令将启动默认的采集流程（当前主要针对 GitHub 上的 AI 服务密钥发现）：\n\n```bash\npython main.py                  # 使用默认配置 config.yaml\npython main.py -c custom.yaml   # 使用自定义配置\npython main.py --validate       # 仅验证配置\n```\n\n**参数说明：**\n*   `-c`: 指定自定义配置文件路径。\n*   `--validate`: 验证配置文件是否正确。\n*   `--log-level`: 设置日志级别（如 `DEBUG`），便于排查问题。\n\n### 查看结果\n采集完成后，结果保存在配置项 `global.workspace` 指定的工作目录下（如示例配置中的 `.\u002Fdata`）。您可以直接查看生成的日志文件或数据文件：\n\n```bash\nls -l data\u002F\n```\n\n---\n*注：本工具仅供学习和研究使用，请严格遵守相关法律法规及各平台服务条款，切勿用于非法用途。*","某安全研究团队需要持续监控 GitHub 及网络空间测绘平台，以发现泄露的 AI 服务密钥和暴露的敏感资产。\n\n### 没有 harvester 时\n- 研究人员需手动编写多个独立脚本分别对接 GitHub API 和 FOFA\u002FShodan 接口，维护成本极高且代码重复严重。\n- 面对海量搜索结果，缺乏智能查询优化机制，导致大量无效请求浪费 API 配额，关键线索常被淹没在噪音中。\n- 数据采集流程呈孤岛状，从任务调度、队列管理到结果存储全靠人工拼接，一旦中途报错很难断点续传。\n- 新增数据源（如自定义 REST API）时需要重构核心逻辑，扩展性差，无法快速响应新的情报收集需求。\n\n### 使用 harvester 后\n- 利用其统一的插件化架构，一键配置即可同时调度 GitHub 代码库与网络测绘平台的数据抓取任务，无需重复造轮子。\n- 内置的查询优化引擎自动精炼搜索关键词，显著减少无效请求，精准定位高价值的密钥泄露与资产暴露信息。\n- 分层架构中的任务与队列管理器实现了全流程自动化，支持故障自动恢复与状态实时监控，确保长周期任务稳定运行。\n- 通过简单的阶段注册机制即可接入任意自定义 Web 端点，轻松将情报来源扩展至内部系统或第三方新型数据源。\n\nharvester 
将原本碎片化、高门槛的情报收集工作转化为标准化、可无限扩展的自动化流水线，极大提升了数据安全监测的效率与覆盖面。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwzdnzd_harvester_1aca6ba8.png","wzdnzd",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fwzdnzd_ff9eff83.jpg","A thousand-li journey is started by taking the first step.","https:\u002F\u002Fgithub.com\u002Fwzdnzd",[20,24,28],{"name":21,"color":22,"percentage":23},"Python","#3572A5",99.7,{"name":25,"color":26,"percentage":27},"Dockerfile","#384d54",0.2,{"name":29,"color":30,"percentage":31},"Shell","#89e051",0.1,550,105,"2026-04-06T13:52:30","NOASSERTION",2,"未说明",{"notes":39,"python":37,"dependencies":40},"该工具是一个通用的数据采集框架，主要用于从 GitHub、FOFA、Shodan 等来源获取数据（如 AI 服务密钥）。架构基于分层设计，包含任务管理、管道处理、搜索服务和状态管理等模块。目前 README 内容主要侧重于架构设计和功能目标，未提供具体的运行环境安装指南、依赖列表或硬件资源需求。",[37],[42,43,44,45],"Agent","开发框架","语言模型","图像",[47,48,49,50,51,52],"ai","anthropic","deepseek","gemini","openai","qwen","ready","2026-03-27T02:49:30.150509","2026-04-20T04:06:08.999269",[57,62,67],{"id":58,"question_zh":59,"answer_zh":60,"source_url":61},43802,"如何配置 HTTP 或 SOCKS 代理？会自动使用环境变量吗？","是的，工具会自动使用 `http_proxy` 或 `HTTP_PROXY` 等环境变量。因为网络请求是基于 Python 的 `urllib` 库实现的，当配置了这些环境变量时，默认会使用代理进行连接。无需在配置文件中额外设置代理参数，直接在系统或终端中设置环境变量即可。","https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fharvester\u002Fissues\u002F3",{"id":63,"question_zh":64,"answer_zh":65,"source_url":66},43803,"运行长时间后出现日志写入错误（PermissionError: [WinError 32]），提示文件被占用怎么办？","这是一个在 Windows 上进行日志轮转（log rollover）时发生的已知问题，通常由多线程竞争导致。维护者已提交修复代码，请尝试更新到包含该修复的版本。具体修复提交地址为：https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fharvester\u002Fcommit\u002Fff8cd39c80e8b3922485b23dfdb09438297c7796 。更新后该锁竞争问题应得到解决。","https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fharvester\u002Fissues\u002F2",{"id":68,"question_zh":69,"answer_zh":70,"source_url":71},43804,"为什么运行很久只收获了很少的 Key，且日志显示 'Authentication failed (HTTP 401)'？","日志中的 `Authentication failed (HTTP 401)` 错误表明 GitHub 认证失败。这通常是因为配置的 `sessions` 或 `tokens` 无效、过期或格式不正确。请检查配置文件中的 `github_credentials` 部分，确保填入的 
Session Token 或 Personal Access Token 是有效且未过期的。如果使用了错误的凭证，搜索阶段将无法正常工作，导致收获效率极低。","https:\u002F\u002Fgithub.com\u002Fwzdnzd\u002Fharvester\u002Fissues\u002F1",[],[74,84,92,100,110,118],{"id":75,"name":76,"github_repo":77,"description_zh":78,"stars":79,"difficulty_score":80,"last_commit_at":81,"category_tags":82,"status":53},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[42,43,45,83],"数据工具",{"id":85,"name":86,"github_repo":87,"description_zh":88,"stars":89,"difficulty_score":80,"last_commit_at":90,"category_tags":91,"status":53},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[43,45,42],{"id":93,"name":94,"github_repo":95,"description_zh":96,"stars":97,"difficulty_score":36,"last_commit_at":98,"category_tags":99,"status":53},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,"2026-04-19T11:32:54",[43,42,44],{"id":101,"name":102,"github_repo":103,"description_zh":104,"stars":105,"difficulty_score":106,"last_commit_at":107,"category_tags":108,"status":53},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[42,109],"插件",{"id":111,"name":112,"github_repo":113,"description_zh":114,"stars":115,"difficulty_score":36,"last_commit_at":116,"category_tags":117,"status":53},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[43,45,42],{"id":119,"name":120,"github_repo":121,"description_zh":122,"stars":123,"difficulty_score":36,"last_commit_at":124,"category_tags":125,"status":53},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[109,42,45,43]]
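补充说明：前文 Harvester 基础配置示例中的 `patterns.key_pattern` 正则，可以在填入配置前用一小段 Python 自行验证其匹配形态。以下片段中的密钥字符串为人工构造的合成样本（非真实凭证），变量名 `KEY_PATTERN` 亦为本示例自拟：

```python
import re

# 与基础配置示例中 patterns.key_pattern 相同的正则（KEY_PATTERN 为示例变量名）
KEY_PATTERN = r"sk(?:-proj)?-[a-zA-Z0-9]{20}T3BlbkFJ[a-zA-Z0-9]{20}"

# 构造两个仅用于测试的合成字符串（非真实密钥）
fake_key = "sk-" + "a" * 20 + "T3BlbkFJ" + "b" * 20
fake_proj_key = "sk-proj-" + "A" * 20 + "T3BlbkFJ" + "0" * 20

# 两种形态（带与不带 -proj 前缀）均应匹配；过短的字符串则不匹配
print(bool(re.search(KEY_PATTERN, fake_key)))       # True
print(bool(re.search(KEY_PATTERN, fake_proj_key)))  # True
print(bool(re.search(KEY_PATTERN, "sk-too-short"))) # False
```

这样可以在启动完整流水线之前快速确认正则是否按预期工作，避免因模式写错而浪费搜索配额。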