[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-lensacom--sparkit-learn":3,"tool-lensacom--sparkit-learn":64},[4,17,27,35,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[13,14,15,43],"视频",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":23,"last_commit_at":50,"category_tags":51,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 
Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":23,"last_commit_at":58,"category_tags":59,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,60,43,61,15,62,26,13,63],"数据工具","插件","其他","音频",{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":100,"env_os":79,"env_gpu":101,"env_ram":102,"env_deps":103,"category_tags":111,"github_topics":112,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":118,"updated_at":119,"faqs":120,"releases":151},4186,"lensacom\u002Fsparkit-learn","sparkit-learn","PySpark + Scikit-learn = Sparkit-learn","sparkit-learn 是一个旨在将 Scikit-learn 的强大功能与 PySpark 的分布式计算能力相结合的开源库。它的核心目标是让开发者能够在 Spark 集群上，使用与 Scikit-learn 高度一致的 API 进行大规模机器学习建模，从而降低从单机实验迁移到分布式生产环境的门槛。\n\n该工具主要解决了传统 Scikit-learn 无法直接处理超出单机内存限制的海量数据这一痛点。通过引入“本地思考，分布式执行”的设计理念，sparkit-learn 将数据划分为数组或稀疏矩阵块（Block），在块级别执行操作，既保留了 NumPy 和 SciPy 的操作习惯，又利用了 Spark 的并行处理能力。\n\nsparkit-learn 特别适合熟悉 Python 数据科学生态的开发者、数据科学家及研究人员使用，尤其是那些已经掌握 Scikit-learn 但需要处理 TB 级数据集的用户。其独特的技术亮点在于提供了 ArrayRDD 和 SparseRDD 两种分布式数据结构，它们分别对应本地的密集数组和稀疏矩阵，支持类似 NumPy 的切片、索引等直观操作，并无缝兼容现","sparkit-learn 是一个旨在将 Scikit-learn 的强大功能与 PySpark 的分布式计算能力相结合的开源库。它的核心目标是让开发者能够在 Spark 集群上，使用与 Scikit-learn 高度一致的 API 进行大规模机器学习建模，从而降低从单机实验迁移到分布式生产环境的门槛。\n\n该工具主要解决了传统 Scikit-learn 无法直接处理超出单机内存限制的海量数据这一痛点。通过引入“本地思考，分布式执行”的设计理念，sparkit-learn 将数据划分为数组或稀疏矩阵块（Block），在块级别执行操作，既保留了 NumPy 和 SciPy 的操作习惯，又利用了 Spark 的并行处理能力。\n\nsparkit-learn 特别适合熟悉 Python 数据科学生态的开发者、数据科学家及研究人员使用，尤其是那些已经掌握 Scikit-learn 但需要处理 TB 级数据集的用户。其独特的技术亮点在于提供了 ArrayRDD 和 SparseRDD 两种分布式数据结构，它们分别对应本地的密集数组和稀疏矩阵，支持类似 NumPy 的切片、索引等直观操作，并无缝兼容现有的 PySpark RDD 操作，让用户无需学习全新的复杂接口即可轻松扩展模型训练规模。","Sparkit-learn\n=============\n\n|Build Status| |PyPi| |Gitter| |Gitential|\n\n**PySpark + Scikit-learn = Sparkit-learn**\n\nGitHub: https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\n\nAbout\n=====\n\nSparkit-learn aims to provide scikit-learn functionality and API on\nPySpark. 
The main goal of the library is to create an API that stays\nclose to sklearn's.\n\nThe driving principle was to *\"Think locally, execute distributively.\"*\nTo accommodate this concept, the basic data block is always an array or a\n(sparse) matrix and the operations are executed on block level.\n\n\nRequirements\n============\n\n-  **Python 2.7.x or 3.4.x**\n-  **Spark[>=1.3.0]**\n-  NumPy[>=1.9.0]\n-  SciPy[>=0.14.0]\n-  Scikit-learn[>=0.16]\n\n\n\nRun IPython from notebooks directory\n====================================\n\n.. code:: bash\n\n    PYTHONPATH=${PYTHONPATH}:.. IPYTHON_OPTS=\"notebook\" ${SPARK_HOME}\u002Fbin\u002Fpyspark --master local\\[4\\] --driver-memory 2G\n\n\nRun tests with\n==============\n\n.. code:: bash\n\n    .\u002Fruntests.sh\n\n\nQuick start\n===========\n\nSparkit-learn introduces three important distributed data formats:\n\n-  **ArrayRDD:**\n\n   A *numpy.array*-like distributed array\n\n   .. code:: python\n\n       from splearn.rdd import ArrayRDD\n\n       data = range(20)\n       # PySpark RDD with 2 partitions\n       rdd = sc.parallelize(data, 2) # each partition with 10 elements\n       # ArrayRDD\n       # each partition will contain blocks with 5 elements\n       X = ArrayRDD(rdd, bsize=5) # 4 blocks, 2 in each partition\n\n   Basic operations:\n\n   .. code:: python\n\n       len(X) # 20 - number of elements in the whole dataset\n       X.blocks # 4 - number of blocks\n       X.shape # (20,) - the shape of the whole dataset\n\n       X # returns an ArrayRDD\n       # \u003Cclass 'splearn.rdd.ArrayRDD'> from PythonRDD...\n\n       X.dtype # returns the type of the blocks\n       # numpy.ndarray\n\n       X.collect() # get the dataset\n       # [array([0, 1, 2, 3, 4]),\n       #  array([5, 6, 7, 8, 9]),\n       #  array([10, 11, 12, 13, 14]),\n       #  array([15, 16, 17, 18, 19])]\n\n       X[1].collect() # indexing\n       # [array([5, 6, 7, 8, 9])]\n\n       X[1] # also returns an ArrayRDD!\n\n       X[1::2].collect() # slicing\n       # [array([5, 6, 7, 8, 9]),\n       #  array([15, 16, 17, 18, 19])]\n\n       X[1::2] # returns an ArrayRDD as well\n\n       X.tolist() # returns the dataset as a list\n       # [0, 1, 2, ... 17, 18, 19]\n       X.toarray() # returns the dataset as a numpy.array\n       # array([ 0,  1,  2, ... 17, 18, 19])\n\n       # pyspark.rdd operations will still work\n       X.getNumPartitions() # 2 - number of partitions\n\n\n- **SparseRDD:**\n\n  The sparse counterpart of the *ArrayRDD*; the main difference is that the\n  blocks are sparse matrices. The reason behind this split is to follow the\n  distinction between *numpy.ndarray*s and *scipy.sparse* matrices.\n  Usually the *SparseRDD* is created by *splearn*'s transformers, but one can\n  instantiate one manually too.\n\n  .. 
code:: python\n\n       # generate a SparseRDD from a text using SparkCountVectorizer\n       from splearn.rdd import SparseRDD\n       from sklearn.feature_extraction.tests.test_text import ALL_FOOD_DOCS\n       ALL_FOOD_DOCS\n       #(u'the pizza pizza beer copyright',\n       # u'the pizza burger beer copyright',\n       # u'the the pizza beer beer copyright',\n       # u'the burger beer beer copyright',\n       # u'the coke burger coke copyright',\n       # u'the coke burger burger',\n       # u'the salad celeri copyright',\n       # u'the salad salad sparkling water copyright',\n       # u'the the celeri celeri copyright',\n       # u'the tomato tomato salad water',\n       # u'the tomato salad water copyright')\n\n       # ArrayRDD created from the raw data\n       X = ArrayRDD(sc.parallelize(ALL_FOOD_DOCS, 4), 2)\n       X.collect()\n       # [array([u'the pizza pizza beer copyright',\n       #         u'the pizza burger beer copyright'], dtype='\u003CU31'),\n       #  array([u'the the pizza beer beer copyright',\n       #         u'the burger beer beer copyright'], dtype='\u003CU33'),\n       #  array([u'the coke burger coke copyright',\n       #         u'the coke burger burger'], dtype='\u003CU30'),\n       #  array([u'the salad celeri copyright',\n       #         u'the salad salad sparkling water copyright'], dtype='\u003CU41'),\n       #  array([u'the the celeri celeri copyright',\n       #         u'the tomato tomato salad water'], dtype='\u003CU31'),\n       #  array([u'the tomato salad water copyright'], dtype='\u003CU32')]\n\n       # Feature extraction executed\n       from splearn.feature_extraction.text import SparkCountVectorizer\n       vect = SparkCountVectorizer()\n       X = vect.fit_transform(X)\n       # and we have a SparseRDD\n       X\n       # \u003Cclass 'splearn.rdd.SparseRDD'> from PythonRDD...\n\n       # it's type is the scipy.sparse's general parent\n       X.dtype\n       # scipy.sparse.base.spmatrix\n\n       # slicing works just like in ArrayRDDs\n       X[2:4].collect()\n       # [\u003C2x11 sparse matrix of type '\u003Ctype 'numpy.int64'>'\n       #   with 7 stored elements in Compressed Sparse Row format>,\n       #  \u003C2x11 sparse matrix of type '\u003Ctype 'numpy.int64'>'\n       #   with 9 stored elements in Compressed Sparse Row format>]\n\n       # general mathematical operations are available\n       X.sum(), X.mean(), X.max(), X.min()\n       # (55, 0.45454545454545453, 2, 0)\n\n       # even with axis parameters provided\n       X.sum(axis=1)\n       # matrix([[5],\n       #         [5],\n       #         [6],\n       #         [5],\n       #         [5],\n       #         [4],\n       #         [4],\n       #         [6],\n       #         [5],\n       #         [5],\n       #         [5]])\n\n       # It can be transformed to dense ArrayRDD\n       X.todense()\n       # \u003Cclass 'splearn.rdd.ArrayRDD'> from PythonRDD...\n       X.todense().collect()\n       # [array([[1, 0, 0, 0, 1, 2, 0, 0, 1, 0, 0],\n       #         [1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]]),\n       #  array([[2, 0, 0, 0, 1, 1, 0, 0, 2, 0, 0],\n       #         [2, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0]]),\n       #  array([[0, 1, 0, 2, 1, 0, 0, 0, 1, 0, 0],\n       #         [0, 2, 0, 1, 0, 0, 0, 0, 1, 0, 0]]),\n       #  array([[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],\n       #         [0, 0, 0, 0, 1, 0, 2, 1, 1, 0, 1]]),\n       #  array([[0, 0, 2, 0, 1, 0, 0, 0, 2, 0, 0],\n       #         [0, 0, 0, 0, 0, 0, 1, 0, 1, 2, 1]]),\n       #  array([[0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 
1]])]\n\n       # One can instantiate SparseRDD manually too:\n       sparse = sc.parallelize(np.array([sp.eye(2).tocsr()]*20), 2)\n       sparse = SparseRDD(sparse, bsize=5)\n       sparse\n       # \u003Cclass 'splearn.rdd.SparseRDD'> from PythonRDD...\n\n       sparse.collect()\n       # [\u003C10x2 sparse matrix of type '\u003Ctype 'numpy.float64'>'\n       #   with 10 stored elements in Compressed Sparse Row format>,\n       #  \u003C10x2 sparse matrix of type '\u003Ctype 'numpy.float64'>'\n       #   with 10 stored elements in Compressed Sparse Row format>,\n       #  \u003C10x2 sparse matrix of type '\u003Ctype 'numpy.float64'>'\n       #   with 10 stored elements in Compressed Sparse Row format>,\n       #  \u003C10x2 sparse matrix of type '\u003Ctype 'numpy.float64'>'\n       #   with 10 stored elements in Compressed Sparse Row format>]\n\n\n-  **DictRDD:**\n\n   A column based data format, each column with it's own type.\n\n   .. code:: python\n\n       from splearn.rdd import DictRDD\n\n       X = range(20)\n       y = list(range(2)) * 10\n       # PySpark RDD with 2 partitions\n       X_rdd = sc.parallelize(X, 2) # each partition with 10 elements\n       y_rdd = sc.parallelize(y, 2) # each partition with 10 elements\n       # DictRDD\n       # each partition will contain blocks with 5 elements\n       Z = DictRDD((X_rdd, y_rdd),\n                   columns=('X', 'y'),\n                   bsize=5,\n                   dtype=[np.ndarray, np.ndarray]) # 4 blocks, 2\u002Fpartition\n       # if no dtype is provided, the type of the blocks will be determined\n       # automatically\n\n       # or:\n       import numpy as np\n\n       data = np.array([range(20), list(range(2))*10]).T\n       rdd = sc.parallelize(data, 2)\n       Z = DictRDD(rdd,\n                   columns=('X', 'y'),\n                   bsize=5,\n                   dtype=[np.ndarray, np.ndarray])\n\n   Basic operations:\n\n   .. code:: python\n\n       len(Z) # 8 - number of blocks\n       Z.columns # returns ('X', 'y')\n       Z.dtype # returns the types in correct order\n       # [numpy.ndarray, numpy.ndarray]\n\n       Z # returns a DictRDD\n       #\u003Cclass 'splearn.rdd.DictRDD'> from PythonRDD...\n\n       Z.collect()\n       # [(array([0, 1, 2, 3, 4]), array([0, 1, 0, 1, 0])),\n       #  (array([5, 6, 7, 8, 9]), array([1, 0, 1, 0, 1])),\n       #  (array([10, 11, 12, 13, 14]), array([0, 1, 0, 1, 0])),\n       #  (array([15, 16, 17, 18, 19]), array([1, 0, 1, 0, 1]))]\n\n       Z[:, 'y'] # column select - returns an ArrayRDD\n       Z[:, 'y'].collect()\n       # [array([0, 1, 0, 1, 0]),\n       #  array([1, 0, 1, 0, 1]),\n       #  array([0, 1, 0, 1, 0]),\n       #  array([1, 0, 1, 0, 1])]\n\n       Z[:-1, ['X', 'y']] # slicing - DictRDD\n       Z[:-1, ['X', 'y']].collect()\n       # [(array([0, 1, 2, 3, 4]), array([0, 1, 0, 1, 0])),\n       #  (array([5, 6, 7, 8, 9]), array([1, 0, 1, 0, 1])),\n       #  (array([10, 11, 12, 13, 14]), array([0, 1, 0, 1, 0]))]\n\n\nBasic workflow\n--------------\n\nWith the use of the described data structures, the basic workflow is\nalmost identical to sklearn's.\n\nDistributed vectorizing of texts\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSparkCountVectorizer\n^^^^^^^^^^^^^^^^^^^^\n\n.. code:: python\n\n    from splearn.rdd import ArrayRDD\n    from splearn.feature_extraction.text import SparkCountVectorizer\n    from sklearn.feature_extraction.text import CountVectorizer\n\n    X = [...]  
# list of texts\n    X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc is SparkContext\n\n    local = CountVectorizer()\n    dist = SparkCountVectorizer()\n\n    result_local = local.fit_transform(X)\n    result_dist = dist.fit_transform(X_rdd)  # SparseRDD\n\n\nSparkHashingVectorizer\n^^^^^^^^^^^^^^^^^^^^^^\n\n.. code:: python\n\n    from splearn.rdd import ArrayRDD\n    from splearn.feature_extraction.text import SparkHashingVectorizer\n    from sklearn.feature_extraction.text import HashingVectorizer\n\n    X = [...]  # list of texts\n    X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc is SparkContext\n\n    local = HashingVectorizer()\n    dist = SparkHashingVectorizer()\n\n    result_local = local.fit_transform(X)\n    result_dist = dist.fit_transform(X_rdd)  # SparseRDD\n\n\nSparkTfidfTransformer\n^^^^^^^^^^^^^^^^^^^^^\n\n.. code:: python\n\n    from splearn.rdd import ArrayRDD\n    from splearn.feature_extraction.text import SparkHashingVectorizer\n    from splearn.feature_extraction.text import SparkTfidfTransformer\n    from splearn.pipeline import SparkPipeline\n\n    from sklearn.feature_extraction.text import HashingVectorizer\n    from sklearn.feature_extraction.text import TfidfTransformer\n    from sklearn.pipeline import Pipeline\n\n    X = [...]  # list of texts\n    X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc is SparkContext\n\n    local_pipeline = Pipeline((\n        ('vect', HashingVectorizer()),\n        ('tfidf', TfidfTransformer())\n    ))\n    dist_pipeline = SparkPipeline((\n        ('vect', SparkHashingVectorizer()),\n        ('tfidf', SparkTfidfTransformer())\n    ))\n\n    result_local = local_pipeline.fit_transform(X)\n    result_dist = dist_pipeline.fit_transform(X_rdd)  # SparseRDD\n\n\nDistributed Classifiers\n~~~~~~~~~~~~~~~~~~~~~~~\n\n.. code:: python\n\n    from splearn.rdd import DictRDD\n    from splearn.feature_extraction.text import SparkHashingVectorizer\n    from splearn.feature_extraction.text import SparkTfidfTransformer\n    from splearn.svm import SparkLinearSVC\n    from splearn.pipeline import SparkPipeline\n\n    from sklearn.feature_extraction.text import HashingVectorizer\n    from sklearn.feature_extraction.text import TfidfTransformer\n    from sklearn.svm import LinearSVC\n    from sklearn.pipeline import Pipeline\n\n    X = [...]  # list of texts\n    y = [...]  # list of labels\n    X_rdd = sc.parallelize(X, 4)\n    y_rdd = sc.parallelize(y, 4)\n    Z = DictRDD((X_rdd, y_rdd),\n                columns=('X', 'y'),\n                dtype=[np.ndarray, np.ndarray])\n\n    local_pipeline = Pipeline((\n        ('vect', HashingVectorizer()),\n        ('tfidf', TfidfTransformer()),\n        ('clf', LinearSVC())\n    ))\n    dist_pipeline = SparkPipeline((\n        ('vect', SparkHashingVectorizer()),\n        ('tfidf', SparkTfidfTransformer()),\n        ('clf', SparkLinearSVC())\n    ))\n\n    local_pipeline.fit(X, y)\n    dist_pipeline.fit(Z, clf__classes=np.unique(y))\n\n    y_pred_local = local_pipeline.predict(X)\n    y_pred_dist = dist_pipeline.predict(Z[:, 'X'])\n\n\nDistributed Model Selection\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. 
code:: python\n\n    from splearn.rdd import DictRDD\n    from splearn.grid_search import SparkGridSearchCV\n    from splearn.naive_bayes import SparkMultinomialNB\n\n    from sklearn.grid_search import GridSearchCV\n    from sklearn.naive_bayes import MultinomialNB\n\n    X = [...]\n    y = [...]\n    X_rdd = sc.parallelize(X, 4)\n    y_rdd = sc.parallelize(y, 4)\n    Z = DictRDD((X_rdd, y_rdd),\n                columns=('X', 'y'),\n                dtype=[np.ndarray, np.ndarray])\n\n    parameters = {'alpha': [0.1, 1, 10]}\n    fit_params = {'classes': np.unique(y)}\n\n    local_estimator = MultinomialNB()\n    local_grid = GridSearchCV(estimator=local_estimator,\n                              param_grid=parameters)\n\n    estimator = SparkMultinomialNB()\n    grid = SparkGridSearchCV(estimator=estimator,\n                             param_grid=parameters,\n                             fit_params=fit_params)\n\n    local_grid.fit(X, y)\n    grid.fit(Z)\n\n\nROADMAP\n=======\n\n- [ ] Transparent API to support plain numpy and scipy objects (partially done in the transparent_api branch)\n- [ ] Update all dependencies\n- [ ] Use Mllib and ML packages more extensively (since it becames more mature)\n- [ ] Support Spark DataFrames\n\n\nSpecial thanks\n==============\n\n- scikit-learn community\n- spylearn community\n- pyspark community\n\nSimilar Projects\n===============\n\n- `Thunder \u003Chttps:\u002F\u002Fgithub.com\u002Fthunder-project\u002Fthunder>`_\n- `Bolt \u003Chttps:\u002F\u002Fgithub.com\u002Fbolt-project\u002Fbolt>`_\n\n.. |Build Status| image:: https:\u002F\u002Ftravis-ci.org\u002Flensacom\u002Fsparkit-learn.png?branch=master\n   :target: https:\u002F\u002Ftravis-ci.org\u002Flensacom\u002Fsparkit-learn\n.. |PyPi| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fsparkit-learn.svg\n   :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fsparkit-learn\n.. |Gitter| image:: https:\u002F\u002Fbadges.gitter.im\u002FJoin%20Chat.svg\n   :alt: Join the chat at https:\u002F\u002Fgitter.im\u002Flensacom\u002Fsparkit-learn\n   :target: https:\u002F\u002Fgitter.im\u002Flensacom\u002Fsparkit-learn?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge\n.. |Gitential| image:: https:\u002F\u002Fapi.gitential.com\u002Faccounts\u002F6\u002Fprojects\u002F75\u002Fbadges\u002Fcoding-hours.svg\n   :alt: Gitential Coding Hours\n   :target: https:\u002F\u002Fgitential.com\u002Faccounts\u002F6\u002Fprojects\u002F75\u002Fshare?uuid=095e15c5-46b9-4534-a1d4-3b0bf1f33100&utm_source=shield&utm_medium=shield&utm_campaign=75\n","Sparkit-learn\n=============\n\n|构建状态| |PyPi| |Gitter| |Gitential|\n\n**PySpark + Scikit-learn = Sparkit-learn**\n\nGitHub: https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\n\n简介\n=====\n\nSparkit-learn旨在为PySpark提供scikit-learn的功能和API。该库的主要目标是创建一个与scikit-learn高度一致的API。\n\n其核心理念是“本地思考，分布式执行”。为了实现这一理念，基本的数据块始终是一个数组或（稀疏）矩阵，而操作则在块级别上执行。\n\n\n要求\n============\n\n-  **Python 2.7.x 或 3.4.x**\n-  **Spark[>=1.3.0]**\n-  NumPy[>=1.9.0]\n-  SciPy[>=0.14.0]\n-  Scikit-learn[>=0.16]\n\n\n\n从notebook目录运行IPython\n====================================\n\n.. code:: bash\n\n    PYTHONPATH=${PYTHONPATH}:.. IPYTHON_OPTS=\"notebook\" ${SPARK_HOME}\u002Fbin\u002Fpyspark --master local\\[4\\] --driver-memory 2G\n\n\n运行测试\n==============\n\n.. code:: bash\n\n    .\u002Fruntests.sh\n\n\n快速入门\n===========\n\nSparkit-learn引入了三种重要的分布式数据格式：\n\n-  **ArrayRDD:**\n\n   类似于*numpy.array*的分布式数组\n\n   .. 
code:: python\n\n       from splearn.rdd import ArrayRDD\n\n       data = range(20)\n       # PySpark RDD，分为2个分区\n       rdd = sc.parallelize(data, 2) # 每个分区包含10个元素\n       # ArrayRDD\n       # 每个分区将包含5个元素的块\n       X = ArrayRDD(rdd, bsize=5) # 4个块，每个分区2个\n\n   基本操作：\n\n   .. code:: python\n\n       len(X) # 20 - 整个数据集中的元素数量\n       X.blocks # 4 - 块的数量\n       X.shape # (20,) - 整个数据集的形状\n\n       X # 返回一个ArrayRDD\n       # \u003Cclass 'splearn.rdd.ArrayRDD'> from PythonRDD...\n\n       X.dtype # 返回块的类型\n       # numpy.ndarray\n\n       X.collect() # 获取整个数据集\n       # [array([0, 1, 2, 3, 4]),\n       #  array([5, 6, 7, 8, 9]),\n       #  array([10, 11, 12, 13, 14]),\n       #  array([15, 16, 17, 18, 19])]\n\n       X[1].collect() # 索引\n       # [array([5, 6, 7, 8, 9])]\n\n       X[1] # 同样返回一个ArrayRDD！\n\n       X[1::2].collect() # 切片\n       # [array([5, 6, 7, 8, 9]),\n       #  array([15, 16, 17, 18, 19])]\n\n       X[1::2] # 同样返回一个ArrayRDD\n\n       X.tolist() # 将数据集转换为列表\n       # [0, 1, 2, ... 17, 18, 19]\n       X.toarray() # 将数据集转换为numpy.array\n       # array([ 0,  1,  2, ... 17, 18, 19])\n\n       # pyspark.rdd的操作仍然有效\n       X.getNumPartitions() # 2 - 分区数量\n\n\n- **SparseRDD:**\n\n  *ArrayRDD*的稀疏对应物，主要区别在于块是稀疏矩阵。这样划分的原因是为了遵循*numpy.ndarray*和*scipy.sparse*矩阵之间的区别。通常，*SparseRDD*是由*splearn*的转换器创建的，但也可以手动实例化。\n\n  .. code:: python\n\n       # 使用SparkCountVectorizer从文本生成SparseRDD\n       from splearn.rdd import SparseRDD\n       from sklearn.feature_extraction.tests.test_text import ALL_FOOD_DOCS\n       ALL_FOOD_DOCS\n       #(u'the pizza pizza beer copyright',\n       # u'the pizza burger beer copyright',\n       # u'the the pizza beer beer copyright',\n       # u'the burger beer beer copyright',\n       # u'the coke burger coke copyright',\n       # u'the coke burger burger',\n       # u'the salad celeri copyright',\n       # u'the salad salad sparkling water copyright',\n       # u'the the celeri celeri copyright',\n       # u'the tomato tomato salad water',\n       # u'the tomato salad water copyright')\n\n       # 从原始数据创建ArrayRDD\n       X = ArrayRDD(sc.parallelize(ALL_FOOD_DOCS, 4), 2)\n       X.collect()\n       # [array([u'the pizza pizza beer copyright',\n       #         u'the pizza burger beer copyright'], dtype='\u003CU31'),\n       #  array([u'the the pizza beer beer copyright',\n       #         u'the burger beer beer copyright'], dtype='\u003CU33'),\n       #  array([u'the coke burger coke copyright',\n       #         u'the coke burger burger'], dtype='\u003CU30'),\n       #  array([u'the salad celeri copyright',\n       #         u'the salad salad sparkling water copyright'], dtype='\u003CU41'),\n       #  array([u'the the celeri celeri copyright',\n       #         u'the tomato tomato salad water'], dtype='\u003CU31'),\n       #  array([u'the tomato salad water copyright'], dtype='\u003CU32')]\n\n       # 执行特征提取\n       from splearn.feature_extraction.text import SparkCountVectorizer\n       vect = SparkCountVectorizer()\n       X = vect.fit_transform(X)\n       # 这样我们就得到了一个SparseRDD\n       X\n       # \u003Cclass 'splearn.rdd.SparseRDD'> from PythonRDD...\n\n       # 其类型是scipy.sparse的通用父类\n       X.dtype\n       # scipy.sparse.base.spmatrix\n\n       # 切片操作与ArrayRDD相同\n       X[2:4].collect()\n       # [\u003C2x11稀疏矩阵，类型为'\u003Ctype 'numpy.int64'>'\n       #   以压缩稀疏行格式存储7个元素>,\n       #  \u003C2x11稀疏矩阵，类型为'\u003Ctype 'numpy.int64'>'\n       #   以压缩稀疏行格式存储9个元素>]\n\n       # 可以进行一般的数学运算\n       X.sum(), X.mean(), X.max(), X.min()\n       # (55, 0.45454545454545453, 2, 
0)\n\n       # 即使指定了轴参数也一样\n       X.sum(axis=1)\n       # matrix([[5],\n       #         [5],\n       #         [6],\n       #         [5],\n       #         [5],\n       #         [4],\n       #         [4],\n       #         [6],\n       #         [5],\n       #         [5],\n       #         [5]])\n\n       # 它可以转换为稠密的ArrayRDD\n       X.todense()\n       # \u003Cclass 'splearn.rdd.ArrayRDD'> from PythonRDD...\n       X.todense().collect()\n       # [array([[1, 0, 0, 0, 1, 2, 0, 0, 1, 0, 0],\n       #         [1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]]),\n       #  array([[2, 0, 0, 0, 1, 1, 0, 0, 2, 0, 0],\n       #         [2, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0]]),\n       #  array([[0, 1, 0, 2, 1, 0, 0, 0, 1, 0, 0],\n       #         [0, 2, 0, 1, 0, 0, 0, 0, 1, 0, 0]]),\n       #  array([[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],\n       #         [0, 0, 0, 0, 1, 0, 2, 1, 1, 0, 1]]),\n       #  array([[0, 0, 2, 0, 1, 0, 0, 0, 2, 0, 0],\n       #         [0, 0, 0, 0, 0, 0, 1, 0, 1, 2, 1]]),\n       #  array([[0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1]])]\n\n       # 也可以手动实例化SparseRDD：\n       sparse = sc.parallelize(np.array([sp.eye(2).tocsr()]*20), 2)\n       sparse = SparseRDD(sparse, bsize=5)\n       sparse\n       # \u003Cclass 'splearn.rdd.SparseRDD'> from PythonRDD...\n\n       sparse.collect()\n       # [\u003C10×2稀疏矩阵，类型为'\u003Cclass 'numpy.float64'>'，\n       #   其中以压缩稀疏行格式存储了10个元素>,\n       #  \u003C10×2稀疏矩阵，类型为'\u003Cclass 'numpy.float64'>'，\n       #   其中以压缩稀疏行格式存储了10个元素>,\n       #  \u003C10×2稀疏矩阵，类型为'\u003Cclass 'numpy.float64'>'，\n       #   其中以压缩稀疏行格式存储了10个元素>,\n       #  \u003C10×2稀疏矩阵，类型为'\u003Cclass 'numpy.float64'>'，\n       #   其中以压缩稀疏行格式存储了10个元素>]\n\n\n-  **DictRDD：**\n\n   一种基于列的数据格式，每列都有其特定的类型。\n\n   .. code:: python\n\n       from splearn.rdd import DictRDD\n\n       X = range(20)\n       y = list(range(2)) * 10\n       # PySpark RDD，分为2个分区\n       X_rdd = sc.parallelize(X, 2) # 每个分区包含10个元素\n       y_rdd = sc.parallelize(y, 2) # 每个分区包含10个元素\n       # DictRDD\n       # 每个分区将包含5个元素的块\n       Z = DictRDD((X_rdd, y_rdd),\n                   columns=('X', 'y'),\n                   bsize=5,\n                   dtype=[np.ndarray, np.ndarray]) # 4个块，每个分区2个\n       # 如果未提供dtype，则块的类型将自动确定\n\n       # 或者：\n       import numpy as np\n\n       data = np.array([range(20), list(range(2))*10]).T\n       rdd = sc.parallelize(data, 2)\n       Z = DictRDD(rdd,\n                   columns=('X', 'y'),\n                   bsize=5,\n                   dtype=[np.ndarray, np.ndarray])\n\n   基本操作：\n\n   .. 
code:: python\n\n       len(Z) # 8 - 块的数量\n       Z.columns # 返回 ('X', 'y')\n       Z.dtype # 按正确顺序返回类型\n       # [numpy.ndarray, numpy.ndarray]\n\n       Z # 返回一个DictRDD\n       #\u003Cclass 'splearn.rdd.DictRDD'> 来自PythonRDD...\n\n       Z.collect()\n       # [(array([0, 1, 2, 3, 4]), array([0, 1, 0, 1, 0])),\n       #  (array([5, 6, 7, 8, 9]), array([1, 0, 1, 0, 1])),\n       #  (array([10, 11, 12, 13, 14]), array([0, 1, 0, 1, 0])),\n       #  (array([15, 16, 17, 18, 19]), array([1, 0, 1, 0, 1]))]\n\n       Z[:, 'y'] # 选择列 - 返回一个ArrayRDD\n       Z[:, 'y'].collect()\n       # [array([0, 1, 0, 1, 0]),\n       #  array([1, 0, 1, 0, 1]),\n       #  array([0, 1, 0, 1, 0]),\n       #  array([1, 0, 1, 0, 1])]\n\n       Z[:-1, ['X', 'y']] # 切片 - DictRDD\n       Z[:-1, ['X', 'y']].collect()\n       # [(array([0, 1, 2, 3, 4]), array([0, 1, 0, 1, 0])),\n       #  (array([5, 6, 7, 8, 9]), array([1, 0, 1, 0, 1])),\n       #  (array([10, 11, 12, 13, 14]), array([0, 1, 0, 1, 0]))]\n\n\n基本工作流程\n--------------\n\n使用上述数据结构时，基本工作流程与scikit-learn几乎相同。\n\n文本的分布式向量化\n~~~~~~~~~~~~~~~~\n\nSparkCountVectorizer\n^^^^^^^^^^^^^^^^^^^^\n\n.. code:: python\n\n    from splearn.rdd import ArrayRDD\n    from splearn.feature_extraction.text import SparkCountVectorizer\n    from sklearn.feature_extraction.text import CountVectorizer\n\n    X = [...]  # 文本列表\n    X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc是SparkContext\n\n    local = CountVectorizer()\n    dist = SparkCountVectorizer()\n\n    result_local = local.fit_transform(X)\n    result_dist = dist.fit_transform(X_rdd)  # SparseRDD\n\n\nSparkHashingVectorizer\n^^^^^^^^^^^^^^^^^^^^^^\n\n.. code:: python\n\n    from splearn.rdd import ArrayRDD\n    from splearn.feature_extraction.text import SparkHashingVectorizer\n    from sklearn.feature_extraction.text import HashingVectorizer\n\n    X = [...]  # 文本列表\n    X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc是SparkContext\n\n    local = HashingVectorizer()\n    dist = SparkHashingVectorizer()\n\n    result_local = local.fit_transform(X)\n    result_dist = dist.fit_transform(X_rdd)  # SparseRDD\n\n\nSparkTfidfTransformer\n^^^^^^^^^^^^^^^^^^^^^\n\n.. code:: python\n\n    from splearn.rdd import ArrayRDD\n    from splearn.feature_extraction.text import SparkHashingVectorizer\n    from splearn.feature_extraction.text import SparkTfidfTransformer\n    from splearn.pipeline import SparkPipeline\n\n    from sklearn.feature_extraction.text import HashingVectorizer\n    from sklearn.feature_extraction.text import TfidfTransformer\n    from sklearn.pipeline import Pipeline\n\n    X = [...]  # 文本列表\n    X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc是SparkContext\n\n    local_pipeline = Pipeline((\n        ('vect', HashingVectorizer()),\n        ('tfidf', TfidfTransformer())\n    ))\n    dist_pipeline = SparkPipeline((\n        ('vect', SparkHashingVectorizer()),\n        ('tfidf', SparkTfidfTransformer())\n    ))\n\n    result_local = local_pipeline.fit_transform(X)\n    result_dist = dist_pipeline.fit_transform(X_rdd)  # SparseRDD\n\n\n分布式分类器\n~~~~~~~~~~~~~~~\n\n.. 
code:: python\n\n    from splearn.rdd import DictRDD\n    from splearn.feature_extraction.text import SparkHashingVectorizer\n    from splearn.feature_extraction.text import SparkTfidfTransformer\n    from splearn.svm import SparkLinearSVC\n    from splearn.pipeline import SparkPipeline\n\n    from sklearn.feature_extraction.text import HashingVectorizer\n    from sklearn.feature_extraction.text import TfidfTransformer\n    from sklearn.svm import LinearSVC\n    from sklearn.pipeline import Pipeline\n\n    X = [...]  # 文本列表\n    y = [...]  # 标签列表\n    X_rdd = sc.parallelize(X, 4)\n    y_rdd = sc.parallelize(y, 4)\n    Z = DictRDD((X_rdd, y_rdd),\n                columns=('X', 'y'),\n                dtype=[np.ndarray, np.ndarray])\n\n    local_pipeline = Pipeline((\n        ('vect', HashingVectorizer()),\n        ('tfidf', TfidfTransformer()),\n        ('clf', LinearSVC())\n    ))\n    dist_pipeline = SparkPipeline((\n        ('vect', SparkHashingVectorizer()),\n        ('tfidf', SparkTfidfTransformer()),\n        ('clf', SparkLinearSVC())\n    ))\n\n    local_pipeline.fit(X, y)\n    dist_pipeline.fit(Z, clf__classes=np.unique(y))\n\n    y_pred_local = local_pipeline.predict(X)\n    y_pred_dist = dist_pipeline.predict(Z[:, 'X'])\n\n\n分布式模型选择\n~~~~~~~~~~~~~~~\n\n.. code:: python\n\n    from splearn.rdd import DictRDD\n    from splearn.grid_search import SparkGridSearchCV\n    from splearn.naive_bayes import SparkMultinomialNB\n\n    from sklearn.grid_search import GridSearchCV\n    from sklearn.naive_bayes import MultinomialNB\n\n    X = [...]\n    y = [...]\n    X_rdd = sc.parallelize(X, 4)\n    y_rdd = sc.parallelize(y, 4)\n    Z = DictRDD((X_rdd, y_rdd),\n                columns=('X', 'y'),\n                dtype=[np.ndarray, np.ndarray])\n\n    parameters = {'alpha': [0.1, 1, 10]}\n    fit_params = {'classes': np.unique(y)}\n\n    local_estimator = MultinomialNB()\n    local_grid = GridSearchCV(estimator=local_estimator,\n                              param_grid=parameters)\n\n    estimator = SparkMultinomialNB()\n    grid = SparkGridSearchCV(estimator=estimator,\n                             param_grid=parameters,\n                             fit_params=fit_params)\n\n    local_grid.fit(X, y)\n    grid.fit(Z)\n\n\n路线图\n=======\n\n- [ ] 提供透明的 API，以支持纯 NumPy 和 SciPy 对象（已在 transparent_api 分支中部分完成）\n- [ ] 更新所有依赖项\n- [ ] 更广泛地使用 Mllib 和 ML 包（因为它们已经变得更加成熟）\n- [ ] 支持 Spark DataFrame\n\n\n特别感谢\n==============\n\n- scikit-learn 社区\n- spylearn 社区\n- PySpark 社区\n\n类似项目\n===============\n\n- `Thunder \u003Chttps:\u002F\u002Fgithub.com\u002Fthunder-project\u002Fthunder>`_\n- `Bolt \u003Chttps:\u002F\u002Fgithub.com\u002Fbolt-project\u002Fbolt>`_\n\n.. |构建状态| image:: https:\u002F\u002Ftravis-ci.org\u002Flensacom\u002Fsparkit-learn.png?branch=master\n   :target: https:\u002F\u002Ftravis-ci.org\u002Flensacom\u002Fsparkit-learn\n.. |PyPi| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fsparkit-learn.svg\n   :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fsparkit-learn\n.. |Gitter| image:: https:\u002F\u002Fbadges.gitter.im\u002FJoin%20Chat.svg\n   :alt: 加入聊天：https:\u002F\u002Fgitter.im\u002Flensacom\u002Fsparkit-learn\n   :target: https:\u002F\u002Fgitter.im\u002Flensacom\u002Fsparkit-learn?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge\n.. 
|Gitential| image:: https:\u002F\u002Fapi.gitential.com\u002Faccounts\u002F6\u002Fprojects\u002F75\u002Fbadges\u002Fcoding-hours.svg\n   :alt: Gitential 编码时数\n   :target: https:\u002F\u002Fgitential.com\u002Faccounts\u002F6\u002Fprojects\u002F75\u002Fshare?uuid=095e15c5-46b9-4534-a1d4-3b0bf1f33100&utm_source=shield&utm_medium=shield&utm_campaign=75","# Sparkit-learn 快速上手指南\n\nSparkit-learn 旨在将 Scikit-learn 的功能和 API 移植到 PySpark 上，遵循“本地思考，分布式执行”的原则，让开发者能够以熟悉的 sklearn 风格处理大规模分布式数据。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **Python**: 2.7.x 或 3.4.x\n*   **Apache Spark**: 版本 >= 1.3.0\n*   **核心依赖库**:\n    *   NumPy >= 1.9.0\n    *   SciPy >= 0.14.0\n    *   Scikit-learn >= 0.16\n\n> **提示**：国内用户建议使用清华或阿里镜像源加速安装 Python 依赖包：\n> `pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple numpy scipy scikit-learn`\n\n## 安装步骤\n\n目前 Sparkit-learn 主要通过源码方式使用。请克隆仓库并将其添加到 Python 路径中。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn.git\ncd sparkit-learn\n# 确保将当前目录或父目录加入 PYTHONPATH，以便导入 splearn 模块\nexport PYTHONPATH=${PYTHONPATH}:$(pwd)\n```\n\n## 基本使用\n\nSparkit-learn 的核心在于三种分布式数据结构：`ArrayRDD`（密集数组）、`SparseRDD`（稀疏矩阵）和 `DictRDD`（列式数据）。其工作流与 sklearn 高度一致。\n\n### 1. 启动环境\n\n推荐使用 IPython Notebook 进行交互式开发：\n\n```bash\nPYTHONPATH=${PYTHONPATH}:.. IPYTHON_OPTS=\"notebook\" ${SPARK_HOME}\u002Fbin\u002Fpyspark --master local[4] --driver-memory 2G\n```\n\n### 2. 分布式文本向量化示例\n\n这是最典型的使用场景，展示了如何使用 `SparkCountVectorizer` 替代 sklearn 的 `CountVectorizer` 进行分布式特征提取。\n\n```python\nfrom splearn.rdd import ArrayRDD\nfrom splearn.feature_extraction.text import SparkCountVectorizer\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n# 假设 X 是一个文本列表\nX = [\"the pizza pizza beer copyright\", \"the pizza burger beer copyright\"]\n\n# 创建 SparkContext (在 pyspark shell 中默认为 sc)\n# 将数据转换为分布式 ArrayRDD\nX_rdd = ArrayRDD(sc.parallelize(X, 2))\n\n# 本地 sklearn 方式 (仅用于对比或小数据)\nlocal_vect = CountVectorizer()\nresult_local = local_vect.fit_transform(X)\n\n# 分布式 Sparkit-learn 方式\ndist_vect = SparkCountVectorizer()\nresult_dist = dist_vect.fit_transform(X_rdd)\n\n# result_dist 是一个 SparseRDD，支持类似 numpy 的操作\nprint(type(result_dist)) \n# \u003Cclass 'splearn.rdd.SparseRDD'>\n\n# 查看结果块\nprint(result_dist.collect())\n```\n\n### 3. 
构建分布式流水线 (Pipeline)\n\n您可以像使用 sklearn 的 `Pipeline` 一样组合多个步骤：\n\n```python\nfrom splearn.pipeline import SparkPipeline\nfrom splearn.feature_extraction.text import SparkHashingVectorizer, SparkTfidfTransformer\nfrom splearn.svm import SparkLinearSVC\n\n# 定义分布式流水线\ndist_pipeline = SparkPipeline([\n    ('vect', SparkHashingVectorizer()),\n    ('tfidf', SparkTfidfTransformer()),\n    ('clf', SparkLinearSVC())\n])\n\n# 拟合与预测 (输入需为 DictRDD 或对应的 RDD 格式)\n# dist_pipeline.fit(Z) \n# predictions = dist_pipeline.predict(Z)\n```\n\n通过上述方式，您只需将数据封装为 `ArrayRDD` 或 `DictRDD`，并将 estimator 替换为 `Spark` 前缀的版本，即可轻松实现机器学习任务的分布式扩展。","某电商数据团队需要处理 TB 级的用户评论文本，以构建大规模情感分析模型来预测商品口碑。\n\n### 没有 sparkit-learn 时\n- **内存瓶颈严重**：试图将海量文本特征矩阵直接加载到单机内存中使用 Scikit-learn 处理，导致驱动程序频繁发生 OOM（内存溢出）崩溃。\n- **代码重构成本高**：为了利用 Spark 分布式计算，不得不放弃熟悉的 Scikit-learn API，重新学习并编写复杂的 PySpark MLlib 代码，开发效率极低。\n- **数据格式转换繁琐**：在 RDD 弹性分布式数据集与本地 NumPy 数组之间手动反复转换格式，不仅代码冗长易错，还造成了巨大的网络传输开销。\n- **算法选择受限**：由于 MLlib 支持的算法库相对较少，无法直接使用 Scikit-learn 中丰富且先进的预处理和模型算法。\n\n### 使用 sparkit-learn 后\n- **分布式无缝扩展**：利用 ArrayRDD 和 SparseRDD 将数据分块存储在集群中，轻松处理超出单机内存容量的 TB 级稀疏特征矩阵。\n- **API 平滑迁移**：直接沿用 Scikit-learn 的标准接口（如 `fit`、`transform`），仅需少量修改即可将本地脚本升级为分布式任务，保护了原有技术资产。\n- **“本地思维，分布式执行”**：遵循“本地思考，分布式执行”原则，操作自动在数据块级别并行运行，无需手动管理底层 RDD 转换逻辑。\n- **生态能力复用**：能够直接在 Spark 集群上调用 Scikit-learn 完整的算法生态（如复杂的文本向量化器），兼顾了大规模计算能力与算法多样性。\n\nsparkit-learn 成功打破了单机内存限制与分布式开发复杂度之间的壁垒，让数据科学家能用熟悉的工具链轻松驾驭海量数据。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flensacom_sparkit-learn_be6b1e0b.png","lensacom","Lensa","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flensacom_0cdcfa79.png","",null,"https:\u002F\u002Flensa.com","https:\u002F\u002Fgithub.com\u002Flensacom",[84,88,92],{"name":85,"color":86,"percentage":87},"Python","#3572A5",98.9,{"name":89,"color":90,"percentage":91},"Scala","#c22d40",0.9,{"name":93,"color":94,"percentage":95},"Shell","#89e051",0.2,1149,255,"2026-04-02T08:32:00","Apache-2.0",4,"未说明","启动示例中建议驱动内存至少 2G (--driver-memory 2G)",{"notes":104,"python":105,"dependencies":106},"该工具旨在为 PySpark 提供类似 Scikit-learn 的 API，核心原则是“本地思考，分布式执行”。数据以块（block）为单位在数组或稀疏矩阵上进行操作。运行 IPython Notebook 示例时需设置 PYTHONPATH 并指定 Spark master 为 local[4]。","2.7.x 或 3.4.x",[107,108,109,110],"Spark>=1.3.0","NumPy>=1.9.0","SciPy>=0.14.0","Scikit-learn>=0.16",[13],[113,114,115,116,117],"scikit-learn","apache-spark","machine-learning","distributed-computing","python","2026-03-27T02:49:30.150509","2026-04-06T14:02:47.596087",[121,126,131,136,141,146],{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},19076,"如何在 Spark 集群上安装和配置 sparkit-learn？","有两种主要方法：\n1. 在 Driver 端将模块打包成 zip 文件，然后使用 `sc.addPyFiles` 添加到 Spark 上下文。\n2. 
在每个执行节点（Worker\u002FExecutor）上使用 pip 安装。\n注意：需要在执行节点上也安装 scipy 栈（包括 scikit-learn 和 numpy）。如果 Python 模块依赖 Cython 编译的 C 代码，只要 Driver 端包齐全，Spark 通常能自动序列化并在执行端运行，但确保执行端环境一致是最稳妥的做法。","https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\u002Fissues\u002F10",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},19077,"为什么在小数据集上 sparkit-learn 的性能比 scikit-learn 慢？","如果数据集可以完全放入内存（例如只有 20,000 个样本），本地的 scikit-learn\u002Fnumpy\u002Fpandas 通常会比基于 Spark 的实现更快，因为 Spark 存在任务调度和序列化的开销。Spark 的优势在于处理无法放入单机内存的大规模数据集。对于小数据量，建议直接使用 scikit-learn；对于大数据量，sparkit-learn 的表现会更好。此外，项目维护者建议可以考虑迁移到 Dask 作为替代方案。","https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\u002Fissues\u002F77",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},19078,"遇到 'AttributeError: SparkCountVectorizer object has no attribute _validate_vocabulary' 错误怎么办？","该库依赖 scikit-learn 0.16 或更高版本。如果您使用的是 scikit-learn 0.15.2 或更低版本，会出现此属性错误且没有简单的变通方法。解决方案是将 scikit-learn 升级到 0.16+。如果暂时无法升级，可以尝试使用兼容性更好的 `SparkHashingVectorizer`。","https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\u002Fissues\u002F6",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},19079,"创建 DictRDD 时报错 'Can only zip RDDs with same number of elements in each partition' 如何解决？","这是因为传入的两个 RDD 分区元素数量不一致。不要分别 map 生成两个 RDD 再组合，而应该先在一个 RDD 中通过 map 转换成元组形式，再传给 DictRDD。\n推荐代码示例：\n```python\ntrain_text_y = train_rdd.map(\n    lambda (x, y): (y.split(\"~\")[0], int(y.split(\"~\")[1])))\n\nZ_train = DictRDD(train_text_y, columns=('X', 'y'), bsize=50)\n```\n这样避免了跨 RDD zip 操作导致的分区元素不匹配问题。","https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\u002Fissues\u002F46",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},19080,"在 AWS EMR 或 Spark 2.1.0 环境中导入 DBSCAN 时出现 'ImportError: No module named _common' 错误？","这是由于 Spark 2.x 版本中内部模块结构发生变化，导致 `pyspark.mllib._common` 路径失效。该问题在旧版本的 sparkit-learn 中较为常见。虽然部分测试显示在特定配置下可能通过，但根本解决方法是确保使用的 sparkit-learn 版本已针对 Spark 2.x 进行了适配，或者回退到兼容的 Spark 1.x 版本。如果必须使用 Spark 2.1+，建议检查是否有更新的 fork 版本或补丁支持。","https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\u002Fissues\u002F75",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},19081,"DBSCAN 在聚合步骤变成串行计算且永不结束怎么办？","这是某些第三方 DBSCAN on Spark 实现（如 alitouka\u002Fspark_dbscan）的已知瓶颈，在分区合并的聚合步骤会退化为串行执行，导致大规模数据下无法完成。建议不要依赖此类有缺陷的实现。可以尝试基于 `sparkit-learn` 自行实现，通过对数据分块应用 scikit-learn 的原生 DBSCAN 算法。另外，参考可视化实现思路（如 irvingc\u002Fdbscan-on-spark）进行定制化开发可能更有效。","https:\u002F\u002Fgithub.com\u002Flensacom\u002Fsparkit-learn\u002Fissues\u002F55",[152,156],{"id":153,"version":154,"summary_zh":80,"released_at":155},117129,"0.2.5","2015-06-24T07:00:18",{"id":157,"version":158,"summary_zh":80,"released_at":159},117130,"0.1.5","2015-06-12T08:32:15"]
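The README's guiding principle, "Think locally, execute distributively," means the dataset is cut into fixed-size NumPy (or SciPy sparse) blocks, each operation runs on a block locally, and the per-block results are then combined. Below is a minimal local sketch of that idea using plain NumPy only — no Spark, and none of splearn's real classes; `make_blocks` and `blocked_sum` are hypothetical helpers for illustration, mirroring the README's `range(20)` / `bsize=5` example.

```python
# Local, Spark-free sketch of splearn's block-level execution model.
# NOTE: make_blocks/blocked_sum are illustrative helpers, NOT part of splearn's API;
# in the real library the blocks live inside an ArrayRDD built on a PySpark RDD.
import numpy as np

def make_blocks(data, bsize):
    """Split a 1-D dataset into fixed-size NumPy blocks (like ArrayRDD(rdd, bsize=...))."""
    arr = np.asarray(list(data))
    return [arr[i:i + bsize] for i in range(0, len(arr), bsize)]

def blocked_sum(blocks):
    """'Think locally, execute distributively': compute per-block sums, then reduce them."""
    return sum(block.sum() for block in blocks)

if __name__ == "__main__":
    blocks = make_blocks(range(20), bsize=5)   # 4 blocks of 5 elements, as in the README example
    print(len(blocks))                         # 4
    print(blocked_sum(blocks))                 # 190 == sum(range(20))
```

In sparkit-learn itself the reduction step runs across the cluster rather than in a Python loop, which is why datasets larger than a single machine's memory remain tractable.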