[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-CrawlScript--tf_geometric":3,"tool-CrawlScript--tf_geometric":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",154349,2,"2026-04-13T23:32:16",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":32,"env_os":91,"env_gpu":92,"env_ram":93,"env_deps":94,"category_tags":103,"github_topics":104,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":111,"updated_at":112,"faqs":113,"releases":149},7312,"CrawlScript\u002Ftf_geometric","tf_geometric","Efficient and Friendly Graph Neural Network Library for TensorFlow 1.x and 2.x","tf_geometric 是一款专为 TensorFlow 1.x 和 2.x 打造的高效、易用的图神经网络（GNN）开源库。它灵感源自 PyTorch Geometric，旨在填补 TensorFlow 生态中缺乏友好 GNN 工具的空白，让开发者无需从零构建底层逻辑即可轻松上手图深度学习。\n\n针对传统图神经网络实现中密集矩阵计算效率低、稀疏矩阵操作复杂难懂等痛点，tf_geometric 采用了先进的“消息传递”机制。这一设计不仅大幅提升了运算效率，还通过优雅简洁的 API 封装了复杂的图操作，让用户仅需几行代码就能构建并运行如多头图注意力网络（GAT）、图卷积网络（GCN）等主流模型。\n\n该工具非常适合需要在 TensorFlow 框架下进行图数据研究的算法工程师、科研人员以及高校学生。无论是处理节点分类、图分类还是链接预测任务，tf_geometric 
都提供了丰富的预置模型和演示案例（Demo），帮助用户快速验证想法或复现论文结果。其独特的优势在于完美平衡了性能与易用性，既保留了底层计算的灵活性，又提供了类似高级语言般的开发体验，是 TensorFlow 用户探索图智能领域的理想助手。","tf_geometric 是一款专为 TensorFlow 1.x 和 2.x 打造的高效、易用的图神经网络（GNN）开源库。它灵感源自 PyTorch Geometric，旨在填补 TensorFlow 生态中缺乏友好 GNN 工具的空白，让开发者无需从零构建底层逻辑即可轻松上手图深度学习。\n\n针对传统图神经网络实现中密集矩阵计算效率低、稀疏矩阵操作复杂难懂等痛点，tf_geometric 采用了先进的“消息传递”机制。这一设计不仅大幅提升了运算效率，还通过优雅简洁的 API 封装了复杂的图操作，让用户仅需几行代码就能构建并运行如多头图注意力网络（GAT）、图卷积网络（GCN）等主流模型。\n\n该工具非常适合需要在 TensorFlow 框架下进行图数据研究的算法工程师、科研人员以及高校学生。无论是处理节点分类、图分类还是链接预测任务，tf_geometric 都提供了丰富的预置模型和演示案例（Demo），帮助用户快速验证想法或复现论文结果。其独特的优势在于完美平衡了性能与易用性，既保留了底层计算的灵活性，又提供了类似高级语言般的开发体验，是 TensorFlow 用户探索图智能领域的理想助手。","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FCrawlScript_tf_geometric_readme_2dd4f29d4084.png\" width=\"400\"\u002F>\n\u003C\u002Fp>\n\n\n# tf_geometric\n\nEfficient and Friendly Graph Neural Network (GNN) Library for TensorFlow 1.x and 2.x.\n\nInspired by __rusty1s\u002Fpytorch_geometric__, we built a GNN library for TensorFlow.\n\n\n## Homepage and Documentation\n\n+ Homepage: [https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric)\n+ Documentation: [https:\u002F\u002Ftf-geometric.readthedocs.io](https:\u002F\u002Ftf-geometric.readthedocs.io) ([中文版](https:\u002F\u002Ftf-geometric.readthedocs.io\u002Fen\u002Flatest\u002Findex_cn.html))\n+ Paper: [Efficient Graph Deep Learning in TensorFlow with tf_geometric](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11552)\n\n\n## Efficient and Friendly\n\nWe use the Message Passing mechanism to implement Graph Neural Networks (GNNs), which is far more efficient than dense-matrix-based implementations and more friendly than sparse-matrix-based ones.\nIn addition, we provide easy and elegant APIs for complex GNN operations.\nThe following example constructs a graph and applies a Multi-head Graph Attention Network (GAT) on it:\n```python\n# coding=utf-8\nimport numpy as np\nimport 
tf_geometric as tfg\nimport tensorflow as tf\n\ngraph = tfg.Graph(\n    x=np.random.randn(5, 20),  # 5 nodes, 20 features,\n    edge_index=[[0, 0, 1, 3],\n                [1, 2, 2, 1]]  # 4 undirected edges\n)\n\nprint(\"Graph Desc: \\n\", graph)\n\ngraph = graph.to_directed()  # pre-process edges\nprint(\"Processed Graph Desc: \\n\", graph)\nprint(\"Processed Edge Index:\\n\", graph.edge_index)\n\n# Multi-head Graph Attention Network (GAT)\ngat_layer = tfg.layers.GAT(units=4, num_heads=4, activation=tf.nn.relu)\noutput = gat_layer([graph.x, graph.edge_index])\nprint(\"Output of GAT: \\n\", output)\n```\n\nOutput:\n```html\nGraph Desc:\n Graph Shape: x => (5, 20)\tedge_index => (2, 4)\ty => None\n\nProcessed Graph Desc:\n Graph Shape: x => (5, 20)\tedge_index => (2, 8)\ty => None\n\nProcessed Edge Index:\n [[0 0 1 1 1 2 2 3]\n [1 2 0 2 3 0 1 1]]\n\nOutput of GAT:\n tf.Tensor(\n[[0.22443159 0.         0.58263206 0.32468423]\n [0.29810357 0.         0.19403605 0.35630274]\n [0.18071976 0.         0.58263206 0.32468423]\n [0.36123228 0.         0.88897204 0.450244  ]\n [0.         0.         0.8013462  0.        
]], shape=(5, 4), dtype=float32)\n```\n\n\n## DEMO\n\nWe recommend you get started with some demos.\n\n\n### Node Classification\n\n+ [Graph Convolutional Network (GCN)](demo\u002Fdemo_gcn.py)\n+ [Multi-head Graph Attention Network (GAT)](demo\u002Fdemo_gat.py)\n+ [Approximate Personalized Propagation of Neural Predictions (APPNP)](demo\u002Fdemo_appnp.py)\n+ [Inductive Representation Learning on Large Graphs (GraphSAGE)](demo\u002Fdemo_graph_sage_func.py)\n+ [Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering (ChebyNet)](demo\u002Fdemo_chebynet.py)\n+ [Simple Graph Convolution (SGC)](demo\u002Fdemo_sgc.py)\n+ [Topology Adaptive Graph Convolutional Network (TAGCN)](demo\u002Fdemo_tagcn.py)\n+ [Deep Graph Infomax (DGI)](demo\u002Fdemo_dgi.py)\n+ [DropEdge: Towards Deep Graph Convolutional Networks on Node Classification (DropEdge)](demo\u002Fdemo_drop_edge_gcn.py)\n+ [Graph Convolutional Networks for Text Classification (TextGCN)](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FTensorFlow-TextGCN)\n+ [Simple Spectral Graph Convolution (SSGC\u002FS^2GC)](demo\u002Fdemo_ssgc.py)\n\n\n### Graph Classification\n\n+ [MeanPooling](demo\u002Fdemo_mean_pool.py)\n+ [Graph Isomorphism Network (GIN)](demo\u002Fdemo_gin.py)\n+ [Self-Attention Graph Pooling (SAGPooling)](demo\u002Fdemo_sag_pool_h.py)\n+ [Hierarchical Graph Representation Learning with Differentiable Pooling (DiffPool)](demo\u002Fdemo_diff_pool.py)\n+ [Order Matters: Sequence to Sequence for Sets (Set2Set)](demo\u002Fdemo_set2set.py)\n+ [ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations (ASAP)](demo\u002Fdemo_asap.py)\n+ [An End-to-End Deep Learning Architecture for Graph Classification (SortPool)](demo\u002Fdemo_sort_pool.py)\n+ [Spectral Clustering with Graph Neural Networks for Graph Pooling (MinCutPool)](demo\u002Fdemo_min_cut_pool.py)\n\n\n\n### Link Prediction\n\n+ [Graph Auto-Encoder (GAE)](demo\u002Fdemo_gae.py)\n\n\n\n### Save and Load 
Models\n\n+ [Save and Load Models](demo\u002Fdemo_save_and_load_model.py)\n+ [Save and Load Models with tf.train.Checkpoint](demo\u002Fdemo_checkpoint.py)\n\n\n### Distributed Training\n\n+ [Distributed GCN for Node Classification](demo\u002Fdemo_distributed_gcn.py)\n+ [Distributed MeanPooling for Graph Classification](demo\u002Fdemo_distributed_mean_pool.py)\n\n\n### Sparse\n\n+ [Sparse Node Features](demo\u002Fdemo_sparse_node_features.py)\n\n\n\n## Installation\n\nRequirements:\n+ Operating System: Windows \u002F Linux \u002F Mac OS\n+ Python: version >= 3.7 \n+ Python Packages:\n    + tensorflow\u002Ftensorflow-gpu: >= 1.15.0 or >= 2.7.0\n    + tf_sparse\n    + numpy >= 1.17.4\n    + networkx >= 2.1\n    + scipy >= 1.1.0\n\n\nUse one of the following commands:\n```bash\npip install -U tf_geometric # this will not install the tensorflow\u002Ftensorflow-gpu package\n\npip install -U tf_geometric[tf1-cpu] # this will install TensorFlow 1.x CPU version\n\npip install -U tf_geometric[tf1-gpu] # this will install TensorFlow 1.x GPU version\n\npip install -U tf_geometric[tf2-cpu] # this will install TensorFlow 2.x CPU version\n\npip install -U tf_geometric[tf2-gpu] # this will install TensorFlow 2.x GPU version\n```\n\n## OOP and Functional API\n\nWe provide both OOP and Functional APIs, with which you can build some cool things.\n\n```python\n# coding=utf-8\nimport os\n# Enable GPU 0\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n\nimport tf_geometric as tfg\nimport tensorflow as tf\nimport numpy as np\n\n# ==================================== Graph Data Structure ====================================\n# In tf_geometric, the data of a graph can be represented by either a collection of\n# tensors (numpy.ndarray or tf.Tensor) or a tfg.Graph object.\n# A graph usually consists of x (node features), edge_index and edge_weight (optional)\n\n# Node Features => (num_nodes, num_features)\nx = np.random.randn(5, 20).astype(np.float32)  # 5 nodes, 20 features\n\n# Edge Index 
=> (2, num_edges)\n# Each column of edge_index (u, v) represents a directed edge from u to v.\n# Note that it does not cover the edge from v to u. You should provide (v, u) to cover it.\n# This is not convenient for users.\n# Thus, we allow users to provide edge_index in undirected form and convert it later.\n# That is, you can provide only (u, v) and convert it to (u, v) and (v, u) with the `convert_edge_to_directed` method.\nedge_index = np.array([\n    [0, 0, 1, 3],\n    [1, 2, 2, 1]\n])\n\n# Edge Weight => (num_edges)\nedge_weight = np.array([0.9, 0.8, 0.1, 0.2]).astype(np.float32)\n\n\n# Usually, we use a graph object to manage this information\n# edge_weight is optional; you can set it to None if you don't need it\n# Use 'to_directed' to obtain a graph with directed edges so that it can be used as the input of GCN\ngraph = tfg.Graph(x=x, edge_index=edge_index, edge_weight=edge_weight).to_directed()\n\n\n# Define a Graph Convolutional Layer (GCN)\ngcn_layer = tfg.layers.GCN(4, activation=tf.nn.relu)\n# Perform GCN on the graph\nh = gcn_layer([graph.x, graph.edge_index, graph.edge_weight])\nprint(\"Node Representations (GCN on a Graph): \\n\", h)\n\nfor _ in range(10):\n    # Using Graph.cache can avoid recomputation of GCN's normalized adjacency matrix,\n    # which can dramatically improve the efficiency of GCN.\n    h = gcn_layer([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)\n\n\n# For algorithms that deal with batches of graphs, we can pack a batch of graphs into a BatchGraph object\n# A BatchGraph wraps a batch of graphs into a single graph, where each node has a unique index and a graph index.\n# The node_graph_index is the index of the corresponding graph for each node in the batch.\n# The edge_graph_index is the index of the corresponding graph for each edge in the batch.\nbatch_graph = tfg.BatchGraph.from_graphs([graph, graph, graph, graph, graph])\n\n# We can split a BatchGraph object back into Graph objects\ngraphs = 
batch_graph.to_graphs()\n\n# Define a Graph Convolutional Layer (GCN)\nbatch_gcn_layer = tfg.layers.GCN(4, activation=tf.nn.relu)\n# Perform GCN on the BatchGraph\nbatch_h = batch_gcn_layer([batch_graph.x, batch_graph.edge_index, batch_graph.edge_weight])\nprint(\"Node Representations (GCN on a BatchGraph): \\n\", batch_h)\n\n# Graph Pooling algorithms often rely on such a batch data structure\n# Most of them accept a BatchGraph's data as input and output a feature vector for each graph in the batch\ngraph_h = tfg.nn.mean_pool(batch_h, batch_graph.node_graph_index, num_graphs=batch_graph.num_graphs)\nprint(\"Graph Representations (Mean Pooling on a BatchGraph): \\n\", graph_h)\n\n\n# Define a Graph Convolutional Layer (GCN) for scoring each node\ngcn_score_layer = tfg.layers.GCN(1)\n# We provide some advanced graph pooling operations such as topk_pool\nnode_score = gcn_score_layer([batch_graph.x, batch_graph.edge_index, batch_graph.edge_weight])\nnode_score = tf.reshape(node_score, [-1])\nprint(\"Score of Each Node: \\n\", node_score)\ntopk_node_index = tfg.nn.topk_pool(batch_graph.node_graph_index, node_score, ratio=0.6)\nprint(\"Top-k Node Index (Top-k Pooling): \\n\", topk_node_index)\n\n\n\n\n# ==================================== Built-in Datasets ====================================\n# All graph data are in numpy format\n\n# Cora Dataset\ngraph, (train_index, valid_index, test_index) = tfg.datasets.CoraDataset().load_data()\n\n# PPI Dataset\ntrain_data, valid_data, test_data = tfg.datasets.PPIDataset().load_data()\n\n# TU Datasets\n# TU Datasets: https:\u002F\u002Fls11-www.cs.tu-dortmund.de\u002Fstaff\u002Fmorris\u002Fgraphkerneldatasets\ngraph_dicts = tfg.datasets.TUDataset(\"NCI1\").load_data()\n\n\n# ==================================== Basic OOP API ====================================\n# OOP Style GCN (Graph Convolutional Network)\ngcn_layer = tfg.layers.GCN(units=20, activation=tf.nn.relu)\n\nfor graph in test_data:\n    # Cache can speed up GCN by caching the 
normalized edge information\n    outputs = gcn_layer([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)\n    print(outputs)\n\n\n# OOP Style GAT (Multi-head Graph Attention Network)\ngat_layer = tfg.layers.GAT(units=20, activation=tf.nn.relu, num_heads=4)\nfor graph in test_data:\n    outputs = gat_layer([graph.x, graph.edge_index])\n    print(outputs)\n\n\n# OOP Style Multi-layer GCN Model\nclass GCNModel(tf.keras.Model):\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.gcn0 = tfg.layers.GCN(16, activation=tf.nn.relu)\n        self.gcn1 = tfg.layers.GCN(7)\n        self.dropout = tf.keras.layers.Dropout(0.5)\n\n    def call(self, inputs, training=None, mask=None, cache=None):\n        x, edge_index, edge_weight = inputs\n        h = self.dropout(x, training=training)\n        h = self.gcn0([h, edge_index, edge_weight], cache=cache)\n        h = self.dropout(h, training=training)\n        h = self.gcn1([h, edge_index, edge_weight], cache=cache)\n        return h\n\n\ngcn_model = GCNModel()\nfor graph in test_data:\n    outputs = gcn_model([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)\n    print(outputs)\n\n\n# ==================================== Basic Functional API ====================================\n# Functional Style GCN\n# Functional API is more flexible for advanced algorithms\n# You can pass both data and parameters to functional APIs\n\ngcn_w = tf.Variable(tf.random.truncated_normal([test_data[0].num_features, 20]))\nfor graph in test_data:\n    outputs = tfg.nn.gcn(graph.x, graph.adj(), gcn_w, activation=tf.nn.relu)\n    print(outputs)\n\n\n# ==================================== Advanced Functional API ====================================\n# Most APIs are implemented in a Map-Reduce style\n# This is a GCN without weight normalization and transformation\n# Just pass the mapper\u002Freducer\u002Fupdater functions to the Functional API\n\nfor graph in test_data:\n   
 outputs = tfg.nn.aggregate_neighbors(\n        x=graph.x,\n        edge_index=graph.edge_index,\n        edge_weight=graph.edge_weight,\n        mapper=tfg.nn.identity_mapper,\n        reducer=tfg.nn.sum_reducer,\n        updater=tfg.nn.sum_updater\n    )\n    print(outputs)\n```\n\n\n\n\n## Cite\n\nIf you use tf_geometric in a scientific publication, we would appreciate citations to the following paper:\n\n```html\n@inproceedings{DBLP:conf\u002Fmm\u002FHuQFWZZX21,\n  author    = {Jun Hu and\n               Shengsheng Qian and\n               Quan Fang and\n               Youze Wang and\n               Quan Zhao and\n               Huaiwen Zhang and\n               Changsheng Xu},\n  editor    = {Heng Tao Shen and\n               Yueting Zhuang and\n               John R. Smith and\n               Yang Yang and\n               Pablo Cesar and\n               Florian Metze and\n               Balakrishnan Prabhakaran},\n  title     = {Efficient Graph Deep Learning in TensorFlow with tf{\\_}geometric},\n  booktitle = {{MM} '21: {ACM} Multimedia Conference, Virtual Event, China, October\n               20 - 24, 2021},\n  pages     = {3775--3778},\n  publisher = {{ACM}},\n  year      = {2021},\n  url       = {https:\u002F\u002Fdoi.org\u002F10.1145\u002F3474085.3478322},\n  doi       = {10.1145\u002F3474085.3478322},\n  timestamp = {Wed, 20 Oct 2021 12:40:01 +0200},\n  biburl    = {https:\u002F\u002Fdblp.org\u002Frec\u002Fconf\u002Fmm\u002FHuQFWZZX21.bib},\n  bibsource = {dblp computer science bibliography, https:\u002F\u002Fdblp.org}\n}\n```\n\n\n## Related Projects\n\n+ __MIG-GT:__ \"Modality-Independent Graph Neural Networks with Global Transformers for Multimodal Recommendation\" (AAAI 2025). URL: [https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMIG-GT](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMIG-GT).\n+ __RpHGNN:__ “Efficient Heterogeneous Graph Learning via Random Projection” (TKDE 2024). 
URL: [https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FRpHGNN](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FRpHGNN).\n+ __MGDCF:__ \"MGDCF: Distance Learning via Markov Graph Diffusion for Neural Collaborative Filtering\" (TKDE 2024). URL: [https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FTorch-MGDCF](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FTorch-MGDCF).\n+ __tf_sparse:__ We develop [TensorFlow Sparse (tf_sparse)](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_sparse) to implement efficient and elegant \nsparse TensorFlow operations for tf_geometric. URL: [https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_sparse](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_sparse).\n+ __GRecX:__ [GRecX](https:\u002F\u002Fgithub.com\u002Fmaenzhier\u002FGRecX) is an efficient and unified benchmark for GNN-based recommendation. URL: [https:\u002F\u002Fgithub.com\u002Fmaenzhier\u002FGRecX](https:\u002F\u002Fgithub.com\u002Fmaenzhier\u002FGRecX).\n+ __🐈 MMClaw:__ The Ultra-Lightweight, Pure Python Kernel for Multimodal AI Agents. 
URL: [https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMMClaw](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMMClaw).\n\n","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FCrawlScript_tf_geometric_readme_2dd4f29d4084.png\" width=\"400\"\u002F>\n\u003C\u002Fp>\n\n\n# tf_geometric\n\n高效且易用的图神经网络（GNN）库，适用于 TensorFlow 1.x 和 2.x。\n\n受 __rusty1s\u002Fpytorch_geometric__ 的启发，我们为 TensorFlow 构建了一个 GNN 库。\n\n\n## 主页与文档\n\n+ 主页：[https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric)\n+ 文档：[https:\u002F\u002Ftf-geometric.readthedocs.io](https:\u002F\u002Ftf-geometric.readthedocs.io) ([中文版](https:\u002F\u002Ftf-geometric.readthedocs.io\u002Fen\u002Flatest\u002Findex_cn.html))\n+ 论文：[使用 tf_geometric 在 TensorFlow 中实现高效的图深度学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11552)\n\n\n## 高效且友好\n\n我们采用消息传递机制来实现图神经网络（GNN），这种方式比基于稠密矩阵的实现更高效，也比基于稀疏矩阵的实现更加友好。此外，我们还为复杂的 GNN 操作提供了简单而优雅的 API。以下示例构建了一个图，并在其上应用了多头图注意力网络（GAT）：\n```python\n# coding=utf-8\nimport numpy as np\nimport tf_geometric as tfg\nimport tensorflow as tf\n\ngraph = tfg.Graph(\n    x=np.random.randn(5, 20),  # 5个节点，每个节点有20个特征，\n    edge_index=[[0, 0, 1, 3],\n                [1, 2, 2, 1]]  # 4条无向边\n)\n\nprint(\"图描述：\\n\", graph)\n\ngraph = graph.to_directed()  # 处理边信息\nprint(\"处理后的图描述：\\n\", graph)\nprint(\"处理后的边索引：\\n\", graph.edge_index)\n\n# 多头图注意力网络（GAT）\ngat_layer = tfg.layers.GAT(units=4, num_heads=4, activation=tf.nn.relu)\noutput = gat_layer([graph.x, graph.edge_index])\nprint(\"GAT 的输出：\\n\", output)\n```\n\n输出：\n```html\n图描述：\n 图结构：x => (5, 20)\tedge_index => (2, 4)\ty => None\n\n处理后的图描述：\n 图结构：x => (5, 20)\tedge_index => (2, 8)\ty => None\n\n处理后的边索引：\n [[0 0 1 1 1 2 2 3]\n [1 2 0 2 3 0 1 1]]\n\nGAT 的输出：\n tf.Tensor(\n[[0.22443159 0.         0.58263206 0.32468423]\n [0.29810357 0.         0.19403605 0.35630274]\n [0.18071976 0.         0.58263206 0.32468423]\n [0.36123228 0.         
0.88897204 0.450244  ]\n [0.         0.         0.8013462  0.        ]], shape=(5, 4), dtype=float32)\n```\n\n\n## DEMO\n\n我们建议您从一些示例开始入手。\n\n\n### 节点分类\n\n+ [图卷积网络（GCN）](demo\u002Fdemo_gcn.py)\n+ [多头图注意力网络（GAT）](demo\u002Fdemo_gat.py)\n+ [近似个性化神经预测传播（APPNP）](demo\u002Fdemo_appnp.py)\n+ [大型图上的归纳表示学习（GraphSAGE）](demo\u002Fdemo_graph_sage_func.py)\n+ [具有快速局部化谱滤波的图卷积神经网络（ChebyNet）](demo\u002Fdemo_chebynet.py)\n+ [简单图卷积（SGC）](demo\u002Fdemo_sgc.py)\n+ [拓扑自适应图卷积网络（TAGCN）](demo\u002Fdemo_tagcn.py)\n+ [深度图信息最大化（DGI）](demo\u002Fdemo_dgi.py)\n+ [DropEdge：面向节点分类的深层图卷积网络](demo\u002Fdemo_drop_edge_gcn.py)\n+ [用于文本分类的图卷积网络（TextGCN）](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FTensorFlow-TextGCN)\n+ [简单谱图卷积（SSGC\u002FS^2GC）](demo\u002Fdemo_ssgc.py)\n\n\n### 图分类\n\n+ [均值池化（MeanPooling）](demo\u002Fdemo_mean_pool.py)\n+ [图同构网络（GIN）](demo\u002Fdemo_gin.py)\n+ [自注意力图池化（SAGPooling）](demo\u002Fdemo_sag_pool_h.py)\n+ [可微分池化的层次化图表示学习（DiffPool）](demo\u002Fdemo_diff_pool.py)\n+ [顺序很重要：集合的序列到序列模型（Set2Set）](demo\u002Fdemo_set2set.py)\n+ [ASAP：用于学习层次化图表示的自适应结构感知池化（ASAP）](demo\u002Fdemo_asap.py)\n+ [面向图分类的端到端深度学习架构（SortPool）](demo\u002Fdemo_sort_pool.py)\n+ [基于图神经网络的谱聚类用于图池化（MinCutPool）](demo\u002Fdemo_min_cut_pool.py)\n\n\n\n### 链接预测\n\n+ [图自编码器（GAE）](demo\u002Fdemo_gae.py)\n\n\n\n### 模型保存与加载\n\n+ [模型保存与加载](demo\u002Fdemo_save_and_load_model.py)\n+ [使用 tf.train.Checkpoint 保存和加载模型](demo\u002Fdemo_checkpoint.py)\n\n\n### 分布式训练\n\n+ [面向节点分类的分布式 GCN](demo\u002Fdemo_distributed_gcn.py)\n+ [面向图分类的分布式均值池化](demo\u002Fdemo_distributed_mean_pool.py)\n\n\n### 稀疏数据\n\n+ [稀疏节点特征](demo\u002Fdemo_sparse_node_features.py)\n\n\n\n## 安装\n\n要求：\n+ 操作系统：Windows \u002F Linux \u002F Mac OS\n+ Python：版本 >= 3.7 \n+ Python 包：\n    + tensorflow\u002Ftensorflow-gpu：>= 1.15.0 或 >= 2.7.0\n    + tf_sparse\n    + numpy >= 1.17.4\n    + networkx >= 2.1\n    + scipy >= 1.1.0\n\n\n请使用以下命令之一进行安装：\n```bash\npip install -U tf_geometric # 这不会安装 tensorflow\u002Ftensorflow-gpu 包\n\npip install -U tf_geometric[tf1-cpu] # 这将安装 
TensorFlow 1.x CPU 版本\n\npip install -U tf_geometric[tf1-gpu] # 这将安装 TensorFlow 1.x GPU 版本\n\npip install -U tf_geometric[tf2-cpu] # 这将安装 TensorFlow 2.x CPU 版本\n\npip install -U tf_geometric[tf2-gpu] # 这将安装 TensorFlow 2.x GPU 版本\n```\n\n## 面向对象与函数式 API\n\n我们同时提供了面向对象和函数式 API，您可以利用它们创造出许多有趣的东西。\n\n```python\n# coding=utf-8\nimport os\n# 启用 GPU 0\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n\nimport tf_geometric as tfg\nimport tensorflow as tf\nimport numpy as np\n\n# ==================================== 图数据结构 ====================================\n# 在 tf_geometric 中，图的数据可以用一组张量（numpy.ndarray 或 tf.Tensor）或者一个 tfg.Graph 对象来表示。\n# 一个图通常由节点特征 x、边索引 edge_index 和边权重 edge_weight（可选）组成。\n\n# 节点特征 => (num_nodes, num_features)\nx = np.random.randn(5, 20).astype(np.float32)  # 5个节点，每个节点有20个特征\n\n# 边索引 => (2, num_edges)\n# 边索引的每一列 (u, v) 表示一条从 u 到 v 的有向边。\n# 注意，它并不包含从 v 到 u 的边。你需要提供 (v, u) 来补充这一部分。\n# 这种方式对用户不太友好。因此，我们允许用户以无向形式提供边索引，并在后续将其转换为有向形式。\n# 也就是说，我们可以只提供 (u, v)，然后通过 `convert_edge_to_directed` 方法将其转换为 (u, v) 和 (v, u)。\nedge_index = np.array([\n    [0, 0, 1, 3],\n    [1, 2, 2, 1]\n])\n\n# 边权重 => (num_edges)\nedge_weight = np.array([0.9, 0.8, 0.1, 0.2]).astype(np.float32)\n\n\n# 通常，我们会使用一个图对象来管理这些信息。\n# 边权重是可选的，如果你不需要的话可以设置为 None。\n# 使用 'to_directed' 方法将图转换为有向边形式，以便作为 GCN 的输入。\ngraph = tfg.Graph(x=x, edge_index=edge_index, edge_weight=edge_weight).to_directed()\n\n\n# 定义一个图卷积层（GCN）\ngcn_layer = tfg.layers.GCN(4, activation=tf.nn.relu)\n\n# 在图上执行 GCN\nh = gcn_layer([graph.x, graph.edge_index, graph.edge_weight])\nprint(\"节点表示（图上的 GCN）：\\n\", h)\n\nfor _ in range(10):\n    # 使用 Graph.cache 可以避免重新计算 GCN 的归一化邻接矩阵，\n    # 从而显著提高 GCN 的效率。\n    h = gcn_layer([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)\n\n\n# 对于处理图批次的算法，我们可以将一批图打包成一个 BatchGraph 对象\n# Batch graph 将一批图包装成一个单一的图，其中每个节点都有一个唯一的索引和一个图索引。\n# node_graph_index 是批次中每个节点对应图的索引。\n# edge_graph_index 是批次中每个边对应图的索引。\nbatch_graph = tfg.BatchGraph.from_graphs([graph, graph, graph, graph, graph])\n\n# 我们可以将 
BatchGraph 对象反向拆分为 Graph 对象\ngraphs = batch_graph.to_graphs()\n\n# 定义一个图卷积层（GCN）\nbatch_gcn_layer = tfg.layers.GCN(4, activation=tf.nn.relu)\n# 在 BatchGraph 上执行 GCN\nbatch_h = batch_gcn_layer([batch_graph.x, batch_graph.edge_index, batch_graph.edge_weight])\nprint(\"节点表示（BatchGraph 上的 GCN）：\\n\", batch_h)\n\n# 图池化算法通常依赖于这种批处理数据结构\n# 大多数算法接受 BatchGraph 的数据作为输入，并为批次中的每张图输出一个特征向量\ngraph_h = tfg.nn.mean_pool(batch_h, batch_graph.node_graph_index, num_graphs=batch_graph.num_graphs)\nprint(\"图表示（BatchGraph 上的均值池化）：\\n\", graph_h)\n\n\n# 定义一个用于对每个节点打分的图卷积层（GCN）\ngcn_score_layer = tfg.layers.GCN(1)\n# 我们提供了一些高级的图池化操作，例如 topk_pool\nnode_score = gcn_score_layer([batch_graph.x, batch_graph.edge_index, batch_graph.edge_weight])\nnode_score = tf.reshape(node_score, [-1])\nprint(\"每个节点的得分：\\n\", node_score)\ntopk_node_index = tfg.nn.topk_pool(batch_graph.node_graph_index, node_score, ratio=0.6)\nprint(\"Top-k 节点索引（Top-k 池化）：\\n\", topk_node_index)\n\n\n\n\n# ==================================== 内置数据集 ====================================\n# 所有图数据都以 numpy 格式存储\n\n# Cora 数据集\ngraph, (train_index, valid_index, test_index) = tfg.datasets.CoraDataset().load_data()\n\n# PPI 数据集\ntrain_data, valid_data, test_data = tfg.datasets.PPIDataset().load_data()\n\n# TU 数据集\n# TU 数据集：https:\u002F\u002Fls11-www.cs.tu-dortmund.de\u002Fstaff\u002Fmorris\u002Fgraphkerneldatasets\ngraph_dicts = tfg.datasets.TUDataset(\"NCI1\").load_data()\n\n\n# ==================================== 基础面向对象 API ====================================\n# 面向对象风格的 GCN（图卷积网络）\ngcn_layer = tfg.layers.GCN(units=20, activation=tf.nn.relu)\n\nfor graph in test_data:\n    # 缓存可以通过缓存归一化的边信息来加速 GCN\n    outputs = gcn_layer([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)\n    print(outputs)\n\n\n# 面向对象风格的 GAT（多头图注意力网络）\ngat_layer = tfg.layers.GAT(units=20, activation=tf.nn.relu, num_heads=4)\nfor graph in test_data:\n    outputs = gat_layer([graph.x, graph.edge_index])\n    print(outputs)\n\n\n# 面向对象风格的多层 GCN 模型\nclass 
GCNModel(tf.keras.Model):\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.gcn0 = tfg.layers.GCN(16, activation=tf.nn.relu)\n        self.gcn1 = tfg.layers.GCN(7)\n        self.dropout = tf.keras.layers.Dropout(0.5)\n\n    def call(self, inputs, training=None, mask=None, cache=None):\n        x, edge_index, edge_weight = inputs\n        h = self.dropout(x, training=training)\n        h = self.gcn0([h, edge_index, edge_weight], cache=cache)\n        h = self.dropout(h, training=training)\n        h = self.gcn1([h, edge_index, edge_weight], cache=cache)\n        return h\n\n\ngcn_model = GCNModel()\nfor graph in test_data:\n    outputs = gcn_model([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)\n    print(outputs)\n\n\n# ==================================== 基础函数式 API ====================================\n# 函数式风格的 GCN\n# 函数式 API 对于高级算法更加灵活\n# 你可以同时传递数据和参数给函数式 API\n\ngcn_w = tf.Variable(tf.random.truncated_normal([test_data[0].num_features, 20]))\nfor graph in test_data:\n    outputs = tfg.nn.gcn(graph.x, graph.adj(), gcn_w, activation=tf.nn.relu)\n    print(outputs)\n\n\n# ==================================== 高级函数式 API ====================================\n# 大多数 API 都是以 Map-Reduce 风格实现的\n# 这是一个没有权重归一化和变换的 GCN\n# 只需将映射\u002F归约\u002F更新函数传递给函数式 API\n\nfor graph in test_data:\n    outputs = tfg.nn.aggregate_neighbors(\n        x=graph.x,\n        edge_index=graph.edge_index,\n        edge_weight=graph.edge_weight,\n        mapper=tfg.nn.identity_mapper,\n        reducer=tfg.nn.sum_reducer,\n        updater=tfg.nn.sum_updater\n    )\n    print(outputs)\n```\n\n\n\n\n## 引用\n\n如果您在科学出版物中使用 tf_geometric，我们非常感谢您引用以下论文：\n\n```html\n@inproceedings{DBLP:conf\u002Fmm\u002FHuQFWZZX21,\n  author    = {Jun Hu and\n               Shengsheng Qian and\n               Quan Fang and\n               Youze Wang and\n               Quan Zhao and\n               Huaiwen Zhang and\n               Changsheng Xu},\n  editor   
 = {Heng Tao Shen and\n               Yueting Zhuang and\n               John R. Smith and\n               Yang Yang and\n               Pablo Cesar and\n               Florian Metze and\n               Balakrishnan Prabhakaran},\n  title     = {Efficient Graph Deep Learning in TensorFlow with tf{\\_}geometric},\n  booktitle = {{MM} '21: {ACM} Multimedia Conference, Virtual Event, China, October\n               20 - 24, 2021},\n  pages     = {3775--3778},\n  publisher = {{ACM}},\n  year      = {2021},\n  url       = {https:\u002F\u002Fdoi.org\u002F10.1145\u002F3474085.3478322},\n  doi       = {10.1145\u002F3474085.3478322},\n  timestamp = {Wed, 20 Oct 2021 12:40:01 +0200},\n  biburl    = {https:\u002F\u002Fdblp.org\u002Frec\u002Fconf\u002Fmm\u002FHuQFWZZX21.bib},\n  bibsource = {dblp computer science bibliography, https:\u002F\u002Fdblp.org}\n}\n```\n\n## 相关项目\n\n+ __MIG-GT：__ “用于多模态推荐的跨模态全局Transformer图神经网络”（AAAI 2025）。网址：[https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMIG-GT](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMIG-GT)。\n+ __RpHGNN：__ “基于随机投影的高效异构图学习”（TKDE 2024）。网址：[https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FRpHGNN](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FRpHGNN)。\n+ __MGDCF：__ “MGDCF：基于马尔可夫图扩散的距离学习用于神经协同过滤”（TKDE 2024）。网址：[https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FTorch-MGDCF](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FTorch-MGDCF)。\n+ __tf_sparse：__ 我们开发了[TensorFlow Sparse (tf_sparse)](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_sparse)，用于在tf_geometric中实现高效且优雅的稀疏TensorFlow运算。网址：[https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_sparse](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_sparse)。\n+ __GRecX：__ [GRecX](https:\u002F\u002Fgithub.com\u002Fmaenzhier\u002FGRecX) 是一个高效、统一的基于GNN的推荐基准测试平台。网址：[https:\u002F\u002Fgithub.com\u002Fmaenzhier\u002FGRecX](https:\u002F\u002Fgithub.com\u002Fmaenzhier\u002FGRecX)。\n+ __🐈 MMClaw：__ 
An ultra-lightweight, pure-Python core framework for multimodal AI agents. URL: [https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMMClaw](https:\u002F\u002Fgithub.com\u002FCrawlScript\u002FMMClaw).",
"# tf_geometric Quick Start Guide\n\n`tf_geometric` is an efficient and friendly graph neural network (GNN) library for both TensorFlow 1.x and 2.x. Inspired by `pytorch_geometric`, it implements GNNs with a message-passing mechanism, which is more efficient than dense-matrix implementations and easier to use than raw sparse-matrix ones.\n\n## Requirements\n\nBefore you start, make sure your environment meets the following requirements:\n\n*   **Operating system**: Windows \u002F Linux \u002F macOS\n*   **Python**: >= 3.7\n*   **Core dependencies**:\n    *   `tensorflow` or `tensorflow-gpu`: >= 1.15.0 or >= 2.7.0\n    *   `tf_sparse`\n    *   `numpy` >= 1.17.4\n    *   `networkx` >= 2.1\n    *   `scipy` >= 1.1.0\n\n> **Tip**: if downloads are slow (e.g. in mainland China), append `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple` to the install command to use the Tsinghua PyPI mirror.\n\n## Installation\n\nPick one of the following commands according to your TensorFlow version and hardware (CPU\u002FGPU):\n\n```bash\n# Install tf_geometric only (TensorFlow must be installed beforehand)\npip install -U tf_geometric\n\n# Install tf_geometric + TensorFlow 1.x (CPU)\npip install -U tf_geometric[tf1-cpu]\n\n# Install tf_geometric + TensorFlow 1.x (GPU)\npip install -U tf_geometric[tf1-gpu]\n\n# Install tf_geometric + TensorFlow 2.x (CPU)\npip install -U tf_geometric[tf2-cpu]\n\n# Install tf_geometric + TensorFlow 2.x (GPU)\npip install -U tf_geometric[tf2-gpu]\n```\n\n*(Mirror example: `pip install -U tf_geometric[tf2-gpu] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`)*\n\n## Basic Usage\n\nThe following example builds a graph object and processes it with a multi-head graph attention network (GAT). This is the most basic and typical workflow.\n\n```python\n# coding=utf-8\nimport numpy as np\nimport tf_geometric as tfg\nimport tensorflow as tf\n\n# 1. Build the graph data\n# x: node features (5 nodes, 20-dimensional features each)\n# edge_index: edge indices (undirected edges, as [source nodes, target nodes])\ngraph = tfg.Graph(\n    x=np.random.randn(5, 20),\n    edge_index=[[0, 0, 1, 3],\n                [1, 2, 2, 1]]\n)\n\nprint(\"Original graph:\\n\", graph)\n\n# 2. Preprocess the edges\n# Convert undirected edges into directed edges (both directions) so GNN layers can process them\ngraph = graph.to_directed()\nprint(\"Processed edge_index:\\n\", graph.edge_index)\n\n# 3. Define and run the GAT layer\n# Create a GAT layer with 4 attention heads, 4 output units, and ReLU activation\ngat_layer = tfg.layers.GAT(units=4, num_heads=4, activation=tf.nn.relu)\n\n# Forward pass\noutput = gat_layer([graph.x, graph.edge_index])\n\nprint(\"GAT output:\\n\", output)\n```\n\n**Sample output:**\n```text\nOriginal graph:\n Graph Shape: x => (5, 20)\tedge_index => (2, 4)\ty => None\nProcessed edge_index:\n [[0 0 1 1 1 2 2 3]\n [1 2 0 2 3 0 1 1]]\nGAT output:\n tf.Tensor(\n[[0.22443159 0.         0.58263206 0.32468423]\n [0.29810357 0.         0.19403605 0.35630274]\n ...\n [0.         0.         0.8013462  0.        ]], shape=(5, 4), dtype=float32)\n```\n\n### Advanced Tips\n*   **Batching**: for graph classification tasks, use `tfg.BatchGraph.from_graphs()` to pack multiple graphs into a single batch.\n*   **Caching**: for models such as GCN that compute a normalized adjacency matrix, passing `cache=graph.cache` avoids recomputation and speeds up training significantly.\n*   **Built-in datasets**: common datasets such as Cora and PPI are bundled and can be loaded via `tfg.datasets.CoraDataset().load_data()`.",
"A fintech team is building an anti-fraud system over transaction networks and needs graph neural networks to uncover hidden collusion patterns.\n\n### Without tf_geometric\n- **High barrier to entry**: the team must hand-implement complex sparse-matrix operations and message passing; the code is large and error-prone, stretching algorithm validation to weeks.\n- **Framework mismatch**: the existing infrastructure is built on TensorFlow, while mainstream GNN libraries mostly target PyTorch, so migrating frameworks or rewriting models is prohibitively expensive.\n- **Performance bottlenecks**: a home-grown dense-matrix implementation cannot efficiently process large sparse transaction graphs; GPU memory usage is too high to train deep networks on large-scale data.\n- **Poor reusability**: every new algorithm (e.g. switching from GCN to GAT) requires rewriting low-level data preprocessing and layer logic, blocking fast iteration.\n\n### With tf_geometric\n- **Works out of the box**: concise APIs such as `tfg.layers.GAT` build a multi-head graph attention network in a few lines of code, cutting prototype development from weeks to days.\n- **Native compatibility**: full support for TensorFlow 1.x and 2.x lets the team keep its existing stack and reuse current deployment pipelines and optimization strategies.\n- **Efficient computation**: built-in message-passing operators handle edge-index preprocessing and undirected-graph conversion automatically, reducing memory consumption and speeding up training.\n- **Flexible extension**: rich demos covering node classification, graph classification, and link prediction make it easy to switch between SOTA models for comparison experiments.\n\ntf_geometric gives TensorFlow users efficient and friendly graph deep learning at very low cost, accelerating the rollout of the anti-fraud model.",
"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FCrawlScript_tf_geometric_2dd4f29d.png","CrawlScript","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FCrawlScript_95c5f15e.jpg","",null,"https:\u002F\u002Fgithub.com\u002FCrawlScript",[79,83],{"name":80,"color":81,"percentage":82},"Python","#3572A5",99.9,{"name":84,"color":85,"percentage":86},"Shell","#89e051",0.1,508,92,"2026-04-02T09:05:00","GPL-3.0","Windows, Linux, macOS","Optional. For GPU use, install tensorflow-gpu (>=1.15.0 or >=2.7.0); the required CUDA version depends on the installed TensorFlow version. The README does not specify particular GPU models or memory sizes.","Not specified",
{"notes":95,"python":96,"dependencies":97},"The library supports TensorFlow 1.x and 2.x. A pip extra can automatically install the CPU or GPU build of TensorFlow (e.g. pip install -U tf_geometric[tf2-gpu]). To install only the base library without touching an existing TensorFlow, use the default command.",">=3.7",[98,99,100,101,102],"tensorflow>=1.15.0 or >=2.7.0","tf_sparse","numpy>=1.17.4","networkx>=2.1","scipy>=1.1.0",[14],[105,106,107,108,109,110],"gnn","gnns","tensorflow","tensorflow2","efficient","library","2026-03-27T02:49:30.150509","2026-04-14T12:34:21.585807",[114,119,124,129,134,139,144],
{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},32826,"Is there a chat group for discussing tf_geometric usage?","Yes. An official QQ group is available (group number: 535148548), which users can join for technical exchange and discussion.","https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric\u002Fissues\u002F26",
{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},32820,"How do I install tf_geometric without upgrading my existing TensorFlow?","Use `pip install -U tf_geometric`. This command only updates tf_geometric itself; it neither installs nor upgrades TensorFlow, so your existing TensorFlow version (e.g. 2.4.1) is preserved.","https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric\u002Fissues\u002F25",
{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},32821,"How does tf_geometric support mini-batch training?","If you have a list of graphs, build a batch graph via `tfg.BatchGraph.from_graphs(graphs)`. Since `tfg.BatchGraph` inherits from `tfg.Graph`, you can apply any GNN layer to it directly. In addition, `batch_graph.y` combines the labels of the individual graphs and can be used directly to compute the loss. See the Graph Pooling demos in the repository (e.g. demo_mean_pool.py) for concrete examples.","https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric\u002Fissues\u002F29",
{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},32822,"Why does convert_edge_to_nx_graph return only part of the edges?","The input `edge_index` has the wrong format. In tf_geometric, `edge_index` must have shape `[2, num_edges]` (a two-row tensor whose first row holds source nodes and whose second row holds target nodes), not a list of tuples. Convert your edge data to the correct shape before passing it in.","https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric\u002Fissues\u002F37",
{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},32823,"Do tf_geometric APIs require a Graph object? How is multi-graph input handled?","Most tfg APIs do not take a Graph object directly; operations and layers expect the node feature matrix (x) and the edge information (edge_index) as inputs. Graph and BatchGraph objects mainly help organize data (inputs and intermediate results); since they are built from ordinary TensorFlow operations, they are differentiable. You can use them like regular TensorFlow tensors, and gradients propagate normally.","https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric\u002Fissues\u002F3",
{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},32824,"Why do nodes inside a loop or triangle end up with identical output features after a GCN layer?","This follows from GCN's message-passing mechanism. Ignoring the weight transformation, in a fully connected subgraph (e.g. a triangle a, b, c) each node aggregates and averages the features of itself and its neighbors, which include the other nodes in the loop. After aggregation, every node's feature therefore tends toward the average of all node features in that subgraph ((a + b + c) \u002F 3), so the outputs become identical. This is a mathematical property of the algorithm, not a bug in the code.","https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric\u002Fissues\u002F11",
{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},32825,"What should I do if the PPI dataset download link is broken?","This issue was fixed in tf_geometric 0.0.31. Upgrade to 0.0.31 or later and the correct dataset link will be used automatically.","https:\u002F\u002Fgithub.com\u002FCrawlScript\u002Ftf_geometric\u002Fissues\u002F13",[]]