[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-dmlc--dgl":3,"tool-dmlc--dgl":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160015,2,"2026-04-18T11:30:52",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":32,"env_os":118,"env_gpu":119,"env_ram":120,"env_deps":121,"category_tags":128,"github_topics":129,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":132,"updated_at":133,"faqs":134,"releases":135},9082,"dmlc\u002Fdgl","dgl","Python package built to ease deep learning on graph, on top of existing DL frameworks.","DGL（Deep Graph Library）是一款专为图深度学习打造的高性能 Python 库，旨在降低在现有深度学习框架上进行图神经网络开发的门槛。它主要解决了传统框架在处理复杂图结构数据时效率低、扩展性差以及代码复用难的痛点，让开发者能够专注于模型逻辑而非底层实现。\n\n无论是致力于前沿算法探索的研究人员，还是需要将图模型集成到端到端应用中的工程师，DGL 都是理想的选择。其核心优势在于“框架无关”的设计，完美支持 PyTorch、TensorFlow 和 MXNet 等主流后端，用户可灵活沿用熟悉的技术栈。技术亮点方面，DGL 提供了强大的图对象抽象，支持 CPU 与 GPU 无缝切换，并内置了高效且可定制的消息传递机制，极大简化了图神经网络的构建过程。此外，针对海量数据场景，DGL 
经过深度优化，能够轻松利用多卡或多机集群进行分布式训练，从容应对十亿级节点的大规模图数据。配合丰富的预置模型库、基准测试支持以及详尽的新手教程，DGL 帮助用户快速从理论验证走向生产部署，是连接学术创新与工业应用的桥梁。","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdmlc_dgl_readme_a155b5be4c57.jpg\" height=\"200\">\n\u003C\u002Fp>\n\n[![Latest Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fdmlc\u002Fdgl)](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Freleases)\n[![Conda Latest Release](https:\u002F\u002Fanaconda.org\u002Fdglteam\u002Fdgl\u002Fbadges\u002Fversion.svg)](https:\u002F\u002Fanaconda.org\u002Fdglteam\u002Fdgl)\n[![Build Status](https:\u002F\u002Fci.dgl.ai\u002FbuildStatus\u002Ficon?job=DGL\u002Fmaster)](https:\u002F\u002Fci.dgl.ai\u002Fjob\u002FDGL\u002Fjob\u002Fmaster\u002F)\n[![Benchmark by ASV](http:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fbenchmarked%20by-asv-green.svg?style=flat)](https:\u002F\u002Fasv.dgl.ai\u002F)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg)](.\u002FLICENSE)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002FDGLGraph?style=social)](https:\u002F\u002Ftwitter.com\u002FGraphDeep)\n\n[Website](https:\u002F\u002Fwww.dgl.ai) | [A Blitz Introduction to DGL](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fblitz\u002Findex.html) | [Documentation](https:\u002F\u002Fwww.dgl.ai\u002Fdgl_docs\u002F) | [Official Examples](examples\u002FREADME.md) | [Discussion Forum](https:\u002F\u002Fdiscuss.dgl.ai) | [Slack Channel](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fdeep-graph-library\u002Fshared_invite\u002Fzt-eb4ict1g-xcg3PhZAFAB8p6dtKuP6xQ)\n\nDGL is an easy-to-use, high-performance, and scalable Python package for deep learning on graphs. 
DGL is framework agnostic, meaning if a deep graph model is a component of an end-to-end application, the rest of the logic can be implemented in any major framework, such as PyTorch, Apache MXNet or TensorFlow.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdmlc_dgl_readme_18e7d7287542.png\" alt=\"DGL v0.4 architecture\" width=\"600\">\n  \u003Cbr>\n  \u003Cb>Figure\u003C\u002Fb>: DGL Overall Architecture\n\u003C\u002Fp>\n\n## Highlighted Features\n\n### A GPU-ready graph library\n\nDGL provides a powerful graph object that can reside on either CPU or GPU. It bundles structural data as well as features for better control. We provide a variety of functions for computing with graph objects, including efficient and customizable message passing primitives for Graph Neural Networks.\n\n### A versatile tool for GNN researchers and practitioners\n\nThe field of graph deep learning is still rapidly evolving and many research ideas emerge by standing on the shoulders of giants. To ease the process, [DGL-Go](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fdglgo) is a command-line interface to get started with training, using and studying state-of-the-art GNNs.\nDGL collects a rich set of [example implementations](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples) of popular GNN models covering a wide range of topics. Researchers can [search](https:\u002F\u002Fwww.dgl.ai\u002F) for related models to draw new ideas from or use them as baselines for experiments. Moreover, DGL provides many state-of-the-art [GNN layers and modules](https:\u002F\u002Fdocs.dgl.ai\u002Fapi\u002Fpython\u002Fnn.html) for users to build new model architectures. 
DGL is one of the preferred platforms for many standard graph deep learning benchmarks, including [OGB](https:\u002F\u002Fogb.stanford.edu\u002F) and [GNNBenchmarks](https:\u002F\u002Fgithub.com\u002Fgraphdeeplearning\u002Fbenchmarking-gnns).\n\n### Easy to learn and use\n\nDGL provides plenty of learning materials for all kinds of users, from ML researchers to domain experts. The [Blitz Introduction to DGL](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fblitz\u002Findex.html) is a 120-minute tour of the basics of graph machine learning. The [User Guide](https:\u002F\u002Fdocs.dgl.ai\u002Fguide\u002Findex.html) explains the concepts of graphs as well as the training methodology in more detail. All of them include runnable DGL code snippets that are ready to be plugged into one’s own pipeline.\n\n### Scalable and efficient\n\nIt is convenient to train models using DGL on large-scale graphs across **multiple GPUs** or **multiple machines**. DGL extensively optimizes the whole stack to reduce the overhead in communication, memory consumption and synchronization. As a result, DGL can easily scale to billion-sized graphs. Get started with the [tutorials](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002Ftutorials\u002Fdist\u002Findex.html) and [user guide](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002Flatest\u002Fguide\u002Fdistributed.html) for distributed training. See the [system performance note](https:\u002F\u002Fdocs.dgl.ai\u002Fperformance.html) for a comparison with other tools.\n\n## Get Started\n\nUsers can install DGL from [pip and conda](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fstart.html). You can also download GPU-enabled DGL Docker [containers](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fcontainers\u002Fdgl) (backed by PyTorch) from NVIDIA NGC for both x86- and ARM-based Linux systems. 
Advanced users can follow the [instructions](https:\u002F\u002Fdocs.dgl.ai\u002Finstall\u002Findex.html#install-from-source) to install from source.\n\nFor absolute beginners, start with [the Blitz Introduction to DGL](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fblitz\u002Findex.html). It covers the basic concepts of common graph machine learning tasks and walks step by step through building Graph Neural Networks (GNNs) to solve them.\n\nFor experienced users who wish to learn more:\n\n* Experience state-of-the-art GNN models with only two command lines using [DGL-Go](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fdglgo).\n* Learn DGL from [example implementations](https:\u002F\u002Fwww.dgl.ai\u002F) of popular GNN models.\n* Read the [User Guide](https:\u002F\u002Fdocs.dgl.ai\u002Fguide\u002Findex.html) ([Chinese version](https:\u002F\u002Fdocs.dgl.ai\u002Fguide_cn\u002Findex.html)), which explains the concepts and usage of DGL in much more detail.\n* Go through the tutorials for advanced features like [stochastic training of GNNs](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Flarge\u002Findex.html), [multi-GPU](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fmulti\u002Findex.html) or [multi-machine](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fdist\u002Findex.html) training.\n* [Study classical papers](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fmodels\u002Findex.html) on graph machine learning alongside DGL.\n* Search for the usage of a specific API in the [API reference manual](https:\u002F\u002Fdocs.dgl.ai\u002Fapi\u002Fpython\u002Findex.html), which organizes all DGL APIs by their namespace.\n\nAll the learning materials are available at our [documentation site](https:\u002F\u002Fdocs.dgl.ai\u002F). 
If you are new to deep learning in general,\ncheck out the open source book [Dive into Deep Learning](https:\u002F\u002Fd2l.ai\u002F).\n\n\n## Community\n\n### Get connected\n\nWe provide multiple channels to connect you with the community of DGL developers and users, as well as GNN researchers at large:\n\n* Our Slack channel, [click to join](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fdeep-graph-library\u002Fshared_invite\u002Fzt-eb4ict1g-xcg3PhZAFAB8p6dtKuP6xQ)\n* Our discussion forum: https:\u002F\u002Fdiscuss.dgl.ai\u002F\n* Our [Zhihu blog (in Chinese)](https:\u002F\u002Fwww.zhihu.com\u002Fcolumn\u002Fc_1070749881013936128)\n* Monthly GNN User Group online seminar ([event link](https:\u002F\u002Fwww.eventbrite.com\u002Fe\u002Fgraph-neural-networks-user-group-tickets-137512275919?utm-medium=discovery&utm-campaign=social&utm-content=attendeeshare&aff=escb&utm-source=cp&utm-term=listing) | [past videos](https:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCnmuSDY1pTlaFH1WRQElfTg))\n\nTake the survey [here](https:\u002F\u002Fforms.gle\u002FEj3jHCocACmb49Gp8) and leave feedback to help make DGL a better fit for your needs. Thanks!\n\n### DGL-powered projects\n\n* DGL-LifeSci: a DGL-based package for various applications in life science with graph neural networks. https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdgl-lifesci\n* DGL-KE: a high-performance, easy-to-use, and scalable package for learning large-scale knowledge graph embeddings. https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdgl-ke\n* Benchmarking GNN: https:\u002F\u002Fgithub.com\u002Fgraphdeeplearning\u002Fbenchmarking-gnns\n* OGB: a collection of realistic, large-scale, and diverse benchmark datasets for machine learning on graphs. https:\u002F\u002Fogb.stanford.edu\u002F\n* Graph4NLP: an easy-to-use library for R&D at the intersection of Deep Learning on Graphs and Natural Language Processing. 
https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\n* GNN-RecSys: https:\u002F\u002Fgithub.com\u002Fje-dbl\u002FGNN-RecSys\n* Amazon Neptune ML: a new capability of Neptune that uses Graph Neural Networks (GNNs), a machine learning technique purpose-built for graphs, to make easy, fast, and more accurate predictions using graph data. https:\u002F\u002Faws.amazon.com\u002Fcn\u002Fneptune\u002Fmachine-learning\u002F\n* GNNLens2: Visualization tool for Graph Neural Networks. https:\u002F\u002Fgithub.com\u002Fdmlc\u002FGNNLens2\n* RNAGlib: A package to facilitate construction, analysis, visualization and machine learning on RNA 2.5D Graphs. Includes a pre-built dataset: https:\u002F\u002Frnaglib.cs.mcgill.ca\n* OpenHGNN: Model zoo and benchmarks for Heterogeneous Graph Neural Networks. https:\u002F\u002Fgithub.com\u002FBUPT-GAMMA\u002FOpenHGNN\n* TGL: A graph learning framework for large-scale temporal graphs. https:\u002F\u002Fgithub.com\u002Famazon-research\u002Ftgl\n* gtrick: Bag of Tricks for Graph Neural Networks. https:\u002F\u002Fgithub.com\u002Fsangyx\u002Fgtrick\n* ArangoDB-DGL Adapter: Import [ArangoDB](https:\u002F\u002Fgithub.com\u002Farangodb\u002Farangodb) graphs into DGL and vice-versa. https:\u002F\u002Fgithub.com\u002Farangoml\u002Fdgl-adapter\n* DGLD: [DGLD](https:\u002F\u002Fgithub.com\u002FEagleLab-ZJU\u002FDGLD) is an open-source library for Deep Graph Anomaly Detection based on PyTorch and DGL.\n\n### Awesome Papers Using DGL\n\n1. [**Benchmarking Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.00982.pdf), *Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson*\n\n1. [**Open Graph Benchmark: Datasets for Machine Learning on Graphs**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.00687.pdf), NeurIPS'20, *Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec*\n\n1. 
[**DropEdge: Towards Deep Graph Convolutional Networks on Node Classification**](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hkx1qkrKPr), ICLR'20, *Yu Rong, Wenbing Huang, Tingyang Xu, Junzhou Huang*\n\n1. [**Discourse-Aware Neural Extractive Text Summarization**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.451\u002F), ACL'20, *Jiacheng Xu, Zhe Gan, Yu Cheng, Jingjing Liu*\n\n1. [**GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394486.3403168), KDD'20, *Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, Jie Tang*\n\n1. [**DGL-KE: Training Knowledge Graph Embeddings at Scale**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.08532), SIGIR'20, *Da Zheng, Xiang Song, Chao Ma, Zeyuan Tan, Zihao Ye, Jin Dong, Hao Xiong, Zheng Zhang, George Karypis*\n\n1. [**Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.09252.pdf), *Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, Michael M. Bronstein*\n\n1. [**INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.02924.pdf), *Yuhuai Wu, Albert Q. Jiang, Jimmy Ba, Roger Grosse*\n\n1. [**Finding Patient Zero: Learning Contagion Source with Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.11913.pdf), *Chintan Shah, Nima Dehmamy, Nicola Perra, Matteo Chinazzi, Albert-László Barabási, Alessandro Vespignani, Rose Yu*\n\n1. 
[**FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.11359.pdf), SC'20, *Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang*\n\n\n\u003Cdetails>\u003Csummary>more\u003C\u002Fsummary>\n\n11. [**BP-Transformer: Modelling Long-Range Context via Binary Partitioning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.04070.pdf), *Zihao Ye, Qipeng Guo, Quan Gan, Xipeng Qiu, Zheng Zhang*\n\n12. [**OptiMol: Optimization of Binding Affinities in Chemical Space for Drug Discovery**](https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fbiorxiv\u002Fearly\u002F2020\u002F06\u002F16\u002F2020.05.23.112201.full.pdf), *Jacques Boitreaud, Vincent Mallet, Carlos Oliver, Jérôme Waldispühl*\n\n1. [**JAKET: Joint Pre-training of Knowledge Graph and Language Understanding**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.00796.pdf), *Donghan Yu, Chenguang Zhu, Yiming Yang, Michael Zeng*\n\n1. [**Architectural Implications of Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.00804.pdf), *Zhihui Zhang, Jingwen Leng, Lingxiao Ma, Youshan Miao, Chao Li, Minyi Guo*\n\n1. [**Combining Reinforcement Learning and Constraint Programming for Combinatorial Optimization**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.01610.pdf), *Quentin Cappart, Thierry Moisan, Louis-Martin Rousseau, Isabeau Prémont-Schwarz, and Andre Cire*\n\n1. [**Therapeutics Data Commons: Machine Learning Datasets and Tasks for Therapeutics**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09548) ([code repo](https:\u002F\u002Fgithub.com\u002Fmims-harvard\u002FTDC)), *Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W. Coley, Cao Xiao, Jimeng Sun, Marinka Zitnik*\n\n1. [**Sparse Graph Attention Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.00552), *Yang Ye, Shihao Ji*\n\n1. 
[**On Self-Distilling Graph Neural Network**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.02255.pdf), *Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang*\n\n1. [**Learning Robust Node Representations on Graphs**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.11416.pdf), *Xu Chen, Ya Zhang, Ivor Tsang, and Yuangang Pan*\n\n1. [**Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.05530), *Woojeong Jin, Meng Qu, Xisen Jin, Xiang Ren*\n\n1. [**Graph Neural Ordinary Differential Equations**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07532), *Michael Poli, Stefano Massaroli, Junyoung Park, Atsushi Yamashita, Hajime Asama, Jinkyoo Park*\n\n1. [**FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.06391.pdf), *Md. Khaledur Rahman, Majedul Haque Sujon, Ariful Azad*\n\n1. [**An Efficient Neighborhood-based Interaction Model for Recommendation on Heterogeneous Graph**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00216.pdf), KDD'20, *Jiarui Jin, Jiarui Qin, Yuchen Fang, Kounianhua Du, Weinan Zhang, Yong Yu, Zheng Zhang, Alexander J. Smola*\n\n1. [**Learning Interaction Models of Structured Neighborhood on Heterogeneous Information Network**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.12683.pdf), *Jiarui Jin, Kounianhua Du, Weinan Zhang, Jiarui Qin, Yuchen Fang, Yong Yu, Zheng Zhang, Alexander J. Smola*\n\n1. [**Graphein - a Python Library for Geometric Deep Learning and Network Analysis on Protein Structures**](https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002F10.1101\u002F2020.07.15.204701v1), *Arian R. Jamasb, Pietro Lió, Tom L. Blundell*\n\n1. [**Graph Policy Gradients for Large Scale Robot Control**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.03822), *Arbaaz Khan, Ekaterina Tolstaya, Alejandro Ribeiro, Vijay Kumar*\n\n1. 
[**Heterogeneous Molecular Graph Neural Networks for Predicting Molecule Properties**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.12710), *Zeren Shui, George Karypis*\n\n1. [**Could Graph Neural Networks Learn Better Molecular Representation for Drug Discovery? A Comparison Study of Descriptor-based and Graph-based Models**](https:\u002F\u002Fassets.researchsquare.com\u002Ffiles\u002Frs-81439\u002Fv1_stamped.pdf), *Dejun Jiang, Zhenxing Wu, Chang-Yu Hsieh, Guangyong Chen, Ben Liao, Zhe Wang, Chao Shen, Dongsheng Cao, Jian Wu, Tingjun Hou*\n\n1. [**Principal Neighbourhood Aggregation for Graph Nets**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.05718), *Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, Petar Veličković*\n\n1. [**Collective Multi-type Entity Alignment Between Knowledge Graphs**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3366423.3380289), *Qi Zhu, Hao Wei, Bunyamin Sisman, Da Zheng, Christos Faloutsos, Xin Luna Dong, Jiawei Han*\n\n1. [**Graph Representation Forecasting of Patient's Medical Conditions: towards A Digital Twin**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08299), *Pietro Barbiero, Ramon Viñas Torné, Pietro Lió*\n\n1. [**Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.01619), *Yong-Hao Long, Jie-Ying Wu, Bo Lu, Yue-Ming Jin, Mathias Unberath, Yun-Hui Liu, Pheng-Ann Heng and Qi Dou*\n\n1. [**Dark Reciprocal-Rank: Boosting Graph-Convolutional Self-Localization Network via Teacher-to-student Knowledge Transfer**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.00402), *Takeda Koji, Tanaka Kanji*\n\n1. [**Graph InfoClust: Leveraging Cluster-Level Node Information For Unsupervised Graph Representation Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.06946), *Costas Mavromatis, George Karypis*\n\n1. 
[**GraphSeam: Supervised Graph Learning Framework for Semantic UV Mapping**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.13748), *Fatemeh Teimury, Bruno Roy, Juan Sebastian Casallas, David macdonald, Mark Coates*\n\n1. [**Comprehensive Study on Molecular Supervised Learning with Graph Neural Networks**](https:\u002F\u002Fpubs.acs.org\u002Fdoi\u002F10.1021\u002Facs.jcim.0c00416), *Doyeong Hwang, Soojung Yang, Yongchan Kwon, Kyung Hoon Lee, Grace Lee, Hanseok Jo, Seyeol Yoon, and Seongok Ryu*\n\n1. [**A graph auto-encoder model for miRNA-disease associations prediction**](https:\u002F\u002Facademic.oup.com\u002Fbib\u002Fadvance-article-abstract\u002Fdoi\u002F10.1093\u002Fbib\u002Fbbaa240\u002F5929824?redirectedFrom=fulltext), *Zhengwei Li, Jiashu Li, Ru Nie, Zhu-Hong You, Wenzheng Bao*\n\n1. [**Graph convolutional regression of cardiac depolarization from sparse endocardial maps**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.14068), STACOM 2020 workshop, *Felix Meister, Tiziano Passerini, Chloé Audigier, Èric Lluch, Viorel Mihalef, Hiroshi Ashikaga, Andreas Maier, Henry Halperin, Tommaso Mansi*\n\n1. [**AttnIO: Knowledge Graph Exploration with In-and-Out Attention Flow for Knowledge-Grounded Dialogue**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.280\u002F), EMNLP'20, *Jaehun Jung, Bokyung Son, Sungwon Lyu*\n\n1. [**Learning from Non-Binary Constituency Trees via Tensor Decomposition**](https:\u002F\u002Fgithub.com\u002Fdanielecastellana22\u002Ftensor-tree-nn), COLING'20, *Daniele Castellana, Davide Bacciu*\n\n1. [**Inducing Alignment Structure with Gated Graph Attention Networks for Sentence Matching**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07668), *Peng Cui, Le Hu, Yuanchao Liu*\n\n1. [**Enhancing Extractive Text Summarization with Topic-Aware Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.06253), COLING'20, *Peng Cui, Le Hu, Yuanchao Liu*\n\n1. 
[**Double Graph Based Reasoning for Document-level Relation Extraction**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.13752), EMNLP'20, *Shuang Zeng, Runxin Xu, Baobao Chang, Lei Li*\n\n1. [**Systematic Generalization on gSCAN with Language Conditioned Embedding**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05552), AACL-IJCNLP'20, *Tong Gao, Qi Huang, Raymond J. Mooney*\n\n1. [**Automatic selection of clustering algorithms using supervised graph embedding**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.08225.pdf), *Noy Cohen-Shapira, Lior Rokach*\n\n1. [**Improving Learning to Branch via Reinforcement Learning**](https:\u002F\u002Fopenreview.net\u002Fforum?id=z4D7-PTxTb), *Haoran Sun, Wenbo Chen, Hui Li, Le Song*\n\n1. [**A Practical Guide to Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.05234.pdf), *Isaac Ronald Ward, Jack Joyner, Casey Lickfold, Stash Rowe, Yulan Guo, Mohammed Bennamoun*, [code](https:\u002F\u002Fgithub.com\u002Fisolabs\u002Fgnn-tutorial)\n\n1. [**APAN: Asynchronous Propagation Attention Network for Real-time Temporal Graph Embedding**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.11545.pdf), SIGMOD'21, *Xuhong Wang, Ding Lyu, Mengjian Li, Yang Xia, Qi Yang, Xinwen Wang, Xinguang Wang, Ping Cui, Yupu Yang, Bowen Sun, Zhenyu Guo, Junkui Li*\n\n1. [**Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.14455.pdf), *Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Andreas Spanias*\n\n1. [**Computing Graph Neural Networks: A Survey from Algorithms to Accelerators**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.00130.pdf), *Sergi Abadal, Akshay Jain, Robert Guirado, Jorge López-Alonso, Eduard Alarcón*\n\n1. 
[**NHK_STRL at WNUT-2020 Task 2: GATs with Syntactic Dependencies as Edges and CTC-based Loss for Text Classification**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.wnut-1.43.pdf), *Yuki Yasuda, Taichi Ishiwatari, Taro Miyazaki, Jun Goto*\n\n1. [**Relation-aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.597.pdf), *Taichi Ishiwatari, Yuki Yasuda, Taro Miyazaki, Jun Goto*\n\n1. [**PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks**](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F8fb134f258b1f7865a6ab2d935a897c9-Paper.pdf), *Minh N. Vu, My T. Thai*\n\n1. [**A Generalization of Transformer Networks to Graphs**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.09699.pdf), *Vijay Prakash Dwivedi, Xavier Bresson*\n\n1. [**Discourse-Aware Neural Extractive Text Summarization**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.451.pdf), ACL'20, *Jiacheng Xu, Zhe Gan, Yu Cheng, Jingjing Liu*\n\n1. [**Learning Robust Node Representations on Graphs**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.11416), *Xu Chen, Ya Zhang, Ivor Tsang, Yuangang Pan*\n\n1. [**Adaptive Graph Diffusion Networks with Hop-wise Attention**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.15024), *Chuxiong Sun, Guoshi Wu*\n\n1. [**The Photoswitch Dataset: A Molecular Machine Learning Benchmark for the Advancement of Synthetic Chemistry**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.03226), *Aditya R. Thawani, Ryan-Rhys Griffiths, Arian Jamasb, Anthony Bourached, Penelope Jones, William McCorkindale, Alexander A. Aldrick, Alpha A. Lee*\n\n1. [**A community-powered search of machine learning strategy space to find NMR property prediction models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05994), *Lars A. 
Bratholm, Will Gerrard, Brandon Anderson, Shaojie Bai, Sunghwan Choi, Lam Dang, Pavel Hanchar, Addison Howard, Guillaume Huard, Sanghoon Kim, Zico Kolter, Risi Kondor, Mordechai Kornbluth, Youhan Lee, Youngsoo Lee, Jonathan P. Mailoa, Thanh Tu Nguyen, Milos Popovic, Goran Rakocevic, Walter Reade, Wonho Song, Luka Stojanovic, Erik H. Thiede, Nebojsa Tijanic, Andres Torrubia, Devin Willmott, Craig P. Butts, David R. Glowacki, Kaggle participants*\n\n1. [**Adaptive Layout Decomposition with Graph Embedding Neural Networks**](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~byu\u002Fpapers\u002FC98-DAC2020-MPL-Selector.pdf), *Wei Li, Jialu Xia, Yuzhe Ma, Jialu Li, Yibo Lin, Bei Yu*, DAC'20\n\n1. [**Transfer Learning with Graph Neural Networks for Optoelectronic Properties of Conjugated Oligomers**](https:\u002F\u002Faip.scitation.org\u002Fdoi\u002F10.1063\u002F5.0037863), J. Chem. Phys. 154, *Chee-Kong Lee, Chengqiang Lu, Yue Yu, Qiming Sun, Chang-Yu Hsieh, Shengyu Zhang, Qi Liu, and  Liang Shi*\n\n1. [**Jet tagging in the Lund plane with graph networks**](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002FJHEP03(2021)052), Journal of High Energy Physics 2021, *Frédéric A. Dreyer and Huilin Qu*\n\n1. [**Global Attention Improves Graph Networks Generalization**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07846), *Omri Puny, Heli Ben-Hamu, and Yaron Lipman*\n\n1. [**Learning over Families of Sets -- Hypergraph Representation Learning for Higher Order Tasks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.07773), SDM 2021, *Balasubramaniam Srinivasan, Da Zheng, and George Karypis*\n\n1. [**SSFG: Stochastically Scaling Features and Gradients for Regularizing Graph Convolution Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10338), *Haimin Zhang, Min Xu*\n\n1. 
[**Application and evaluation of knowledge graph embeddings in biomedical data**](https:\u002F\u002Fpeerj.com\u002Farticles\u002Fcs-341\u002F), PeerJ Computer Science 7:e341, *Mona Alshahrani, Maha A. Thafar, Magbubah Essack*\n\n1. [**MoTSE: an interpretable task similarity estimator for small molecular property prediction tasks**](https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002F10.1101\u002F2021.01.13.426608v2), bioRxiv 2021.01.13.426608, *Han Li, Xinyi Zhao, Shuya Li, Fangping Wan, Dan Zhao, Jianyang Zeng*\n\n1. [**Reinforcement Learning For Data Poisoning on Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.06800), *Jacob Dineen, A S M Ahsan-Ul Haque, Matthew Bielskas*\n\n1. [**Generalising Recursive Neural Models by Tensor Decomposition**](https:\u002F\u002Fgithub.com\u002Fdanielecastellana22\u002Ftensor-tree-nn), IJCNN'20, *Daniele Castellana, Davide Bacciu*\n\n1. [**Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data**](https:\u002F\u002Fgithub.com\u002Fdanielecastellana22\u002Ftensor-tree-nn), ESANN'20, *Daniele Castellana, Davide Bacciu*\n\n1. [**Combining Self-Organizing and Graph Neural Networks for Modeling Deformable Objects in Robotic Manipulation**](https:\u002F\u002Fwww.ncbi.nlm.nih.gov\u002Fpmc\u002Farticles\u002FPMC7806087\u002F), Frontiers in Robotics and AI, *Valencia, Angel J., and Pierre Payeur*\n\n1. [**Joint stroke classification and text line grouping in online handwritten documents with edge pooling attention networks**](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0031320321000467), Pattern Recognition, *Jun-Yu Ye, Yan-Ming Zhang, Qing Yang, Cheng-Lin Liu*\n\n1. 
[**Toward Accurate Predictions of Atomic Properties via Quantum Mechanics Descriptors Augmented Graph Convolutional Neural Network: Application of This Novel Approach in NMR Chemical Shifts Predictions**](https:\u002F\u002Fpubs.acs.org\u002Fdoi\u002Ffull\u002F10.1021\u002Facs.jpclett.0c02654), The Journal of Physical Chemistry Letters, *Peng Gao, Jie Zhang, Yuzhu Sun, and Jianguo Yu*\n\n1. [**A Graph Neural Network to Model User Comfort in Robot Navigation**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.08863), *Pilar Bachiller, Daniel Rodriguez-Criado, Ronit R. Jorvekar, Pablo Bustos, Diego R. Faria, Luis J. Manso*\n\n1. [**Medical Entity Disambiguation Using Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01488), *Alina Vretinaris, Chuan Lei, Vasilis Efthymiou, Xiao Qin, Fatma Özcan*\n\n1. [**Chemistry-informed Macromolecule Graph Representation for Similarity Computation and Supervised Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.02565), *Somesh Mohapatra, Joyce An, Rafael Gómez-Bombarelli*\n\n1. [**Characterizing and Forecasting User Engagement with In-app Action Graph: A Case Study of Snapchat**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.00355.pdf), *Yozen Liu, Xiaolin Shi, Lucas Pierce, Xiang Ren*\n\n1. [**GIPA: General Information Propagation Algorithm for Graph Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.06035), *Qinkai Zheng, Houyi Li, Peng Zhang, Zhixiong Yang, Guowei Zhang, Xintan Zeng, Yongchao Liu*\n\n1. [**Graph Ensemble Learning over Multiple Dependency Trees for Aspect-level Sentiment Classification**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.11794), NAACL'21, *Xiaochen Hou, Peng Qi, Guangtao Wang, Rex Ying, Jing Huang, Xiaodong He, Bowen Zhou*\n\n1. [**Enhancing Scientific Papers Summarization with Citation Graph**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.03057), AAAI'21, *Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang*\n\n1. 
[**Improving Graph Representation Learning by Contrastive Regularization**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.11525.pdf), *Kaili Ma, Haochen Yang, Han Yang, Tatiana Jin, Pengfei Chen, Yongqiang Chen, Barakeel Fanseu Kamhoua, James Cheng*\n\n1. [**Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.02885.pdf), WWW'21, *Cheng Yang, Jiawei Liu, Chuan Shi*\n\n1. [**VIKING: Adversarial Attack on Network Embeddings via Supervised Network Poisoning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.07164.pdf), PAKDD'21, *Viresh Gupta, Tanmoy Chakraborty*\n\n1. [**Knowledge Graph Embedding using Graph Convolutional Networks with Relation-Aware Attention**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.07200.pdf), *Nasrullah Sheikh, Xiao Qin, Berthold Reinwald, Christoph Miksovic, Thomas Gschwind, Paolo Scotton*\n\n1. [**SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.05034.pdf), *Bahare Fatemi, Layla El Asri, Seyed Mehran Kazemi*\n\n1. [**Finding Needles in Heterogeneous Haystacks**](https:\u002F\u002Fhomepage.divms.uiowa.edu\u002F~badhikari\u002Fassets\u002Fdoc\u002Fpapers\u002FCONGCNIAAI2021.pdf), AAAI'21, *Bijaya Adhikari, Liangyue Li, Nikhil Rao, Karthik Subbian*\n\n1. [**RetCL: A Selection-based Approach for Retrosynthesis via Contrastive Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.00795), IJCAI 2021, *Hankook Lee, Sungsoo Ahn, Seung-Woo Seo, You Young Song, Eunho Yang, Sung-Ju Hwang, Jinwoo Shin*\n\n1. [**Accurate Prediction of Free Solvation Energy of Organic Molecules via Graph Attention Network and Message Passing Neural Network from Pairwise Atomistic Interactions**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.02048), *Ramin Ansari, Amirata Ghorbani*\n\n1. 
[**DIPS-Plus: The Enhanced Database of Interacting Protein Structures for Interface Prediction**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04362), *Alex Morehead, Chen Chen, Ada Sedova, Jianlin Cheng*\n\n1. [**Coreference-Aware Dialogue Summarization**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.08556), SIGDIAL'21, *Zhengyuan Liu, Ke Shi, Nancy F. Chen*\n\n1. [**Document Structure aware Relational Graph Convolutional Networks for Ontology Population**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12950), arXiv, *Abhay M Shalghar, Ayush Kumar, Balaji Ganesan, Aswin Kannan, Shobha G*\n\n1. [**Covid-19 Detection from Chest X-ray and Patient Metadata using Graph Convolutional Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.09720), *Thosini Bamunu Mudiyanselage, Nipuna Senanayake, Chunyan Ji, Yi Pan, Yanqing Zhang*\n\n1. [**Rossmann-toolbox: a deep learning-based protocol for the prediction and design of cofactor specificity in Rossmann fold proteins**](https:\u002F\u002Facademic.oup.com\u002Fbib\u002Fadvance-article\u002Fdoi\u002F10.1093\u002Fbib\u002Fbbab371\u002F6375059), Briefings in Bioinformatics, *Kamil Kaminski, Jan Ludwiczak, Maciej Jasinski, Adriana Bukala, Rafal Madaj, Krzysztof Szczepaniak, Stanislaw Dunin-Horkawicz*\n\n1. [**LGESQL: Line Graph Enhanced Text-to-SQL Model with Mixed Local and Non-Local Relations**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.01093.pdf), ACL'21, *Ruisheng Cao, Lu Chen, Zhi Chen, Yanbin Zhao, Su Zhu, Kai Yu*\n\n1. [**Enhancing Graph Neural Networks via auxiliary training for semi-supervised node classification**](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0950705121001477), Knowledge-Based System'21, *Yao Wu, Yu Song, Hong Huang, Fanghua Ye, Xing Xie, Hai Jin*\n\n1. [**Modeling Graph Node Correlations with Neighbor Mixture Models**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.15966.pdf), *Linfeng Liu, Michael C. Hughes, Li-Ping Liu*\n\n1. 
[**Combining Physics and Machine Learning for Network Flow Estimation**](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F9dc2744a465941220de07cf308acf822ec8aaa64.pdf), ICLR'21, *Arlei Silva, Furkan Kocayusufoglu, Saber Jafarpour, Francesco Bullo, Ananthram Swami, Ambuj Singh*\n\n1. [**A Classification Method for Academic Resources Based on a Graph Attention Network**](https:\u002F\u002Fwww.mdpi.com\u002F1999-5903\u002F13\u002F3\u002F64\u002Fhtm), Future Internet'21, *Jie Yu, Yaliu Li, Chenle Pan and Junwei Wang*\n\n1. [**Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03330), *Seung Won Min, Kun Wu, Sitao Huang, Mert Hidayetoğlu, Jinjun Xiong, Eiman Ebrahimi, Deming Chen, Wen-mei Hwu*\n\n1. [**Graph Attention Multi-Layer Perceptron**](https:\u002F\u002Fgithub.com\u002FPKU-DAIR\u002FGAMLP\u002Fblob\u002Fmain\u002FGAMLP.pdf), *Wentao Zhang, Ziqi Yin, Zeang Sheng, Wen Ouyang, Xiaosen Li, Yangyu Tao, Zhi Yang, Bin Cui*\n\n1. [**GNNLens: A Visual Analytics Approach for Prediction Error Diagnosis of Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.11048v5), *Zhihua Jin, Yong Wang, Qianwen Wang, Yao Ming, Tengfei Ma, Huamin Qu*\n\n1. [**How Attentive are Graph Attention Networks?**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14491.pdf), *Shaked Brody, Uri Alon, Eran Yahav*, [code](https:\u002F\u002Fgithub.com\u002Ftech-srl\u002Fhow_attentive_are_gats)\n\n1. 
[**SCENE: Reasoning about Traffic Scenes using Heterogeneous Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.03512.pdf), *Thomas Monninger\\*, Julian Schmidt\\*, Jan Rupprecht, David Raba, Julian Jordan, Daniel Frank, Steffen Staab, Klaus Dietmayer*, [code](https:\u002F\u002Fgithub.com\u002Fschmidt-ju\u002Fscene), \\*co-first authors\n\n\u003C\u002Fdetails>\n\n## Contributing\n\nPlease let us know if you encounter a bug or have any suggestions by [filing an issue](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fissues).\n\nWe welcome all contributions from bug fixes to new features and extensions.\n\nWe expect all contributions discussed in the issue tracker and going through PRs.  Please refer to our [contribution guide](https:\u002F\u002Fdocs.dgl.ai\u002Fcontribute.html).\n\n## Cite\n\nIf you use DGL in a scientific publication, we would appreciate citations to the following paper:\n```\n@article{wang2019dgl,\n    title={Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks},\n    author={Minjie Wang and Da Zheng and Zihao Ye and Quan Gan and Mufei Li and Xiang Song and Jinjing Zhou and Chao Ma and Lingfan Yu and Yu Gai and Tianjun Xiao and Tong He and George Karypis and Jinyang Li and Zheng Zhang},\n    year={2019},\n    journal={arXiv preprint arXiv:1909.01315}\n}\n```\n\n## The Team\n\nDGL is developed and maintained by [NYU, NYU Shanghai, AWS Shanghai AI Lab, and AWS MXNet Science Team](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fabout.html).\n\n## License\n\nDGL uses Apache License 2.0.\n","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdmlc_dgl_readme_a155b5be4c57.jpg\" height=\"200\">\n\u003C\u002Fp>\n\n[![最新版本](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fdmlc\u002Fdgl)](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Freleases)\n[![Conda 
最新版本](https:\u002F\u002Fanaconda.org\u002Fdglteam\u002Fdgl\u002Fbadges\u002Fversion.svg)](https:\u002F\u002Fanaconda.org\u002Fdglteam\u002Fdgl)\n[![构建状态](https:\u002F\u002Fci.dgl.ai\u002FbuildStatus\u002Ficon?job=DGL\u002Fmaster)](https:\u002F\u002Fci.dgl.ai\u002Fjob\u002FDGL\u002Fjob\u002Fmaster\u002F)\n[![ASV 基准测试](http:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fbenchmarked%20by-asv-green.svg?style=flat)](https:\u002F\u002Fasv.dgl.ai\u002F)\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg)](.\u002FLICENSE)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002FDGLGraph?style=social)](https:\u002F\u002Ftwitter.com\u002FGraphDeep)\n\n[官网](https:\u002F\u002Fwww.dgl.ai) | [DGL 简明入门](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fblitz\u002Findex.html) | 文档（[最新](https:\u002F\u002Fwww.dgl.ai\u002Fdgl_docs\u002F)）| [官方示例](examples\u002FREADME.md) | [讨论论坛](https:\u002F\u002Fdiscuss.dgl.ai) | [Slack 频道](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fdeep-graph-library\u002Fshared_invite\u002Fzt-eb4ict1g-xcg3PhZAFAB8p6dtKuP6xQ)\n\nDGL 是一个易于使用、高性能且可扩展的 Python 库，专用于图上的深度学习。DGL 不依赖于特定框架，这意味着如果深度图模型是端到端应用的一部分，其余逻辑可以使用任何主流框架实现，例如 PyTorch、Apache MXNet 或 TensorFlow。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdmlc_dgl_readme_18e7d7287542.png\" alt=\"DGL v0.4 架构\" width=\"600\">\n  \u003Cbr>\n  \u003Cb>图\u003C\u002Fb>：DGL 整体架构\n\u003C\u002Fp>\n\n## 亮点功能\n\n### 适用于 GPU 的图库\n\nDGL 提供了一个功能强大的图对象，既可以驻留在 CPU 上，也可以驻留在 GPU 上。它将结构数据和特征封装在一起，便于更好地控制。我们为图对象的计算提供了多种函数，包括高效且可定制的消息传递原语，适用于图神经网络。\n\n### GNN 研究者与从业者的多功能工具\n\n图深度学习领域仍在快速发展，许多研究思路都是站在前人的肩膀上提出的。为了简化这一过程，[DGL-Go](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fdglgo) 是一个命令行界面，帮助用户快速开始训练、使用和研究最先进的 GNN 模型。\nDGL 收集了大量热门 GNN 
模型的[示例实现](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples)，涵盖广泛的主题。研究人员可以通过[搜索](https:\u002F\u002Fwww.dgl.ai\u002F)找到相关模型，从中获得创新灵感，或将其用作实验基准。此外，DGL 还提供了许多最先进的[GNN 层和模块](https:\u002F\u002Fdocs.dgl.ai\u002Fapi\u002Fpython\u002Fnn.html)，供用户构建新的模型架构。DGL 是许多标准图深度学习基准测试的首选平台之一，包括 [OGB](https:\u002F\u002Fogb.stanford.edu\u002F) 和 [GNNBenchmarks](https:\u002F\u002Fgithub.com\u002Fgraphdeeplearning\u002Fbenchmarking-gnns)。\n\n### 易学易用\n\nDGL 为各类用户提供丰富的学习资料，从机器学习研究人员到领域专家。[DGL 简明入门](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fblitz\u002Findex.html)是一次为期 120 分钟的图机器学习基础之旅。[用户指南](https:\u002F\u002Fdocs.dgl.ai\u002Fguide\u002Findex.html)则更详细地解释了图的概念以及训练方法。所有这些内容都包含可在 DGL 中运行的代码片段，可以直接集成到用户的流程中。\n\n### 可扩展且高效\n\n使用 DGL 在大规模图上进行训练时，可以在**多块 GPU**或**多台机器**之间轻松实现分布式训练。DGL 对整个栈进行了全面优化，以减少通信、内存消耗和同步方面的开销。因此，DGL 能够轻松扩展到数十亿节点的图。请参阅[教程](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002Ftutorials\u002Fdist\u002Findex.html)和[用户指南](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002Flatest\u002Fguide\u002Fdistributed.html)了解分布式训练的相关内容。有关与其他工具的比较，请参阅[系统性能说明](https:\u002F\u002Fdocs.dgl.ai\u002Fperformance.html)。\n\n## 开始使用\n\n用户可以通过 [pip 和 conda](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fstart.html) 安装 DGL。您还可以从 NVIDIA NGC 下载支持 GPU 的 DGL Docker [容器](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fcontainers\u002Fdgl)（基于 PyTorch），适用于 x86 和 ARM 架构的 Linux 系统。高级用户可以按照[安装说明](https:\u002F\u002Fdocs.dgl.ai\u002Finstall\u002Findex.html#install-from-source)从源码安装。\n\n对于完全的新手，可以从[简明入门](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fblitz\u002Findex.html)开始。它涵盖了常见图机器学习任务的基本概念，并逐步讲解如何构建图神经网络 (GNN) 来解决这些问题。\n\n对于已经有一定基础的用户，若想进一步学习：\n\n* 使用 [DGL-Go](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fdglgo)，只需两行命令即可体验最先进的 GNN 模型。\n* 通过 [DGL 示例实现](https:\u002F\u002Fwww.dgl.ai\u002F)学习流行的 GNN 模型。\n* 
阅读[用户指南](https:\u002F\u002Fdocs.dgl.ai\u002Fguide\u002Findex.html)（[中文版链接](https:\u002F\u002Fdocs.dgl.ai\u002Fguide_cn\u002Findex.html)），其中更详细地介绍了 DGL 的概念和用法。\n* 学习高级功能的教程，例如[GNN 的随机训练](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Flarge\u002Findex.html)、在[多 GPU](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fmulti\u002Findex.html)或[多机器](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fdist\u002Findex.html)上进行训练。\n* 结合 DGL 阅读[经典论文](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fmodels\u002Findex.html)，深入了解图机器学习。\n* 在[API 参考手册](https:\u002F\u002Fdocs.dgl.ai\u002Fapi\u002Fpython\u002Findex.html)中查找特定 API 的用法，该手册按命名空间组织了所有 DGL API。\n\n所有学习资料均可在我们的[文档网站](https:\u002F\u002Fdocs.dgl.ai\u002F)上找到。如果您对深度学习尚不熟悉，不妨阅读开源书籍[动手学深度学习](https:\u002F\u002Fd2l.ai\u002F)。\n\n\n## 社区\n\n### 加入社区\n\n我们提供了多种渠道，帮助您与 DGL 开发者、用户以及广大的 GNN 学术研究者建立联系：\n\n* 我们的 Slack 频道，[点击加入](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fdeep-graph-library\u002Fshared_invite\u002Fzt-eb4ict1g-xcg3PhZAFAB8p6dtKuP6xQ)\n* 我们的讨论论坛：https:\u002F\u002Fdiscuss.dgl.ai\u002F\n* 我们的[Zhihu 博客（中文）](https:\u002F\u002Fwww.zhihu.com\u002Fcolumn\u002Fc_1070749881013936128)\n* 每月一次的 GNN 用户组线上研讨会（[活动链接](https:\u002F\u002Fwww.eventbrite.com\u002Fe\u002Fgraph-neural-networks-user-group-tickets-137512275919?utm-medium=discovery&utm-campaign=social&utm-content=attendeeshare&aff=escb&utm-source=cp&utm-term=listing) | [往期视频](https:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCnmuSDY1pTlaFH1WRQElfTg)）\n\n请填写[调查问卷](https:\u002F\u002Fforms.gle\u002FEj3jHCocACmb49Gp8)，留下您的反馈，帮助我们使 DGL 更好地满足您的需求。感谢！\n\n### DGL 驱动的项目\n\n* DGL-LifeSci：基于 DGL 的软件包，用于生命科学领域中图神经网络的各种应用。https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdgl-lifesci\n* DGL-KE：一个高性能、易用且可扩展的软件包，用于学习大规模知识图谱嵌入。https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdgl-ke\n* 图神经网络基准测试：https:\u002F\u002Fgithub.com\u002Fgraphdeeplearning\u002Fbenchmarking-gnns\n* OGB：一组真实、大规模且多样化的图机器学习基准数据集。https:\u002F\u002Fogb.stanford.edu\u002F\n* 
Graph4NLP：一个易于使用的库，用于图深度学习与自然语言处理交叉领域的研发。https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\n* GNN-RecSys：https:\u002F\u002Fgithub.com\u002Fje-dbl\u002FGNN-RecSys\n* Amazon Neptune ML：Neptune 的一项新功能，利用图神经网络（GNN）——一种专为图结构设计的机器学习技术——通过图数据实现简单、快速且更准确的预测。https:\u002F\u002Faws.amazon.com\u002Fcn\u002Fneptune\u002Fmachine-learning\u002F\n* GNNLens2：图神经网络的可视化工具。https:\u002F\u002Fgithub.com\u002Fdmlc\u002FGNNLens2\n* RNAGlib：一个用于 RNA 2.5D 图构建、分析、可视化及机器学习的软件包。包含预构建的数据集：https:\u002F\u002Frnaglib.cs.mcgill.ca\n* OpenHGNN：异构图神经网络的模型库和基准测试。https:\u002F\u002Fgithub.com\u002FBUPT-GAMMA\u002FOpenHGNN\n* TGL：面向大规模时序图的图学习框架。https:\u002F\u002Fgithub.com\u002Famazon-research\u002Ftgl\n* gtrick：图神经网络技巧集。https:\u002F\u002Fgithub.com\u002Fsangyx\u002Fgtrick\n* ArangoDB-DGL 适配器：将 [ArangoDB](https:\u002F\u002Fgithub.com\u002Farangodb\u002Farangodb) 图导入 DGL，反之亦然。https:\u002F\u002Fgithub.com\u002Farangoml\u002Fdgl-adapter\n* DGLD：[DGLD](https:\u002F\u002Fgithub.com\u002FEagleLab-ZJU\u002FDGLD) 是一个基于 PyTorch 和 DGL 的开源深度图异常检测库。\n### 使用 DGL 的优秀论文\n\n1. [**图神经网络基准测试**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.00982.pdf)，*Vijay Prakash Dwivedi、Chaitanya K. Joshi、Thomas Laurent、Yoshua Bengio、Xavier Bresson*\n\n1. [**开放图基准：图机器学习数据集**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.00687.pdf)，NeurIPS'20，*Weihua Hu、Matthias Fey、Marinka Zitnik、Yuxiao Dong、Hongyu Ren、Bowen Liu、Michele Catasta、Jure Leskovec*\n\n1. [**DropEdge：迈向节点分类任务中的深层图卷积网络**](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hkx1qkrKPr)，ICLR'20，*Yu Rong、Wenbing Huang、Tingyang Xu、Junzhou Huan*\n\n1. [**话语感知的神经提取式文本摘要**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.451\u002F)，ACL'20，*Jiacheng Xu、Zhe Gan、Yu Cheng、Jingjing Liu*\n\n1. 
[**GCC：用于图神经网络预训练的图对比编码**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394486.3403168?casa_token=EClsH2Vc4DcAAAAA:LIB8cbtr6yTDbYuv4cTLwTIYeDq5Y2dhj_ktcWdKpzdPLGeiuL0o8GlcN4QIOnpsAnmGeGVZ)，KDD'20，*Jiezhong Qiu、Qibin Chen、Yuxiao Dong、Jing Zhang、Hongxia Yang、Ming Ding、Kuansan Wang、Jie Tang*\n\n1. [**DGL-KE：大规模知识图谱嵌入的训练**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.08532)，SIGIR'20，*Da Zheng、Xiang Song、Chao Ma、Zeyuan Tan、Zihao Ye、Jin Dong、Hao Xiong、Zheng Zhang、George Karypis*\n\n1. [**通过子图同构计数提升图神经网络表达能力**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.09252.pdf)，*Giorgos Bouritsas、Fabrizio Frasca、Stefanos Zafeiriou、Michael M. Bronstein*\n\n1. [**INT：用于评估定理证明中泛化能力的不等式基准**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.02924.pdf)，*Yuhuai Wu、Albert Q. Jiang、Jimmy Ba、Roger Grosse*\n\n1. [**寻找零号病人：利用图神经网络学习传染源**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.11913.pdf)，*Chintan Shah、Nima Dehmamy、Nicola Perra、Matteo Chinazzi、Albert-László Barabási、Alessandro Vespignani、Rose Yu*\n\n1. [**FeatGraph：图神经网络系统的灵活高效后端**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.11359.pdf)，SC'20，*Yuwei Hu、Zihao Ye、Minjie Wang、Jiali Yu、Da Zheng、Mu Li、Zheng Zhang、Zhiru Zhang、Yida Wang*\n\n\n\u003Cdetails>\u003Csummary>更多\u003C\u002Fsummary>\n\n11. [**BP-Transformer：通过二元划分建模长距离上下文**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.04070.pdf)，*Zihao Ye、Qipeng Guo、Quan Gan、Xipeng Qiu、Zheng Zhang*\n\n12. [**OptiMol：在化学空间中优化结合亲和力以用于药物发现**](https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fbiorxiv\u002Fearly\u002F2020\u002F06\u002F16\u002F2020.05.23.112201.full.pdf)，*Jacques Boitreaud、Vincent Mallet、Carlos Oliver、Jérôme Waldispühl*\n\n1. [**JAKET：知识图谱与语言理解的联合预训练**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.00796.pdf)，*Donghan Yu、Chenguang Zhu、Yiming Yang、Michael Zeng*\n\n1. [**图神经网络的架构启示**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.00804.pdf)，*Zhihui Zhang、Jingwen Leng、Lingxiao Ma、Youshan Miao、Chao Li、Minyi Guo*\n\n1. 
[**结合强化学习与约束规划进行组合优化**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.01610.pdf)，*Quentin Cappart、Thierry Moisan、Louis-Martin Rousseau1、Isabeau Prémont-Schwarz 和 Andre Cire*\n\n1. [**治疗学数据共同体：用于治疗学的机器学习数据集和任务**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09548)（[代码仓库](https:\u002F\u002Fgithub.com\u002Fmims-harvard\u002FTDC)），*Kexin Huang、Tianfan Fu、Wenhao Gao、Yue Zhao、Yusuf Roohani、Jure Leskovec、Connor W. Coley、Cao Xiao、Jimeng Sun、Marinka Zitnik*\n\n1. [**稀疏图注意力网络**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.00552)，*Yang Ye、Shihao Ji*\n\n1. [**关于自蒸馏图神经网络**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.02255.pdf)，*Yuzhao Chen、Yatao Bian、Xi Xiao、Yu Rong、Tingyang Xu、Junzhou Huang*\n\n1. [**学习图上的鲁棒节点表示**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.11416.pdf)，*Xu Chen、Ya Zhang、Ivor Tsang 和 Yuangang Pan*\n\n1. [**递归事件网络：时序知识图谱上的自回归结构推断**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.05530)，*Woojeong Jin、Meng Qu、Xisen Jin、Xiang Ren*\n\n1. [**图神经网络常微分方程**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07532)，*Michael Poli、Stefano Massaroli、Junyoung Park、Atsushi Yamashita、Hajime Asama、Jinkyoo Park*\n\n1. [**FusedMM：用于图嵌入和图神经网络的统一 SDDMM-SpMM 核心**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.06391.pdf)，*Md. Khaledur Rahman、Majedul Haque Sujon、Ariful Azad*\n\n1. [**一种高效的基于邻域的异构图推荐交互模型**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00216.pdf)，KDD'20 *Jiarui Jin、Jiarui Qin、Yuchen Fang、Kounianhua Du、Weinan Zhang、Yong Yu、Zheng Zhang、Alexander J. Smola*\n\n1. [**学习异构信息网络中结构化邻域的交互模型**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.12683.pdf)，*Jiarui Jin、Kounianhua Du、Weinan Zhang、Jiarui Qin、Yuchen Fang、Yong Yu、Zheng Zhang、Alexander J. Smola*\n\n1. [**Graphein——用于蛋白质结构的几何深度学习与网络分析的Python库**](https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002F10.1101\u002F2020.07.15.204701v1)，*Arian R. Jamasb、Pietro Lió、Tom L. Blundell*\n\n1. 
[**用于大规模机器人控制的图策略梯度**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.03822)，*Arbaaz Khan、Ekaterina Tolstaya、Alejandro Ribeiro、Vijay Kumar*\n\n1. [**用于预测分子性质的异质分子图神经网络**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.12710)，*Zeren Shui、George Karypis*\n\n1. [**图神经网络能否为药物发现学习到更好的分子表示？基于描述符与图模型的比较研究**](https:\u002F\u002Fassets.researchsquare.com\u002Ffiles\u002Frs-81439\u002Fv1_stamped.pdf)，*Dejun Jiang、Zhenxing Wu、Chang-Yu Hsieh、Guangyong Chen、Ben Liao、Zhe Wang、Chao Shen、Dongsheng Cao、Jian Wu、Tingjun Hou*\n\n1. [**图网络中的主邻域聚合**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.05718)，*Gabriele Corso、Luca Cavalleri、Dominique Beaini、Pietro Lió、Petar Veličković*\n\n1. [**知识图谱之间的集体多类型实体对齐**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3366423.3380289)，*Qi Zhu、Hao Wei、Bunyamin Sisman、Da Zheng、Christos Faloutsos、Xin Luna Dong、Jiawei Han*\n\n1. [**患者医疗状况的图表示预测：迈向数字孪生**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08299)，*Pietro Barbiero、Ramon Viñas Torné、Pietro Lió*\n\n1. [**基于视觉与运动学嵌入的关系图学习，用于机器人手术中精确的手势识别**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.01619)，*Yong-Hao Long、Jie-Ying Wu、Bo Lu、Yue-Ming Jin、Mathias Unberath、Yun-Hui Liu、Pheng-Ann Heng和Qi Dou*\n\n1. [**暗互惠排名：通过师生知识迁移提升图卷积自定位网络性能**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.00402)，*Takeda Koji、Tanaka Kanji*\n\n1. [**Graph InfoClust：利用节点聚类级别信息进行无监督图表示学习**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.06946)，*Costas Mavromatis、George Karypis*\n\n1. [**GraphSeam：用于语义UV映射的监督图学习框架**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.13748)，*Fatemeh Teimury、Bruno Roy、Juan Sebastian Casallas、David macdonald、Mark Coates*\n\n1. [**使用图神经网络进行分子监督学习的综合研究**](https:\u002F\u002Fpubs.acs.org\u002Fdoi\u002F10.1021\u002Facs.jcim.0c00416)，*Doyeong Hwang、Soojung Yang、Yongchan Kwon、Kyung Hoon Lee、Grace Lee、Hanseok Jo、Seyeol Yoon和Seongok Ryu*\n\n1. 
[**用于预测miRNA-疾病关联的图自编码器模型**](https:\u002F\u002Facademic.oup.com\u002Fbib\u002Fadvance-article-abstract\u002Fdoi\u002F10.1093\u002Fbib\u002Fbbaa240\u002F5929824?redirectedFrom=fulltext)，*Zhengwei Li、Jiashu Li、Ru Nie、Zhu-Hong You、Wenzheng Bao*\n\n1. [**从稀疏心内膜地图进行心脏去极化的图卷积回归**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.14068)，STACOM 2020研讨会，*Felix Meister、Tiziano Passerini、Chloé Audigier、Èric Lluch、Viorel Mihalef、Hiroshi Ashikaga、Andreas Maier、Henry Halperin、Tommaso Mansi*\n\n1. [**AttnIO：基于内外注意力流的知识图谱探索，用于知识驱动的对话系统**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.280\u002F)，EMNLP'20，*Jaehun Jung、Bokyung Son、Sungwon Lyu*\n\n1. [**通过张量分解从非二元句法树中学习**](https:\u002F\u002Fgithub.com\u002Fdanielecastellana22\u002Ftensor-tree-nn)，COLING'20，*Daniele Castellana、Davide Bacciu*\n\n1. [**利用门控图注意力网络诱导对齐结构进行句子匹配**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07668)，*Peng Cui、Le Hu、Yuanchao Liu*\n\n1. [**利用主题感知图神经网络增强抽取式文本摘要**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.06253)，COLING'20，*Peng Cui、Le Hu、Yuanchao Liu*\n\n1. [**基于双图推理的文档级关系抽取**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.13752)，EMNLP'20，*Shuang Zeng、Runxin Xu、Baobao Chang、Lei Li*\n\n1. [**基于语言条件嵌入的gSCAN系统性泛化**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05552)，AACL-IJCNLP'20，*Tong Gao、Qi Huang、Raymond J. Mooney*\n\n1. [**利用监督图嵌入自动选择聚类算法**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.08225.pdf)，*Noy Cohen-Shapira、Lior Rokach*\n\n1. [**通过强化学习改进分支学习**](https:\u002F\u002Fopenreview.net\u002Fforum?id=z4D7-PTxTb)，*Haoran Sun、Wenbo Chen、Hui Li、Le Song*\n\n1. [**图神经网络实用指南**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.05234.pdf)，*Isaac Ronald Ward、Jack Joyner、Casey Lickfold、Stash Rowe、Yulan Guo、Mohammed Bennamoun*，[代码链接](https:\u002F\u002Fgithub.com\u002Fisolabs\u002Fgnn-tutorial)\n\n1. 
[**APAN：用于实时时序图嵌入的异步传播注意力网络**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.11545.pdf)，SIGMOD'21，*Xuhong Wang、Ding Lyu、Mengjian Li、Yang Xia、Qi Yang、Xinwen Wang、Xinguang Wang、Ping Cui、Yupu Yang、Bowen Sun、Zhenyu Guo、Junkui Li*\n\n1. [**不确定性匹配图神经网络以防御中毒攻击**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.14455.pdf)，*Uday Shankar Shanthamallu、Jayaraman J. Thiagarajan、Andreas Spanias*\n\n1. [**计算图神经网络：从算法到加速器的综述**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.00130.pdf)，*Sergi Abadal、Akshay Jain、Robert Guirado、Jorge López-Alonso、Eduard Alarcón*\n\n1. [**NHK_STRL在WNUT-2020任务2中的表现：以句法依存关系为边的GATs及基于CTC的损失函数用于文本分类**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.wnut-1.43.pdf)，*Yuki Yasuda、Taichi Ishiwatari、Taro Miyazaki、Jun Goto*\n\n1. [**具有关系位置编码的关系感知图注意力网络，用于对话中的情感识别**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.emnlp-main.597.pdf)，*Taichi Ishiwatari、Yuki Yasuda、Taro Miyazaki、Jun Goto*\n\n1. [**PGM-Explainer：图神经网络的概率图模型解释方法**](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F8fb134f258b1f7865a6ab2d935a897c9-Paper.pdf)，*Minh N. Vu、My T. Thai*\n\n1. [**Transformer网络向图的推广**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.09699.pdf)，*Vijay Prakash Dwivedi、Xavier Bresson*\n\n1. [**话语感知的神经抽取式文本摘要**](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.acl-main.451.pdf)，ACL'20，*Jiacheng Xu、Zhe Gan、Yu Cheng、Jingjing Liu*\n\n1. [**在图上学习鲁棒的节点表示**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.11416)，*Xu Chen、Ya Zhang、Ivor Tsang、Yuangang Pan*\n\n1. [**具有跳数级注意力的自适应图扩散网络**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.15024)，*Chuxiong Sun、Guoshi Wu*\n\n1. [**光开关数据集：推动合成化学发展的分子机器学习基准**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.03226)，*Aditya R. Thawani、Ryan-Rhys Griffiths、Arian Jamasb、Anthony Bourached、Penelope Jones、William McCorkindale、Alexander A. Aldrick、Alpha A. Lee*\n\n1. [**基于社区驱动的机器学习策略空间搜索，用于发现核磁共振性质预测模型**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05994)，*Lars A. 
Bratholm、Will Gerrard、Brandon Anderson、Shaojie Bai、Sunghwan Choi、Lam Dang、Pavel Hanchar、Addison Howard、Guillaume Huard、Sanghoon Kim、Zico Kolter、Risi Kondor、Mordechai Kornbluth、Youhan Lee、Youngsoo Lee、Jonathan P. Mailoa、Thanh Tu Nguyen、Milos Popovic、Goran Rakocevic、Walter Reade、Wonho Song、Luka Stojanovic、Erik H. Thiede、Nebojsa Tijanic、Andres Torrubia、Devin Willmott、Craig P. Butts、David R. Glowacki、Kaggle参赛者*\n\n1. [**基于图嵌入神经网络的自适应版图分解**](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~byu\u002Fpapers\u002FC98-DAC2020-MPL-Selector.pdf)，*Wei Li、Jialu Xia、Yuzhe Ma、Jialu Li、Yibo Lin、Bei Yu*，DAC'20\n\n1. [**利用图神经网络进行共轭低聚物光电性质的迁移学习**](https:\u002F\u002Faip.scitation.org\u002Fdoi\u002F10.1063\u002F5.0037863)，《物理化学杂志》第154卷，*Chee-Kong Lee、Chengqiang Lu、Yue Yu、Qiming Sun、Chang-Yu Hsieh、Shengyu Zhang、Qi Liu以及Liang Shi*\n\n1. [**利用图网络在伦德平面上进行喷注标记**](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002FJHEP03(2021)052)，《高能物理杂志》2021年，*Frédéric A. Dreyer和Huilin Qu*\n\n1. [**全局注意力机制提升图网络的泛化能力**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07846)，*Omri Puny、Heli Ben-Hamu和Yaron Lipman*\n\n1. [**集合族上的学习——面向高阶任务的超图表示学习**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.07773)，SDM 2021，*Balasubramaniam Srinivasan、Da Zheng和George Karypis*\n\n1. [**SSFG：用于正则化图卷积网络的随机缩放特征与梯度方法**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10338)，*Haimin Zhang、Min Xu*\n\n1. [**知识图谱嵌入在生物医学数据中的应用与评估**](https:\u002F\u002Fpeerj.com\u002Farticles\u002Fcs-341\u002F)，PeerJ计算机科学第7卷，e341，*Mona Alshahrani、Maha A. Thafar、Magbubah Essack*\n\n1. [**MoTSE：一种可解释的小分子性质预测任务相似性估计器**](https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002F10.1101\u002F2021.01.13.426608v2)，bioRxiv 2021.01.13.426608，*Han Li、Xinyi Zhao、Shuya Li、Fangping Wan、Dan Zhao、Jianyang Zeng*\n\n1. [**针对图神经网络的数据投毒强化学习**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.06800)，*Jacob Dineen、A S M Ahsan-Ul Haque、Matthew Bielskas*\n\n1. 
[**通过张量分解泛化递归神经网络模型**](https:\u002F\u002Fgithub.com\u002Fdanielecastellana22\u002Ftensor-tree-nn)，IJCNN'20，*Daniele Castellana、Davide Bacciu*\n\n1. [**树结构数据中递归神经网络的张量分解**](https:\u002F\u002Fgithub.com\u002Fdanielecastellana22\u002Ftensor-tree-nn)，ESANN'20，*Daniele Castellana、Davide Bacciu*\n\n1. [**结合自组织网络与图神经网络，用于机器人操作中可变形物体建模**](https:\u002F\u002Fwww.ncbi.nlm.nih.gov\u002Fpmc\u002Farticles\u002FPMC7806087\u002F)，《机器人与人工智能前沿》，*Valencia, Angel J.和Pierre Payeur*\n\n1. [**在线手写文档中结合边缘池化注意力网络进行笔画分类与文本行分组**](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0031320321000467)，《模式识别》，*Jun-Yu Ye、Yan-Ming Zhang、Qing Yang、Cheng-Lin Liu*\n\n1. [**通过量子力学描述符增强的图卷积神经网络实现原子性质的精准预测：该新方法在核磁共振化学位移预测中的应用**](https:\u002F\u002Fpubs.acs.org\u002Fdoi\u002Ffull\u002F10.1021\u002Facs.jpclett.0c02654)，《物理化学快报》，*Peng Gao、Jie Zhang、Yuzhu Sun和Jianguo Yu*\n\n1. [**用于建模用户在机器人导航中舒适度的图神经网络**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.08863)，*Pilar Bachiller、Daniel Rodriguez-Criado、Ronit R. Jorvekar、Pablo Bustos、Diego R. Faria、Luis J. Manso*\n\n1. [**利用图神经网络进行医学实体消歧**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01488)，*Alina Vretinaris、Chuan Lei、Vasilis Efthymiou、Xiao Qin、Fatma Özcan*\n\n1. [**基于化学信息的大分子图表示，用于相似性计算与监督学习**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.02565)，*Somesh Mohapatra、Joyce An、Rafael Gómez-Bombarelli*\n\n1. [**利用应用内行为图刻画并预测用户参与度：以Snapchat为例**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.00355.pdf)，*Yozen Liu、Xiaolin Shi、Lucas Pierce、Xiang Ren*\n\n1. [**GIPA：用于图学习的通用信息传播算法**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.06035)，*Qinkai Zheng、Houyi Li、Peng Zhang、Zhixiong Yang、Guowei Zhang、Xintan Zeng、Yongchao Liu*\n\n1. [**基于多棵依存句法树的图集成学习，用于方面级情感分类**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.11794)，NAACL'21，*Xiaochen Hou、Peng Qi、Guangtao Wang、Rex Ying、Jing Huang、Xiaodong He、Bowen Zhou*\n\n1. 
[**利用引用图提升科技论文摘要生成效果**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.03057)，AAAI'21，*Chenxin An、Ming Zhong、Yiran Chen、Danqing Wang、Xipeng Qiu、Xuanjing Huang*\n\n1. [**通过对比正则化改进图表示学习**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.11525.pdf)，*Kaili Ma、Haochen Yang、Han Yang、Tatiana Jin、Pengfei Chen、Yongqiang Chen、Barakeel Fanseu Kamhoua、James Cheng*\n\n1. [**提取图神经网络的知识并超越它：一种有效的知识蒸馏框架**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.02885.pdf)，WWW'21，*Cheng Yang、Jiawei Liu、Chuan Shi*\n\n1. [**VIKING：通过监督式网络投毒对网络嵌入进行对抗攻击**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.07164.pdf)，PAKDD'21，*Viresh Gupta、Tanmoy Chakraborty*\n\n1. [**使用具有关系感知注意力的图卷积网络进行知识图谱嵌入**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.07200.pdf)，*Nasrullah Sheikh、Xiao Qin、Berthold Reinwald、Christoph Miksovic、Thomas Gschwind、Paolo Scotton*\n\n1. [**SLAPS：自监督学习提升图神经网络的结构学习**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.05034.pdf)，*Bahare Fatemi、Layla El Asri、Seyed Mehran Kazemi*\n\n1. [**在异质性干草堆中寻找针尖**](https:\u002F\u002Fhomepage.divms.uiowa.edu\u002F~badhikari\u002Fassets\u002Fdoc\u002Fpapers\u002FCONGCNIAAI2021.pdf)，AAAI'21，*Bijaya Adhikari、Liangyue Li、Nikhil Rao、Karthik Subbian*\n\n1. [**RetCL：基于选择的逆合成方法，利用对比学习实现**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.00795)，IJCAI 2021，*Hankook Lee、Sungsoo Ahn、Seung-Woo Seo、You Young Song、Eunho Yang、Sung-Ju Hwang、Jinwoo Shin*\n\n1. [**基于成对原子相互作用，利用图注意力网络和消息传递神经网络精确预测有机分子的自由溶剂化能**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.02048)，*Ramin Ansari、Amirata Ghorbani*\n\n1. [**DIPS-Plus：用于界面预测的增强型相互作用蛋白质结构数据库**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04362)，*Alex Morehead、Chen Chen、Ada Sedova、Jianlin Cheng*\n\n1. [**指代消解感知的对话摘要**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.08556)，SIGDIAL'21，*Zhengyuan Liu、Ke Shi、Nancy F. Chen*\n\n1. [**面向本体构建的文档结构感知关系图卷积网络**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12950)，arXiv，*Abhay M Shalghar、Ayush Kumar、Balaji Ganesan、Aswin Kannan、Shobha G*\n\n1. 
[**基于图卷积神经网络的胸部X光片与患者元数据新冠肺炎检测**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.09720)，*Thosini Bamunu Mudiyanselage、Nipuna Senanayake、Chunyan Ji、Yi Pan、Yanqing Zhang*\n\n1. [**Rossmann工具箱：一种基于深度学习的协议，用于预测和设计Rossmann折叠蛋白中的辅因子特异性**](https:\u002F\u002Facademic.oup.com\u002Fbib\u002Fadvance-article\u002Fdoi\u002F10.1093\u002Fbib\u002Fbbab371\u002F6375059)，《生物信息学简报》，*Kamil Kaminski、Jan Ludwiczak、Maciej Jasinski、Adriana Bukala、Rafal Madaj、Krzysztof Szczepaniak、Stanislaw Dunin-Horkawicz*\n\n1. [**LGESQL：结合局部与非局部关系的线图增强型文本到SQL模型**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.01093.pdf)，ACL'21，*Ruisheng Cao、Lu Chen、Zhi Chen、Yanbin Zhao、Su Zhu、Kai Yu*\n\n1. [**通过辅助训练提升半监督节点分类的图神经网络性能**](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0950705121001477)，《知识系统》'21，*Yao Wu、Yu Song、Hong Huang、Fanghua Ye、Xing Xie、Hai Jin*\n\n1. [**利用邻居混合模型建模图节点相关性**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.15966.pdf)，*Linfeng Liu、Michael C. Hughes、Li-Ping Liu*\n\n1. [**将物理学与机器学习相结合用于网络流量估计**](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F9dc2744a465941220de07cf308acf822ec8aaa64.pdf)，ICLR'21，*Arlei Silva、Furkan Kocayusufoglu、Saber Jafarpour、Francesco Bullo、Ananthram Swami、Ambuj Singh*\n\n1. [**基于图注意力网络的学术资源分类方法**](https:\u002F\u002Fwww.mdpi.com\u002F1999-5903\u002F13\u002F3\u002F64\u002Fhtm)，《未来互联网》'21，*Jie Yu、Yaliu Li、Chenle Pan 和 Junwei Wang*\n\n1. [**面向GPU的数据通信架构下的大规模图卷积网络训练**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03330)，*Seung Won Min、Kun Wu、Sitao Huang、Mert Hidayetoğlu、Jinjun Xiong、Eiman Ebrahimi、Deming Chen、Wen-mei Hwu*\n\n1. [**图注意力多层感知器**](https:\u002F\u002Fgithub.com\u002FPKU-DAIR\u002FGAMLP\u002Fblob\u002Fmain\u002FGAMLP.pdf)，*Wentao Zhang、Ziqi Yin、Zeang Sheng、Wen Ouyang、Xiaosen Li、Yangyu Tao、Zhi Yang、Bin Cui*\n\n1. [**GNNLens：一种用于图神经网络预测误差诊断的可视化分析方法**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.11048v5)，*Zhihua Jin、Yong Wang、Qianwen Wang、Yao Ming、Tengfei Ma、Huamin Qu*\n\n1. 
[**图注意力网络有多“注意”？**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14491.pdf)，*Shaked Brody、Uri Alon、Eran Yahav*，[代码链接](https:\u002F\u002Fgithub.com\u002Ftech-srl\u002Fhow_attentive_are_gats)\n\n1. [**SCENE：利用异质图神经网络进行交通场景推理**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.03512.pdf)，*Thomas Monninger\*、Julian Schmidt\*、Jan Rupprecht、David Raba、Julian Jordan、Daniel Frank、Steffen Staab、Klaus Dietmayer*，[代码链接](https:\u002F\u002Fgithub.com\u002Fschmidt-ju\u002Fscene)，\*共同第一作者\n\n\u003C\u002Fdetails>\n\n\n\n## 贡献\n\n如果您遇到任何错误或有任何建议，请通过[提交问题](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fissues)告知我们。\n\n我们欢迎各种形式的贡献，从修复漏洞到新增功能和扩展。\n\n我们期望所有贡献都在议题跟踪器中讨论，并通过拉取请求（PR）提交。请参阅我们的[贡献指南](https:\u002F\u002Fdocs.dgl.ai\u002Fcontribute.html)。\n\n## 引用\n\n如果您在科学出版物中使用了DGL，我们非常感谢您引用以下论文（BibTeX 条目应保留英文原文，以便他人检索与引用）：\n```\n@article{wang2019dgl,\n    title={Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks},\n    author={Wang, Minjie and Zheng, Da and Ye, Zihao and Gan, Quan and Li, Mufei and Song, Xiang and Zhou, Jinjing and Ma, Chao and Yu, Lingfan and Gai, Yu and Xiao, Tianjun and He, Tong and Karypis, George and Li, Jinyang and Zhang, Zheng},\n    year={2019},\n    journal={arXiv preprint arXiv:1909.01315}\n}\n```\n\n## 团队\n\nDGL由[纽约大学、纽约大学上海分校、AWS上海人工智能实验室以及AWS MXNet科研团队](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fabout.html)开发并维护。\n\n## 许可证\n\nDGL采用Apache许可证2.0版。","# DGL 快速上手指南\n\nDGL (Deep Graph Library) 是一个易于使用、高性能且可扩展的 Python 包，专为图深度学习设计。它框架无关，支持 PyTorch、Apache MXNet 和 TensorFlow 后端。\n\n## 环境准备\n\n*   **操作系统**: Linux (x86\u002FARM), macOS, Windows (部分功能支持)\n*   **Python**: 3.6 - 3.10\n*   **深度学习框架**: 需预先安装 PyTorch, TensorFlow 或 MXNet 其中之一（推荐 PyTorch）\n*   **硬件**: 支持 CPU 和 GPU (CUDA)。若在 GPU 上运行，请确保已安装对应的 NVIDIA 驱动和 CUDA 工具包。\n\n## 安装步骤\n\n### 方法一：使用 pip 安装（推荐）\n\n访问 [DGL 官方安装页面](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fstart.html) 获取匹配你当前环境的命令。以下是通用示例：\n\n**1. 安装 CPU 版本 (配合 PyTorch):**\n```bash\npip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Frepo.html\n```\n\n**2. 
安装 GPU 版本 (配合 PyTorch 和 CUDA 11.x):**\n```bash\npip install dgl-cu113 -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Frepo.html\n```\n*(注：请将 `cu113` 替换为你实际的 CUDA 版本，如 `cu102`, `cu117` 等)*\n\n> **国内加速提示**：如果下载速度慢，可尝试配置 pip 使用国内镜像源（如清华源）：\n> ```bash\n> pip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Frepo.html -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方法二：使用 Conda 安装\n\n```bash\nconda install -c dglteam dgl\n```\n*(GPU 版本通常建议直接使用 pip 安装特定 wheel 包以确保 CUDA 版本匹配)*\n\n### 方法三：Docker (适合高级用户)\n\n直接从 NVIDIA NGC 拉取预装好环境和 PyTorch 后端的镜像：\n```bash\ndocker pull nvcr.io\u002Fnvidia\u002Fdgl:latest\n```\n\n## 基本使用\n\n以下是一个基于 PyTorch 后端的最简示例，演示如何构建一个图并执行消息传递（Message Passing）。\n\n**1. 导入库并构建图**\n\n```python\nimport dgl\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n# 创建一个简单的有向图 (4 个节点，边：0->1, 0->2, 1->2, 1->3)\nsrc = torch.tensor([0, 0, 1, 1])\ndst = torch.tensor([1, 2, 2, 3])\ng = dgl.graph((src, dst))\n\nprint(g)\n# 输出: Graph(num_nodes=4, num_edges=4, ...)\n```\n\n**2. 定义节点特征与 GNN 层**\n\n```python\n# 初始化节点特征 (4 个节点，每个节点特征维度为 2)\ng.ndata['h'] = torch.randn(4, 2)\n\n# 定义一个简单的图卷积网络 (GCN) 层\nclass GCNLayer(nn.Module):\n    def __init__(self, in_feats, out_feats):\n        super(GCNLayer, self).__init__()\n        self.linear = nn.Linear(in_feats, out_feats)\n\n    def forward(self, g, features):\n        with g.local_scope():\n            g.ndata['h'] = features\n            # 消息传递：聚合邻居信息\n            g.update_all(message_func=dgl.function.copy_u('h', 'm'),\n                         reduce_func=dgl.function.mean('m', 'h'))\n            h = g.ndata['h']\n            return self.linear(h)\n\nmodel = GCNLayer(2, 2)\n```\n\n**3. 
执行前向传播**\n\n```python\n# 运行模型\noutput = model(g, g.ndata['h'])\nprint(output)\n```\n\n**下一步学习：**\n*   **新手教程**: 访问 [DGL Blitz Introduction](https:\u002F\u002Fdocs.dgl.ai\u002Ftutorials\u002Fblitz\u002Findex.html) (含中文版文档) 进行 120 分钟的系统学习。\n*   **示例代码**: 查看 [官方 Examples](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples) 仓库，包含各类主流 GNN 模型的实现。\n*   **中文文档**: 详细概念与用法请参考 [DGL 用户指南 (中文版)](https:\u002F\u002Fdocs.dgl.ai\u002Fguide_cn\u002Findex.html)。","某金融科技公司风控团队需要构建图神经网络模型，通过分析数亿用户间的转账关系来实时识别欺诈团伙。\n\n### 没有 dgl 时\n- 开发者需手动编写复杂的稀疏矩阵运算代码来实现消息传递机制，极易出错且难以调试。\n- 面对十亿级边的超大图数据，单机内存直接爆满，团队不得不花费数周定制分布式切片逻辑。\n- 想要尝试最新的 GNN 变体（如 GraphSAGE 或 GAT），必须从零复现论文算法，研发周期长达数月。\n- 代码与底层深度学习框架强耦合，若想从 PyTorch 迁移到 TensorFlow，几乎需要重写整个图计算模块。\n- 缺乏统一的数据结构管理，节点特征与拓扑结构分离存储，导致数据预处理流程繁琐且低效。\n\n### 使用 dgl 后\n- 直接调用 dgl 内置的高效消息传递原语，几行代码即可构建复杂的图卷积层，开发效率提升十倍。\n- 利用 dgl 的原生分布式训练能力，轻松将十亿级大图拆分至多机多卡集群，无需关心底层通信细节。\n- 通过 dgl 提供的丰富官方示例库和预置 SOTA 模型层，当天即可完成新算法的验证与基线测试。\n- 凭借框架无关的特性，核心图逻辑保持不变，仅需修改少量接口即可在 PyTorch、MXNet 或 TensorFlow 间无缝切换。\n- 使用 dgl 统一的图对象同时托管拓扑结构与特征数据，支持 GPU 直接加速，大幅简化了数据流水线。\n\ndgl 让团队从繁琐的底层图算子实现中解放出来，专注于业务逻辑创新，将欺诈检测模型的迭代周期从月级缩短至天级。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdmlc_dgl_a155b5be.jpg","dmlc","Distributed (Deep) Machine Learning Community","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fdmlc_b69cc302.png","A Community of Awesome Machine Learning Projects",null,"https:\u002F\u002Fgithub.com\u002Fdmlc",[79,83,87,91,95,99,103,106,110],{"name":80,"color":81,"percentage":82},"Python","#3572A5",62.5,{"name":84,"color":85,"percentage":86},"C++","#f34b7d",22.8,{"name":88,"color":89,"percentage":90},"Jupyter 
Notebook","#DA5B0B",6.9,{"name":92,"color":93,"percentage":94},"Cuda","#3A4E3A",6.4,{"name":96,"color":97,"percentage":98},"CMake","#DA3434",0.6,{"name":100,"color":101,"percentage":102},"Shell","#89e051",0.3,{"name":104,"color":105,"percentage":102},"C","#555555",{"name":107,"color":108,"percentage":109},"Cython","#fedf5b",0.2,{"name":111,"color":112,"percentage":113},"Batchfile","#C1F12E",0.1,14265,3052,"2026-04-16T03:44:57","Apache-2.0","Linux (x86 和 ARM 架构)","非必需（支持 CPU），若使用 GPU 需 NVIDIA GPU（通过 PyTorch\u002FMXNet\u002FTensorFlow 后端），具体显存和 CUDA 版本取决于所选深度学习框架","未说明（针对十亿级节点图推荐多机分布式训练）",{"notes":122,"python":123,"dependencies":124},"DGL 是框架无关的，需安装对应的深度学习框架（PyTorch、MXNet 或 TensorFlow）。官方提供基于 PyTorch 的 GPU Docker 镜像。支持在多 GPU 或多机器上进行分布式训练以处理大规模图数据。可通过 pip、conda 或源码安装。","未说明",[125,126,127],"PyTorch","Apache MXNet","TensorFlow",[14],[130,131],"deep-learning","graph-neural-networks","2026-03-27T02:49:30.150509","2026-04-18T22:35:18.651493",[],[136,141,146,151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231],{"id":137,"version":138,"summary_zh":139,"released_at":140},324278,"v2.4.0","## 亮点\n* DGL 2.4 的文档可以在这里找到：https:\u002F\u002Fwww.dgl.ai\u002Fdgl_docs\u002Findex.html\n* 默认情况下，导入 DGL 时不会自动导入 `distributed` 模块。用户需要手动导入：`import dgl.distributed`。\n* `DistNodeDataLoader` 和 `DistEdgeDataLoader` 已从 `dgl.dataloading` 移至 `dgl.distributed`。建议用户使用 `dgl.distributed.DistNode\u002FEdgeDataLoader`，尽管 `dgl.dataloading.DistNode\u002FEdgeDataLoader` 仍然可用。这种向后兼容性将在下一个版本中移除。\n* GraphBolt 示例现在位于 `examples\u002Fgraphbolt` 目录下。\n* 如果用户安装了支持 CUDA 的 PyTorch，则现在需要单独安装 GraphBolt 的 CUDA 轮子包。\n* 现已支持 NumPy 2.x。\n* 在 @pyynb 的贡献下，DGL 现在支持 PyTorch 2.4 和 CUDA 12.4，详见 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7629。\n* 导入 DGL 不再会自动导入 GraphBolt，由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7676 和 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7756 中实现。\n* GraphBolt 现已不再依赖已弃用的 `torchdata` 包，并且本版本与 `torchdata` 包不兼容，相关更改由 
@frozenbugs 完成，详见 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7638、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7609、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7667 和 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7688。\n* [GraphBolt][CUDA] 使用更优的内存分配算法以避免 OOM 问题，由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7618 中完成。\n* [GraphBolt] 通过消除所有（已知的）GPU 同步操作，显著提升了 GPU 利用率：https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7528、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7682、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7709、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7707、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7712、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7705、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7602、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7603、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7634 和 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7757，均由 @mfbalin 完成。\n* [GraphBolt][io_uring] `gb.DiskBasedFeature` 现已可用于外存训练：https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7506、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7713、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7562、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7515、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7530 和 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7518，均由 @mfbalin 完成。\n* [GraphBolt] 建议用户在进行外存训练时，使用 `gb.numpy_save_aligned` 替代 `numpy.save` 来保存特征数据，由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7524 中提出。\n* [GraphBolt] 新增了 
`gb.CPUCachedFeature`，以加速外存训练：https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7492、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7508、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7520、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7526、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7525、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7531、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7537、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7538、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7581、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7723、https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7644 和 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7731，均由 @mfbalin 完成。","2024-09-03T04:16:25",{"id":142,"version":143,"summary_zh":144,"released_at":145},324279,"v2.3.0","## 亮点\n* 支持 `torch 2.3.1`。支持的 PyTorch 版本范围为 `2.1` 至 `2.3`。\n* `numpy 2.0.0` 尚未完全支持或兼容。我们新增了对 NumPy 的依赖要求，版本应低于 `2.0.0`。该限制将在不久的将来移除。\n* `ItemSetDict` 已被 `HeteroItemSet` 取代。请使用新类，尽管我们仍为已弃用的旧类保留了一个别名。\n* 在 #7470、#7475 和 #7483 中，GraphBolt 新增了增量式 GPU 图缓存功能。示例用法见 #7482。\n* 分布式 DGL 现已支持 `exclude_edges` 参数。\n* 从现在起，我们将不再提供适用于 `Windows` 和 `Mac` 的预编译包。请自行从源代码构建并安装。\n\n## 错误修复\n* [DistDGL] 启用 `exclude_edges` 的 `sample_neighbors()` 功能，由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7425 中实现。\n* [DistDGL] 为 `sample_etype_neighbors()` 启用 `exclude_edges`，由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7427 中实现。\n* [DistDGL] 修复使用 gloo 后端调用 `all_to_all` 时出现的设备不匹配问题，由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7409 中解决。\n* [GraphBolt][CUDA] 修复 GPUCachedFeature 更新问题，由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7384 中完成。\n* [GraphBolt][CUDA] 使数据加载器可序列化，由 @mfbalin 在 
https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7391 中实现。\n* [graphbolt] 跳过 `input_nodes` 中不存在的类型，由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7386 中完成。\n* [DistPart] 修复分布式分区中的边界情况，该情况曾导致断言错误持续触发，由 @thvasilo 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7395 中解决。\n* [GraphBolt] 修复子图中存在空边时小批量中的 `blocks` 问题，由 @yxy235 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7413 中完成。\n* [功能] 在 `COOToCSR` 中添加非零元素数量检查，由 @Skeleton003 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7459 中实现。\n\n## 新示例\n* [GraphBolt] Labor（层邻居采样）示例，由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7437 中提供。\n\n## 新贡献者\n* @vmiheer 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7447 中完成了首次贡献。\n* @az15240 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7465 中完成了首次贡献。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fcompare\u002Fv2.2.1...v2.3.0","2024-06-28T00:16:49",{"id":147,"version":148,"summary_zh":149,"released_at":150},324280,"v2.2.1","我们非常高兴地宣布 **DGL 2.2.1** 正式发布！🎉🎉🎉\n\n## 主要变更\n* 支持的 PyTorch 版本更新为 *2.1.0\u002F1\u002F2*、*2.2.0\u002F1\u002F2* 和 *2.3.0*。安装命令请参见 [这里](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fstart.html)。\n* GraphBolt 中的 `MiniBatch` 已重构：在整个流水线中，`seed_nodes` 和 `node_pairs` 被统一的 `seeds` 属性所取代。更多详情请参考最新的 [示例](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fsampling\u002Fgraphbolt)，由 @yxy235 提供。\n* GraphBolt 采样功能现已在 DistGL 中支持节点分类任务。示例请参见 [这里](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fdistributed)。\n* [GraphBolt] 由 @RamonZhou 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7360 中优化了 CPU 上的异构图采样。\n* [GraphBolt] 由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7188 中为 `gb.expand_indptr` 添加了 `torch.compile()` 支持。\n* [GraphBolt] 由 @RamonZhou 在 
https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7217 和 #7239 中使 `unique_and_compact` 的行为变得确定性。\n* [GraphBolt] 由 @yxy235 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7354 中为 `subgraph_sampler` 增加了超边（hyperlink）支持。\n* [GraphBolt] 由 @mfbalin 在 #7205、#7208、#7212 和 #7220 中为 `gb.LayerNeighborSampler` 和 `dgl.dataloading.LaborSampler` 新增了 `layer_dependency` 和 `batch_dependency` 参数。\n* [GraphBolt][CUDA] 由 @mfbalin 实现了更快的 GPU 邻居采样与压缩内核。相关 PR 号为 #7239 和 #7215。\n* [GraphBolt][CUDA] 由 @mfbalin 通过融合内核提升了异构 CPU-GPU 性能。相关 PR 号为 #7223 和 #7312。\n* [GraphBolt][CUDA] 由 @mfbalin 在整个采样流水线中消除了 GPU 同步操作。相关 PR 号为 #7240 和 #7264。\n\n## Bug 修复\n* [DistGB] 恢复 `toindex()` 方法，并由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7197 中完善了相关测试。\n* [GraphBolt] 由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7259 中为 PyG 高级示例中的 `torch.compile()` 引入的 bug 提供了临时解决方案。\n* [CUDA][Bug] 由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7295 中修复了 CUDA 12 中的 CSR 转置 bug。\n* [确定性] 由 @TristonC 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7310 中添加了环境变量，以启用 cuSPARSE spmm 的确定性算法。\n\n## 新贡献者\n* @Chaos-Hu-edu 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7219 中完成了首次贡献。\n* @MikuSugar 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7258 中完成了首次贡献。\n* @pyynb 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7267 中完成了首次贡献。\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fcompare\u002Fv2.1.0...v2.2.1","2024-05-11T02:59:41",{"id":152,"version":153,"summary_zh":154,"released_at":155},324281,"v2.1.0","我们非常高兴地宣布 DGL 2.1.0 正式发布！🎉🎉🎉\n\n## 主要变更：\n1. `GraphBolt` 的 CUDA 后端现已可用。感谢 @mfbalin 的卓越贡献。请参阅[更新后的示例](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fsampling\u002Fgraphbolt)。\n2. 现不再支持 PyTorch 1.13。目前支持的 PyTorch 版本为 2.0.0\u002F1、2.1.0\u002F1\u002F2 和 2.2.0\u002F1。\n3. 
现不再支持 CUDA 11.6。目前支持的 CUDA 版本为 11.7、11.8 和 12.1。\n4. 通过 #7039 和 #6954 中的流水线并行化提升了数据加载性能，详情请参阅新的 [gb.DataLoader](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002F2.1.x\u002Fgenerated\u002Fdgl.graphbolt.DataLoader.html) 参数。\n5. 其他操作和内核优化。\n6. 新增支持将 `GraphBolt` 的采样输出转换为 `PyG` 数据格式，并可与 `PyG` 模型无缝训练：[示例](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fsampling\u002Fgraphbolt\u002Fpyg)。\n\n## Bug 修复\n* [GraphBolt] 将负样本节点对（node pairs）修正为二维张量，由 @peizhou001 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F6951 中修复。\n* [GraphBolt] 修复 RGCN 示例中的扇出设置，由 @RamonZhou 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F6959 中修复。\n* [GraphBolt] 修复所有工作进程间洗牌使用的随机数生成器，由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F6982 中修复。\n* [GraphBolt] 修复单类型节点\u002F边图的预处理问题，由 @Rhett-Ying 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7011 中修复。\n* [GraphBolt] 修复 GPU 上的负采样器种子问题，由 @yxy235 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7068 中修复。\n* [GraphBolt][CUDA] 修复链接预测的提前停止机制，由 @mfbalin 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7083 中修复。\n\n## 新增示例\n* [功能] ARGO：一款易于使用的运行时工具，用于提升多核处理器上的 GNN 训练性能，由 @jasonlin316 在 https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F7003 中添加。\n\n## 致谢\n感谢各位的贡献！\n@drivanov @frozenbugs @LourensT @Skeleton003 @mfbalin @RamonZhou @Rhett-Ying @wkmyws @jasonlin316 @caojy1998 @czkkkkkk @hutiechuan @peizhou001 @rudongyu @xiangyuzhi @yxy235","2024-03-06T03:29:59",{"id":157,"version":158,"summary_zh":159,"released_at":160},324282,"v2.0.0","我们非常高兴地宣布 DGL 2.0.0 正式发布！这是我们在赋能开发者使用前沿图神经网络（GNN）工具道路上的一个重要里程碑。🎉🎉🎉\n\n## 新增包：dgl.graphbolt\n\n在本次发布中，我们推出了一款全新的包：[dgl.graphbolt](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002F2.0.x\u002Fapi\u002Fpython\u002Fdgl.graphbolt.html)，它是一个革命性的数据加载框架，通过优化数据流水线，大幅提升 GNN 训练和推理的效率。有关 GraphBolt 的概述及端到端笔记本教程，请参阅 
[文档页面](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002F2.0.x\u002Fstochastic_training\u002Findex.html)。更多端到端示例可在 [GitHub 代码库](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002F2.0.x\u002Fexamples\u002Fsampling\u002Fgraphbolt) 中找到。\n\n## 新增内容\n- 异质关系 GCN 示例 (#6157)\n- 为异质 PGExplainer 实现添加节点解释功能 (#6050)\n- 在 LRGB 中新增肽类结构数据集 (#6337)\n- 在 LRGB 中新增肽类功能数据集 (#6363)\n- 在 LRGB 中新增 VOCSuperpixels 数据集 (#6389)\n- 添加紧凑算子 (#6352)\n- 添加 COCOsuperpixel 数据集 (#6407)\n- 添加 graphSAGE 示例 (#6481)\n- 在 benchmark-gnn 中新增 CIFAR10 和 MNIST 数据集 (#6543)\n- 添加 ogc 方法 (#6437)\n- 添加 LADIES 示例 (#6560)\n- 调整同质性和标签信息量 (#6516)\n\n## 系统\u002F示例\u002F文档增强\n- 更新 README，说明如何从 NGC 获取 DGL 容器 (#6133)\n- CPU 版 Docker 使用 tcmalloc (#5969)\n- 在 lap_pe 中使用 scipy 的 eigs 替代 numpy (#5855)\n- 添加来自 conda-forge 构建的 CMake 更改 (#6189)\n- 将 `googletest` 升级至 v1.14.0 (#6273)\n- 修复采样链接预测示例中的拼写错误 (#6268)\n- 添加稀疏矩阵切片运算符实现 (#6208)\n- 使用 torchrun 替代 torch.distributed.launch (#6304)\n- 稀疏采样实现 (#6303)\n- 添加 relabel Python API (#6323)\n- 紧凑化 C++ API (#6334)\n- 修复编译警告 (#6342)\n- 更新 Labor 采样器文档，并加入 NeurIPS 接收论文信息 (#6369)\n- 更新 LRGB 的 docstring (#6430)\n- 对于单线程情况，不融合邻居采样器 (#6421)\n- 修复 graph_transformer 示例 (#6471)\n- 为 EEG_GCNN 示例添加 `--num_workers` 输入参数 (#6467)\n- 更新 network_emb.py 文档 (#6559)\n- 在 yield 块执行过程中发生错误时，防止临时更改被持久化 (#6506)\n- 提供双向边选项 (#6566)\n- 改进 MLP 示例 (#6593)\n- 改进 JKNET 示例 (#6596)\n- 避免每次采样过程中都调用 coo\u002Fcsr 构造函数中的 `IsPinned` 方法 (#6568)\n- 添加图变换器的教程文档 (#6889, #6949)\n- 重构 SpatialEncoder3d (#5894)\n\n## Bug 修复\n- 修复 CUDA 12 下 cusparseCreateCsr 格式的兼容性问题 (#6121)\n- 修复独立模式下的一个 bug (#6179)\n- 修复 extrace_archive 默认参数的问题 (#6333)\n- 修复设备检查问题 (#6409)\n- 在 g.idtype 中返回批次相关 ID (#6578)\n- 修复 ShaDowKHopSampler 中的拼写错误 (#6587)\n- 修复整数溢出问题 (#6586)\n- 修复 DGL 节点\u002F边特征的延迟设备拷贝问题","2024-01-12T03:51:43",{"id":162,"version":163,"summary_zh":164,"released_at":165},324283,"v1.1.3","# 主要变更\n* 新增 PyTorch `2.1.0`、`2.1.1`（Windows 版本除外），目前支持的版本为 `1.13.0`、`1.13.1`、`2.0.0`、`2.0.1`、`2.1.0`、`2.1.1`。\n* 新增 CUDA `12.1`，目前支持的版本为 
`11.6`、`11.7`、`11.8`、`12.1`。\n* 由于编译问题，PyTorch `2.1.0` 和 `2.1.1` 的 Windows 版本暂不支持。该问题修复后将立即提供支持。","2023-12-11T09:43:07",{"id":167,"version":168,"summary_zh":169,"released_at":170},324284,"1.1.2","# 重大变更\n* PyTorch `1.12.0` 和 `1.12.1` 已弃用，当前支持的版本为 `1.13.0`、`1.13.1`、`2.0.0` 和 `2.0.1`。\n* CUDA `10.2` 和 `11.3` 已弃用，当前支持的版本为 `11.6`、`11.7` 和 `11.8`。\n* 构建时使用的 C++ 标准已升级至 `C++17`。\n* 进行了多项性能优化，例如 #5885、#5924 等。\n* 更新了多个示例以提高可读性，例如 #6035、#6036 等。\n* 修复了若干 bug，例如 #6044、#6001 等。","2023-08-15T07:31:40",{"id":172,"version":173,"summary_zh":174,"released_at":175},324285,"1.1.1","# 新增内容\n* 添加对 PyTorch `2.0.1` 的支持。\n* 修复了多个 bug，例如 DistDGL 的 #5872、`dgl.khop_adj()` 的 #5754 等。\n* 移除了一些未使用的第三方库，如 `xbyak` 和 `tvm`。\n* 进行了几项性能优化，例如 #5508 和 #5685。","2023-06-27T03:26:45",{"id":177,"version":178,"summary_zh":179,"released_at":180},324286,"1.1.0","\r\n## 新增功能\r\n\r\n* 稀疏 API 改进\r\n* 用于在异质性条件下评估图变换器和图学习的基准数据集\r\n* 模块与工具，包括 Cugraph 卷积模块和 SubgraphX\r\n* 图变换器相关模块更名，旧版名称已弃用（详见下文）\r\n* 性能提升\r\n* 扩展 BF16 数据类型以支持第四代 Intel® Xeon® 可扩展处理器 ([#5497](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Fpull\u002F5497))\r\n\r\n## 详细说明\r\n\r\n### 稀疏 API 改进 (@czkkkkkk )\r\n\r\n**SparseMatrix 类**\r\n\r\n* 将 DiagMatrix 类合并到 SparseMatrix 类中，对角矩阵将以稀疏矩阵形式存储，并继承稀疏矩阵的所有运算符。(#5367)\r\n* 支持将 DGLGraph 转换为 SparseMatrix。`g.adj(self, etype=None, eweight_name=None)` 返回 DGL 图 `g` 在边类型 `etype` 和边权重 `eweight_name` 下的稀疏矩阵表示。(#5372)\r\n* 通过 `dgl.sparse.to_torch_sparse_coo\u002Fcsr\u002Fcsc` 和 `dgl.sparse.from_torch_sparse`，实现 PyTorch 稀疏张量与 SparseMatrix 之间的零开销转换。(#5373)\r\n\r\n**SparseMatrix 运算符**\r\n\r\n* 支持对两个稀疏度不同的稀疏矩阵进行逐元素乘法，例如 `A * B`。(#5368)\r\n* 支持对两个稀疏度相同的稀疏矩阵进行逐元素除法，例如 `A \u002F B`。(#5369)\r\n* 支持通过 `dgl.sparse.broadcast_add\u002Fsub\u002Fmul\u002Fdiv` 对稀疏矩阵和一维张量应用广播运算符。(#5370)\r\n* 支持按列计算 softmax。(#5371)\r\n\r\n**SparseMatrix 示例**\r\n\r\n* 异构图注意力网络示例 (#5568, @mufeili )\r\n\r\n### 数据集\r\n\r\n* PATTERNDataset (#5422, @gyzhou2000 )\r\n* CLUSTERDataset (#5389, @ZHITENGLI )\r\n* ChameleonDataset (#5477, @mufeili )\r\n* SquirrelDataset 
(#5507, @mufeili )\r\n* ActorDataset (#5511, @mufeili )\r\n* CornellDataset (#5513, @mufeili )\r\n* TexasDataset (#5513, @mufeili )\r\n* WisconsinDataset (#5520, @mufeili )\r\n* ZINCDataset (#5428, @ZhenyuLU-Heliodore )\r\n\r\n### 模块与工具\r\n\r\n* 用于分布外 (OOD) 评估的数据集划分 (#5418, @gvbazhenov )\r\n* 基于奇异值分解的位置编码 (#5121, @ZhenyuLU-Heliodore )\r\n* 用于衡量图同质性的工具函数 (#5376, #5382, @mufeili )\r\n* EdgeGATConv (#5282, @schmidt-ju )\r\n* CuGraphGATConv (#5168, @tingyu66 )\r\n* CuGraphSAGEConv (#5137, @tingyu66 )\r\n* SubgraphX (#5315, @kunmukh )\r\n* 适用于异构图的 SubgraphX (#5530, @ndbaker1 , @kunmukh )\r\n\r\n### 已弃用 (#5100, @rudongyu )\r\n\r\n* laplacian_pe 已弃用，由 lap_pe 替代\r\n* LaplacianPE 已弃用，由 LapPE 替代\r\n* LaplacianPosEnc 已弃用，由 LapPosEncoder 替代\r\n* BiasedMultiheadAttention 已弃用，由 BiasedMHA 替代\r\n\r\n### 性能提升\r\n\r\n**加速图采样中的 CPU to_block 函数。** (#5305, @peizhou001 )\r\n\r\n* 添加一个并发哈希映射，利用多线程能力加快 ID 映射过程 (#5241, #5304)。\r\n* 通过使用新的哈希映射加速耗时的 to_block 操作，平均性能提升约 2.5 倍。","2023-05-05T08:50:34",{"id":182,"version":183,"summary_zh":184,"released_at":185},324287,"1.0.2","## 新增功能\n\n- 增加对 CUDA 11.8 的支持。请使用以下命令安装：\n  ```bash\n  pip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Fcu118\u002Frepo.html\n  conda install -c dglteam\u002Flabel\u002Fcu118 dgl\n  ```\n- 增加对 Python 3.11 的支持\n- 增加对 PyTorch 2.0 的支持","2023-03-31T09:24:03",{"id":187,"version":188,"summary_zh":189,"released_at":190},324288,"1.0.1","## What's new\r\n\r\n- Enable dgl.sparse on Mac and Windows.\r\n- Fixed several bugs.","2023-02-21T07:15:26",{"id":192,"version":193,"summary_zh":194,"released_at":195},324289,"1.0.0","v1.0.0 release is a new milestone for DGL.  🎉🎉🎉\r\n\r\n## New Package: dgl.sparse\r\n\r\nIn this release, we introduced a brand new package: [dgl.sparse](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002Flatest\u002Fapi\u002Fpython\u002Fdgl.sparse_v0.html), which allows DGL users to build GNNs in Sparse Matrix paradigm. 
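The sparse-matrix paradigm mentioned above can be illustrated without any framework: one round of mean-neighbor message passing is just a degree-normalized adjacency matrix applied to the node-feature matrix (H' = D⁻¹AH). The sketch below is a plain-Python illustration of that idea, not the actual `dgl.sparse` API (which provides `SparseMatrix` objects and operators such as `from_torch_sparse` and `broadcast_div`):

```python
# Framework-free sketch of the sparse-matrix view of message passing.
# Illustration only: dgl.sparse expresses the same computation with
# SparseMatrix operators instead of explicit Python loops.

def mean_aggregate(num_nodes, edges, feats):
    """Compute H' = D^{-1} A H: average the in-neighbor features of each node."""
    dim = len(feats[0])
    out = [[0.0] * dim for _ in range(num_nodes)]
    deg = [0] * num_nodes
    for src, dst in edges:            # apply the sparse adjacency row by row
        deg[dst] += 1
        for k in range(dim):
            out[dst][k] += feats[src][k]
    for v in range(num_nodes):        # normalize by in-degree (the D^{-1} factor)
        if deg[v]:
            out[v] = [x / deg[v] for x in out[v]]
    return out

# Toy graph with edges 0->1, 0->2, 1->2, 1->3 and 2-dimensional features.
edges = [(0, 1), (0, 2), (1, 2), (1, 3)]
feats = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
print(mean_aggregate(4, edges, feats))
```

Node 2 here averages the features of its two in-neighbors (nodes 0 and 1); isolated rows stay zero. Expressing this as a matrix product is what lets a sparse library fuse and accelerate the whole layer.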
We provided [Google Colab tutorials](https:\u002F\u002Fdocs.dgl.ai\u002Fen\u002Flatest\u002Fnotebooks\u002Fsparse\u002Findex.html) on dgl.sparse package from getting started on sparse APIs to building different types of GNN models including Graph Diffusion, Hypergraph and Graph Transformer, and 10+ examples of commonly used models in [github code base](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fsparse).\r\n\r\n**NOTE**: this feature is currently only available in Linux.\r\n\r\n## New Additions\r\n\r\n- A new example of SEAL+NGNN for OGBL datasets (#4550, #4772)\r\n- Add DeepWalk module (#4562)\r\n- A new example of BiPointNet for modelnet40 dataset (#4434)\r\n- Add Transformers related modules: Metapath2vec (#4660), LaplacianPosEnc (#4750), DegreeEncoder (#4742), ToLevi (#4884), BiasedMultiheadAttention (#4916), PathEncoder (#4956), GraphormerLayer (#4959), SpatialEncoder & SpatialEncoder3d (#4991)\r\n- Add Graph Positional Encoding Ops: double_radius_node_labeling (#4513), shortest_dist (#4799)\r\n- Add a new sample algorithm: (La)yer-Neigh(bor) sampling (#4668)\r\n\r\n## System Enhancement\r\n\r\n- Support PyTorch CUDA Stream (#4503)\r\n- Support canonical edge types in HeteroGraphConv (#4440)\r\n- Reduce Memory Consumption in Distributed Training Example (#4558)\r\n- Improve the performance of `is_unibipartite` (#4556)\r\n- Add options for padding and eigenvalues in Laplacian positional encoding transform (#4628)\r\n- Reduce startup overhead for dist training (#4735)\r\n- Add Heterogeneous Graph support for GNNExplainer (#4401)\r\n- Enable sampling with edge masks on homogeneous graph (#4748)\r\n- Enable save and load for Distributed Optimizer (#4752)\r\n- Add edge-wise message passing operators u_op_v (#4801)\r\n- Support bfloat16 (bf16) (#4648)\r\n- Accelerate CSRSliceMatrix\u003CkDGLCUDA, IdType> by leveraging hashmap (#4924)\r\n- Decouple size of node\u002Fedge data files from nodes\u002Fedges_per_chunk entries 
in the metadata.json for Distributed Graph Partition Pipeline(#4930)\r\n- Canonical etypes are always used during partition and loading in distributed DGL(#4777, #4814).\r\n- Add parquet support for node\u002Fedge data in Distributed Partition Pipeline.(#4933)\r\n\r\n## Deprecation & Cleanup\r\n\r\n- Deprecate unused dataset attributes (#4666)\r\n- Cleanup outdated examples (#4751)\r\n- Remove the deprecated functions (#5115, #5116, #5117)\r\n- Drop outdated modules (#5114, #5118)\r\n\r\n## Dependency Update\r\n\r\n**Starting from this release, we will drop support for CUDA 10.1 and 11.0. On Windows, we will further drop support for CUDA 10.2.**\r\n\r\n**Linux**: CentOS 7+ \u002F Ubuntu 18.04+\r\n\r\nPyTorch ver. \ CUDA ver. | 10.2 | 11.3 | 11.6 | 11.7\r\n-- | -- | -- | -- | --\r\n1.12 | ✅ | ✅ | ✅ |  \r\n1.13 |   |   | ✅ | ✅\r\n\r\n**Windows**: Windows 10+ \u002F Windows Server 2016+\r\n\r\nPyTorch ver. \ CUDA ver. | 11.3 | 11.6 | 11.7\r\n-- | -- | -- | --\r\n1.12 | ✅ | ✅ |  \r\n1.13 |   | ✅ | ✅\r\n\r\n## Bugfixes\r\n\r\n- Fix a bug related to EdgeDataLoader (#4497)\r\n- Fix graph structure corruption with transform (#4753)\r\n- Fix a bug causing UVA to fail on old GPUs (#4781)\r\n- Fix NN modules crashing with non-FP32 inputs (#4829)\r\n\r\n## Installation\r\n\r\nThe installation URL and conda repository have changed for CUDA packages.  
Please use the following:\r\n\r\n```\r\n# If you installed dgl","2023-01-30T07:07:35",{"id":197,"version":198,"summary_zh":199,"released_at":200},324290,"0.9.1","v0.9.1 is a minor release with the following update:\r\n\r\n## Distributed Graph Partitioning Pipeline\r\n\r\nDGL now supports partitioning and preprocessing graph data using multiple machines. At its core is a new data format called *Chunked Graph Data Format (CGDF)* which stores graph data by chunks. The new pipeline processes data chunks in parallel which not only reduces the memory requirement of each machine but also significantly accelerates the entire procedure. For the same random graph with 1B nodes\u002F5B edges, using a cluster of 8 AWS EC2 x1e.4xlarge (16 vCPU, 488GB RAM each), **the new pipeline can reduce the running time to 2.7 hours and cut down the money cost by 3.7x.** Read the [feature highlight blog](https:\u002F\u002Fwww.dgl.ai\u002Frelease\u002F2022\u002F09\u002F19\u002Frelease.html) for more details.\r\n\r\nTo get started with this new feature, check out the [new user guide chapter](https:\u002F\u002Fdocs.dgl.ai\u002Fguide\u002Fdistributed-preprocessing.html).\r\n\r\n## New Additions\r\n\r\n- A new example of SEAL model for OGBL datasets: https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fogb\u002Fseal_ogbl (#4291)\r\n- A new example of Directional Graph Substructure Networks (GSN) for OGBG-MolPCBA dataset: https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fogb\u002Fdirectional_GSN (#4405)\r\n- A new example of the [Network In Graph Neural Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11638) model for OGBL datasets: https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fogb\u002Fngnn (#4328)\r\n- PyTorch Multi-GPU examples are moved to 
[`dgl\u002Fexamples\u002Fpytorch\u002Fmultigpu\u002F`](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fmultigpu). With a new example of multi-GPU graph property prediction that can achieve **9.5x speedup on 8 GPUs**. (#4385)\r\n- A new example of Heterogeneous RGCN model on OGBN-MAG dataset: https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fogb\u002Fogbn-mag (#4331)\r\n- Refactored the code style of the following commonly visited examples: RGCN, GIN, GAT. (#4327) (#4280) (#4240)\r\n\r\n## System Enhancement\r\n\r\n- Two new APIs [`dgl.use_libxsmm`](https:\u002F\u002Fdocs.dgl.ai\u002Fgenerated\u002Fdgl.use_libxsmm.html#dgl.use_libxsmm) and [`dgl.is_libxsmm_enabled`](https:\u002F\u002Fdocs.dgl.ai\u002Fgenerated\u002Fdgl.is_libxsmm_enabled.html#dgl.is_libxsmm_enabled) to enable\u002Fdisable Intel LibXSMM. (#4455)\r\n- Added a new option `exclude_self` to exclude self-loop edges for [`dgl.knn_graph`](https:\u002F\u002Fdocs.dgl.ai\u002Fgenerated\u002Fdgl.knn_graph.html#dgl.knn_graph). The API now supports creating a batch of KNN graphs. (#4389)\r\n- The distributed training program launched by DGL will now report error when any trainer\u002Fserver fails.\r\n- Speedup DataLoader by adding CPU affinity support. (#4126)\r\n- Enable graph partition book to support canonical edge types. (#4343)\r\n- Improve the performance of CUDA SpMMCSr (#4363)\r\n- Add CUDA Weighted Neighborhood Sampling (#4064)\r\n- Enable UVA for Weighted Samplers (#4314)\r\n- Allow add data to self loop created by AddSelfLoop or add_self_loop (#4261)\r\n- Add CUDA Weighted Randomwalk Sampling (#4243)\r\n\r\n## Deprecation & Cleanup\r\n\r\n- Removed the already deprecated `AsyncTransferer` class. The functionality has been incorporated to DGL DataLoader. (#4505)\r\n- Removed the already deprecated `num_servers` and `num_workers` arguments of `dgl.distributed.initialize`. 
(#4284)\r\n\r\n## Dependency Update\r\n\r\n**Starting from this release, we will drop support for CUDA 10.1 and 11.0. On Windows, we will further drop support for CUDA 10.2.**\r\n\r\n**Linux**: CentOS 7+ \u002F Ubuntu 18.04+\r\n\r\nPyTorch ver. \ CUDA ver. | 10.2 | 11.1 | 11.3 | 11.5 | 11.6\r\n-- | -- | -- | -- | -- | --\r\n1.9 | ✅ | ✅ |   |   |  \r\n1.10 | ✅ | ✅ | ✅ |   |  \r\n1.11 | ✅ | ✅ | ✅ | ✅ |  \r\n1.12 | ✅ |   | ✅ |   | ✅\r\n\r\n**Windows**: Windows 10+ \u002F Windows Server 2016+","2022-09-20T07:13:00",{"id":202,"version":203,"summary_zh":204,"released_at":205},324291,"0.9.0","This is a major update with several new features including graph prediction pipeline in DGL-Go, cuGraph support, mixed precision support, and more.\r\n\r\nStarting from 0.9 we also ship arm64 builds for Linux and OSX.\r\n\r\n## DGL-Go\r\n\r\nDGL-Go now supports training GNNs for graph property prediction tasks. It includes two popular GNN models – Graph Isomorphism Network (GIN) and Principal Neighborhood Aggregation (PNA). For example, to train a GIN model on the ogbg-molpcba dataset, first generate a YAML configuration file using the command:\r\n\r\n```\r\ndgl configure graphpred --data ogbg-molpcba --model gin\r\n```\r\n\r\nwhich generates the following configuration file. 
Users can then manually adjust the configuration file.

```yaml
version: 0.0.2
pipeline_name: graphpred
pipeline_mode: train
device: cpu                     # Torch device name, e.g., cpu, cuda or cuda:0
data:
    name: ogbg-molpcba
    split_ratio:                # Ratio to generate the data split, e.g., [0.8, 0.1, 0.1] for 80% train/10% val/10% test. Leave blank to use the built-in split of the original dataset
model:
    name: gin
    embed_size: 300             # Embedding size
    num_layers: 5               # Number of layers
    dropout: 0.5                # Dropout rate
    virtual_node: false         # Whether to use virtual node
general_pipeline:
    num_runs: 1                 # Number of experiments to run
    train_batch_size: 32        # Graph batch size when training
    eval_batch_size: 32         # Graph batch size when evaluating
    num_workers: 4              # Number of workers for data loading
    optimizer:
        name: Adam
        lr: 0.001
        weight_decay: 0
    lr_scheduler:
        name: StepLR
        step_size: 100
        gamma: 1
    loss: BCEWithLogitsLoss
    metric: roc_auc_score
    num_epochs: 100             # Number of training epochs
    save_path: results          # Directory to save the experiment results
```

Alternatively, users can fetch model recipes with the pre-defined hyperparameters of the original experiments.

```
dgl recipe get graphpred_pcba_gin.yaml
```

To launch training:

```
dgl train --cfg graphpred_ogbg-molpcba_gin.yaml
```

Another addition is a new command to run inference with a trained model on another dataset.
For example, the following shows how to apply the GIN model trained on `ogbg-molpcba` to `ogbg-molhiv`.

```
# Generate an inference configuration file from a saved experiment checkpoint
dgl configure-apply graphpred --data ogbg-molhiv --cpt results/run_0.pth

# Apply the trained model for inference
dgl apply --cfg apply_graphpred_ogbg-molhiv_pna.yaml
```

It will save the model predictions in a CSV file like the one below.
![image](https://user-images.githubusercontent.com/2978100/179543126-b5223fc5-001a-42dc-b483-9d33d3cc0ea0.png)

## Mixed Precision

DGL is compatible with the [PyTorch Automatic Mixed Precision (AMP)](https://pytorch.org/docs/stable/amp.html) package for mixed precision training, which saves both training time and GPU memory consumption. This feature requires PyTorch 1.6+ and Python 3.7+.

By wrapping the forward pass with `torch.cuda.amp.autocast()`, PyTorch automatically selects the appropriate data type for each op and tensor. Half-precision tensors are memory-efficient, and most operators on them are faster because they leverage GPU tensor cores.

```python
import torch.nn.functional as F
from torch.cuda.amp import autocast

def forward(g, feat, label, mask, model):
    with autocast(enabled=True):
        logit = model(g, feat)
        loss = F.cross_entropy(logit[mask], label[mask])
        return loss
```

Small gradients in `float16` format suffer from underflow (they flush to zero). PyTorch provides a `GradScaler` module to address this issue: it multiplies the loss by a factor and invokes the backward pass on the scaled loss to prevent underflow, then unscales the computed gradients before the optimizer updates the parameters. The scale factor is determined automatically.
```python
from torch.cuda.amp import GradScaler

scaler = GradScaler()

def backward(scaler, loss, optimizer):
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

Putting everything together, we have the example below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.data import RedditDataset
from dgl.nn import GATConv
from dgl.transforms import AddSelfLoop

class GAT(nn.Module):
    def __init__(self, in_feats, num_classes, num_hidden=256, num_heads=2):
        super().__init__()
        self.conv1 = GATConv(in_feats, num_hidden, num_heads, activation=F.elu)
        self.conv2 = GATConv(num_hidden * num_heads, num_classes, num_heads)

    def forward(self, g, h):
        h = self.conv1(g, h).flatten(1)
        h = self.conv2(g, h).mean(1)
        return h

device = torch.device('cuda')
```

*Released 2022-07-18.*

# 0.8.2 Release Notes

This is a minor release with the following updates.

## Test AArch64 Build

A 0.8.2 test build for AArch64 is available via

```bash
pip install dgl -f https://data.dgl.ai/wheels-test/repo.html   # or dgl-cuXX for CUDA
```

## New Modules

* Graph Isomorphism Network with Edge Features (#3934)
* `dgl.transforms.FeatMask` for randomly dropping out dimensions of all node/edge features (#3968, @RecLusIve-F)
* `dgl.transforms.RowFeatNormalizer` for normalization of all node/edge features (#3968, @RecLusIve-F)
* Label propagation module (#4017)
* Directional graph network layer (#4017)
* Datasets for developing GNN explainability approaches (#3982)
* `dgl.transforms.SIGNDiffusion` for augmenting input node features (#3982)

## Quality-of-life Updates

* Allow `HeteroLinear` with/without bias (#3970, @ksadowski13)
* Allow selection
of “socket” as the RPC backend in distributed training (#3951)
* Enable specification of the maximum number of trials for the socket backend in DistDGL (#3977)
* Added floating-point conversion functions to `dgl.transforms.functional` (#3890, @ndickson-nvidia)
* Improve the warning message when Tensoradapter is not found (#4055)
* Add sanity check for `in_edges`/`out_edges` on empty graphs (#4050)

## System Optimization

* Improved graph batching on GPU for graph DataLoaders (#3895, @ayasar70)
* CPU DataLoader affinitization (#3723, @daniil-sizov)
* Memory consumption optimization for index shuffling in the DataLoader (#3980)
* Remove unnecessary induced vertices in edge subgraph (#3978, @yaox12)
* Change the `curandState` and launch dimension of the GPU neighbor sampling kernel (#3990, @paoxiaode)

## Bug fixes

* Fix multi-GPU edge classification crashing with pure-GPU sampling (#3946)
* Fixed race conditions in distributed SparseAdam and SparseAdagrad (#3971, @ndickson-nvidia)
* Fix launch parameters of the index select kernel in sparse pull for multi-GPU sparse embedding (#3524, @nv-dlasalle)
* Fix import error when the TensorFlow backend is specified (#4015)
* Fix DistDGL crashing when sampling on bipartite graphs (#4014)
* Prevent users from attempting to pin non-contiguous PyTorch tensors or views encompassing only part of a tensor (#3992, @nv-dlasalle)
* Fix deadlock caused by the Cython C API holding the GIL when a Python callback is asynchronous (#4036)
* Misc unit test, example, doc fixes, etc.
(#3947, #3941, #3928, #3944, #3505, #3953, #3983, #3996, #4009, #4010, #4016, #4022, #4023, #4027, #4030, #4034, #4038, #4053, #4058, #4060; @Kh4L, @daniil-sizov, @HenryChang213, @sharique1006, @msharmavikram, @initzhang, @yinpeiqi, @chang-l, @nv-dlasalle, @Sanzo00, @Eurus-Holmes, @xiaopqr, @decoherencer)

*Released 2022-05-30.*

# 0.8.1 Release Notes

This is a minor release that includes the following model updates, optimizations, new features and bug fixes.

## Model update

* `nn.GroupRevRes` from *Training Graph Neural Networks with 1000 Layers* [#3842]
* `transforms.LaplacianPositionalEncoding` from *Graph Neural Networks with Learnable Structural and Positional Representations* [#3869]
* `transforms.RWPositionalEncoding` from *Graph Neural Networks with Learnable Structural and Positional Representations* [#3869]
* `dataloading.SAINTSampler` from GraphSAINT [#3879]
* `nn.EGNNConv` from *E(n) Equivariant Graph Neural Networks* [#3901]
* `nn.PNAConv` from the baselines of *E(n) Equivariant Graph Neural Networks* [#3901]

## Example update

* *Position-aware GNN* [#3823, @RecLusIve-F]
* *EGES (Enhanced Graph Embedding with Side Info)* [#3756, @Wang-Yu-Qing]

## Feature update (new functionalities, interface changes, etc.)

* Radius graph: construct a graph by connecting points within a given distance. [#3829, @ksadowski13]
  * It uses `torch.cdist`, so the space complexity is O(N^2).
* Added a `get_attention` parameter in `GlobalAttentionPooling`. [#3837, @decoherencer]

## Quality of life update

* Example of multi-GPU training with PyTorch Lightning. [#3863]
* Multi-GPU inference with UVA. [#3827, @nv-dlasalle]
* Enable UVA sampling with CPU indices to save GPU memory. [#3892]
* Set `stacklevel=2` for DGL-raised warnings. [#3816]
* Pure-GPU example of GraphSAGE, with both node classification and link prediction.
[#3796, @nv-dlasalle; #3856, @Kh4L]
* Tensoradapter DLPack 0.6 compatibility / PyTorch 1.11 support. [#3803]

## System optimization

* Enable UVA for PinSAGE and random walks. [#3857, @yaox12]
* METIS partitioning with communication volume minimization, which reduces the communication volume by 13.4% compared with edge-cut minimization on ogbn-products. [#3821, @chwan1016]
* Change the parameters of `curand_init` to reduce GPU latency. [#3794, @paoxiaode]

## Bug fixes

* Fix Python 3.10 import error [#3862]
* Fix repeated 0's in DataLoader index iteration when `shuffle=False` [#3892]
* DataLoader device cannot be None [#3822, @yinpeiqi]
* Fix device error in negative sampling with UVA [#3904, @nv-dlasalle]
* Fix illegal instruction in ClusterGCNSampler (#3910)
* Include pin-memory status in pickling and deep copy [#3914]
* Misc doc fixes (@lvcrek, @AzureLeon1, @decoherencer, @yaox12, @ketyi)

*Released 2022-04-17.*

# 0.8.0post2 Release Notes

This is a bugfix release with the following fixes:

## Quality-of-life updates

* Python 3.10 support.
* PyTorch 1.11 support.
* CUDA 11.5 support on Linux. Please install with
  ```
  pip install dgl-cu115 -f https://data.dgl.ai/wheels/repo.html  # if using pip
  conda install dgl-cuda11.5 -c dglteam  # if using conda
  ```
* Compatibility with DLPack 0.6 in tensoradapter (#3803) for PyTorch 1.11
* Set `stacklevel=2` for `dgl_warning` (#3816)
* Support custom datasets in DataLoader that are not necessarily tensors (#3810, @yinpeiqi)

## Bug fixes

* Pass ntype/etype into the partition book in `node_split`/`edge_split` (#3828)
* Fix multi-GPU RGCN example (#3871, @yaox12)
* Send RPC messages blockingly in case of congestion (#3867). Note that this fix would probably cause a speed regression in distributed DGL training.
We are still investigating the root cause of the underlying issue in #3881.
* Fix CopyToSharedMem assuming that all relation graphs are homogeneous (#3841)
* Fix HAN example crashing with CUDA (#3841)
* Fix UVA sampling crash when prefetching features are not specified (#3862)
* Fix documentation display issue of `node_split`/`edge_split` (#3858)
* Fix device mismatch error in the GraphSAGE distributed training example under multi-node multi-GPU (#3870)
* Use `torch.distributed.algorithms.join.Join` to deal with uneven training sets in distributed training (#3870)
* DataLoader documentation fixes (#3886)
* Remove redundant reference to the networkx package in pagerank.py (#3888, @AzureLeon1)
* Make source builds work on systems where the default is Python 2 (#3718)
* Fix UVA sampling with partially specified node types (#3897)

*Released 2022-04-03.*

# 0.8.0post1 Release Notes

This is a quick post-release with critical bug fixes:

* Fix incorrect name when fetching data in the sparse optimizer (#3808)
* Fix DataLoader not working with heterogeneous graphs on multiple GPUs (#3801)
* Fix error in heterogeneous graph partitioning when the graph is a unidirectional bipartite graph (#3793)

*Released 2022-03-08.*

# 0.8.0 Release Notes

v0.8.0 is a major release with many new features, system improvements and fixes. Read [the blog](https://www.dgl.ai/release/2022/03/01/release.html) for the highlighted features.

Major features
===

Mini-batch Sampling Pipeline Update
---

Enabled CUDA UVA-based optimization and feature prefetching for all built-in graph samplers (up to **4x speedup** compared to v0.7). Users can now specify the features to prefetch and turn on UVA optimization in `dgl.dataloading.Sampler` and `dgl.dataloading.DataLoader`.

```python
g = ...
# some DGLGraph data
train_nids = ...                    # training node IDs
sampler = dgl.dataloading.MultiLayerNeighborSampler(
    fanout=[10, 15],
    prefetch_node_feats=['feat'],   # prefetch node feature 'feat'
    prefetch_labels=['label'],      # prefetch node label 'label'
)
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler,
    device='cuda:0',     # perform sampling on GPU 0
    batch_size=1024,
    shuffle=True,
    use_uva=True         # turn on UVA optimization
)
```

We have done a major refactor of the sampling components to make it easier to implement new graph samplers. Added a new base class `dgl.dataloading.Sampler` with one abstract method `sample` for overriding. Added new APIs `dgl.set_src_lazy_features`, `dgl.set_dst_lazy_features`, `dgl.set_node_lazy_features` and `dgl.set_edge_lazy_features` for customizing prefetching rules. The code below shows the new user experience.

```python
class NeighborSampler(dgl.dataloading.Sampler):
    def __init__(self,
                 fanouts: list[int],
                 prefetch_node_feats: list[str] = None,
                 prefetch_edge_feats: list[str] = None,
                 prefetch_labels: list[str] = None):
        super().__init__()
        self.fanouts = fanouts
        self.prefetch_node_feats = prefetch_node_feats
        self.prefetch_edge_feats = prefetch_edge_feats
        self.prefetch_labels = prefetch_labels

    def sample(self, g, seed_nodes):
        output_nodes = seed_nodes
        subgs = []
        for fanout in reversed(self.fanouts):
            # Sample a fixed number of neighbors of the current seed nodes.
            sg = g.sample_neighbors(seed_nodes, fanout)
            # Convert this subgraph to a message flow graph.
            sg = dgl.to_block(sg, seed_nodes)
            seed_nodes = sg.srcdata[NID]
            subgs.insert(0, sg)
        input_nodes = seed_nodes

        # Handle prefetching.
        dgl.set_src_lazy_features(subgs[0], self.prefetch_node_feats)
        dgl.set_dst_lazy_features(subgs[-1], self.prefetch_labels)
        for subg in subgs:
            dgl.set_edge_lazy_features(subg, self.prefetch_edge_feats)

        return input_nodes, output_nodes, subgs
```

Related documentation:

* Reworked the user guide chapter on [customizing graph samplers](https://docs.dgl.ai/guide/minibatch-custom-sampler.html).
* Added a new user guide chapter on [writing graph samplers with feature prefetching](https://docs.dgl.ai/guide/minibatch-prefetching.html).

We thank Xin Yao (@yaox12) and Dominique LaSalle (@nv-dlasalle) from NVIDIA and David Min (@davidmin7) from UIUC for their contributions.

DGL-Go
---

[DGL-Go](https://github.com/dmlc/dgl/tree/master/dglgo) is a new command line tool for users to get started with training, using and studying Graph Neural Networks (GNNs).
Data scientists can quickly apply GNNs to their problems, whereas researchers will find it useful for customizing their experiments.

<p align="center">
  <img src="https://github.com/dmlc/dgl/blob/master/dglgo/dglgo.png" height="400">
</p>

The initial release includes:

* Four commands: `dgl train`, `dgl recipe`, `dgl configure` and `dgl export`.
* 3 training pipelines: node prediction with full-graph training, link prediction with full-graph training, and node prediction with neighbor sampling.
* 5 node encoding models: gat, gcn, gin, sage, sgc; 3 edge encoding models: bilinear, dot-product, element-wise.
* 10 datasets, including custom datasets in CSV format.

NN Modules
---

We have accelerated `dgl.nn.RelGraphConv` and `dgl.nn.HGTConv` by up to **36x and 12x** compared with the baselines from v0.7 and PyG. Shortened the implementation of `dgl.nn.RelGraphConv` by **3x** (from 200 lines to 64).

**Breaking change**: `dgl.nn.RelGraphConv` no longer accepts a 1-D integer tensor of node IDs during forward. Please switch to `torch.nn.Embedding` to explicitly represent trainable node embeddings.

Below are the new NN modules added in v0.8:

* [`GATv2Conv`](https://docs.dgl.ai/generated/dgl.nn.pytorch.conv.GATv2Conv.html#dgl.nn.pytorch.conv.GATv2Conv): GATv2 from [How Attentive are Graph Attention Networks?](https://arxiv.org/pdf/2105.14491.pdf)
* [`EGATConv`](https://docs.dgl.ai/generated/dgl.nn.pytorch.conv.EGATConv.html#dgl.nn.pytorch.conv.EGATConv): Graph att

*Released 2022-03-01.*

# 0.7.2 Release Notes

This is a patch release targeting CUDA 11.3 and PyTorch 1.10. It contains (1) distributed training on heterogeneous graphs, and (2) bug fixes and code reorganization commits.
The performance impact should be minimal.

To install with CUDA 11.3 support, run either

```
pip install dgl-cu113 -f https://data.dgl.ai/wheels/repo.html
```

or

```
conda install -c dglteam dgl-cuda11.3
```

## Distributed Training on Heterogeneous Graphs

We have made the interface of distributed sampling on heterogeneous graphs consistent with the single-machine code. Please refer to https://github.com/dmlc/dgl/blob/0.7.x/examples/pytorch/rgcn/experimental/entity_classify_dist.py for the new code.

## Other fixes

* [Bugfix] Fix bugs of farthest_point_sampler (#3327, @sangyx)
* [Bugfix] Fix sparse embeddings for PyTorch < 1.7 (#3291, #3333)
* [Bugfix] Fix bug in hg.update_all causing a crash (#3312, #3345, @sanchit-misra)
* [Bugfix] Add PYTHONPATH in server launch (#3352)
* [CPU][Sampling][Performance] Improve sampling on the CPU (#3274, @nv-dlasalle)
* [Performance][CPU] Rewrite OpenMP pragmas as parallel_for (#3171, @tpatejko)
* [Build] Fix OpenMP header inclusion for Mac builds (#3325)
* [Performance] Improve coo2csr space complexity when rows are not sorted (#3326)
* [Bugfix] Initialize data if null when converting from row-sorted COO to CSR (#3360)
* [Bugfix] Fix broadcast tensor dim in `dgl.broadcast_nodes` (#3351, @jwyyy)
* [Bugfix] Fix typo in FakeNews dataset variable name (#3363, @kayzliu)
* [Doc] Added md5sum info for the OGB-LSC dataset (#3332, @msharmavikram)
* [Feature] Graceful handling of exceptions thrown within OpenMP blocks (#3353)
* [Bugfix] Fix torch import in example (#3372, @jwyyy)
* [Distributed] Allow users to pass in extra env parameters when launching a distributed training task.
(#3375)
* [Bugfix] Extract .gz files into the target directory (#3389)
* [Model] Refine GraphSAINT (#3328, @ljh1064126026)
* [Bugfix] Check dtype before converting to gk (#3414)
* [Bugfix] Add count_nonzero() to SA_Client (#3417)
* [Bugfix] Do not skip GraphConv even if no edge exists (#3416)
* [Bugfix] Fix edge ID exclusion when both g and g_sampling are specified in EdgeDataLoader (#3322)
* [Bugfix] Fix three bugs related to using DGL as a subdirectory (third_party) of another project (#3379, @yuanzexi)
* [PyTorch][Bugfix] Use uint8 instead of bool in PyTorch to be compatible with the nightly version (#3406, #3454, @nv-dlasalle)
* [Fix] Use ==/!= to compare constant literals (str, bytes, int, float, tuple) (#3415, @cclauss)
* [Bugfix][PyTorch] Fix model save and load bug of stgcn_wave (#3303, @HaoWei-TomTom)
* [Bugfix] Avoid memory leak issue in the PyTorch backend (#3386, @chwan-rice)
* [Fix] Split NCCL sparse push into two groups (#3404, @nv-dlasalle)
* [Doc] Remove duplicate papers (#3393, @chwan-rice)
* [Bugfix] Fix GINConv backward (#3437, #3440)
* [Bugfix] Fix compilation with CUDA 11.5's CUB (#3468, @nv-dlasalle)
* [Example][Performance] Enable faster validation for the PyTorch GraphSAGE example (#3361, @nv-dlasalle)
* [Doc] Evaluation tutorial for link prediction (#3463)

*Released 2021-11-08.*
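Two of the fixes above touch DGL's coo2csr conversion path (#3326, #3360). For readers unfamiliar with the two sparse formats, below is a minimal pure-Python sketch of converting an unsorted COO edge list to CSR. It is not DGL's implementation (DGL's kernels are C++ and treat the row-sorted and unsorted cases separately); the `coo_to_csr` helper here is purely illustrative. The counting-sort approach shown keeps the extra space at O(num_rows + num_edges) even when rows arrive unsorted.

```python
def coo_to_csr(num_rows, rows, cols):
    """Convert an unsorted COO edge list to CSR (indptr, indices).

    Counting sort over row IDs: O(num_rows + num_edges) time,
    no per-row temporary buffers.
    """
    # 1. Count the out-degree of every row.
    indptr = [0] * (num_rows + 1)
    for r in rows:
        indptr[r + 1] += 1
    # 2. Prefix-sum the counts to turn them into row offsets.
    for i in range(num_rows):
        indptr[i + 1] += indptr[i]
    # 3. Scatter each column ID into its row's segment.
    indices = [0] * len(cols)
    fill = indptr[:-1].copy()   # next free slot per row
    for r, c in zip(rows, cols):
        indices[fill[r]] = c
        fill[r] += 1
    return indptr, indices

# Edges (2->0), (0->1), (2->1), (1->0) in arbitrary (unsorted) order.
indptr, indices = coo_to_csr(3, [2, 0, 2, 1], [0, 1, 1, 0])
# indptr == [0, 1, 2, 4]; indices == [1, 0, 0, 1]
```

When the rows are already sorted, steps 1-3 collapse to a single scan that records where each row's run begins, which is why the sorted and unsorted cases are worth handling separately.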