[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-graph4ai--graph4nlp":3,"tool-graph4ai--graph4nlp":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":10,"env_os":98,"env_gpu":99,"env_ram":100,"env_deps":101,"category_tags":107,"github_topics":108,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":115,"updated_at":116,"faqs":117,"releases":148},1412,"graph4ai\u002Fgraph4nlp","graph4nlp","Graph4nlp is the library for the easy use of Graph Neural Networks for NLP. Welcome to visit our DLG4NLP website (https:\u002F\u002Fdlg4nlp.github.io\u002Findex.html) for various learning resources! 
","Graph4NLP 是一款专为自然语言处理（NLP）与图深度学习（DLG）交叉领域设计的开源库，旨在让用户能轻松构建和应用图神经网络。它有效解决了传统开发中模型复现难、代码复用率低以及从数据构建到模型训练全流程割裂的痛点。\n\n无论是希望快速验证想法的数据科学家，还是需要灵活定制前沿算法的研究人员与开发者，都能从中获益。Graph4NLP 不仅提供了多种最先进（SOTA）模型的完整实现，还支持高度灵活的自定义接口，覆盖从数据处理、图构建、模块组合到最终应用的全流水线需求。\n\n其核心技术亮点在于底层基于高性能的 DGL（Deep Graph Library）运行时库，兼顾了运行效率与扩展性。架构上清晰划分为数据层、模块层、模型层和应用层，并支持静态与动态图的自动化构建。通过统一的参数设计和新增的推理封装函数，Graph4NLP 大幅降低了图神经网络在文本分析任务中的使用门槛，是探索图文结合技术的得力助手。","\u003Cp align=\"center\">\u003Ca href=\"https:\u002F\u002Fdlg4nlp.github.io\u002Findex.html\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_b7f432dd9f24.png\" width=\"800\" class=\"center\" alt=\"logo\"\u002F>\n    \u003Cbr\u002F>\n    \u003Ca\u002F>\n\u003C\u002Fp>\n   \n[pypi-image]: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fgraph4nlp.svg\n\n[pypi-url]: https:\u002F\u002Fpypi.org\u002Fproject\u002Fgraph4nlp\n\n[license-image]:https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg\n\n[license-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Fmaster\u002FLICENSE\n\n[contributor-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Fgraph4ai\u002Fgraph4nlp\n\n[contributor-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fcontributors\n\n[contributing-image]:https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\n\n[contributing-url]:to_be_add\n\n[issues-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fgraph4ai\u002Fgraph4nlp\n\n[issues-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\n\n[forks-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fgraph4ai\u002Fgraph4nlp\n\n[forks-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ffork\n\n[stars-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgraph4ai\u002Fgraph4nlp\n\n[stars-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgr
aph4nlp\u002Fstars\n\n![Last Commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fgraph4ai\u002Fgraph4nlp)\n[![pypi][pypi-image]][pypi-url]\n[![Contributors][contributor-image]][contributor-url]\n[![Contributing][contributing-image]][contributing-url]\n[![License][license-image]][license-url]\n[![Issues][issues-image]][issues-url]\n[![Fork][forks-image]][forks-url]\n[![Star][stars-image]][stars-url]\n\n# Graph4NLP\n\n***Graph4NLP*** is an easy-to-use library for R&D at the intersection of **Deep Learning on Graphs** and\n**Natural Language Processing** (i.e., DLG4NLP). It provides both **full implementations** of state-of-the-art models for data scientists and **flexible interfaces** for researchers and developers to build customized models, with whole-pipeline support. Built upon highly-optimized runtime libraries including [DGL](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl), ***Graph4NLP*** has both high running efficiency and great extensibility. The architecture of ***Graph4NLP*** is shown in the following figure, where boxes with dashed lines represent the features under development. Graph4NLP consists of four different layers: 1) Data Layer, 2) Module Layer, 3) Model Layer, and 4) Application Layer.\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_ae0f5f6c9205.png\" alt=\"architecture\" width=\"700\" \u002F>\n    \u003Cbr>\n    \u003Cb>Figure\u003C\u002Fb>: Graph4NLP Overall Architecture\n\u003C\u002Fp>\n\n## \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_7295806fc290.png\" alt='new' width=30 \u002F> Graph4NLP news\n**01\u002F20\u002F2022:** The **v0.5.5 release**. Try it out! \u003Cbr>\n**09\u002F26\u002F2021:** The **v0.5.1 release**. Try it out! 
\u003Cbr>\n**09\u002F01\u002F2021:** Welcome to visit our **DLG4NLP website (https:\u002F\u002Fdlg4nlp.github.io\u002Findex.html)** for various learning resources! \u003Cbr>\n**06\u002F05\u002F2021:** The **v0.4.1 release**. \n\n## Major Releases\n\n| Releases | Date       | Features                                                     |\n| -------- | ---------- | ------------------------------------------------------------ |\n| v0.5.5   | 2022-01-20 | - Support model.predict API by introducing wrapper functions. \u003Cbr \u002F> - Introduce Three new inference_wrapper functions: classifier_inference_wrapper, generator_inference_wrapper, generator_inference_wrapper_for_tree. \u003Cbr \u002F> - Add the inference and inference_advance examples in each application. \u003Cbr \u002F> - Separate the graph topology and graph embedding process. \u003Cbr \u002F> - Renew all the graph construction functions. \u003Cbr \u002F> - Module graph_embedding is divided into graph_embedding_initialization and graph_embedding_learning. \u003Cbr \u002F> - Unify the parameters in Dataset. We remove the ambiguous parameter ``graph_type`` and introduce ``graph_name`` to indicate the graph construction method and ``static_or_dynamic`` to indicate the static or dynamic graph  construction type.  \u003Cbr \u002F> - New: The dataset now can automatically choose the default methods (e.g., ``topology_builder``) by only one parameter `` graph_name ``. |\n| v0.5.1   | 2021-09-26 | - Lint the codes \u003Cbr \u002F> - Support testing with users' own data \u003Cbr \u002F> - Fix the bug: The word embedding size was hard-coded in the 0.4.1 version. Now it is equal to \"word_emb_size\" parameter. \u003Cbr \u002F> - Fix the bug: The build_vocab() is called twice in the 0.4.1 version. \u003Cbr \u002F> - Fix the bug: The two main files of knowledge graph completion example missed the optional parameter \"kg_graph\" in ranking_and_hits() when resuming training the model. 
\u003Cbr \u002F> - Fix the bug: We have fixed the preprocessing path error in the KGC readme. \u003Cbr \u002F> - Fix the bug: We have fixed an embedding construction bug when setting emb_strategy to 'w2v'. |\n| v0.4.1   | 2021-06-05 | - Support the whole pipeline of Graph4NLP \u003Cbr \u002F> - GraphData and Dataset support |\n\n## Quick tour\n\n***Graph4NLP*** aims to make it incredibly easy to use GNNs in NLP tasks (check out the [Graph4NLP Documentation](https:\u002F\u002Fgraph4ai.github.io\u002Fgraph4nlp\u002F)). Here is an example of how to use the [*Graph2seq*](https:\u002F\u002Fgraph4ai.github.io\u002Fgraph4nlp\u002F) model (widely used in machine translation, question answering,\nsemantic parsing, and various other NLP tasks that can be abstracted as graph-to-sequence problems, where it has shown superior\nperformance).\n\n\u003C!-- If you want to further improve model performance, we also support pre-trained models including [BERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.04805), etc.\n -->\nWe also offer other high-level model APIs such as graph-to-tree models. 
If you are interested in DLG4NLP related research problems, you are very welcome to use our library and refer to our [graph4nlp survey](http:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06090).\n\n```python\nfrom graph4nlp.pytorch.datasets.jobs import JobsDataset\nfrom graph4nlp.pytorch.modules.graph_construction.dependency_graph_construction import DependencyBasedGraphConstruction\nfrom graph4nlp.pytorch.modules.config import get_basic_args\nfrom graph4nlp.pytorch.models.graph2seq import Graph2Seq\nfrom graph4nlp.pytorch.modules.utils.config_utils import update_values, get_yaml_config\n\n# build dataset\njobs_dataset = JobsDataset(root_dir='graph4nlp\u002Fpytorch\u002Ftest\u002Fdataset\u002Fjobs',\n                           topology_builder=DependencyBasedGraphConstruction,\n                           topology_subdir='DependencyGraph')  # You should run stanfordcorenlp at background\nvocab_model = jobs_dataset.vocab_model\n\n# build model\nuser_args = get_yaml_config(\"examples\u002Fpytorch\u002Fsemantic_parsing\u002Fgraph2seq\u002Fconfig\u002Fdependency_gcn_bi_sep_demo.yaml\")\nargs = get_basic_args(graph_construction_name=\"node_emb\", graph_embedding_name=\"gat\", decoder_name=\"stdrnn\")\nupdate_values(to_args=args, from_args_list=[user_args])\ngraph2seq = Graph2Seq.from_args(args, vocab_model)\n\n# calculation\nbatch_data = JobsDataset.collate_fn(jobs_dataset.train[0:12])\n\nscores = graph2seq(batch_data[\"graph_data\"], batch_data[\"tgt_seq\"])  # [Batch_size, seq_len, Vocab_size]\n```\n\n## Overview\n\nOur Graph4NLP computing flow is shown as below.\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_97d25e8e7a91.png\" width=\"1000\" class=\"center\" alt=\"logo\"\u002F>\n    \u003Cbr\u002F>\n\u003C\u002Fp>\n\n## Graph4NLP Models and Applications\n\n### Graph4NLP models\n\n- 
[Graph2Seq](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Fmaster\u002Fgraph4nlp\u002Fpytorch\u002Fmodels\u002Fgraph2seq.py): a general end-to-end neural encoder-decoder model that maps an input graph to a sequence of tokens.  \n- [Graph2Tree](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Fmaster\u002Fgraph4nlp\u002Fpytorch\u002Fmodels\u002Fgraph2tree.py): a general end-to-end neural encoder-decoder model that maps an input graph to a tree structure.\n\n### Graph4NLP applications\n\nWe provide a comprehensive collection of NLP applications, together with detailed examples as follows:\n\n- [Text classification](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Ftext_classification): to assign an appropriate label to a sentence or document.\n- [Semantic parsing](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fsemantic_parsing): to translate natural language into a machine-interpretable formal meaning representation.\n- [Neural machine translation](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fnmt): to translate a sentence from a source language to a different target language.\n- [Summarization](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fsummarization): to generate a shorter version of the input text that preserves its major meaning.\n- [KG completion](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fkg_completion): to predict missing relations between two existing entities in knowledge graphs.\n- [Math word problem solving](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fmath_word_problem): to automatically solve mathematical exercises that 
provide background information about a problem in easy-to-understand language.\n- [Named entity recognition](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fname_entity_recognition): to tag entities in input texts with their corresponding type.\n- [Question generation](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fquestion_generation): to generate a valid and fluent question based on the given passage and target answer (optional).\n\n\n## Performance\n\nEnvironment: PyTorch 1.8, Ubuntu 16.04, 2080 Ti GPUs\n\n| Task                       |              Dataset             |   GNN Model         | Graph construction                           | Evaluation         |          Performance          |\n|----------------------------|:--------------------------------:|:-------------------:|----------------------------------------------|--------------------|:-----------------------------:|\n| Text classification        | TRECT\u003Cbr> CAirline\u003Cbr> CNSST\u003Cbr> |           GAT       | Dependency\u003Cbr> Constituency\u003Cbr> Dependency\u003Cbr> |      Accuracy    | 0.948\u003Cbr> 0.785\u003Cbr> 0.538\u003Cbr> |\n| Semantic Parsing           |               JOBS               |           SAGE      | Constituency                                 | Execution accuracy |             0.936             |\n| Question generation        |               SQuAD             |           GGNN       | Dependency                                      | BLEU-4             |             0.15175            |\n| Machine translation        |              IWSLT14             |           GCN       | Dynamic                                      | BLEU-4             |             0.3212            |\n| Summarization              |             CNN(30k)             |           GCN       | Dependency                                   | ROUGE-1            |              
26.4             |\n| Knowledge graph completion | Kinship                          |           GCN      | Dependency                                    | MRR                | 82.4                          |\n| Math word problem          |              MAWPS               | SAGE                | Dynamic                                      | Solution accuracy   | 76.4                    |\n\n\n## Installation\n\nCurrently, users can install Graph4NLP via **pip** or **source code**. Graph4NLP supports the following OSes:\n\n- Linux-based systems (tested on Ubuntu 18.04 and later)\n- macOS (only CPU version)\n- Windows 10 (only support pytorch >= 1.8)\n\n### Installation via pip (binaries)\nWe provide pip wheels for all major OS\u002FPyTorch\u002FCUDA combinations. Note that we highly recommend `Windows` users refer to `Installation via source code` due to compatibility.\n\n#### Ensure that at least PyTorch (>=1.6.0) is installed:\nNote that `>=1.6.0` is ok.\n``` bash\n$ python -c \"import torch; print(torch.__version__)\"\n>>> 1.6.0\n```\n#### Find the CUDA version PyTorch was installed with (for GPU users):\n```bash\n$ python -c \"import torch; print(torch.version.cuda)\"\n>>> 10.2\n```\n\n#### Install the relevant dependencies:\n`torchtext` is needed since Graph4NLP relies on it to implement embeddings.\nPlease pay attention to the PyTorch requirements before installing `torchtext` with the following script! For detailed version matching please refer [here](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchtext\u002F).\n``` bash\npip install torchtext # >=0.7.0\n```\n\n\n#### Install Graph4NLP\n```bash\npip install graph4nlp${CUDA}\n```\nwhere `${CUDA}` should be replaced by the specific CUDA version (`none` (CPU version), `\"-cu92\"`, `\"-cu101\"`, `\"-cu102\"`, `\"-cu110\"`). The following table shows the concrete command lines. 
For CUDA 11.1 users, please refer to `Installation via source code`.\n\n| Platform  | Command                       |\n| --------- | ----------------------------- |\n| CPU       | `pip install graph4nlp`   |\n| CUDA 9.2  | `pip install graph4nlp-cu92`  |\n| CUDA 10.1 | `pip install graph4nlp-cu101` |\n| CUDA 10.2 | `pip install graph4nlp-cu102` |\n| CUDA 11.0 | `pip install graph4nlp-cu110` |\n\n### Installation via source code\n\n#### Ensure that at least PyTorch (>=1.6.0) is installed:\nNote that `>=1.6.0` is ok.\n``` bash\n$ python -c \"import torch; print(torch.__version__)\"\n>>> 1.6.0\n```\n#### Find the CUDA version PyTorch was installed with (for GPU users):\n```bash\n$ python -c \"import torch; print(torch.version.cuda)\"\n>>> 10.2\n```\n\n#### Install the relevant dependencies:\n`torchtext` is needed since Graph4NLP relies on it to implement embeddings.\nPlease pay attention to the PyTorch requirements before installing `torchtext` with the following script! For detailed version matching please refer [here](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchtext\u002F).\n``` bash\npip install torchtext # >=0.7.0\n```\n\n#### Download the source code of `Graph4NLP` from Github:\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp.git\ncd graph4nlp\n```\n#### Configure the CUDA version\nThen run `.\u002Fconfigure` (or `.\u002Fconfigure.bat`  if you are using Windows 10) to config your installation. The configuration program will ask you to specify your CUDA version. 
If you do not have a GPU, please type 'cpu'.\n```bash\n.\u002Fconfigure\n```\n\n#### Install the relevant packages:\n\nFinally, install the package:\n\n```shell\npython setup.py install\n```\n\n## For Hyperparameter tuning\n\nWe show some of the hyperparameters that are often tuned\n [here](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002Fe\u002F2PACX-1vQaE3BTKYt4NX0z5oJrzVESdE7Kx3dnmTCG7zTdtTqj6zuRX12qBz7OoEf0ckTDini0BljFLA9JuF5v\u002Fpubhtml?gid=0&single=true).\n\n\n## New to Deep Learning on Graphs for NLP?\n\nIf you want to learn more on applying Deep Learning on Graphs techniques to NLP tasks, welcome to visit our DLG4NLP website (https:\u002F\u002Fdlg4nlp.github.io\u002Findex.html) for various learning resources! You can refer to our survey paper which provides an overview of this existing research direction. If you want detailed reference to our library, please refer to our docs.\n\n\u003C!-- [Docs]() | [Graph4nlp survey]() | [Related paper list]() | [Workshops]() -->\n- Documentation: [Docs](https:\u002F\u002Fgraph4ai.github.io\u002Fgraph4nlp\u002F)  \n- Graph4NLP Survey: [Graph4nlp survey](http:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06090)  \n- Graph4NLP Tutorials: \n    - [Graph4NLP-NAACL'21, SIGIR'21, IJCAI'21, KDD'21](https:\u002F\u002Fdlg4nlp.github.io\u002Ftutorials.html)\n    - [SyncedReview Invited Chinese talk](https:\u002F\u002Fapp6ca5octe2206.pc.xiaoe-tech.com\u002Fdetail\u002Fv_60e832f8e4b0876c0c23c1a7\u002F3?fromH5=true) ([video](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Lltz_kx7ECDOTLecVC9E9w) (password: wppp), [slides](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pmgX456Me_lu30VGDY3aaw) (password: flwv))  \n- Graph4NLP Workshops : \n    - [DLG4NLP-ICLR'22](https:\u002F\u002Fdlg4nlp-workshop.github.io\u002Fdlg4nlp-iclr22\u002Findex.html)  \n- Graph4NLP Demo: [Demo](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp_demo)\n- Graph4NLP Literature Review: [Literature 
Lists](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp_literature)  \n\n## Contributing\n\nPlease let us know if you encounter a bug or have any suggestions by filing an issue.\n\nWe welcome all contributions, from bug fixes to new features and extensions.\n\nWe expect all contributions to be discussed in the issue tracker and to go through PRs. \n\n## Citation\n\nIf you found this code useful, please consider citing the following papers.\n\n- [1] Lingfei Wu, Yu Chen, Kai Shen, Xiaojie Guo, Hanning Gao, Shucheng Li, Jian Pei, and Bo Long. [**\"Graph Neural Networks for Natural Language Processing: A Survey\"**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06090).\n- [2] [**NeurIPS 2020**] Yu Chen, Lingfei Wu and Mohammed J Zaki, [**\"Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings\"**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.13009).\n- [3] [**ICLR 2020**] Yu Chen, Lingfei Wu and Mohammed J. Zaki, [**\"Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation\"**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.04942).\n- [4] Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock and Vadim Sheinin, [**\"Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks\"**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00823).\n- [5] [**EMNLP 2020**] Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu and Sheng Zhong, [**\"Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem\"**](https:\u002F\u002Faclanthology.org\u002F2020.findings-emnlp.255.pdf).\n- [6] [**ACL 2020**] Luyang Huang, Lingfei Wu and Lu Wang, [**\"Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward\"**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.01159).\n- [7] [**EMNLP 2018**] Lingfei Wu, Ian E.H. 
Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar and Michael J. Witbrock, [**\"Word Mover's Embedding: From Word2Vec to Document Embedding\"**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.01713).\n- [8][**IJCAI 2020**] Yu Chen, Lingfei Wu and Mohammed J Zaki, [**\"GraphFlow: Exploiting Conversation Flow with Graph Neural Networks for Conversational Machine Comprehension\"**](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F171).\n- [9] [**IJCAI 2020**] Kai Shen, Lingfei Wu, Fangli Xu, Siliang Tang, Jun Xiao and Yueting Zhuang, [**\"Hierarchical Attention Based Spatial-Temporal Graph-to-Sequence Learning for Grounded Video Description\"**](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F171).\n- [10] [**IJCAI 2020**] Hanning Gao, Lingfei Wu, Po Hu and Fangli Xu, [**\"RDF-to-Text Generation with Graph-augmented Structural Neural Encoders\"**](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F419).\n\n```\n@article{wu2021graph,\n  title={Graph Neural Networks for Natural Language Processing: A Survey},\n  author={Lingfei Wu and Yu Chen and Kai Shen and Xiaojie Guo and Hanning Gao and Shucheng Li and Jian Pei and Bo Long},\n  journal={arXiv preprint arXiv:2106.06090},\n  year={2021}\n}\n\n@inproceedings{chen2020iterative,\n  title={Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings},\n  author={Chen, Yu and Wu, Lingfei and Zaki, Mohammed J},\n  booktitle={Proceedings of the 34th Conference on Neural Information Processing Systems},\n  month={Dec. 6-12,},\n  year={2020}\n}\n\n@inproceedings{chen2020reinforcement,\n  author    = {Chen, Yu and Wu, Lingfei and Zaki, Mohammed J.},\n  title     = {Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation},\n  booktitle = {Proceedings of the 8th International Conference on Learning Representations},\n  month = {Apr. 
26-30,},\n  year      = {2020}\n}\n\n@article{xu2018graph2seq,\n  title={Graph2seq: Graph to sequence learning with attention-based neural networks},\n  author={Xu, Kun and Wu, Lingfei and Wang, Zhiguo and Feng, Yansong and Witbrock, Michael and Sheinin, Vadim},\n  journal={arXiv preprint arXiv:1804.00823},\n  year={2018}\n}\n\n@inproceedings{li-etal-2020-graph-tree,\n    title = {Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem},\n    author = {Li, Shucheng  and\n      Wu, Lingfei  and\n      Feng, Shiwei  and\n      Xu, Fangli  and\n      Xu, Fengyuan  and\n      Zhong, Sheng},\n    booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},\n    month = {Nov},\n    year = {2020}\n}\n\n@inproceedings{huang-etal-2020-knowledge,\n    title = {Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward},\n    author = {Huang, Luyang  and\n      Wu, Lingfei  and\n      Wang, Lu},\n    booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},\n    month = {Jul},\n    year = {2020},\n    pages = {5094--5107}\n}\n\n@inproceedings{wu-etal-2018-word,\n    title = {Word Mover{'}s Embedding: From {W}ord2{V}ec to Document Embedding},\n    author = {Wu, Lingfei  and\n      Yen, Ian En-Hsu  and\n      Xu, Kun  and\n      Xu, Fangli  and\n      Balakrishnan, Avinash  and\n      Chen, Pin-Yu  and\n      Ravikumar, Pradeep  and\n      Witbrock, Michael J.},\n    booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},\n    pages = {4524--4534},\n    year = {2018},\n}\n\n@inproceedings{chen2020graphflow,\n  author    = {Yu Chen and\n               Lingfei Wu and\n               Mohammed J. 
Zaki},  \ntitle     = {GraphFlow: Exploiting Conversation Flow with Graph Neural Networks\n               for Conversational Machine Comprehension},\n  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on\n               Artificial Intelligence, {IJCAI} 2020},\n  publisher = {International Joint Conferences on Artificial Intelligence Organization},\n  pages     = {1230--1236},\n  year      = {2020}\n} \n  \n@inproceedings{shen2020hierarchical,\n  title={Hierarchical Attention Based Spatial-Temporal Graph-to-Sequence Learning for Grounded Video Description},\n  author={Shen, Kai and Wu, Lingfei and Xu, Fangli and Tang, Siliang and Xiao, Jun and Zhuang, Yueting},\n  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on\n               Artificial Intelligence, {IJCAI} 2020},\n  publisher = {International Joint Conferences on Artificial Intelligence Organization},\n  pages     = {941--947},\n  year      = {2020}\n}  \n\n@inproceedings{ijcai2020-419,\n  title     = {RDF-to-Text Generation with Graph-augmented Structural Neural Encoders},\n  author    = {Gao, Hanning and Wu, Lingfei and Hu, Po and Xu, Fangli},\n  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on\n               Artificial Intelligence, {IJCAI-20}},\n  publisher = {International Joint Conferences on Artificial Intelligence Organization},\n  pages     = {3030--3036},\n  year      = {2020}\n}\n\n\n```\n\n\n## Team\nGraph4AI Team: [**Lingfei Wu**](https:\u002F\u002Fsites.google.com\u002Fa\u002Femail.wm.edu\u002Fteddy-lfwu\u002Fhome) (team leader), Yu Chen, Kai Shen, Xiaojie Guo, Hanning Gao, Shucheng Li, Saizhuo Wang, Xiao Liu and Jing Hu. We are passionate in developing useful open-source libraries which aim to promote the easy use of various Deep Learning on Graphs techniques for Natural Language Processing. 
Our team consists of research scientists, applied data scientists, and graduate students from a variety of industrial and academic groups, including Pinterest (Lingfei Wu), Zhejiang University (Kai Shen), Facebook AI (Yu Chen), IBM T.J. Watson Research Center (Xiaojie Guo), Tongji University (Hanning Gao), Nanjing University (Shucheng Li), HKUST (Saizhuo Wang).\n\n## Contact\nIf you have any technical questions, please submit new issues.\n\nIf you have any other questions, please contact us: [**Lingfei Wu**](https:\u002F\u002Fsites.google.com\u002Fa\u002Femail.wm.edu\u002Fteddy-lfwu\u002Fhome) **[lwu@email.wm.edu]** and Xiaojie Guo **[xiaojie.guo@jd.com]**.\n\n## License\nGraph4NLP uses Apache License 2.0.\n","\u003Cp align=\"center\">\u003Ca href=\"https:\u002F\u002Fdlg4nlp.github.io\u002Findex.html\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_b7f432dd9f24.png\" width=\"800\" class=\"center\" alt=\"logo\"\u002F>\n    \u003Cbr\u002F>\n    \u003Ca\u002F>\n\u003C\u002Fp>\n   \n[pypi-image]: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fgraph4nlp.svg\n\n[pypi-url]: 
https:\u002F\u002Fpypi.org\u002Fproject\u002Fgraph4nlp\n\n[license-image]:https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg\n\n[license-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Fmaster\u002FLICENSE\n\n[contributor-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Fgraph4ai\u002Fgraph4nlp\n\n[contributor-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fcontributors\n\n[contributing-image]:https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\n\n[contributing-url]:to_be_add\n\n[issues-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fgraph4ai\u002Fgraph4nlp\n\n[issues-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\n\n[forks-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fgraph4ai\u002Fgraph4nlp\n\n[forks-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ffork\n\n[stars-image]:https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgraph4ai\u002Fgraph4nlp\n\n[stars-url]:https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fstars\n\n![Last Commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fgraph4ai\u002Fgraph4nlp)\n[![pypi][pypi-image]][pypi-url]\n[![Contributors][contributor-image]][contributor-url]\n[![Contributing][contributing-image]][contributing-url]\n[![License][license-image]][license-url]\n[![Issues][issues-image]][issues-url]\n[![Fork][forks-image]][forks-url]\n[![Star][stars-image]][stars-url]\n\n# Graph4NLP\n\n***Graph4NLP*** 是一款易于使用的库，专为**图深度学习**与**自然语言处理**（即 DLG4NLP）的交叉领域研发而设计。它既提供了面向数据科学家的最新前沿模型的**完整实现**，又提供了**灵活的接口**，方便研究人员和开发者构建定制化模型，并提供完整的全流程支持。基于高度优化的运行时库，包括 [DGL](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl)，***Graph4NLP*** 既拥有卓越的运行效率，又具备强大的扩展性。***Graph4NLP*** 的架构如图所示：虚线框代表正在开发的功能模块。Graph4NLP 由四个不同的层次组成：1）数据层、2）模块层、3）模型层，以及 4）应用层。\n\n\u003Cp align=\"center\">\n    \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_ae0f5f6c9205.png\" alt=\"architecture\" width=\"700\" \u002F>\n    \u003Cbr>\n    \u003Cb>图\u003C\u002Fb>：Graph4NLP 整体架构\n\u003C\u002Fp>\n\n## \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_7295806fc290.png\" alt='new' width=30 \u002F> Graph4NLP 新闻\n**2022年1月20日：** **v0.5.5 版本**发布！快来试用吧！\u003Cbr>\n**2021年9月26日：** **v0.5.1 版本**发布！快来试用吧！\u003Cbr>\n**2021年9月1日：** 欢迎访问我们的 **DLG4NLP 网站（https:\u002F\u002Fdlg4nlp.github.io\u002Findex.html）**，获取丰富的学习资源！\u003Cbr>\n**2021年6月5日：** **v0.4.1 版本**发布！\n\n## 主要版本\n\n| 版本 | 发布日期       | 功能特性                                                     |\n| -------- | ---------- | ------------------------------------------------------------ |\n| v0.5.5   | 2022-01-20 | - 支持 model.predict API，通过引入包装函数实现。\u003Cbr \u002F> - 新增三个新的推理包装函数：classifier_inference_wrapper、generator_inference_wrapper、generator_inference_wrapper_for_tree。\u003Cbr \u002F> - 在每个应用中新增推理及推理前处理示例。\u003Cbr \u002F> - 将图拓扑结构与图嵌入过程分离。\u003Cbr \u002F> - 对所有图构造函数进行更新。\u003Cbr \u002F> - 模块化图嵌入被拆分为图嵌入初始化和图嵌入学习。\u003Cbr \u002F> - 统一了数据集中的参数。我们移除了模棱两可的参数“graph_type”，并引入了“graph_name”，用于标识图构造方法，以及“static_or_dynamic”，用于指示图的静态或动态构造类型。\u003Cbr \u002F> - 新功能：数据集现在可以通过一个参数“graph_name”自动选择默认方法（例如“topology_builder”）。|\n| v0.5.1   | 2021-09-26 | - 代码 lint 检查\u003Cbr \u002F> - 支持使用用户自己的数据进行测试\u003Cbr \u002F> - 修复了一个错误：在 0.4.1 版本中，词嵌入大小被硬编码。现在其值等于“word_emb_size”参数。\u003Cbr \u002F> - 修复了一个错误：在 0.4.1 版本中，build_vocab() 被调用了两次。\u003Cbr \u002F> - 修复了一个错误：知识图谱补全示例的两个主文件在恢复模型训练时，遗漏了可选参数“kg_graph”。\u003Cbr \u002F> - 修复了一个错误：我们已修正了 KGC 阅读指南中预处理路径的错误。\u003Cbr \u002F> - 修复了一个错误：我们在设置 emb_strategy 为 ‘w2v’ 时，修复了嵌入构造的错误。|\n| v0.4.1   | 2021-06-05 | - 支持 Graph4NLP 的全流程\u003Cbr \u002F> - GraphData 和 Dataset 支持 |\n\n## 快速入门\n\n***Graph4nlp*** 致力于让 NLP 任务中的 GNN 使用变得极其简单（请参阅 [Graph4NLP 文档](https:\u002F\u002Fgraph4ai.github.io\u002Fgraph4nlp\u002F)）。以下是一个使用 
[*Graph2seq*](https:\u002F\u002Fgraph4ai.github.io\u002Fgraph4nlp\u002F) 模型的示例（该模型广泛应用于机器翻译、问答、语义解析以及各种其他 NLP 任务，这些任务均可抽象为图到序列的问题，并且取得了优异的性能）。\n\n\u003C!-- 如果您希望进一步提升模型性能，我们还支持多种预训练模型，包括 [BERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.04805) 等。 -->\n此外，我们还提供其他高级模型 API，例如图到树模型。如果您对 DLG4NLP 相关的研究课题感兴趣，欢迎随时使用我们的库，并参考我们的 [graph4nlp 调研报告](http:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06090)。\n\n```python\nfrom graph4nlp.pytorch.datasets.jobs import JobsDataset\nfrom graph4nlp.pytorch.modules.graph_construction.dependency_graph_construction import DependencyBasedGraphConstruction\nfrom graph4nlp.pytorch.modules.config import get_basic_args\nfrom graph4nlp.pytorch.models.graph2seq import Graph2Seq\nfrom graph4nlp.pytorch.modules.utils.config_utils import update_values, get_yaml_config\n\n# 构建数据集\njobs_dataset = JobsDataset(root_dir='graph4nlp\u002Fpytorch\u002Ftest\u002Fdataset\u002Fjobs',\n                           topology_builder=DependencyBasedGraphConstruction,\n                           topology_subdir='DependencyGraph')  # 您应后台运行斯坦福核心词法分析器\nvocab_model = jobs_dataset.vocab_model\n\n# 构建模型\nuser_args = get_yaml_config(\"examples\u002Fpytorch\u002Fsemantic_parsing\u002Fgraph2seq\u002Fconfig\u002Fdependency_gcn_bi_sep_demo.yaml\")\nargs = get_basic_args(graph_construction_name=\"node_emb\", graph_embedding_name=\"gat\", decoder_name=\"stdrnn\")\nupdate_values(to_args=args, from_args_list=[user_args])\ngraph2seq = Graph2Seq.from_args(args, vocab_model)\n\n# 计算\nbatch_data = JobsDataset.collate_fn(jobs_dataset.train[0:12])\n\nscores = graph2seq(batch_data[\"graph_data\"], batch_data[\"tgt_seq\"])  # [Batch_size, seq_len, Vocab_size]\n```\n\n## 概述\n\n我们的 Graph4NLP 计算流程如下所示。\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_readme_97d25e8e7a91.png\" width=\"1000\" class=\"center\" alt=\"logo\"\u002F>\n    \u003Cbr\u002F>\n\u003C\u002Fp>\n\n## Graph4NLP 模型与应用\n\n### Graph4NLP 模型\n\n- 
[Graph2Seq](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Fmaster\u002Fgraph4nlp\u002Fpytorch\u002Fmodels\u002Fgraph2seq.py)：一种通用的端到端神经编码解码模型，可将输入图映射为一串标记序列。\n- [Graph2Tree](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Fmaster\u002Fgraph4nlp\u002Fpytorch\u002Fmodels\u002Fgraph2tree.py)：一种通用的端到端神经编码解码模型，可将输入图映射为一棵树结构。\n\n### Graph4NLP 应用场景\n\n我们提供了一套全面的 NLP 应用方案，并附有详细的示例：\n\n- [文本分类](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Ftext_classification)：为句子或文档赋予恰当的标签。\n- [语义解析](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fsemantic_parsing)：将自然语言转换为机器可理解的形式化语义表示。\n- [神经机器翻译](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fnmt)：将源语言中的句子翻译成目标语言。\n- [摘要生成](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fsummarization)：自动生成输入文本的简短版本，同时保留其主要含义。\n- [知识图谱补全](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fkg_completion)：预测已知知识图谱中两个现有实体之间的缺失关系。\n- [数学应用题求解](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fmath_word_problem)：自动求解以自然语言描述背景信息的数学应用题。\n- [命名实体识别](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fname_entity_recognition)：对输入文本中的实体进行标注，并为其指定相应的类型。\n- [问题生成](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Ftree\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fquestion_generation)：根据给定的文段和目标答案（可选），生成有效且流畅的问题。\n\n## 性能表现\n\n环境：PyTorch 1.8，Ubuntu 16.04，配备 2080Ti GPU\n\n| 任务 | 数据集 | GNN 模型 | 图构建方式 | 评估指标 | 性能表现 |\n|------|:------:|:--------:|------------|----------|:--------:|\n| 文本分类 | TRECT\u003Cbr> CAirline\u003Cbr> CNSST\u003Cbr> | GAT | 依赖关系\u003Cbr> 构成关系\u003Cbr> 依赖关系\u003Cbr> | 准确率 | 0.948\u003Cbr> 0.785\u003Cbr> 0.538\u003Cbr> |\n| 语义解析 | JOBS | SAGE | 构成关系 | 执行准确率 | 0.936 |\n| 问题生成 | SQuAD | GGNN | 依赖关系 | BLEU-4 | 0.15175 |\n| 机器翻译 | IWSLT14 | GCN | 动态构图 | BLEU-4 | 0.3212 |\n| 摘要生成 | CNN(30k) | GCN | 依赖关系 | ROUGE-1 | 26.4 |\n| 知识图谱补全 | Kinship | GCN | 依赖关系 | MRR | 82.4 |\n| 数学应用题求解 | MAWPS | SAGE | 动态构图 | 解题准确率 | 76.4 |\n\n## 安装方法\n\n目前，用户可通过 **pip** 或**源代码**安装 Graph4NLP。Graph4NLP 支持以下操作系统：\n\n- 基于 Linux 的系统（已在 Ubuntu 18.04 及更高版本上测试）\n- macOS（仅支持 CPU 版本）\n- Windows 10（仅支持 PyTorch >= 1.8）\n\n### 使用 pip 安装（二进制包）\n我们为主流操作系统、PyTorch 与 CUDA 的各种组合提供了 pip wheel 安装包。请注意，我们强烈建议 Windows 用户采用“通过源代码安装”方式，以确保兼容性。\n\n#### 确保已安装 PyTorch（>=1.6.0）：\n注意，版本 `>=1.6.0` 即可。\n```bash\n$ python -c \"import torch; print(torch.__version__)\"\n>>> 1.6.0\n```\n#### 查找 PyTorch 对应的 CUDA 版本（适用于 GPU 用户）：\n```bash\n$ python -c \"import torch; print(torch.version.cuda)\"\n>>> 10.2\n```\n\n#### 安装相关依赖项：\nGraph4NLP 依赖 `torchtext` 实现词嵌入功能。在安装 `torchtext` 之前，请务必确认其与 PyTorch 的版本匹配要求，详细的版本对应关系请参阅[这里](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchtext\u002F)。\n```bash\npip install torchtext # >=0.7.0\n```\n\n#### 安装 Graph4NLP\n```bash\npip install graph4nlp${CUDA}\n```\n其中 `${CUDA}` 应替换为具体的 CUDA 版本后缀（`none`（CPU 版本）、`\"-cu92\"`、`\"-cu101\"`、`\"-cu102\"`、`\"-cu110\"`）。下表列出了具体的安装命令。CUDA 11.1 用户请参考“通过源代码安装”。\n\n| 平台  | 命令                       |\n| --------- | ----------------------------- |\n| CPU       | `pip install graph4nlp`   |\n| CUDA 9.2  | `pip install graph4nlp-cu92`  |\n| CUDA 10.1 | `pip install graph4nlp-cu101` |\n| CUDA 10.2 | `pip install graph4nlp-cu102` |\n| CUDA 11.0 | `pip install graph4nlp-cu110` |\n\n### 通过源代码安装\n\n#### 确保已安装 PyTorch（>=1.6.0）：\n注意，版本 `>=1.6.0` 即可。\n```bash\n$ python -c \"import torch; print(torch.__version__)\"\n>>> 1.6.0\n```\n#### 查找 PyTorch 对应的 CUDA 版本（适用于 GPU 用户）：\n```bash\n$ python -c \"import torch; print(torch.version.cuda)\"\n>>> 10.2\n```\n\n#### 安装相关依赖项：\nGraph4NLP 依赖 `torchtext` 实现词嵌入功能。在安装 `torchtext` 之前，请务必确认其与 PyTorch 的版本匹配要求，详细的版本对应关系请参阅[这里](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorchtext\u002F)。\n```bash\npip install torchtext # >=0.7.0\n```\n\n#### 从 GitHub 下载 Graph4NLP 的源代码：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp.git\ncd graph4nlp\n```\n#### 配置 CUDA 版本\n随后运行 `.\u002Fconfigure`（Windows 10 用户运行 `.\u002Fconfigure.bat`）完成安装配置。配置工具会提示您指定 CUDA 版本；如果没有 GPU，请输入 `cpu`。\n```bash\n.\u002Fconfigure\n```\n\n#### 安装软件包：\n\n最后，安装该软件包：\n```shell\npython setup.py install\n```\n\n## 超参数调优\n\n我们在[此处](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002Fe\u002F2PACX-1vQaE3BTKYt4NX0z5oJrzVESdE7Kx3dnmTCG7zTdtTqj6zuRX12qBz7OoEf0ckTDini0BljFLA9JuF5v\u002Fpubhtml?gid=0&single=true)列出了常被调优的一些超参数。\n\n## 初学者如何学习面向自然语言处理的图深度学习？\n\n如果您想深入了解如何将图神经网络技术应用于自然语言处理任务，欢迎访问我们的 DLG4NLP 
网站（https:\u002F\u002Fdlg4nlp.github.io\u002Findex.html），那里提供了丰富的学习资源！您还可以参考我们的综述论文，它对这一研究方向进行了全面概述。如需查阅本库更详尽的参考资料，请参阅我们的文档。\n\n- 文档：[文档](https:\u002F\u002Fgraph4ai.github.io\u002Fgraph4nlp\u002F)\n- Graph4NLP 综述：[Graph4NLP 综述](http:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06090)\n- Graph4NLP 教程：\n    - [Graph4NLP-NAACL'21、SIGIR'21、IJCAI'21、KDD'21](https:\u002F\u002Fdlg4nlp.github.io\u002Ftutorials.html)\n    - [SyncedReview 邀请的中文演讲](https:\u002F\u002Fapp6ca5octe2206.pc.xiaoe-tech.com\u002Fdetail\u002Fv_60e832f8e4b0876c0c23c1a7\u002F3?fromH5=true)（[视频](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Lltz_kx7ECDOTLecVC9E9w)（密码：wppp），[幻灯片](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pmgX456Me_lu30VGDY3aaw)（密码：flwv））\n- Graph4NLP 研讨会：\n    - [DLG4NLP-ICLR'22](https:\u002F\u002Fdlg4nlp-workshop.github.io\u002Fdlg4nlp-iclr22\u002Findex.html)\n- Graph4NLP 演示：[演示](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp_demo)\n- Graph4NLP 文献综述：[文献列表](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp_literature)\n\n## 贡献\n\n如果您发现错误或有任何建议，请通过提交 issue 告知我们。我们欢迎从修复错误到开发新功能与扩展的各类贡献，并期望所有贡献先在 issue 跟踪器中讨论，再通过 PR 提交。\n\n## 引用\n\n如果您觉得本库对您有帮助，请考虑引用以下论文。\n\n- [1] 吴凌飞、陈宇、沈凯、郭晓杰、高汉宁、李树成、裴健和龙波，[**“面向自然语言处理的图神经网络：综述”**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06090)。\n- [2] [**NeurIPS 2020**] 陈宇、吴凌飞和穆罕默德·J·扎基，[**“用于图神经网络的迭代深度图学习：更优且更鲁棒的节点嵌入”**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.13009)。\n- [3] [**ICLR 2020**] 陈宇、吴凌飞和穆罕默德·J·扎基，[**“基于强化学习的图到序列模型用于自然问题生成”**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.04942)。\n- [4] 许坤、吴凌飞、王志国、冯彦松、迈克尔·维特布罗克和瓦迪姆·谢宁，[**“Graph2Seq：利用注意力机制的神经网络实现图到序列的学习”**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00823)。\n- [5] [**EMNLP 2020**] 李树成、吴凌飞、冯世伟、徐芳丽、徐凤元和盛中，[**“用于结构化输入输出翻译的图到树神经网络：语义解析与数学应用题案例研究”**](https:\u002F\u002Faclanthology.org\u002F2020.findings-emnlp.255.pdf)。\n- [6] [**ACL 2020**] 黄路阳、吴凌飞和王璐，[**“基于语义驱动的填空奖励的知识图谱增强摘要生成”**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.01159)。\n- [7] [**EMNLP 2018**] 吴凌飞、伊恩·E·H·严、许坤、徐芳丽、阿维纳什·巴拉克里希南、陈品玉、普拉迪普·拉维库马尔和迈克尔·J·维特布罗克，[**“词移嵌入：从 Word2Vec 到文档嵌入”**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.01713)。\n- [8] [**IJCAI 2020**] 陈宇、吴凌飞和穆罕默德·J·扎基，[**“GraphFlow：利用图神经网络挖掘对话流，实现对话式机器阅读理解”**](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F171)。\n- [9] [**IJCAI 2020**] 沈凯、吴凌飞、徐芳丽、唐思亮、肖俊和庄月婷，[**“基于层次注意力的时空图到序列学习用于视频描述生成（Grounded Video Description）”**](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F171)。\n- [10] [**IJCAI 2020**] 高汉宁、吴凌飞、胡博和徐芳丽，[**“利用图增强的结构化神经编码器进行 RDF 到文本生成”**](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F419)。\n\n```\n@article{wu2021graph,\n  title={Graph Neural Networks for Natural Language Processing: A Survey},\n  author={Wu, Lingfei and Chen, Yu and Shen, Kai and Guo, Xiaojie and Gao, Hanning and Li, Shucheng and Pei, Jian and Long, Bo},\n  journal={arXiv preprint arXiv:2106.06090},\n  year={2021}\n}\n\n@inproceedings{chen2020iterative,\n  title={Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings},\n  author={Chen, Yu and Wu, Lingfei and Zaki, Mohammed J.},\n  booktitle={Proceedings of the 34th Conference on Neural Information Processing Systems},\n  month={Dec. 6-12},\n  year={2020}\n}\n\n@inproceedings{chen2020reinforcement,\n  author    = {Chen, Yu and Wu, Lingfei and Zaki, Mohammed J.},\n  title     = {Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation},\n  booktitle = {Proceedings of the 8th International Conference on Learning Representations},\n  month     = {Apr. 26-30},\n  year      = {2020}\n}\n\n@article{xu2018graph2seq,\n  title={Graph2seq: Graph to Sequence Learning with Attention-based Neural Networks},\n  author={Xu, Kun and Wu, Lingfei and Wang, Zhiguo and Feng, Yansong and Witbrock, Michael and Sheinin, Vadim},\n  journal={arXiv preprint arXiv:1804.00823},\n  year={2018}\n}\n\n@inproceedings{li-etal-2020-graph-tree,\n    title = {Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem},\n    author = {Li, Shucheng and Wu, Lingfei and Feng, Shiwei and Xu, Fangli and Xu, Fengyuan and Zhong, Sheng},\n    booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},\n    month = {November},\n    year = {2020}\n}\n\n@inproceedings{huang-etal-2020-knowledge,\n    title = {Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward},\n    author = {Huang, Luyang and Wu, Lingfei and Wang, Lu},\n    booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},\n    month = {July},\n    year = {2020},\n    pages = {5094--5107}\n}\n\n@inproceedings{wu-etal-2018-word,\n    title = {Word Mover's Embedding: From Word2Vec to Document Embedding},\n    author = {Wu, Lingfei and Yen, Ian En-Hsu and Xu, Kun and Xu, Fangli and Balakrishnan, Avinash and Chen, Pin-Yu and Ravikumar, Pradeep and Witbrock, Michael J.},\n    booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},\n    pages = {4524--4534},\n    year = {2018}\n}\n\n@inproceedings{chen2020graphflow,\n  author    = {Chen, Yu and Wu, Lingfei and Zaki, Mohammed J.},\n  title     = {GraphFlow: Exploiting Conversation Flow with Graph Neural Networks for Conversational Machine Comprehension},\n  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, {IJCAI} 2020},\n  publisher = {International Joint Conferences on Artificial Intelligence Organization},\n  pages     = {1230--1236},\n  year      = {2020}\n}\n\n@inproceedings{shen2020hierarchical,\n  title={Hierarchical Attention Based Spatial-Temporal Graph-to-Sequence Learning for Grounded Video Description},\n  author={Shen, Kai and Wu, Lingfei and Xu, Fangli and Tang, Siliang and Xiao, Jun and Zhuang, Yueting},\n  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, {IJCAI} 2020},\n  publisher = {International Joint Conferences on Artificial Intelligence Organization},\n  pages     = {941--947},\n  year      = {2020}\n}\n\n@inproceedings{ijcai2020-419,\n  title     = {RDF-to-Text Generation with Graph-augmented Structural Neural Encoders},\n  author    = {Gao, Hanning and Wu, Lingfei and Hu, Po and Xu, Fangli},\n  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, {IJCAI-20}},\n  publisher = {International Joint Conferences on Artificial Intelligence Organization},\n  pages     = {3030--3036},\n  year      = {2020}\n}\n```\n\n## 团队\nGraph4AI 团队：[**吴凌飞**](https:\u002F\u002Fsites.google.com\u002Fa\u002Femail.wm.edu\u002Fteddy-lfwu\u002Fhome)（团队负责人）、陈宇、沈凯、郭晓杰、高汉宁、李树成、王赛卓、刘晓以及胡静。我们热衷于开发实用的开源库，旨在让各类面向自然语言处理的图深度学习技术更易使用。我们的团队由来自工业界和学术界的科研人员、应用数据科学家及研究生组成，成员分别来自 Pinterest（吴凌飞）、浙江大学（沈凯）、Facebook AI（陈宇）、IBM T.J. 沃森研究中心（郭晓杰）、同济大学（高汉宁）、南京大学（李树成）以及香港科技大学（王赛卓）。\n\n## 联系方式\n如果您有任何技术问题，请提交新的 issue。\n\n如有其他疑问，请联系：[**吴凌飞**](https:\u002F\u002Fsites.google.com\u002Fa\u002Femail.wm.edu\u002Fteddy-lfwu\u002Fhome) **[lwu@email.wm.edu]** 以及郭晓杰 **[xiaojie.guo@jd.com]**。\n\n## 许可证\nGraph4NLP 采用 Apache License 2.0。","# Graph4NLP 快速上手指南\n\nGraph4NLP 是一个易于使用的库，专为**图深度学习**与**自然语言处理**（DLG4NLP）交叉领域的研发设计。它基于高度优化的 [DGL](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl) 运行时库，既提供了最先进模型的完整实现，也提供了灵活的接口供研究者构建自定义模型。\n\n## 1. 环境准备\n\n### 系统要求\nGraph4NLP 支持以下操作系统：\n- **Linux**: Ubuntu 18.04 及更高版本（推荐）\n- **macOS**: 仅支持 CPU 版本\n- **Windows**: Windows 10（仅支持 PyTorch >= 1.8，建议通过源码安装以获得最佳兼容性）\n\n### 前置依赖\n在开始之前，请确保已安装以下基础环境：\n- **Python**: 3.6+\n- **PyTorch**: >= 1.6.0（推荐 1.8+）\n- **CUDA**: 如需使用 GPU，请确保 CUDA 版本与 PyTorch 匹配\n\n## 2. 
安装步骤\n\n### 第一步：验证 PyTorch 环境\n首先确认 PyTorch 已正确安装及其版本信息。\n\n```bash\npython -c \"import torch; print(torch.__version__)\"\n# 输出应 >= 1.6.0\n\n# 如果是 GPU 用户，检查 CUDA 版本\npython -c \"import torch; print(torch.version.cuda)\"\n```\n\n### 第二步：安装依赖库\nGraph4NLP 依赖 `torchtext` 来实现嵌入层功能。请注意版本匹配（通常需 >=0.7.0）。\n\n> **国内加速提示**：建议使用清华或阿里镜像源加速安装。\n```bash\npip install torchtext -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 第三步：安装 Graph4NLP\n可以通过 pip 直接安装二进制包（推荐 Linux\u002FMac 用户），Windows 用户若遇兼容性问题建议采用源码安装。\n\n**方式 A：通过 pip 安装**\n```bash\npip install graph4nlp -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**方式 B：通过源码安装（推荐 Windows 用户或需要最新特性者）**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp.git\ncd graph4nlp\npip install -e . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 3. 基本使用\n\n以下示例演示了如何使用 **Graph2Seq** 模型（广泛应用于机器翻译、问答等任务）构建一个简单的流水线，包含数据集加载、图构建、模型初始化及前向计算。\n\n> **注意**：运行此示例前，需确保后台已启动 Stanford CoreNLP 服务以进行依存句法分析。\n\n```python\nfrom graph4nlp.pytorch.datasets.jobs import JobsDataset\nfrom graph4nlp.pytorch.modules.graph_construction.dependency_graph_construction import DependencyBasedGraphConstruction\nfrom graph4nlp.pytorch.modules.config import get_basic_args\nfrom graph4nlp.pytorch.models.graph2seq import Graph2Seq\nfrom graph4nlp.pytorch.modules.utils.config_utils import update_values, get_yaml_config\n\n# 1. 构建数据集\n# topology_builder 指定图构建方法，此处使用基于依存句法的图构建\njobs_dataset = JobsDataset(root_dir='graph4nlp\u002Fpytorch\u002Ftest\u002Fdataset\u002Fjobs',\n                           topology_builder=DependencyBasedGraphConstruction,\n                           topology_subdir='DependencyGraph')  \nvocab_model = jobs_dataset.vocab_model\n\n# 2. 
构建模型\n# 加载配置文件并合并参数\nuser_args = get_yaml_config(\"examples\u002Fpytorch\u002Fsemantic_parsing\u002Fgraph2seq\u002Fconfig\u002Fdependency_gcn_bi_sep_demo.yaml\")\nargs = get_basic_args(graph_construction_name=\"node_emb\", graph_embedding_name=\"gat\", decoder_name=\"stdrnn\")\nupdate_values(to_args=args, from_args_list=[user_args])\n\n# 初始化 Graph2Seq 模型\ngraph2seq = Graph2Seq.from_args(args, vocab_model)\n\n# 3. 数据预处理与计算\n# 整理批次数据\nbatch_data = JobsDataset.collate_fn(jobs_dataset.train[0:12])\n\n# 执行前向传播\n# 输入：图数据，目标序列；输出：预测分数 [Batch_size, seq_len, Vocab_size]\nscores = graph2seq(batch_data[\"graph_data\"], batch_data[\"tgt_seq\"])\n```\n\n### 核心功能概览\nGraph4NLP 内置了多种主流模型与应用场景：\n- **核心模型**: Graph2Seq (图到序列), Graph2Tree (图到树)\n- **支持任务**: 文本分类、语义解析、机器翻译、文本摘要、知识图谱补全、数学应用题求解、命名实体识别、问题生成等。\n\n更多详细配置与高级用法请参考官方文档或 `examples` 目录下的具体案例。","某医疗科技公司的算法团队正致力于构建一个基于电子病历（EMR）的疾病风险预测系统，需要深入挖掘患者、症状与药物之间复杂的非结构化关联。\n\n### 没有 graph4nlp 时\n- **图构建繁琐**：开发人员需手动编写大量底层代码，将文本数据转换为图拓扑结构，难以处理动态变化的医疗关系网络。\n- **模型复现困难**：缺乏现成的图神经网络（GNN）与自然语言处理（NLP）融合模型，复现前沿论文算法需从零搭建，耗时数月。\n- **流程割裂严重**：数据预处理、图嵌入学习与最终预测任务分散在不同框架中，接口不统一，导致调试和维护成本极高。\n- **运行效率低下**：未针对图计算进行深度优化，在处理大规模病历数据时训练速度缓慢，难以满足实时性要求。\n\n### 使用 graph4nlp 后\n- **自动化建图**：利用其数据层（Data Layer）与新版 `topology_builder`，仅需指定 `graph_name` 参数即可自动完成从病历文本到静态或动态图的构建。\n- **开箱即用模型**：直接调用模型层（Model Layer）中预置的 SOTA 模型实现，快速部署疾病预测任务，将研发周期从数月缩短至数周。\n- **全链路整合**：通过统一的四层架构（数据、模块、模型、应用），实现了从数据输入到推理预测的端到端流水线，大幅降低集成难度。\n- **高性能推理**：基于高度优化的 DGL 运行时库，显著提升了图嵌入学习和模型训练效率，并支持灵活的 `inference_wrapper` 进行快速验证。\n\ngraph4nlp 通过提供标准化的全栈式解决方案，让团队能专注于医疗逻辑创新而非底层工程实现，极大加速了 AI 
在复杂文本图谱场景下的落地应用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgraph4ai_graph4nlp_b7f432dd.png","graph4ai","Graph4AI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fgraph4ai_4fb59856.png","",null,"https:\u002F\u002Fgithub.com\u002Fgraph4ai",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.9,{"name":87,"color":88,"percentage":89},"Shell","#89e051",0.1,{"name":91,"color":92,"percentage":93},"Batchfile","#C1F12E",0,1688,206,"2026-03-29T13:29:04","Apache-2.0","Linux (Ubuntu 18.04+), macOS (仅 CPU 版本), Windows 10 (仅支持 PyTorch >= 1.8)","非必需（macOS 仅支持 CPU）。Linux\u002FWindows GPU 用户需 NVIDIA 显卡，测试环境为 2080Ti，CUDA 版本需与已安装的 PyTorch 匹配（示例中为 10.2）","未说明",{"notes":102,"python":100,"dependencies":103},"Windows 用户强烈建议通过源代码安装而非 pip。使用依赖图构建功能（如 DependencyBasedGraphConstruction）时，需在后台运行 Stanford CoreNLP。PyTorch 版本至少为 1.6.0，Windows 平台需 1.8 以上。",[104,105,106],"torch>=1.6.0","torchtext>=0.7.0","dgl",[13,26],[109,110,111,112,113,114],"nlp","pytorch","natural-language-processing","deep-learning","machine-learning","graph-neural-networks","2026-03-27T02:49:30.150509","2026-04-06T05:17:21.183775",[118,123,128,133,138,143],{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},6484,"Graph2Seq 模型的架构是否与 IBM 提出的通用 Graph2Seq 模型一致？","是的，该仓库中的 Graph2Seq 模型架构与 IBM 研究博客中描述的通用 Graph2Seq 模型（Graph2Seq: A Generalized Seq2Seq Model for Graph Inputs）是一致的。它利用双向 LSTM 编码器在处理初始词嵌入（如 word2vec 或 BERT）后，再将其输入到 GCN 编码器中，以编码整个图结构信息。","https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\u002F556",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},6485,"能否为所有上下文数据构建一个统一的图模型，并复用于下游任务（如分类、知识图谱补全）？","处理后的图数据是可以复用的。但是，由于文本分类和知识图谱补全是两个完全不同的任务，不能直接将文本分类中构建的图应用于知识图谱补全的代码中。如果需要合并不同任务构建的图，可以使用 `to_batch()` 方法进行处理。目前不支持直接构建一个“万能图”供所有未来模型直接使用，通常需要针对特定任务进行适配。","https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\u002F428",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},6486,"如何在多 GPU 环境下训练 Graph2Seq 模型？","目前有两种支持多 GPU 
训练的方法：\n1. **nn.DataParallel (DP)**: 参考示例代码 [main_dataparallel.py](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Ffeature-multiplegpu\u002Fexamples\u002Fpytorch\u002Fnmt\u002Fmain_dataparallel.py)。\n2. **nn.DistributedDataParallel (DDP)**: 参考示例代码 [main_ddp.py](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Ffeature-ddp\u002Fexamples\u002Fpytorch\u002Fnmt\u002Fmain_ddp.py)。\n用户可以根据需求选择其中一种方式进行扩展训练。","https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\u002F481",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},6487,"如何为 graph2seq 模型构建自定义数据集（例如将源代码 AST 作为输入图）？","目前需要通过继承基类并重写相关方法来实现：\n1. 如果您的输入是单个静态图（如 AST），不需要解析过程，应继承 `StaticGraphConstructionBase` 并实现自定义的 `ASTGraphConstruction` 类。\n2. 在自定义类中实现 `topology` 方法来构建图数据。\n3. 如果需要使用现有的 Dataset 流程，可以重写 `_build_topology_process` 函数来调用您的拓扑构建器。\n参考示例：[NER 图数据集构建代码](https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fblob\u002Fmaster\u002Fexamples\u002Fpytorch\u002Fname_entity_recognition\u002Fdependency_graph_construction_without_tokenize.py)。官方计划在后续版本中优化此体验。","https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\u002F460",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},6488,"训练完成后，如何对未见过的字符串数据进行 Graph2Tree 模型的推理（Inference）？","目前的版本尚未提供直接的推理接口，推荐的临时解决方案是：\n1. 将待推理的字符串按照训练数据的格式（例如：`输入字符串 \\t \u003CTBD>`）追加到 `test.txt` 文件中。\n2. 在评估过程中跳过这些新增的行，或者在完成训练和模型选择后单独处理它们。\n3. 确保预处理方式（如依赖分析或成分分析）与训练时保持一致。\n官方已在计划改进推理接口，具体示例可参考 `add_inference_example_graph2tree` 分支。","https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\u002F319",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},6489,"运行 NER 示例时报错 'TypeError: Can't instantiate abstract class ConllDataset with abstract methods download' 如何解决？","该错误通常发生在 Windows 环境下或特定版本的 PyTorch\u002FTorchText 兼容性问题中，原因是 `ConllDataset` 类缺少 `download` 方法的实现。虽然这是一个已知的 Bug，但在修复前，建议检查以下几点：\n1. 确认安装的 `graph4nlp` 版本是否为最新（源码安装可能比 pip 更稳定）。\n2. 
尝试手动实现 `download` 方法或继承正确的数据集类。\n3. 检查后端库版本兼容性（如 PyTorch 1.8.1 与 graph4nlp 0.4.1+）。\n由于这是普遍遇到的环境配置问题，建议关注官方 Issue 讨论以获取最新的补丁代码。","https:\u002F\u002Fgithub.com\u002Fgraph4ai\u002Fgraph4nlp\u002Fissues\u002F321",[149,154,159],{"id":150,"version":151,"summary_zh":152,"released_at":153},106066,"v0.5.5","- Support model.predict API by introducing wrapper functions.\r\n- Introduce Three new inference_wrapper functions: classifier_inference_wrapper, generator_inference_wrapper, generator_inference_wrapper_for_tree.\r\n- Add the inference and inference_advance examples in each application.\r\n- Separate the graph topology and graph embedding process.\r\n- Renew all the graph construction functions.\r\n- Module graph_embedding is divided into graph_embedding_initialization and graph_embedding_learning.\r\n- Unify the parameters in Dataset. We remove the ambiguous parameter graph_type and introduce graph_name to indicate the graph construction method and static_or_dynamic to indicate the static or dynamic graph construction type.\r\n- New: The dataset now can automatically choose the default methods (e.g., topology_builder) by only one parameter graph_name.","2022-01-20T18:07:32",{"id":155,"version":156,"summary_zh":157,"released_at":158},106067,"v0.5.1-alpha","- Lint the codes\r\n- Support testing with users' own data\r\n- Fix the bug: The word embedding size was hard-coded in the 0.4.1 version. 
Now it is equal to \"word_emb_size\" parameter.\r\n- Fix the bug: The build_vocab() is called twice in the 0.4.1 version.\r\n- Fix the bug: The two main files of knowledge graph completion example missed the optional parameter \"kg_graph\" in ranking_and_hits() when resuming training the model.\r\n- Fix the bug: We have fixed the preprocessing path error in KGC readme.\r\n- Fix the bug: We have fixed embedding construction bug when setting emb_strategy to 'w2v'.","2021-09-30T16:52:44",{"id":160,"version":161,"summary_zh":162,"released_at":163},106068,"v0.4.1-alpha","This is the beta-version of our graph4nlp library, which is the first library for the easy use of GNNs for NLP. ","2021-06-15T16:00:17"]
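上文多处提到“依存图构建”（如 `DependencyBasedGraphConstruction` 需要后台运行 Stanford CoreNLP）。为帮助理解其核心思想，下面给出一个不依赖 graph4nlp 与 CoreNLP 的纯 Python 示意：把分词结果作为节点、依存弧作为有向边，组装成一个极简的图结构。其中 `GraphStub`、`build_dependency_graph` 以及示例中的依存弧均为本文虚构的演示用命名与数据，并非 graph4nlp 的实际 API，也不是 CoreNLP 的真实输出。

```python
# 示意代码：依存图构建的核心思想（假设性的最小实现，并非 graph4nlp 实际 API）。
# 思路：每个词对应一个节点，每条依存弧 (head, dependent, label) 对应一条有向边。

class GraphStub:
    """极简图结构：节点列表 + 边列表，仅作概念演示。"""

    def __init__(self):
        self.nodes = []   # 每个节点是一个词（token）
        self.edges = []   # 每条边为 (src_idx, dst_idx, 依存关系标签)

    def add_node(self, token):
        self.nodes.append(token)
        return len(self.nodes) - 1

    def add_edge(self, src, dst, label):
        self.edges.append((src, dst, label))


def build_dependency_graph(tokens, dep_arcs):
    """tokens: 词列表；dep_arcs: (head_idx, dep_idx, label) 列表，head_idx=-1 表示 ROOT 虚拟弧。"""
    g = GraphStub()
    idx = {i: g.add_node(t) for i, t in enumerate(tokens)}
    for head, dep, label in dep_arcs:
        if head >= 0:  # 跳过 ROOT 虚拟弧，只保留词与词之间的依存边
            g.add_edge(idx[head], idx[dep], label)
    return g


if __name__ == "__main__":
    # "show me the jobs" 的一组手写 toy 依存弧（非解析器真实输出）
    tokens = ["show", "me", "the", "jobs"]
    arcs = [(-1, 0, "root"), (0, 1, "iobj"), (3, 2, "det"), (0, 3, "obj")]
    g = build_dependency_graph(tokens, arcs)
    print(len(g.nodes), len(g.edges))  # 4 个节点，3 条边
```

实际使用库时，这一步由数据集预处理阶段的 `topology_builder=DependencyBasedGraphConstruction`（v0.5.5 起也可仅通过 `graph_name` 参数自动选择）完成，得到的是功能完整的 `GraphData` 对象而非上述简化结构。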