[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-RManLuo--Awesome-LLM-KG":3,"tool-RManLuo--Awesome-LLM-KG":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",155373,2,"2026-04-14T11:34:08",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器处理而优化。",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":78,"stars":81,"forks":82,"last_commit_at":83,"license":78,"difficulty_score":84,"env_os":85,"env_gpu":86,"env_ram":86,"env_deps":87,"category_tags":90,"github_topics":91,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":101,"updated_at":102,"faqs":103,"releases":104},7444,"RManLuo\u002FAwesome-LLM-KG","Awesome-LLM-KG","Awesome papers about unifying LLMs and KGs","Awesome-LLM-KG 是一个专注于整合大型语言模型（LLM）与知识图谱（KG）的开源学术资源库。它旨在解决当前大模型虽然通用性强但缺乏准确事实知识，而知识图谱虽富含结构化数据却难以构建且动态适应性不足的痛点。通过将两者优势结合，该项目致力于提升人工智能在推理能力、事实准确性及可解释性方面的表现。\n\n这份资源库特别适合人工智能领域的研究人员、开发者以及希望深入探索“神经符号 AI”前沿技术的学者使用。它不仅系统梳理了该领域的最新论文，还提出了清晰的三大融合框架路线图：利用知识图谱增强大模型、借助大模型辅助构建知识图谱，以及两者的深度协同机制。\n\n此外，Awesome-LLM-KG 拥有独特的技术亮点，团队基于此发布了多项被 ICML、NeurIPS、ACL 等顶级会议收录的创新成果，包括结合图神经网络与大模型的 GFM-RAG 检索增强生成管道，以及针对时序知识图谱的动态适应推理方法。无论你是想寻找前沿研究灵感，还是希望获取高质量的代码实现参考，这里都是进入这一新兴交叉领域的理想起点。","# Awesome-LLM-KG\n  [![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FRManLuo\u002FAwesome-LLM-KG) \n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n  ![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002FRManLuo\u002FAwesome-LLM-KG?color=green) \n ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-Welcome-red)\n ![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FRManLuo\u002FAwesome-LLM-KG?color=yellow)\n![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002FRManLuo\u002FAwesome-LLM-KG?color=lightblue) \n\nA collection of papers and resources about unifying large language models (LLMs) and knowledge graphs (KGs).\n\nLarge language models (LLMs) have achieved remarkable success and generalizability in various applications. However, they often fall short of capturing and accessing factual knowledge. Knowledge graphs (KGs) are structured data models that explicitly store rich factual knowledge. Nevertheless, KGs are hard to construct and existing methods in KGs are inadequate in handling the incomplete and dynamically changing nature of real-world KGs. 
Therefore, it is natural to unify LLMs and KGs together and simultaneously leverage their advantages.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRManLuo_Awesome-LLM-KG_readme_e158148e8bea.png\" width = \"600\" \u002F>\n\n## News\n🔭 This project is under development. You can hit the **STAR** and **WATCH** to follow the updates.\n* We are happy to release the first **graph foundation model**-powered RAG pipeline (GFM-RAG) that combines the power of GNNs with LLMs to enhance reasoning. [Paper](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.01113) and [Code](https:\u002F\u002Fgithub.com\u002FRManLuo\u002Fgfm-rag).\n* Our latest work on KG + LLM reasoning: [Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13080) has been accepted by ICML 2025.\n* Our LLM for temporal KG reasoning work: [Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14170) has been accepted by NeurIPS 2024!\n* Our KG for analyzing LLM reasoning paper: [Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11199) has been accepted by ACL 2024.\n* Our [roadmap paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08302) has been accepted by TKDE.\n* Our KG for LLM probing paper: [Systematic Assessment of Factual Knowledge in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.11638) has been accepted by EMNLP 2023.\n* Our KG + LLM reasoning paper: [Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01061) has been accepted by ICLR 2024.\n* Our LLM for KG reasoning paper: [ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01538) is now public.\n* Our roadmap paper: [Unifying Large Language Models and Knowledge Graphs: A Roadmap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08302) is now public.\n\n## Overview\nIn this repository, we collect recent advances in unifying LLMs and KGs. We present a roadmap that summarizes three general frameworks: *1) KG-enhanced LLMs*, *2) LLMs-augmented KGs*, and *3) Synergized LLMs + KGs*.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRManLuo_Awesome-LLM-KG_readme_8aefc626307f.png\" width = \"800\" \u002F>\n\nWe also illustrate the involved techniques and applications.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRManLuo_Awesome-LLM-KG_readme_ba0c0aee3cce.png\" width = \"600\" \u002F>\n\nWe hope this repository can help researchers and practitioners to get a better understanding of this emerging field.
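For intuition, here is a minimal, self-contained sketch of the first framework (*KG-enhanced LLMs*): retrieve triples around the entities mentioned in a question and serialize them into the prompt before calling any LLM. The toy triple store, hop-based retrieval rule, and prompt template are assumptions made for this illustration, not code from this repository or from the papers it lists; surveyed systems such as RoG and GFM-RAG replace each step with learned retrievers and graph-constrained decoding.

```python
# Illustrative only: a toy version of KG-enhanced LLM inference.
# The KG contents, retrieval rule, and prompt format are assumptions.

KG = [  # (head, relation, tail) triples
    ("Barack Obama", "spouse", "Michelle Obama"),
    ("Michelle Obama", "born_in", "Chicago"),
    ("Chicago", "located_in", "Illinois"),
]

def retrieve(question: str, kg=KG, hops: int = 2):
    """Collect triples reachable within `hops` of entities named in the question."""
    seeds = {e for h, _, t in kg for e in (h, t) if e.lower() in question.lower()}
    facts = []
    for _ in range(hops):
        new = [tr for tr in kg if (tr[0] in seeds or tr[2] in seeds) and tr not in facts]
        facts += new
        seeds |= {e for h, _, t in new for e in (h, t)}
    return facts

def build_prompt(question: str) -> str:
    """Ground the prompt in explicit KG evidence so the LLM answers from facts."""
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in retrieve(question))
    return f"Facts:\n{facts}\n\nAnswer using only the facts above.\nQ: {question}\nA:"

print(build_prompt("Where was the spouse of Barack Obama born?"))
```

Running the sketch retrieves the spouse triple and then its one-hop neighborhood, so the final prompt contains the (Michelle Obama, born_in, Chicago) evidence needed for the multi-hop answer.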
\nIf this repository is helpful for you, please help us by citing this paper:\n```bibtex\n@article{llm_kg,\ntitle={Unifying Large Language Models and Knowledge Graphs: A Roadmap},\nauthor={Pan, Shirui and Luo, Linhao and Wang, Yufei and Chen, Chen and Wang, Jiapu and Wu, Xindong},\njournal={IEEE Transactions on Knowledge and Data Engineering (TKDE)},\nyear={2024}\n}\n```\n\n\n## Table of Contents\n- [Awesome-LLM-KG](#awesome-llm-kg)\n  - [News](#news)\n  - [Overview](#overview)\n  - [Table of Contents](#table-of-contents)\n  - [Related Surveys](#related-surveys)\n  - [KG-enhanced LLMs](#kg-enhanced-llms)\n    - [KG-enhanced LLM Pre-training](#kg-enhanced-llm-pre-training)\n    - [KG-enhanced LLM Inference](#kg-enhanced-llm-inference)\n    - [KG-enhanced LLM Interpretability](#kg-enhanced-llm-interpretability)\n  - [LLM-augmented KGs](#llm-augmented-kgs)\n    - [LLM-augmented KG Embedding](#llm-augmented-kg-embedding)\n    - [LLM-augmented KG Completion](#llm-augmented-kg-completion)\n    - [LLM-augmented KG-to-Text Generation](#llm-augmented-kg-to-text-generation)\n    - [LLM-augmented KG Question Answering](#llm-augmented-kg-question-answering)\n  - [Synergized LLMs + KGs](#synergized-llms--kgs)\n    - [Knowledge Representation](#knowledge-representation)\n    - [Reasoning](#reasoning)\n  - [Applications](#applications)\n    - [Recommender System](#recommender-system)\n    - [Fault Analysis](#fault-analysis)\n## Related Surveys\n\n* Unifying Large Language Models and Knowledge Graphs: A Roadmap (TKDE, 2024) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.08302.pdf)\n* A Survey on Knowledge-Enhanced Pre-trained Language Models (Arxiv, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.13428.pdf)\n* A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.08772.pdf)\n* A Review on Language Models as Knowledge Bases (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.06031.pdf)\n* Generative Knowledge Graph Construction: A Review (EMNLP, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.12714.pdf)\n* Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey (Arxiv, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.08455.pdf)\n* Reasoning over Different Types of Knowledge Graphs: Static, Temporal and Multi-Modal (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.05767)[[code]](https:\u002F\u002Fgithub.com\u002FLIANGKE23\u002FAwesome-Knowledge-Graph-Reasoning)\n\n## KG-enhanced LLMs\n### KG-enhanced LLM Pre-training\n- ERNIE: Enhanced Language Representation with Informative Entities (ACL, 2019) [[paper]](https:\u002F\u002Faclanthology.org\u002FP19-1139.pdf)\n- Exploiting structured knowledge in text via graph-guided representation learning (EMNLP, 2020) [[paper]](https:\u002F\u002Faclanthology.org\u002F2020.emnlp-main.722.pdf)\n- SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis (ACL, 2020) [[paper]](https:\u002F\u002Faclanthology.org\u002F2020.acl-main.374.pdf)\n- E-bert: A phrase and product knowledge enhanced language model for e-commerce (Arxiv, 2020) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.02835.pdf)\n- Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model (ICLR, 2020) [[paper]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BJlzm64tDH)\n- BERT-MK: Integrating graph contextualized knowledge into pre-trained language models (EMNLP, 2020) 
[[paper]](https:\u002F\u002Faclanthology.org\u002F2020.findings-emnlp.207.pdf)\n- K-BERT: enabling language representation with knowledge graph (AAAI, 2020) [[paper]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F5681\u002F5537)\n- CoLAKE: Contextualized language and knowledge embedding (COLING, 2020) [[paper]](https:\u002F\u002Faclanthology.org\u002F2020.coling-main.327.pdf)\n- Kepler: A unified model for knowledge embedding and pre-trained language representation (TACL, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.tacl-1.11.pdf)\n- K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters (ACL Findings, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.121.pdf)\n- Cokebert: Contextual knowledge selection and embedding towards enhanced pre-trained language models (AI Open, 2021) [[paper]](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS2666651021000188\u002Fpdfft?md5=75919f85dcb5711fd2fe9e3785b24982&pid=1-s2.0-S2666651021000188-main.pdf)\n- Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation (Arxiv, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.02137)\n- Pre-training language models with deterministic factual knowledge (EMNLP, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.764.pdf)\n- Kala: Knowledge-augmented language model adaptation (NAACL, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.naacl-main.379.pdf)\n- DKPLM: decomposable knowledge-enhanced pre-trained language model for natural language understanding (AAAI, 2022) [[paper]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21425\u002F21174)\n- Dict-BERT: Enhancing language model pre-training with dictionary (ACL Findings, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.findings-acl.150.pdf)\n- JAKET: joint pre-training of knowledge graph and language understanding (AAAI, 2022) [[paper]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21417\u002F21166)\n- Tele-Knowledge Pre-training for Fault Analysis (ICDE, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11298)\n\n### KG-enhanced LLM Inference\n- Barack’s wife Hillary: Using knowledge graphs for fact-aware language modeling (ACL, 2019) [[paper]](https:\u002F\u002Faclanthology.org\u002FP19-1598.pdf)\n- Retrieval-augmented generation for knowledge-intensive nlp tasks (NeurIPS, 2020) [[paper]](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2020\u002Ffile\u002F6b493230205f780e1bc26945df7481e5-Paper.pdf)\n- Realm: Retrieval-augmented language model pre-training (ICML, 2020) [[paper]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.5555\u002F3524938.3525306)\n- QA-GNN: Reasoning with language models and knowledge graphs for question answering (NAACL, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.naacl-main.45.pdf)\n- Memory and knowledge augmented language models for inferring salience in long-form stories (EMNLP, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.65.pdf)\n- JointLK: Joint reasoning with language models and knowledge graphs for commonsense question answering (NAACL, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.naacl-main.372.pdf)\n- Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs (AAAI, 2022) 
[[paper]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21286)\n- Greaselm: Graph reasoning enhanced language models (ICLR, 2022) [[paper]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=41e9o6cQPj)\n- An efficient memory-augmented transformer for knowledge-intensive NLP tasks (EMNLP, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.346.pdf)\n- Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering (NLRSE@ACL, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04136)\n- LLM-Based Multi-Hop Question Answering with Knowledge Graph Integration in Evolving Environments (EMNLP Findings, 2024) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15903)] \n\n\n### KG-enhanced LLM Interpretability\n- Language models as knowledge bases (EMNLP, 2019) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.01066.pdf)\n- Kagnet: Knowledge-aware graph networks for commonsense reasoning (Arxiv, 2019) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.02151.pdf)\n- Autoprompt: Eliciting knowledge from language models with automatically generated prompts (EMNLP, 2020) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.15980.pdf)\n- How can we know what language models know? (ACL, 2020) [[paper]](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00324\u002F96460)\n- Knowledge neurons in pretrained transformers (ACL, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.08696.pdf)\n- Can Language Models be Biomedical Knowledge Bases? (EMNLP, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.07154.pdf)\n- Interpreting language models through knowledge graph extraction (Arxiv, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.08546.pdf)\n- QA-GNN: Reasoning with language models and knowledge graphs for question answering (ACL, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.06378.pdf)\n- How to Query Language Models? (Arxiv, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.01928.pdf)\n- Rewire-then-probe: A contrastive recipe for probing biomedical knowledge of pre-trained language models (Arxiv, 2021) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.08173.pdf)\n- When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.10511.pdf)\n- How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.16747.pdf)\n- Can Knowledge Graphs Simplify Text? 
(CIKM, 2023) [[paper]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3583780.3615514)\n\n## LLM-augmented KGs\n### LLM-augmented KG Embedding\n- Entity Alignment with Noisy Annotations from Large Language Models (NeurIPS, 2024) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16806)]\n- LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph Embeddings (Arxiv, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.00305.pdf)\n- Integrating Knowledge Graph embedding and pretrained Language Models in Hypercomplex Spaces (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.02743.pdf)\n- Endowing Language Models with Multimodal Knowledge Graph Representations (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13163)\n- Language Model Guided Knowledge Graph Embeddings (IEEE Access, 2022) [[paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9831788)\n- Language Models as Knowledge Embeddings (IJCAI, 2022) [[paper]](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2022\u002F0318.pdf)\n- Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models (EMNLP, 2020) [[paper]](https:\u002F\u002Faclanthology.org\u002F2020.findings-emnlp.25.pdf)\n- KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation (TACL, 2020) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.06136.pdf)\n\n\n### LLM-augmented KG Completion\n- Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models (COLING 2024) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01972) [[Code]](https:\u002F\u002Fgithub.com\u002Fquqxui\u002FMPIKGC)\n- KG-BERT: BERT for knowledge graph completion (Arxiv, 2019) [[paper]](http:\u002F\u002Farxiv.org\u002Fabs\u002F1909.03193)\n- Multi-task learning for knowledge graph completion with pre-trained language models (COLING, 2020) [[paper]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2020.coling-main.153)\n- Do pre-trained models benefit knowledge graph completion? A reliable evaluation and a reasonable approach (ACL, 2022) [[paper]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2022.findings-acl.282)\n- Joint language semantic and structure embedding for knowledge graph completion (COLING, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.coling-1.171)\n- MEM-KGC: masked entity model for knowledge graph completion with pre-trained language model (IEEE Access, 2021) [[paper]](https:\u002F\u002Fdoi.org\u002F10.1109\u002FACCESS.2021.3113329)\n- Knowledge graph extension with a pre-trained language model via unified learning method (Knowl. 
Based Syst., 2023) [[paper]](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.knosys.2022.110245)\n- Structure-augmented text representation learning for efficient knowledge graph completion (WWW, 2021) [[paper]](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3442381.3450043)\n- Simkgc: Simple contrastive knowledge graph completion with pre-trained language models (ACL, 2022) [[paper]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2022.acl-long.295)\n- Lp-bert: Multi-task pre-training knowledge graph bert for link prediction (Arxiv, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.04843)\n- From discrimination to generation: Knowledge graph completion with generative transformer (WWW, 2022) [[paper]](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3487553.3524238)\n- Sequence-to-sequence knowledge graph completion and question answering (ACL, 2022) [[paper]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2022.acl-long.201)\n- Knowledge is flat: A seq2seq generative framework for various knowledge graph completion (COLING, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.coling-1.352)\n- A framework for adapting pre-trained language models to knowledge graph completion (EMNLP, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.398)\n- Dipping PLMs sauce: Bridging structure and text for effective knowledge graph completion via conditional soft prompting (ACL, 2023) [[paper]](https:\u002F\u002Faclanthology.org\u002F2023.findings-acl.729\u002F)\n\n### LLM-augmented KG-to-Text Generation\n- GenWiki: A dataset of 1.3 million content-sharing text and graphs for unsupervised graph-to-text generation (COLING, 2020) [[paper]](https:\u002F\u002Faclanthology.org\u002F2020.coling-main.217.pdf)\n- KGPT: Knowledge-grounded pre-training for data-to-text generation (EMNLP, 2020) [[paper]](https:\u002F\u002Faclanthology.org\u002F2020.emnlp-main.697.pdf)\n- JointGT: Graph-text joint representation learning for text generation from knowledge graphs (ACL Findings, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.223.pdf)\n- Investigating pretrained language models for graph-to-text generation (NLP4ConvAI, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.nlp4convai-1.20.pdf)\n- Few-shot knowledge graph-to-text generation with pretrained language models (ACL, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.136.pdf)\n- EventNarrative: A large-scale Event-centric Dataset for Knowledge Graph-to-Text Generation (NeurIPS, 2021) [[paper]](https:\u002F\u002Fdatasets-benchmarks-proceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fa3f390d88e4c41f2747bfa2f1b5f87db-Abstract-round1.html)\n- GAP: A graph-aware language model framework for knowledge graph-to-text generation (COLING, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.coling-1.506.pdf)\n\n### LLM-augmented KG Question Answering\n- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph (ICLR, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00959)\n- StructGPT: A General Framework for Large Language Model to Reason over Structured Data (Arxiv, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09645)\n- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA (AAAI, 2022) [[paper]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F20215)\n- An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question 
Answering (World Wide Web Journal, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10368)\n- Empowering Language Models with Knowledge Graph Reasoning for Open-Domain Question Answering (EMNLP, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.650.pdf)\n- DRLK: Dynamic Hierarchical Reasoning with Language Model and Knowledge Graph for Question Answering (EMNLP, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.342\u002F)\n- Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering (ACL, 2022) [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.acl-long.396.pdf)\n- GREASELM: GRAPH REASONING ENHANCED LANGUAGE MODELS FOR QUESTION ANSWERING (ICLR, 2022) [[paper]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=41e9o6cQPj)\n- LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection (IJCKG, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.12888)\n- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering (NAACL, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.naacl-main.45\u002F)\n\n\n## Synergized LLMs + KGs\n### Knowledge Representation\n* Tele-Knowledge Pre-training for Fault Analysis (ICDE, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11298)\n* Pre-training language model incorporating domain-specific heterogeneous knowledge into a unified representation (Expert Systems with Applications, 2023) [[paper]](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0957417422023879)\n* Deep Bidirectional Language-Knowledge Graph Pretraining (NeurIPS, 2022) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09338)\n* KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation (TACL, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.tacl-1.11.pdf)\n* JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs (ACL, 2021) [[paper]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.223\u002F)\n\n### Reasoning\n* A Unified Knowledge Graph Augmentation Service for Boosting Domain-specific NLP Tasks (Arxiv, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.05251.pdf)\n* Unifying Structure Reasoning and Language Model Pre-training for Complex Reasoning (Arxiv, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.08913.pdf)\n* Complex Logical Reasoning over Knowledge Graphs using Large Language Models (Arxiv, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01157)\n## Applications\n\n### Recommender System\n* RecInDial: A Unified Framework for Conversational Recommendation with Pretrained Language Models (Arxiv, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07477.pdf)\n\n### Fault Analysis\n* Tele-Knowledge Pre-training for Fault Analysis (ICDE, 2023) [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11298)\n","# Awesome-LLM-KG\n  [![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FRManLuo\u002FAwesome-LLM-KG) \n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n  ![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002FRManLuo\u002FAwesome-LLM-KG?color=green) \n ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-Welcome-red)\n 
![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FRManLuo\u002FAwesome-LLM-KG?color=yellow)\n![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002FRManLuo\u002FAwesome-LLM-KG?color=lightblue) \n\n大型语言模型（LLMs）与知识图谱（KGs）融合相关论文及资源的集合。\n\n大型语言模型（LLMs）在各类应用中取得了显著的成功和强大的泛化能力。然而，它们往往难以有效捕捉和利用事实性知识。知识图谱（KGs）是一种结构化的数据模型，能够显式地存储丰富的事实性知识。尽管如此，构建知识图谱仍然具有挑战性，且现有的方法在处理现实世界中不完整且动态变化的知识图谱方面存在不足。因此，将大型语言模型与知识图谱相结合，充分发挥两者的优势，成为一种自然的选择。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRManLuo_Awesome-LLM-KG_readme_e158148e8bea.png\" width = \"600\" \u002F>\n\n## 最新消息\n🔭 本项目仍在开发中。您可以点击 **STAR** 和 **WATCH** 来关注最新进展。\n* 我们很高兴发布了首个由**图基础模型**驱动的RAG管道（GFM-RAG），该管道结合了图神经网络与大型语言模型的力量，以增强推理能力。[论文](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.01113) 和 [代码](https:\u002F\u002Fgithub.com\u002FRManLuo\u002Fgfm-rag)。\n* 我们关于KG + LLM推理的最新工作：[图约束推理：基于大型语言模型的知识图谱忠实推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13080) 已被ICML 2025接收。\n* 我们的用于时序知识图谱推理的LLM工作：[大型语言模型引导的时序知识图谱动态适应推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14170) 已被NeurIPS 2024接收！\n* 我们的用于分析LLM推理的KG论文：[利用知识图谱对多跳推理中的思维链进行直接评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11199) 已被ACL 2024接收。\n* 我们的[路线图论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08302) 已被TKDE接收。\n* 我们的用于LLM知识探测的KG论文：[大型语言模型中事实性知识的系统评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.11638) 已被EMNLP 2023接收。\n* 我们的KG + LLM推理论文：[图上的推理：忠实且可解释的大型语言模型推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01061) 已被ICLR 2024接收。\n* 我们的用于KG推理的LLM论文：[ChatRule：利用大型语言模型挖掘逻辑规则以进行知识图谱推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01538) 现已公开。\n* 我们的路线图论文：[统一大型语言模型与知识图谱：一份路线图](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08302) 现已公开。\n\n## 概述\n在这个仓库中，我们收集了近年来大型语言模型与知识图谱融合领域的最新进展。我们提出了一份路线图，总结了三种通用框架：*1) KG增强的LLMs*、*2) LLM增强的KGs*以及*3) 协同作用的LLMs + KGs*。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRManLuo_Awesome-LLM-KG_readme_8aefc626307f.png\" width = \"800\" \u002F>\n\n我们还展示了相关的技术与应用。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRManLuo_Awesome-LLM-KG_readme_ba0c0aee3cce.png\" width = \"600\" \u002F>\n\n我们希望这个仓库能够帮助研究人员和从业者更好地理解这一新兴领域。如果您觉得这个仓库对您有所帮助，请通过引用以下论文来支持我们：\n```bibtex\n@article{llm_kg,\ntitle={Unifying Large Language Models and Knowledge Graphs: A Roadmap},\nauthor={Pan, Shirui and Luo, Linhao and Wang, Yufei and Chen, Chen and Wang, Jiapu and Wu, Xindong},\njournal={IEEE Transactions on Knowledge and Data Engineering (TKDE)},\nyear={2024}\n}\n```\n\n\n## 目录\n- [Awesome-LLM-KG](#awesome-llm-kg)\n  - [最新消息](#news)\n  - [概述](#overview)\n  - [目录](#table-of-contents)\n  - [相关综述](#related-surveys)\n  - [KG增强的LLMs](#kg-enhanced-llms)\n    - [KG增强的LLM预训练](#kg-enhanced-llm-pre-training)\n    - [KG增强的LLM推理](#kg-enhanced-llm-inference)\n    - [KG增强的LLM可解释性](#kg-enhanced-llm-interpretability)\n  - [LLM增强的KGs](#llm-augmented-kgs)\n    - [LLM增强的KG嵌入](#llm-augmented-kg-embedding)\n    - [LLM增强的KG补全](#llm-augmented-kg-completion)\n    - [LLM增强的KG到文本生成](#llm-augmented-kg-to-text-generation)\n    - [LLM增强的KG问答](#llm-augmented-kg-question-answering)\n  - [协同作用的LLMs + KGs](#synergized-llms--kgs)\n    - [知识表示](#knowledge-representation)\n    - [推理](#reasoning)\n  - [应用](#applications)\n    - [推荐系统](#recommendation)\n    - [故障分析](#fault-analysis)\n## 相关综述\n\n* 统一大型语言模型与知识图谱：一份路线图（TKDE，2024年）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.08302.pdf)\n* 关于知识增强型预训练语言模型的综述（Arxiv，2023年）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.13428.pdf)\n* 
预训练语言模型在知识密集型NLP中的应用综述（Arxiv，2022年）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.08772.pdf)\n* 将语言模型视为知识库的综述（Arxiv，2022年）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.06031.pdf)\n* 生成式知识图谱构建综述（EMNLP，2022年）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.12714.pdf)\n* 知识增强型预训练语言模型：全面综述（Arxiv，2021年）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.08455.pdf)\n* 不同类型知识图谱上的推理：静态、时序与多模态（Arxiv，2022年）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.05767)[[代码]](https:\u002F\u002Fgithub.com\u002FLIANGKE23\u002FAwesome-Knowledge-Graph-Reasoning)\n\n## KG增强的LLMs\n\n### 基于知识图谱增强的大语言模型预训练\n- ERNIE：融合结构化实体信息的语言表示（ACL，2019）[[论文]](https:\u002F\u002Faclanthology.org\u002FP19-1139.pdf)\n- 通过图引导的表示学习利用文本中的结构化知识（EMNLP，2020）[[论文]](https:\u002F\u002Faclanthology.org\u002F2020.emnlp-main.722.pdf)\n- SKEP：情感知识增强的情感分析预训练模型（ACL，2020）[[论文]](https:\u002F\u002Faclanthology.org\u002F2020.acl-main.374.pdf)\n- E-bert：面向电子商务的短语与商品知识增强语言模型（Arxiv，2020）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.02835.pdf)\n- 预训练百科全书：弱监督知识预训练语言模型（ICLR，2020）[[论文]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BJlzm64tDH)\n- BERT-MK：将图上下文知识融入预训练语言模型（EMNLP，2020）[[论文]](https:\u002F\u002Faclanthology.org\u002F2020.findings-emnlp.207.pdf)\n- K-BERT：基于知识图谱赋能语言表示（AAAI，2020）[[论文]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F5681\u002F5537)\n- CoLAKE：上下文化的语言与知识嵌入（COLING，2020）[[论文]](https:\u002F\u002Faclanthology.org\u002F2020.coling-main.327.pdf)\n- Kepler：统一的知识嵌入与预训练语言表示模型（TACL，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.tacl-1.11.pdf)\n- K-Adapter：通过适配器将知识注入预训练模型（ACL Findings，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.121.pdf)\n- Cokebert：面向增强型预训练语言模型的上下文知识选择与嵌入（AI Open，2021）[[论文]](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS2666651021000188\u002Fpdfft?md5=75919f85dcb5711fd2fe9e3785b24982&pid=1-s2.0-S2666651021000188-main.pdf)\n- Ernie 3.0：大规模知识增强的语言理解与生成预训练模型（Arxiv，2021）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.02137)\n- 使用确定性事实知识预训练语言模型（EMNLP，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.764.pdf)\n- Kala：知识增强的语言模型适配（NAACL，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.naacl-main.379.pdf)\n- DKPLM：面向自然语言理解的可分解知识增强预训练语言模型（AAAI，2022）[[论文]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21425\u002F21174)\n- Dict-BERT：借助词典增强语言模型预训练（ACL Findings，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.findings-acl.150.pdf)\n- JAKET：知识图谱与语言理解的联合预训练（AAAI，2022）[[论文]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21417\u002F21166)\n- 面向故障分析的电信知识预训练（ICDE，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11298)\n\n### 基于知识图谱增强的大语言模型推理\n- 巴拉克的妻子希拉里：利用知识图谱进行事实感知的语言建模（ACL，2019）[[论文]](https:\u002F\u002Faclanthology.org\u002FP19-1598.pdf)\n- 面向知识密集型NLP任务的检索增强生成（NeurIPS，2020）[[论文]](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2020\u002Ffile\u002F6b493230205f780e1bc26945df7481e5-Paper.pdf)\n- Realm：检索增强语言模型预训练（ICML，2020）[[论文]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.5555\u002F3524938.3525306)\n- QA-GNN：结合语言模型和知识图谱进行问答推理（NAACL，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.naacl-main.45.pdf)\n- 面向长篇故事中显著性推断的记忆与知识增强语言模型（EMNLP，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.65.pdf)\n- JointLK：结合语言模型和知识图谱进行常识性问题解答的联合推理（NAACL，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.naacl-main.372.pdf)\n- 
通过动态文档驱动的知识图谱提升大语言模型的故事理解能力（AAAI，2022）[[论文]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21286)\n- Greaselm：图推理增强的语言模型（ICLR，2022）[[论文]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=41e9o6cQPj)\n- 面向知识密集型NLP任务的高效记忆增强Transformer（EMNLP，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.346.pdf)\n- 面向零样本知识图谱问答的知识增强语言模型提示（NLRSE@ACL，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04136)\n- 在动态环境中集成知识图谱的大语言模型多跳问答（EMNLP Findings，2024）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15903)\n\n### 基于知识图谱增强的大语言模型可解释性\n- 语言模型作为知识库（EMNLP，2019）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.01066.pdf)\n- Kagnet：面向常识推理的知识感知图网络（Arxiv，2019）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.02151.pdf)\n- Autoprompt：利用自动生成的提示从语言模型中提取知识（EMNLP，2020）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.15980.pdf)\n- 我们如何知道语言模型知道什么？（ACL，2020）[[论文]](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00324\u002F96460)\n- 预训练Transformer中的知识神经元（ACL，2021）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.08696.pdf)\n- 语言模型能否成为生物医学知识库？（EMNLP，2021）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.07154.pdf)\n- 通过知识图谱抽取解释语言模型（Arxiv，2021）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.08546.pdf)\n- QA-GNN：结合语言模型和知识图谱进行问答推理（ACL，2021）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.06378.pdf)\n- 如何查询语言模型？（Arxiv，2021）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.01928.pdf)\n- 重连后探测：一种对比性的方法来探查预训练语言模型的生物医学知识（Arxiv，2021）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.08173.pdf)\n- 何时不应信任语言模型：探究参数化与非参数化记忆的有效性与局限性（Arxiv，2022）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.10511.pdf)\n- 预训练语言模型如何捕捉事实知识？一种因果启发式的分析（Arxiv，2022）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.16747.pdf)\n- 知识图谱能否简化文本？（CIKM，2023）[[论文]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3583780.3615514)\n\n## 大语言模型增强的知识图谱\n\n### 大语言模型增强的知识图谱嵌入\n- 基于大语言模型噪声标注的实体对齐（NeurIPS，2024）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16806)\n- LambdaKG：基于预训练语言模型的知识图谱嵌入库（Arxiv，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.00305.pdf)\n- 在超复数空间中整合知识图谱嵌入与预训练语言模型（Arxiv，2022）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.02743.pdf)\n- 为语言模型赋予多模态知识图谱表示（Arxiv，2022）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13163)\n- 语言模型引导的知识图谱嵌入（IEEE Access，2022）[[论文]](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9831788)\n- 语言模型作为知识嵌入（IJCAI，2022）[[论文]](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2022\u002F0318.pdf)\n- Pretrain-KGE：从预训练语言模型中学习知识表示（EMNLP，2020）[[论文]](https:\u002F\u002Faclanthology.org\u002F2020.findings-emnlp.25.pdf)\n- KEPLER：知识嵌入与预训练语言表示的统一模型（TACL，2020）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.06136.pdf)\n\n\n### 大语言模型增强的知识图谱补全\n- 利用大语言模型实现知识图谱补全的多视角改进（COLING 2024）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01972) [[代码]](https:\u002F\u002Fgithub.com\u002Fquqxui\u002FMPIKGC)\n- KG-BERT：用于知识图谱补全的BERT（Arxiv，2019）[[论文]](http:\u002F\u002Farxiv.org\u002Fabs\u002F1909.03193)\n- 基于预训练语言模型的知识图谱补全的多任务学习（COLING，2020）[[论文]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2020.coling-main.153)\n- 预训练模型是否有助于知识图谱补全？一项可靠的评估与合理的方法（ACL，2022）[[论文]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2022.findings-acl.282)\n- 知识图谱补全中的联合语言语义与结构嵌入（COLING，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.coling-1.171)\n- MEM-KGC：基于掩码实体模型的预训练语言模型知识图谱补全（IEEE 
Access，2021）[[论文]](https:\u002F\u002Fdoi.org\u002F10.1109\u002FACCESS.2021.3113329)\n- 通过统一学习方法利用预训练语言模型扩展知识图谱（Knowl. Based Syst.，2023）[[论文]](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.knosys.2022.110245)\n- 结构增强的文本表示学习用于高效知识图谱补全（WWW，2021）[[论文]](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3442381.3450043)\n- Simkgc：基于预训练语言模型的简单对比式知识图谱补全（ACL，2022）[[论文]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2022.acl-long.295)\n- Lp-bert：用于链接预测的多任务预训练知识图谱BERT（Arxiv，2022）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.04843)\n- 从判别到生成：基于生成式Transformer的知识图谱补全（WWW，2022）[[论文]](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3487553.3524238)\n- 序列到序列的知识图谱补全与问答（ACL，2022）[[论文]](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2022.acl-long.201)\n- 知识是扁平的：一种适用于多种知识图谱补全任务的seq2seq生成框架（COLING，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.coling-1.352)\n- 适配预训练语言模型用于知识图谱补全的框架（EMNLP，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.398)\n- Dipping PLMs Sauce：通过条件软提示桥接结构与文本以实现有效知识图谱补全（ACL，2023）[[论文]](https:\u002F\u002Faclanthology.org\u002F2023.findings-acl.729\u002F)\n\n### 大语言模型增强的知识图谱到文本生成\n- GenWiki：包含130万条内容共享文本和图谱的无监督图谱到文本生成数据集（COLING，2020）[[论文]](https:\u002F\u002Faclanthology.org\u002F2020.coling-main.217.pdf)\n- KGPT：面向数据到文本生成的知识驱动预训练（EMNLP，2020）[[论文]](https:\u002F\u002Faclanthology.org\u002F2020.emnlp-main.697.pdf)\n- JointGT：面向知识图谱文本生成的图-文联合表示学习（ACL Findings，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.223.pdf)\n- 探讨预训练语言模型在图谱到文本生成中的应用（NLP4ConvAI，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.nlp4convai-1.20.pdf)\n- 基于预训练语言模型的少样本知识图谱到文本生成（ACL，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.136.pdf)\n- EventNarrative：大规模事件驱动型知识图谱到文本生成数据集（NeurIPS，2021）[[论文]](https:\u002F\u002Fdatasets-benchmarks-proceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fa3f390d88e4c41f2747bfa2f1b5f87db-Abstract-round1.html)\n- GAP：面向知识图谱到文本生成的图感知语言模型框架（COLING，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.coling-1.506.pdf)\n\n### 大语言模型增强的知识图谱问答\n- UniKGQA：面向知识图谱上多跳问答的统一检索与推理（ICLR，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00959)\n- StructGPT：大型语言模型基于结构化数据推理的通用框架（Arxiv，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09645)\n- GPT-3在少样本知识型视觉问答中的实证研究（AAAI，2022）[[论文]](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F20215)\n- 预训练语言模型在简单知识图谱问答中的实证研究（World Wide Web Journal，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10368)\n- 通过知识图谱推理增强语言模型以支持开放域问答（EMNLP，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.650.pdf)\n- DRLK：结合语言模型与知识图谱的动态层次推理问答（EMNLP，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.342\u002F)\n- 子图检索增强模型用于多跳知识库问答（ACL，2022）[[论文]](https:\u002F\u002Faclanthology.org\u002F2022.acl-long.396.pdf)\n- GREASELM：面向问答的图推理增强语言模型（ICLR，2022）[[论文]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=41e9o6cQPj)\n- LaKo：通过后期知识到文本注入实现知识驱动的视觉问答（IJCKG，2022）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.12888)\n- QA-GNN：结合语言模型与知识图谱进行问答推理（NAACL，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.naacl-main.45\u002F)\n\n\n## 大语言模型与知识图谱协同\n\n### 知识表示\n* 用于故障分析的电信知识预训练（ICDE，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11298)\n* 将领域特定的异构知识融入统一表示的语言模型预训练（Expert Systems with Applications，2023）[[论文]](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0957417422023879)\n* 深度双向语言-知识图谱预训练（NeurIPS，2022）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09338)\n* 
KEPLER：知识嵌入与预训练语言表示的统一模型（TACL，2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.tacl-1.11.pdf)\n* JointGT：基于知识图谱的文本生成中图-文联合表示学习（ACL 2021）[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.findings-acl.223\u002F)\n\n### 推理\n* 用于提升领域特定自然语言处理任务的统一知识图谱增强服务（Arxiv，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.05251.pdf)\n* 面向复杂推理的结构推理与语言模型预训练统一方法（Arxiv，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.08913.pdf)\n* 基于大型语言模型的知识图谱复杂逻辑推理（Arxiv，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01157)\n\n## 应用\n### 推荐系统\n* RecInDial：基于预训练语言模型的对话式推荐统一框架（Arxiv，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07477.pdf)\n\n### 故障分析\n* 用于故障分析的电信知识预训练（ICDE，2023）[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11298)","# Awesome-LLM-KG 快速上手指南\n\n**Awesome-LLM-KG** 并非一个可直接安装的单一软件包或库，而是一个**精选的资源集合仓库**。它整理了关于“大语言模型（LLM）”与“知识图谱（KG）”融合的最新论文、代码实现和技术路线图。\n\n本指南将介绍如何利用该仓库进行学术研究、技术选型及代码复现。\n\n## 1. 环境准备\n\n由于本仓库主要包含文献列表和指向各个独立项目的链接，你只需要基础的开发环境即可浏览资源。若需运行仓库中提及的具体算法（如 GFM-RAG, ChatRule 等），则需根据对应项目的 `requirements.txt` 配置环境。\n\n### 系统要求\n*   **操作系统**: Linux (推荐), macOS, Windows\n*   **Python**: 3.8+ (具体版本视目标子项目而定)\n*   **Git**: 用于克隆仓库\n\n### 前置依赖\n*   **浏览器**: 用于查看整理好的 Markdown 文档和论文链接。\n*   **深度学习框架**: 如需运行代码，通常需安装 `PyTorch` 或 `TensorFlow`，以及 `Transformers`, `DGL`\u002F`PyG` (图神经网络库) 等。\n\n## 2. 安装步骤（获取资源）\n\n克隆仓库到本地以获取最新的论文列表、代码链接和技术路线图。\n\n```bash\n# 克隆仓库\ngit clone https:\u002F\u002Fgithub.com\u002FRManLuo\u002FAwesome-LLM-KG.git\n\n# 进入目录\ncd Awesome-LLM-KG\n```\n\n> **国内加速建议**：\n> 如果直接克隆速度较慢，可使用国内镜像源（如 Gitee 镜像，若有）或配置 Git 代理：\n> ```bash\n> # 临时配置 HTTP 代理 (请替换为你的代理地址)\n> export http_proxy=http:\u002F\u002F127.0.0.1:7890\n> export https_proxy=http:\u002F\u002F127.0.0.1:7890\n> git clone https:\u002F\u002Fgithub.com\u002FRManLuo\u002FAwesome-LLM-KG.git\n> ```\n\n## 3. 基本使用\n\n本仓库的核心价值在于其分类清晰的**技术路线图**和**论文索引**。以下是三种主要使用方式：\n\n### 方式一：浏览技术路线与论文\n直接在本地或 GitHub 页面阅读 `README.md`。仓库将 LLM 与 KG 的结合分为三大框架，你可以根据研究兴趣查找对应论文：\n\n1.  **KG-enhanced LLMs (KG 增强的 LLM)**\n    *   利用 KG 提升 LLM 的预训练、推理能力或可解释性。\n    *   *适用场景*: 事实性问答、减少幻觉、领域适配。\n2.  **LLM-augmented KGs (LLM 增强的 KG)**\n    *   利用 LLM 辅助 KG 的嵌入表示、补全、生成或问答。\n    *   *适用场景*: 自动化构建图谱、图谱补全、自然语言交互查询。\n3.  **Synergized LLMs + KGs (LLM 与 KG 协同)**\n    *   两者深度结合，共同进行知识表示与推理。\n    *   *适用场景*: 复杂多跳推理、动态时序图谱分析。\n\n### 方式二：复现特定项目\n仓库中的每条记录都附带了 `[paper]` (论文) 和 `[code]` (代码) 链接。以仓库推荐的 **GFM-RAG** (图基础模型驱动的 RAG) 为例：\n\n1.  在 \"News\" 或列表中点击对应项目的 **Code** 链接跳转至子项目仓库。\n2.  
进入子项目后，按照其独立的 `README` 进行安装和运行。例如：\n    ```bash\n    # 示例：假设已进入 gfm-rag 子项目目录\n    pip install -r requirements.txt\n    python run.py --config config.yaml\n    ```\n\n### 方式三：引用与追踪\n如果你在该领域的研究中受益于此资源列表，请在你的论文中引用其核心综述文章：\n\n```bibtex\n@article{llm_kg,\n  title={Unifying Large Language Models and Knowledge Graphs: A Roadmap},\n  author={Pan, Shirui and Luo, Linhao and Wang, Yufei and Chen, Chen and Wang, Jiapu and Wu, Xindong},\n  journal={IEEE Transactions on Knowledge and Data Engineering (TKDE)},\n  year={2024}\n}\n```\n\n*提示：点击仓库顶部的 **STAR** 和 **WATCH** 按钮，以便在 GitHub 上接收该项目关于新论文和新代码更新的通知。*","某金融科技公司风控团队正试图构建一个能实时解释复杂欺诈链条的智能问答系统，以辅助分析师快速决策。\n\n### 没有 Awesome-LLM-KG 时\n- **事实幻觉严重**：纯大模型在回答涉及具体股权穿透或关联交易时，常编造不存在的公司关系，导致误判风险。\n- **动态更新滞后**：知识图谱构建耗时且僵化，难以利用大模型自动捕捉新闻中突发的企业变更，图谱数据往往落后于现实。\n- **推理过程黑盒**：模型给出的结论缺乏可追溯的逻辑路径，分析师无法验证其是否基于真实的图谱证据，难以通过合规审计。\n- **技术选型迷茫**：团队在\"KG 增强 LLM\"还是\"LLM 补全 KG\"等技术路线上反复试错，缺乏系统性论文指引，研发周期被大幅拉长。\n\n### 使用 Awesome-LLM-KG 后\n- **事实精准锚定**：参考库中\"图约束推理（Graph-constrained Reasoning）\"等方案，让模型严格基于图谱事实生成回答，大幅减少关键实体关系上的幻觉。\n- **动态自适应进化**：引入\"时序知识图谱推理\"最新成果，利用大模型自动感知并更新图谱中的动态变化，确保欺诈网络实时鲜活。\n- **逻辑可信可查**：应用\"思维链直接评估\"技术，系统能输出基于图谱跳转的完整推理路径，让每一条风控建议都有据可查。\n- **架构快速落地**：依托清晰的三大统一框架路线图和 GFM-RAG 等成熟流水线案例，团队迅速锁定最佳技术组合，将原型开发时间缩短 60%。\n\nAwesome-LLM-KG 通过提供前沿的统一框架与实证方案，成功将大模型的泛化能力与知识图谱的精确性深度融合，打造出既聪明又靠谱的行业级推理引擎。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRManLuo_Awesome-LLM-KG_e158148e.png","RManLuo","Linhao Luo","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FRManLuo_344dd426.jpg","Research Fellow at Monash University | AI, LLM, Graph, Agent","Monash University","Melbourne",null,"https:\u002F\u002Frmanluo.github.io\u002F","https:\u002F\u002Fgithub.com\u002FRManLuo",2584,179,"2026-04-12T17:03:35",1,"","未说明",{"notes":88,"python":86,"dependencies":89},"Awesome-LLM-KG 是一个论文和资源合集列表（Awesome List），并非一个可直接运行的单一软件工具或代码库。README 中列出的内容是关于大语言模型（LLM）与知识图谱（KG）结合的研究论文、路线图及相关子项目（如 GFM-RAG、ChatRule 等）的链接。因此，该仓库本身没有特定的操作系统、GPU、内存、Python 版本或依赖库要求。如需运行其中提到的具体算法或子项目，请参考对应论文的官方代码仓库及其环境配置说明。",[],[14,35,16],[92,93,94,95,96,97,98,99,100],"awsome","chatgpt","gpt-4","kg","knowledge-graph","language-model","large-language-model","llm","survey","2026-03-27T02:49:30.150509","2026-04-14T20:52:28.276603",[],[]]
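As a companion to the "LLM-augmented KG Completion" entries in the embedded README, here is a minimal sketch of the KG-BERT recipe (KG-BERT: BERT for knowledge graph completion, listed above): verbalize a triple and let a sequence classifier score its plausibility. The checkpoint name, the label order, and the `[SEP]`-joined verbalization are assumptions of this sketch rather than the paper's released code, and an untrained classification head yields meaningless scores until the model is fine-tuned on positive triples versus corrupted negatives, which is omitted here.

```python
# Sketch of KG-BERT-style triple scoring (assumed setup; not the paper's code).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # plausible vs. implausible triple
)
model.eval()

def triple_score(head: str, relation: str, tail: str) -> float:
    """Verbalize (h, r, t) as [CLS] h [SEP] r [SEP] t [SEP] and return P(plausible)."""
    inputs = tok(head, f"{relation} [SEP] {tail}", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, 1].item()  # label 1 assumed "plausible"

# Link prediction amounts to ranking candidate tails for (Paris, capital of, ?).
for cand in ["France", "Germany"]:
    print(cand, triple_score("Paris", "capital of", cand))
```

Fine-tuning this classifier on a benchmark KG is what turns the ranking loop above into the link-prediction evaluation used throughout the completion papers in the list.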