[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-tomhartke--knowledge-graph-from-GPT":3,"similar-tomhartke--knowledge-graph-from-GPT":50},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":18,"owner_email":18,"owner_twitter":18,"owner_website":19,"owner_url":20,"languages":21,"stars":30,"forks":31,"last_commit_at":32,"license":33,"difficulty_score":34,"env_os":35,"env_gpu":36,"env_ram":36,"env_deps":37,"category_tags":40,"github_topics":18,"view_count":44,"oss_zip_url":18,"oss_zip_packed_at":18,"status":45,"created_at":46,"updated_at":47,"faqs":48,"releases":49},4740,"tomhartke\u002Fknowledge-graph-from-GPT","knowledge-graph-from-GPT","Using GPT to organize and access information, and generate questions. Long term goal is to make an agent-like research assistant.","knowledge-graph-from-GPT 是一个旨在为大型语言模型构建“外部记忆”的开源项目。它利用 GPT 自动整理、分类信息并生成知识图谱，让 AI 不仅能回答问题，还能主动发现知识盲区并提出澄清性问题，逐步向具备自主研究能力的智能助手演进。\n\n该项目重点解决了大模型常见的几大短板：缺乏长期记忆导致无法从少量示例中持续学习、逻辑推理结构松散、决策过程难以解释，以及缺乏连贯的自主性。通过将知识以人类可读的自然语言形式存储在图谱中，系统实现了更高的可解释性，并支持在不重新训练模型参数的情况下，通过更新记忆模块来持续提升系统能力。\n\n其技术亮点在于完全透明的知识结构设计，所有事实与嵌入均以自然语言呈现，便于人类理解和干预。长远来看，它适用于科研人员进行文献综述与假设生成、教育工作者开发个性化辅导工具，或开发者构建需要长期记忆与逻辑推理的智能应用。对于希望探索 AI 代理（Agent）前沿架构的研究者和工程师而言，这是一个极具参考价值的实践框架。","# A knowledge graph from GPT\n \n## High-level description\nThis program is meant to create an external memory module for a language model, and ultimately provide \nagent-like capabilities to a language model (long-term goal).\n- The combined system would ideally gather information via a text interface, categorize and structure it, then identify gaps in knowledge, \nor inconsistencies.\n- The language model can be shown subsets of this information (chosen based on the structure of \nthe knowledge graph), and then choose to propose further 
questions to ask the environment as clarification, \nbuilding knowledge over time.\n  \n### Goals\n\nThis project aims to address a few of the major shortcomings of language models:\n* Memory \n  * External memory solves the problem of lack of long-term learning from one or two examples.\n* Logic \n  * Language models are generally unable to force structured responses, \n  but recalling logical arguments from memory could help. \n* Interpretability \n  * By observing what memories are accessed, we can understand sources of resulting statements and information flow.\n  * Alternatively, by observing how the language model processes and categorizes information, we can understand the \n  inherent structure of the information learned by the raw language model.\n* Developing agency\n  * Language models lack coherent agency, as language models generate both sides of a conversation.\n  * Structuring the language model as a component in a reinforcement learning system, with the goal \n  of categorizing and uncovering information, restores agency.\n* Computational resource use for training \n  * Can we continuously improve the entire machine learning system (model and RL wrapper), without \n  continuously retraining the model parameters? Simply by recursively improving the memory (which is re-inserted\n  through the prompt).\n  * Language models could be trained for specialized sub-component tasks in the resulting global system.\n* Bootstrapping capabilities\n  * Is a minimal level of reason and analogy sufficient to tackle arbitrarily complex processing of knowledge,\n  by breaking down ideas into minimal components, and treating each separately?\n  * There are probably opportunities here for bootstrapping and policy improvement of the language model,\n    through self-generated examples (as used below in extracting question embeddings, and \n  generating questions from examples of clusters of questions). \n  \n### Target uses long term\n1. 
Database generation and parsing + question answering\n    * Summarize a research field or class notes or textbook\n    * Identify conflicting information and disputes,\n   or different explanations for the same topic or idea\n\n2. Educational tool or personal learning tool\n    * Construct the agent to serve as a spaced-repetition flashcard assistant.\n      * Learn what knowledge the user has, and how quickly they forget it, then periodically reprompt them. \n      * Learn to suggest new information for the user to learn, tailored to their current knowledge and interests. \n      * Do everything with a flexible, natural language interface to pose questions and interpret responses.\n    * A structured educational tool\n      * Fix the knowledge in the graph (distilled from experts), then use the knowledge structure and spaced-repetition\n      framework to understand a student's learning needs, and interface with them.\n\n3. Hypothesis generation for scientific research \n    * Process entire scientific fields, including papers, textbooks, audio lectures, etc.\n    * Come up with novel ideas and research proposals \n\n### Outline of the program \n\nThe program is designed as a wrapper for a language model in python.\nThe knowledge graph (stored in python) makes periodic calls to the language model when necessary.\n\nA key feature is that the structure of the knowledge graph is fully human-interpretable.\nEven the embeddings of information, and the facts themselves, are in natural language. \nMoreover, all steps of the algorithm are observable to the user (what information is referenced, and why).\n\nSteps:\n1. Extraction\n   * The language model is shown examples of \"flashcards\" (minimal question\u002Fanswer pairs in a field of knowledge) \n   and extracts an embedding of the hierarchy of natural language concepts that the fact relates to \n   (ie. science, biology, cells, DNA...).\n2. 
Embedding\n   * The knowledge graph program looks at the combined set of all embeddings for all known information, and \n   constructs a human-interpretable vector embedding of each flashcard. \n      * These vector embeddings are in natural language\n        * The \"dimensions\" of the embedding vector are the names of words or concepts in the graph\n          (ie. there is a dimension for \"Science\" and a dimension for \"DNA\")\n      * The overlap of embedding vectors is high for similar concepts and similar facts, allowing clustering of knowledge. \n      * The embeddings are tailored to the local structure of learned knowledge \n     (they depend on all facts learned, not \"the internet\" or some other database).\n3. Clustering and Structuring\n   * Using embedding vectors, facts and concepts can be clustered hierarchically, giving a natural knowledge structure \n   for search and exploration. \n4. Question answering\n   * Given a novel question from the environment (ie. a query to the database), the model \n   can extract a tailored natural language embedding (the hierarchy of concepts)\n     * We can take advantage of few-shot learning here to extract these concepts, by just showing \n     examples of previous questions + extracted concepts, then prompting with the new question and asking for the \n     extracted concepts following the same style. \n     * This provides opportunities for recursive self-improvement. \n   * This question embedding can be used to identify relevant knowledge in the graph\n   * The language model can observe all relevant learned knowledge, \n   and the current question, to answer the current question.\n5. 
Hypothesis generation\n   * Once the knowledge in the graph is structured (clustered), the language model can use \n   clusters of existing questions as inspiration (few-shot examples) for generating further clusters of questions.\n   * By showing the model a few sets of 5 related scientific questions,\n   then prompting with 4 new (but related) questions, we can generate conceptually new questions in the style of \n   a scientific researcher. \n\nFuture work is required to really make the program agent-like, by choosing what to do hierarchically,\nand where to explore further. See some suggestions below.\n\n\n## Details of the program: Constructing the knowledge graph\n\n### Initial concept extraction and embedding\n1. Concept extraction from question\u002Fanswer pair:\n![Alt text](docs\u002FConceptExtraction.jpg?raw=true \"Optional\")\n    - See files for the explicit chained prompts used to extract the few-word concepts. \n    - The basic idea is that it extracts some information, then reprompts the language model with that\n   extracted information as part of the context, thereby progressively refining the concept hierarchy. \n2. Concept embedding (raw embedding then detailed embedding)\n![Alt text](docs\u002FConceptEmbedding.jpg?raw=true \"Optional\")\n   - Step 1: First we extract the raw card-concept connections, which are just a list of how often two concepts appear together, \n   as a fraction of the total time they appear.\n     - The connection strength from one concept to another is the fraction of the time the concepts appear together \n     (specifically the fraction of times the first concept appears where the second concept is present).\n   - Step 2: Next, we extract a raw concept embedding, in which the statistical significance of the connection between two concepts is measured.\n     - This is done in a bit of a complicated way, which is probably not necessary. 
Feel free to skip on a first read through.\n         - For each concept, all of its neighboring concepts are ranked by their relative abstraction (measured in the card concept hierarchy).\n         - Then at each level of abstraction, we find the average connection strength within an abstraction window. \n         - This average connection strength defines a beta distribution on the expected observed connection strengths for any two concepts. This allows us to identify outliers.\n         - Finally, we find the statistical outliers on the connection strength \n         (those concepts that appear together more often than expected, with a certain threshold).\n         - Something like a quasi-\"upper confidence bound\" probability is calculated to get the embedding connection strength.\n           - This says, even if I happen to be very wrong in my expected average connection strength and it's much higher\n             (in the worst case scenario, say 5% of the time), what might my average connection strength actually be?\n           - Then, even in that worst case scenario, what is the fraction of expected connection strengths which is smaller than my observed connection strength?\n           - Thus if an observed connection strength is larger than 90% of all worst case expected connection strengths, it's definitely significant.\n         - The ultimate embedding connection strength is then the statistical weight of the upper confidence bound beta distribution below the observed connection strength.\n   - Step 3: We gather the final concept embeddings (long-range) based on the raw concept embeddings from step 2. 
\n     - This is once again somewhat complicated, but probably more necessary.\n     It allows us to have a high-quality similarity metric between embeddings (it's just something I hacked together to have the \n     desired properties).\n       - In brief, each final concept embedding is designed so that its overlap with its neighbor concepts' raw \n       embeddings is self-consistent, and representative of the local structure of the raw embedding graph, in some sense.\n       - The final concept embedding has the following property:\n         - The final embedding magnitude at each neighboring concept is equal to the fractional projection of the \n         neighboring concept raw embedding onto the entire final embedding vector. \n         - Geometrically, this means that the final embedding vector is more or less pointing along an \n         \"average\" direction of its neighbor raw embedding vectors, weighted by their original relevance.  \n       - What this does in practice is significantly extend the conceptual reach of the embedding\n       of each concept, since it now includes components that are really \"second order\" concept connections.\n       - To be honest, I don't have this process fully figured out (why it works well), but it does.\n3. Card embedding based on concepts\n![Alt text](docs\u002FCardEmbedding.jpg?raw=true \"Optional\")\n    - The card embedding is simple. 
It is a weighted sum of the concept embedding vectors from the concepts identified in the card.\n    - There is one added complication: we punish the presence of \"common\" concepts, because they aren't helpful in identifying\n   the structure of knowledge (knowing that most of the concepts are related to \"science\" isn't very helpful).\n        - To do this, concept vectors are summed, but are weighted inversely to their prevalence in the graph.\n        - Finally, after summing the concept vectors, individual components are once again divided by their prevalence in the graph\n       in order to further favor the presence of unique concepts in the resulting vector. \n\n### Structuring the knowledge graph\nThe natural language embeddings of each concept or card allow one to cluster concepts through various \nmethods. \n\n1. Defining the similarity metric\n> ![Alt text](docs\u002FSimilarityMetric.jpg?raw=true \"Optional\")\nHere is an example matrix of all card-card similarities in the knowledge graph.\n![Alt text](docs\u002FExampleSimilarityMatrix.jpg?raw=true \"Optional\") \nHere is an interactive clustering visualization.\n![Alt text](docs\u002FClusteringDemonstration.jpg?raw=true \"Optional\") \n \n\n2. Example similarity calculations\n    - ![Alt text](docs\u002FExampleNodeNodeOverlap.jpg?raw=true \"Optional\")  \n    - ![Alt text](docs\u002FExampleNodeCardOverlap.jpg?raw=true \"Optional\")  \n\n3. Clustering ideas (question\u002Fanswer pairs) based on the similarity metric. \n   - Here are some examples of the results of clustering cards. \n     - These examples are \n     phrased as a \"family tree\" of clusters (raw cards get grouped into small clusters, then larger clusters, and so on. 
\n     The raw cards are children, first order clusters are parents, second order clusters are called grandparents, etc.)\n   ![Alt text](docs\u002FExampleClusterHierarchies.jpg?raw=true \"Optional\")\n   - The clustering algorithm is just something I hacked together, and isn't very polished (but works reasonably well).\n     - See the code for details, but I'll describe the rough idea here. The rough idea is the following:\n       - Step 1: find approximate clusters around each card\n         - For each card, it finds nearby cards by order of similarity.\n         - It proposes using the first k cards as a cluster.\n         - The approximate \"cluster quality\" is calculated. \n           - This is a metric which is high if all the cards have a high similarity to the target card, and if there are more cards.\n           - However, it is penalized if any of the cards have a very low similarity to some other card (in particular, \n           it penalizes the cluster based on the minimum similarity measured between cards). \n           - There is a control parameter for the target cluster size, which sets the tradeoff between self similarity and lack of dissimilarity.\n         - As the proposed cluster gets larger, it may include more similar cards (and thus have a better cluster metric.)\n         However, if any proposed card has a bad similarity to some other card, that quickly kills the cluster quality.\n         - Finding the optimal cluster metric, we are left with a cluster of cards which are similar to each other, and none of which are particularly dissimilar to each other.\n         - Lastly we clean up the cluster by testing adding or removing a few individual cards (out of order), and seeing \n         if this improves the cluster quality, until this converges. 
\n         - We return a target cluster around the given card.\n       - Step 2: Vary the target cluster size and find the cluster with the highest global self similarity (a new metric).\n         - In this step, we find a meta-metric for the cluster quality (again dependent on how self-similar the cluster is, but using the step 1 \n         cluster metric as a way to define proposed clusters).\n         - We perform step 1 for both the target card, and all of its identified cluster partners. This is effectively checking\n         not only what cluster we identify, but also reflexively at second order what those cards identify.\n         - If the proposed cluster partners all identify the same cluster of cards as the target card identifies, that's great. \n         It's a well isolated cluster.\n         - On the other hand, if the proposed cluster partners identify their own clusters which are much bigger, or \n         much smaller than the target card's cluster, then the proposed cluster isn't well isolated.\n         - The final meta-metric increases with cluster size, and decreases when the reflexive proposed clusters differ a lot from the target cluster. \n       - Step 3: Find the optimal cluster for each card, and repeat to form a cluster hierarchy.\n         - At each level of clustering, we take the clusters of cards from the previous level and treat those\n         clusters as one big \"card\" (a combination of all the concepts in all of the cards in the cluster) \n         and combine all their concept embeddings to get an embedding that represents the cluster.\n         - We can then cluster those cluster embeddings, and so on.\n     - The main point here is that some kind of clustering is readily possible. 
\n       - Probably my way isn't efficient or optimal quality, but it gets the job done as a proof of concept.\n\n### Querying the knowledge graph\nTo answer a question about facts in the knowledge graph, we determine the component concepts of a question\nand then gather similar cards to this question to re-expose to the language model at answering time.\n* Note: we do NOT use the same prompts as the original flashcard concept extraction. \n  * Instead, we leverage the work already done by the model (the examples of concept extraction from questions)\n  to serve as examples for rapid concept extraction in the same style.\n\n1. Construct a question embedding\n![Alt text](docs\u002FQuestionEmbedding.jpg?raw=true \"Optional\")\n![Alt text](docs\u002FExampleQuestionEmbeddingExtraction.jpg?raw=true \"Optional\")\n    - This few-shot prompting teaches the model to extract concepts in the correct level of detail\n      (going from most abstract to least abstract concepts), and to use the correct format (capitalize first letter, \n   and stick to one or two words usually)\n    - The two-stage reprompting also further allows the model to see what concepts are extracted from similar cards,\n   which further improves the quality of the question embedding.\n     \n2. Gather similar cards \n    1. In the simplest case, this can just mean take the top k cards ranked via the similarity metric (ie the top 10 cards or so).\n3. Answer the question by re-prompting the language model while showing related information\n![Alt text](docs\u002FQuestionAnswering.jpg?raw=true \"Optional\")\n![Alt text](docs\u002FExampleQuestionAnswering.jpg?raw=true \"Optional\")\n4. Note that, during this final answer, we can choose whether to allow the language model to use outside knowledge, or only information directly  \nwithin the re-prompted cards.\n\n### Question generation \nThis is not fully implemented yet, but has a basic demonstration. 
In the future, this would ideally \ninclude hypothesis generation for scientific research.\n\nWe essentially use the measured knowledge structure to gather clusters of existing questions, on similar topics,\nand use them as few-shot examples for generating further questions in a similar style.\n\n![Alt text](docs\u002FExampleQuestionGeneration.jpg?raw=true \"Optional\") \n\nFurthermore, we can change the prompt to display (and ask for) questions with increasing or decreasing level of abstraction.\nThis is possible because the flashcards' component concepts are originally extracted in order of abstraction,\nso we can measure this and incorporate it into the structure of the graph.\n\nLong term, this process is promising as a method of recursive self-improvement.\nIf we can get the model to ask and generate good questions, and if we can identify the best clusters of scientific questions\nin the knowledge graph to use as generative examples, the entire combined system should recursively self improve\n(think of AlphaGo zero). \n\n### User interface\nMarch 2023: I added a user interface which can be launched to interactively explore questions, then\ngenerate answers, then save them to the knowledge graph.\n\nPurpose: Use the interface to explore related topics and intriguing questions.\n\nGeneral workflow:\n- First, select the topic and goal command, then use the \"generate new questions\" button to generate some number\nof new questions the user wants to explore.\n- Second, look at the generated questions, and choose which ones to keep or not.\n- Third, gather related questions in the graph so that the LLM knows what you already learned.\n- Generate more questions, and repeat the triage process, until you have a set of questions you want answered.\n- Lastly, answer the new questions, then edit the answers, then save to the knowledge graph and long term storage. 
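\n\nThe graph-lookup step in the workflow above (gathering related questions so the LLM knows what you already learned) is just top-k retrieval under the similarity metric. A minimal sketch, assuming dict-based natural-language embeddings and a cosine-style overlap (the names and data here are invented for illustration, not taken from the repo):\n\n```python\n# Sketch: rank stored cards by overlap with a question embedding.\n# Each embedding is a dict mapping concept names (the natural-language\n# dimensions described earlier) to weights.\n\ndef overlap(a, b):\n    # Cosine-style overlap between two sparse natural-language embeddings.\n    dot = sum(a[c] * b[c] for c in set(a) & set(b))\n    norm_a = sum(v * v for v in a.values()) ** 0.5\n    norm_b = sum(v * v for v in b.values()) ** 0.5\n    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0\n\ndef gather_similar_cards(question_emb, card_embs, k=10):\n    # Return the top-k cards to re-expose to the language model.\n    ranked = sorted(card_embs, key=lambda card: overlap(question_emb, card_embs[card]), reverse=True)\n    return ranked[:k]\n\ncards = {\n    'What does DNA encode?': {'Science': 0.2, 'Biology': 0.5, 'DNA': 1.0},\n    'What is a derivative?': {'Math': 0.6, 'Calculus': 1.0},\n}\nquestion = {'Biology': 0.4, 'DNA': 0.9}\nprint(gather_similar_cards(question, cards, k=1))  # ['What does DNA encode?']\n```\n\nThe retrieved cards can then be pasted into the answering prompt as context, exactly as in the question-answering flow above. 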
\n\nExample screenshots:\n\n>Generating questions initially, and finding related information in graph:\n![Alt text](docs\u002FUserInterface_GeneratingQuestions.jpg?raw=true \"Optional\") \n\n>Selecting which questions to keep, or remove, and repeatedly regenerating new questions:\n![Alt text](docs\u002FUserInterface_RefiningQuestions.jpg?raw=true \"Optional\") \n\n>Finally \"Answering new questions\" using ChatGPT, then editing and \"Save and Reset\" to \nload this information into the knowledge graph, and save in long term data.\n![Alt text](docs\u002FUserInterface_AnsweringQuestions.jpg?raw=true \"Optional\") \n\n\n## Future extensions\n\n### Agent-like behavior and exploration\nThe ultimate goal here is to make a self-improving agent that can explore and augment its conceptual environment \nusing reinforcement learning and fast algorithms which process the information in the knowledge graph, with only sporadic calls \nto the language model to actually structure the information. \n\nIdeally the knowledge graph can be parsed via reinforcement learning algorithms to identify gaps in knowledge, and clusters of knowledge,\nto then determine what areas to explore further and ask questions about. Then the language model can\nbe used to actually generate meaningful and conceptually different questions, which can be asked to the environment (ie. the internet) \nto gather more information. \n\nThis will require some notion of \"value\" for the information in the knowledge graph (to balance pure exploration with \nexploration of relevant concepts). 
\nIt will probably also be necessary to set up self pruning and revisiting of the knowledge graph, when in an idle state (like sleeping).\n\n### Structured and hierarchical question-answering\nFor improved quality of question answering, it would be good for the language model to choose to\nbreak down a question into a multi-stage question prompt with sub-questions.\nThese questions and sub-questions and their ultimate answers could be added to the knowledge graph as memory.\n\n### Specializing the language model\nIt is possible to train the language model to improve itself based on examples \n(for example when extracting the concept hierarchy). \nOne could have a dedicated smaller network for this (similar to the dedicated internal policy and value networks in AlphaGo zero).\n\n### Set up as a personal learning assistant \nCan we build a smart spaced-repetition memory system for learning with language models?\n* Incorporate spaced repetition and probability of memory. \n* Generate complex and organic questions to test concepts within a cluster of cards.\n* Learn some parameters for you (your personal rate of forgetting, your interests and base knowledge).\n* Have preset expert knowledge structures to learn, and teach (ie a module of cards on statistics).\n* Have a nice way of visualizing your personal knowledge graph by clustering.\n\nAdditional functionality:\n* The knowledge graph could be set up as a plugin to always ask you to summarize what you read in the last 10 minutes.\n* It could interface with OpenAI Whisper or something analogous to do speech-to-text and then summarize.\n\n### Long term foreseen issues\n* It would probably be helpful to use a synonym directory to help collapse nodes (concepts).\n* How to handle datatypes other than text? 
How to handle very short or very long texts (flashcards) or concepts (things like math, or huge formulas?).\n\n\n\n## Final comments, additional references, thoughts \nThere are a variety of related concepts floating around on the internet. \nI want to give a rough pointer to them here (not comprehensive).\n\nEmbedding vectors and figuring out how to reference memory are common concepts. \nThese are a few examples I came across after the fact: \n* OpenAI embeddings\n* GPT-index: https:\u002F\u002Fgithub.com\u002Fjerryjliu\u002Fgpt_index\n\nThere are a variety of apps and startups and research projects trying to do similar things, or parts of this: \n* Startups to build personal knowledge graphs or notes assistance (I believe using language models).\n* Tools to summarize articles and information into discrete facts (could be used as a sub-component in future work)\n* There are various research directions using re-exposure of facts to a language model to improve quality. \nFor example, non-parametric transformers, and various other transformer architectures which save memory. \n* There are attempts to build systems for structured question answering through breaking down into components, such as \nthe factored cognition primer - https:\u002F\u002Fprimer.ought.org\u002F.\n\nHinton suggests that using a hierarchy of concepts is crucial: https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.12627 (\"How to represent part-whole hierarchies in a neural network\").\n\n### Unique(ish) contributions\nJust to summarize, a few of the more unique aspects here in my implementation are (to my knowledge, though I \nprobably just haven't looked enough):\n* Embedding structure - quality and interpretability\n    * The use of local knowledge structure to construct the embeddings \n    (instead of a learned embedding from the internet). This may be more accurate\u002Fexpressive. 
\n    * The use of natural language in the embeddings (embedding dimensions are \"words\", not unlabeled axes).\n\n* Policy improvement through experience\n  * Question generation based on extracted structure in the local graph. \n  * Question embedding extraction based on past examples.\n  * Generally the idea that we might get very far by leveraging language models for subcomponents\n  for basic reasoning within a larger system (this idea is present in a lot of current robotics research). \n\n* The goal of building an agent and independent research assistant, or the goal to build a \npersonal assistant in learning and memorization, seems\nslightly orthogonal to most efforts.\n","# 由GPT生成的知识图谱\n \n## 高层次描述\n该程序旨在为语言模型创建一个外部记忆模块，最终赋予语言模型类似智能体的能力（长期目标）。\n- 理想情况下，整个系统将通过文本界面收集信息，对其进行分类和结构化，并识别知识中的空白或不一致之处。\n- 语言模型可以被展示这些信息的子集（根据知识图谱的结构选择），然后决定向环境提出进一步的问题以澄清疑点，从而逐步积累知识。\n  \n### 目标\n\n本项目旨在解决语言模型的几个主要缺陷：\n* 记忆 \n  * 外部记忆解决了语言模型难以从一两个例子中进行长期学习的问题。\n* 逻辑 \n  * 语言模型通常无法强制生成结构化的回答，但从记忆中调用逻辑论证可能会有所帮助。\n* 可解释性 \n  * 通过观察哪些记忆被访问，我们可以理解生成结果的来源以及信息流动的过程。\n  * 或者，通过观察语言模型如何处理和分类信息，我们可以了解原始语言模型所学习到的信息的内在结构。\n* 发展自主性\n  * 语言模型缺乏连贯的自主性，因为它们会生成对话的双方内容。\n  * 将语言模型构建为强化学习系统中的一个组件，以分类和挖掘信息为目标，可以恢复其自主性。\n* 训练时的计算资源消耗 \n  * 我们能否在不重新训练模型参数的情况下，持续改进整个机器学习系统（模型和强化学习封装层）？只需递归地改进记忆库（通过提示重新插入）即可。\n  * 语言模型可以在最终形成的全局系统中接受针对特定子任务的训练。\n* 自举能力\n  * 是否只需要最低限度的推理和类比能力，就能通过将复杂概念分解为最小单元并分别处理，来应对任意复杂的知识处理任务？\n  * 在这里可能存在通过自动生成示例来实现语言模型的自举和策略改进的机会（如下面用于提取问题嵌入以及从问题簇示例中生成新问题的方法）。 \n  \n### 长期目标用途\n1. 数据库生成与解析 + 问答系统\n    * 总结某一研究领域、课堂笔记或教科书内容\n    * 识别相互冲突的信息和争议，或对同一主题或观点的不同解释\n\n2. 教育工具或个人学习工具\n    * 构建一个作为间隔重复闪卡助手的智能体。\n      * 了解用户已掌握的知识及其遗忘速度，然后定期提醒复习。\n      * 学习根据用户的当前知识水平和兴趣推荐新的学习内容。\n      * 所有操作均通过灵活自然的语言界面进行提问和回应解读。\n    * 结构化的教育工具\n      * 将专家提炼出的知识固化在图谱中，然后利用知识结构和间隔重复框架来理解学生的学习需求，并与其互动。\n\n3. 
科学研究中的假设生成 \n    * 处理整个科学领域的资料，包括论文、教科书、音频讲座等。\n    * 提出新颖的研究思路和提案 \n\n### 程序概要 \n\n该程序被设计为Python中语言模型的一个封装器。知识图谱（存储在Python中）会在必要时定期调用语言模型。\n\n其关键特性是知识图谱的结构完全可被人理解。即使是信息的嵌入以及事实本身，也都以自然语言形式呈现。此外，算法的所有步骤都可供用户观察（引用了哪些信息以及原因）。\n\n步骤：\n1. 提取\n   * 向语言模型展示“闪卡”示例（某一知识领域的极简问答对），并提取该事实所关联的自然语言概念层级的嵌入表示（例如：科学、生物学、细胞、DNA等）。\n2. 嵌入\n   * 知识图谱程序会查看所有已知信息的嵌入集合，为每张闪卡构建一个人类可理解的向量嵌入。\n      * 这些向量嵌入以自然语言形式存在\n        * 嵌入向量的“维度”是图谱中词汇或概念的名称\n          （例如，有一个“科学”的维度，还有一个“DNA”的维度）\n      * 对于相似的概念和事实，嵌入向量的重叠度较高，从而实现知识的聚类。\n      * 嵌入是根据已学习知识的局部结构定制的\n     （它们依赖于所有已学过的事实，而不是“互联网”或其他数据库）。\n3. 聚类与结构化\n   * 利用嵌入向量，可以将事实和概念按层级进行聚类，形成自然的知识结构，便于搜索和探索。\n4. 问答\n   * 给定来自环境的新问题（即对数据库的查询），模型可以提取一个定制的自然语言嵌入（概念层级）。\n     * 在这里可以利用少样本学习来提取这些概念，只需展示之前的问题示例及提取出的相应概念，再输入新问题并要求按照相同风格提取相关概念。\n     * 这为递归式的自我改进提供了机会。\n   * 这个问题嵌入可用于在图谱中定位相关知识。\n   * 语言模型可以综合所有相关已学知识和当前问题，从而回答当前问题。\n5. 假设生成\n   * 一旦图谱中的知识被结构化（聚类后），语言模型就可以以现有问题簇为灵感（少量示例），生成更多问题簇。\n   * 通过向模型展示几组各包含5个相关科学问题的集合，再输入4个新的（但相关的）问题，我们便能以科学研究者的风格生成全新的概念性问题。\n\n要进一步使程序具备智能体特性，还需要在未来的工作中实现分层决策以及确定进一步探索的方向。具体建议见下文。\n\n\n## 程序细节：构建知识图谱\n\n### 初始概念提取与嵌入\n1. 从问答对中提取概念：\n![Alt text](docs\u002FConceptExtraction.jpg?raw=true \"可选\")\n    - 请参阅文件，了解用于提取简短概念的明确链式提示。\n    - 基本思路是先提取一些信息，然后将这些提取的信息作为上下文重新输入语言模型，从而逐步细化概念层次结构。\n2. 
概念嵌入（原始嵌入与详细嵌入）\n![Alt text](docs\u002FConceptEmbedding.jpg?raw=true \"可选\")\n   - 步骤1：首先提取原始的卡片-概念连接关系，即两个概念共同出现的频率占它们总出现次数的比例。\n     - 从一个概念到另一个概念的连接强度，就是这两个概念同时出现的频率（具体来说，是第一个概念出现时第二个概念也出现的频率）。\n   - 步骤2：接下来提取原始概念嵌入，通过统计显著性来衡量两个概念之间的连接强度。\n     - 这一步的实现方式较为复杂，可能并非必要。初次阅读时可以跳过。\n         - 对于每个概念，将其所有邻近概念按照其相对抽象程度（在卡片概念层次结构中衡量）进行排序。\n         - 然后在每个抽象层级上，计算该抽象窗口内的平均连接强度。\n         - 这个平均连接强度定义了一个关于任意两个概念预期观察到的连接强度的贝塔分布，从而帮助我们识别异常值。\n         - 最后，找出连接强度上的统计学异常值（即那些共同出现频率高于预期、且超过特定阈值的概念）。\n         - 类似于一种准“置信上限”的概率被用来计算嵌入中的连接强度。\n           - 这意味着，即使我对平均连接强度的预期严重偏离实际（例如，在最坏情况下，误差高达5%），我的平均连接强度可能究竟是多少？\n           - 进一步地，在这种最坏情况下，有多少比例的预期连接强度会低于我实际观测到的连接强度？\n           - 因此，如果某个观测到的连接强度超过了90%的最坏情况下的预期连接强度，则可以认为它是显著的。\n         - 最终的嵌入连接强度，就是观测到的连接强度下方的置信上限贝塔分布所对应的统计权重。\n   - 步骤3：基于步骤2中的原始概念嵌入，我们汇总得到最终的概念嵌入（长距离嵌入）。\n     - 这一步同样有些复杂，但可能更为必要。\n     它使我们能够获得高质量的嵌入相似性度量指标（这只是我个人临时拼凑出来以满足所需性质的方法）。\n       - 简而言之，每个最终概念嵌入的设计都确保其与相邻概念的原始嵌入之间具有自洽的重叠，并在某种程度上代表原始嵌入图的局部结构。\n       - 最终概念嵌入具备以下特性：\n         - 在每个邻近概念处，最终嵌入的大小等于该邻近概念原始嵌入向整个最终嵌入向量的投影分数。\n         - 从几何意义上讲，这意味着最终嵌入向量大致指向其邻近原始嵌入向量的“平均”方向，且权重由它们原有的相关性决定。\n       - 实际上，这样做显著扩展了每个概念嵌入的概念覆盖范围，因为它现在包含了真正意义上的“二阶”概念连接。\n       - 说实话，我尚未完全弄清楚这一过程为何有效，但它确实奏效。\n3. 基于概念的卡片嵌入\n![Alt text](docs\u002FCardEmbedding.jpg?raw=true \"可选\")\n    - 卡片嵌入非常简单，它就是卡片中所识别出的概念嵌入向量的加权和。\n    - 其中有一个额外的复杂点：我们会惩罚“常见”概念的存在，因为它们对于识别知识结构并无帮助（知道大多数概念都与“科学”相关，并不能提供太多有用的信息）。\n        - 为此，我们将概念向量相加，但会根据它们在图中的普遍程度对其进行反向加权。\n        - 最后，在完成概念向量的求和之后，还会再次将各个分量除以其在图中的普遍程度，以进一步强调独特概念在最终向量中的贡献。\n\n### 构建知识图谱\n每个概念或卡片的自然语言嵌入使得我们能够通过多种方法对概念进行聚类。\n\n1. 定义相似度度量\n> ![Alt text](docs\u002FSimilarityMetric.jpg?raw=true \"Optional\")\n以下是知识图谱中所有卡片之间相似度的一个示例矩阵。\n![Alt text](docs\u002FExampleSimilarityMatrix.jpg?raw=true \"Optional\") \n以下是一个交互式的聚类可视化。\n![Alt text](docs\u002FClusteringDemonstration.jpg?raw=true \"Optional\") \n \n\n2. 
相似度计算示例\n    - ![Alt text](docs\u002FExampleNodeNodeOverlap.jpg?raw=true \"Optional\")  \n    - ![Alt text](docs\u002FExampleNodeCardOverlap.jpg?raw=true \"Optional\")  \n\n3. 基于相似度度量对想法（问答对）进行聚类。\n   - 以下是卡片聚类结果的一些示例。\n     - 这些示例以“家族树”的形式呈现：原始卡片首先被分组为小集群，然后逐步合并成更大的集群，依此类推。原始卡片被视为子代，第一层集群是父代，第二层集群则称为祖辈，等等。\n   ![Alt text](docs\u002FExampleClusterHierarchies.jpg?raw=true \"Optional\")\n   - 这个聚类算法是我临时拼凑出来的，并不十分完善（但效果尚可）。\n     - 具体细节请参阅代码，这里简要说明其大致思路如下：\n       - 步骤1：围绕每张卡片寻找近似的集群\n         - 对于每张卡片，按照相似度顺序找到附近的卡片。\n         - 提议将前k张卡片作为一组。\n         - 计算近似的“集群质量”。\n           - 该指标在以下情况下较高：所有卡片与目标卡片的相似度都较高，且卡片数量较多。然而，如果任何一张卡片与其他卡片的相似度非常低，则会降低集群质量（特别是根据卡片间最小相似度来惩罚集群）。\n           - 存在一个控制参数用于设定目标集群大小，从而在自我相似性和避免差异性之间取得平衡。\n         - 随着拟议集群规模的扩大，可能会包含更多相似的卡片，从而使集群质量更好。但是，一旦有某张卡片与其他卡片的相似度较低，集群质量就会迅速下降。\n         - 找到最优的集群质量后，最终得到一组彼此相似、且相互之间没有显著差异的卡片组成的集群。\n         - 最后，我们会通过增减个别卡片来进一步优化集群质量，直到达到稳定状态为止。\n         - 最终返回围绕给定卡片的目标集群。\n       - 步骤2：调整目标集群大小，寻找全局自我相似性最高的集群（一种新的度量标准）。\n         - 在这一步中，我们为集群质量定义了一个元指标（仍然依赖于集群的自我相似性程度，但使用步骤1中的集群质量指标来确定候选集群）。\n         - 我们对目标卡片及其所有已识别的集群伙伴重复执行步骤1的操作，这样不仅检查了我们所识别的集群，还递归地检查了这些卡片自身所识别的集群。\n         - 如果所有候选集群伙伴都认同与目标卡片相同的集群，那么这个集群就是高度独立的。\n         - 反之，如果候选集群伙伴识别出的集群规模远大于或远小于目标卡片的集群，则表明该集群不够独立。\n         - 最终的元指标会随着集群规模的增大而提高，但当递归识别出的集群与目标集群差异较大时，该指标会降低。\n       - 步骤3：为每张卡片找到最优集群，并重复此过程以形成集群层次结构。\n         - 在每一层聚类中，我们将上一层的集群视为一个“大卡片”（即集群中所有卡片概念的组合），并将其所有概念嵌入进行合并，从而得到代表整个集群的嵌入向量。\n         - 接着我们可以对这些集群嵌入再次进行聚类，以此类推。\n     - 重点在于，某种形式的聚类是完全可以实现的。\n       - 或许我的方法并不高效或质量最佳，但它作为一个概念验证已经足够完成任务。\n\n### 查询知识图谱\n为了回答关于知识图谱中事实的问题，我们需要先确定问题中的各个组成概念，然后收集与该问题相似的卡片，在回答时重新输入到语言模型中。\n* 注意：我们不会使用与最初闪卡概念提取相同的提示。\n  * 相反，我们会利用模型已经完成的工作（即从问题中提取概念的示例），作为快速提取同风格概念的参考。\n\n1. 
构建问题嵌入\n![Alt text](docs\u002FQuestionEmbedding.jpg?raw=true \"Optional\")\n![Alt text](docs\u002FExampleQuestionEmbeddingExtraction.jpg?raw=true \"Optional\")\n    - 这种少量样本提示训练模型以正确的细节层次提取概念（从最抽象到最具体的概念），并采用正确的格式（首字母大写，通常保持一到两个词）。\n    - 两阶段的重新提示还能让模型看到从相似卡片中提取的概念，从而进一步提升问题嵌入的质量。\n     \n2. 收集相似卡片 \n    1. 在最简单的情况下，只需选取按相似度排序后的前k张卡片即可（例如前10张左右）。\n3. 通过向语言模型重新提供相关信息来回答问题\n![Alt text](docs\u002FQuestionAnswering.jpg?raw=true \"Optional\")\n![Alt text](docs\u002FExampleQuestionAnswering.jpg?raw=true \"Optional\")\n4. 需要注意的是，在最终的回答过程中，我们可以选择是否允许语言模型使用外部知识，或者仅限于重新提示中提供的卡片内的信息。\n\n### 问题生成\n目前这一功能尚未完全实现，但已有一个基本的演示。未来理想情况下，它将包括为科学研究生成假设。\n\n我们基本上利用已测量的知识结构，收集关于相似主题的现有问题簇，并将其用作少样本示例，以生成更多类似风格的问题。\n\n![Alt text](docs\u002FExampleQuestionGeneration.jpg?raw=true \"Optional\") \n\n此外，我们还可以通过调整提示来展示（并请求）抽象程度逐渐提高或降低的问题。这是可行的，因为闪卡中的组件概念原本就是按照抽象程度顺序提取的，因此我们可以衡量这一点，并将其融入图结构中。\n\n从长远来看，这一过程有望成为一种递归式自我改进的方法。如果我们能够让模型提出并生成高质量的问题，并且能够在知识图谱中识别出最佳的科学问题簇作为生成示例，那么整个系统应该能够实现递归式的自我改进（类似于AlphaGo Zero）。\n\n### 用户界面\n2023年3月：我添加了一个用户界面，可以启动该界面来交互式地探索问题、生成答案，然后将答案保存到知识图谱中。\n\n目的：使用界面探索相关主题和引人入胜的问题。\n\n一般工作流程：\n- 首先选择主题和目标指令，然后使用“生成新问题”按钮生成用户想要探索的若干新问题。\n- 其次查看生成的问题，并选择保留或删除哪些问题。\n- 第三步，在图谱中收集相关问题，以便大语言模型知道你已经学过的内容。\n- 再次生成更多问题，并重复筛选过程，直到你有一组希望得到解答的问题。\n- 最后，回答新问题，编辑答案，然后保存到知识图谱和长期存储中。\n\n示例截图：\n\n> 初始生成问题，并在图谱中查找相关信息：\n![Alt text](docs\u002FUserInterface_GeneratingQuestions.jpg?raw=true \"Optional\") \n\n> 选择保留或删除哪些问题，并反复重新生成新问题：\n![Alt text](docs\u002FUserInterface_RefiningQuestions.jpg?raw=true \"Optional\") \n\n> 最后使用ChatGPT“回答新问题”，然后编辑并点击“保存并重置”，将这些信息加载到知识图谱中，并保存到长期数据中：\n![Alt text](docs\u002FUserInterface_AnsweringQuestions.jpg?raw=true \"Optional\") \n\n\n## 未来扩展\n\n### 类代理行为与探索\n这里最终的目标是打造一个能够自我改进的智能体，它可以通过强化学习和快速算法来探索并增强其概念环境，同时仅在需要实际构建信息结构时才偶尔调用语言模型。理想情况下，知识图谱可以通过强化学习算法进行解析，以识别知识空白和知识集群，从而确定需要进一步探索和提问的领域。随后，语言模型可以用来生成有意义且概念上不同的问题，这些问题可以被提交给环境（即互联网）以获取更多信息。\n\n这将需要对知识图谱中的信息赋予某种“价值”概念（以平衡纯粹探索与相关概念的探索）。此外，在空闲状态时（如休眠状态），可能还需要设置知识图谱的自我修剪和定期回顾机制。\n\n### 
结构化与层级化的问答\n为了提升问答质量，语言模型最好能够选择将一个问题分解为包含子问题的多阶段提问流程。这些问题、子问题及其最终答案都可以作为记忆被添加到知识图谱中。\n\n### 语言模型的专门化\n有可能训练语言模型根据示例来自我改进（例如在提取概念层次结构时）。为此可以设置一个专门的小型网络（类似于AlphaGo Zero中的专用内部策略和价值网络）。\n\n### 构建个人学习助手\n我们能否利用语言模型构建一个智能的间隔重复记忆系统来进行学习？\n* 融入间隔重复和记忆概率的概念。\n* 生成复杂而有机的问题，以测试卡片簇内的概念。\n* 学习一些适合你的参数（你的个人遗忘率、兴趣和基础知识）。\n* 提供预设的专家知识结构供学习和教授（例如统计学模块的卡片）。\n* 提供一种美观的方式，通过聚类来可视化你的个人知识图谱。\n\n附加功能：\n* 可以将知识图谱设置为插件，始终要求你总结过去10分钟内阅读的内容。\n* 它还可以与OpenAI Whisper或其他类似工具对接，实现语音转文字并自动摘要。\n\n### 长期预见的问题\n* 使用同义词目录来帮助合并节点（概念）可能会很有帮助。\n* 如何处理文本以外的数据类型？如何处理非常短或非常长的文本（闪卡）或概念（如数学或复杂的公式）？\n\n\n\n## 总结、补充参考及思考\n互联网上流传着许多相关的概念。在此我简单列举一些（并不全面）。\n\n嵌入向量以及如何引用记忆是常见的概念。以下是我事后了解到的一些例子：\n* OpenAI嵌入\n* GPT-index: https:\u002F\u002Fgithub.com\u002Fjerryjliu\u002Fgpt_index\n\n有许多应用、初创公司和研究项目正在尝试做类似的事情，或者其中的一部分：\n* 一些初创公司致力于构建个人知识图谱或笔记辅助工具（我认为它们会使用语言模型）。\n* 有一些工具可以将文章和信息总结成离散的事实（未来工作中可用作子组件）。\n* 还有一些研究方向通过让事实再次进入语言模型来提升质量。例如非参数化 Transformer（Non-Parametric Transformers），以及各种能够保留记忆的其他 Transformer 架构。\n* 也有人尝试通过分解成分来构建结构化问答系统，比如Factored Cognition Primer - https:\u002F\u002Fprimer.ought.org\u002F。\n\nHinton指出，使用概念层次结构至关重要：https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.12627（“如何在神经网络中表示部分-整体层次结构”）。\n\n### 相对独特的贡献\n简单总结一下，在我的实现中，几个较为独特的方面（据我所知，当然也可能是我了解得还不够全面）包括：\n* 嵌入结构——质量和可解释性\n  * 使用局部知识结构来构建嵌入（而非从互联网上学习得到的嵌入）。这可能会更加准确和富有表现力。\n  * 在嵌入中使用自然语言（嵌入维度是“词”，而不是无标签的轴）。\n\n* 通过经验改进策略\n  * 基于从局部图谱中提取的结构生成问题。\n  * 根据过往实例提取问题嵌入。\n  * 总体而言，利用语言模型作为子组件，在更大系统中完成基础推理，这一思路可能让我们走得更远（这种理念在当前许多机器人研究中都有体现）。\n\n* 构建智能体兼独立研究助理，或者旨在打造用于学习与记忆的个人助理，这一目标似乎与大多数现有工作存在一定差异。","# knowledge-graph-from-GPT 快速上手指南\n\n本工具旨在为语言模型构建外部记忆模块，通过知识图谱结构化管理信息，解决大模型在长期记忆、逻辑推理及可解释性方面的短板。以下是基于 Python 环境的快速部署与使用指南。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows (推荐 WSL2)\n*   **Python 版本**：Python 3.8 或更高版本\n*   **前置依赖**：\n    *   `pip` (Python 包管理工具)\n    *   `git` (代码版本控制)\n    *   **LLM API Key**：本项目依赖 GPT 系列模型进行概念提取与推理，需准备 OpenAI API Key 或其他兼容接口密钥。\n\n> **国内加速建议**：\n> 若在国内网络环境下安装依赖较慢，建议使用清华或阿里云镜像源加速 pip 安装：\n> ```bash\n> pip install -r requirements.txt -i 
https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 安装步骤\n\n1.  **克隆项目仓库**\n    将源代码拉取到本地目录：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Ftomhartke\u002Fknowledge-graph-from-GPT.git\n    cd knowledge-graph-from-GPT\n    ```\n\n2.  **创建虚拟环境（推荐）**\n    为避免依赖冲突，建议创建独立的虚拟环境：\n    ```bash\n    python -m venv venv\n    source venv\u002Fbin\u002Factivate  # Linux\u002FmacOS\n    # 或\n    venv\\Scripts\\activate     # Windows\n    ```\n\n3.  **安装依赖包**\n    安装项目所需的 Python 库：\n    ```bash\n    pip install -r requirements.txt\n    ```\n\n4.  **配置 API 密钥**\n    在项目根目录下创建 `.env` 文件（或根据项目具体配置脚本），填入您的 LLM API 密钥：\n    ```bash\n    export OPENAI_API_KEY=\"sk-your-api-key-here\"\n    # 或在 .env 文件中写入：OPENAI_API_KEY=sk-your-api-key-here\n    ```\n\n## 基本使用\n\n本工具的核心流程包括：**信息提取** -> **向量化嵌入** -> **聚类结构化** -> **问答\u002F假设生成**。以下是最简化的运行示例。\n\n### 1. 初始化并添加知识卡片\n创建一个简单的 Python 脚本（例如 `demo.py`），用于向知识图谱中添加初始数据（以“闪卡”形式，即问答对）：\n\n```python\nfrom kg_gpt import KnowledgeGraph  # 假设的主类导入路径，具体请参考源码结构\n\n# 初始化知识图谱实例\nkg = KnowledgeGraph()\n\n# 定义初始知识卡片 (问题，答案)\nflashcards = [\n    (\"What is DNA?\", \"Deoxyribonucleic acid, the carrier of genetic information.\"),\n    (\"Where is DNA located?\", \"In the nucleus of eukaryotic cells.\"),\n    (\"What are the components of DNA?\", \"Nucleotides consisting of a sugar, phosphate, and base.\")\n]\n\n# 执行提取与嵌入流程\n# 这一步会自动调用 LLM 提取概念层级，计算统计显著性嵌入，并更新图谱结构\nfor question, answer in flashcards:\n    kg.add_fact(question, answer)\n\nprint(\"Knowledge graph updated with initial facts.\")\n```\n\n### 2. 执行聚类与结构化\n数据录入后，运行结构化命令以生成人类可读的概念层级和聚类视图：\n\n```bash\npython scripts\u002Fstructure_graph.py\n```\n*注：`structure_graph.py` 为示意脚本名，具体入口请参考源码；运行后将生成概念相似度矩阵及聚类可视化文件（参考 `docs\u002F` 目录下的示例图片逻辑）。*\n\n### 3. 
进行问答查询\n利用构建好的图谱回答新问题，系统将自动检索相关概念簇并调用 LLM 生成答案：\n\n```python\n# 继续上述 demo.py 脚本\n\nnew_query = \"How does DNA relate to cell biology?\"\nresponse = kg.query(new_query)\n\nprint(f\"Query: {new_query}\")\nprint(f\"Answer: {response}\")\nprint(f\"Referenced Concepts: {kg.get_referenced_concepts()}\")\n```\n\n### 4. 生成科学假设（进阶）\n基于现有知识簇，让模型生成新的研究问题：\n\n```python\n# 基于现有的问题簇生成新的假设性问题\nnew_hypotheses = kg.generate_hypotheses(num_questions=5)\nfor h in new_hypotheses:\n    print(h)\n```\n\n---\n**提示**：由于该工具严重依赖 LLM 的 Few-shot 学习能力，初次运行时建议在代码中提供少量高质量的领域示例（如生物学、物理学笔记），以便模型更准确地提取概念层级和构建嵌入向量。","一位生物医学研究员正在梳理数百篇关于“阿尔茨海默病新疗法”的文献，试图从中发现潜在的研究突破口并构建系统的知识体系。\n\n### 没有 knowledge-graph-from-GPT 时\n- **记忆碎片化**：大模型无法长期记住之前读过的几十篇论文细节，每次提问都像“失忆”一样，需重新投喂大量背景信息。\n- **逻辑难自洽**：面对不同研究中相互冲突的实验数据，模型难以结构化地对比分析，容易生成模棱两可或混淆的结论。\n- **黑盒不可控**：研究者无法追踪模型得出某个结论具体引用了哪篇文献，缺乏可解释性，不敢轻易采信其推导结果。\n- **被动无主见**：模型只能被动回答指令，无法像真正的科研助手那样主动识别知识盲区并提出关键的澄清性问题。\n- **重复成本高**：每当有新论文发布，为了更新认知往往需要微调模型或重新构建冗长的提示词，计算资源浪费严重。\n\n### 使用 knowledge-graph-from-GPT 后\n- **外部记忆持久化**：工具将文献信息自动分类存入外部知识图谱，模型能随时调用长期积累的结构化数据，实现真正的“过目不忘”。\n- **矛盾自动预警**：系统能识别图谱中关于同一靶点的不同解释，主动标记冲突点，辅助研究者快速定位学术争议核心。\n- **溯源清晰透明**：每一条生成的观点都可追溯至图谱中的具体节点和原始文献，信息流向一目了然，大幅提升可信度。\n- **主动探索代理**：模型化身主动代理人，在发现知识断层时会自动生成新的探究问题，引导研究者补充关键信息以完善认知网络。\n- **免训练持续进化**：无需重新训练模型参数，仅通过递归更新知识图谱即可让系统不断吸纳最新科研成果，高效且低成本。\n\nknowledge-graph-from-GPT 通过将大模型转化为具备长期记忆与主动探索能力的智能体，彻底改变了科研人员处理复杂领域知识的方式。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftomhartke_knowledge-graph-from-GPT_eb4fb9c0.png","tomhartke","Thomas Hartke","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ftomhartke_b2d82cba.jpg","Experimental physicists, no building www.undermind.ai ",null,"www.tomhartke.com","https:\u002F\u002Fgithub.com\u002Ftomhartke",[22,26],{"name":23,"color":24,"percentage":25},"Jupyter Notebook","#DA5B0B",93.5,{"name":27,"color":28,"percentage":29},"Python","#3572A5",6.5,694,52,"2026-03-07T12:05:29","MIT",4,"","未说明",{"notes":38,"python":36,"dependencies":39},"README 
中未明确列出具体的运行环境需求（如操作系统、GPU、内存、Python 版本及依赖库）。该工具被描述为围绕语言模型构建的 Python 封装器，通过提示词（prompt）与外部语言模型交互来构建知识图谱，而非本地训练大型模型。具体依赖可能取决于用户选择调用的语言模型接口（如 OpenAI API 或本地部署的模型），需参考项目源代码文件获取详细配置。",[],[41,42,43],"语言模型","开发框架","Agent",2,"ready","2026-03-27T02:49:30.150509","2026-04-07T09:50:18.110316",[],[],[51,62,70,78,86,95],{"id":52,"name":53,"github_repo":54,"description_zh":55,"stars":56,"difficulty_score":57,"last_commit_at":58,"category_tags":59,"status":45},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[43,42,60,61],"图像","数据工具",{"id":63,"name":64,"github_repo":65,"description_zh":66,"stars":67,"difficulty_score":57,"last_commit_at":68,"category_tags":69,"status":45},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[42,60,43],{"id":71,"name":72,"github_repo":73,"description_zh":74,"stars":75,"difficulty_score":44,"last_commit_at":76,"category_tags":77,"status":45},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 
是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",142651,"2026-04-06T23:34:12",[42,43,41],{"id":79,"name":80,"github_repo":81,"description_zh":82,"stars":83,"difficulty_score":44,"last_commit_at":84,"category_tags":85,"status":45},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[42,60,43],{"id":87,"name":88,"github_repo":89,"description_zh":90,"stars":91,"difficulty_score":44,"last_commit_at":92,"category_tags":93,"status":45},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[94,42],"插件",{"id":96,"name":97,"github_repo":98,"description_zh":99,"stars":100,"difficulty_score":57,"last_commit_at":101,"category_tags":102,"status":45},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[41,60,43,42]]