[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-materialyzeai--matgl":3,"tool-materialyzeai--matgl":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",143909,2,"2026-04-07T11:33:18",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 
Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":10,"env_os":93,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":105,"github_topics":107,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":145},5245,"materialyzeai\u002Fmatgl","matgl","Graph deep learning library for materials","MatGL（Materials Graph Library）是一个专为材料科学设计的图深度学习开源库。它将材料的原子结构自然地转化为数学图数据，利用先进的图神经网络模型，高效预测各类材料属性，从而作为传统昂贵计算方法的强力替代方案。\n\n对于材料领域的研究人员和开发者而言，MatGL 解决了从原子结构到性能预测的建模难题，显著降低了探索新材料的计算成本与时间门槛。无论是需要快速筛选候选材料的研究团队，还是致力于开发新算法的 AI 工程师，都能通过 MatGL 灵活地构建、训练并分享自己的模型。\n\n该工具的技术亮点在于其持续的架构演进与广泛的兼容性。最新版本默认采用 PyTorch Geometric (PyG) 后端，以确保持续的技术支持，同时仍保留对 DGL 框架的兼容。MatGL 不仅内置了 M3GNet、MEGNet、CHGNet 等经典架构，还引入了最新的 QET 和 TensorNet 模型，并提供丰富的预训练权重，让用户仅需一行代码即可加载高性能模型，轻松开启材料智能发现之旅。","[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fmaterialyzeai\u002Fmatgl)](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fblob\u002Fmain\u002FLICENSE)\n[![Lint](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fworkflows\u002FLint\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fworkflows\u002FLint\u002Fbadge.svg)\n[![Test](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Factions\u002Fworkflows\u002Ftest.yml)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_readme_bc160a64d526.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmatgl)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmaterialyzeai\u002Fmatgl\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=3V3O79GODQ)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmaterialyzeai\u002Fmatgl)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmatgl?logo=pypi&logoColor=white)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmatgl?logo=pypi&logoColor=white)\n\n# Materials Graph Library \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_readme_faf83d2ac1b4.png\" alt=\"matgl\" width=\"30%\" style=\"float: right\">\n\n## Official Documentation\n\n\u003Chttps:\u002F\u002Fmatgl.ai>\n\n## Introduction\n\nMatGL (Materials Graph Library) is a graph deep learning library for materials science. Mathematical graphs are a\nnatural representation for a collection of atoms. Graph deep learning models have been shown to consistently deliver\nexceptional performance as surrogate models for the prediction of materials properties. The goal is for MatGL to serve\nas an extensible platform to develop and share materials graph deep learning models.\n\nThis first version of MatGL is a collaboration between the [Materialyze.AI][materialyze] and Intel Labs.\n\nMatGL is part of the MatML ecosystem, which includes the [MatGL] (Materials Graph Library) and [maml] (MAterials\nMachine Learning) packages, the [MatPES] (Materials Potential Energy Surface) dataset, and the [MatCalc] (Materials\nCalculator).\n\n## Status\n\nMajor milestones are summarized below. Please refer to the [changelog] for details.\n\n- v2.0.0 (Nov 13 2025): [QET] architecture added. PYG backend is now the default.\n- v1.3.0 (Aug 12 2025): Pretrained molecular potentials and PyG framework added.\n- v1.1.0 (May 7 2024): Implementation of [CHGNet] + pre-trained models.\n- v1.0.0 (Feb 14 2024): Implementation of [TensorNet] and [SO3Net].\n- v0.5.1 (Jun 9 2023): Model versioning implemented.\n- v0.5.0 (Jun 8 2023): Simplified saving and loading of models. Now models can be loaded with one line of code!\n- v0.4.0 (Jun 7 2023): Near feature parity with original TF implementations. Re-trained M3Gnet universal potential now\n  available.\n- v0.1.0 (Feb 16 2023): Initial implementations of M3GNet and MEGNet architectures have been completed. Expect\n  bugs!\n\n## Major update: v2.0.0 (Nov 12 2025)\n\nWe are in the process of moving away from the Deep Graph Library (DGL) framework to Pytorch Geometric (PyG) or even a\npure PyTorch framework. This is motivated by the fact that DGL is no longer actively maintained. For now, both PYG and DGL\nmodels are available.\n\nFrom v2.0.0, MatGL will default to a PyG backend, and DGL is no longer a required dependency. For now, only TensorNet\nhas been re-implemented in PYG. To use the DGL-based models (which includes the new QET), you will need to install the DGL dependencies manually. This typically takes about 10 minutes, depending on the speed of downloading the required GPU packages.:\n\n```bash\npip install \"numpy\u003C2\"\npip install dgl==2.2.0\npip install torch==2.3.0\npip install \"torchdata\u003C=0.8.0\"\n```\n\nand set the backend either via the environment variable `MATGL_BACKEND=DGL` or by using\n\n```python\nimport matgl\nmatgl.set_backend(\"DGL\")\n```\n\n## Current Architectures\n\n\u003Cdiv style=\"float: left; padding: 10px; width: 200px\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_readme_1e911bcb86fc.png\" alt=\"m3gnet_schematic\">\n\u003Cp>Figure: Schematic of M3GNet\u002FMEGNet\u003C\u002Fp>\n\u003C\u002Fdiv>\n\nHere, we summarize the currently implemented architectures in MatGL. It should be stressed that this is by no means\nan exhaustive list, and we expect new architectures to be added by the core MatGL team as well as other contributors\nin the future.\n\n- [QET] (DGL only, PYG coming soon), pronounced as \"ket\", is a charge-equilibrated TensorNet architecture. 
\n## Current Architectures\n\n\u003Cdiv style=\"float: left; padding: 10px; width: 200px\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_readme_1e911bcb86fc.png\" alt=\"m3gnet_schematic\">\n\u003Cp>Figure: Schematic of M3GNet\u002FMEGNet\u003C\u002Fp>\n\u003C\u002Fdiv>\n\nHere, we summarize the currently implemented architectures in MatGL. It should be stressed that this is by no means\nan exhaustive list, and we expect new architectures to be added by the core MatGL team as well as other contributors\nin the future.\n\n- [QET] (DGL only, PyG coming soon), pronounced as \"ket\", is a charge-equilibrated TensorNet architecture. It is an\n  equivariant, charge-aware architecture that attains linear scaling with system size via an analytically solvable\n  charge-equilibration scheme. A pre-trained QET-MatQ FP is available, which matches state-of-the-art FPs on standard\n  materials property benchmarks but delivers qualitatively different predictions in systems dominated by charge\n  transfer, e.g., the NaCl–CaCl2 ionic liquid and reactive processes at the Li\u002FLi6PS5Cl solid-electrolyte interface,\n  and it supports simulations under applied electrochemical potentials.\n- [TensorNet] (PyG and DGL) is an O(3)-equivariant message-passing neural network architecture that leverages Cartesian tensor\n  representations. It is a generalization of the [SO3Net] architecture, which is a minimalist SO(3)-equivariant neural\n  network. In general, TensorNet has been shown to be much more data and parameter efficient than other equivariant\n  architectures. It is currently the default architecture used in the [Materials Virtual Lab].\n- [Crystal Hamiltonian Graph Network (CHGNet)][chgnet] (DGL only) is a graph neural network based MLIP. CHGNet uses atom\n  graphs to capture atomic bonding relations and a bond graph to capture angular information. It specializes in\n  capturing atomic charges by learning and predicting DFT atomic magnetic moments.\n  See the [original implementation][chgnetrepo].\n- [Materials 3-body Graph Network (M3GNet)][m3gnet] is an invariant graph neural network architecture that\n  incorporates 3-body interactions. An additional difference is the inclusion of the coordinates for atoms and\n  the 3×3 lattice matrix in crystals, which are necessary for obtaining tensorial quantities such as forces and\n  stresses via auto-differentiation. As a framework, M3GNet has diverse applications, most notably **interatomic potential development**. With the same training data, M3GNet performs similarly to state-of-the-art\n  machine learning interatomic potentials (MLIPs). However, a key feature of a graph representation is its\n  flexibility to scale to diverse chemical spaces. One of the key accomplishments of M3GNet is the development of a\n  [*foundation potential*][m3gnet] that can work across the entire periodic table of the elements by training on\n  relaxations performed in the [Materials Project][mp]. Like the previous MEGNet architecture, M3GNet can be used to\n  develop surrogate models for property predictions, achieving in many cases accuracies that are better than or similar to\n  other state-of-the-art ML models.\n- [MatErials Graph Network (MEGNet)][megnet] (DGL only) is an implementation of DeepMind's [graph networks][graphnetwork] for\n  machine learning in materials science. We have demonstrated its success in achieving low prediction errors in a broad\n  array of properties in both [molecules and crystals][megnet]. New releases have included our recent work on\n  [multi-fidelity materials property modeling][mfimegnet]. 
Figure 1 shows the sequential update steps of the graph\n  network, whereby bonds, atoms, and global state attributes are updated using information from each other, generating\n  an output graph.\n\nFor detailed performance benchmarks, please refer to the publications in the [References](#references) section.\n\n## Installation\n\nMatGL can be installed via pip:\n\n```bash\npip install matgl\n```\n\nIf you need to use DGL, it is recommended that you install the latest version of DGL before installing matgl.\n\n```bash\npip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Ftorch-2.4\u002Frepo.html\n```\n\n### CUDA (GPU) installation\n\nIf you intend to use CUDA (GPU) to speed up training, it is important to install the appropriate versions of PyTorch\nand DGL. The basic instructions are given below, but it is recommended that you consult the\n[PyTorch docs](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) and [DGL docs](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fstart.html) if you\nrun into any problems.\n\n```shell\npip install torch==2.2.0 --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu121\npip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Fcu121\u002Frepo.html\npip install dglgo -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels-test\u002Frepo.html\n```\n\n## Docker images\n\nDocker images have now been built for matgl, together with LAMMPS support. They are available at the\n[Materials Virtual Lab Docker Repository]. If you wish to use MatGL with LAMMPS, this is probably the easiest option.\n\n## Usage\n\nPre-trained M3GNet universal potential and MEGNet models for the Materials Project formation energy and\nmulti-fidelity band gap are now available.\n\n### Command line (from v0.6.2)\n\nA CLI tool now provides the capability to perform quick relaxations or predictions using pre-trained models, as well\nas other simple administrative tasks (e.g., clearing the cache). Some simple examples:\n\n1. To perform a relaxation,\n\n    ```bash\n    mgl relax --infile Li2O.cif --outfile Li2O_relax.cif\n    ```\n\n2. To use one of the pre-trained property models,\n\n    ```bash\n    mgl predict --model M3GNet-MP-2018.6.1-Eform --infile Li2O.cif\n    ```\n\n3. To clear the cache,\n\n    ```bash\n    mgl clear\n    ```\n\nFor a full range of options, use `mgl -h`.\n\n### Code\n\nUsers who just want to use the models out of the box should use the newly implemented `matgl.load_model` convenience\nmethod. The following is an example of a prediction of the formation energy for CsCl.\n\n```python\nfrom pymatgen.core import Lattice, Structure\nimport matgl\n\nmodel = matgl.load_model(\"MEGNet-MP-2018.6.1-Eform\")\n\n# This is the structure obtained from the Materials Project.\nstruct = Structure.from_spacegroup(\"Pm-3m\", Lattice.cubic(4.1437), [\"Cs\", \"Cl\"], [[0, 0, 0], [0.5, 0.5, 0.5]])\neform = model.predict_structure(struct)\nprint(f\"The predicted formation energy for CsCl is {float(eform.numpy()):.3f} eV\u002Fatom.\")\n```\n\nTo obtain a listing of available pre-trained models,\n\n```python\nimport matgl\nprint(matgl.get_available_pretrained_models())\n```\n\n
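In addition to property models, the pre-trained universal (PES) potentials can drive atomistic calculations through the ASE interface in `matgl.ext.ase`, which the FAQ on computing stresses also points to. The following is a minimal sketch, assuming `PESCalculator` accepts a loaded potential as its first argument; the test structure and optimizer are illustrative choices from ASE rather than anything prescribed by MatGL:\n\n```python\nfrom ase.build import bulk\nfrom ase.optimize import FIRE\n\nimport matgl\nfrom matgl.ext.ase import PESCalculator\n\n# Load a pre-trained universal potential (model name as referenced in the FAQs).\npot = matgl.load_model(\"M3GNet-MP-2021.2.8-PES\")\n\n# Attach the potential to an ASE Atoms object via the PESCalculator interface.\natoms = bulk(\"Cu\", \"fcc\", a=3.6)\natoms.calc = PESCalculator(pot)\n\nprint(\"Energy (eV):\", atoms.get_potential_energy())\nprint(\"Forces (eV\u002FÅ):\", atoms.get_forces())\n\n# Short local relaxation with a standard ASE optimizer.\nFIRE(atoms).run(fmax=0.05, steps=50)\n```\n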
## PyTorch Hub\n\nThe pre-trained models are also available on PyTorch Hub. To use these models, simply install matgl and use the\nfollowing commands:\n\n```python\nimport torch\n\n# To obtain a listing of models\ntorch.hub.list(\"materialsvirtuallab\u002Fmatgl\", force_reload=True)\n\n# To load a model\nmodel = torch.hub.load(\"materialyzeai\u002Fmatgl\", 'm3gnet_universal_potential')\n```\n\n## Model Training\n\nFor PES training, it is extremely important that the units of energies, forces and stresses (optional) in the training, validation and test sets are consistent with the units used in MatGL.\n\n- energies: a list of energies with unit eV.\n- forces: a list of n×3 force matrices with unit eV\u002FÅ, where n is the number of atoms in each structure. n does not need to be the same for all structures.\n- stresses: a list of 3×3 stress matrices with unit GPa (optional).\n\nNote: For stresses, we use the convention that compressive stress gives negative values. Stresses obtained from VASP calculations (default unit is kBar) should be multiplied by -0.1 to work directly with the model.\n\n## Tutorials\n\nWe wrote [tutorials] on how to use MatGL. These were generated from [Jupyter notebooks][jupyternb], which can be directly run on [Google Colab].\n\n## Resources\n\n- [API docs][apidocs] for all classes and methods.\n- [Developer Guide](developer.md) outlines the key design elements of `matgl`, especially for developers wishing to\n  train and contribute matgl models.\n- AdvancedSoft has implemented a [LAMMPS interface](https:\u002F\u002Fgithub.com\u002Fadvancesoftcorp\u002Flammps\u002Ftree\u002Fbased-on-lammps_2Jun2022\u002Fsrc\u002FML-M3GNET)\n  to both the TF and MatGL versions of M3GNet.\n\n## References\n\nA manuscript for MatGL has been published in npj Computational Materials. Please cite the following:\n> **MatGL**\n>\n> Ko, T. W.; Deng, B.; Nassar, M.; Barroso-Luque, L.; Liu, R.; Qi, J.; Thakur, A. C.; Mishra, A. R.; Liu, E.; Ceder, G.; Miret, S.; Ong, S. P.\n> *Materials Graph Library (MatGL), an Open-Source Graph Deep Learning Library for Materials Science and Chemistry.*\n> npj Comput Mater 11, 253 (2025). DOI: [https:\u002F\u002Fdoi.org\u002F10.1038\u002Fs41524-025-01742-y][matgl].\n\nIf you are using any of the pretrained models, please cite the relevant works below:\n\n> **MEGNet**\n>\n> Chen, C.; Ye, W.; Zuo, Y.; Zheng, C.; Ong, S. P. *Graph Networks as a Universal Machine Learning Framework for\n> Molecules and Crystals.* Chem. Mater. 2019, 31 (9), 3564–3572. DOI: [10.1021\u002Facs.chemmater.9b01294][megnet].\n\n> **Multi-fidelity MEGNet**\n>\n> Chen, C.; Zuo, Y.; Ye, W.; Li, X.; Ong, S. P. *Learning Properties of Ordered and Disordered Materials from\n> Multi-Fidelity Data.* Nature Computational Science, 2021, 1, 46–53. DOI: [10.1038\u002Fs43588-020-00002-x][mfimegnet].\n\n> **M3GNet**\n>\n> Chen, C., Ong, S.P. *A universal graph deep learning interatomic potential for the periodic table.* Nature\n> Computational Science, 2023, 2, 718–728. DOI: [10.1038\u002Fs43588-022-00349-3][m3gnet].\n\n> **CHGNet**\n>\n> Deng, B., Zhong, P., Jun, K. et al. *CHGNet as a pretrained universal neural network potential for charge-informed atomistic modelling.*\n> Nat Mach Intell 5, 1031–1041 (2023). DOI: [10.1038\u002Fs42256-023-00716-3][chgnet]\n\n> **TensorNet**\n>\n> Simeon, G.; De Fabritiis, G. *TensorNet: Cartesian tensor representations for efficient learning of molecular potentials.*\n> Adv. Neural Info. Process. Syst. 36 (2024). DOI: [10.48550\u002FarXiv.2306.06482][tensornet]\n\n> **SO3Net**\n>\n> Schütt, K. T., Hessmann, S. S. P., Gebauer, N. W. 
A., Lederer, J., Gastegger, M. *SchNetPack 2.0: A neural network toolbox for atomistic machine learning.*\n> J. Chem. Phys. 158, 144801 (2023). DOI: [10.1063\u002F5.0138367][so3net]\n\n>**QET**\n>\n> Ko, T. W., Liu, R., Mishra, A. R., Yu, Z., Qi, J., Ong, S. P. *A Fast, Accurate, and Reactive Equivariant Foundation Potential.*\n> arXiv preprint arXiv:2511.07249 (2025). DOI: [10.48550\u002FarXiv.2511.07249][QET]\n\n## FAQs\n\n1. **The `M3GNet-MP-2021.2.8-PES` differs from the original TensorFlow (TF) implementation!**\n\n   *Answer:* `M3GNet-MP-2021.2.8-PES` is a refitted model with some data improvements and minor architectural changes.\n   Porting over the weights from the TF version to DGL\u002FPyTorch is non-trivial. We have performed reasonable benchmarking\n   to ensure that the new implementation reproduces the broad error characteristics of the original TF implementation\n   (see [examples][jupyternb]). However, it is not expected to reproduce the TF version exactly. This refitted model\n   serves as a baseline for future model improvements. We do not believe there is value in expending the resources\n   to reproduce the TF version exactly.\n\n2. **I am getting errors with `matgl.load_model()`!**\n\n   *Answer:* The most likely reason is that you have a cached older version of the model. We often refactor models to\n   ensure the best implementation. This can usually be solved by updating your `matgl` to the latest version\n   and clearing your cache using the following command `mgl clear`. On the next run, the latest model will be\n   downloaded. With effect from v0.5.2, we have implemented a model versioning scheme that will detect code vs model\n   version conflicts and alert the user of such problems.\n\n3. **What pre-trained models should I be using?**\n\n   *Answer:* There is no one definitive answer. In general, the newer the architecture and dataset, the more likely\n   the model performs better. However, it should also be noted that a model operating on a more diverse dataset may\n   compromise on  performance on a specific system. The best way is to look at the READMEs included with each model\n   and do some tests on the systems you are interested in.\n\n4. **How do I contribute to matgl?**\n\n   *Answer:* For code contributions, please fork and submit pull requests. You should read the\n   [developer guide](developer.md) to understand the general design guidelines. We welcome pre-trained model\n   contributions as well, which should also be submitted via PRs. Please follow the folder structure of the\n   pretrained models. In particular, we expect all models to come with a `README.md` and notebook\n   documenting its use and its key performance metrics. Also, we expect contributions to be on new properties\n   or systems or to significantly outperform the existing models. We will develop an alternative means for model\n   sharing in the future.\n\n5. **None of your models do what I need. Where can I get help?**\n\n   *Answer:* Please contact [Prof Ong][ongemail] with a brief description of your needs. For simple problems, we are\n   glad to advise and point you in the right direction. For more complicated problems, we are always open to\n   academic collaborations or projects. We also offer [consulting services][mqm] for companies with unique needs,\n   including but not limited to custom data generation, model development and materials design.\n\n## Acknowledgments\n\nThis work was primarily supported by the [Materials Project][mp], funded by the U.S. 
Department of Energy, Office of\nScience, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under contract no.\nDE-AC02-05-CH11231: Materials Project program KC23MP. This work used the Expanse supercomputing cluster at the Extreme\nScience and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number\nACI-1548562.\n\nWe also acknowledge the NVIDIA Alchemi Team, specifically Roman Zubatyuk (@zubatyuk) and Alireza Moradzadeh (@moradza),\nfor their contributions to warp-acceleration for TensorNet, which yielded ~2-3x speed and memory usage improvements.\n\n[m3gnetrepo]: https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fm3gnet \"M3GNet repo\"\n[megnetrepo]: https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmegnet \"MEGNet repo\"\n[dgl]: https:\u002F\u002Fwww.dgl.ai \"DGL website\"\n[materialyze]: http:\u002F\u002Fmaterialyze.ai \"Materialyze.AI website\"\n[changelog]: https:\u002F\u002Fmatgl.ai\u002Fchanges \"Changelog\"\n[graphnetwork]: https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.01261 \"Deepmind's paper\"\n[megnet]: https:\u002F\u002Fpubs.acs.org\u002Fdoi\u002F10.1021\u002Facs.chemmater.9b01294 \"MEGNet paper\"\n[mfimegnet]: https:\u002F\u002Fnature.com\u002Farticles\u002Fs43588-020-00002-x \"mfi MEGNet paper\"\n[m3gnet]: https:\u002F\u002Fnature.com\u002Farticles\u002Fs43588-022-00349-3 \"M3GNet paper\"\n[mp]: http:\u002F\u002Fmaterialsproject.org \"Materials Project\"\n[apidocs]: https:\u002F\u002Fmatgl.ai\u002Fmatgl.html \"MatGL API docs\"\n[doc]: https:\u002F\u002Fmatgl.ai \"MatGL Documentation\"\n[google colab]: https:\u002F\u002Fcolab.research.google.com\u002F \"Google Colab\"\n[jupyternb]: https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Ftree\u002Fmain\u002Fexamples\n[ongemail]: mailto:shyue@nus.edu.sg \"Email\"\n[mqm]: https:\u002F\u002Fmaterialsqm.com \"MaterialsQM\"\n[tutorials]: https:\u002F\u002Fmatgl.ai\u002Ftutorials \"Tutorials\"\n[matgl]: https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41524-025-01742-y#citeas \"MatGL\"\n[tensornet]: https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06482 \"TensorNet\"\n[qet]: https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07249 \"QET\"\n[so3net]: https:\u002F\u002Fpubs.aip.org\u002Faip\u002Fjcp\u002Farticle-abstract\u002F158\u002F14\u002F144801\u002F2877924\u002FSchNetPack-2-0-A-neural-network-toolbox-for \"SO3Net\"\n[chgnet]: https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-023-00716-3 \"CHGNet\"\n[chgnetrepo]: https:\u002F\u002Fgithub.com\u002FCederGroupHub\u002Fchgnet \"CHGNet repo\"\n[maml]: https:\u002F\u002Fmaterialyzeai.github.io\u002Fmaml\u002F\n[MatGL]: https:\u002F\u002Fmatgl.ai\n[MatPES]: https:\u002F\u002Fmatpes.ai\n[MatCalc]: https:\u002F\u002Fmatcalc.ai\n[Materials Virtual Lab Docker Repository]: https:\u002F\u002Fhub.docker.com\u002Forgs\u002Fmaterialsvirtuallab\u002Frepositories\n","[![GitHub 
许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fmaterialyzeai\u002Fmatgl)](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fblob\u002Fmain\u002FLICENSE)\n[![代码风格检查](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fworkflows\u002FLint\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fworkflows\u002FLint\u002Fbadge.svg)\n[![测试](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Factions\u002Fworkflows\u002Ftest.yml)\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_readme_bc160a64d526.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmatgl)\n[![代码覆盖率](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmaterialyzeai\u002Fmatgl\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=3V3O79GODQ)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fmaterialyzeai\u002Fmatgl)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmatgl?logo=pypi&logoColor=white)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmatgl?logo=pypi&logoColor=white)\n\n# 材料图库 \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_readme_faf83d2ac1b4.png\" alt=\"matgl\" width=\"30%\" style=\"float: right\">\n\n## 官方文档\n\n\u003Chttps:\u002F\u002Fmatgl.ai>\n\n## 简介\n\nMatGL（材料图库）是一个用于材料科学的图深度学习库。数学中的图结构天然适用于表示原子集合。研究表明，图深度学习模型在材料性质预测的代理模型中始终表现出色。MatGL 的目标是成为一个可扩展的平台，用于开发和共享材料图深度学习模型。\n\nMatGL 的第一个版本由 [Materialyze.AI][materialyze] 和英特尔实验室合作开发。\n\nMatGL 是 MatML 生态系统的一部分，该生态系统包括 [MatGL]（材料图库）、[maml]（材料机器学习）软件包、[MatPES]（材料势能面）数据集以及 [MatCalc]（材料计算器）。\n\n## 当前状态\n\n以下总结了主要里程碑，请参阅 [变更日志] 获取详细信息。\n\n- v2.0.0（2025年11月13日）：新增 QET 架构。PyG 后端现为默认设置。\n- v1.3.0（2025年8月12日）：添加预训练分子势能模型及 PyG 框架。\n- v1.1.0（2024年5月7日）：实现 [CHGNet] 并提供预训练模型。\n- v1.0.0（2024年2月14日）：实现 [TensorNet] 和 [SO3Net]。\n- v0.5.1（2023年6月9日）：实现了模型版本管理。\n- v0.5.0（2023年6月8日）：简化了模型的保存与加载，现在只需一行代码即可加载模型！\n- v0.4.0（2023年6月7日）：功能接近原始 TensorFlow 实现。重新训练的 M3Gnet 通用势能现已可用。\n- v0.1.0（2023年2月16日）：完成了 M3GNet 和 MEGNet 架构的初步实现。可能存在一些 bug！\n\n## 重大更新：v2.0.0（2025年11月12日）\n\n我们正逐步从 Deep Graph Library (DGL) 框架迁移到 PyTorch Geometric (PyG)，甚至完全采用纯 PyTorch 框架。这一调整的动机在于 DGL 已不再积极维护。目前，PYG 和 DGL 两种框架的模型均可使用。\n\n自 v2.0.0 起，MatGL 将默认使用 PyG 后端，DGL 不再是必需依赖项。目前，仅 TensorNet 已在 PYG 中重新实现。若要使用基于 DGL 的模型（包括新的 QET），您需要手动安装 DGL 相关依赖项。这通常需要约 10 分钟，具体时间取决于所需 GPU 包的下载速度：\n\n```bash\npip install \"numpy\u003C2\"\npip install dgl==2.2.0\npip install torch==2.3.0\npip install \"torchdata\u003C=0.8.0\"\n```\n\n然后通过环境变量 `MATGL_BACKEND=DGL` 或者使用以下 Python 代码设置后端：\n\n```python\nimport matgl\nmatgl.set_backend(\"DGL\")\n```\n\n## 当前架构\n\n\u003Cdiv style=\"float: left; padding: 10px; width: 200px\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_readme_1e911bcb86fc.png\" alt=\"m3gnet_schematic\">\n\u003Cp>图：M3GNet\u002FMEGNet 示意图\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n在此，我们总结了 MatGL 中目前已实现的架构。需要强调的是，这绝非详尽无遗的列表，我们预计未来将由 MatGL 核心团队及其他贡献者不断添加新的架构。\n\n- [QET]（仅支持 DGL，PYG 即将推出），发音为“ket”，是一种电荷平衡张量网络架构。它是一种等变、考虑电荷效应的架构，通过可解析求解的电荷平衡方案实现了与体系规模的线性 scaling。现已提供预训练的 QET-MatQ FP，该 FP 在标准材料性能基准测试中达到最先进水平，但在以电荷转移为主导的体系中，如 NaCl–\\ce{CaCl2} 离子液体、Li\u002F\\ce{Li6PS5Cl} 固态电解质界面处的反应过程等，其预测结果与其他 FP 存在显著差异，并且支持在施加电化学势下的模拟。\n- [TensorNet]（PYG 和 DGL）是一种 O(3) 等变的消息传递神经网络架构，利用笛卡尔张量表示。它是 [SO3Net] 架构的推广，后者是一种极简的 SO(3) 等变神经网络。总体而言，TensorNet 被证明比其他等变架构具有更高的数据和参数效率。目前，它已成为 [Materials Virtual Lab] 的默认架构。\n- [晶体哈密顿图网络 (CHGNet)][chgnet]（仅支持 
DGL）是一种基于图神经网络的 MLIP。CHGNet 使用原子图捕捉原子键合关系，并使用键图捕捉角度信息。其专长在于通过学习和预测 DFT 原子磁矩来捕捉原子电荷。详见 [原始实现][chgnetrepo]。\n- [材料三体图网络 (M3GNet)][m3gnet] 是一种包含三体相互作用的不变图神经网络架构。另一个区别在于，它引入了原子坐标以及晶体中的 3×3 晶格矩阵，这些对于通过自动微分获得力和应力等张量量是必需的。作为框架，M3GNet 具有广泛的应用，包括 **原子间势能开发**。在相同的训练数据下，M3GNet 的表现与最先进的机器学习原子间势能（MLIPs）相当。然而，图表示的一个关键优势在于其能够灵活扩展到不同的化学空间。M3GNet 的一项重要成果是开发了一种 [*基础势能*][m3gnet]，该势能通过对 [Materials Project][mp] 中进行的弛豫计算进行训练，可在整个元素周期表范围内适用。与之前的 MEGNet 架构类似，M3GNet 可用于开发属性预测的代理模型，在许多情况下其精度优于或与其它最先进的机器学习模型相当。\n- [材料图网络 (MEGNet)][megnet]（仅支持 DGL）是 DeepMind 的 [图网络][graphnetwork] 在材料科学机器学习中的实现。我们已证明其在 [分子和晶体][megnet] 的广泛属性预测中均能实现较低的预测误差。最新版本还包括我们在 [多保真度材料属性建模][mfimegnet] 方面的最新工作。图 1 展示了图网络的顺序更新步骤，其中键、原子和全局状态属性会相互交换信息并进行更新，从而生成输出图。\n\n有关详细的性能基准测试，请参阅 [参考文献](#references) 部分的出版物。\n\n## 安装\n\nMatgl 可通过 pip 安装：\n\n```bash\npip install matgl\n```\n\n若需使用 DGL，建议在安装 matgl 之前先安装最新版本的 DGL。\n\n```bash\npip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Ftorch-2.4\u002Frepo.html\n```\n\n### CUDA（GPU）安装\n\n若打算使用 CUDA（GPU）加速训练，务必安装适当版本的 PyTorch 和 DGL。以下为基本说明，但若遇到任何问题，建议查阅 [PyTorch 文档](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) 和 [DGL 文档](https:\u002F\u002Fwww.dgl.ai\u002Fpages\u002Fstart.html)。\n\n```shell\npip install torch==2.2.0 --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu121\npip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Fcu121\u002Frepo.html\npip install dglgo -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels-test\u002Frepo.html\n```\n\n## Docker 镜像\n\n现已为 matgl 打造了 Docker 镜像，并支持 LAMMPS。这些镜像可在 [Materials Virtual Lab Docker 仓库] 获取。若希望将 MatGL 与 LAMMPS 结合使用，这可能是最简便的方式。\n\n## 使用方法\n\n现已有针对 Materials Project 形成能及多保真度带隙的预训练 M3GNet 通用势能和 MEGNet 模型可供使用。\n\n### 命令行（自 v0.6.2 起）\n\n现在提供了一个 CLI 工具，可用于使用预训练模型进行快速弛豫或预测，以及执行其他简单管理任务（如清除缓存）。以下是一些简单示例：\n\n1. 进行弛豫时，\n\n    ```bash\n    mgl relax --infile Li2O.cif --outfile Li2O_relax.cif\n    ```\n\n2. 使用其中一个预训练的属性模型时，\n\n    ```bash\n    mgl predict --model M3GNet-MP-2018.6.1-Eform --infile Li2O.cif\n    ```\n\n3. 
清除缓存时，\n\n    ```bash\n    mgl clear\n    ```\n\n如需了解所有选项，请使用 `mgl -h`。\n\n### 代码\n\n对于只想开箱即用的用户，可以使用新实现的 `matgl.load_model` 便捷方法。以下是 CsCl 形成能预测的示例。\n\n```python\nfrom pymatgen.core import Lattice, Structure\nimport matgl\n\nmodel = matgl.load_model(\"MEGNet-MP-2018.6.1-Eform\")\n\n# 这是从 Materials Project 获取的结构。\nstruct = Structure.from_spacegroup(\"Pm-3m\", Lattice.cubic(4.1437), [\"Cs\", \"Cl\"], [[0, 0, 0], [0.5, 0.5, 0.5]])\neform = model.predict_structure(struct)\nprint(f\"CsCl 的预测形成能为 {float(eform.numpy()):.3f} eV\u002Fatom。\")\n```\n\n要获取可用的预训练模型列表，\n\n```python\nimport matgl\nprint(matgl.get_available_pretrained_models())\n```\n\n## Pytorch Hub\n\n预训练模型也可在 Pytorch Hub 上获取。要使用这些模型，只需安装 matgl 并执行以下命令：\n\n```python\nimport torch\n\n# 获取模型列表\ntorch.hub.list(\"materialsvirtuallab\u002Fmatgl\", force_reload=True)\n\n# 加载模型\nmodel = torch.hub.load(\"materialyzeai\u002Fmatgl\", 'm3gnet_universal_potential')\n```\n\n## 模型训练\n\n在 PES 训练中，训练集、验证集和测试集中能量、力以及应力（可选）的单位必须与 MatGL 中使用的单位保持一致。\n\n- 能量：以 eV 为单位的能量列表。\n- 力：以 eV\u002FÅ 为单位的 nx3 力矩阵列表，其中 n 是每个结构中的原子数。不同结构的 n 可以不同。\n- 应力：以 GPa 为单位的 3x3 应力矩阵列表（可选）。\n\n注意：对于应力，我们采用压缩应力为负值的约定。从 VASP 计算得到的应力（默认单位为 kBar）需要乘以 -0.1，才能直接用于模型。\n\n## 教程\n\n我们编写了关于如何使用 MatGL 的[教程]。这些教程由[Jupyter 笔记本]生成，可以直接在[Google Colab]上运行。\n\n## 资源\n\n- 所有类和方法的[API 文档][apidocs]。\n- [开发者指南](developer.md)概述了 `matgl` 的关键设计要素，尤其适合希望训练和贡献 `matgl` 模型的开发者。\n- AdvancedSoft 已实现了[M3GNet 的 LAMMPS 接口](https:\u002F\u002Fgithub.com\u002Fadvancesoftcorp\u002Flammps\u002Ftree\u002Fbased-on-lammps_2Jun2022\u002Fsrc\u002FML-M3GNET)，分别适用于 M3GNet 的 TF 版本和 MatGL 版本。\n\n## 参考文献\n\nMatGL 的论文已发表在 npj Computational Materials 上，请引用以下内容：\n> **MatGL**\n>\n> Ko, T. W.; Deng, B.; Nassar, M.; Barroso-Luque, L.; Liu, R.; Qi, J.; Thakur, A. C.; Mishra, A. R.; Liu, E.; Ceder, G.; Miret, S.; Ong, S. P.\n> *材料图库（MatGL），一个面向材料科学和化学的开源图深度学习库。*\n> npj Comput Mater 11, 253 (2025). DOI: [https:\u002F\u002Fdoi.org\u002F10.1038\u002Fs41524-025-01742-y][matgl].\n\n如果您正在使用任何预训练模型，请引用以下相关工作：\n\n> **MEGNet**\n>\n> Chen, C.; Ye, W.; Zuo, Y.; Zheng, C.; Ong, S. P. *图网络作为分子和晶体的通用机器学习框架。* Chem. Mater. 2019, 31 (9), 3564–3572. DOI: [10.1021\u002Facs.chemmater.9b01294][megnet].\n\n> **多精度 MEGNet**\n>\n> Chen, C.; Zuo, Y.; Ye, W.; Li, X.; Ong, S. P. *从多精度数据中学习有序和无序材料的性质。* Nature Computational Science, 2021, 1, 46–53. DOI: [10.1038\u002Fs43588-020-00002-x][mfimegnet].\n\n> **M3GNet**\n>\n> Chen, C., Ong, S.P. *一种适用于元素周期表的通用图深度学习原子间势能。* Nature Computational Science, 2023, 2, 718–728. DOI: [10.1038\u002Fs43588-022-00349-3][m3gnet].\n\n> **CHGNet**\n>\n> Deng, B., Zhong, P., Jun, K. 等. *CHGNet：一种用于电荷信息原子尺度建模的预训练通用神经网络势能。* Nat Mach Intell 5, 1031–1041 (2023). DOI: [10.1038\u002Fs42256-023-00716-3][chgnet]\n\n> **TensorNet**\n>\n> Simeon, G. De Fabritiis, G. *Tensornet：用于高效学习分子势能的笛卡尔张量表示。* Adv. Neural Info. Process. Syst. 36, (2024). DOI: [10.48550\u002FarXiv.2306.06482][tensornet]\n\n> **SO3Net**\n>\n> Schütt, K. T., Hessmann, S. S. P., Gebauer, N. W. A., Lederer, J., Gastegger, M. *SchNetPack 2.0：用于原子尺度机器学习的神经网络工具箱。* J. Chem. Phys. 158, 144801 (2023). DOI: [10.1063\u002F5.0138367][so3net]\n\n> **QET**\n>\n> Ko, T. W., Liu, R., Mishra, A. R., Yu, Z., Qi, J., Ong, S. P. *一种快速、准确且具有反应性的等变基础势能。* arXiv 预印本 arXiv:2511.07249 (2025). DOI: [10.48550\u002FarXiv.2511.07249][QET]\n\n## 常见问题解答\n\n1. 
**`M3GNet-MP-2021.2.8-PES` 与原始 TensorFlow (TF) 实现不同！**\n\n   *答：* `M3GNet-MP-2021.2.8-PES` 是经过重新调整的模型，包含一些数据改进和轻微的架构变化。将 TF 版本的权重移植到 DGL\u002FPyTorch 并不简单。我们进行了合理的基准测试，以确保新实现能够重现原始 TF 实现的大致误差特征（参见[jupyternb]中的示例）。然而，它并不一定能完全复制 TF 版本。这个重新调整的模型是未来模型改进的基础。我们认为没有必要投入大量资源来精确复制 TF 版本。\n\n2. **我在使用 `matgl.load_model()` 时遇到错误！**\n\n   *答：* 最可能的原因是你缓存了旧版本的模型。我们经常重构模型以确保最佳实现。通常可以通过将 `matgl` 更新到最新版本，并使用以下命令清除缓存 `mgl clear` 来解决这个问题。下次运行时，将会下载最新的模型。自 v0.5.2 起，我们引入了模型版本控制机制，可以检测代码与模型版本之间的冲突，并向用户发出警告。\n\n3. **我应该使用哪些预训练模型？**\n\n   *答：* 并没有一个确定的答案。一般来说，架构和数据集越新，模型的表现往往越好。但也要注意，运行在更广泛数据集上的模型可能会在特定体系上的性能有所妥协。最好的办法是查看每个模型附带的 README 文件，并针对你感兴趣的体系进行一些测试。\n\n4. **我如何为 matgl 做贡献？**\n\n   *答：* 对于代码贡献，请先 fork 项目并提交 pull 请求。建议阅读[开发者指南](developer.md)，了解总体设计规范。我们也欢迎预训练模型的贡献，同样需要通过 PR 提交。请遵循预训练模型的文件夹结构。特别地，我们期望所有模型都附带一个 `README.md` 和一个记录其使用方法及关键性能指标的笔记本。此外，我们期待新的属性或体系方面的贡献，或者显著优于现有模型的贡献。未来我们将开发另一种模型共享方式。\n\n5. **你们的任何模型都无法满足我的需求。我可以在哪里获得帮助？**\n\n   *答：* 请简要描述您的需求，联系[Prof Ong][ongemail]。对于简单的问题，我们很乐意提供建议并为您指明方向。对于更复杂的问题，我们始终愿意开展学术合作或项目。我们还为有特殊需求的企业提供[咨询服务][mqm]，包括但不限于定制数据生成、模型开发和材料设计。\n\n## 致谢\n\n本工作主要得到了[Materials Project][mp]的支持，该计划由美国能源部科学办公室基础能源科学局材料科学与工程处资助，合同编号为DE-AC02-05-CH11231：Materials Project项目KC23MP。本研究使用了极端科学与工程发现环境（XSEDE）的Expanse超级计算集群，该集群由美国国家科学基金会资助，资助号为ACI-1548562。\n\n我们还感谢NVIDIA Alchemi团队，特别是Roman Zubatyuk (@zubatyuk)和Alireza Moradzadeh (@moradza)，他们为TensorNet的warp加速做出了贡献，使速度和内存使用效率提升了约2至3倍。\n\n[m3gnetrepo]: https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fm3gnet \"M3GNet仓库\"\n[megnetrepo]: https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmegnet \"MEGNet仓库\"\n[dgl]: https:\u002F\u002Fwww.dgl.ai \"DGL官网\"\n[materialyze]: http:\u002F\u002Fmaterialyze.ai \"Materialyze.AI官网\"\n[changelog]: https:\u002F\u002Fmatgl.ai\u002Fchanges \"更新日志\"\n[graphnetwork]: https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.01261 \"DeepMind论文\"\n[megnet]: https:\u002F\u002Fpubs.acs.org\u002Fdoi\u002F10.1021\u002Facs.chemmater.9b01294 \"MEGNet论文\"\n[mfimegnet]: https:\u002F\u002Fnature.com\u002Farticles\u002Fs43588-020-00002-x \"mfi MEGNet论文\"\n[m3gnet]: https:\u002F\u002Fnature.com\u002Farticles\u002Fs43588-022-00349-3 \"M3GNet论文\"\n[mp]: http:\u002F\u002Fmaterialsproject.org \"Materials Project\"\n[apidocs]: https:\u002F\u002Fmatgl.ai\u002Fmatgl.html \"MatGL API文档\"\n[doc]: https:\u002F\u002Fmatgl.ai \"MatGL文档\"\n[google colab]: https:\u002F\u002Fcolab.research.google.com\u002F \"Google Colab\"\n[jupyternb]: https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Ftree\u002Fmain\u002Fexamples\n[ongemail]: mailto:shyue@nus.edu.sg \"电子邮件\"\n[mqm]: https:\u002F\u002Fmaterialsqm.com \"MaterialsQM\"\n[tutorials]: https:\u002F\u002Fmatgl.ai\u002Ftutorials \"教程\"\n[matgl]: https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41524-025-01742-y#citeas \"MatGL\"\n[tensornet]: https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06482 \"TensorNet\"\n[qet]: https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07249 \"QET\"\n[so3net]: https:\u002F\u002Fpubs.aip.org\u002Faip\u002Fjcp\u002Farticle-abstract\u002F158\u002F14\u002F144801\u002F2877924\u002FSchNetPack-2-0-A-neural-network-toolbox-for \"SO3Net\"\n[chgnet]: https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-023-00716-3 \"CHGNet\"\n[chgnetrepo]: https:\u002F\u002Fgithub.com\u002FCederGroupHub\u002Fchgnet \"CHGNet仓库\"\n[maml]: https:\u002F\u002Fmaterialyzeai.github.io\u002Fmaml\u002F\n[MatGL]: https:\u002F\u002Fmatgl.ai\n[MatPES]: https:\u002F\u002Fmatpes.ai\n[MatCalc]: https:\u002F\u002Fmatcalc.ai\n[Materials Virtual Lab Docker仓库]: 
https:\u002F\u002Fhub.docker.com\u002Forgs\u002Fmaterialsvirtuallab\u002Frepositories","# MatGL 快速上手指南\n\nMatGL (Materials Graph Library) 是一个专为材料科学设计的图深度学习库，用于构建预测材料性质的代理模型。本指南将帮助您快速完成环境配置并开始使用预训练模型。\n\n## 1. 环境准备\n\n*   **操作系统**: Linux, macOS 或 Windows (推荐 Linux)\n*   **Python 版本**: 3.8 - 3.11\n*   **核心依赖**:\n    *   PyTorch (默认后端，v2.0.0+ 起默认使用 PyG)\n    *   pymatgen (用于处理晶体结构)\n*   **可选依赖**:\n    *   **DGL**: 如需使用仅支持 DGL 的架构（如 CHGNet, MEGNet, QET），需手动安装。\n    *   **CUDA**: 如需 GPU 加速训练，需安装对应版本的 `torch` 和 `dgl`。\n\n> **注意**：从 v2.0.0 开始，MatGL 默认使用 **PyTorch Geometric (PyG)** 后端。DGL 不再是必需依赖，但部分旧模型或新发布的 QET 架构仍需 DGL 支持。\n\n## 2. 安装步骤\n\n### 基础安装 (CPU \u002F 默认 PyG 后端)\n\n直接使用 pip 安装即可：\n\n```bash\npip install matgl\n```\n\n### 进阶安装 (如需使用 DGL 后端)\n\n如果您需要使用仅支持 DGL 的模型（例如 CHGNet 或 QET），请先安装 DGL 及相关依赖，再安装 matgl：\n\n```bash\npip install \"numpy\u003C2\"\npip install dgl==2.2.0\npip install torch==2.3.0\npip install \"torchdata\u003C=0.8.0\"\npip install matgl\n```\n\n安装后，在代码中显式设置后端：\n```python\nimport matgl\nmatgl.set_backend(\"DGL\")\n```\n\n### GPU 加速安装 (CUDA)\n\n若需使用 NVIDIA GPU 加速，请根据您的 CUDA 版本安装对应的 PyTorch 和 DGL。以下以 CUDA 12.1 为例：\n\n```bash\npip install torch==2.2.0 --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu121\npip install dgl -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels\u002Fcu121\u002Frepo.html\npip install dglgo -f https:\u002F\u002Fdata.dgl.ai\u002Fwheels-test\u002Frepo.html\npip install matgl\n```\n\n> **提示**：国内用户若下载缓慢，可配置 pip 使用清华或阿里镜像源（如 `pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple ...`），但 PyTorch 和 DGL 的特定 wheel 文件建议参考官方文档确认镜像可用性。\n\n## 3. 基本使用\n\nMatGL 提供了便捷的 `load_model` 方法，可直接加载预训练模型进行性质预测或结构弛豫。\n\n### 方法一：Python 代码调用\n\n以下示例演示如何加载预训练的 MEGNet 模型预测 CsCl 的形成能：\n\n```python\nfrom pymatgen.core import Lattice, Structure\nimport matgl\n\n# 加载预训练模型 (形成能模型)\nmodel = matgl.load_model(\"MEGNet-MP-2018.6.1-Eform\")\n\n# 构建晶体结构 (此处以 CsCl 为例)\nstruct = Structure.from_spacegroup(\"Pm-3m\", Lattice.cubic(4.1437), [\"Cs\", \"Cl\"], [[0, 0, 0], [0.5, 0.5, 0.5]])\n\n# 预测性质\neform = model.predict_structure(struct)\nprint(f\"The predicted formation energy for CsCl is {float(eform.numpy()):.3f} eV\u002Fatom.\")\n```\n\n查看可用的预训练模型列表：\n\n```python\nimport matgl\nprint(matgl.get_available_pretrained_models())\n```\n\n### 方法二：命令行工具 (CLI)\n\n从 v0.6.2 版本起，MatGL 提供命令行工具 `mgl`，可快速执行结构弛豫或性质预测。\n\n**1. 结构弛豫**\n```bash\nmgl relax --infile Li2O.cif --outfile Li2O_relax.cif\n```\n\n**2. 性质预测**\n```bash\nmgl predict --model M3GNet-MP-2018.6.1-Eform --infile Li2O.cif\n```\n\n**3. 
清理缓存**\n```bash\nmgl clear\n```\n\n更多命令选项请输入 `mgl -h` 查看。","某新能源电池材料研发团队正致力于从数万种候选晶体结构中，快速筛选出具有高离子电导率且热力学稳定的新型固态电解质。\n\n### 没有 matgl 时\n- **计算成本高昂**：依赖传统密度泛函理论（DFT）计算每个候选材料的能量和性质，单个结构需耗时数小时，完成万级筛选需数月甚至更久。\n- **模型复现困难**：团队试图复现最新的 M3GNet 或 CHGNet 论文模型，但需手动处理复杂的图数据结构转换，代码调试周期长达数周。\n- **框架迁移痛苦**：随着主流图神经网络库从 DGL 向 PyTorch Geometric (PyG) 迁移，原有基于旧框架的代码面临重构风险，缺乏平滑过渡方案。\n- **预训练资源缺失**：缺乏高质量、开箱即用的通用势函数预训练模型，从头训练小样本数据导致预测精度极低，无法指导实验。\n\n### 使用 matgl 后\n- **推理速度飞跃**：直接调用 matgl 内置的 M3GNet 预训练模型作为代理模型，将单结构预测时间从小时级压缩至毫秒级，万级筛选任务一天内即可完成。\n- **开发效率倍增**：利用 matgl 统一的 API 接口，仅需一行代码即可加载经过验证的 SOTA 架构（如 TensorNet、QET），免去了底层图构建的繁琐工作。\n- **无缝拥抱新技术**：借助 matgl v2.0 对 PyG 后端的原生支持，团队无需重写代码即可享受最新框架的性能优化，彻底摆脱了 DGL 停止维护的担忧。\n- **精度即时可用**：直接使用官方提供的在大规模 MatPES 数据集上预训练的势能模型，即使在少量私有数据微调下，也能获得接近 DFT 的计算精度。\n\nmatgl 通过将前沿的图深度学习架构转化为开箱即用的工业级工具，让材料科学家能将精力从繁琐的代码工程中解放出来，专注于真正的科学发现。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmaterialyzeai_matgl_1e911bcb.png","materialyzeai","Materialyze.AI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmaterialyzeai_f836fc5c.png","Our mission is to accelerate the design and discovery of breakthrough materials through the integration of theory, experiments, and AI.",null,"shyue@nus.edu.sg","materialyze.ai","https:\u002F\u002Fgithub.com\u002Fmaterialyzeai",[81,85],{"name":82,"color":83,"percentage":84},"Python","#3572A5",100,{"name":86,"color":87,"percentage":88},"Shell","#89e051",0,530,108,"2026-04-07T09:54:35","BSD-3-Clause","Linux, macOS, Windows","非必需。若需加速训练，支持 NVIDIA GPU，需安装对应 CUDA 版本的 PyTorch 和 DGL（示例中提及 CUDA 12.1 和 CUDA 11.x），具体显存需求取决于模型大小和数据集，未明确说明最低要求。","未说明",{"notes":97,"python":98,"dependencies":99},"从 v2.0.0 起默认使用 PyTorch Geometric (PyG) 后端，不再强制依赖 DGL；若需使用 QET、CHGNet 或 MEGNet 等特定架构，需手动安装 DGL 并设置环境变量 MATGL_BACKEND=DGL。GPU 加速时需严格匹配 PyTorch 和 DGL 的 CUDA 版本。提供 Docker 镜像以简化包含 LAMMPS 支持的环境部署。","未说明 (隐含需支持 PyTorch 2.x 的版本，通常建议 3.8+)",[100,101,102,103,104],"torch>=2.2.0","dgl>=2.2.0 (可选，用于部分架构)","numpy\u003C2","pymatgen","torchdata\u003C=0.8.0",[106,16,14],"其他",[108,109,110,111,112],"deep","graph","learning","materials-informatics","materials-science","2026-03-27T02:49:30.150509","2026-04-08T05:28:54.823505",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},23785,"为什么微调后的 M3GNet 模型表现比预训练模型更差？","这通常是因为缺少参考能量（elemental_refs）或超参数设置不当。1. 确保在微调时提供正确的 element_refs，否则模型将没有存储参考能量；2. 检查损失函数权重设置，建议参考原始 M3GNet 配置，例如设置 force_weight=1 和 stress_weight=0.1；3. 
确认数据归一化参数（data_mean, data_std）是否正确加载或使用数据集特定的值。","https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fissues\u002F264",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},23786,"M3GNet 模型的预测结果为什么具有随机性（非确定性）？","这是由于 PyTorch 中 LSTM 层的已知问题导致的非确定性行为。解决方法是在代码开头固定随机种子：`torch.manual_seed(你的种子值)`。此外，确保使用的是最新版本的 matgl，因为维护者已修复了最终层权重未保存的问题，该问题曾导致形成能模型出现随机性。","https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fissues\u002F116",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},23787,"如何使用 matgl 计算结构的应力（stresses）？","应通过 ASE 接口使用 `PESCalculator` 来计算能量、力和应力。你需要导入 `matgl.ext.ase` 中的 `PESCalculator`，将其附加到 ASE 的 Atoms 对象上，然后调用 `get_stresses()` 方法。具体用法请参考 ASE 官方文档以及 matgl 仓库中的示例代码，不要直接尝试从 pymatgen 结构计算而不经过 ASE 转换。","https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fissues\u002F321",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},23788,"如何复现与 TensorFlow 版本架构完全一致的 PyTorch M3GNet 模型？","直接使用 `Potential(M3GNet(DEFAULT_ELEMENT_TYPES, is_intensive=False))` 可能无法得到与 TF 版本完全一致的参数量。建议使用 matgl v0.5.0 及以上版本，并加载官方预训练权重（如 'MP-2021.2.8-EFS'）。在最新版本中，数据的均值和标准差会自动保存和加载，简化了复现过程。如果参数形状仍不匹配，请检查是否使用了最新的 GitHub 版本代码。","https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fissues\u002F59",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},23789,"运行代码时遇到 'No module named torchdata.datapipes' 错误怎么办？","该错误通常是由于 DGL 或其依赖项版本不兼容引起的。虽然报错指向 torchdata，但根本原因往往是后端冲突或版本缺失。尝试重新安装或升级 DGL 和 torchdata 库，确保它们与当前 PyTorch 版本兼容。另外，注意日志中关于后端设置的警告（如 'Model MEGNet is a DGL model, but the backend is PYG'），建议在加载模型前显式设置正确的后端或卸载冲突的图形神经网络库（如 pyg）。","https:\u002F\u002Fgithub.com\u002Fmaterialyzeai\u002Fmatgl\u002Fissues\u002F699",{"id":142,"question_zh":143,"answer_zh":144,"source_url":130},23790,"执行 lammps 可执行文件时出现 'symbol lookup error: undefined symbol' 错误如何解决？","该错误通常由环境变量冲突或库路径问题引起，特别是涉及 fmt 库的版本冲突。虽然这是一个较为底层的系统错误，但在使用 matgl 进行分子动力学模拟时，推荐优先使用内置的 ASE 接口（`PESCalculator`）来驱动模拟，而不是直接调用编译后的 lmp 可执行文件，这样可以避免大部分环境依赖问题。如果必须使用 LAMMPS，请检查 LD_LIBRARY_PATH 并确保所有依赖库版本一致。",[146,151,156,161,165,170,175,180,185,190,195,200,204,209,214,219,224,228,233,238],{"id":147,"version":148,"summary_zh":149,"released_at":150},145322,"v2.1.1","- 将 `TensorNet`（PyG）和 `TensorNetWarp` 合并为一个 `TensorNet` 类，并提供可选的 warp 加速功能\n （通过 `use_warp` 参数控制；在安装了 `nvalchemi-toolkit-ops` 时会自动检测并启用）。\n- 将使用 warp 加速的 `TensorEmbedding` 和 `TensorNetInteraction` 层移至 `matgl.layers._embedding_warp`\n  和 `matgl.layers._graph_convolution_warp` 模块中。\n- 使 `nvalchemiops` 成为可选依赖：在整个代码库中，无论是 `_pymatgen_pyg`、`_ase_pyg` 还是 warp 层的导入，\n  在该包未安装时都会优雅地回退到基于 pymatgen 的邻近原子列表构建方式。","2026-03-15T02:04:15",{"id":152,"version":153,"summary_zh":154,"released_at":155},145323,"v2.1.0","- 修复了默认后端被意外更改的 bug。- 更新了训练模块，以支持 QET。","2026-03-13T23:26:27",{"id":157,"version":158,"summary_zh":159,"released_at":160},145324,"v2.0.9","- 修复了 Atoms2Graph 导出缺失的 bug。","2026-03-05T23:01:06",{"id":162,"version":163,"summary_zh":159,"released_at":164},145325,"v2.0.8","2026-03-05T12:31:56",{"id":166,"version":167,"summary_zh":168,"released_at":169},145326,"v2.0.7","- 重构了 PyG TensorNet 的嵌入和交互模块，将其改为纯 PyTorch 实现，以提升兼容性。(@kenko911)\n- 改进了 `PESCalculator` 中应力单位的处理。(@kenko911)\n- 允许 CHGNet 和 TensorNet 模型返回中间晶体特征。(@bowen-bd)\n- 添加了 GPU 加速的邻近原子列表构建功能，并优化了 CUDA 邻近原子列表的性能及重试逻辑。\n  (@zubatyuk)\n- 将 NVIDIA TensorNet Warp CUDA 内核集成到主分支中。(@atulcthakur, @zubatyuk)\n- 通过更新 `Atoms2Graph`、`collate_fn_pes` 和 `MGLDataset`（包括 `include_ref_charge` 参数），增强了 QET 训练支持。(@kenko911)\n- 更新了 QET 的文档，增加了参考文献和 DOI 
链接。(@kenko911)","2026-03-05T07:48:02",{"id":171,"version":172,"summary_zh":173,"released_at":174},145327,"v2.0.6","- 修复 CHGnet 加载的 bug。","2025-12-14T01:38:01",{"id":176,"version":177,"summary_zh":178,"released_at":179},145328,"v2.0.5","- 改进了后端与模型不匹配时的错误提示。尝试透明地处理简单情况。","2025-12-08T02:05:34",{"id":181,"version":182,"summary_zh":183,"released_at":184},145329,"v2.0.4","- 修复了针对不同后端的 matgl.graph.data 和 matgl.graph.converter 模块导入错误。","2025-11-26T14:56:11",{"id":186,"version":187,"summary_zh":188,"released_at":189},145330,"v2.0.3","- 修复了针对不同后端的 matgl.ext.pymatgen 导入错误。","2025-11-25T20:58:30",{"id":191,"version":192,"summary_zh":193,"released_at":194},145331,"v2.0.2","- 新增了 QET（电荷平衡张量网络）架构及预训练权重！(@kenko911)\n- 已开始将代码迁移到 PyTorch Geometric，弃用现已 deprecated 的 DGL。目前仅在 PYG 中实现了基础的 TensorNet。DGL 模型仍可运行，但需要手动配置（切换后端并安装 DGL）。","2025-11-13T17:53:26",{"id":196,"version":197,"summary_zh":198,"released_at":199},145332,"v2.0.1","- QET (Charge-Equilibrated TensorNet) architecture and pre-trained weights are added!\n- Begun a migration to Pytorch-Geometric over the now-deprecated DGL. So far, only vanilla TensorNet has been\n  implemented in PYG). DGL models still work but require a manual setup (change of backend and installation of DGL).","2025-11-13T17:17:06",{"id":201,"version":202,"summary_zh":198,"released_at":203},145333,"v2.0.0","2025-11-13T16:36:01",{"id":205,"version":206,"summary_zh":207,"released_at":208},145334,"v1.3.0","This release includes pretrained molecular potentials and implements the preliminary PyG framework for future development.\r\n## What's Changed\r\n* publish wheel as well as sdist by @dimbleby in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F597\r\n* Example notebook for fine-tuning M3GNet potential on a customized dataset with DIRECT sampling by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F599\r\n* pre-commit autoupdate by @pre-commit-ci[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F602\r\n* First stage for PyG TensorNet implementation done by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F604\r\n* Update lightning requirement from \u003C=2.5.1 to \u003C=2.5.1.post0 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F606\r\n* Add `element_types` kwarg to example notebook by @Andrew-S-Rosen in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F611\r\n* Update lightning requirement from \u003C=2.5.1.post0 to \u003C=2.6.0.dev20250629 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F610\r\n* Ruff fix in MGLDataset class by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F614\r\n* Update lightning requirement from \u003C=2.6.0.dev20250629 to \u003C=2.6.0.dev20250706 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F615\r\n* pre-commit autoupdate by @pre-commit-ci[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F616\r\n* TensorNet PyG added by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F617\r\n* Update lightning requirement from \u003C=2.6.0.dev20250706 to \u003C=2.6.0.dev20250713 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F618\r\n* Fix the bug in 
the united test for ase.py by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F621\r\n* Update lightning requirement from \u003C=2.6.0.dev20250713 to \u003C=2.6.0.dev20250720 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F622\r\n* Bump nokogiri from 1.18.8 to 1.18.9 in \u002Fdocs by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F623\r\n* Update lightning requirement from \u003C=2.6.0.dev20250720 to \u003C=2.6.0.dev20250727 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F624\r\n* pre-commit autoupdate by @pre-commit-ci[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F627\r\n* Update lightning requirement from \u003C=2.6.0.dev20250727 to \u003C=2.6.0.dev20250803 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F629\r\n* Added a dropout argument to the M3GNET constructor by @miicck in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F628\r\n* Update lightning requirement from \u003C=2.6.0.dev20250803 to \u003C=2.6.0.dev20250810 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F631\r\n* Pretrained molecular potentials from MatGL paper and additional thermostats for ASE MD added by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F633\r\n\r\n## New Contributors\r\n* @dimbleby made their first contribution in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F597\r\n* @miicck made their first contribution in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F628\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fcompare\u002Fv1.2.7...v1.3.0","2025-08-12T16:12:52",{"id":210,"version":211,"summary_zh":212,"released_at":213},145335,"v1.2.7","- Use original custom RemoteFile rather than fsspec, which is very finicky with SSL connections.\n- _create_directed_line_graph error handling (@bowen-bd)\n- Update Import Alias for lightning (@jcwang587)\n- Add nvt_nose_hoover to MD ensemble (@bowen-bd)\n- Allow training of magmom when no line graph presents (@bowen-bd)\n- Allow disable BondGraph in CHGNet (@bowen-bd)","2025-05-18T19:21:09",{"id":215,"version":216,"summary_zh":217,"released_at":218},145336,"v1.2.6","- Fix missing torchdata dependency for Linux.","2025-04-07T16:04:32",{"id":220,"version":221,"summary_zh":222,"released_at":223},145337,"v1.2.5","- Dependency pinning now is platform specific. 
Linux based systems can now work with latest DGL and torch.","2025-04-03T23:23:38",{"id":225,"version":226,"summary_zh":222,"released_at":227},145338,"v1.2.4","2025-04-03T20:29:51",{"id":229,"version":230,"summary_zh":231,"released_at":232},145339,"v1.2.1","- Bug fix for pbc dtype on Windows systems.","2025-03-17T21:11:13",{"id":234,"version":235,"summary_zh":236,"released_at":237},145340,"v1.2.0","- Release of MatPES-based models.\n- Pin DGL and PyTorch dependencies to 2.2.0 to ensure compatibility with Mac.","2025-03-17T18:40:32",{"id":239,"version":240,"summary_zh":241,"released_at":242},145341,"v1.1.3","## What's Changed\r\n* Matgl depends on torch==2.2.1 by @Badasper in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F273\r\n* FrechetCellFilter is added for variable cell relaxation in Relaxer class by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F275\r\n* Improve the TensorNet model class coverage by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F276\r\n* Improve SO3Net model class coverage and simplify TensorNet implementation by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F277\r\n* Improve the coverage in MLP_norm class by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F278\r\n* Better Documentation for M3GNet potential training with stresses by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F281\r\n* Improve the implementation of three-body interactions in M3GNet by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F282\r\n* Optimize the speed of _compute_3body implementation by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F283\r\n* Type checking for scheduler is added by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F284\r\n* Update M3GNet potential training notebook for the demonstrating of obtaining and using element offset by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F288\r\n* Smooth l1 loss function is added and the united tests are improved. by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F289\r\n* Merge the predict_structure and featurize_structure into a single method by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F290\r\n* Remove unnecessary else statement for calculating magmom loss by @kenko911 in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F294\r\n* Bump rexml from 3.2.8 to 3.3.2 in \u002Fdocs by @dependabot in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F297\r\n* Bump rexml from 3.3.2 to 3.3.3 in \u002Fdocs by @dependabot in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F298\r\n\r\n## New Contributors\r\n* @Badasper made their first contribution in https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fpull\u002F273\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmaterialsvirtuallab\u002Fmatgl\u002Fcompare\u002Fv1.1.2...v1.1.3","2024-08-07T12:24:58"]