[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-deepmodeling--deepmd-kit":3,"tool-deepmodeling--deepmd-kit":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",147882,2,"2026-04-09T11:32:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":72,"owner_website":77,"owner_url":78,"languages":79,"stars":113,"forks":114,"last_commit_at":115,"license":116,"difficulty_score":10,"env_os":117,"env_gpu":118,"env_ram":119,"env_deps":120,"category_tags":132,"github_topics":135,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":156,"updated_at":157,"faqs":158,"releases":159},5855,"deepmodeling\u002Fdeepmd-kit","deepmd-kit","A deep learning package for many-body potential energy representation and molecular dynamics","DeePMD-kit 是一款专为分子模拟设计的深度学习软件包，旨在构建高精度的原子间势能模型并执行分子动力学模拟。它巧妙地将深度学习的强大拟合能力与传统物理模拟相结合，有效解决了计算科学中长期存在的“精度与效率难以兼得”的难题：既拥有接近量子力学计算的准确性，又具备经典力场的高效性，让研究者能够以更低的成本模拟更复杂的体系。\n\n这款工具非常适合计算化学、材料科学及生物物理领域的研究人员使用，同时也欢迎希望开发新型势函数的算法开发者参与。无论是研究有机小分子、金属合金，还是半导体与绝缘体材料，DeePMD-kit 都能提供强有力的支持。\n\n其技术亮点在于极高的灵活性与兼容性：它不仅内置了成熟的 Deep Potential 系列模型，还无缝对接 
TensorFlow、PyTorch、JAX 等主流深度学习框架，使训练过程高度自动化。此外，它能轻松连接 LAMMPS、GROMACS、CP2K 等高性能模拟引擎，并全面支持 GPU 加速与 MPI 并行计算，确保在大规模集群上也能高效运行。凭借模块化设计，用户还可以轻松定制不同的描述符，探索前沿的势能面表达方法。","[\u003Cpicture>\u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\".\u002Fdoc\u002F_static\u002Flogo-dark.svg\">\u003Csource media=\"(prefers-color-scheme: light)\" srcset=\".\u002Fdoc\u002F_static\u002Flogo.svg\">\u003Cimg alt=\"DeePMD-kit logo\" src=\".\u002Fdoc\u002F_static\u002Flogo.svg\">\u003C\u002Fpicture>](.\u002Fdoc\u002Flogo.md)\n\n______________________________________________________________________\n\n# DeePMD-kit\n\n[![GitHub release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease\u002Fdeepmodeling\u002Fdeepmd-kit.svg?maxAge=86400)](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Freleases)\n[![offline packages](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002Fdeepmodeling\u002Fdeepmd-kit\u002Ftotal?label=offline%20packages)](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Freleases)\n[![conda-forge](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fdn\u002Fconda-forge\u002Fdeepmd-kit?color=red&label=conda-forge&logo=conda-forge)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fdeepmd-kit)\n[![pip install](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fdeepmd-kit?label=pip%20install)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fdeepmd-kit)\n[![docker pull](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fdeepmodeling\u002Fdeepmd-kit)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fdeepmodeling\u002Fdeepmd-kit)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_13d664e1afd7.png)](https:\u002F\u002Fdeepmd.readthedocs.io\u002F)\n\n## About DeePMD-kit\n\nDeePMD-kit is a package written in Python\u002FC++, designed to minimize the effort required to build deep learning-based model of interatomic potential energy and force field and to perform 
molecular dynamics (MD). This brings new hopes to addressing the accuracy-versus-efficiency dilemma in molecular simulations. Applications of DeePMD-kit span from finite molecules to extended systems and from metallic systems to chemically bonded systems.\n\nFor more information, check the [documentation](https:\u002F\u002Fdeepmd.readthedocs.io\u002F).\n\n### Highlighted features\n\n- **interfaced with multiple backends**, including TensorFlow, PyTorch, JAX, and Paddle, the most popular deep learning frameworks, making the training process highly automatic and efficient.\n- **interfaced with high-performance classical MD and quantum (path-integral) MD packages**, including LAMMPS, i-PI, AMBER, CP2K, GROMACS, OpenMM, and ABACUS.\n- **implements the Deep Potential series models**, which have been successfully applied to finite and extended systems, including organic molecules, metals, semiconductors, insulators, etc.\n- **implements MPI and GPU supports**, making it highly efficient for high-performance parallel and distributed computing.\n- **highly modularized**, easy to adapt to different descriptors for deep learning-based potential energy models.\n\n### License and credits\n\nThe project DeePMD-kit is licensed under [GNU LGPLv3.0](.\u002FLICENSE).\nIf you use this code in any future publications, please cite the following publications for general purpose:\n\n- Han Wang, Linfeng Zhang, Jiequn Han, and Weinan E. 
\"DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics.\" Computer Physics Communications 228 (2018): 178-184.\n  [![doi:10.1016\u002Fj.cpc.2018.03.016](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.1016%2Fj.cpc.2018.03.016-blue)](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.cpc.2018.03.016)\n  [![Citations](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_bb9142e54705.png)](https:\u002F\u002Fbadge.dimensions.ai\u002Fdetails\u002Fdoi\u002F10.1016\u002Fj.cpc.2018.03.016)\n- Jinzhe Zeng, Duo Zhang, Denghui Lu, Pinghui Mo, Zeyu Li, Yixiao Chen, Marián Rynik, Li'ang Huang, Ziyao Li, Shaochen Shi, Yingze Wang, Haotian Ye, Ping Tuo, Jiabin Yang, Ye Ding, Yifan Li, Davide Tisi, Qiyu Zeng, Han Bao, Yu Xia, Jiameng Huang, Koki Muraoka, Yibo Wang, Junhan Chang, Fengbo Yuan, Sigbjørn Løland Bore, Chun Cai, Yinnian Lin, Bo Wang, Jiayan Xu, Jia-Xin Zhu, Chenxing Luo, Yuzhi Zhang, Rhys E. A. Goodall, Wenshuo Liang, Anurag Kumar Singh, Sikai Yao, Jingchao Zhang, Renata Wentzcovitch, Jiequn Han, Jie Liu, Weile Jia, Darrin M. York, Weinan E, Roberto Car, Linfeng Zhang, Han Wang. \"DeePMD-kit v2: A software package for deep potential models.\" J. Chem. Phys. 
159 (2023): 054801.\n  [![doi:10.1063\u002F5.0155600](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.1063%2F5.0155600-blue)](https:\u002F\u002Fdoi.org\u002F10.1063\u002F5.0155600)\n  [![Citations](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_be666e101b27.png)](https:\u002F\u002Fbadge.dimensions.ai\u002Fdetails\u002Fdoi\u002F10.1063\u002F5.0155600)\n- Jinzhe Zeng, Duo Zhang, Anyang Peng, Xiangyu Zhang, Sensen He, Yan Wang, Xinzijian Liu, Hangrui Bi, Yifan Li, Chun Cai, Chengqian Zhang, Yiming Du, Jia-Xin Zhu, Pinghui Mo, Zhengtao Huang, Qiyu Zeng, Shaochen Shi, Xuejian Qin, Zhaoxi Yu, Chenxing Luo, Ye Ding, Yun-Pei Liu, Ruosong Shi, Zhenyu Wang, Sigbjørn Løland Bore, Junhan Chang, Zhe Deng, Zhaohan Ding, Siyuan Han, Wanrun Jiang, Guolin Ke, Zhaoqing Liu, Denghui Lu, Koki Muraoka, Hananeh Oliaei, Anurag Kumar Singh, Haohui Que, Weihong Xu, Zhangmancang Xu, Yong-Bin Zhuang, Jiayu Dai, Timothy J. Giese, Weile Jia, Ben Xu, Darrin M. York, Linfeng Zhang, Han Wang. \"DeePMD-kit v3: A Multiple-Backend Framework for Machine Learning Potentials.\" J. Chem. Theory Comput. 21 (2025): 4375-4385.\n  [![doi:10.1021\u002Facs.jctc.5c00340](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.1021%2Facs.jctc.5c00340-blue)](https:\u002F\u002Fdoi.org\u002F10.1021\u002Facs.jctc.5c00340)\n  [![Citations](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_1b03acfd8cd0.png)](https:\u002F\u002Fbadge.dimensions.ai\u002Fdetails\u002Fdoi\u002F10.1021\u002Facs.jctc.5c00340)\n\nIn addition, please follow [the bib file](CITATIONS.bib) to cite the methods you used.\n\n### Highlights in major versions\n\n#### Initial version\n\nThe goal of Deep Potential is to employ deep learning techniques and realize an inter-atomic potential energy model that is general, accurate, computationally efficient and scalable. 
The key component is to respect the extensive and symmetry-invariant properties of a potential energy model by assigning a local reference frame and a local environment to each atom. Each environment contains a finite number of atoms, whose local coordinates are arranged in a symmetry-preserving way. These local coordinates are then transformed, through a sub-network, to so-called _atomic energy_. Summing up all the atomic energies gives the potential energy of the system.\n\nThe initial proof of concept is in the [Deep Potential][1] paper, which employed an approach that was devised to train the neural network model with the potential energy only. With typical _ab initio_ molecular dynamics (AIMD) datasets this is insufficient to reproduce the trajectories. The Deep Potential Molecular Dynamics ([DeePMD][2]) model overcomes this limitation. In addition, the learning process in DeePMD improves significantly over the Deep Potential method thanks to the introduction of a flexible family of loss functions. The NN potential constructed in this way reproduces accurately the AIMD trajectories, both classical and quantum (path integral), in extended and finite systems, at a cost that scales linearly with system size and is always several orders of magnitude lower than that of equivalent AIMD simulations.\n\nAlthough highly efficient, the original Deep Potential model satisfies the extensive and symmetry-invariant properties of a potential energy model at the price of introducing discontinuities in the model. This has negligible influence on a trajectory from canonical sampling but might not be sufficient for calculations of dynamical and mechanical properties. These points motivated us to develop the Deep Potential-Smooth Edition ([DeepPot-SE][3]) model, which replaces the non-smooth local frame with a smooth and adaptive embedding network. 
DeepPot-SE shows great ability in modeling many kinds of systems that are of interest in the fields of physics, chemistry, biology, and materials science.\n\nIn addition to building up potential energy models, DeePMD-kit can also be used to build up coarse-grained models. In these models, the quantity that we want to parameterize is the free energy, or the coarse-grained potential, of the coarse-grained particles. See the [DeePCG paper][4] for more details.\n\n#### v1\n\n- Code refactor to make it highly modularized.\n- GPU support for descriptors.\n\n#### v2\n\n- Model compression. Accelerate the efficiency of model inference 4-15 times.\n- New descriptors. Including `se_e2_r`, `se_e3`, and `se_atten` (DPA-1).\n- Hybridization of descriptors. Hybrid descriptor constructed from the concatenation of several descriptors.\n- Atom type embedding. Enable atom-type embedding to decline training complexity and refine performance.\n- Training and inference of the dipole (vector) and polarizability (matrix).\n- Split of training and validation dataset.\n- Optimized training on GPUs, including CUDA and ROCm.\n- Non-von-Neumann.\n- C API to interface with the third-party packages.\n\nSee [our v2 paper](https:\u002F\u002Fdoi.org\u002F10.1063\u002F5.0155600) for details of all features until v2.2.3.\n\n#### v3\n\n- Multiple backends supported. Add PyTorch and JAX backends.\n- The DPA2 and DPA3 models.\n- Plugin mechanisms for external models.\n\nSee [our v3 paper](https:\u002F\u002Fdoi.org\u002F10.1021\u002Facs.jctc.5c00340) for details of all features until v3.0.\n\n## Install and use DeePMD-kit\n\nJust copy and paste in 1s, and let it run.\n\n```sh\ncurl -fsSL https:\u002F\u002Fdp1s.deepmodeling.com | bash\n```\n\nPlease read the [online documentation](https:\u002F\u002Fdeepmd.readthedocs.io\u002F) for details and alternative installation methods.\n\nThen, read on for a brief overview of the usage of DeePMD-kit. 
You may start with the first step:\n\n```sh\ndp\n```\n\n## Code structure\n\nThe code is organized as follows:\n\n- `examples`: examples.\n- `deepmd`: DeePMD-kit python modules.\n- `source\u002Flib`: source code of the core library.\n- `source\u002Fop`: Operator (OP) implementation.\n- `source\u002Fapi_cc`: source code of DeePMD-kit C++ API.\n- `source\u002Fapi_c`: source code of the C API.\n- `source\u002Fnodejs`: source code of the Node.js API.\n- `source\u002Fipi`: source code of i-PI client.\n- `source\u002Flmp`: source code of LAMMPS module.\n\n# Contributing\n\nSee [DeePMD-kit Contributing Guide](CONTRIBUTING.md) to become a contributor! 🤓\n\n[1]: https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.01478\n[2]: https:\u002F\u002Fjournals.aps.org\u002Fprl\u002Fabstract\u002F10.1103\u002FPhysRevLett.120.143001\n[3]: https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09003\n[4]: https:\u002F\u002Faip.scitation.org\u002Fdoi\u002Ffull\u002F10.1063\u002F1.5027645\n","[\u003Cpicture>\u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\".\u002Fdoc\u002F_static\u002Flogo-dark.svg\">\u003Csource media=\"(prefers-color-scheme: light)\" srcset=\".\u002Fdoc\u002F_static\u002Flogo.svg\">\u003Cimg alt=\"DeePMD-kit logo\" src=\".\u002Fdoc\u002F_static\u002Flogo.svg\">\u003C\u002Fpicture>](.\u002Fdoc\u002Flogo.md)\n\n______________________________________________________________________\n\n# DeePMD-kit\n\n[![GitHub release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease\u002Fdeepmodeling\u002Fdeepmd-kit.svg?maxAge=86400)](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Freleases)\n[![offline 
packages](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002Fdeepmodeling\u002Fdeepmd-kit\u002Ftotal?label=offline%20packages)](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Freleases)\n[![conda-forge](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fdn\u002Fconda-forge\u002Fdeepmd-kit?color=red&label=conda-forge&logo=conda-forge)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fdeepmd-kit)\n[![pip install](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fdeepmd-kit?label=pip%20install)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fdeepmd-kit)\n[![docker pull](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fdeepmodeling\u002Fdeepmd-kit)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fdeepmodeling\u002Fdeepmd-kit)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_13d664e1afd7.png)](https:\u002F\u002Fdeepmd.readthedocs.io\u002F)\n\n## 关于DeePMD-kit\n\nDeePMD-kit是一个用Python\u002FC++编写的软件包，旨在最大限度地减少构建基于深度学习的原子间势能和力场模型以及进行分子动力学（MD）模拟所需的工作量。这为解决分子模拟中精度与效率之间的矛盾带来了新的希望。DeePMD-kit的应用范围涵盖了从有限分子到扩展体系，从金属体系到化学键合体系。\n\n欲了解更多信息，请参阅[文档](https:\u002F\u002Fdeepmd.readthedocs.io\u002F)。\n\n### 突出特点\n\n- **与多种后端框架兼容**，包括TensorFlow、PyTorch、JAX和Paddle等最流行的深度学习框架，使训练过程高度自动化且高效。\n- **与高性能经典MD及量子（路径积分）MD软件包集成**，如LAMMPS、i-PI、AMBER、CP2K、GROMACS、OpenMM和ABACUS等。\n- **实现了Deep Potential系列模型**，这些模型已成功应用于有限体系和扩展体系，包括有机分子、金属、半导体、绝缘体等。\n- **支持MPI和GPU加速**，使其在高性能并行和分布式计算中表现出色。\n- **高度模块化**，易于适应不同的描述符，用于构建基于深度学习的势能模型。\n\n### 许可证与致谢\n\nDeePMD-kit项目采用[GNU LGPLv3.0](.\u002FLICENSE)许可证。如果您在未来的研究成果中使用了本代码，请引用以下通用文献：\n\n- Han Wang, Linfeng Zhang, Jiequn Han, and Weinan E. 
\"DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics.\" Computer Physics Communications 228 (2018): 178-184.\n  [![doi:10.1016\u002Fj.cpc.2018.03.016](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.1016%2Fj.cpc.2018.03.016-blue)](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.cpc.2018.03.016)\n  [![引用次数](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_bb9142e54705.png)](https:\u002F\u002Fbadge.dimensions.ai\u002Fdetails\u002Fdoi\u002F10.1016\u002Fj.cpc.2018.03.016)\n- Jinzhe Zeng, Duo Zhang, Denghui Lu, Pinghui Mo, Zeyu Li, Yixiao Chen, Marián Rynik, Li'ang Huang, Ziyao Li, Shaochen Shi, Yingze Wang, Haotian Ye, Ping Tuo, Jiabin Yang, Ye Ding, Yifan Li, Davide Tisi, Qiyu Zeng, Han Bao, Yu Xia, Jiameng Huang, Koki Muraoka, Yibo Wang, Junhan Chang, Fengbo Yuan, Sigbjørn Løland Bore, Chun Cai, Yinnian Lin, Bo Wang, Jiayan Xu, Jia-Xin Zhu, Chenxing Luo, Yuzhi Zhang, Rhys E. A. Goodall, Wenshuo Liang, Anurag Kumar Singh, Sikai Yao, Jingchao Zhang, Renata Wentzcovitch, Jiequn Han, Jie Liu, Weile Jia, Darrin M. York, Weinan E, Roberto Car, Linfeng Zhang, Han Wang. \"DeePMD-kit v2: A software package for deep potential models.\" J. Chem. Phys. 
159 (2023): 054801.\n  [![doi:10.1063\u002F5.0155600](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.1063%2F5.0155600-blue)](https:\u002F\u002Fdoi.org\u002F10.1063\u002F5.0155600)\n  [![引用次数](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_be666e101b27.png)](https:\u002F\u002Fbadge.dimensions.ai\u002Fdetails\u002Fdoi\u002F10.1063\u002F5.0155600)\n- Jinzhe Zeng, Duo Zhang, Anyang Peng, Xiangyu Zhang, Sensen He, Yan Wang, Xinzijian Liu, Hangrui Bi, Yifan Li, Chun Cai, Chengqian Zhang, Yiming Du, Jia-Xin Zhu, Pinghui Mo, Zhengtao Huang, Qiyu Zeng, Shaochen Shi, Xuejian Qin, Zhaoxi Yu, Chenxing Luo, Ye Ding, Yun-Pei Liu, Ruosong Shi, Zhenyu Wang, Sigbjørn Løland Bore, Junhan Chang, Zhe Deng, Zhaohan Ding, Siyuan Han, Wanrun Jiang, Guolin Ke, Zhaoqing Liu, Denghui Lu, Koki Muraoka, Hananeh Oliaei, Anurag Kumar Singh, Haohui Que, Weihong Xu, Zhangmancang Xu, Yong-Bin Zhuang, Jiayu Dai, Timothy J. Giese, Weile Jia, Ben Xu, Darrin M. York, Linfeng Zhang, Han Wang. \"DeePMD-kit v3: A Multiple-Backend Framework for Machine Learning Potentials.\" J. Chem. Theory Comput. 
21 (2025): 4375-4385.\n  [![doi:10.1021\u002Facs.jctc.5c00340](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.1021%2Facs.jctc.5c00340-blue)](https:\u002F\u002Fdoi.org\u002F10.1021\u002Facs.jctc.5c00340)\n  [![引用次数](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_readme_1b03acfd8cd0.png)](https:\u002F\u002Fbadge.dimensions.ai\u002Fdetails\u002Fdoi\u002F10.1021\u002Facs.jctc.5c00340)\n\n此外，请参考[CITATIONS.bib](CITATIONS.bib)文件，以引用您所使用的方法。\n\n### 主要版本亮点\n\n#### 初版\n\nDeep Potential 的目标是利用深度学习技术，构建一个通用、精确、计算高效且可扩展的原子间势能模型。其核心在于通过为每个原子分配局部参考系和局部环境，来尊重势能模型的广延性和对称不变性。每个环境包含有限数量的原子，其局部坐标以保持对称的方式排列。这些局部坐标随后通过一个子网络转换为所谓的“原子能量”。将所有原子能量相加即可得到系统的总势能。\n\n最初的概念验证发表在 [Deep Potential][1] 论文中，该论文采用了一种仅使用势能进行神经网络训练的方法。然而，对于典型的从头算分子动力学（AIMD）数据集而言，这种方法不足以重现轨迹。Deep Potential Molecular Dynamics（[DeePMD][2]）模型克服了这一局限性。此外，得益于引入灵活的损失函数族，DeePMD 的学习过程相比 Deep Potential 方法有了显著提升。以此构建的神经网络势能能够准确地再现经典和量子（路径积分）AIMD 轨迹，适用于扩展体系和有限体系，且计算成本随系统规模线性增长，始终比同等 AIMD 模拟低几个数量级。\n\n尽管效率极高，但原始的 Deep Potential 模型在满足势能模型的广延性和对称不变性的同时，也引入了模型中的不连续性。这在正则系综采样中对轨迹的影响可以忽略，但在计算动力学和力学性质时可能不够充分。基于这些考虑，我们开发了 Deep Potential-Smooth Edition（[DeepPot-SE][3]）模型，用平滑且自适应的嵌入网络替代了非平滑的局部框架。DeepPot-SE 在模拟物理、化学、生物学和材料科学等领域中感兴趣的多种体系方面表现出色。\n\n除了构建势能模型外，DeePMD-kit 还可用于构建粗粒化模型。在这些模型中，我们需要参数化的量是粗粒化粒子的自由能或粗粒化势能。更多细节请参阅 [DeePCG 论文][4]。\n\n#### v1\n\n- 代码重构，使其高度模块化。\n- 支持 GPU 计算描述符。\n\n#### v2\n\n- 模型压缩。使模型推理效率提升 4 至 15 倍。\n- 新型描述符。包括 `se_e2_r`、`se_e3` 和 `se_atten`（DPA-1）。\n- 描述符混合。通过拼接多个描述符构建混合描述符。\n- 原子类型嵌入。启用原子类型嵌入以降低训练复杂度并提升性能。\n- 电偶极矩（向量）和极化率（矩阵）的训练与推理。\n- 训练集和验证集的分离。\n- 针对 GPU 的优化训练，包括 CUDA 和 ROCm。\n- 非冯·诺依曼架构。\n- C API，用于与第三方软件包对接。\n\n有关 v2.2.3 之前所有功能的详细信息，请参阅我们的 [v2 论文](https:\u002F\u002Fdoi.org\u002F10.1063\u002F5.0155600)。\n\n#### v3\n\n- 支持多种后端。新增 PyTorch 和 JAX 后端。\n- DPA2 和 DPA3 模型。\n- 外部模型插件机制。\n\n有关 v3.0 之前所有功能的详细信息，请参阅我们的 [v3 论文](https:\u002F\u002Fdoi.org\u002F10.1021\u002Facs.jctc.5c00340)。\n\n## 安装与使用 DeePMD-kit\n\n只需 1 秒钟复制粘贴并运行即可。\n\n```sh\ncurl -fsSL 
https:\u002F\u002Fdp1s.deepmodeling.com | bash\n```\n\n有关详细信息及替代安装方法，请阅读 [在线文档](https:\u002F\u002Fdeepmd.readthedocs.io\u002F)。\n\n接下来，我们将简要介绍 DeePMD-kit 的使用方法。您可以从第一步开始：\n\n```sh\ndp\n```\n\n## 代码结构\n\n代码组织如下：\n\n- `examples`: 示例。\n- `deepmd`: DeePMD-kit Python 模块。\n- `source\u002Flib`: 核心库源代码。\n- `source\u002Fop`: 算子（OP）实现。\n- `source\u002Fapi_cc`: DeePMD-kit C++ API 源代码。\n- `source\u002Fapi_c`: C API 源代码。\n- `source\u002Fnodejs`: Node.js API 源代码。\n- `source\u002Fipi`: i-PI 客户端源代码。\n- `source\u002Flmp`: LAMMPS 模块源代码。\n\n# 贡献\n\n请参阅 [DeePMD-kit 贡献指南](CONTRIBUTING.md)，成为我们的贡献者吧！ 🤓\n\n[1]: https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.01478\n[2]: https:\u002F\u002Fjournals.aps.org\u002Fprl\u002Fabstract\u002F10.1103\u002FPhysRevLett.120.143001\n[3]: https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09003\n[4]: https:\u002F\u002Faip.scitation.org\u002Fdoi\u002Ffull\u002F10.1063\u002F1.5027645","# DeePMD-kit 快速上手指南\n\nDeePMD-kit 是一款基于深度学习的分子势能模型构建与分子动力学（MD）模拟软件包。它旨在解决分子模拟中精度与效率的矛盾，支持从有限分子到扩展体系（如金属、半导体、绝缘体等）的多种应用场景。\n\n## 1. 环境准备\n\n在开始之前，请确保您的系统满足以下基本要求：\n\n*   **操作系统**：推荐 Linux (Ubuntu\u002FCentOS) 或 macOS。Windows 用户建议使用 WSL2 或 Docker。\n*   **硬件要求**：\n    *   **CPU**：支持 AVX2 指令集的现代处理器。\n    *   **GPU**（可选但推荐）：NVIDIA GPU (需安装 CUDA) 或 AMD GPU (需安装 ROCm)，用于加速训练和推理。\n*   **前置依赖**：\n    *   Python 3.8+\n    *   C++ 编译器 (gcc\u002Fg++)\n    *   CMake 3.16+\n    *   MPI (用于并行计算，如 OpenMPI 或 MPICH)\n    *   深度学习框架后端：DeePMD-kit v3 支持 **PyTorch**, **JAX**, **TensorFlow**, 和 **PaddlePaddle**。您至少需要安装其中一种（官方示例通常默认使用 PyTorch 或 TensorFlow）。\n\n> **提示**：如果您希望避免复杂的环境配置，可以直接使用官方提供的 Docker 镜像，或采用下文推荐的“一键安装”脚本，该脚本会自动处理大部分依赖。\n\n## 2. 
安装步骤\n\nDeePMD-kit 提供了多种安装方式，推荐使用官方提供的一键安装脚本，它会自动检测环境并选择合适的安装方式（Conda\u002FPip\u002FDocker）。\n\n### 方法一：一键安装脚本（推荐）\n\n这是最快捷的方式，适合大多数用户。在终端执行以下命令：\n\n```sh\ncurl -fsSL https:\u002F\u002Fdp1s.deepmodeling.com | bash\n```\n\n该脚本会引导您完成安装过程。安装完成后，请根据提示重启终端或激活相应的 Conda 环境。\n\n### 方法二：使用 Conda 安装\n\n如果您已经配置好 Conda 环境，可以通过 conda-forge 直接安装（以 PyTorch 后端为例）：\n\n```sh\nconda install -c conda-forge deepmd-kit-pytorch\n```\n\n如需其他后端（如 tensorflow, jax），请将 `deepmd-kit-pytorch` 替换为对应的包名。\n\n### 方法三：使用 Pip 安装\n\n```sh\npip install deepmd-kit\n```\n\n*注意：Pip 安装可能需要您预先手动安装好对应的深度学习框架（如 `torch` 或 `tensorflow`）以及 MPI 库。*\n\n### 验证安装\n\n安装完成后，运行以下命令检查版本信息，确认安装成功：\n\n```sh\ndp --version\n```\n\n## 3. 基本使用\n\nDeePMD-kit 的核心工作流通常包含三个步骤：**数据预处理** -> **模型训练** -> **模型推理\u002F压缩**。所有操作均通过 `dp` 命令行工具完成。\n\n### 第一步：查看帮助与示例\n\n输入 `dp` 查看可用命令列表：\n\n```sh\ndp\n```\n\n您可以访问官方仓库的 `examples` 目录获取完整的测试数据集和输入文件模板。\n\n### 第二步：准备输入文件\n\n您需要准备一个 JSON 格式的输入文件（例如 `input.json`），定义网络结构、训练参数和数据路径。以下是一个极简的训练配置示例（`train` 部分）：\n\n```json\n{\n  \"model\": {\n    \"type_map\": [\"O\", \"H\"],\n    \"descriptor\": {\n      \"type\": \"se_e2_a\",\n      \"sel\": [46, 92],\n      \"rcut_smth\": 0.50,\n      \"rcut\": 6.00,\n      \"neuron\": [25, 50, 100],\n      \"resnet_dt\": false,\n      \"axis_neuron\": 16,\n      \"seed\": 1\n    },\n    \"fitting_net\": {\n      \"neuron\": [240, 240, 240],\n      \"resnet_dt\": true,\n      \"seed\": 1\n    }\n  },\n  \"learning_rate\": {\n    \"type\": \"exp\",\n    \"start_lr\": 0.001,\n    \"stop_lr\": 3.51e-7,\n    \"decay_steps\": 5000\n  },\n  \"loss\": {\n    \"start_pref_e\": 0.02,\n    \"limit_pref_e\": 1,\n    \"start_pref_f\": 1000,\n    \"limit_pref_f\": 1,\n    \"start_pref_v\": 0,\n    \"limit_pref_v\": 0\n  },\n  \"training\": {\n    \"training_data\": {\n      \"systems\": [\".\u002Fdata\u002Ftrain\"],\n      \"batch_size\": \"auto\",\n      \"seed\": 1\n    },\n    \"validation_data\": {\n      \"systems\": [\".\u002Fdata\u002Fvalid\"],\n      \"batch_size\": 
\"auto\",\n      \"seed\": 1\n    },\n    \"numb_steps\": 100000,\n    \"seed\": 1,\n    \"disp_file\": \"lcurve.out\",\n    \"disp_freq\": 1000,\n    \"save_freq\": 10000\n  }\n}\n```\n\n### 第三步：训练模型\n\n使用准备好的数据和配置文件启动训练：\n\n```sh\ndp train input.json\n```\n\n训练过程中，程序会输出损失函数变化曲线（默认保存到 `lcurve.out`），并定期保存模型检查点（`.pb` 文件）。\n\n### 第四步：模型压缩（可选但推荐）\n\n为了提高推理速度（特别是在进行大规模分子动力学模拟时），建议对训练好的模型进行压缩：\n\n```sh\ndp compress -i frozen_model.pb -o compressed_model.pb\n```\n\n压缩后的模型推理速度通常可提升 4-15 倍，且精度损失极小。\n\n### 第五步：进行分子动力学模拟\n\nDeePMD-kit 本身不直接运行 MD，而是作为插件接口连接到经典的 MD 软件（如 LAMMPS, GROMACS, CP2K 等）。以 **LAMMPS** 为例，需要在 LAMMPS 输入脚本中加载深势模型：\n\n```lammps\npair_style deepmd compressed_model.pb\npair_coeff * *\n```\n\n随后即可像使用普通力场一样运行 `run` 命令进行模拟。\n\n---\n*更多高级功能（如多后端切换、插件开发、粗粒化模型等）请参考 [官方文档](https:\u002F\u002Fdeepmd.readthedocs.io\u002F)。*","某材料实验室团队正致力于研发新型固态电池电解质，需要模拟锂离子在复杂晶格结构中的长时程扩散行为以评估离子电导率。\n\n### 没有 deepmd-kit 时\n- **精度与效率难以兼得**：使用传统经验势函数（Force Field）计算速度虽快，但无法准确描述复杂的化学键断裂与生成；若采用第一性原理（DFT）计算，精度虽高但算力消耗巨大，仅能模拟皮秒级时长，无法捕捉离子扩散全过程。\n- **模型构建门槛极高**：研究人员需手动编写复杂的势函数代码或依赖昂贵的商业软件接口，难以灵活调整模型架构以适应特定的金属 - 非金属混合体系。\n- **大规模并行困难**：在尝试扩大模拟体系至数千原子时，传统量子力学方法因计算复杂度呈立方级增长而彻底失效，导致无法复现真实的宏观材料环境。\n\n### 使用 deepmd-kit 后\n- **突破“精度 - 效率”瓶颈**：利用深度势能（Deep Potential）模型，团队在保持接近 DFT 计算精度的同时，将分子动力学模拟速度提升了数个数量级，成功完成了纳秒级的长时程轨迹追踪。\n- **自动化建模与多后端支持**：借助其对接 TensorFlow 和 PyTorch 等主流框架的能力，研究人员通过少量配置即可自动训练出适配锂硫体系的高保真势函数，大幅降低了算法开发难度。\n- **高效扩展至工业级规模**：依托 deepmd-kit 对 MPI 和 GPU 的原生支持，团队轻松将模拟体系扩展至上万个原子，在超算集群上实现了高效并行，精准揭示了锂离子在晶界处的迁移机制。\n\ndeepmd-kit 通过深度学习重构了原子间相互作用势，让科研人员得以在量子级精度下探索以往无法触及的宏观时空尺度。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepmodeling_deepmd-kit_c5dbc33c.png","deepmodeling","DeepModeling","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fdeepmodeling_f8c42637.jpg","Define the future of scientific computing 
together",null,"https:\u002F\u002Fdeepmodeling.com","https:\u002F\u002Fgithub.com\u002Fdeepmodeling",[80,84,88,92,96,100,104,108,110],{"name":81,"color":82,"percentage":83},"Python","#3572A5",71.4,{"name":85,"color":86,"percentage":87},"C++","#f34b7d",26,{"name":89,"color":90,"percentage":91},"Cuda","#3A4E3A",1.1,{"name":93,"color":94,"percentage":95},"C","#555555",0.7,{"name":97,"color":98,"percentage":99},"CMake","#DA3434",0.6,{"name":101,"color":102,"percentage":103},"Shell","#89e051",0.2,{"name":105,"color":106,"percentage":107},"JavaScript","#f1e05a",0,{"name":109,"color":76,"percentage":107},"SWIG",{"name":111,"color":112,"percentage":107},"Dockerfile","#384d54",1903,607,"2026-04-09T03:19:01","LGPL-3.0","Linux, macOS","非必需但强烈推荐用于高性能计算。支持 NVIDIA GPU (CUDA) 和 AMD GPU (ROCm)。具体型号和显存大小取决于模型规模和系统原子数，未明确指定最低要求。","未说明（取决于模拟系统的原子数量和模型复杂度）",{"notes":121,"python":122,"dependencies":123},"该工具支持多种深度学习后端（TensorFlow, PyTorch, JAX, Paddle），安装时需选择对应后端版本。支持 MPI 并行计算。可通过 conda、pip 或 Docker 安装。若需与经典分子动力学软件联用，需额外编译安装 LAMMPS、GROMACS 等接口模块。v3 版本引入了多后端框架和插件机制。","3.8+",[124,125,126,127,128,129,130,131],"TensorFlow","PyTorch","JAX","PaddlePaddle","LAMMPS (可选接口)","i-PI (可选接口)","NumPy","SciPy",[13,14,15,16,35,52,133,134],"其他","音频",[136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155],"deep-learning","molecular-dynamics","deepmd","lammps","potential-energy","python","tensorflow","cpp","cuda","rocm","ipi","ase","computational-chemistry","materials-science","c","nodejs","pytorch","jax","paddle","machine-learning-potential","2026-03-27T02:49:30.150509","2026-04-09T20:51:39.080622",[],[160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250,255],{"id":161,"version":162,"summary_zh":163,"released_at":164},171805,"v3.1.3","## 亮点\n\n本次发布聚焦两大主题：更便捷地获取预训练模型，以及 PyTorch 路线图的下一阶段。DeePMD-kit 现在可以直接下载内置的预训练模型，并且在同一版本系列中，还基于该机制引入了新的预训练模型 `DPA3-Omol-Large`。与此同时，我们已开始构建一个基于 Array API、`torch.export` 和 `torch.compile` 的实验性可导出 PyTorch 后端，这部分工作部分源于 `torch.jit` 
的弃用。\n\n除了上述重点内容外，v3.1.3 还通过新增优化器和分布式训练支持扩展了 PyTorch 的训练能力，提升了诊断和训练安全性，增加了电荷-自旋及自旋-维里相关功能，并持续加强项目的文档、CI、打包以及后端一致性。\n\n只需三步即可试用 `DPA3-Omol-Large`：\n\n```sh\n# 安装最新版 DeePMD-kit（将在本发布后几天内可用）\ncurl -fsSL https:\u002F\u002Fdp1s.deepmodeling.com | bash\n# 重启终端，并下载预训练模型\ndp pretrained download DPA3-Omol-Large\n# 使用预训练模型评估您的训练\u002F测试数据\ndp test -m ~\u002F.cache\u002Fdeepmd\u002Fpretrained\u002Fmodels\u002FDPA3-Omol-Large.pt -s 您系统的路径\n```\n\n## 破坏性变更\n\n- @njzjz 在 [#5078](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5078) 中移除对 Python 3.9 的支持\n- @njzjz 在 [#5080](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5080) 中停止提供 CUDA 11 预编译轮子包\n- @njzjz 在 [#5122](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5122) 中准备弃用 `devel` 分支\n\n## 新特性\n\n### 预训练模型与模型分发\n\n- @njzjz-bot 在 [#5277](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5277) 中添加内置预训练模型下载器及别名后端\n- @njzjz-bot 在 [#5307](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5307) 中将 `DPA-2.4-7M` 加入预训练模型注册表\n- @njzjz-bot 在 [#5327](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5327) 中添加 `DPA3-Omol-Large`\n\n### 实验性 PyTorch 后端\n\n- @Copilot 在 [#5198](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5198) 中为 Array API 工具添加 PyTorch 支持\n- @wanghan-iapcm 在 [#5194](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5194) 中添加一个新的可导出 PyTorch 后端\n- @wanghan-iapcm 在 [#5204](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5204) 中提供将 `dpmodel` 类转换为 PyTorch 模块的基础设施\n- @wanghan-iapcm 在 [#5208](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5208) 中在实验性 PyTorch 后端中实现 `se_t` 和 `se_t_tebd` 描述符\n- @wanghan-iapcm 在 [#5218](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5218) 中在实验性 PyTorch 后端中添加能量拟合功能\n- @wanghan-iapcm 在 
[#5220](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5220) 中在实验性 PyTorch 后端中添加原子模型\n- 添加完整模式","2026-03-19T09:37:17",{"id":166,"version":167,"summary_zh":168,"released_at":169},171806,"v3.1.2","\u003C!-- 发布说明由 .github\u002Frelease.yml 中的配置在 master 分支上生成 -->\n\n今天是 deepmodeling\u002Fdeepmd-kit 仓库的八周年纪念日！\n\n## 变更内容\n### 新功能\n* feat(pt): 增加 se_e3_tebd 的压缩支持，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4992 中实现\n* feat: 增强 process_systems 功能，使其能够递归搜索系统列表中的所有路径，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5033 中实现\n* feat(pt): 即使 attn_layer != 0，类型嵌入仍然可以被压缩，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5066 中实现\n* feat(pt): 实现 se_atten 的类型嵌入压缩功能，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5057 中实现\n* feat(pt): 实现 se_e3_tebd 的类型嵌入压缩功能，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5059 中实现\n* feat(pt): 在梯度计算中添加对 SiLU 激活函数的支持，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5055 中实现\n\n### Bug修复\n* fix: 将 CMake 最低版本提升至 3.25.2，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5001 中完成\n* fix(cmake): 改进 CUDA C++ 标准，以提高与 gcc-14 的兼容性，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5036 中实现\n* fix: 优化原子类型映射，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5043 中完成\n* fix(finetune): 在微调过程中使用随机拟合时，计算拟合统计信息，由 @Chengqian-Zhang 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4928 中实现\n* fix(stat): 当使用默认 fparam 并进行共享拟合时，正确计算拟合统计信息，由 @Chengqian-Zhang 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5038 中实现\n* fix: 在 pt 环境中将多进程启动方式设置为 'fork'（因为 Python 3.14 默认使用 forkserver），由 @OutisLi 在 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5019 中完成\n* fix(jax): 修复与 flax 0.12 的兼容性问题，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5067 中完成\n* Fix: 统一 model_output_type 的命名，由 @anyangml 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5069 中完成\n* fix(pd): 调整代码以提高硬件兼容性，由 @HydrogenSulfate 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5047 中完成\n\n### 功能增强\n* build: 将 LAMMPS 版本升级至 stable_22Jul2025_update2，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5052 中完成\n* feat: 支持 CUDA 13.0 及以上版本，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5017 中实现\n* perf: 加速训练过程中的数据加载，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5023 中完成\n* fix: 如果不需要，移除 hessian 输出定义，由 @iProzd 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5045 中完成\n* feat: 性能优化：数据加载与统计加速，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F5040 中实现\n* build(deps-dev): 更新 scikit-build-core 的依赖要求，从 !=0.6.0,\u003C0.11,>=0.5 修改为 >=0.5,!=0.6.0,\u003C0.12，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com","2025-12-12T10:39:16",{"id":171,"version":172,"summary_zh":173,"released_at":174},171807,"v3.1.1","\u003C!-- 使用 .github\u002Frelease.yml 中的配置在 master 分支上生成的发布说明 -->\n\n## 变更内容\n### 新功能\n* feat(pt): 为 dp show 添加 `observed-type` 选项，由 @iProzd 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4820 中实现\n* feat(pt): 为属性预测添加平均绝对百分比误差 (MAPE) 损失，由 @SchrodingersCattt 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4854 中实现\n* feat: 添加 eval-desc CLI 命令，用于以 3D 输出格式评估描述符，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4903 中实现\n* feat(tf): 实现 change-bias 命令，由 @Copilot 在 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4927 中实现\n* feat: 为 LAMMPS MD 添加 PyTorch 性能分析器支持，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4969 中实现\n* pd(feat): 支持使用 `DP` 类进行 Python 推理，由 @HydrogenSulfate 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4987 中实现\n* Feat: 支持在 dp 计算器中使用 fparam\u002Faparam，由 @anyangml 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4819 中实现\n* pd: 为 pd 后端支持 dpa3 动态形状，由 @HydrogenSulfate 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4828 中实现\n* feat(pt): 为最后一个拟合层的输出添加钩子，由 @iProzd 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4789 中实现\n* feat(pd): 支持 dpa2\u002Fdpa3 C++ 推理，由 @HydrogenSulfate 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4870 中实现\n* feat(pt): 支持 zbl 微调，由 @iProzd 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4849 中实现\n* feat: 添加 YAML 输入文件支持，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4894 中实现\n* feat(pd): 支持梯度累积，由 @HydrogenSulfate 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4920 中实现\n* feat(pt): 添加模型分支别名，由 @iProzd 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4883 中实现\n* feat: 处理测试中的掩码力，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4893 中实现\n* feat: 支持在 dp 测试中使用 input.json 中的训练\u002F验证数据，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4859 中实现\n* feat(infer): 为 DeepEval 添加 get_model 方法，用于访问后端特定的模型实例，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4931 中实现\n* feat(dp\u002Fpt): 添加 default_fparam，由 @iProzd 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4888 中实现\n* feat(pt): 实现 
DeepTensorPT，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4937 中实现\n\n### 优化改进\n* pd: 添加标志 `CINN_ALLOW_DYNAMIC_SHAPE`，以提升动态形状下的性能，由 @HydrogenSulfate 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4826 中实现\n* refactor(training): 对训练损失进行平均处理，使日志记录更加平滑且更具代表性，由 @OutisLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4850 中实现\n* chore: 将 LAMMPS 升级至 stable_22Jul2025 版本，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4861 中实现\n* style: 为除后端和测试之外的核心模块添加全面的类型提示，由 @Copilot 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4937 中实现\n* ……（省略部分）","2025-09-30T18:02:36",{"id":176,"version":177,"summary_zh":178,"released_at":179},171808,"v3.1.0","\u003C!-- 发行说明由 .github\u002Frelease.yml 中的配置在 devel 分支上生成 -->\n\n## 变更内容\n\n## 亮点\n\n### DPA3\n\nDPA3 是一种基于消息传递架构的先进原子间势函数。作为大型原子模型（LAM），DPA3 专为整合并同时训练来自不同学科的数据集而设计，能够涵盖不同研究领域中的多样化化学和材料体系。其模型设计确保了卓越的拟合精度以及在训练域内和跨域范围内的强大泛化能力。此外，DPA3 还保持了能量守恒，并尊重势能面的物理对称性，使其成为广泛科学应用中可靠的工具。\n\n训练脚本请参考 `examples\u002Fwater\u002Fdpa3\u002Finput_torch.json`。训练完成后，PyTorch 模型可以转换为 JAX 模型。\n\n### PaddlePaddle 后端\n\nPaddlePaddle 后端具有与 PyTorch 后端相似的 Python 接口，从而确保了模型开发的兼容性和灵活性。PaddlePaddle 在 DeePMD-kit 中引入了动态转静态功能以及 PaddlePaddle JIT 编译器（CINN），支持动态形状和高阶微分。动态转静态功能会自动捕获用户的动态图代码并将其转换为静态图。转换完成后，CINN 编译器会对计算图进行优化，从而提升模型训练和推理的效率。在 DPA-2 模型的实验中，我们发现与动态图相比，训练时间减少了约 40%，有效提高了模型训练效率。\n\n### 破坏性变更\n* 破坏性：通过 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4728 中为 PyPI 版 LAMMPS 启用 PyTorch 后端\n\n### 其他新特性\n* 功能（pt\u002Fdp）：支持案例嵌入和可共享拟合，由 @iProzd 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4417 中实现\n* 功能（pt）：支持使用能量 Hessian 进行训练，由 @1azyking 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4169 中实现\n* 功能：为大系统新增批量大小规则，由 @caic99 在 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4659 中实现\n* 功能：在 pppm\u002Fdplr 中添加访问 fele 的方法，由 @HanswithCMY 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4452 中实现\n* 功能（tf\u002Fpt）：将原子权重加入张量损失，由 @ChiahsinChu 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4466 中实现\n* 功能（pt）：在属性拟合中添加 `trainable` 参数，由 @ChiahsinChu 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4599 中实现\n* 功能（pt）：支持 fitting_net 输入统计信息，由 @Chengqian-Zhang 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4504 中实现\n* 功能（jax）：支持 Hessian 计算，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4649 中实现\n* 功能：为数据预处理插件模式添加支持，由 @ChiahsinChu 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4621 中实现\n* 功能（pt）：为 PyTorch 后端添加 eta 消息，由 @HydrogenSulfate 在 https:\u002F\u002Fg","2025-06-11T06:01:47",{"id":181,"version":182,"summary_zh":183,"released_at":184},171809,"v3.1.0rc0","\u003C!-- 使用 .github\u002Frelease.yml 中的配置在 devel 分支生成的发布说明 -->\n\n## 变更内容\n\n## 亮点\n\n### DPA-3\n\nDPA-3 是一种基于消息传递架构的先进原子间势函数。作为大型原子模型（LAM），DPA-3 专为整合并同时训练来自不同学科的数据集而设计，涵盖多个研究领域中的多样化化学和材料体系。其模型设计确保了卓越的拟合精度以及在训练域内和跨域范围内的强大泛化能力。此外，DPA-3 还保持了能量守恒，并尊重势能面的物理对称性，使其成为广泛科学应用中可靠的工具。\n\n训练脚本请参考 `examples\u002Fwater\u002Fdpa3\u002Finput_torch.json`。训练完成后，PyTorch 模型可以转换为 JAX 模型。\n\n### PaddlePaddle 后端\n\nPaddlePaddle 后端具有与 PyTorch 后端相似的 Python 接口，从而保证了模型开发的兼容性和灵活性。PaddlePaddle 在 DeePMD-kit 中引入了动态图转静态图功能以及 PaddlePaddle JIT 编译器（CINN），支持动态形状和高阶微分。动态图转静态图功能会自动捕获用户的动态图代码并将其转换为静态图。转换完成后，CINN 编译器会对计算图进行优化，从而提升模型训练和推理的效率。在使用 DPA-2 模型的实验中，我们发现与动态图相比，训练时间减少了约 40%，有效提高了模型训练效率。\n\n### 破坏性变更\n* 破坏性：通过 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4728 中为 PyPI 版 LAMMPS 启用 PyTorch 后端\n\n### 其他新特性\n* 功能（pt\u002Fdp）：支持案例嵌入和可共享的拟合，由 @iProzd 在 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4417 中实现\n* 功能（pt）：支持使用能量 Hessian 进行训练，由 @1azyking 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4169 中实现\n* 功能：为大型系统新增批量大小规则，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4659 中实现\n* 功能：为 pppm\u002Fdplr 添加访问 fele 的方法，由 @HanswithCMY 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4452 中实现\n* 功能（tf\u002Fpt）：将原子权重添加到张量损失中，由 @ChiahsinChu 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4466 中实现\n* 功能（pt）：为属性拟合添加 `trainable` 参数，由 @ChiahsinChu 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4599 中实现\n* 功能（pt）：支持 fitting_net 输入统计信息，由 @Chengqian-Zhang 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4504 中实现\n* 功能（jax）：支持 Hessian 计算，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4649 中实现\n* 功能：为数据预处理插件模式添加支持，由 @ChiahsinChu 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4621 中实现\n* 功能（pt）：为 pt 后端添加 eta 消息，由 @HydrogenSulfate 实现","2025-05-31T18:31:28",{"id":186,"version":187,"summary_zh":188,"released_at":189},171810,"v3.0.3","\u003C!-- 使用 .github\u002Frelease.yml 中的配置生成的发布说明，版本为 r3.0 -->\n\n## 变更内容\n### 重大变更\n- breaking(wheel): 将最低 macOS 版本提升至 11.0 (#4704)\n\n### 修复\n- fix(tf): 修复 dplr 的 Python 推理问题 (#4753)\n- fix: 修正 border_op 输入中 nloc 和 nall-nloc 的数据类型 (#4653)\n- fix(data): 当 data 的元素不在 `input.json\u002Ftype_map` 中时抛出错误 (#4639)\n- fix(ase): 避免 ase 计算器重复计算应力 (#4633)\n- fix(pt): 改进 OOM 检测 (#4638)\n- fix(tf): 始终使用 float64 类型处理全局张量 (#4735)\n- fix(jax): 将 `default_matmul_precision` 设置为 `tensorfloat32` (#4726)\n- fix(jax): 修复 sigmoid 梯度中的 NaN 问题 (#4724)\n- fix: 修复与 CMake 4.0 的兼容性问题 (#4680)\n\n## CI\u002FCD\n\n- fix(CI): 设置 CMAKE_POLICY_VERSION_MINIMUM 环境变量 (#4692)\n- CI: 将 PyTorch 升级至 2.7 (#4717)\n- fix(tests): 修复 tearDownClass 
方法并释放 GPU 内存 (#4702)\n- fix(CI): 升级 setuptools 以修复其与 wheel 的兼容性问题 (#4700)\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv3.0.2...v3.0.3","2025-05-23T17:27:37",{"id":191,"version":192,"summary_zh":193,"released_at":194},171811,"v3.1.0a0","\u003C!-- 发行说明由 .github\u002Frelease.yml 中的配置在 devel 分支上生成 -->\n\n## 变更内容\n\n## 亮点\n\n### DPA-3\n\nDPA-3 是一种基于消息传递架构的先进原子间势函数。作为大型原子模型（LAM），DPA-3 专为整合并同时训练来自不同学科的数据集而设计，涵盖多个研究领域中的多样化化学和材料体系。其模型设计确保了卓越的拟合精度以及在训练域内和跨域的稳健泛化能力。此外，DPA-3 还保持能量守恒，并尊重势能面的物理对称性，使其成为广泛科学应用中可靠的工具。\n\n训练脚本请参考 `examples\u002Fwater\u002Fdpa3\u002Finput_torch.json`。训练完成后，PyTorch 模型可以转换为 JAX 模型。\n\n### PaddlePaddle 后端\n\nPaddlePaddle 后端具有与 PyTorch 后端相似的 Python 接口，从而确保了模型开发的兼容性和灵活性。PaddlePaddle 在 DeePMD-kit 中引入了动态转静态功能以及 PaddlePaddle JIT 编译器（CINN），支持动态形状和高阶微分。动态转静态功能会自动捕获用户的动态图代码并将其转换为静态图。转换后，CINN 编译器会对计算图进行优化，从而提升模型训练和推理的效率。在使用 DPA-2 模型的实验中，我们发现与动态图相比，训练时间减少了约 40%，有效提高了模型训练效率。\n\n### 其他新特性\n* feat(pt\u002Fdp): 支持案例嵌入和可共享拟合，作者：@iProzd，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4417\n* feat(pt): 支持使用能量 Hessian 进行训练，作者：@1azyking，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4169\n* feat: 为大型系统新增批大小规则，作者：@caic99，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4659\n* feat: 在 pppm\u002Fdplr 中添加访问 fele 的方法，作者：@HanswithCMY，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4452\n* feat (tf\u002Fpt): 将原子权重加入张量损失，作者：@ChiahsinChu，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4466\n* feat(pt): 在属性拟合中增加 `trainable` 参数，作者：@ChiahsinChu，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4599\n* feat(pt): 支持 fitting_net 输入统计信息，作者：@Chengqian-Zhang，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4504\n* feat(jax): 添加 Hessian 计算功能，作者：@njzjz，详见 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4649\n* feat: 为数据修改器添加插件模式，作者：@ChiahsinChu，详见 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4621\n\nv3.0.1 和 v3.0.2 中的所有变更均已包含在内。\n\n## 贡献者\n\n* @iProzd #4417 #4655 #4419 #4609 #4633 #4647 #4675\n* @pre-commit-ci #4420 #4449 #4464 #4473 #4497 #4521 #4539 #4552 #4566 #4574 #4579 #4596","2025-03-30T01:47:55",{"id":196,"version":197,"summary_zh":198,"released_at":199},171812,"v3.0.2","## 变更内容\n\n此补丁版本仅包含少量新功能、错误修复、性能优化以及文档改进。\n\n### 新功能\n* 功能（tf）：支持使用混合描述符进行张量拟合，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4542 中实现。\n\n### 性能优化\n* 性能：用索引操作替换不必要的 `torch.split`，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4505 中实现。\n* 性能：在 MLP 中使用 F.linear，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4513 中实现。\n* 维护：改进邻居统计日志，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4561 中实现。\n* 维护：将 PyTorch 版本升级至 2.6.0，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4575 中实现。\n\n### 错误修复\n* 修复：修改 DPA 模型的文档，由 @QuantumMisaka 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4510 中实现。\n* 修复（pt）：修复 set_eval_descriptor_hook 中清空列表的问题，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4534 中实现。\n* [修复 bug] 为 tf 张量模型加载 atomic_*.npy 文件，由 @ChiahsinChu 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4538 中实现。\n* 修复：将 `num_workers` 降低至 4，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4535 中实现。\n* 修复：修复 YAML 转换问题，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4565 中实现。\n* 修复（cc）：移除 C++17 的使用，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4570 中实现。\n* 修复 
DeePMDConfigVersion.cmake 中的版本号，由 @RMeli 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4577 中实现。\n* 修复（pt）：分离计算出的描述符张量以防止 OOM，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4547 中实现。\n* 修复（pt）：对 GPU 张量和 CPU OP 库抛出错误，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4582 中实现。\n* 使用变量存储原子极化率的偏置项，由 @Yi-FanLi 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4581 中实现。\n* 修复：修正 pt 张量损失标签名称，由 @anyangml 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4587 中实现。\n* CI：将 jax 固定为 0.5.0 版本，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4613 中实现。\n* 修复（array-api）：修复 xp.where 的错误，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4624 中实现。\n\n### 文档更新\n* 文档：修复缩放测试表格的表头，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4507 中实现。\n* 文档：在 .readthedocs.yml 中添加 `sphinx.configuration`，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4553 中实现。\n* 文档：添加 v3 论文引用，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4619 中实现。\n* 文档：在 TensorBoard 文档中添加 PyTorch Profiler 的支持详情，由 @caic99 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4615 中实现。\n\n### CI\u002FCD\n* CI：将 linux_aarch64 切换到 GitHub 托管的运行器，由 @njzjz 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4557 中实现。\n\n## 新贡献者\n* @QuantumMisaka 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4510 中完成了首次贡献。\n* @RMeli 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeli","2025-03-02T03:32:12",{"id":201,"version":202,"summary_zh":203,"released_at":204},171813,"v3.0.1","\u003C!-- 发布说明由 .github\u002Frelease.yml 中的配置在 r3.0 版本时生成 -->\n\n此补丁版本仅包含错误修复、功能增强和文档改进。\n\n## 变更内容\n### 功能增强\n* 性能：在 rank 0 
上打印摘要（deepmodeling#4434）\n* 性能：优化训练循环（deepmodeling#4426）\n* 构建维护：重构训练循环（deepmodeling#4435）\n* 性能：移除数据完整性上的冗余检查（deepmodeling#4433）\n* 性能：使用融合的 Adam 优化器（deepmodeling#4463）\n\n### 错误修复\n* 修复：向 ZBL 添加 model_def_script（deepmodeling#4423）\n* 修复：添加 pairtab 压缩功能（deepmodeling#4432）\n* 修复（TensorFlow）：在 `se_r` 中将 type_one_side 和 exclude_types 传递给 DPTabulate（https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4446）\n* 修复：如果 dlopen 失败，则打印 dlerror（#4485）\n\n### 文档\n* 构建维护（Python）：更新多任务示例（#4419）\n* 文档：更新 DPA-2 引用（deepmodeling#4483）\n* 文档：更新 deepmd-gnn 的 URL（deepmodeling#4482）\n* 文档：修复 install-from-c-library.md 标题中的一个小拼写错误（#4484）\n\n### 其他变更\n* 构建依赖：将 pypa\u002Fcibuildwheel 从 2.21 升级到 2.22，由 @dependabot 在 https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4408 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv3.0.0...v3.0.1","2024-12-23T20:14:18",{"id":206,"version":207,"summary_zh":208,"released_at":209},171814,"v3.0.0","\u003C!-- 发行说明由 .github\u002Frelease.yml 中的配置在 devel 分支上生成 -->\n\n# DeePMD-kit v3：多后端框架、DPA-2 大型原子模型及插件机制\n\n经过八个月的公开测试，我们很高兴地推出 DeePMD-kit v3 的首个稳定版本。这是一个功能强大的新版本，支持使用 TensorFlow、PyTorch 或 JAX 作为后端的深度势能模型。此外，DeePMD-kit v3 还新增了对 [DPA-2 模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15492) 的支持，这是一种专为大型原子体系优化的全新架构。本次发布还增强了插件机制，使新模型的集成与开发变得更加便捷。\n\n## 亮点\n\n### 多后端框架：支持 TensorFlow、PyTorch 和 JAX\n\n![image](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F7f7a585c-698e-4bd3-8342-16fc74229f5f)\n\nDeePMD-kit v3 引入了一个灵活且可插拔的框架，能够在多个后端之间提供一致的训练与推理体验。3.0.0 版本包含以下内容：\n\n- **TensorFlow 后端**：以其静态图设计和高效的计算性能著称。\n- **PyTorch 后端**：基于动态图，简化了模型的扩展与开发。\n- **DP 后端**：基于 NumPy 和 [Array API](https:\u002F\u002Fdata-apis.org\u002Farray-api\u002F) 构建，是无需依赖重型深度学习框架即可进行开发的参考实现。\n- **JAX 后端**：基于 DP 后端并通过 Array API 实现，属于静态图后端。\n\n| 功能                 | TensorFlow | PyTorch | JAX | DP | \n| -------------------------  |----------------- |------------ 
|------|------|\n| 描述符局部坐标系 | ✅ | | | |\n| 描述符 se_e2_a | ✅ | ✅ | ✅ | ✅ |\n| 描述符 se_e2_r | ✅ | ✅ | ✅ | ✅ |\n| 描述符 se_e3    | ✅ | ✅ | ✅ | ✅ |\n| 描述符 se_e3_tebd |   | ✅ |  ✅ | ✅ |\n| 描述符 DPA1    | ✅ | ✅ | ✅ | ✅ |\n| 描述符 DPA2    |    | ✅ | ✅ | ✅ |\n| 描述符 Hybrid | ✅ | ✅ | ✅ | ✅ |\n| 能量拟合         | ✅ | ✅ | ✅ | ✅ |\n| 偶极矩拟合           | ✅ | ✅ | ✅ | ✅ |\n| 极化率拟合          | ✅ | ✅ | ✅ | ✅ |\n| DOS 拟合             | ✅ | ✅ | ✅ | ✅ |\n| 其他性质拟合     |   | ✅ | ✅ | ✅ |\n| ZBL                          | ✅ | ✅ | ✅ | ✅ |\n| DPLR                       | ✅ |   |   |   |\n| DPRc                       | ✅ | ✅ | ✅ | ✅ |\n| 自旋                       | ✅ | ✅ |   | ✅ |\n| 梯度计算 | ✅ | ✅ | ✅ |  |\n| 模型训练      | ✅ | ✅ |  |  |\n| 模型压缩 | ✅ | ✅ |  |  |\n| Python 推理 | ✅ | ✅ | ✅ | ✅ |\n| C++ 推理       | ✅ | ✅ | ✅ |   |\n\n多后端框架的关键特性包括：\n\n- 可以使用相同的训练数据和输入脚本，在不同后端上训练模型，从而根据效率或便利性需求灵活切换后端。\n```sh\n# 使用 TensorFlow 后端训练模型\ndp --tf train input.json\ndp --tf freeze\ndp --tf compress\n\n# 使用 PyTorch 后端训练模型\ndp --pt train input.json\ndp --pt freeze\ndp --pt compress\n```\n\n- 在不同后端之间转换模型。","2024-11-23T08:10:22",{"id":211,"version":212,"summary_zh":213,"released_at":214},171815,"v3.0.0rc0","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at devel -->\r\n\r\n# DeePMD-kit v3: Multiple-backend Framework, DPA-2 Large Atomic Model, and Plugin Mechanisms\r\n\r\nWe are excited to present the first release candidate of DeePMD-kit v3, an advanced version that enables deep potential models with TensorFlow, PyTorch, or JAX backends. Additionally, DeePMD-kit v3 introduces support for the [DPA-2 model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15492), a novel architecture optimized for large atomic models. 
This release enhances plugin mechanisms, making integrating and developing new models easier.\r\n\r\n## Highlights\r\n\r\n### Multiple-backend framework: TensorFlow, PyTorch, and JAX support\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F7f7a585c-698e-4bd3-8342-16fc74229f5f)\r\n\r\nDeePMD-kit v3 adds a versatile, pluggable framework providing consistent training and inference experience across multiple backends. Version 3.0.0 includes:\r\n\r\n- **TensorFlow backend**: Known for its computational efficiency with a static graph design.\r\n- **PyTorch backend**: A dynamic graph backend that simplifies model extension and development.\r\n- **DP backend**: Built with NumPy and [Array API](https:\u002F\u002Fdata-apis.org\u002Farray-api\u002F), a reference backend for development without heavy deep-learning frameworks.\r\n- **JAX backend**: Based on the DP backend via Array API, a static graph backend.\r\n\r\n| Features                 |TensorFlow | PyTorch | JAX | DP | \r\n| -------------------------  |----------------- |------------ |------|------|\r\n|Descriptor local frame | ✅ | | | |\r\n|Descriptor se_e2_a | ✅ | ✅ | ✅ | ✅ |\r\n|Descriptor se_e2_r | ✅ | ✅ | ✅ | ✅ |\r\n|Descriptor se_e3    | ✅ | ✅ | ✅ | ✅ |\r\n|Descriptor se_e3_tebd |   | ✅ |  ✅ | ✅ |\r\n|Descriptor DPA1    | ✅ | ✅ | ✅ | ✅ |\r\n|Descriptor DPA2    |    | ✅ | ✅ | ✅ |\r\n|Descriptor Hybrid | ✅ | ✅ | ✅ | ✅ |\r\n|Fitting energy         | ✅ | ✅ | ✅ | ✅ |\r\n|Fitting dipole           | ✅ | ✅ | ✅ | ✅ |\r\n|Fitting polar          | ✅ | ✅ | ✅ | ✅ |\r\n|Fitting DOS             | ✅ | ✅ | ✅ | ✅ |\r\n|Fitting property     |   | ✅ | ✅ | ✅ |\r\n| ZBL                          | ✅ | ✅ | ✅ | ✅ |\r\n| DPLR                       | ✅ |   |   |   |\r\n| DPRc                       | ✅ | ✅ | ✅ | ✅ |\r\n| Spin                       | ✅ | ✅ |   | ✅ |\r\n| Gradient calculation | ✅ | ✅ | ✅ |  |\r\n| Model training      | ✅ | ✅ |  |  |\r\n| Model compression | ✅ | ✅ |  |  |\r\n| Python 
inference | ✅ | ✅ | ✅ | ✅ |\r\n| C++ inference       | ✅ | ✅ | ✅ |   |\r\n\r\nCritical features of the multiple-backend framework include the ability to:\r\n\r\n- Train models using different backends with the same training data and input script, allowing backend switching based on your efficiency or convenience needs.\r\n```sh\r\n# Training a model using the TensorFlow backend\r\ndp --tf train input.json\r\ndp --tf freeze\r\ndp --tf compress\r\n\r\n# Training a model using the PyTorch backend\r\ndp --pt train input.json\r\ndp --pt freeze\r\ndp --pt compress\r\n```\r\n\r\n- Convert models between backends using `dp convert-backend`, with backend-specific file extensions (e.g., `.pb` for TensorFlow and `.pth` for PyTorch).\r\n```sh\r\n# Convert from a TensorFlow model to a PyTorch model\r\ndp convert-backend frozen_model.pb frozen_model.pth\r\n# Convert from a PyTorch model to a TensorFlow model\r\ndp convert-backend frozen_model.pth frozen_model.pb\r\n# Convert from a PyTorch model to a JAX model\r\ndp convert-backend frozen_model.pth frozen_model.savedmodel\r\n# Convert from a PyTorch model to the backend-independent DP format\r\ndp convert-backend frozen_model.pth frozen_model.dp\r\n```\r\n\r\n- Run inference across backends via interfaces like `dp test`, Python\u002FC++\u002FC interfaces, or third-party packages (e.g., dpdata, ASE, LAMMPS, AMBER, Gromacs, i-PI, CP2K, OpenMM, ABACUS, etc.).\r\n\r\n```sh\r\n# In a LAMMPS file:\r\n# run LAMMPS with a TensorFlow backend model\r\npair_style deepmd frozen_model.pb\r\n# run LAMMPS with a PyTorch backend model\r\npair_style deepmd frozen_model.pth\r\n# run LAMMPS with a JAX backend model\r\npair_style deepmd frozen_model.savedmodel\r\n# Calculate model deviation using different models\r\npair_style deepmd frozen_model.pb frozen_model.pth frozen_model.savedmodel out_file md.out out_freq 100\r\n```\r\n\r\n- Add a new backend to DeePMD-kit much more quickly if you want to contribute to DeePMD-kit.\r\n\r\n### DPA-2 model: 
Towards a universal large atomic model for molecular and material simulation\r\n\r\nThe [DPA-2 model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15492) offers a robust architecture for large atomic models (LAM), accurately representing diverse chemical systems for high-quality simulations. In this release, DPA-2 is trainable in the PyTorch backend, with an example configuration available in `examples\u002Fwater\u002Fdpa2`. DPA-2 is available for Python inference in the JAX backend.\r\n\r\nThe DPA-2 descriptor comprises `repinit` and `repformer`, as shown below.\r\n\r\n![DPA-2](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fassets\u002F9496702\u002F9f342b7d-5b68-4dcf-9df2-0fbadb58cec3)\r\n\r\nThe PyTorch backend supports training strategies for large atomic models,","2024-11-14T19:36:17",{"id":216,"version":217,"summary_zh":218,"released_at":219},171816,"v3.0.0b4","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at devel -->\r\n\r\n## What's Changed\r\n\r\n### Breaking changes\r\n\r\n* breaking: drop C++ 11 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4068\r\n* breaking(pt\u002Fdp): tune new sub-structures for DPA2 by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4089\r\n   The default values of new options `g1_out_conv` and `g1_out_mlp` are set to `True`. 
The behaviors in previous versions are `False`.\r\n\r\n### New features\r\n\r\n* feat pt : Support property fitting by @Chengqian-Zhang in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3867\r\n* feat(pt\u002Fdp): support three-body type embedding by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4066\r\n* feat: load customized OP library in the C++ interface by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4073\r\n* feat: make `dp neighbor-stat --type-map` optional by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4049\r\n* feat: directional nlist by @wanghan-iapcm in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4052\r\n* feat(pt): support `eval_typeebd` for `DeepEval` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4110\r\n* feat: `DeepEval.get_model_def_script` and common `dp show` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4131\r\n* chore: support preset bias of atomic model output by @wanghan-iapcm in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4116\r\n* feat(jax): support neural networks in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4156\r\n\r\n### Enhancement\r\n\r\n* fix: bump LAMMPS to stable_29Aug2024 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4088\r\n* chore(pt): cleanup deadcode by @wanghan-iapcm in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4142\r\n* chore(pt): make comm_dict for dpa2 noncompulsory when nghost is 0 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4144\r\n* Set ROCM_ROOT to ROCM_PATH when it exist by @sigbjobo in #4150\r\n* chore(pt): move 
deepmd.pt.infer.deep_eval.eval_model to tests by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4153\r\n\r\n### Documentation\r\n* docs: improve docs for environment variables by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4070\r\n* docs: dynamically generate command outputs by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4071\r\n* docs: improve error message for inconsistent type maps by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4074\r\n* docs: add multiple packages to `intersphinx_mapping` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4075\r\n* docs: document CMake variables using Sphinx styles by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4079\r\n* docs: update ipi installation command by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4081\r\n* docs: fix the default value of `DP_ENABLE_PYTORCH` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4083\r\n* docs: fix defination of `se_e3` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4113\r\n* docs: update DeepModeling URLs by @njzjz-bot in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4119\r\n* docs(pt): examples for new dpa2 model by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4138\r\n\r\n### Bugfix\r\n\r\n* fix: fix PT AutoBatchSize OOM bug and merge execute_all into base by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4047\r\n* fix: replace `datetime.datetime.utcnow` which is deprecated by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4067\r\n* fix:fix LAMMPS MPI tests 
with mpi4py 4.0.0 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4032\r\n* fix(pt): invalid type_map when multitask training by @Cloudac7 in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4031\r\n* fix: manage testing models in a standard way by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4028\r\n* fix(pt): fix ValueError when array byte order is not native by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4100\r\n* fix(pt): convert `torch.__version__` to `str` when serializing by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4106\r\n* fix(tests): fix `skip_dp` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4111\r\n* [Fix] Wrap log_path with Path by @HydrogenSulfate in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4117\r\n* fix: bugs in uts for property fit by @Chengqian-Zhang in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4120\r\n* fix: type of the preset out bias by @wanghan-iapcm in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4135\r\n* fix(pt): fix zero inputs for LayerNorm by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4134\r\n* fix(pt\u002Fdp): share params of repinit_three_body by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4139\r\n* fix(pt): move entry point from deepmd.pt.model to deepmd.pt by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4146\r\n* fix: fix DPH5Path.glob for new keys by @njzjz in https:\u002F\u002Fgithub.com\u002Fd","2024-09-25T16:01:06",{"id":221,"version":222,"summary_zh":223,"released_at":224},171817,"v3.0.0b3","\u003C!-- Release notes generated using configuration in 
.github\u002Frelease.yml at devel -->\r\n\r\n## What's Changed\r\n### Other Changes\r\n* fix: fix nopbc in dpdata driver by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4027\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv3.0.0b2...v3.0.0b3","2024-07-27T04:25:48",{"id":226,"version":227,"summary_zh":228,"released_at":229},171818,"v3.0.0b2","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at devel -->\r\n\r\n## What's Changed\r\n### New features\r\n\r\n* feat: add documentation and options for multi-task arguments by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3989\r\n* feat: plain text model format by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4025\r\n* feat: allow model arguments to be registered outside by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3995\r\n* feat: add `get_model` classmethod to `BaseModel` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4002\r\n\r\n### Enhancement\r\n\r\n* add unittest for virial and pe\u002Fatom by @Yi-FanLi in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4013\r\n\r\n### Documentation\r\n* docs: document `PYTORCH_ROOT` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3981\r\n* docs: Disallow improper capitalization by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3982\r\n* docs: pin sphinx-argparse to \u003C 0.5.0 by @njzjz-bot in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3988\r\n\r\n### Bugfixes\r\n* fix(cmake): fix `set_if_higher` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3977\r\n* fix(pt): ensure suffix of 
`--init_model` and `--restart` is `.pt` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3980\r\n* fix(pt): do not overwrite disp_file when restarting training by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3985\r\n* fix(cc): compile `select_map\u003Cint>` when TensorFlow backend is off by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3987\r\n* fix(pt): make 'find_' to be float in get data by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3992\r\n* fix float precision problem of se_atten in line 217 (#3961) by @LiuGroupHNU in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3978\r\n* fix: fix errors for zero atom inputs by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4005\r\n* fix(pt): optimize graph memory usage by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4006\r\n* fix(pt): fix lammps nlist sort with large sel by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3993\r\n* fix(cc): add `atomic` argument to `DeepPotBase::computew` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3996\r\n* fix(lmp): call model deviation interface without atomic properties when they are not requested by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4012\r\n* fix(c): call C++ interface without atomic properties when they are not requested by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4010\r\n* fix(pt): fix `get_dim` for `DescrptDPA1Compat` by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4007\r\n* fix(cc): fix message passing when nloc is 0 by @njzjz in 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4021\r\n* fix(pt): use user seed in `DpLoaderSet` by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4015\r\n\r\n### Code style\r\n* style: require explicit device and dtype by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4001\r\n* style: enable N804 and N805 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4024\r\n\r\n### CI\u002FCD\r\n* ci: pin PT to 2.3.1 when using CUDA by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F4009\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv3.0.0b1...v3.0.0b2","2024-07-26T18:33:10",{"id":231,"version":232,"summary_zh":233,"released_at":234},171819,"v3.0.0b1","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at devel -->\r\n\r\n## What's Changed\r\n\r\n### Breaking Changes\r\n* breaking(pt\u002Ftf\u002Fdp): disable bias in type embedding by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3958\r\n  This change may make PyTorch checkpoints generated by v3.0.0b0 unusable in v3.0.0b1.\r\n\r\n### New features\r\n* feat: add plugin entry point for PT by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3965\r\n* feat(tf): improve the activation setting in tebd by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3971\r\n\r\n### Bugfix\r\n* fix: remove ref-names from .git_archival.txt by @njzjz-bot in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3953\r\n* fix(dp): fix dp seed in dpa2 descriptor by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3957\r\n* fix(pt): add `finetune_head` to argcheck by @iProzd in 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3967\r\n* fix(cmake): fix USE_PT_PYTHON_LIBS by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3972\r\n* fix(cmake): set C++ standard according to the PyTorch version by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3973\r\n* Fix: tf dipole atomic key by @anyangml in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3975\r\n* fix(pt\u002Ftf\u002Fdp): normalize the econf by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3976\r\n\r\n### CI\u002FCD\r\n* ci(deps): bump uv to 0.2.24 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3964\r\n* style: enable B904 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3956\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv3.0.0b0...v3.0.0b1","2024-07-14T07:11:07",{"id":236,"version":237,"summary_zh":238,"released_at":239},171820,"v2.2.11","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at r2 -->\r\n\r\n## What's Changed\r\n\r\n### New feature\r\n- feat: apply descriptor exclude_types to env mat stat by @njzjz in #3625\r\n- feat(build): Add Git archives version files by @njzjz-bot in #3669\r\n\r\n### Enhancement\r\n- style: enable W rules by @njzjz in #3793\r\n- build: unpin tensorflow version on windows by @njzjz in #3721\r\n- Add a reminder for the illegal memory error by @Yi-FanLi in #3822\r\n- lmp: improve error message when compute\u002Ffix is not found by @njzjz in #3801\r\n\r\n### Bugfix\r\n- tf: remove freeze warning for optional nodes by @njzjz in #3381\r\n- fix: set rpath for protobuf by @njzjz in #3636\r\n- fix(tf): apply exclude types to se_atten_v2 switch by @njzjz in #3651\r\n- fix: fix git version detection in 
docker_package_c.sh by @njzjz in #3658\r\n- fix(tf): fix float32 for exclude_types in se_atten_v2 by @njzjz in #3682\r\n- Fix typo in smooth_type_embdding by @iProzd in #3698\r\n- test: set more lossy precision requirements by @nahso in #3726\r\n- fix: fix ipi package by @njzjz in #3835\r\n- fix(tf): prevent fitting_attr variable scope from becoming fitting_attr_1 by @njzjz in #3930\r\n- fix seeds in se_a and se_atten by @njzjz in #3880\r\n\r\n### Documentation\r\n- docs: update DPA-1 reference by @njzjz in #3810\r\n- docs: setup uv for readthedocs by @njzjz in #3685\r\n- Clarifiy se_atten_v2 compression doc by @nahso in #3727\r\n- docs: add document equations for se_atten_v2 by @Chengqian-Zhang in #3828\r\n\r\n### CI\u002FCD\r\n- CI: Accerate GitHub Actions using uv by @njzjz in #3676\r\n- ci: bump ase to 3.23.0 by @njzjz in #3846\r\n- ci(build): use uv for cibuildwheel by @njzjz in #3695\r\n- chore(ci): workaround to retry error decoding response body from uv by @njzjz in #3889\r\n\r\n### Dependency updates\r\n- build(deps): bump tar from 6.1.14 to 6.2.1 in \u002Fsource\u002Fnodejs by @dependabot in #3714\r\n- build(deps): bump pypa\u002Fcibuildwheel from 2.17 to 2.18 by @dependabot in #3777\r\n- build(deps): bump docker\u002Fbuild-push-action from 5 to 6 by @dependabot in #3882\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv2.2.10...v2.2.11","2024-07-03T19:22:15",{"id":241,"version":242,"summary_zh":243,"released_at":244},171821,"v3.0.0b0","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at devel -->\r\n\r\n## What's Changed\r\n\r\nCompared to [v3.0.0a0](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Freleases\u002Ftag\u002Fv3.0.0a0), v3.0.0b0 contains all changes in [v2.2.10](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Freleases\u002Ftag\u002Fv2.2.10) and 
[v2.2.11](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Freleases\u002Ftag\u002Fv2.2.11), as well as:\r\n\r\n### Breaking changes\r\n* breaking: remove multi-task support in tf by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3763\r\n* breaking: deprecate `set_prefix` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3753\r\n* breaking: use all sets for training and test by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3862. In previous versions, only the last set was used as the test set in `dp test`.\r\n* PyTorch models trained in v3.0.0a0 cannot be used in v3.0.0b0 due to several changes. As mentioned in the release note of v3.0.0a0, we did not promise backward compatibility for v3.0.0a0.\r\n* The DPA-2 configurations have been changed by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3768. The old format in v3.0.0a0 is no longer supported.\r\n\r\n### Major new features\r\n\r\n- The latest supported features in the PyTorch and DP backends, consistent with the TensorFlow backend where possible:\r\n  - Descriptor: `se_e2_a`, `se_e2_r`, `se_e3`, `se_atten`, `se_atten_v2`, `dpa2`, `hybrid`;\r\n  - Fitting: `energy`, `dipole`, `polar`, `dos`, `fparam`\u002F`aparam` support\r\n  - Model: standard, DPRc, `frozen`, ZBL, Spin\r\n  - Python inference interface\r\n  - PyTorch only: C++ inference interface for energy only\r\n  - PyTorch only: TensorBoard\r\n- Support using the DPA-2 model in LAMMPS by @CaRoLZhangxy in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3657. 
If you install the Python interface from the source, you must set the environment variable `DP_ENABLE_PYTORCH=1` to build the PyTorch customized OPs.\r\n- New command line options `dp show` by @Chengqian-Zhang in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3796 and `dp change-bias` by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3933.\r\n- New training options `max_ckpt_keep` by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3441 and `change_bias_after_training`  by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3933. Several training options now take effect in the PyTorch backend, such as `seed` by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3773, `disp_training` and `time_training` by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3775, and `profiling` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3897.\r\n- Performance improvement of the PyTorch backend by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3422, https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3424, https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3425 and  by @iProzd in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3826\r\n- Support generating JSON schema for integration with VSCode by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3849\r\n\r\nMinor enhancements and code refactoring are listed at https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv3.0.0a0...v3.0.0b0.\r\n\r\n## Contributors\r\n\r\n- @CaRoLZhangxy: #3434, #3436, #3612, #3613, #3614, #3656, #3657, #3740, #3780, #3917, #3919\r\n- @Chengqian-Zhang: 
#3615, #3796, #3828, #3840, #3912\r\n- @Mancn-Xu: #3567\r\n- @Yi-FanLi: #3822\r\n- @anyangml: #3398, #3410, #3426, #3432, #3435, #3447, #3451, #3452, #3468, #3485, #3486, #3575, #3584, #3654, #3662, #3663, #3706, #3757, #3759, #3812, #3824, #3876\r\n- @caic99: #3465\r\n- @chazeon: #3473, #3652, #3653, #3739\r\n- @cherryWangY: #3877\r\n- @dependabot: #3446, #3487, #3777, #3882\r\n- @hztttt: #3762\r\n- @iProzd: #3301, #3409, #3411, #3441, #3442, #3445, #3456, #3480, #3569, #3571, #3573, #3607, #3616, #3619, #3696, #3698, #3712, #3717, #3718, #3725, #3746, #3748, #3758, #3763, #3768, #3773, #3774, #3775, #3781, #3782, #3785, #3803, #3813, #3814, #3815, #3826, #3837, #3841, #3842, #3843, #3873, #3906, #3914, #3916, #3925, #3926, #3927, #3933, #3944, #3945\r\n- @nahso: #3726, #3727\r\n- @njzjz: #3393, #3402, #3403, #3404, #3405, #3415, #3418, #3419, #3421, #3422, #3423, #3424, #3425, #3431, #3437, #3438, #3443, #3444, #3449, #3450, #3453, #3461, #3462, #3464, #3484, #3519, #3570, #3572, #3574, #3580, #3581, #3583, #3600, #3601, #3605, #3610, #3617, #3618, #3620, #3621, #3624, #3625, #3631, #3632, #3633, #3636, #3651, #3658, #3671, #3676, #3682, #3685, #3686, #3687, #3688, #3694, #3695, #3701, #3709, #3711, #3714, #3715, #3716, #3721, #3737, #3753, #3767, #3776, #3784, #3787, #3792, #3793, #3794, #3798, #3800, #3801, #3810, #3811, #3816, #3820, #3829, #3832, #3834, #3835, #3836, #3838, #3845, #3846, #3849, #3851, #3855, #3856, #3857, #3861, #3862, #3870, #3872, #3874, #3875, #3878, #3880, #3888, #3889, #3890, #3891, #3893, #3894, #3895, #3896, #3897, #3918, #3921, #3922, #3930\r\n- @njzjz-bot: #3669\r\n- @pre-commit-ci: #345","2024-07-03T19:22:49",{"id":246,"version":247,"summary_zh":248,"released_at":249},171822,"v2.2.10","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at r2 -->\r\n\r\n## What's Changed\r\n\r\n### New features\r\n\r\n* Add `max_ckpt_keep` for trainer by @iProzd in 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3441\r\n* feat: model devi C\u002FC++ API without nlist by @robinzyb in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3647\r\n\r\n### Enhancement\r\n* Neighbor stat is 80x accelerated by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3275\r\n* support checkpoint path (instead of directory) in dp freeze by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3254\r\n* add fparam\u002Faparam support for finetune by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3313\r\n* chore(build): move static part of dynamic metadata to pyproject.toml by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3618\r\n* test: add LAMMPS MPI tests by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3572\r\n* support Python 3.12 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3343\r\n\r\n### Documentation\r\n* docs: rewrite README; deprecate manually written TOC by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3179\r\n* docs: apply type_one_side=True to `se_a` and `se_r` by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3364\r\n* docs: add deprecation notice for the official conda channel and more conda docs by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3462\r\n* docs: Replace quick_start.ipynb with a new version.  
by @Mancn-Xu in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3567\r\n* issue template: change TF version to backend version by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3244\r\n* chore: remove incorrect memset TODOs by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3600\r\n\r\n### Bugfix\r\n* c: change the required shape of electric field to nloc * 3 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3237\r\n* Fix LAMMPS plugin symlink path on macOS platform by @chazeon in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3473\r\n* fix_dplr.cpp delete redundant setup by @shiruosong in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3344\r\n* fix_dplr.cpp set atom->image when pre_force by @shiruosong in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3345\r\n* fix: fix type hint of sel by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3624\r\n* fix: make `se_atten_v2` masking smooth when davg is not zero by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3632\r\n* fix: do not install tf-keras for cu11 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3444\r\n\r\n### CI\u002FCD\r\n\r\n* detect version in advance before building deepmd-kit-cu11 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3172\r\n* fix deepmd-kit-cu11 again by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3403\r\n* ban print by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3415\r\n* ci: add linter for markdown, yaml, CSS by @njzjz in 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3574\r\n* fix AlmaLinux GPG key error by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3326\r\n* ci: reduce ASLR entropy by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3461\r\n\r\n### Dependency update\r\n* bump LAMMPS to stable_2Aug2023_update3 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3399\r\n* build(deps): bump codecov\u002Fcodecov-action from 3 to 4 by @dependabot in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3231\r\n* build(deps): bump pypa\u002Fcibuildwheel from 2.16 to 2.17 by @dependabot in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3487\r\n* pin nvidia-cudnn-cu{11,12} to \u003C9 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3610\r\n* pin docker actions to major versions by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3238\r\n* build(deps): bump the npm_and_yarn group across 1 directories with 1 update by @dependabot in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3312\r\n* bump scikit-build-core to 0.8 by @njzjz in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3369\r\n* build(deps): bump softprops\u002Faction-gh-release from 1 to 2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3446\r\n\r\n## New Contributors\r\n\r\n* @shiruosong made their first contribution in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3344\r\n* @robinzyb made their first contribution in https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3647\r\n* @Mancn-Xu made their first contribution in 
https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fpull\u002F3567\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv2.2.9...v2.2.10","2024-04-06T19:28:14",{"id":251,"version":252,"summary_zh":253,"released_at":254},171823,"v3.0.0a0","\u003C!-- Release notes generated using configuration in .github\u002Frelease.yml at devel -->\r\n\r\n# DeePMD-kit v3: A multiple-backend framework for deep potentials\r\n\r\nWe are excited to announce the first alpha version of DeePMD-kit v3. DeePMD-kit v3 allows you to train and run deep potential models on top of TensorFlow or PyTorch. DeePMD-kit v3 also supports the [DPA-2 model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15492), a novel architecture for large atomic models.\r\n\r\n## Highlights\r\n\r\n### Multiple-backend framework\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fassets\u002F9496702\u002F6bf132d2-6952-4009-b263-3648641003e4)\r\n\r\nDeePMD-kit v3 adds a pluggable multiple-backend framework to provide a consistent training and inference experience across different backends. You can:\r\n\r\n- Use the same training data and input script to train a deep potential model with different backends. Switch backends based on efficiency, functionality, or convenience:\r\n```sh\r\n# Training a model using the TensorFlow backend\r\ndp --tf train input.json\r\ndp --tf freeze\r\n\r\n# Training a model using the PyTorch backend\r\ndp --pt train input.json\r\ndp --pt freeze\r\n```\r\n\r\n- Use any model to perform inference via any existing interface, including `dp test`, the Python\u002FC++\u002FC interfaces, and third-party packages (dpdata, ASE, LAMMPS, AMBER, Gromacs, i-PI, CP2K, OpenMM, ABACUS, etc.). 
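For instance, the Python interface can be exercised as follows. This is a minimal sketch, not taken from the release notes: it assumes DeePMD-kit is installed, a frozen model file `frozen_model.pb` exists (produced by `dp freeze`), and a hypothetical 3-atom system whose types match the model's type map.

```python
import numpy as np
from deepmd.infer import DeepPot  # backend is chosen by the model file suffix

# Assumed model file; use "frozen_model.pth" for a PyTorch backend model
dp = DeepPot("frozen_model.pb")

coord = np.random.rand(1, 3 * 3)         # 1 frame, 3 atoms, flattened xyz
cell = (10.0 * np.eye(3)).reshape(1, 9)  # periodic box, flattened 3x3 cell
atype = [0, 1, 1]                        # atom types per the model's type map

# Evaluate energy, forces, and virial for the frame
e, f, v = dp.eval(coord, cell, atype)
```

The same call works regardless of which backend produced the model, which is the point of the multiple-backend framework.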
Take LAMMPS as an example:\r\n```sh\r\n# run LAMMPS with a TensorFlow backend model\r\npair_style deepmd frozen_model.pb\r\n# run LAMMPS with a PyTorch backend model\r\npair_style deepmd frozen_model.pth\r\n# Calculate model deviation using both models\r\npair_style deepmd frozen_model.pb frozen_model.pth out_file md.out out_freq 100\r\n```\r\n\r\n- Convert models between backends, using `dp convert-backend`, if both backends support a model:\r\n```sh\r\ndp convert-backend frozen_model.pb frozen_model.pth\r\ndp convert-backend frozen_model.pth frozen_model.pb\r\n```\r\n\r\n- Add a new backend to DeePMD-kit much more quickly if you want to contribute.\r\n\r\n### PyTorch backend: a backend designed for large atomic models and new research\r\n\r\nWe added the PyTorch backend in DeePMD-kit v3 to support the development of new models, especially large atomic models.\r\n\r\n#### DPA-2 model: Towards a universal large atomic model for molecular and material simulation\r\n\r\nThe [DPA-2 model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15492) is a novel architecture for the [Large Atomic Model](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fcommunity\u002Fdiscussions\u002F32) (LAM) that can accurately represent a diverse range of chemical systems and materials, enabling high-quality simulations and predictions with significantly reduced effort compared to traditional methods. The DPA-2 model is only implemented in the PyTorch backend. An example configuration is in the `examples\u002Fwater\u002Fdpa2` directory.\r\n\r\nThe DPA-2 descriptor includes two primary components: `repinit` and `repformer`. 
The detailed architecture is shown in the following figure.\r\n\r\n![DPA-2](https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fassets\u002F9496702\u002F9f342b7d-5b68-4dcf-9df2-0fbadb58cec3)\r\n\r\n#### Training strategies for large atomic models\r\n\r\nThe PyTorch backend supports multiple training strategies for developing large atomic models.\r\n\r\n**Parallel training**: Large atomic models have a large number of hyper-parameters and a complex architecture, so training a model on multiple GPUs is often necessary. Benefiting from the PyTorch community ecosystem, parallel training for the PyTorch backend can be driven by [`torchrun`](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Felastic\u002Frun.html), a launcher for distributed data parallel training.\r\n\r\n```sh\r\ntorchrun --nproc_per_node=4 --no-python dp --pt train input.json\r\n```\r\n\r\n**Multi-task training**: Large atomic models are trained against data in a wide scope and at different DFT levels, which requires multi-task training. The PyTorch backend supports multi-task training, sharing the descriptor between different tasks. An example is given in `examples\u002Fwater_multi_task\u002Fpytorch_example\u002Finput_torch.json`.\r\n\r\n**Fine-tuning**: Fine-tuning is useful for training a pre-trained large model on a smaller, task-specific dataset. The PyTorch backend supports the `--finetune` argument in the `dp --pt train` command line.\r\n\r\n#### Developing new models using Python and dynamic graphs\r\n\r\nResearchers may find the static graph and the custom C++ OPs of the TensorFlow backend painful, as they sacrifice research convenience for computational performance. 
The PyTorch backend has a well-designed code structure built on the dynamic graph and is currently written entirely in Python, making it easier to extend and debug new deep potential models than with the static graph.\r\n\r\n#### Supporting traditional deep potential models\r\n\r\nPeople may still want to use, in the PyTorch backend, the traditional models already supported by the TensorFlow backend, and to compare the same model across backends. We rewrote almost all of the traditional models in the PyTorch backend, as listed below:\r\n\r\n- Features supported:\r\n  - Descr","2024-03-03T09:22:27",{"id":256,"version":257,"summary_zh":258,"released_at":259},171824,"v2.2.9","## What's Changed\r\n\r\n### Bugfixes\r\n* cc: fix returning type of sel_types by @njzjz in #3181\r\n* fix compile gromacs with precompiled C library by @njzjz in #3217\r\n* gmx: fix include directive by @njzjz in #3221\r\n* c: fix all memory leaks; add sanitizer checks in #3223\r\n\r\n### CI\u002FCD\r\n* build macos-arm64 wheel on M1 runners by @njzjz in #3206\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepmodeling\u002Fdeepmd-kit\u002Fcompare\u002Fv2.2.8...v2.2.9","2024-02-04T20:12:50"]