[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-pnnl--neuromancer":3,"tool-pnnl--neuromancer":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,2,"2026-04-07T23:26:32",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":32,"env_os":96,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":102,"github_topics":103,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":115,"updated_at":116,"faqs":117,"releases":138},5329,"pnnl\u002Fneuromancer","neuromancer","Pytorch-based framework for solving parametric constrained optimization problems, physics-informed system identification, and parametric model predictive control. 
","NeuroMANCER 是一个基于 PyTorch 的开源可微分编程库，旨在将机器学习与科学计算深度融合。它主要解决参数化约束优化、物理信息系统的辨识以及参数化模型预测控制等复杂难题，帮助用户构建嵌入先验知识和物理规律的端到端可微模型。\n\n无论是希望“学习优化”、“学习建模”还是“学习控制”的研究人员与开发者，都能利用 NeuroMANCER 高效应对从流体力学模拟到建筑能效控制等各类实际挑战。其核心优势在于提供了直观的符号编程接口，让用户能轻松定义并嵌入物理方程与领域约束。此外，NeuroMANCER 集成了多项前沿技术，包括函数编码器（FE）、柯尔莫哥洛夫 - 阿诺德网络（KAN）、神经微分方程（NODE）以及可微分凸优化层，确保在处理非线性动力学系统和安全约束时保持业界领先水平。配合丰富的教程案例和专为辅助编码设计的 LLM 助手，NeuroMANCER 成为了连接算法理论与工程应用的强大桥梁。","\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_9286cc5b2076.png\" width=\"250\">  \n\u003C\u002Fp>\n\n# NeuroMANCER v1.5.6\n\n[![PyPi Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fneuromancer)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fneuromancer)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-BSD-blue.svg)](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FLICENSE.md)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-online-blue.svg)](https:\u002F\u002Fpnnl.github.io\u002Fneuromancer\u002F)\n![Lightning](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Lightning-792ee5?logo=pytorchlightning&logoColor=white)\n\n**Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations (NeuroMANCER)**\nis an open-source differentiable programming (DP) library for solving parametric constrained optimization problems, \nphysics-informed system identification, and parametric model-based optimal control.\nNeuroMANCER is written in [PyTorch](https:\u002F\u002Fpytorch.org\u002F) and allows for systematic \nintegration of machine learning with scientific computing for creating end-to-end \ndifferentiable models and algorithms embedded with prior knowledge and physics.\n\n---\n\n## Table of Contents\n1. [Overview](#overview)\n2. [Key Features](#key-features)\n3. [What's New in v1.5.3](#whats-new-in-v153)\n4. [Installation](#installation)\n5. [Getting Started](#getting-started)\n6. 
[Tutorials](#domain-examples)\n7. [Documentation and User Guides](#documentation-and-user-guides)\n\n\n---\n\n### Key Features\n* **Learn To Model, Learn To Control, Learn To Optimize**: Our library is built to provide end users with a multitude of tools to solve Learning To Optimize (L2O), Learning To Model (L2M), and Learning To Control (L2C) tasks. Tackle advanced constrained parametric optimization, model fluid dynamics using physics-informed neural networks, or learn how to control indoor air temperature in buildings to maximize building efficiency.\n* A **symbolic programming** interface makes it easy to define and embed physics equations, domain knowledge, and constraints into these learning paradigms. \n* **Comprehensive Learning Tools**: Access a wide array of tutorials and example applications—from basic system identification to advanced predictive control—making it easy for users to learn and apply NeuroMANCER to real-world problems.\n* **State-of-the-art methods**: NeuroMANCER is up-to-date with SOTA methods such as Function Encoders (FE) and Kolmogorov-Arnold Networks (KANs) for function approximation; neural ordinary differential equations (NODEs), the neural Koopman Operator (KO), and sparse identification of non-linear dynamics (SINDy) for learning to model dynamical systems; differentiable convex optimization layers for safety constraints in learning to optimize; and Differentiable Predictive Control (DPC) for learning to control nonlinear systems.\n* **The NeuroMANCER-GPT Assistant**: We provide easy-to-use scripts to convert the contents of the NeuroMANCER library in a way that is suitable for ingestion in RAG-based \"LLM-assistant\" pipelines. Please see [Assistant](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fdevelop\u002Fassistant\u002FREADME.md) to read more about how one can quickly spin up an LLM model to help understand and code in NeuroMANCER. 
\n\n\n## What's New in v1.5.6\n\n\n### New Examples:\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FDAEs\u002Ftank_dae_example.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural DAEs via operator splitting method \n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_6_mixed_integer_decisions.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Mixed-Integer DPC for thermal system\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002Fgrid_response.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Grid-responsive DPC for building energy systems\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_3_ref_tracking_ODE.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> DPC with preview horizon \n\n\n### New Features\n+ New class SystemPreview that acts as drop-in replacement for System class enabling preview horizon functionality\n+ Unit tests brought up-to-date. 
\n\n### Fixed bug\n+ Fixed bug with mlflow dependency creating conflicts in Google Colab\n\n\n## Installation\nSimply run \n```\npip install neuromancer\n```\nFor manual installation, please refer to  [Installation Instructions](INSTALLATION.md)\n\n\n## Getting Started\n\nAn extensive set of tutorials can be found in the \n[examples](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fmaster\u002Fexamples) folder and the [Tutorials](#domain-examples) below.\nInteractive notebook versions of examples are available on Google Colab!\nTest out NeuroMANCER functionality before cloning the repository and setting up an\nenvironment.\n\nThe notebooks below introduce the core abstractions of the NeuroMANCER library, in particular, our symbolic programming interface and Node classes. \n\n### Symbolic Variables, Nodes, Constraints, Objectives, and Systems Classes\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ftutorials\u002Fpart_1_linear_regression.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\nPart 1: Linear regression in PyTorch vs NeuroMANCER.  \n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ftutorials\u002Fpart_2_variable.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\nPart 2: NeuroMANCER syntax tutorial: variables, constraints, and objectives.  
\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ftutorials\u002Fpart_3_node.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\nPart 3: NeuroMANCER syntax tutorial: modules, Node, and System class.\n\n### Example\nQuick example for how to solve parametric constrained optimization problem using NeuroMANCER, leveraging our symbolic programming interface, Node and Variable, Blocks, SLiM library, and PenaltyLoss classes. \n\n```python \n# Neuromancer syntax example for constrained optimization\nimport neuromancer as nm\nimport torch \n\n# define neural architecture \nfunc = nm.modules.blocks.MLP(insize=1, outsize=2, \n                             linear_map=nm.slim.maps['linear'], \n                             nonlin=torch.nn.ReLU, hsizes=[80] * 4)\n# wrap neural net into symbolic representation via the Node class: map(p) -> x\nmap = nm.system.Node(func, ['p'], ['x'], name='map')\n    \n# define decision variables\nx = nm.constraint.variable(\"x\")[:, [0]]\ny = nm.constraint.variable(\"x\")[:, [1]]\n# problem parameters sampled in the dataset\np = nm.constraint.variable('p')\n\n# define objective function\nf = (1-x)**2 + (y-x**2)**2\nobj = f.minimize(weight=1.0)\n\n# define constraints\ncon_1 = 100.*(x >= y)\ncon_2 = 100.*(x**2+y**2 \u003C= p**2)\n\n# create penalty method-based loss function\nloss = nm.loss.PenaltyLoss(objectives=[obj], constraints=[con_1, con_2])\n# construct differentiable constrained optimization problem\nproblem = nm.problem.Problem(nodes=[map], loss=loss)\n```\n\n\n## Domain Examples\n\nNeuroMANCER is built to tackle a variety of domain-specific modeling and control problems using its array of methods. Here we show how to model and control building energy systems, as well as apply load forecasting techniques. 
\n\nFor more in-depth coverage of our methods, please see our general [Tutorials](#tutorials-on-methods-for-modeling-optimization-and-control) section below. \n\n### Energy Systems\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FNODE_building_dynamics.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning Building Thermal Dynamics using Neural ODEs \n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FNODE_RC_networks.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Multi-zone Building Thermal Dynamics Resistance-Capacitance network with Neural ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FNODE_swing_equation.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning Swing Equation Dynamics using Neural ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FDPC_building_control.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to Control Indoor Air Temperature in Buildings\n\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FHVAC_load_forecasting.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Energy Load Forecasting for Building with MLP and CNN Models\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002Fbuilding_load_forecasting_Transformers.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>  Energy Load Forecasting for Building with Transformers Model\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FDPC_PSH.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to Control Pumped-storage Hyrdoelectricity System\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FRL_DPC_building_control.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to Control Building HVAC System With Safe RL and DPC\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002Fgrid_response.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Grid-responsive DPC for building energy 
systems\n\n\n## Tutorials on Methods for Modeling, Optimization, and Control\n### Learning to Optimize (L2O) for Parametric Programming\n\nNeuromancer allows you to formulate and solve a broad class of parametric optimization problems, leveraging machine learning to learn the solutions to such problems. [More information on Parametric programming](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002Fparametric_programming)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_1_basics.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve a constrained optimization problem.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_2_pQP.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve a quadratically-constrained optimization problem.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_3_pNLP.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve a set of 2D constrained optimization problems.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_4_projectedGradient.ipynb\">\u003Cimg 
src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve a constrained optimization problem with the projected gradient.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_5_cvxpy_layers.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Using Cvxpylayers for differentiable projection onto the polytopic feasible set.  \n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_6_pQp_lopoCorrection.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to optimize with metric learning for Operator Splitting layers.  \n\n### Learning to Control (L2C) via Differentiable Predictive Control (DPC)\nNeuromancer enables you to learn control policies for a full spectrum of differentiable white-box, grey-box, and black-box dynamical systems, subject to choice constraints and objective functions. 
\n[More information on Differentiable Predictive Control](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002Fcontrol)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_1_stabilize_linear_system.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to stabilize a linear dynamical system.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_2_stabilize_ODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to stabilize a nonlinear differential equation.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_3_ref_tracking_ODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to control a nonlinear differential equation.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_4_NODE_control.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning neural ODE model and control policy for an unknown dynamical system.\n\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_5_neural_Lyapunov.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning neural Lyapunov function for a nonlinear dynamical system.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_6_mixed_integer_decisions.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Mixed-Integer DPC \n\n\n### Function Approximation\nNeuromancer is up-to-date with state-of-the-art methods. Here we showcase the powerful Kolgomorov-Arnold networks [More information on Kolgomorov-Arnold Networks](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FKANs)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Ffeature\u002Ffbkans\u002Fexamples\u002FKANs\u002Fp1_fbkan_vs_kan_noise_data_1d.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> A comparison of KANs and FBKANs in learning a 1D multiscale function with noise\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Ffeature\u002Ffbkans\u002Fexamples\u002FKANs\u002Fp2_fbkan_vs_kan_noise_data_2d.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> A comparison of KANs and FBKANs in learning a 2D multiscale function with noise\n  \nNeuromancer contains an implementation of function encoders, 
an algorithm for learning basis functions as neural networks. See [Function Encoders: A Principled Approach to Transfer Learning in Hilbert Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.18373).\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ffunction_encoder\u002FPart_1_Intro_to_Function_Encoders.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> An Introduction to Function Encoders\n\n### System Identification\nNeuromancer allows one to use machine learning, prior physics, and domain knowledge to construct data-driven models of dynamical systems given the measured observations of the system behavior.\n[More information on System ID via Neural State Space Models and ODEs](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FODEs)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_1_NODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural Ordinary Differential Equations (NODEs)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_2_param_estim_ODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Parameter estimation of an ODE system\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_3_UDE.ipynb\">\n  \u003Cimg 
src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Universal Differential Equations (UDEs)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_4_nonauto_NODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> NODEs with exogenous inputs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_5_nonauto_NSSM.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural State Space Models (NSSMs) with exogenous inputs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_6_NetworkODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Data-driven modeling of resistance-capacitance (RC) network ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_7_DeepKoopman.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Deep Koopman operator\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_8_nonauto_DeepKoopman.ipynb\">\n  \u003Cimg 
src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> control-oriented Deep Koopman operator\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_9_SINDy.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Sparse Identification of Nonlinear Dynamics (SINDy)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ffunction_encoder\u002FPart_2_Function_Encoder_Neural_ODE.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Function Encoders + Neural ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FDAEs\u002Ftank_dae_example.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural Differential Algebraic Equations (DAEs) via operator splitting method \n\n\n### Physics-Informed Neural Networks (PINNs)\nNeuromancer's symbolic programming design is perfectly suited for solving PINNs. 
[More information on PINNs](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FPDEs)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_1_PINN_DiffusionEquation.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Diffusion Equation\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_2_PINN_BurgersEquation.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Burgers' Equation\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_3_PINN_BurgersEquation_inverse.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Burgers' Equation w\u002F Parameter Estimation (Inverse Problem)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_4_PINN_LaplaceEquationSteadyState.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Laplace's Equation (steady-state)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_5_Pendulum_Stacked.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In 
Colab\"\u002F>\u003C\u002Fa>  Damped Pendulum via stacked PINN\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_6_PINN_NavierStokesCavitySteady_KAN.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Navier-Stokes equation (lid-driven cavity flow, steady-state, KAN)\n\n### Stochastic Differential Equations (SDEs) \nNeuromancer has been integrated with TorchSDE to handle stochastic dynamical systems. [More information on SDEs](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FSDEs)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FSDEs\u002Fsde_walkthrough.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> LatentSDEs: \"System Identification\" of Stochastic Processes using Neuromancer x TorchSDE\n  \n\n## Scalability and Customization\n\n### PyTorch Lightning Integration\n\nWe have integrated PyTorch Lightning to streamline code, enable custom training logic, support GPU and multi-GPU setups, and handle large-scale, memory-intensive learning tasks.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Flightning_integration_examples\u002FPart_1_lightning_basics_tutorial.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Lightning Integration Basics.\n\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Flightning_integration_examples\u002FPart_2_lightning_advanced_and_gpu_tutorial.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Lightning Advanced Features and Automatic GPU Support.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Flightning_integration_examples\u002Fother_examples\u002Flightning_custom_training_example.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Defining Custom Training Logic via Lightning Modularized Code.\n\n\n## Documentation and User Guides\nThe documentation for the library can be found [online](https:\u002F\u002Fpnnl.github.io\u002Fneuromancer\u002F). \nThere is also an [introduction video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YkFKz-DgC98) covering \nthe core features of the library. \n\nFor more information, including developer guidance, please see our [Developer and User Guide](USER_GUIDE.md).\n\n## Community Information\nWe welcome contributions and feedback from the open-source community!  
\n\n### Contributions, Discussions, and Issues\nPlease read the [Community Development Guidelines](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md) \nfor further information on contributions, [discussions](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fdiscussions), and [Issues](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fissues).\n\n### Release notes\nSee the [Release notes](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FRELEASE_NOTES.md) documenting new features.\n\n### License\nNeuroMANCER is released under a [BSD license](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBSD_licenses).\nSee the [license](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FLICENSE.md) for further details. \n\n\n## Publications \n+ [Ashish S. Nair, Bruno Jacob, Amanda A. Howard, Jan Drgona, Panos Stinis, E-PINNs: Epistemic Physics-Informed Neural Networks, arXiv:2503.19333](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19333)\n+ [Bo Tang, Elias B. Khalil, Ján Drgoňa, Learning to Optimize for Mixed-Integer Non-linear Programming, arXiv:2410.11061, 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.11061)\n+ [John Viljoen, Wenceslao Shaw Cortez, Jan Drgona, Sebastian East, Masayoshi Tomizuka, Draguna Vrabie, Differentiable Predictive Control for Robotics: A Data-Driven Predictive Safety Filter Approach, arXiv:2409.13817, 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13817)\n+ [Jan Drgona, Aaron Tuor, Draguna Vrabie, Learning Constrained Parametric Differentiable Predictive Control Policies With Guarantees, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10479163)\n+ [Renukanandan Tumu, Wenceslao Shaw Cortez, Ján Drgoňa, Draguna L. 
Vrabie, Sonja Glavaski, Differentiable Predictive Control for Large-Scale Urban Road Networks, \tarXiv:2406.10433, 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10433)\n+ [Ethan King, James Kotary, Ferdinando Fioretto, Jan Drgona, Metric Learning to Accelerate Convergence of Operator Splitting Methods for Differentiable Parametric Programming, arXiv:2404.00882, 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.00882)\n+ [James Koch, Madelyn Shapiro, Himanshu Sharma, Draguna Vrabie, Jan Drgona, Neural Differential Algebraic Equations, arXiv:2403.12938, 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12938)\n+ [Wenceslao Shaw Cortez, Jan Drgona, Draguna Vrabie, Mahantesh Halappanavar, A Robust, Efficient Predictive Safety Filter, arXiv:2311.08496, 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.08496)\n+ [Shrirang Abhyankar, Jan Drgona, Andrew August, Elliott Skomski, Aaron Tuor, Neuro-physical dynamic load modeling using differentiable parametric optimization, 2023 IEEE Power & Energy Society General Meeting (PESGM), 2023](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10253098)\n+ [James Koch, Zhao Chen, Aaron Tuor, Jan Drgona, Draguna Vrabie, Structural Inference of Networked Dynamical Systems with Universal Differential Equations, arXiv:2207.04962, (2022)](https:\u002F\u002Faps.arxiv.org\u002Fabs\u002F2207.04962)\n+ [Ján Drgoňa, Sayak Mukherjee, Aaron Tuor, Mahantesh Halappanavar, Draguna Vrabie, Learning Stochastic Parametric Differentiable Predictive Control Policies, IFAC ROCOND conference (2022)](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS2405896322015877)\n+ [Sayak Mukherjee, Ján Drgoňa, Aaron Tuor, Mahantesh Halappanavar, Draguna Vrabie, Neural Lyapunov Differentiable Predictive Control, IEEE Conference on Decision and Control Conference 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10728)\n+ [Wenceslao Shaw Cortez, Jan Drgona, Aaron Tuor, Mahantesh Halappanavar, 
Draguna Vrabie, Differentiable Predictive Control with Safety Guarantees: A Control Barrier Function Approach, IEEE Conference on Decision and Control Conference 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.02319)\n+ [Ethan King, Jan Drgona, Aaron Tuor, Shrirang Abhyankar, Craig Bakker, Arnab Bhattacharya, Draguna Vrabie, Koopman-based Differentiable Predictive Control for the Dynamics-Aware Economic Dispatch Problem, 2022 American Control Conference (ACC)](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9867379)\n+ [Drgoňa, J., Tuor, A. R., Chandan, V., & Vrabie, D. L., Physics-constrained deep learning of multi-zone building thermal dynamics. Energy and Buildings, 243, 110992, (2021)](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0378778821002760)\n+ [E. Skomski, S. Vasisht, C. Wight, A. Tuor, J. Drgoňa and D. Vrabie, \"Constrained Block Nonlinear Neural Dynamical Models,\" 2021 American Control Conference (ACC), 2021, pp. 3993-4000, doi: 10.23919\u002FACC50511.2021.9482930.](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9482930)\n+ [Skomski, E., Drgoňa, J., & Tuor, A. (2021, May). Automating Discovery of Physics-Informed Neural State Space Models via Learning and Evolution. In Learning for Dynamics and Control (pp. 980-991). PMLR.](https:\u002F\u002Fproceedings.mlr.press\u002Fv144\u002Fskomski21a.html)\n+ [Drgoňa, J., Tuor, A., Skomski, E., Vasisht, S., & Vrabie, D. (2021). Deep Learning Explicit Differentiable Predictive Control Laws for Buildings. IFAC-PapersOnLine, 54(6), 14-19.](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS2405896321012933)\n+ [Tuor, A., Drgona, J., & Vrabie, D. (2020). Constrained neural ordinary differential equations with stability guarantees. arXiv preprint arXiv:2004.10883.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.10883)\n+ [Drgona, Jan, et al. 
\"Differentiable Predictive Control: An MPC Alternative for Unknown Nonlinear Systems using Constrained Deep Learning.\" Journal of Process Control Volume 116, August 2022, Pages 80-92](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0959152422000981)\n+ [Drgona, J., Skomski, E., Vasisht, S., Tuor, A., & Vrabie, D. (2020). Dissipative Deep Neural Dynamical Systems, in IEEE Open Journal of Control Systems, vol. 1, pp. 100-112, 2022](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9809789)\n+ [Drgona, J., Tuor, A., & Vrabie, D., Learning Constrained Adaptive Differentiable Predictive Control Policies With Guarantees, arXiv preprint arXiv:2004.11184, (2020)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.11184)\n\n\n## Cite as\n```bibtex\n@article{Neuromancer2023,\n  title={{NeuroMANCER: Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations}},\n  author={Drgona, Jan and Tuor, Aaron and Koch, James and Shapiro, Madelyn and Jacob, Bruno and Vrabie, Draguna},\n  url={https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer}, \n  year={2023}\n}\n```\n\n## Development team\n\n**Lead developers**: [Jan Drgona](https:\u002F\u002Fdrgona.github.io\u002F), [Aaron Tuor](https:\u002F\u002Fsw.cs.wwu.edu\u002F~tuora\u002Faarontuor\u002F)  \n**Notable contributors**: [Rahul Birmiwal](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Frahul-birmiwal009\u002F), [Bruno Jacob](https:\u002F\u002Fbrunopjacob.github.io\u002F),  [Reilly Raab](https:\u002F\u002Freillyraab.com\u002Fabout.html), Madelyn Shapiro, James Koch, Seth Briney, Bo Tang, Ethan King, Elliot Skomski, Zhao Chen, Christian Møldrup Legaard, Tyler Ingebrand, Alireza Daneshvar, Cary Faulkner  \n**Scientific advisors**: Draguna Vrabie, Panos Stinis  \n\nOpen-source contributions made by:  \n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_69e1dce6dc4c.png\" \u002F>\n\u003C\u002Fa>\n\nMade with [contrib.rocks](https:\u002F\u002Fcontrib.rocks).\n\n## Acknowledgments\nThis research was partially supported by the Mathematics for Artificial Reasoning in Science (MARS) and Data Model Convergence (DMC) initiatives via the Laboratory Directed Research and Development (LDRD) investments at Pacific Northwest National Laboratory (PNNL), by the U.S. Department of Energy, through the Office of Advanced Scientific Computing Research's “Data-Driven Decision Control for Complex Systems (DnC2S)” project, and through the Energy Efficiency and Renewable Energy, Building Technologies Office under the “Dynamic decarbonization through autonomous physics-centric deep learning and optimization of building operations” and the “Advancing Market-Ready Building Energy Management by Cost-Effective Differentiable Predictive Control” projects. This project was also supported by the U.S. Department of Energy (DOE), Office of Science, Advanced Scientific Computing Research (ASCR) program, under the Uncertainty Quantification for Multifidelity Operator Learning (MOLUcQ) project (Project No. 81739). PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL0-1830. This research is also partially supported by the U.S. DOE, Office of Science, ASCR program under the Scientific Discovery through Advanced Computing (SciDAC) Institute “LEADS: LEarning-Accelerated Domain Science”. 
This research is also partially supported by the Ralph O’Connor Sustainable Energy Institute (ROSEI) at Johns Hopkins University.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_6be3536bf6b9.jpg\" width=\"500\">  \n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_bda84c2d7d2c.png\" width=\"500\">  \n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_9286cc5b2076.png\" width=\"250\">  \n\u003C\u002Fp>\n\n# NeuroMANCER v1.5.6\n\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fneuromancer)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fneuromancer)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-BSD-blue.svg)](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FLICENSE.md)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-online-blue.svg)](https:\u002F\u002Fpnnl.github.io\u002Fneuromancer\u002F)\n![Lightning](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Lightning-792ee5?logo=pytorchlightning&logoColor=white)\n\n**Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations (NeuroMANCER)**\nis an open-source differentiable programming (DP) library for solving parametric constrained optimization problems, physics-informed system identification, and parametric model-based optimal control. NeuroMANCER is written in [PyTorch](https:\u002F\u002Fpytorch.org\u002F) and allows for a systematic integration of machine learning with scientific computing to create end-to-end differentiable models and algorithms embedded with prior knowledge and physics.\n\n---\n\n## Table of Contents\n1. [Overview](#overview)\n2. [Key Features](#key-features)\n3. [What's new in v1.5.6](#whats-new-in-v156)\n4. [Installation](#installation)\n5. [Getting Started](#getting-started)\n6. [Tutorials](#domain-examples)\n7. 
[Documentation and User Guides](#documentation-and-user-guides)\n\n\n---\n\n### Key Features\n* **Learn to model, learn to control, learn to optimize**: Our library aims to give end users a range of tools for solving learning-to-optimize (L2O), learning-to-model (L2M), and learning-to-control (L2C) tasks. You can tackle advanced constrained parametric optimization problems, model fluid dynamics with physics-informed neural networks, or learn to control indoor air temperature to maximize building energy efficiency.\n* A **symbolic programming** interface makes it remarkably simple to define and embed physical equations, domain knowledge, and constraints within these learning paradigms.\n* **Comprehensive learning tools**: A rich set of tutorials and example applications, from basic system identification to advanced predictive control, lets users quickly learn NeuroMANCER and apply it to real problems.\n* **State-of-the-art methods**: NeuroMANCER brings together current state-of-the-art methods, such as function encoders (FE) and Kolmogorov-Arnold networks (KANs) for function approximation; neural ordinary differential equations (NODEs), neural Koopman operators (KO), and sparse identification of nonlinear dynamics (SINDy) for learning dynamical systems; differentiable convex optimization layers for safety constraints in learning to optimize; and differentiable predictive control (DPC) for learning to control nonlinear systems.\n* **NeuroMANCER-GPT assistant**: We provide easy-to-use scripts that convert the contents of the NeuroMANCER library into a form suitable for RAG-based LLM-assistant pipelines. See the [Assistant](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fdevelop\u002Fassistant\u002FREADME.md) README to quickly set up an LLM that helps you understand and write NeuroMANCER code.\n\n\n## What's new in v1.5.6\n\n\n### New examples:\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FDAEs\u002Ftank_dae_example.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural DAEs via the operator splitting method\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_6_mixed_integer_decisions.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Mixed-integer DPC for a thermal system\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002Fgrid_response.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> DPC of grid-responsive building energy systems\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_3_ref_tracking_ODE.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> DPC with a preview horizon\n\n\n### New features\n+ New SystemPreview class, a drop-in replacement for the System class that enables preview horizons\n+ Unit tests updated to the latest versions.\n\n### Bug fixes\n+ Fixed a conflict caused by the mlflow dependency in Google Colab\n\n\n## Installation\nSimply run\n```\npip install neuromancer\n```\nFor manual installation, please refer to the [installation instructions](INSTALLATION.md)\n\n\n## Getting Started\n\nYou can find a wealth of tutorials in the [examples](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fmaster\u002Fexamples) folder and in the [Tutorials](#domain-examples) section below. Interactive notebook versions of the examples are available on Google Colab! Try out NeuroMANCER's features before cloning the repository and setting up an environment.\n\nThe following notebooks introduce the core abstractions of the NeuroMANCER library, in particular our symbolic programming interface and the Node class.\n\n### Symbolic variables, Nodes, Constraints, Objectives, and the System class\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ftutorials\u002Fpart_1_linear_regression.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\nPart 1: Linear regression in PyTorch vs. NeuroMANCER.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ftutorials\u002Fpart_2_variable.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\nPart 2: NeuroMANCER syntax tutorial: variables, constraints, and objectives.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ftutorials\u002Fpart_3_node.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\nPart 3: NeuroMANCER syntax tutorial: modules, Node and 
System class.\n\n### Example\nA quick example of solving a parametric constrained optimization problem with NeuroMANCER, using our symbolic programming interface, the Node and Variable classes, Blocks, the SLiM library, and the PenaltyLoss class.\n\n```python \n# NeuroMANCER syntax example: constrained optimization\nimport neuromancer as nm\nimport torch \n\n# define neural architecture \nfunc = nm.modules.blocks.MLP(insize=1, outsize=2, \n                             linear_map=nm.slim.maps['linear'], \n                             nonlin=torch.nn.ReLU, hsizes=[80] * 4)\n# wrap neural net into symbolic representation via the Node class: map(p) -> x\nmap = nm.system.Node(func, ['p'], ['x'], name='map')\n    \n# define decision variables\nx = nm.constraint.variable(\"x\")[:, [0]]\ny = nm.constraint.variable(\"x\")[:, [1]]\n\n# problem parameters sampled in the dataset\np = nm.constraint.variable('p')\n\n# define objective function\nf = (1-x)**2 + (y-x**2)**2\nobj = f.minimize(weight=1.0)\n\n# define constraints\ncon_1 = 100.*(x >= y)\ncon_2 = 100.*(x**2+y**2 \u003C= p**2)\n\n# create penalty-method-based loss function\nloss = nm.loss.PenaltyLoss(objectives=[obj], constraints=[con_1, con_2])\n# construct differentiable constrained optimization problem\nproblem = nm.problem.Problem(nodes=[map], loss=loss)\n```\n\n\n## Domain Examples\n\nNeuroMANCER is designed to apply its rich set of algorithms to a variety of domain-specific modeling and control problems. Below we show how to model and control building energy systems and how to apply load-forecasting techniques.\n\nFor a deeper dive into our methods, see the [tutorials on methods for modeling, optimization, and control](#tutorials-on-methods-for-modeling-optimization-and-control) section below.\n\n### Energy Systems\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FNODE_building_dynamics.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning building thermal dynamics using neural ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FNODE_RC_networks.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Resistance-capacitance network models of multi-zone building thermal dynamics with neural ODEs\n\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FNODE_swing_equation.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning swing-equation dynamics of a synchronous generator using neural ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FDPC_building_control.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to control indoor air temperature in buildings\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FHVAC_load_forecasting.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Building energy load forecasting with MLP and CNN models\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002Fbuilding_load_forecasting_Transformers.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Building energy load forecasting with Transformer models\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FDPC_PSH.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to control a pumped-storage hydroelectric system\n\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002FRL_DPC_building_control.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Controlling building HVAC systems with safe reinforcement learning and DPC\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fdomain_examples\u002Fgrid_response.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> DPC of grid-responsive building energy systems\n\n\n## Tutorials on Methods for Modeling, Optimization, and Control\n### Learning to Optimize (L2O) for Parametric Programming\n\nNeuromancer lets you formulate and solve a broad range of parametric optimization problems, leveraging machine learning to learn the solutions to these problems. [More information on parametric programming](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002Fparametric_programming)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_1_basics.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve a constrained optimization problem.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_2_pQP.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve a quadratically constrained optimization problem.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_3_pNLP.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve a set of 2D constrained optimization problems.\n\n+ \u003Ca 
target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_4_projectedGradient.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to solve constrained optimization problems with the projected gradient method.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_5_cvxpy_layers.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Differentiable projection onto a polytopic feasible set using CvxpyLayers.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fparametric_programming\u002FPart_6_pQp_lopoCorrection.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learning to optimize operator splitting layers via metric learning.\n\n### Learning to Control (L2C) via Differentiable Predictive Control (DPC)\nNeuromancer enables you to learn control policies for the full range of differentiable white-box, gray-box, and black-box dynamical systems, subject to constraints and objectives of your choosing.\n[More information on differentiable predictive control](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002Fcontrol)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_1_stabilize_linear_system.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learn to stabilize a linear dynamical system.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_2_stabilize_ODE.ipynb\">\n  \u003Cimg 
src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learn to stabilize a nonlinear differential equation.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_3_ref_tracking_ODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learn to control a nonlinear differential equation.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_4_NODE_control.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learn a neural ODE model and a control policy for an unknown dynamical system.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_5_neural_Lyapunov.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Learn a neural Lyapunov function for a nonlinear dynamical system.\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Fcontrol\u002FPart_6_mixed_integer_decisions.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Mixed-integer DPC\n\n\n### Function Approximation\nNeuromancer keeps up with the state of the art. Here we showcase the powerful Kolmogorov-Arnold networks. [More information on Kolmogorov-Arnold networks](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FKANs)\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Ffeature\u002Ffbkans\u002Fexamples\u002FKANs\u002Fp1_fbkan_vs_kan_noise_data_1d.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Comparing KANs and FBKANs on learning a noisy 1D multiscale function\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Ffeature\u002Ffbkans\u002Fexamples\u002FKANs\u002Fp2_fbkan_vs_kan_noise_data_2d.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Comparing KANs and FBKANs on learning a noisy 2D multiscale function\n  \nNeuromancer includes an implementation of function encoders, an algorithm that learns basis functions as neural networks. See [Function Encoders: A Principled Approach to Transfer Learning in Hilbert Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.18373).\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ffunction_encoder\u002FPart_1_Intro_to_Function_Encoders.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Introduction to function encoders\n\n### System Identification\nNeuromancer allows users to combine machine learning, prior physical knowledge, and domain expertise to build data-driven models of dynamical systems from observations of system behavior.\n[More information on system identification via neural state-space models and ordinary differential equations](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FODEs)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_1_NODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural Ordinary Differential Equations (NODEs)\n\n+ \u003Ca target=\"_blank\" 
href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_2_param_estim_ODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Parameter estimation of ODE systems\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_3_UDE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Universal Differential Equations (UDEs)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_4_nonauto_NODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> NODEs with exogenous inputs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_5_nonauto_NSSM.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural State-Space Models (NSSMs) with exogenous inputs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_6_NetworkODE.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Data-driven modeling of resistance-capacitance (RC) network ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_7_DeepKoopman.ipynb\">\n  \u003Cimg 
src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Deep Koopman operator\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_8_nonauto_DeepKoopman.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> control-oriented Deep Koopman operator\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FODEs\u002FPart_9_SINDy.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Sparse Identification of Nonlinear Dynamics (SINDy)\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Ffunction_encoder\u002FPart_2_Function_Encoder_Neural_ODE.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Function Encoders + Neural ODEs\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FDAEs\u002Ftank_dae_example.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> Neural Differential Algebraic Equations (DAEs) via operator splitting method\n\n\n### Physics-Informed Neural Networks (PINNs)\nNeuromancer's symbolic programming design is perfectly suited for solving PINNs. [More information on PINNs](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FPDEs)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_1_PINN_DiffusionEquation.ipynb\">\u003Cimg 
src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 扩散方程\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_2_PINN_BurgersEquation.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 伯格斯方程\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_3_PINN_BurgersEquation_inverse.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 带参数估计的伯格斯方程（反问题）\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_4_PINN_LaplaceEquationSteadyState.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 拉普拉斯方程（稳态）\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_5_Pendulum_Stacked.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 通过堆叠 PINN 实现阻尼摆\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FPDEs\u002FPart_6_PINN_NavierStokesCavitySteady_KAN.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 纳维-斯托克斯方程（盖驱动腔流，稳态，KAN）\n\n### 随机微分方程 (SDEs)\nNeuromancer 已与 TorchSDE 
集成，以处理随机动力学系统。[关于 SDEs 的更多信息](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fdevelop\u002Fexamples\u002FSDEs)\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002FSDEs\u002Fsde_walkthrough.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 隐式 SDE：使用 Neuromancer x TorchSDE 对随机过程进行“系统辨识”\n  \n\n## 可扩展性和可定制性\n\n### PyTorch Lightning 集成\n\n我们集成了 PyTorch Lightning，以简化代码、实现自定义训练逻辑、支持 GPU 和多 GPU 配置，并处理大规模、内存密集型的学习任务。\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Flightning_integration_examples\u002FPart_1_lightning_basics_tutorial.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> Lightning 集成基础教程。\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Flightning_integration_examples\u002FPart_2_lightning_advanced_and_gpu_tutorial.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> Lightning 高级功能与自动 GPU 支持。\n\n+ \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002Fexamples\u002Flightning_integration_examples\u002Fother_examples\u002Flightning_custom_training_example.ipynb\">\n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> 通过 Lightning 模块化代码定义自定义训练逻辑。\n\n\n## 文档与用户指南\n该库的文档可在 [在线](https:\u002F\u002Fpnnl.github.io\u002Fneuromancer\u002F) 查阅。\n此外，还有一段 
[介绍视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YkFKz-DgC98)，涵盖了该库的核心功能。\n\n如需更多信息，包括面向开发人员的内容，请访问我们的 [开发者与用户指南](USER_GUIDE.md)。\n\n## 社区信息\n我们欢迎开源社区的贡献和反馈！\n\n### 贡献、讨论与问题\n请阅读 [社区开发指南](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md)，以获取有关贡献、[讨论](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fdiscussions) 和 [问题](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fissues) 的更多信息。\n\n### 发布说明\n请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FRELEASE_NOTES.md)，其中记录了新功能。\n\n### 许可证\nNeuroMANCER 采用 [BSD 许可证](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBSD_licenses)。\n更多详情请参阅 [许可证](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fblob\u002Fmaster\u002FLICENSE.md)。\n\n## 出版物\n+ [Ashish S. Nair, Bruno Jacob, Amanda A. Howard, Jan Drgona, Panos Stinis, E-PINNs：认知不确定性物理信息神经网络，arXiv:2503.19333](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19333)\n+ [Bo Tang, Elias B. Khalil, Ján Drgoňa, 学习求解混合整数非线性规划的优化方法，arXiv:2410.11061，2024年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.11061)\n+ [John Viljoen, Wenceslao Shaw Cortez, Jan Drgona, Sebastian East, Masayoshi Tomizuka, Draguna Vrabie, 面向机器人技术的可微预测控制：一种数据驱动的预测安全滤波器方法，arXiv:2409.13817，2024年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13817)\n+ [Jan Drgona, Aaron Tuor, Draguna Vrabie, 带有保证的学习受限参数化可微预测控制策略，IEEE系统、人机与控制论汇刊：系统，2024年](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10479163)\n+ [Renukanandan Tumu, Wenceslao Shaw Cortez, Ján Drgoňa, Draguna L. 
Vrabie, Sonja Glavaski, 面向大规模城市道路网络的可微预测控制，arXiv:2406.10433，2024年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10433)\n+ [Ethan King, James Kotary, Ferdinando Fioretto, Jan Drgona, 用于加速可微参数化规划中算子分裂法收敛的度量学习，arXiv:2404.00882，2024年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.00882)\n+ [James Koch, Madelyn Shapiro, Himanshu Sharma, Draguna Vrabie, Jan Drgona, 神经微分代数方程，arXiv:2403.12938，2024年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12938)\n+ [Wenceslao Shaw Cortez, Jan Drgona, Draguna Vrabie, Mahantesh Halappanavar, 一种鲁棒高效的预测安全滤波器，arXiv:2311.08496，2024年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.08496)\n+ [Shrirang Abhyankar, Jan Drgona, Andrew August, Elliott Skomski, Aaron Tuor, 利用可微参数化优化进行神经—物理动态负荷建模，2023 IEEE电力与能源学会年会（PESGM）](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10253098)\n+ [James Koch，Zhao Chen，Aaron Tuor，Jan Drgona，Draguna Vrabie，使用通用微分方程对网络化动力系统进行结构推断，arXiv:2207.04962，2022年](https:\u002F\u002Faps.arxiv.org\u002Fabs\u002F2207.04962)\n+ [Ján Drgoňa，Sayak Mukherjee，Aaron Tuor，Mahantesh Halappanavar，Draguna Vrabie，学习随机参数化可微预测控制策略，IFAC ROCOND会议（2022年）](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS2405896322015877)\n+ [Sayak Mukherjee，Ján Drgoňa，Aaron Tuor，Mahantesh Halappanavar，Draguna Vrabie，神经李雅普诺夫可微预测控制，IEEE决策与控制会议2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10728)\n+ [Wenceslao Shaw Cortez，Jan Drgona，Aaron Tuor，Mahantesh Halappanavar，Draguna Vrabie，具有安全保证的可微预测控制：基于控制屏障函数的方法，IEEE决策与控制会议2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.02319)\n+ [Ethan King，Jan Drgona，Aaron Tuor，Shrirang Abhyankar，Craig Bakker，Arnab Bhattacharya，Draguna Vrabie，基于库普曼算子的可微预测控制在动态感知经济调度问题中的应用，2022年美国控制会议（ACC）](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9867379)\n+ [Drgoňa, J., Tuor, A. R., Chandan, V., & Vrabie, D. L., 物理约束下的多区域建筑热动力学深度学习。能源与建筑，243，110992，2021年](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0378778821002760)\n+ [E. 
Skomski，S. Vasisht，C. Wight，A. Tuor，J. Drgoňa和D. Vrabie，“受约束的块状非线性神经动力学模型”，2021年美国控制会议（ACC），2021年，第3993–4000页，doi：10.23919\u002FACC50511.2021.9482930。](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9482930)\n+ [Skomski，E.，Drgoňa，J.，& Tuor，A.（2021年5月）。通过学习与进化自动发现物理信息神经状态空间模型。载于《动力学与控制的学习》（第980–991页）。PMLR。](https:\u002F\u002Fproceedings.mlr.press\u002Fv144\u002Fskomski21a.html)\n+ [Drgoňa，J.，Tuor，A.，Skomski，E.，Vasisht，S.，& Vrabie，D.（2021年）。为建筑物开发深度学习显式可微预测控制律。IFAC-PapersOnLine，54(6)，第14–19页。](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS2405896321012933)\n+ [Tuor，A.，Drgona，J.，& Vrabie，D.（2020年）。具有稳定性保证的受约束神经常微分方程。arXiv预印本arXiv:2004.10883。](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.10883)\n+ [Drgona，Jan等。“可微预测控制：一种适用于未知非线性系统的MPC替代方案，采用受约束的深度学习。”过程控制杂志第116卷，2022年8月，第80–92页](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0959152422000981)\n+ [Drgona，J.，Skomski，E.，Vasisht，S.，Tuor，A.，& Vrabie，D.（2020年）。耗散型深度神经动力学系统，载于IEEE开放控制系统期刊，第1卷，第100–112页，2022年](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9809789)\n+ [Drgona，J.，Tuor，A.，& Vrabie，D.，学习带有保证的受限自适应可微预测控制策略，arXiv预印本arXiv:2004.11184，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.11184)\n\n\n## 引用方式\n```bibtex\n@article{Neuromancer2023,\n  title={{NeuroMANCER: Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations}},\n  author={Drgona, Jan and Tuor, Aaron and Koch, James and Shapiro, Madelyn and Jacob, Bruno and Vrabie, Draguna},\n  url={https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer},\n  year={2023}\n}\n```\n\n## 开发团队\n\n**主要开发者**：[Jan Drgona](https:\u002F\u002Fdrgona.github.io\u002F)，[Aaron Tuor](https:\u002F\u002Fsw.cs.wwu.edu\u002F~tuora\u002Faarontuor\u002F)  \n**重要贡献者**：[Rahul Birmiwal](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Frahul-birmiwal009\u002F)，[Bruno Jacob](https:\u002F\u002Fbrunopjacob.github.io\u002F)，[Reilly Raab](https:\u002F\u002Freillyraab.com\u002Fabout.html)，Madelyn Shapiro，James Koch，Seth Briney，Bo Tang，Ethan 
King，Elliott Skomski，Zhao Chen，Christian Møldrup Legaard，Tyler Ingebrand，Alireza Daneshvar，Cary Faulkner  \n**科学顾问**：Draguna Vrabie，Panos Stinis  \n\n开源贡献者：\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_69e1dce6dc4c.png\" \u002F>\n\u003C\u002Fa>\n\n由[contrib.rocks](https:\u002F\u002Fcontrib.rocks)制作。\n\n## 致谢\n本研究得到了太平洋西北国家实验室（PNNL）实验室指导研究与发展（LDRD）项目的部分支持，具体通过“科学中的人工推理数学”（MARS）和“数据模型融合”（DMC）两大倡议；同时还得到了美国能源部高级科学计算研究办公室“复杂系统的数据驱动决策控制”（DnC2S）项目的支持，以及能源效率与可再生能源建筑技术办公室下“通过以物理为中心的自主深度学习和建筑运营优化实现动态脱碳”和“通过经济高效的可微预测控制推进市场就绪的建筑能源管理”两个项目的支持。此外，本项目还获得了美国能源部科学局高级科学计算研究（ASCR）计划下“多保真度算子学习不确定性量化”（MOLUcQ）项目的资助（项目编号：81739）。PNNL是由巴特尔纪念研究所根据合同编号DE-AC05-76RL0-1830代表美国能源部（DOE）运营的多学科国家实验室。本研究亦得到美国能源部科学局ASCR计划下“先进计算促进科学发现”（SciDAC）研究所“LEADS：学习加速领域科学”项目的部分支持。同时，本研究还得到了约翰霍普金斯大学拉尔夫·奥康纳可持续能源研究所（ROSEI）的支持。\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_6be3536bf6b9.jpg\" width=\"500\">  \n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_readme_bda84c2d7d2c.png\" width=\"500\">  \n\u003C\u002Fp>","# NeuroMANCER 快速上手指南\n\nNeuroMANCER 是一个基于 PyTorch 的开源可微分编程（DP）库，专为解决参数化约束优化问题、物理信息系统的辨识以及基于模型的参数化最优控制而设计。它支持将机器学习与科学计算系统性地结合，构建嵌入先验知识和物理规律的端到端可微模型。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python**: 建议 Python 3.8 或更高版本\n*   **核心依赖**: \n    *   [PyTorch](https:\u002F\u002Fpytorch.org\u002F) (NeuroMANCER 基于 PyTorch 构建)\n    *   `pip` 包管理工具\n*   **推荐**: 使用虚拟环境（如 `venv` 或 `conda`）以避免依赖冲突。\n\n> **注意**：本库未提供特定的国内镜像源，安装时可使用清华或阿里镜像加速 PyTorch 及相关依赖的下载。\n\n## 安装步骤\n\n### 方法一：使用 pip 直接安装（推荐）\n\n这是最快捷的安装方式，会自动处理大部分依赖。\n\n```bash\npip install neuromancer\n```\n\n*若需加速下载，可添加国内镜像源：*\n```bash\npip install neuromancer -i 
https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方法二：手动安装\n\n如果您需要自定义安装或开发贡献，请参考项目根目录下的 `INSTALLATION.md` 文件进行手动配置。\n\n## 基本使用\n\nNeuroMANCER 的核心优势在于其**符号编程接口**，允许用户轻松定义物理方程、领域知识和约束条件。以下是一个解决参数化约束优化问题的最小化示例（基于经典的 Rosenbrock 函数变体）：\n\n### 代码示例\n\n```python \n# Neuromancer syntax example for constrained optimization\nimport neuromancer as nm\nimport torch \n\n# 1. 定义神经网络架构 (MLP)\nfunc = nm.modules.blocks.MLP(insize=1, outsize=2, \n                             linear_map=nm.slim.maps['linear'], \n                             nonlin=torch.nn.ReLU, hsizes=[80] * 4)\n\n# 2. 通过 Node 类将神经网络包装为符号表示：map(p) -> x\n# 输入变量为 'p'，输出变量为 'x'\nmap = nm.system.Node(func, ['p'], ['x'], name='map')\n    \n# 3. 定义决策变量\nx = nm.constraint.variable(\"x\")[:, [0]]\ny = nm.constraint.variable(\"x\")[:, [1]]\n\n# 4. 定义问题参数 (通常在数据集中采样)\np = nm.constraint.variable('p')\n\n# 5. 定义目标函数 (最小化 f)\nf = (1-x)**2 + (y-x**2)**2\nobj = f.minimize(weight=1.0)\n\n# 6. 定义约束条件\ncon_1 = 100.*(x >= y)\ncon_2 = 100.*(x**2+y**2 \u003C= p**2)\n\n# 7. 创建基于惩罚方法的损失函数\nloss = nm.loss.PenaltyLoss(objectives=[obj], constraints=[con_1, con_2])\n\n# 8. 
构建可微分的约束优化问题\nproblem = nm.problem.Problem(nodes=[map], loss=loss)\n```\n\n### 下一步学习\n\n完成上述基础示例后，建议您访问官方的 [Google Colab 教程](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Ftree\u002Fmaster\u002Fexamples) 进行深入实践，涵盖以下内容：\n*   **基础语法**: 变量、节点、系统与约束的定义。\n*   **领域应用**: 建筑热动力学建模 (Neural ODEs)、能源负载预测、电力系统控制等。\n*   **高级方法**: 学习优化 (L2O)、学习控制 (L2C) 及微分预测控制 (DPC)。","某能源科技公司的算法团队正在开发一套工业微电网的实时调度系统，需要在满足复杂物理约束的前提下，动态优化电池充放电策略以应对波动的可再生能源输入。\n\n### 没有 neuromancer 时\n- **物理与优化割裂**：团队需分别构建物理仿真模型和优化求解器，两者数据交互繁琐，难以保证控制策略严格符合电路动力学方程。\n- **约束处理僵硬**：面对电池电压、电流等硬性安全约束，传统方法常采用惩罚函数法，导致求解速度慢且容易在边界处出现违规震荡。\n- **迭代开发低效**：每次调整电网拓扑或设备参数，都需要重新推导数学公式并修改底层求解代码，无法实现端到端的快速验证。\n- **实时性不足**：传统数值优化算法计算耗时过长，难以满足毫秒级的电网频率调节需求，往往只能依赖保守的查表法。\n\n### 使用 neuromancer 后\n- **端到端可微分建模**：利用 neuromancer 的符号编程接口，直接将基尔霍夫定律等物理方程嵌入神经网络，实现了“学习即优化”，确保输出天然符合物理规律。\n- **自适应约束保障**：通过内置的可微凸优化层，将安全约束作为网络结构的一部分，既保证了绝对安全，又避免了传统惩罚项带来的收敛难题。\n- **参数化敏捷开发**：借助其参数化建模能力，团队只需更改配置即可适应不同的微电网架构，无需重写核心算法，大幅缩短了从实验室到部署的周期。\n- **极速推理决策**：训练好的神经预测控制（DPC）模型推理速度比传统求解器快几个数量级，成功实现了复杂的混合整数决策在边缘设备上的实时运行。\n\nneuromancer 通过将领域知识与深度学习无缝融合，让复杂的受控物理系统优化变得像训练普通神经网络一样简单高效。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpnnl_neuromancer_9286cc5b.png","pnnl","Pacific Northwest National Laboratory (Public)","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fpnnl_4508d3f4.jpg","This Org is intended for the hosting of approved released PNNL software repositories for public use and collaboration.",null,"http:\u002F\u002Fwww.pnnl.gov\u002F","https:\u002F\u002Fgithub.com\u002Fpnnl",[80,84,88],{"name":81,"color":82,"percentage":83},"Python","#3572A5",78.9,{"name":85,"color":86,"percentage":87},"Cuda","#3A4E3A",13.5,{"name":89,"color":90,"percentage":91},"C++","#f34b7d",7.6,1308,172,"2026-04-04T19:52:53","NOASSERTION","未说明",{"notes":98,"python":96,"dependencies":99},"该工具是一个基于 PyTorch 的可微分编程库，主要用于解决参数化约束优化、物理信息系统识别和控制问题。安装命令为 `pip install neuromancer`。README 中提到修复了 mlflow 依赖在 Google Colab 中产生冲突的问题。文档提供了大量基于 Google Colab 的教程示例，表明其兼容云端 GPU 
环境，但具体的本地硬件配置（如 CUDA 版本、显存大小）在提供的文本中未明确列出。",[100,101],"torch","pytorch-lightning",[14],[104,105,106,107,108,109,110,111,112,113,114],"constrained-optimization","control-systems","deep-learning","differentiable-programming","dynamical-systems","pytorch","differentiable-optimization","nonlinear-dynamics","nonlinear-optimization","differentiable-control","physics-informed-ml","2026-03-27T02:49:30.150509","2026-04-08T10:07:02.411548",[118,123,128,133],{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},24150,"在 Windows 上创建 Conda 环境时遇到 'torch-scatter' 安装错误或 'shm.dll' 加载失败怎么办？","该问题通常由缺少 OpenMP 依赖或 torch-scatter 兼容性引起。解决方案如下：\n1. 运行命令手动安装 Intel OpenMP：`conda install -c defaults intel-openmp -f`\n2. 项目已发布 V1.3 版本，更新了 `windows_env.yml` 文件并移除了 `torch-scatter` 作为直接依赖，建议升级版本或使用最新的配置文件重新创建环境。","https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fissues\u002F6",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},24151,"如何在 M1 Mac (Apple Silicon) 上成功创建 Conda 环境？","M1 Mac 用户常因 `env.yml` 文件中包含硬编码的 Linux 路径前缀（如 `prefix: \u002Fhome\u002F...`）或特定的 x86_64 包版本导致环境创建失败。\n解决方法：\n1. 编辑 `env.yml` 文件，删除以 `prefix:` 开头的行。\n2. 确保使用的是适配 arm64 架构的通道（如 conda-forge）。\n3. 如果特定包版本（如 libgd, protobuf）找不到，尝试移除版本号让 Conda 自动解析适合 M1 的版本，或直接使用项目最新更新的适用于 macOS 的配置文件。","https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fissues\u002F8",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},24152,"如何复现论文《Differentiable Predictive Control...》中基于 Neural Block SSM 的 DPC 示例？","目前官方提供的示例主要基于线性状态空间模型（Linear SSM），尚未直接开源该论文中使用的嵌入式硬件自定义代码和完整的 Neural Block SSM 实现。\n但您可以参考以下资源：\n1. 相关数据集和旧版代码（不推荐用于新开发，仅供参考实现细节）：\n   - 数据：https:\u002F\u002Fgithub.com\u002Fpnnl\u002FFlexyAirDeepMPC\u002Ftree\u002Fmaster\u002Fpsl\u002Fpsl\u002Fdatasets\u002FFlexy_air\n   - 代码：https:\u002F\u002Fgithub.com\u002Fpnnl\u002FFlexyAirDeepMPC\u002Ftree\u002Fmaster\u002Fneuromancer\u002Fneuromancer\u002Ftrain_scripts\n2. 
最接近的现有示例（线性模型）：\n   - `examples\u002Fcontrol\u002Fdouble_integrator_DPC_ol_fixed_ref.py`\n   - `examples\u002Fcontrol\u002Fvtol_DPC_ol_fixed_ref.py`\n团队正在扩展控制示例库，未来将包含神经状态空间模型和参考跟踪策略。","https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fissues\u002F9",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},24153,"调用 problem.show() 绘制问题图时出现 'dot' 语法错误或 AssertionError 怎么办？","此错误通常是因为系统未安装 Graphviz 或 pydot 无法正确解析节点标签中的特殊字符。\n解决步骤：\n1. 在 Conda 环境中安装 Graphviz：`conda install graphviz`\n2. 确保系统环境变量中包含 graphviz 的可执行路径。\n3. 如果问题依旧，可能是节点名称中包含数字或特殊符号导致 pydot 解析失败，尝试简化模型中变量或节点的命名（避免使用方括号等特殊字符）。","https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fissues\u002F37",[139,144,149,154,159,164,169,174],{"id":140,"version":141,"summary_zh":142,"released_at":143},145742,"v1.5.6","### 版本 1.5.5 发行说明\n+ 新功能：带预览时域（preview horizon）的 DPC，使用新类 SystemPreview，该类可作为 System 类的直接替代品\n+ 新示例：基于算子分裂法的神经 DAEs\n+ 新示例：用于热力系统的混合整数 DPC\n+ 新示例：用于建筑能源系统的电网响应型 DPC\n\n## 变更内容\n* 修复：由 Srindot 在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F239 中更新了 linux_env.yml 文件中过时的 neuromancer 依赖项\n* 电网响应功能：由 EladMichael 在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F241 中实现\n* 混合整数 DPC 和 SystemPreview 类：由 xboldocky 在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F249 中实现\n* 更新 README.md：由 drgona 分别在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F251 和 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F253 中完成\n\n## 新贡献者\n* Srindot 在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F239 中完成了首次贡献\n* EladMichael 在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F241 中完成了首次贡献\n* xboldocky 在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F249 
中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fcompare\u002Fv1.5.5...v1.5.6","2025-09-26T14:37:48",{"id":145,"version":146,"summary_zh":147,"released_at":148},145743,"v1.5.5","### 版本 1.5.5 发行说明\n+ 新功能：带预览时域（preview horizon）的 DPC，使用新类 SystemPreview，可直接替代 System 类\n+ 新示例：基于算子分裂法的神经 DAEs\n+ 新示例：用于热力系统的混合整数 DPC\n+ 新示例：用于建筑能源系统的电网响应型 DPC\n\n## 变更内容\n* 由 @drgona 在 https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fpull\u002F254 中开发\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fneuromancer\u002Fcompare\u002Fv1.5.4...v1.5.5","2025-09-26T13:32:03",{"id":150,"version":151,"summary_zh":152,"released_at":153},145744,"v1.5.4","### 版本 1.5.4 发行说明\n+ 新功能：用于神经 ODE 中函数逼近和零样本泛化的函数编码器\n+ 错误修复：修复了多个示例中的绘图问题\n+ 错误修复：修复了 L2O 示例中约束违反和目标值的计算问题\n+ 错误修复：修复了 psl 子模块中的 CSTR 动力学模型\n\n本次发布得到了约翰霍普金斯大学拉尔夫·奥康纳可持续能源研究所的支持。","2025-07-02T15:43:09",{"id":155,"version":156,"summary_zh":157,"released_at":158},145745,"v1.5.3","版本 1.5.3 发行说明\n\n新功能：用于生成基于 LLM\u002FRAG 系统离线配置文档的脚本  \n新功能：深入的教程笔记本，对比 RL 与 DPC 在控制系统构建中的应用  \n新功能：库现已支持 Python 3.11  \n新功能：更新了 Node 类，其构造函数现在可以接受已实例化的 Variable 对象  \n\n本研究部分得到了美国能源部能源效率与可再生能源办公室下属“通过以物理为中心的自主深度学习及建筑运营优化实现动态脱碳”和“通过经济高效的可微预测控制推进市场就绪的建筑能源管理”两个项目的支持。此外，本项目还获得了美国能源部先进科学计算研究计划下“多保真度算子学习中的不确定性量化（MOLUcQ）”项目的资助（项目编号：81739）。  \n\nPNNL 是一家多学科国家实验室，由巴特尔纪念研究所根据合同编号 DE-AC05-76RL0-1830 代表美国能源部（DOE）运营。  \n\n本工作的一部分得到了约翰斯·霍普金斯大学土木与系统工程系、拉尔夫·S·奥康纳可持续能源研究所（ROSEI）以及 [Ján Drgoňa](https:\u002F\u002Fdrgona.github.io\u002F) 研究组的支持。","2025-02-26T20:41:13",{"id":160,"version":161,"summary_zh":162,"released_at":163},145746,"v1.5.2","**版本 1.5.2 发行说明**\n\n- 新功能：用于实现最先进函数逼近的多保真度科尔莫戈洛夫-阿诺德网络（KAN）\n- 新功能：建筑能源系统负荷预测教程\n- 新功能：Transformer 块\n\n本研究部分得到了美国能源部能源效率与可再生能源局建筑技术办公室“通过以物理为中心的自主深度学习和建筑运营优化实现动态脱碳”以及“通过经济高效的可微分预测控制推进市场就绪的建筑能源管理”两个项目的支持。此外，本项目还获得了美国能源部高级科学计算研究计划下“多保真度算子学习中的不确定性量化（MOLUcQ）”项目的资助（项目编号：81739）。\n\nPNNL 是一家多学科国家实验室，由巴特尔纪念研究所根据合同编号 DE-AC05-76RL0-1830 
代表美国能源部（DOE）运营。","2024-11-07T15:16:01",{"id":165,"version":166,"summary_zh":167,"released_at":168},145747,"v1.5.1","### 版本 1.5.1 发行说明\n+ 增强功能：现已支持将所有 Lightning 钩子集成到 Neuromancer Lightning 训练器中。更多信息请参阅 Lightning 示例的 README 文件。\n+ 暂时弃用通过 `LitTrainer` 进行 WandB 超参数调优的功能。\n+ 新特性：TorchSDE 与 Neuromancer 核心库集成，即 `torchsde.sdeint()`。有关随机过程系统辨识的示例可在 examples\u002Fsdes\u002Fsde_walkthrough.ipynb 中找到。\n+ 新特性：堆叠式物理信息神经网络。\n+ 新特性：SINDy——非线性动力学系统的稀疏系统辨识方法。","2024-07-08T15:53:41",{"id":170,"version":171,"summary_zh":172,"released_at":173},145748,"v1.5.0","### 版本 1.5.0 更新日志\n新特性：NeuroMANCER 核心库与 PyTorch Lightning 集成。所有这些功能均为可选启用。\n- 代码简化：零模板代码，模块化程度更高\n- 增加了用户自定义训练逻辑的功能\n- 轻松支持 GPU 及多 GPU 训练\n- 简单易用的 Weights & Biases（https:\u002F\u002Fwandb.ai\u002Fsite）超参数调优和 TensorBoard 日志记录","2024-04-10T23:16:10",{"id":175,"version":176,"summary_zh":177,"released_at":178},145749,"v1.4.2","### 版本 1.4.2 发行说明\n+ 新功能：更新投影梯度的违反能量 #110（基于想法 #86）。\n+ 为提高数值稳定性，将 `psl.nonautonomous.TwoTank` 的 `(umin, umax)` 约束恢复为 `(0.5, 0.5)` #105\n+ 为 `problem.py` 和 `system.py` 添加了新的单元测试 #107\n+ 实现了从 `master` 到 `gh-pages` 的文档自动构建 #107\n+ 修复了位置参数错误，并在 `file_emulator.py` 中增加了对 Time 数据的支持 #119\n+ 修复了 `System` 类中的一个 bug，该 bug 导致计算图的可视化不正确。\n+ 对示例进行了小幅更新。","2023-11-07T18:38:40"]