# Introduction

This is the Army Research Laboratory (ARL) EEGModels project: a collection of Convolutional Neural Network (CNN) models for EEG signal processing and classification, written in Keras and TensorFlow. The aims of this project are to:

- provide a set of well-validated CNN models for EEG signal processing and classification,
- facilitate reproducible research, and
- enable other researchers to use and compare these models on their own data as easily as possible.

# Requirements

- Python == 3.7 or 3.8
- tensorflow == 2.X (verified working with 2.0 - 2.3, both for CPU and GPU)

To run the EEG/MEG ERP classification sample script, you will also need:

- mne >= 0.17.1
- PyRiemann >= 0.2.5
- scikit-learn >= 0.20.1
- matplotlib >= 2.2.3

# Models Implemented

- EEGNet [[1]](http://stacks.iop.org/1741-2552/15/i=5/a=056013). Both the original model and the revised model are implemented.
- EEGNet variant used for classification of Steady-State Visual Evoked Potential (SSVEP) signals [[2]](http://iopscience.iop.org/article/10.1088/1741-2552/aae5d8)
- DeepConvNet [[3]](https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730)
- ShallowConvNet [[3]](https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730)

# Usage

To use this package, add the contents of this folder to your PYTHONPATH environment variable.
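As an alternative to editing PYTHONPATH (not covered by the README; this is a standard-library sketch with a placeholder path), the clone's location can be appended to `sys.path` from inside a script before importing:

```python
import sys

# Placeholder: replace with the actual location of your arl-eegmodels clone
repo_path = "/path/to/arl-eegmodels"

# Appending to sys.path has the same effect as PYTHONPATH, for this process only
if repo_path not in sys.path:
    sys.path.append(repo_path)

# From here on, `from EEGModels import EEGNet` resolves against repo_path
```

Note that this only affects the current interpreter session; setting PYTHONPATH makes the package visible to every script.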
Then you can simply import any model and configure it as:

```python
from EEGModels import EEGNet, ShallowConvNet, DeepConvNet

model  = EEGNet(nb_classes = ..., Chans = ..., Samples = ...)
model2 = ShallowConvNet(nb_classes = ..., Chans = ..., Samples = ...)
model3 = DeepConvNet(nb_classes = ..., Chans = ..., Samples = ...)
```

Compile the model with the associated loss function and optimizer (in our case, categorical cross-entropy and the Adam optimizer, respectively). Then fit the model and predict on new test data:

```python
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')
fittedModel = model.fit(...)
predicted   = model.predict(...)
```
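Before `fit` and `predict` can run, the data must be in the layout these models expect: a channels_first array of shape `(trials, kernels, Chans, Samples)` and one-hot encoded labels for the categorical cross-entropy loss. Below is a minimal numpy-only sketch; the dimensions and random data are invented for illustration, not a real EEG recording:

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration
trials, kernels, chans, samples = 100, 1, 64, 128
nb_classes = 2

# Synthetic EEG trials in channels_first layout: (trials, kernels, channels, samples)
X_train = np.random.randn(trials, kernels, chans, samples).astype("float32")

# Random integer class labels in [0, nb_classes)
labels = np.random.randint(0, nb_classes, size=trials)

# One-hot encode the labels for use with categorical cross-entropy
Y_train = np.eye(nb_classes)[labels]

print(X_train.shape)  # (100, 1, 64, 128)
print(Y_train.shape)  # (100, 2)
```

With arrays in this shape, `model.fit(X_train, Y_train)` matches the compile/fit calls above.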
# EEGNet Feature Explainability

Note: please see https://github.com/vlawhern/arl-eegmodels/issues/29 for additional steps needed to get this to work with TensorFlow 2.

To reproduce the EEGNet single-trial feature relevance results as reported in [[1]](http://stacks.iop.org/1741-2552/15/i=5/a=056013), download and install [DeepExplain](https://github.com/marcoancona/DeepExplain), which implements a variety of relevance attribution methods (both gradient-based and perturbation-based). A sketch of how to use it is given below:

```python
from EEGModels import EEGNet
from tensorflow.keras.models import Model
from deepexplain.tensorflow import DeepExplain
from tensorflow.keras import backend as K

# configure, compile and fit the model
model       = EEGNet(nb_classes = ..., Chans = ..., Samples = ...)
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')
fittedModel = model.fit(...)

# Use DeepExplain to get individual-trial feature relevances for some test data
# (X_test, Y_test), where Y_test and X_test are the one-hot encodings of the
# class labels and the data, respectively.
# Note that model.layers[-2] points to the dense layer prior to the softmax
# activation. We use the DeepLIFT method in the paper, although other options,
# including epsilon-LRP, are available. This works with all implemented models.
with DeepExplain(session = K.get_session()) as de:
    input_tensor  = model.layers[0].input
    fModel        = Model(inputs = input_tensor, outputs = model.layers[-2].output)
    target_tensor = fModel(input_tensor)

    # epsilon-LRP can be used as well if you like:
    attributions  = de.explain('deeplift', target_tensor * Y_test, input_tensor, X_test)
    # attributions = de.explain('elrp', target_tensor * Y_test, input_tensor, X_test)
```

# Paper Citation

If you use the EEGNet model in your research and found it helpful, please cite the following paper:

```
@article{Lawhern2018,
  author={Vernon J Lawhern and Amelia J Solon and Nicholas R Waytowich and Stephen M Gordon and Chou P Hung and Brent J Lance},
  title={EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces},
  journal={Journal of Neural Engineering},
  volume={15},
  number={5},
  pages={056013},
  url={http://stacks.iop.org/1741-2552/15/i=5/a=056013},
  year={2018}
}
```

If you use the SSVEP variant of the EEGNet model in your research and found it helpful, please cite the following paper:

```
@article{Waytowich2018,
  author={Nicholas Waytowich and Vernon J Lawhern and Javier O Garcia and Jennifer Cummings and Josef Faller and Paul Sajda and Jean M Vettel},
  title={Compact convolutional neural networks for classification of asynchronous steady-state visual evoked potentials},
  journal={Journal of Neural Engineering},
  volume={15},
  number={6},
  pages={066031},
  url={http://stacks.iop.org/1741-2552/15/i=6/a=066031},
  year={2018}
}
```

Similarly, if you use the ShallowConvNet or DeepConvNet models and found them helpful, please cite the following paper:

```
@article{hbm23730,
  author = {Schirrmeister Robin Tibor and Springenberg Jost Tobias and Fiederer Lukas Dominique Josef and Glasstetter Martin and Eggensperger Katharina and Tangermann Michael and Hutter Frank and Burgard Wolfram and Ball Tonio},
  title = {Deep learning with convolutional neural networks for EEG decoding and visualization},
  journal = {Human Brain Mapping},
  volume = {38},
  number = {11},
  pages = {5391-5420},
  keywords = {electroencephalography, EEG analysis, machine learning, end-to-end learning, brain-machine interface, brain-computer interface, model interpretability, brain mapping},
  doi = {10.1002/hbm.23730},
  url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.23730}
}
```

# Legal Disclaimer

This project is governed by the terms of the Creative Commons Zero 1.0 Universal (CC0 1.0) Public Domain Dedication (the Agreement). You should have received a copy of the Agreement with a copy of this software. If not, see https://github.com/USArmyResearchLab/ARLDCCSO. Your use or distribution of ARL EEGModels, in either source or binary form, in whole or in part, implies your agreement to abide by the terms set forth in the Agreement in full.

Other portions of this project are subject to domestic copyright protection under 17 USC Sec. 105. Those portions are licensed under the Apache 2.0 license; the complete text of the license governing this material is in the file labeled LICENSE.TXT in this project's official distribution.

arl-eegmodels is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

You may find the full license in the file LICENSE in this directory.

# Contributions

For legal reasons, every contributor must have a signed Contributor License Agreement on file. The ARL Contributor License Agreement (ARL Form 266) can be found [here](https://github.com/USArmyResearchLab/ARL-Open-Source-Guidance-and-Instructions/blob/master/ARL%20Form%20-%20266.pdf).

Each external contributor must execute and return a copy for each project that he or she intends to contribute to. Once ARL receives the executed form, it remains in force permanently; external contributors therefore need only execute the form once per project they plan to contribute to.
# arl-eegmodels Quickstart Guide

arl-eegmodels is a collection of convolutional neural network (CNN) models for EEG signal processing and classification, open-sourced by the Army Research Laboratory (ARL) and built on Keras and TensorFlow. The project provides validated implementations of classic models such as EEGNet, ShallowConvNet, and DeepConvNet, aiming to simplify reproducible research and comparison experiments.

## Environment

Before starting, make sure your environment meets the following requirements:

*   **Python**: 3.7 or 3.8
*   **Core framework**: TensorFlow 2.X (verified with 2.0 - 2.3, for both CPU and GPU)
*   **Optional dependencies** (needed to run the EEG/MEG ERP classification sample script):
    *   `mne` >= 0.17.1
    *   `PyRiemann` >= 0.2.5
    *   `scikit-learn` >= 0.20.1
    *   `matplotlib` >= 2.2.3

## Installation

The project is not installed via pip; you only need to add the repository to your Python path.

1.  **Clone the repository**
    ```bash
    git clone https://github.com/vlawhern/arl-eegmodels.git
    ```

2.  **Install the Python dependencies**
    ```bash
    cd arl-eegmodels
    pip install tensorflow==2.3 mne PyRiemann scikit-learn matplotlib
    ```
    *(Adjust `tensorflow==2.3` to the TensorFlow version you actually need.)*

3.  **Configure the environment variable**
    Add the project folder to your `PYTHONPATH` so the modules can be imported directly.

    *   **Linux / macOS**:
        ```bash
        export PYTHONPATH="${PYTHONPATH}:/path/to/arl-eegmodels"
        ```
        *(Replace `/path/to/arl-eegmodels` with the actual absolute path.)*

    *   **Windows (PowerShell)**:
        ```powershell
        $env:PYTHONPATH = "$env:PYTHONPATH;C:\path\to\arl-eegmodels"
        ```
        *(Replace `C:\path\to\arl-eegmodels` with the actual absolute path.)*

## Basic Usage

Once configured, you can import the models in a Python script and train them directly. A minimal example:

### 1. Import and initialize a model

`EEGNet`, `ShallowConvNet`, and `DeepConvNet` are supported. Specify the number of classes (`nb_classes`), the number of channels (`Chans`), and the number of time samples (`Samples`).

```python
from EEGModels import EEGNet, ShallowConvNet, DeepConvNet

# Initialize an EEGNet model
model  = EEGNet(nb_classes = 2, Chans = 64, Samples = 128)

# Initialize a ShallowConvNet model
model2 = ShallowConvNet(nb_classes = 2, Chans = 64, Samples = 128)

# Initialize a DeepConvNet model
model3 = DeepConvNet(nb_classes = 2, Chans = 64, Samples = 128)
```

### 2. Compile, train, and predict

Compile the model with categorical cross-entropy loss and the Adam optimizer, then fit it and predict.

```python
# Compile the model
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])

# Train the model (replace X_train, Y_train with your actual data)
fittedModel = model.fit(X_train, Y_train, batch_size = 32, epochs = 50, validation_split = 0.2)

# Predict on new test data
predicted = model.predict(X_test)
```

### 3. Feature explainability (optional)

To reproduce the single-trial feature relevance analysis from the paper (e.g. with the DeepLIFT method), additionally install [DeepExplain](https://github.com/marcoancona/DeepExplain). Note: under TensorFlow 2, extra configuration may be required; see issue #29 of the project.

```python
from EEGModels import EEGNet
from tensorflow.keras.models import Model
from deepexplain.tensorflow import DeepExplain
from tensorflow.keras import backend as K

# Assume the model has been configured, compiled, and fitted
model = EEGNet(nb_classes = 2, Chans = 64, Samples = 128)
# ... compile and fit ...

# Use DeepExplain to obtain feature attributions
with DeepExplain(session = K.get_session()) as de:
    input_tensor  = model.layers[0].input
    # Points to the dense layer before the softmax activation
    fModel        = Model(inputs = input_tensor, outputs = model.layers[-2].output)
    target_tensor = fModel(input_tensor)

    # X_test: test data; Y_test: one-hot encoded labels
    attributions = de.explain('deeplift', target_tensor * Y_test, input_tensor, X_test)
```

## Use Case

A neural engineering lab is building an EEG-based real-time driver-fatigue monitoring system that must identify a driver's attention state from raw EEG signals.

### Without arl-eegmodels

- **Slow model development**: researchers write complex CNN code from scratch to fit the spatio-temporal structure of EEG signals, which is error-prone and reinvents the wheel.
- **Hard-to-reproduce baselines**: without a standard reference implementation of EEGNet or DeepConvNet, it is difficult to guarantee that parameter settings match the papers, so comparison results are unreliable.
- **No explainability**: only classification results are available; it is hard to explain which brain-region features drove a decision, which blocks trust for clinical or real-world deployment.
- **Tedious environment setup**: version conflicts among Keras, TensorFlow, and MNE waste time on debugging rather than on algorithms.

### With arl-eegmodels

- **Plug-and-play development**: mature models such as EEGNet and ShallowConvNet are a Python import away; configure the channel and sample counts and start training immediately.
- **Rigorous baselines**: reusing model architectures validated by ARL keeps experimental settings consistent with the published papers and markedly improves reproducibility.
- **Clear feature attribution**: combined with DeepExplain, single-trial feature relevance maps highlight the key brain regions and frequency bands, making the "black box" transparent.
- **Smooth ecosystem fit**: native TensorFlow 2.x support and compatibility with mainstream EEG processing libraries lower the setup barrier, letting the team focus on data strategy and application logic.

arl-eegmodels shortens a model development and validation cycle from weeks to days, letting researchers focus on brain-computer interface innovation rather than low-level implementation.
# FAQ

**Training exits after only one epoch. How do I fix this?**

The maintainer has fixed this issue; pull the latest commit and re-run the sample script. Note: if you use a checkpoint callback, you may need to adjust the `checkpointPaths` parameter to match your paths. The fix was tested under both tensorflow-cpu and tensorflow-gpu. ([issue #22](https://github.com/vlawhern/arl-eegmodels/issues/22))

**What should I do about a 'Negative dimension size' error (DepthwiseConv2dNative)?**

This is usually related to the TensorFlow backend configuration. Try the following:

1. Make sure the image data format is set to channels_first at the top of your code:
   ```python
   from tensorflow.keras import backend as K
   K.set_image_data_format('channels_first')
   ```
2. On AMD processors, or if compatibility problems persist, uninstall all existing tensorflow packages and install `intel-tensorflow`:
   ```bash
   pip uninstall tensorflow tensorflow-gpu
   pip install intel-tensorflow
   ```
3. After changing packages, you may need to restart your Python interpreter or IDE.

([issue #13](https://github.com/vlawhern/arl-eegmodels/issues/13))

**How do I get sample code or data for the BCI Competition IV 2a dataset?**

For copyright reasons (the data belongs to the Berlin BCI group), the project no longer hosts analysis code specific to that dataset. The maintainer suggests using MNE-Python's automatic download utilities to fetch other datasets (such as a 4-class ERP EEG classification dataset) as examples. If you use the BCI IV 2a data, cite the original source: Tangermann, Michael, et al. "Review of the BCI competition IV." Frontiers in Neuroscience 6 (2012): 55. The MNE-based sample script in the project shows how to preprocess the data and train a model. ([issue #7](https://github.com/vlawhern/arl-eegmodels/issues/7))

**Does re-running training against the same checkpoint file improve performance?**

No; you cannot "accumulate" performance by reusing one checkpoint file. The correct multi-run procedure is:

1. Initialize the model with random weights: `model = EEGNet(...)`
2. Train and save the best weights with a callback: `model.fit(..., callbacks=[checkpointer])`
3. Load that run's best weights for testing: `model.load_weights(...)`

Each run produces its own checkpoint file, representing the best result for that random initialization. For the next experiment, re-initialize the model (step 1) so training starts from fresh random weights rather than from the old checkpoint. ([issue #23](https://github.com/vlawhern/arl-eegmodels/issues/23))

**Are AutoGraph warnings ("Entity ... could not be transformed") during training normal?**

This is a common TensorFlow AutoGraph warning: some functions cannot be converted to graph mode and will run in eager mode instead. In most cases it does not affect the final training results or accuracy and can be ignored. If the warning coincides with a sudden large drop in validation accuracy (val_accuracy) or an interrupted run, check:

1. whether the data is separable,
2. whether the learning rate is too high, and
3. whether the model weights were correctly re-initialized for the new run.

If the problem persists and affects training, set `export AUTOGRAPH_VERBOSITY=10` for more detailed error messages. ([issue #23](https://github.com/vlawhern/arl-eegmodels/issues/23))

**How do I reshape input data of different dimensions for EEGNet?**

EEGNet expects inputs in the format `(trials, kernels, channels, samples)`, i.e. NCHW. Given raw data of shape `(trials, channels, samples)`, for example `(150, 22, 500)`, add a kernel dimension (usually 1) and reorder:

```python
kernels, chans, samples = 1, 22, 500
# Convert the data to (trials, kernels, channels, samples)
X_train = X_train.reshape(X_train.shape[0], kernels, chans, samples)
```

The resulting shape should be `(150, 1, 22, 500)`. Also be sure to set `K.set_image_data_format('channels_first')` so the backend handles this dimension order correctly. ([issue #13](https://github.com/vlawhern/arl-eegmodels/issues/13))
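The reshape described in the last answer can be checked with a short numpy sketch (synthetic random data; the trial, channel, and sample counts match the example in the answer):

```python
import numpy as np

# Synthetic raw data shaped (trials, channels, samples), as in the FAQ example
X_train = np.random.randn(150, 22, 500)

kernels, chans, samples = 1, 22, 500
# Insert the singleton kernel dimension: (trials, kernels, channels, samples)
X_train = X_train.reshape(X_train.shape[0], kernels, chans, samples)

print(X_train.shape)  # (150, 1, 22, 500)
```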