[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SuperBruceJia--EEG-DL":3,"tool-SuperBruceJia--EEG-DL":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",152630,2,"2026-04-12T23:33:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":10,"env_os":95,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":104,"github_topics":105,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":126,"updated_at":127,"faqs":128,"releases":163},7077,"SuperBruceJia\u002FEEG-DL","EEG-DL","A Deep Learning library for EEG Tasks (Signals) Classification, based on TensorFlow.","EEG-DL 是一个基于 TensorFlow 构建的深度学习库，专为脑电图（EEG）信号分类任务而设计。它致力于解决科研人员在处理复杂脑电数据时，从零搭建和复现前沿深度学习模型耗时费力的痛点，提供了一套标准化、开箱即用的解决方案。\n\n这款工具非常适合从事脑机接口研究的研究人员、神经科学领域的开发者以及希望快速验证算法效果的学生使用。用户无需深入纠结于底层代码实现，即可直接调用库中集成的多种先进网络架构进行实验。\n\nEEG-DL 的核心亮点在于其丰富的模型支持。它不仅涵盖了基础的全连接神经网络（DNN）和卷积神经网络（CNN），还集成了残差网络（ResNet）、密集连接网络（DenseNet）、全卷积网络（FCN）以及用于小样本学习的孪生网络等主流深度学习架构。这些模型均针对 EEG 信号特性进行了适配，并附带相关论文引用，方便用户追踪学术前沿。通过 
EEG-DL，用户可以更高效地探索不同算法在脑电解码任务中的表现，加速从理论构思到实验验证的研发进程。","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\"> \u003Cimg width=\"500px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_294a4eaf0f92.png\">\u003C\u002Fa> \n  \u003Cbr \u002F>\n  \u003Cbr \u002F>\n  \u003Ca href=\"https:\u002F\u002Fgitter.im\u002FEEG-DL\u002Fcommunity\">\u003Cimg alt=\"Chat on Gitter\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgitter\u002Froom\u002Fnwjs\u002Fnw.js.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.anaconda.com\u002F\">\u003Cimg alt=\"Python Version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.x-green.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.tensorflow.org\u002Finstall\">\u003Cimg alt=\"TensorFlow Version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorFlow-1.13.1-red.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FLICENSE\">\u003Cimg alt=\"MIT License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg\" \u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003C!-- \u003Cdiv align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\"> \u003Cimg width=\"500px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_294a4eaf0f92.png\">\u003C\u002Fa> \n\u003C\u002Fdiv> -->\n\n--------------------------------------------------------------------------------\n\n# Welcome to EEG Deep Learning Library\n\n**EEG-DL** is a Deep Learning (DL) library written in [TensorFlow](https:\u002F\u002Fwww.tensorflow.org) for EEG Tasks (Signals) Classification. It provides the latest DL algorithms and is kept up to date. 
\n\n\u003C!-- [![Gitter](https:\u002F\u002Fimg.shields.io\u002Fgitter\u002Froom\u002Fnwjs\u002Fnw.js.svg)](https:\u002F\u002Fgitter.im\u002FEEG-DL\u002Fcommunity)\n[![Python 3](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.x-green.svg)](https:\u002F\u002Fwww.anaconda.com\u002F)\n[![TensorFlow 1.13.1](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorFlow-1.13.1-red.svg)](https:\u002F\u002Fwww.tensorflow.org\u002Finstall)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg)](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FLICENSE) -->\n\n## Table of Contents\n\u003Cul>\n\u003Cli>\u003Ca href=\"#Documentation\">Documentation\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Usage-Demo\">Usage Demo\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Notice\">Notice\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Research-Ideas\">Research Ideas\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Common-Issues\">Common Issues\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Structure-of-the-Code\">Structure of the Code\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Citation\">Citation\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Other-Useful-Resources\">Other Useful Resources\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Contribution\">Contribution\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Organizations\">Organizations\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\n## Documentation\n**The supported models** include\n\n| No.   
| Model                                                  | Codes           |\n| :----:| :----:                                                 | :----:          |\n| 1     | Deep Neural Networks                                   | [DNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FDNN.py) |\n| 2     | Convolutional Neural Networks [[Paper]](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1741-2552\u002Fab4af6\u002Fmeta) [[Tutorial]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-Motor-Imagery-Classification-CNNs-TensorFlow)| [CNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FCNN.py) |\n| 3     | Deep Residual Convolutional Neural Networks [[Paper]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FHe_Deep_Residual_Learning_CVPR_2016_paper.pdf) | [ResNet](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FResCNN.py) |\n| 4     | Thin Residual Convolutional Neural Networks [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.10107) | [Thin ResNet](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FThin_ResNet.py) |\n| 5     | Densely Connected Convolutional Neural Networks [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) | [DenseNet](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FDenseCNN.py) |\n| 6     | Fully Convolutional Neural Networks [[Paper]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fpapers\u002FLong_Fully_Convolutional_Networks_2015_CVPR_paper.pdf) | [FCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FFully_Conv_CNN.py) |\n| 7     | One Shot 
Learning with Siamese Networks (CNNs Backbone) \u003Cbr> [[Paper]](https:\u002F\u002Fwww.cs.cmu.edu\u002F~rsalakhu\u002Fpapers\u002Foneshot1.pdf) [[Tutorial]](https:\u002F\u002Ftowardsdatascience.com\u002Fone-shot-learning-with-siamese-networks-using-keras-17f34e75bb3d) | [Siamese Networks](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FSiamese_Network.py) |\n| 8     | Graph Convolutional Neural Networks \u003Cbr> [[Paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9889159) [[Presentation]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FPresentation\u002FA_Summary_Three_Projects.pdf) | [GCN \u002F Graph CNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FGCN_Model.py) |\n| 9    | Deep Residual Graph Convolutional Neural Networks [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13484) | [ResGCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FResGCN_Model.py) | \n| 10    | Densely Connected Graph Convolutional Neural Networks  | [DenseGCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FDenseGCN_Model.py) |\n| 11    | Bayesian Convolutional Neural Network \u003Cbr> via Variational Inference [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02731) | [Bayesian CNNs](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-BayesianCNN) |\n| 12    | Recurrent Neural Networks [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [RNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FRNN.py) |\n| 13    | Attention-based Recurrent Neural Networks 
[[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [RNN with Attention](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FRNN_with_Attention.py) |\n| 14    | Bidirectional Recurrent Neural Networks [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiRNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiRNN.py) |\n| 15    | Attention-based Bidirectional Recurrent Neural Networks [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiRNN with Attention](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiRNN_with_Attention.py) |\n| 16    | Long-short Term Memory [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [LSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FLSTM.py) |\n| 17    | Attention-based Long-short Term Memory [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [LSTM with Attention](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FLSTM_with_Attention.py) |\n| 18    | Bidirectional Long-short Term Memory [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiLSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiLSTM.py) |\n| 19    | Attention-based Bidirectional Long-short Term Memory [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiLSTM with 
Attention](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiLSTM_with_Attention.py) |\n| 20    | Gated Recurrent Unit [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [GRU](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FGRU.py) |\n| 21    | Attention-based Gated Recurrent Unit [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [GRU with Attention](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FGRU_with_Attention.py) |\n| 22    | Bidirectional Gated Recurrent Unit [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiGRU](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiGRU.py) |\n| 23    | Attention-based Bidirectional Gated Recurrent Unit [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiGRU with Attention](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiGRU_with_Attention.py) |\n| 24    | Attention-based BiLSTM + GCN [[Paper]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [Attention-based BiLSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiLSTM_with_Attention.py) \u003Cbr> [GCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FGCN_Model.py) |\n| 25    | Transformer [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03762) [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11929) | 
[Transformer](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-Transformer.py) |\n| 26    | Transfer Learning with Transformer \u003Cbr> (**This code is only for reference!**) \u003Cbr> (**You can modify the codes to fit your data.**) | Stage 1: [Pre-training](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-pretrain_model.py) \u003Cbr> Stage 2: [Fine Tuning](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-finetuning_model.py) |\n\n**One EEG Motor Imagery (MI) benchmark** is currently supported. Other benchmarks in the field of EEG or BCI can be found [here](https:\u002F\u002Fgithub.com\u002Fmeagmohit\u002FEEG-Datasets).\n\n| No.     | Dataset                                                                          | Tutorial |\n| :----:  | :----:                                                                           | :----:   |\n| 1       | [EEG Motor Movement\u002FImagery Dataset](https:\u002F\u002Farchive.physionet.org\u002Fpn4\u002Feegmmidb\u002F) | [Tutorial](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-Motor-Imagery-Classification-CNNs-TensorFlow)|\n\n**The evaluation criteria** consist of\n\n| Evaluation Metrics \t\t\t\t\t                                    | Tutorial |\n| :----:                                                                    | :----:   |\n| Confusion Matrix | [Tutorial](https:\u002F\u002Ftowardsdatascience.com\u002Funderstanding-confusion-matrix-a9ad42dcfd62) |\n| Accuracy \u002F Precision \u002F Recall \u002F F1 Score \u002F Kappa Coefficient | [Tutorial](https:\u002F\u002Ftowardsdatascience.com\u002Funderstanding-confusion-matrix-a9ad42dcfd62) |\n| Receiver Operating Characteristic (ROC) Curve \u002F Area under the Curve (AUC)| - |\n| Paired t-test via the R language | 
[Tutorial](https:\u002F\u002Fwww.analyticsvidhya.com\u002Fblog\u002F2019\u002F05\u002Fstatistics-t-test-introduction-r-implementation\u002F) |\n\n*The evaluation metrics are mainly supported for **four-class classification**. If you wish to switch to two-class or three-class classification, please modify [this file](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FEvaluation_Metrics\u002FMetrics.py) to adapt it to the classes of your own dataset. Meanwhile, the details about the evaluation metrics can be found in [this paper](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1741-2552\u002Fab4af6\u002Fmeta).*\n\n## Usage Demo\n\n1. ***(Under Any Python Environment)*** Download the [EEG Motor Movement\u002FImagery Dataset](https:\u002F\u002Farchive.physionet.org\u002Fpn4\u002Feegmmidb\u002F) via [this script](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FDownload_Raw_EEG_Data\u002FMIND_Get_EDF.py).\n\n    ```text\n    $ python MIND_Get_EDF.py\n    ```\n\n2. ***(Under Python 2.7 Environment)*** Read the .edf files (one of the raw EEG signal formats) and save them into Matlab .m files via [this script](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FDownload_Raw_EEG_Data\u002FExtract-Raw-Data-Into-Matlab-Files.py). FYI, this script must be executed under the **Python 2 environment (Python 2.7 is recommended)** due to some Python 2 syntax. If you run the file under a Python 3 environment, it may finish without errors, but the labels of the EEG tasks will be completely scrambled.\n\n    ```text\n    $ python Extract-Raw-Data-Into-Matlab-Files.py\n    ```\n\n3. 
Preprocess the dataset via Matlab and save the data into Excel files (training_set, training_label, test_set, and test_label) via [these scripts](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Ftree\u002Fmaster\u002FPreprocess_EEG_Data) with regard to the different models. FYI, every line of the Excel file is a sample, and the columns can be regarded as features, e.g., 4096 columns mean 64 channels X 64 time points. Later, the models will reshape the 4096 columns into a matrix with the shape 64 channels X 64 time points. You can change the number of columns to fit your own needs, e.g., the real dimension of your own dataset.\n\n4. ***(Prerequisites)*** Train and test deep learning models **under the Python 3.6 Environment (Highly Recommended)** for EEG signals \u002F tasks classification via [the EEG-DL library](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Ftree\u002Fmaster\u002FModels), which provides multiple SOTA DL models.\n\n    ```text\n    Python Version: Python 3.6 (Recommended)\n    TensorFlow Version: TensorFlow 1.13.1\n    ```\n\n    Use the command below to install TensorFlow GPU Version 1.13.1:\n\n    ```text\n    $ pip install --upgrade --force-reinstall tensorflow-gpu==1.13.1 --user\n    ```\n\n5. Read the evaluation criteria (through iterations) via [Tensorboard](https:\u002F\u002Fwww.tensorflow.org\u002Ftensorboard). You can follow [this tutorial](https:\u002F\u002Fwww.guru99.com\u002Ftensorboard-tutorial.html). When you finish training the model, you will find the \"events.out.tfevents.***\" file in the folder, e.g., \"\u002FUsers\u002Fshuyuej\u002FDesktop\u002Ftrained_model\u002F\". You can use the following command in your terminal:\n\n    ```text\n    $ tensorboard --logdir=\"\u002FUsers\u002Fshuyuej\u002FDesktop\u002Ftrained_model\u002F\" --host=127.0.0.1\n    ```\n\n    You can then open the website in [Google Chrome](https:\u002F\u002Fwww.google.com\u002Fchrome\u002F) (Highly Recommended). 
\n    \n    ```html\n    http:\u002F\u002F127.0.0.1:6006\u002F\n    ```\n\n    Then you can read and save the criteria into Excel .csv files.\n\n6. Finally, draw publication-quality figures using Matlab or Python. Please follow [these scripts](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Ftree\u002Fmaster\u002FDraw_Photos).\n\n## Notice\n1. I have tested all the files (Python and Matlab) under macOS. Be advised that some Matlab functions differ between the Windows Operating System (OS) and macOS. For example, I used the \"readmatrix\" function to read CSV files on macOS, but had to use the \"csvread\" function on Windows because \"readmatrix\" was not available there. If you meet similar problems, I recommend searching for them on Google or Baidu. You can definitely work them out.\n\n2. For the GCNs-Net (GCN Model), the graph convolutional layer leaves the dimensionality of the graph unchanged, while the max-pooling layer reduces it by a factor of 2. That means, if you have an N X N graph Laplacian, after the max-pooling layer the dimension will be N\u002F2 X N\u002F2. If you have a 15-channel EEG system, you cannot use max-pooling unless you select a channel count that halves cleanly, e.g., 14 --> 7, 12 --> 6 --> 3, 10 --> 5, or 8 --> 4 --> 2 --> 1. The details can be reviewed in [this paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08924).\n\n3. The **Loss Function** can be changed or modified in [this file](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FLoss_Function\u002FLoss.py).\n\n4. The **Dataset Loader** can be changed or modified in [this file](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FDatasetAPI\u002FDataLoader.py).\n\n## Research Ideas\n1. 
Dynamic Graph Convolutional Neural Networks [[Paper Survey]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FEEG\u002FDynamic-GCN-Survey.pdf)\n\n2. Neural Architecture Search \u002F AutoML (Automatic Machine Learning) [[Tsinghua AutoGraph]](https:\u002F\u002Fgithub.com\u002FTHUMNLab\u002FAutoGL)\n\n3. Reinforcement Learning Algorithms (_e.g._, Deep Q-Learning) [[Tsinghua Tianshou]](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002Ftianshou) [[Doc for Chinese Readers]](https:\u002F\u002Ftianshou.readthedocs.io\u002Fzh\u002Flatest\u002Fdocs\u002Ftoc.html)\n\n4. Bayesian Convolutional Neural Networks [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02731) [[Thesis]](https:\u002F\u002Fgithub.com\u002Fkumar-shridhar\u002FMaster-Thesis-BayesianCNN\u002Fraw\u002Fmaster\u002Fthesis.pdf) [[Codes]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-BayesianCNN)\n\n5. Transformer \u002F Self-attention \u002F Non-local Modeling [[Transformer Codes]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-Transformer.py) [[Non-local Modeling PyTorch Codes]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FNLNet-IQA)\n\n\t[[Why Non-local Modeling?]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FNLNet-IQA) [[Paper]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FMMSP\u002FMMSP22_Paper.pdf) [[A Detailed Presentation]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FPresentation\u002FA_Summary_Three_Projects.pdf) [[Slides]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FMMSP\u002FMMSP22_Slides.pdf) [[Poster]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FMMSP\u002FMMSP22_Poster.pdf)\n\n## Common Issues\n1. **ValueError: Cannot feed value of shape (1024, 1) for Tensor 'input\u002Flabel:0', which has shape '(1024,)'**\n\n    To solve this issue, you have to squeeze the shape of the labels from (1024, 1) to (1024,) using np.squeeze. 
Please edit the [DataLoader.py file](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FDatasetAPI\u002FDataLoader.py).\n    From the original code:\n    ```python\n    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)\n    train_labels = np.array(train_labels).astype('float32')\n\n    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)\n    test_labels = np.array(test_labels).astype('float32')\n    ```\n    to\n    ```python\n    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)\n    train_labels = np.array(train_labels).astype('float32')\n    train_labels = np.squeeze(train_labels)\n\n    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)\n    test_labels = np.array(test_labels).astype('float32')\n    test_labels = np.squeeze(test_labels)\n    ```\n\n2. **InvalidArgumentError: Nan in summary histogram for training\u002Flogits\u002Fbias\u002Fgradients**\n    \n    To solve this issue, you have to comment out all the histogram summaries. 
Please edit the [GCN_Model.py file](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FGCN_Model.py).\n\n    ```python\n    # Comment out the tf.summary.histogram calls below in the GCN_Model.py file\n\n    # # Histograms.\n    # for grad, var in grads:\n    #     if grad is None:\n    #         print('warning: {} has no gradient'.format(var.op.name))\n    #     else:\n    #         tf.summary.histogram(var.op.name + '\u002Fgradients', grad)\n\n    def _weight_variable(self, shape, regularization=True):\n        initial = tf.truncated_normal_initializer(0, 0.1)\n        var = tf.get_variable('weights', shape, tf.float32, initializer=initial)\n        if regularization:\n            self.regularizers.append(tf.nn.l2_loss(var))\n        # tf.summary.histogram(var.op.name, var)\n        return var\n\n    def _bias_variable(self, shape, regularization=True):\n        initial = tf.constant_initializer(0.1)\n        var = tf.get_variable('bias', shape, tf.float32, initializer=initial)\n        if regularization:\n            self.regularizers.append(tf.nn.l2_loss(var))\n        # tf.summary.histogram(var.op.name, var)\n        return var\n    ```\n\n3. **TypeError: len() of unsized object**\n    \n    To solve this issue, you have to change the coarsening level to fit your own needs. Please edit the [main-GCN.py file](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-GCN.py). For example, if you want to apply the GCNs-Net to a 10-channel EEG system, you have to set \"levels\" equal to 1 or 0 because there is at most one max-pooling (10 --> 5). Change the \"levels\" argument from\n\n    ```python\n    # This is the coarsening level; you can change it to observe the difference\n    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=5, self_connections=False)\n    ```\n    to\n    ```python\n    # This is the coarsening level; you can change it to observe the difference\n    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=1, self_connections=False)\n    ```\n\n4. **tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 7 which is outside the valid range of [0, 7).  Label values: 5 2 3 3 1 5 5 4 7 4 2 2 1 7 5 6 3 4 2 4**\n    \n    To solve this issue, for the GCNs-Net, when you make your dataset, you have to start your labels from 0 rather than 1. For example, if you have seven classes, your labels should be 0 (First class), 1 (Second class), 2 (Third class), 3 (Fourth class), 4 (Fifth class), 5 (Sixth class), 6 (Seventh class) instead of 1, 2, 3, 4, 5, 6, 7.\n\n5. **IndexError: list index out of range**\n    \n    To solve this issue, first of all, please double-check your Python environment. **A Python 2.7 environment is required.** Besides, please install version ***0.1.11*** of ***pyEDFlib***. 
The installation instruction is as follows:\n    \n    ```python\n    $ pip install pyEDFlib==0.1.11\n    ```\n\n## Structure of the Code\n\nAt the root of the project, you will see:\n\n```text\n├── Download_Raw_EEG_Data\n│   ├── Extract-Raw-Data-Into-Matlab-Files.py\n│   ├── MIND_Get_EDF.py\n│   ├── README.md\n│   └── electrode_positions.txt\n├── Draw_Photos\n│   ├── Draw_Accuracy_Photo.m\n│   ├── Draw_Box_Photo.m\n│   ├── Draw_Confusion_Matrix.py\n│   ├── Draw_Loss_Photo.m\n│   ├── Draw_ROC_and_AUC.py\n│   └── figure_boxplot.m\n├── LICENSE\n├── Logo.png\n├── MANIFEST.in\n├── Models\n│   ├── DatasetAPI\n│   │   └── DataLoader.py\n│   ├── Evaluation_Metrics\n│   │   └── Metrics.py\n│   ├── Initialize_Variables\n│   │   └── Initialize.py\n│   ├── Loss_Function\n│   │   └── Loss.py\n│   ├── Network\n│   │   ├── BiGRU.py\n│   │   ├── BiGRU_with_Attention.py\n│   │   ├── BiLSTM.py\n│   │   ├── BiLSTM_with_Attention.py\n│   │   ├── BiRNN.py\n│   │   ├── BiRNN_with_Attention.py\n│   │   ├── CNN.py\n│   │   ├── DNN.py\n│   │   ├── DenseCNN.py\n│   │   ├── Fully_Conv_CNN.py\n│   │   ├── GRU.py\n│   │   ├── GRU_with_Attention.py\n│   │   ├── LSTM.py\n│   │   ├── LSTM_with_Attention.py\n│   │   ├── RNN.py\n│   │   ├── RNN_with_Attention.py\n│   │   ├── ResCNN.py\n│   │   ├── Siamese_Network.py\n│   │   ├── Thin_ResNet.py\n│   │   └── lib_for_GCN\n│   │       ├── DenseGCN_Model.py\n│   │       ├── GCN_Model.py\n│   │       ├── ResGCN_Model.py\n│   │       ├── coarsening.py\n│   │       └── graph.py\n│   ├── __init__.py\n│   ├── main-BiGRU-with-Attention.py\n│   ├── main-BiGRU.py\n│   ├── main-BiLSTM-with-Attention.py\n│   ├── main-BiLSTM.py\n│   ├── main-BiRNN-with-Attention.py\n│   ├── main-BiRNN.py\n│   ├── main-CNN.py\n│   ├── main-DNN.py\n│   ├── main-DenseCNN.py\n│   ├── main-DenseGCN.py\n│   ├── main-FullyConvCNN.py\n│   ├── main-GCN.py\n│   ├── main-GRU-with-Attention.py\n│   ├── main-GRU.py\n│   ├── main-LSTM-with-Attention.py\n│   ├── main-LSTM.py\n│   ├── 
main-RNN-with-Attention.py\n│   ├── main-RNN.py\n│   ├── main-ResCNN.py\n│   ├── main-ResGCN.py\n│   ├── main-Siamese-Network.py\n│   └── main-Thin-ResNet.py\n├── NEEPU.png\n├── Preprocess_EEG_Data\n│   ├── For-CNN-based-Models\n│   │   └── make_dataset.m\n│   ├── For-DNN-based-Models\n│   │   └── make_dataset.m\n│   ├── For-GCN-based-Models\n│   │   └── make_dataset.m\n│   ├── For-RNN-based-Models\n│   │   └── make_dataset.m\n│   └── For-Siamese-Network-One-Shot-Learning\n│       └── make_dataset.m\n├── README.md\n├── Saved_Files\n│   └── README.md\n├── requirements.txt\n└── setup.py\n```\n\n## Citation\n\nIf you find our library useful, please consider citing our papers in your publications.\nWe provide the BibTeX entries below.\n\n```bibtex\n@article{hou2022gcn,\n\ttitle   = {{GCNs-Net}: A Graph Convolutional Neural Network Approach for Decoding Time-Resolved EEG Motor Imagery Signals},\n\tauthor  = {Hou, Yimin and Jia, Shuyue and Lun, Xiangmin and Hao, Ziqian and Shi, Yan and Li, Yang and Zeng, Rui and Lv, Jinglei},\n\tjournal = {IEEE Transactions on Neural Networks and Learning Systems},\n\tpages   = {1-12},\n\tmonth   = sep,\n\tyear    = {2022},\n\tdoi     = {10.1109\u002FTNNLS.2022.3202569}\n}\n\n@article{hou2020novel,\n\ttitle     = {A Novel Approach of Decoding EEG Four-class Motor Imagery Tasks via Scout {ESI} and {CNN}},\n\tauthor    = {Hou, Yimin and Zhou, Lu and Jia, Shuyue and Lun, Xiangmin},\n\tjournal   = {Journal of Neural Engineering},\n\tvolume    = {17},\n\tnumber    = {1},\n\tpages     = {016048},\n\tmonth     = feb,\n\tyear      = {2020},\n\tpublisher = {IOP Publishing},\n\tdoi       = {10.1088\u002F1741-2552\u002Fab4af6}\n}\n\n@article{hou2022deep,\n\ttitle   = {Deep Feature Mining via the Attention-Based Bidirectional Long Short Term Memory Graph Convolutional Neural Network for Human Motor Imagery Recognition},\n\tauthor  = {Hou, Yimin and Jia, Shuyue and Lun, Xiangmin and Zhang, Shu and Chen, Tao and Wang, Fang and Lv, Jinglei},\n\tjournal = {Frontiers in Bioengineering and Biotechnology},\n\tvolume  = {9},\n\tmonth   = feb,\n\tyear    = {2022},\n\turl     = {https:\u002F\u002Fwww.frontiersin.org\u002Farticle\u002F10.3389\u002Ffbioe.2021.706229},\n\tdoi     = {10.3389\u002Ffbioe.2021.706229},\n\tISSN    = {2296-4185}\n}\n\n@article{Jia2020AttentionGCN,\n\ttitle   = {Attention-based Graph {ResNet} for Motor Intent Detection from Raw EEG signals},\n\tauthor  = {Jia, Shuyue and Hou, Yimin and Lun, Xiangmin and Lv, Jinglei},\n\tjournal = {arXiv preprint arXiv:2007.13484},\n\tyear    = {2020}\n}\n```\n\nOur papers can be downloaded from:\n1. 
[A Novel Approach of Decoding EEG Four-class Motor Imagery Tasks via Scout ESI and CNN](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1741-2552\u002Fab4af6\u002Fmeta)\u003Cbr>\n*Codes and Tutorials for this work can be found [here](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-Motor-Imagery-Classification-CNNs-TensorFlow).*\u003Cbr>\n\n**Overall Framework**:\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=\"100%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_dc3d8247a637.png\" alt=\"Project1\">\n    \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n**Proposed CNNs Architecture**:\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=\"60%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_f34b831331c8.png\" alt=\"Project1\">\n    \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n2. [GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9889159)\u003Cbr>\n***Slides Presentation** for this work can be found [here](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FEEG\u002FGCNs-Net-Presentation.pdf).*\u003Cbr>\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=\"100%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_a445919a2d04.png\" alt=\"Project2\">\n    \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n3. 
[Deep Feature Mining via Attention-based BiLSTM-GCN for Human Motor Imagery Recognition](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull)\u003Cbr>\n***Slides Presentation** for this work can be found [here](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FEEG\u002FBiLSTM-GCN-Presentation.pdf).*\u003Cbr>\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=\"100%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_a681a480a6c2.jpeg\" alt=\"Project3.1\">\n    \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=\"100%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_6ab8934cf38a.jpeg\" alt=\"Project4.1\">\n    \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n4. [Attention-based Graph ResNet for Motor Intent Detection from Raw EEG signals](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13484)\n\n## Other Useful Resources\n\nThe following presentations may be helpful as you get started with Python and TensorFlow.\n\n1. Python Environment Setting-up Tutorial [download](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FPython.pdf)\n\n2. Usage of Cloud Server and Setting-up Tutorial [download](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FServer.pdf)\n\n3. TensorFlow for Deep Learning Tutorial [download](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FTensorFlow.pdf)\n\n## Contribution\n\nWe always welcome contributions to help make the EEG-DL library better. If you would like to contribute or have any questions, please don't hesitate to email me at shuyuej@ieee.org.\n\n## Organizations\n\nThe library was created and open-sourced by Shuyue Jia, supervised by Prof. 
Yimin Hou, at the School of Automation Engineering, Northeast Electric Power University, Jilin, Jilin, China.\u003Cbr>\n\u003Ca href=\"http:\u002F\u002Fwww.neepu.edu.cn\u002F\"> \u003Cimg width=\"500\" height=\"150\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_8d59ebc0cb79.png\">\u003C\u002Fa>\n","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\"> \u003Cimg width=\"500px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_294a4eaf0f92.png\">\u003C\u002Fa> \n  \u003Cbr \u002F>\n  \u003Cbr \u002F>\n  \u003Ca href=\"https:\u002F\u002Fgitter.im\u002FEEG-DL\u002Fcommunity\">\u003Cimg alt=\"Chat on Gitter\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgitter\u002Froom\u002Fnwjs\u002Fnw.js.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.anaconda.com\u002F\">\u003Cimg alt=\"Python Version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.x-green.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.tensorflow.org\u002Finstall\">\u003Cimg alt=\"TensorFlow Version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorFlow-1.13.1-red.svg\" \u002F>\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FLICENSE\">\u003Cimg alt=\"MIT License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg\" \u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003C!-- \u003Cdiv align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\"> \u003Cimg width=\"500px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_294a4eaf0f92.png\">\u003C\u002Fa> \n\u003C\u002Fdiv> -->\n\n--------------------------------------------------------------------------------\n\n# 欢迎来到脑电深度学习库\n\n**EEG-DL** 是一个基于 [TensorFlow](https:\u002F\u002Fwww.tensorflow.org) 
的深度学习（DL）库，专门用于脑电信号分类任务。它提供了最新的深度学习算法，并会持续更新。\n\n\u003C!-- [![Gitter](https:\u002F\u002Fimg.shields.io\u002Fgitter\u002Froom\u002Fnwjs\u002Fnw.js.svg)](https:\u002F\u002Fgitter.im\u002FEEG-DL\u002Fcommunity)\n[![Python 3](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.x-green.svg)](https:\u002F\u002Fwww.anaconda.com\u002F)\n[![TensorFlow 1.13.1](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorFlow-1.13.1-red.svg)](https:\u002F\u002Fwww.tensorflow.org\u002Finstall)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg)](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FLICENSE) -->\n\n## 目录\n\u003Cul>\n\u003Cli>\u003Ca href=\"#Documentation\">文档\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Usage-Demo\">使用示例\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Notice\">注意事项\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Research-Ideas\">研究思路\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Common-Issues\">常见问题\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Structure-of-the-Code\">代码结构\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Citation\">引用\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Other-Useful-Resources\">其他有用资源\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Contribution\">贡献\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Ca href=\"#Organizations\">组织机构\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\n## 文档\n**支持的模型**包括\n\n| 序号 | 模型                                                  | 代码           |\n| :----:| :----:                                                 | :----:          |\n| 1     | 深度神经网络                                   | [DNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FDNN.py) |\n| 2     | 卷积神经网络 [[论文]](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1741-2552\u002Fab4af6\u002Fmeta) 
[[教程]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-Motor-Imagery-Classification-CNNs-TensorFlow)| [CNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FCNN.py) |\n| 3     | 深度残差卷积神经网络 [[论文]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FHe_Deep_Residual_Learning_CVPR_2016_paper.pdf) | [ResNet](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FResCNN.py) |\n| 4     | 瘦型残差卷积神经网络 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.10107) | [Thin ResNet](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FThin_ResNet.py) |\n| 5     | 密集连接卷积神经网络 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) | [DenseNet](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FDenseCNN.py) |\n| 6     | 全卷积神经网络 [[论文]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fpapers\u002FLong_Fully_Convolutional_Networks_2015_CVPR_paper.pdf) | [FCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FFully_Conv_CNN.py) |\n| 7     | 基于暹罗网络（以CNN为骨干）的单样本学习 \u003Cbr> [[论文]](https:\u002F\u002Fwww.cs.cmu.edu\u002F~rsalakhu\u002Fpapers\u002Foneshot1.pdf) [[教程]](https:\u002F\u002Ftowardsdatascience.com\u002Fone-shot-learning-with-siamese-networks-using-keras-17f34e75bb3d) | [暹罗网络](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FSiamese_Network.py) |\n| 8     | 图卷积神经网络 \u003Cbr> [[论文]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9889159) [[演示文稿]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FPresentation\u002FA_Summary_Three_Projects.pdf) | [GCN \u002F 
图卷积神经网络](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FGCN_Model.py) |\n| 9    | 深度残差图卷积神经网络 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13484) | [ResGCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FResGCN_Model.py) | \n| 10    | 密集连接图卷积神经网络  | [DenseGCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FDenseGCN_Model.py) |\n| 11    | 基于变分推断的贝叶斯卷积神经网络 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02731) | [贝叶斯CNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-BayesianCNN) |\n| 12    | 循环神经网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [RNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FRNN.py) |\n| 13    | 基于注意力机制的循环神经网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [带注意力机制的RNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FRNN_with_Attention.py) |\n| 14    | 双向循环神经网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiRNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiRNN.py) |\n| 15    | 基于注意力机制的双向循环神经网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [带注意力机制的BiRNN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiRNN_with_Attention.py) |\n| 16    | 长短期记忆网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | 
[LSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FLSTM.py) |\n| 17    | 基于注意力机制的长短期记忆网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [带注意力机制的LSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FLSTM_with_Attention.py) |\n| 18    | 双向长短期记忆网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiLSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiLSTM.py) |\n| 19    | 基于注意力机制的双向长短期记忆网络 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [带注意力机制的BiLSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiLSTM_with_Attention.py) |\n| 20    | 门控循环单元 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [GRU](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FGRU.py) |\n| 21    | 基于注意力机制的门控循环单元 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [带注意力机制的GRU](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FGRU_with_Attention.py) |\n| 22    | 双向门控循环单元 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [BiGRU](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiGRU.py) |\n| 23    | 基于注意力机制的双向门控循环单元 [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | 
[带注意力机制的BiGRU](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiGRU_with_Attention.py) |\n| 24    | 基于注意力机制的BiLSTM + GCN [[论文]](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull) | [基于注意力机制的BiLSTM](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002FBiLSTM_with_Attention.py) \u003Cbr> [GCN](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FGCN_Model.py) |\n| 25    | Transformer [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03762) [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11929) | [Transformer](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-Transformer.py) |\n| 26    | 基于Transformer的迁移学习 \u003Cbr> (**此代码仅作参考！**) \u003Cbr> (**您可以根据自己的数据修改代码。**) | 第一阶段：[预训练](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-pretrain_model.py) \u003Cbr> 第二阶段：[微调](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-finetuning_model.py) |\n\n**目前支持一个脑电波运动想象（MI）基准数据集**。更多关于脑电或BCI领域的基准数据集，请参见[这里](https:\u002F\u002Fgithub.com\u002Fmeagmohit\u002FEEG-Datasets)。\n\n| 序号     | 数据集                                                                          | 教程 |\n| :----:  | :----:                                                                           | :----:   |\n| 1       | [EEG运动\u002F意念数据集](https:\u002F\u002Farchive.physionet.org\u002Fpn4\u002Feegmmidb\u002F) | [教程](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-Motor-Imagery-Classification-CNNs-TensorFlow)|\n\n**评估标准**包括：\n\n| 评估指标 \t\t\t\t\t                                    | 教程 |\n| :----:                                                                    | :----:   |\n| 混淆矩阵 | 
[教程](https:\u002F\u002Ftowardsdatascience.com\u002Funderstanding-confusion-matrix-a9ad42dcfd62) |\n| 准确率 \u002F 精确率 \u002F 召回率 \u002F F1分数 \u002F 克帕系数 | [教程](https:\u002F\u002Ftowardsdatascience.com\u002Funderstanding-confusion-matrix-a9ad42dcfd62) |\n| 接收者操作特征（ROC）曲线 \u002F 曲线下面积（AUC）| - |\n| 使用R语言进行配对t检验 | [教程](https:\u002F\u002Fwww.analyticsvidhya.com\u002Fblog\u002F2019\u002F05\u002Fstatistics-t-test-introduction-r-implementation\u002F) |\n\n*这些评估指标主要适用于**四分类问题**。如果您希望切换到二分类或三分类问题，请修改[此文件](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FEvaluation_Metrics\u002FMetrics.py)，以适应您个人数据集的类别数量。同时，有关评估指标的详细信息可在[这篇论文](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1741-2552\u002Fab4af6\u002Fmeta)中找到。*\n\n\n\n## 使用演示\n\n1. ***(在任何Python环境中)*** 通过[此脚本](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FDownload_Raw_EEG_Data\u002FMIND_Get_EDF.py)下载[EEG运动\u002F意念数据集](https:\u002F\u002Farchive.physionet.org\u002Fpn4\u002Feegmmidb\u002F)。\n\n    ```text\n    $ python MIND_Get_EDF.py\n    ```\n\n2. ***(在Python 2.7环境中)*** 使用[此脚本](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FDownload_Raw_EEG_Data\u002FExtract-Raw-Data-Into-Matlab-Files.py)读取.edf文件（一种原始EEG信号格式），并将其保存为Matlab .m文件。请注意，该脚本必须在**Python 2环境（推荐Python 2.7）**下运行，因为其中包含一些Python 2特有的语法。如果使用Python 3环境运行此文件，虽然可能不会报错，但EEG任务标签将会完全混乱。\n\n    ```text\n    $ python Extract-Raw-Data-Into-Matlab-Files.py\n    ```\n\n3. 使用Matlab对数据集进行预处理，并根据不同的模型将数据保存为Excel文件（training_set、training_label、test_set和test_label）。请注意，Excel文件中的每一行代表一个样本，而列则可视为特征，例如4096列意味着64个通道×64个时间点。后续模型会将这4096列重塑为形状为64通道×64时间点的矩阵。您可以根据自身需求调整列数，例如您的实际数据集维度。\n\n4. 
***(前提条件)*** 在**Python 3.6环境（强烈推荐）**下训练和测试用于EEG信号\u002F任务分类的深度学习模型，可通过[EEG-DL库](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Ftree\u002Fmaster\u002FModels)实现，该库提供了多种SOTA深度学习模型。\n\n    ```text\n    Python版本：Python 3.6（推荐）\n    TensorFlow版本：TensorFlow 1.13.1\n    ```\n\n    使用以下命令安装TensorFlow GPU版本1.13.1：\n\n    ```python\n    $ pip install --upgrade --force-reinstall tensorflow-gpu==1.13.1 --user\n    ```\n\n5. 通过[TensorBoard](https:\u002F\u002Fwww.tensorflow.org\u002Ftensorboard)读取评估指标（通过迭代）。您可以参考[此教程](https:\u002F\u002Fwww.guru99.com\u002Ftensorboard-tutorial.html)。当您完成模型训练后，会在文件夹中找到“events.out.tfevents.***”文件，例如“\u002FUsers\u002Fshuyuej\u002FDesktop\u002Ftrained_model\u002F”。您可以在终端中使用以下命令：\n\n    ```python\n    $ tensorboard --logdir=\"\u002FUsers\u002Fshuyuej\u002FDesktop\u002Ftrained_model\u002F\" --host=127.0.0.1\n    ```\n\n    建议使用[Google Chrome](https:\u002F\u002Fwww.google.com\u002Fchrome\u002F)打开网页：\n\n    ```html\n    http:\u002F\u002F127.0.0.1:6006\u002F\n    ```\n\n    这样您就可以读取并把评估指标保存为Excel .csv文件。\n\n6. 最后，使用Matlab或Python绘制精美的论文图表。请参考[这些脚本](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Ftree\u002Fmaster\u002FDraw_Photos)。\n\n## 注意事项\n1. 我已在macOS系统上测试了所有Python和Matlab文件。需要注意的是，某些Matlab函数在Windows操作系统和macOS之间存在差异。例如，在macOS中我使用“readmatrix”函数来读取CSV文件，而在Windows中则需要使用“csvread”函数，因为Windows系统中没有“readmatrix”函数。如果您遇到类似问题，建议通过Google或百度搜索解决，通常可以顺利找到解决方案。\n\n2. 对于GCNs-Net（GCN模型），图卷积层不会改变图的维度，而最大池化层会将图的维度减半。也就是说，如果初始图拉普拉斯矩阵是N×N的，经过最大池化层后，其维度将变为N\u002F2×N\u002F2。对于15通道的EEG系统，除非选择14→7、12→6→3、10→5、8→4→2→1等降维方式，否则无法应用最大池化操作。具体细节可参考[这篇论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08924)。\n\n3. **损失函数**可以从[此文件](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FLoss_Function\u002FLoss.py)中更改或修改。\n\n4. **数据加载器**可以从[此文件](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FDatasetAPI\u002FDataLoader.py)中更改或修改。\n\n## 研究思路\n1. 
动态图卷积神经网络 [[论文综述]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FEEG\u002FDynamic-GCN-Survey.pdf)\n\n2. 神经架构搜索 \u002F AutoML（自动机器学习）[[清华 AutoGraph]](https:\u002F\u002Fgithub.com\u002FTHUMNLab\u002FAutoGL)\n\n3. 强化学习算法（如深度Q学习）[[清华天授]](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002Ftianshou) [[中文文档]](https:\u002F\u002Ftianshou.readthedocs.io\u002Fzh\u002Flatest\u002Fdocs\u002Ftoc.html)\n\n4. 贝叶斯卷积神经网络 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02731) [[学位论文]](https:\u002F\u002Fgithub.com\u002Fkumar-shridhar\u002FMaster-Thesis-BayesianCNN\u002Fraw\u002Fmaster\u002Fthesis.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-BayesianCNN)\n\n5. Transformer \u002F 自注意力机制 \u002F 非局部建模 [[Transformer 代码]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-Transformer.py) [[非局部建模 PyTorch 代码]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FNLNet-IQA)\n\n\t[[为什么使用非局部建模？]](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FNLNet-IQA) [[论文]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FMMSP\u002FMMSP22_Paper.pdf) [[详细报告]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FPresentation\u002FA_Summary_Three_Projects.pdf) [[幻灯片]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FMMSP\u002FMMSP22_Slides.pdf) [[海报]](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FMMSP\u002FMMSP22_Poster.pdf)\n\n## 常见问题\n1. 
**ValueError: Cannot feed value of shape (1024, 1) for Tensor 'input\u002Flabel:0', which has shape '(1024,)'**\n\n    解决此问题的方法是使用 np.squeeze 将标签的形状从 (1024, 1) 压缩为 (1024,)。请编辑 [DataLoader.py 文件](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FDatasetAPI\u002FDataLoader.py)。\n    原始代码：\n    ```python\n    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)\n    train_labels = np.array(train_labels).astype('float32')\n\n    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)\n    test_labels = np.array(test_labels).astype('float32')\n    ```\n    修改后：\n    ```python\n    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)\n    train_labels = np.array(train_labels).astype('float32')\n    train_labels = np.squeeze(train_labels)\n\n    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)\n    test_labels = np.array(test_labels).astype('float32')\n    test_labels = np.squeeze(test_labels)\n    ```\n\n2. 
**InvalidArgumentError: Nan in summary histogram for training\u002Flogits\u002Fbias\u002Fgradients**\n    \n    解决此问题的方法是注释掉所有的直方图摘要。请编辑 [GCN_Model.py 文件](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002FNetwork\u002Flib_for_GCN\u002FGCN_Model.py)。\n\n    ```python\n    # 注释掉 GCN_Model.py 文件中的上述 tf.summary.histogram\n\n    # # 直方图。\n    # for grad, var in grads:\n    #     if grad is None:\n    #         print('warning: {} has no gradient'.format(var.op.name))\n    #     else:\n    #         tf.summary.histogram(var.op.name + '\u002Fgradients', grad)\n\n    def _weight_variable(self, shape, regularization=True):\n        initial = tf.truncated_normal_initializer(0, 0.1)\n        var = tf.get_variable('weights', shape, tf.float32, initializer=initial)\n        if regularization:\n            self.regularizers.append(tf.nn.l2_loss(var))\n        # tf.summary.histogram(var.op.name, var)\n        return var\n\n    def _bias_variable(self, shape, regularization=True):\n        initial = tf.constant_initializer(0.1)\n        var = tf.get_variable('bias', shape, tf.float32, initializer=initial)\n        if regularization:\n            self.regularizers.append(tf.nn.l2_loss(var))\n        # tf.summary.histogram(var.op.name, var)\n        return var\n    ```\n\n3. **TypeError: len() of unsized object**\n    \n    解决此问题的方法是根据自己的需求调整粗化级别，也可以尝试更改以观察效果。请编辑 [main-GCN.py 文件](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fblob\u002Fmaster\u002FModels\u002Fmain-GCN.py)。例如，如果要将 GCNs-Net 应用于 10 通道 EEG 系统，则需将“levels”设置为 1 或 0，因为最多只进行一次最大池化（10 --> 5）。可以通过将参数“level”改为 1 或 0 来观察差异。\n\n    ```python\n    # 这是粗化级别，可以根据需要调整以观察不同效果\n    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=5, self_connections=False)\n    ```\n    修改为：\n    ```python\n    # 这是粗化级别，可以根据需要调整以观察不同效果\n    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=1, self_connections=False)\n    ```\n\n4. 
**tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 7 which is outside the valid range of [0, 7).  Label values: 5 2 3 3 1 5 5 4 7 4 2 2 1 7 5 6 3 4 2 4**\n    \n    解决此问题的方法是：对于 GCNs-Net，在制作数据集时，标签应从 0 开始，而不是从 1 开始。例如，如果有七类，标签应为 0（第一类）、1（第二类）、2（第三类）、3（第四类）、4（第五类）、5（第六类）、6（第七类），而不是 1、2、3、4、5、6、7。\n\n5. **IndexError: list index out of range**\n    \n    解决此问题的方法是首先仔细检查你的 Python 环境。**必须使用 Python 2.7 环境。** 此外，请安装 ***pyEDFlib*** 的 ***0.1.11*** 版本。安装指令如下：\n    \n    ```python\n    $ pip install pyEDFlib==0.1.11\n    ```\n\n## 代码结构\n\n在项目的根目录下，您会看到：\n\n```text\n├── Download_Raw_EEG_Data\n│   ├── Extract-Raw-Data-Into-Matlab-Files.py\n│   ├── MIND_Get_EDF.py\n│   ├── README.md\n│   └── electrode_positions.txt\n├── Draw_Photos\n│   ├── Draw_Accuracy_Photo.m\n│   ├── Draw_Box_Photo.m\n│   ├── Draw_Confusion_Matrix.py\n│   ├── Draw_Loss_Photo.m\n│   ├── Draw_ROC_and_AUC.py\n│   └── figure_boxplot.m\n├── LICENSE\n├── Logo.png\n├── MANIFEST.in\n├── Models\n│   ├── DatasetAPI\n│   │   └── DataLoader.py\n│   ├── Evaluation_Metrics\n│   │   └── Metrics.py\n│   ├── Initialize_Variables\n│   │   └── Initialize.py\n│   ├── Loss_Function\n│   │   └── Loss.py\n│   ├── Network\n│   │   ├── BiGRU.py\n│   │   ├── BiGRU_with_Attention.py\n│   │   ├── BiLSTM.py\n│   │   ├── BiLSTM_with_Attention.py\n│   │   ├── BiRNN.py\n│   │   ├── BiRNN_with_Attention.py\n│   │   ├── CNN.py\n│   │   ├── DNN.py\n│   │   ├── DenseCNN.py\n│   │   ├── Fully_Conv_CNN.py\n│   │   ├── GRU.py\n│   │   ├── GRU_with_Attention.py\n│   │   ├── LSTM.py\n│   │   ├── LSTM_with_Attention.py\n│   │   ├── RNN.py\n│   │   ├── RNN_with_Attention.py\n│   │   ├── ResCNN.py\n│   │   ├── Siamese_Network.py\n│   │   ├── Thin_ResNet.py\n│   │   └── lib_for_GCN\n│   │       ├── DenseGCN_Model.py\n│   │       ├── GCN_Model.py\n│   │       ├── ResGCN_Model.py\n│   │       ├── coarsening.py\n│   │       └── graph.py\n│   ├── __init__.py\n│   ├── main-BiGRU-with-Attention.py\n│   ├── 
main-BiGRU.py\n│   ├── main-BiLSTM-with-Attention.py\n│   ├── main-BiLSTM.py\n│   ├── main-BiRNN-with-Attention.py\n│   ├── main-BiRNN.py\n│   ├── main-CNN.py\n│   ├── main-DNN.py\n│   ├── main-DenseCNN.py\n│   ├── main-DenseGCN.py\n│   ├── main-FullyConvCNN.py\n│   ├── main-GCN.py\n│   ├── main-GRU-with-Attention.py\n│   ├── main-GRU.py\n│   ├── main-LSTM-with-Attention.py\n│   ├── main-LSTM.py\n│   ├── main-RNN-with-Attention.py\n│   ├── main-RNN.py\n│   ├── main-ResCNN.py\n│   ├── main-ResGCN.py\n│   ├── main-Siamese-Network.py\n│   └── main-Thin-ResNet.py\n├── NEEPU.png\n├── Preprocess_EEG_Data\n│   ├── For-CNN-based-Models\n│   │   └── make_dataset.m\n│   ├── For-DNN-based-Models\n│   │   └── make_dataset.m\n│   ├── For-GCN-based-Models\n│   │   └── make_dataset.m\n│   ├── For-RNN-based-Models\n│   │   └── make_dataset.m\n│   └── For-Siamese-Network-One-Shot-Learning\n│       └── make_dataset.m\n├── README.md\n├── Saved_Files\n│   └── README.md\n├── requirements.txt\n└── setup.py\n```\n\n## 引用\n\n如果您觉得我们的库有用，请在您的出版物中引用我们的论文。我们在此提供 BibTeX 格式的引用条目。\n\n```bibtex\n@article{hou2022gcn,\n\ttitle   = {{GCNs-Net}: 一种用于解码时间分辨脑电图运动想象信号的图卷积神经网络方法},\n        author  = {侯一民、贾淑悦、伦祥敏、郝子谦、史燕、李阳、曾睿、吕景磊},\n\tjournal = {IEEE 神经网络与学习系统汇刊},\n\tvolume  = {},\n\tnumber  = {},\n\tpages   = {1-12},\n\tyear    = {2022年9月},\n\tdoi     = {10.1109\u002FTNNLS.2022.3202569}\n}\n  \n@article{hou2020novel,\n\ttitle     = {一种通过 Scout ESI 和 CNN 解码四类脑电图运动想象任务的新方法},\n\tauthor    = {侯一民、周璐、贾淑悦、伦祥敏},\n\tjournal   = {神经工程学杂志},\n\tvolume    = {17},\n\tnumber    = {1},\n\tpages     = {016048},\n\tyear      = {2020年2月},\n\tpublisher = {IOP 出版社},\n\tdoi       = {10.1088\u002F1741-2552\u002Fab4af6}\n\t\n}\n\n@article{hou2022deep,\n\ttitle   = {基于注意力机制的双向长短时记忆图卷积神经网络在人类运动想象识别中的深度特征挖掘},\n\tauthor  = {侯一民、贾淑悦、伦祥敏、张舒、陈涛、王芳、吕景磊},   \n\tjournal = {生物工程与生物技术前沿},      \n\tvolume  = {9},      \n\tyear    = {2022年2月},      \n\turl     = 
{https:\u002F\u002Fwww.frontiersin.org\u002Farticle\u002F10.3389\u002Ffbioe.2021.706229},       \n\tdoi     = {10.3389\u002Ffbioe.2021.706229},      \n\tISSN    = {2296-4185}\n}\n\n@article{Jia2020AttentionGCN,\n\ttitle   = {基于注意力机制的图 ResNet 用于从原始脑电信号中检测运动意图},\n\tauthor  = {贾淑悦、侯一民、伦祥敏、吕景磊},\n\tjournal = {arXiv 预印本 arXiv:2007.13484},\n\tyear    = {2022年}\n}\n```\n\n我们的论文可以从以下链接下载：\n1. [一种通过 Scout ESI 和 CNN 解码四类脑电图运动想象任务的新方法](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1741-2552\u002Fab4af6\u002Fmeta)\u003Cbr>\n*该工作的代码和教程可以在这里找到：[GitHub 仓库](https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-Motor-Imagery-Classification-CNNs-TensorFlow).*\u003Cbr>\n\n**整体框架**：\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=100%device-width src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_dc3d8247a637.png\" alt=\"Project1\">\n\u003C\u002Fdiv>\n\n**提出的 CNN 架构**：\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=60%device-width src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_f34b831331c8.png\" alt=\"Project1\">\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n2. [GCNs-Net：一种用于解码时间分辨脑电图运动想象信号的图卷积神经网络方法](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9889159)\u003Cbr> \n***该工作的演示文稿可以在这里找到：[PDF 文件](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FEEG\u002FGCNs-Net-Presentation.pdf).*\u003Cbr>\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=100%device-width src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_a445919a2d04.png\" alt=\"Project2\">\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n3. 
[基于注意力机制的 BiLSTM-GCN 在人类运动想象识别中的深度特征挖掘](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2021.706229\u002Ffull)\u003Cbr>\n***该工作的演示文稿可以在这里找到：[PDF 文件](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FEEG\u002FBiLSTM-GCN-Presentation.pdf).*\u003Cbr>\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=100%device-width src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_a681a480a6c2.jpeg\" alt=\"Project3.1\">\n\u003C\u002Fdiv>\n\n\u003Cdiv>\n    \u003Cdiv style=\"text-align:center\">\n    \u003Cimg width=100%device-width src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_6ab8934cf38a.jpeg\" alt=\"Project4.1\">\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n4. [基于注意力机制的图 ResNet 用于从原始脑电信号中检测运动意图](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13484)\n\n## 其他实用资源\n\n我认为以下演示文稿在大家开始使用 Python 和 TensorFlow 时会很有帮助。\n\n1. Python 环境搭建教程 [下载](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FPython.pdf)\n\n2. 云服务器的使用及搭建教程 [下载](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FServer.pdf)\n\n3. 
深度学习中的 TensorFlow 教程 [下载](https:\u002F\u002Fshuyuej.com\u002Ffiles\u002FTensorFlow.pdf)\n\n## 贡献\n\n我们始终欢迎各位的贡献，以帮助 EEG-DL 库变得更好。如果您希望参与贡献或有任何问题，请随时通过电子邮件 shuyuej@ieee.org 与我联系。\n\n## 组织机构\n\n该库由贾书悦在侯义敏教授的指导下创建并开源，地点为中国吉林省吉林市东北电力大学自动化工程学院。\u003Cbr>\n\u003Ca href=\"http:\u002F\u002Fwww.neepu.edu.cn\u002F\"> \u003Cimg width=\"500\" height=\"150\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_readme_8d59ebc0cb79.png\">\u003C\u002Fa>","# EEG-DL 快速上手指南\n\nEEG-DL 是一个基于 TensorFlow 开发的深度学习库，专为脑电图（EEG）信号分类任务设计。它集成了多种主流深度学习模型（如 CNN, ResNet, LSTM, Transformer, GCN 等），并提供了完整的数据预处理和评估流程。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: \n    *   核心模型训练：**Python 3.x** (推荐 3.6+)\n    *   原始数据提取（.edf 转 .mat）：**Python 2.7** (必须，因脚本依赖旧版语法)\n*   **深度学习框架**: TensorFlow 1.13.1\n*   **其他依赖**: \n    *   MATLAB (用于数据预处理)\n    *   R 语言 (可选，用于统计检验)\n    *   常用 Python 库: `numpy`, `pandas`, `scipy`, `mne` (建议通过 pip 安装)\n\n> **注意**：由于项目基于 TensorFlow 1.x 开发，建议创建独立的虚拟环境以避免版本冲突。国内用户可使用清华源加速安装：\n> ```bash\n> pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple tensorflow==1.13.1\n> ```\n\n## 安装步骤\n\n1.  **克隆项目代码**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL.git\n    cd EEG-DL\n    ```\n\n2.  **安装 Python 依赖**\n    虽然 README 未提供 `requirements.txt`，但需确保已安装 TensorFlow 1.13.1 及其他基础科学计算库：\n    ```bash\n    pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple tensorflow==1.13.1 numpy pandas scipy matplotlib scikit-learn\n    ```\n\n3.  
**验证环境**\n    进入 `Models\u002FNetwork` 目录，尝试导入任意模型文件（如 `DNN.py`）确认无报错。\n\n## 基本使用\n\n以下以经典的 **EEG 运动想象 (Motor Imagery)** 任务为例，展示从数据下载到模型运行的最小闭环流程。\n\n### 第一步：下载原始数据\n在任何 Python 环境下运行脚本，下载 PhysioNet 的 EEG Motor Movement\u002FImagery 数据集：\n```bash\npython Download_Raw_EEG_Data\u002FMIND_Get_EDF.py\n```\n\n### 第二步：转换数据格式 (关键)\n**必须切换到 Python 2.7 环境**执行此步骤，否则标签会错乱。将 `.edf` 文件转换为 MATLAB `.mat` 文件：\n```bash\n# 确保当前环境为 Python 2.7\npython Download_Raw_EEG_Data\u002FExtract-Raw-Data-Into-Matlab-Files.py\n```\n\n### 第三步：数据预处理\n使用 **MATLAB** 运行 `Preprocess_EEG_Data` 目录下的对应脚本。\n*   根据你打算使用的模型（如 CNN 或 LSTM），选择对应的预处理脚本。\n*   脚本会将数据清洗并保存为 Excel 文件 (`training_set`, `training_label`, `test_set`, `test_label`)。\n*   数据形状说明：每行一个样本，列代表特征（例如 4096 列 = 64 通道 × 64 时间点）。\n\n### 第四步：训练与评估\n切换回 **Python 3.x** 环境，进入 `Models` 目录，选择对应的模型脚本进行训练。\n\n以卷积神经网络 (CNN) 为例：\n```bash\n# 假设预处理后的数据已就位，运行主程序\npython main_CNN.py\n```\n\n*   **修改配置**：打开对应的 `.py` 文件，可根据需要调整超参数（学习率、Batch Size、Epochs）。\n*   **多分类适配**：默认支持四分类。若需改为二分类或三分类，请修改 `Models\u002FEvaluation_Metrics\u002FMetrics.py` 中的类别定义。\n*   **查看结果**：程序运行结束后，将输出混淆矩阵、准确率 (Accuracy)、F1 分数、Kappa 系数以及 ROC 曲线等评估指标。\n\n### 进阶模型\n项目支持更多先进架构，只需替换主程序文件即可尝试：\n*   **ResNet**: `python main_ResCNN.py`\n*   **LSTM**: `python main_LSTM.py`\n*   **Transformer**: `python main-Transformer.py`\n*   **图神经网络 (GCN)**: 参考 `Models\u002FNetwork\u002Flib_for_GCN\u002F` 下的模型实现。","某神经工程实验室的研究团队正致力于开发一套基于脑电信号（EEG）的瘫痪患者意念控制轮椅系统，核心任务是从嘈杂的脑电数据中高精度分类出“左转”、“右转”和“前进”等运动想象指令。\n\n### 没有 EEG-DL 时\n- **算法复现成本极高**：研究人员需手动从零编写 CNN、ResNet 或 DenseNet 等复杂深度学习模型的 TensorFlow 代码，耗费数周时间调试网络层级与参数。\n- **特征提取依赖人工**：缺乏针对脑电时序特性的专用预处理模块，团队需花费大量精力手工设计频带特征，且难以捕捉深层时空关联。\n- **模型对比困难**：每尝试一种新架构（如从普通 CNN 切换到残差网络）都需重构整个训练流程，导致无法快速验证哪种模型最适合当前的脑电数据集。\n- **实验结果不稳定**：由于缺乏统一的标准化接口，不同成员编写的代码在数据加载和评估指标上存在差异，导致实验结果难以复现和横向对比。\n\n### 使用 EEG-DL 后\n- **即插即用主流模型**：直接调用库中预置的 DNN、CNN、ResNet 及 Thin ResNet 等七种前沿模型接口，将算法部署时间从数周缩短至几小时。\n- **端到端自动特征学习**：利用内置的深度卷积架构自动从原始脑电信号中提取高维时空特征，显著减少了对人工经验设计特征的依赖。\n- **高效架构筛选**：通过统一的 API 
快速切换并并行测试多种网络结构，迅速锁定针对该特定轮椅控制任务准确率最高的 Thin ResNet 模型。\n- **标准化实验流程**：依托库内规范的数据处理与评估模块，确保了不同模型间的公平对比，大幅提升了科研数据的可信度与复现性。\n\nEEG-DL 通过提供标准化的脑电深度学习工具箱，将研究人员从繁琐的代码工程中解放出来，使其能专注于脑机接口算法的核心创新与临床落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSuperBruceJia_EEG-DL_294a4eaf.png","SuperBruceJia","Shuyue Jia (Bruce)","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSuperBruceJia_dd06f06f.jpg","I am a Ph.D. Student, AI Researcher, Marathon Runner, and Photographer, based in Boston, MA.","Boston University","Boston, MA","brucejia@bu.edu",null,"https:\u002F\u002Fshuyuej.com","https:\u002F\u002Fgithub.com\u002FSuperBruceJia",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",88.4,{"name":88,"color":89,"percentage":90},"MATLAB","#e16737",11.6,1153,233,"2026-04-04T00:05:39","MIT","","未说明",{"notes":98,"python":99,"dependencies":100},"该库基于较旧的 TensorFlow 1.13.1 版本。注意：虽然主模型支持 Python 3.x，但将原始 .edf 文件转换为 Matlab .m 文件的预处理脚本必须在 Python 2.7 环境下运行，否则会导致标签错误。评估指标默认针对四分类任务，若需二分类或三分类需手动修改代码。","3.x (数据预处理脚本 Extract-Raw-Data-Into-Matlab-Files.py 需在 Python 2.7 环境下运行)",[101,102,103],"TensorFlow==1.13.1","Matlab (用于数据预处理)","R language (用于配对 t 检验)",[16,35,14,15],[106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125],"deep-learning","eeg-classification","eeg-signals-processing","tensorflow","motor-imagery-classification","eeg-data","cnn","rnn","gcn","one-shot-learning","residual-learning","densenet","resnet","graph-convolutional-neural-networks","lstm","gru","attention-mechanism","fully-convolutional-networks","transformer","transformers","2026-03-27T02:49:30.150509","2026-04-13T17:51:23.919693",[129,134,139,144,149,154,158],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},31840,"运行 Extract-Raw-Data-Into-Matlab-Files.py 时出现 'IndexError: list index out of range' 错误怎么办？","该错误通常由命令参数拼写错误引起。请检查命令行参数，确保使用的是 'edf' 而不是 'def'。此外，请务必参考 README 中的 '_Common Issues 5_' 部分，并严格按照 'Usage Demo' 的指引，使用 Python 2.7 环境运行该脚本，Python 3 
环境下会报错。","https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fissues\u002F3",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},31841,"是否支持 Python 3 环境运行数据提取脚本？","目前不支持。请遵循 'Usage Demo' 的指引，必须在 Python 2.7 环境下运行 Download_Raw_EEG_Data\u002FExtract-Raw-Data-Into-Matlab-Files.py。如果在 Python 3 中运行，会出现 'IndexError: list index out of range' 等兼容性错误。","https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fissues\u002F23",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},31842,"GCN 模型训练时遇到 '输入格式不正确' 或维度不匹配的问题如何解决？","这通常是因为数据的维度不符合网络架构的要求。建议根据实际数据的维度修改网络架构代码。具体配置要求请参考项目 README 中 'Notice' 部分的第二点说明。该项目代码在 TensorFlow 1.15 环境下可以正常运行。","https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fissues\u002F19",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},31843,"代码库是否有 PyTorch 版本？","目前官方仓库暂未提供 PyTorch 版本的代码（主要基于 TensorFlow）。维护者鼓励社区用户自行编写并实现 PyTorch 版本，完成后可以分享给社区。","https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fissues\u002F4",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},31844,"实现的 Transformer 模型是 Vision Transformer (ViT) 吗？位置嵌入（Positional Embedding）是如何实现的？","实现的并非 ViT 或 Swin Transformer，而是一个带有均匀初始化位置嵌入的简单 Transformer Decoder。位置嵌入的作用是考虑输入时间序列的顺序，防止模型产生排列不变性的归纳偏差。具体实现可参考 main-pretrain_model.py 文件的第 88 至 101 行代码，或者参考原始 Transformer 论文中使用不同频率正弦和余弦函数的方法。","https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fissues\u002F16",{"id":155,"question_zh":156,"answer_zh":157,"source_url":153},31845,"Transformer 模型的输入形状为什么是 291 (3*97) 而不是数据集生成的 4096 (64*64)？","main-Transformer.py 中设置的 maxlen=3 和 embed_dim=97 是针对特定信号处理方式的配置。该实现使用的是简单的 Transformer Decoder，其输入形状设计取决于具体的特征提取方式，而非直接对应 make_dataset.m 生成的原始图像化数据形状。如果需要适配其他形状，需根据数据维度调整网络架构。",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},31846,"GCN 模型的数据集文件 (dataset.mat) 中维度 [20, 84, 640] 的具体含义是什么？","在该数据结构中，20 代表被试数量（subjects），640 代表时间跨度（4 秒 × 160Hz 采样率），84 代表特征维度（4 类任务 × 21 个通道\u002F电极）。其中 21 指的是使用的电极通道数量。若使用自己的 CSV 
数据，需确保格式能转换为类似的 [被试，通道×类别，时间] 结构才能兼容。","https:\u002F\u002Fgithub.com\u002FSuperBruceJia\u002FEEG-DL\u002Fissues\u002F20",[]]