[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Shikhargupta--Spiking-Neural-Network":3,"tool-Shikhargupta--Spiking-Neural-Network":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",156804,2,"2026-04-15T11:34:33",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":75,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":89,"difficulty_score":32,"env_os":90,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":95,"github_topics":97,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":107,"updated_at":108,"faqs":109,"releases":150},7810,"Shikhargupta\u002FSpiking-Neural-Network","Spiking-Neural-Network","Pure python implementation of SNN ","Spiking-Neural-Network 是一个纯 Python 实现的脉冲神经网络（SNN）开源项目，旨在模拟生物神经元通过“脉冲”传递信息的工作机制。它主要解决了传统人工神经网络在硬件部署时能耗高、难以直接在芯片上进行在线学习的问题，提供了一套既适合软件仿真又具备硬件高效特性的解决方案。\n\n该项目非常适合对神经形态计算感兴趣的研究人员、希望探索低功耗 AI 算法的开发者，以及需要验证片上学习策略的工程师使用。其核心技术亮点在于采用了“脉冲时间依赖可塑性”（STDP）算法进行训练，这是一种模仿生物大脑的学习规则，无需大量标注数据即可调整网络权重。此外，项目内置的分类模拟器运用了“胜者通吃”策略，能有效抑制无关神经元活动，从而在二分类及多分类任务（如 MNIST 
手写数字识别）中实现精准的模式识别。通过模块化设计，用户可以直接调用其中的神经元、突触及编码器等组件，灵活构建用于边缘计算或嵌入式设备的智能模型。","# Spiking-Neural-Network\nThis is the Python implementation of a hardware-efficient spiking neural network. It includes modified learning and prediction rules which could be realised on hardware and are energy efficient. The aim is to develop a network which could be used for on-chip learning as well as prediction.\n\nThe Spike-Time Dependent Plasticity (STDP) algorithm will be used to train the network.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_0825ca6c1798.jpg\" width=\"500\"\u002F>\n\u003C\u002Fp>\n\n## Network Elements\n  * [Neuron](neuron\u002F)\n  * [Synapse](synapse\u002F)\n  * [Receptive field](receptive_field\u002F)\n  * [Spike train](encoding\u002F)\n\n\n## [SNN Simulator for Classification](classification\u002F)\nAssuming that we have learned the optimal weights of the network using the STDP algorithm (to be implemented next), this uses the weights to classify the input patterns into different classes. The simulator uses the 'winner-takes-all' strategy to suppress the non-firing neurons and produce distinguishable results. The steps involved while classifying the patterns are:\n\n- For each input neuron, the membrane potential is calculated in its [receptive field](receptive_field\u002F) (5x5 window).\n- A [spike train](encoding\u002F) is generated for each input neuron with spike frequency proportional to the membrane potential.\n- For each image, at each time step, the potential of the neuron is updated according to the input spike and the associated weights.\n- The first firing output neuron performs lateral inhibition on the rest of the output neurons.\n- The simulator checks for output spikes.\n\n### Results\nThe simulator was tested on binary classification. It can be extended to any number of classes. 
The images for the two classes are:\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_a621802da8b9.png\" width=\"50\"\u002F>          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_83ff73a3e737.png\" width=\"50\"\u002F>\n\nEach class was presented to the network for 1000 time units. The activity of the neurons was recorded. Here are the graphs of the potential of the output neurons versus time.\n\nThe first 1000 TU correspond to class 1, the next 1000 to class 2. The red line indicates the threshold potential.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_2530d9e7a197.png\" width=\"300\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_72eca0f4754d.png\" width=\"300\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_d995859217ca.png\" width=\"300\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_a99b2bb6b6cd.png\" width=\"300\"\u002F>\n\nThe 1st output neuron is active for class 1, the 2nd is active for class 2, and the 3rd and 4th are mute for both classes. 
Hence, by recording the total spikes in output neurons, we can determine the class to which the pattern belongs.\n\nFurther, to demonstrate the results for multi-class classification, the simulator was tested upon the following 6 images (MNIST dataset).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_eeb09cd83c0f.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_df92fc68ef38.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_6946913b3662.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_ae53e2445154.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_e52ecae0aac9.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_35effaa49535.png\" width=\"50\"\u002F>\n\nEach image represents a class and to each class a neuron is delegated. 2 neurons are assigned random weights. Here are the responses of each neuron to all the classes presented. X axis is the class number and Y axis is the number of spikes during each simulation. 
The red bar represents the class for which it spiked the most.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_fc501ee64f66.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_85eed3567d2d.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_4128fbb77a1f.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_103c33c4cb87.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_b706232bf849.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_2e5879343961.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_b1efaabe6c4e.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_23481d468b93.jpg\" width=\"250\"\u002F>\n\n## [Training an SNN](training)\nIn the previous section we assumed that our network is trained, i.e. the weights have been learned using STDP and can be used to classify patterns. Here we'll see how STDP works and what needs to be taken care of while implementing this training algorithm.\n\n### Spike Time Dependent Plasticity\nSTDP is a biological process used by the brain to modify its neural connections (synapses). Since the unmatched learning efficiency of the brain has been appreciated for decades, this rule was incorporated into ANNs to train neural networks. 
The moulding of weights is based on the following two rules -\n- Any synapse that contributes to the firing of a post-synaptic neuron should be made strong, i.e. its value should be increased.\n- Synapses that don't contribute to the firing of a post-synaptic neuron should be diminished, i.e. their value should be decreased.\n\nHere is an explanation of how this algorithm works:\n\nConsider the scenario depicted in this figure\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_cd05ed395c78.jpg\" width=\"350\"\u002F>\n\u003C\u002Fp>\n\nFour neurons connect to a single neuron by synapses. Each pre-synaptic neuron fires at its own rate, and the spikes are sent forward by the corresponding synapse. The intensity of the spike transmitted to the post-synaptic neuron depends upon the strength of the connecting synapse. Because of the input spikes, the membrane potential of the post-synaptic neuron increases, and it sends out a spike after crossing the threshold. At the moment the post-synaptic neuron spikes, we monitor which pre-synaptic neurons helped it to fire. This can be done by observing which pre-synaptic neurons sent out spikes before the post-synaptic neuron spiked. In this way they contributed to the post-synaptic spike by increasing the membrane potential, and hence the corresponding synapses are strengthened. The factor by which the weight of a synapse is increased is inversely proportional to the time difference between the post-synaptic and pre-synaptic spikes, as given by this graph\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_5093b28e921e.jpg\" width=\"400\"\u002F>\n\u003C\u002Fp>\n\n### Generative Property of SNN\nThis property of Spiking Neural Networks is very useful in analysing the training process. 
All the synapses connected to an output-layer neuron, if scaled to proper values and rearranged in the form of an image, depict what pattern that neuron has learned and how distinctly it can classify that pattern. For example, after training a network on the MNIST dataset, if we scale the weights of all the synapses connected to a particular output neuron (784 in number) and form a 28x28 image with those scaled-up weights, we get the grayscale pattern learned by that neuron. This property will be used later while demonstrating the results. [This](training\u002Freconstruct.py) file contains the function that reconstructs an image from the weights.\n\n### Variable Threshold\nIn unsupervised learning it is very difficult to train a network where patterns have varying amounts of activations (white pixels in the case of MNIST). Patterns with higher activations tend to win in competitive learning and hence overshadow others (this problem will be demonstrated later). Therefore this method of normalization was introduced to bring them all down to the same level. The threshold for each pattern is calculated based on the number of activations it contains: the higher the number of activations, the higher the threshold value. [This](training\u002Fvar_th.py) file holds the function to calculate the threshold for each image.\n\n### Lateral Inhibition\nIn neurobiology, lateral inhibition is the capacity of an excited neuron to reduce the activity of its neighbors. Lateral inhibition disables the spreading of action potentials from excited neurons to neighboring neurons in the lateral direction. This creates a contrast in stimulation that allows increased sensory perception. This property is also called Winner-Takes-All (WTA). The neuron that gets excited first inhibits (lowers the membrane potential of) the other neurons in the same layer.\n\n\n## Training for a 3-class dataset\nHere are the results after training an SNN on the MNIST dataset with 3 classes (0-2) and 5 output neurons. 
We will leverage the generative property of the SNN and reconstruct the images using the trained weights connected to each output neuron, to see how well the network has learned each pattern. Also, we look at the membrane potential versus time plots for each output neuron to see how the training process was executed and made that neuron sensitive to a particular pattern only.\n\n**Neuron1**\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_0c6fe9e3dc4a.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_eff8f6fcb0f2.png\" width=\"70\"\u002F>\n\n**Neuron2**\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_cd616acf6cad.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_982032c205a6.png\" width=\"70\"\u002F>\n\n\n**Neuron3**\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_ac3885128beb.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_d14190693492.png\" width=\"70\"\u002F>\n\n\n**Neuron4**\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_9dd5a672b58f.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_72ea8dc7a70a.png\" width=\"70\"\u002F>\n\n\nHere we can clearly observe that Neuron 1 has learned pattern '1', Neuron 2 has learned '0', Neuron 3 is noise and Neuron 4 has learned '2'. Consider the plot of Neuron 1. In the beginning, when the weights were randomly assigned, it was firing for all the patterns. 
As the training proceeded, it became specific to pattern '1' only and was in an inhibitory state for the rest. On observing Neuron 3 we can conclude that it reacts to all the patterns and can be considered noise. Hence, it is advisable to have 20% more output neurons than the number of classes.\n\nThere is a slight overlap between '2' and '0', which is a common problem in competitive learning. This can be eliminated by proper fine-tuning of the parameters.\n\n### Improper training\nIf we don't use a variable threshold for normalization, we will observe some patterns overshadowing others. Here is an example:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_18fb9412f2a5.png\" width=\"100\"\u002F>\n\u003C\u002Fp>\n\nHere the same threshold voltage was used for both patterns, which resulted in overlapping. This could be avoided by either choosing a dataset where each image has more or less the same number of activations, or by normalizing the number of activations.\n\n## Parameters\nBuilding a Spiking Neural Network from scratch is not an easy job. There are several parameters that need to be tuned and taken care of, and the combinations of so many parameters make it worse. Some of the major parameters that play an important role in the dynamics of the network are -\n - Learning Rate\n - Threshold Potential\n - Weight Initialization\n - Number of Spikes Per Sample\n - Range of Weights\n \n I have demonstrated how some of these parameters affect the network and how they should be handled [here](https:\u002F\u002Fgithub.com\u002FShikhargupta\u002Fsnn-brian-mlp\u002Ftree\u002Fmaster\u002Fsimple_demo) under the heading Parameter Analysis.\n \n \n## Contributions\nI was helped on this project by my colleague at the Indian Institute of Technology, Guwahati - Arpan Vyas. 
He further went on to design an architecture of hardware accelerator for this Simplified SNN and deploy it on FPGA and hence reducing the training time considerably. [Here](https:\u002F\u002Fgithub.com\u002Farpanvyas) is his Github profile.\n","# 脉冲神经网络\n这是硬件高效的脉冲神经网络的 Python 实现。它包含了经过修改的学习和预测规则，这些规则可以在硬件上实现，并且具有较高的能源效率。目标是开发一个既可用于片上学习，又可用于预测的网络。\n\n将使用时间依赖性突触可塑性（STDP）算法来训练该网络。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_0825ca6c1798.jpg\" width=\"500\"\u002F>\n\u003C\u002Fp>\n\n## 网络元素\n  * [神经元](neuron\u002F)\n  * [突触](synapse\u002F)\n  * [感受野](receptive_field\u002F)\n  * [脉冲序列](encoding\u002F)\n\n\n## [用于分类的 SNN 模拟器](classification\u002F)\n假设我们已经使用 STDP 算法学习到了网络的最佳权重（将在后续实现），那么这个模拟器会利用这些权重将输入模式分类到不同的类别中。该模拟器采用“胜者全得”策略来抑制未发放脉冲的神经元，从而产生可区分的结果。在对模式进行分类时，所涉及的步骤如下：\n\n- 对于每个输入神经元，在其[感受野](receptive_field\u002F)（5×5 窗口）内计算膜电位。\n- 根据膜电位与脉冲频率成正比的关系，为每个输入神经元生成[脉冲序列](encoding\u002F)。\n- 对于每幅图像，在每个时间步长，根据输入脉冲及其关联的权重更新神经元的电位。\n- 首先发放脉冲的输出神经元会对其余输出神经元产生侧向抑制。\n- 模拟器检查是否有输出脉冲。\n\n### 结果\n该模拟器已针对二分类问题进行了测试。它可以扩展到任意数量的类别。两类图像如下所示：\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_a621802da8b9.png\" width=\"50\"\u002F>          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_83ff73a3e737.png\" width=\"50\"\u002F>\n\n每类图像分别向网络呈现了 1000 个时间单位。记录了神经元的活动情况。以下是输出神经元的电位随时间单位变化的图表。\n\n前 1000 个时间单位对应类别 1，接下来的 1000 个时间单位对应类别 2。红色线表示阈值电位。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_2530d9e7a197.png\" width=\"300\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_72eca0f4754d.png\" width=\"300\"\u002F> \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_d995859217ca.png\" width=\"300\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_a99b2bb6b6cd.png\" width=\"300\"\u002F>\n\n第一个输出神经元对类别 1 有反应，第二个对类别 2 有反应，而第三个和第四个对两类均无反应。因此，通过记录输出神经元的总脉冲数，我们可以确定该模式所属的类别。\n\n此外，为了演示多分类结果，该模拟器还对以下 6 幅图像（MNIST 数据集）进行了测试。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_eeb09cd83c0f.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_df92fc68ef38.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_6946913b3662.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_ae53e2445154.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_e52ecae0aac9.png\" width=\"50\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_35effaa49535.png\" width=\"50\"\u002F>\n\n每幅图像代表一个类别，并为每个类别分配了一个神经元。另外两个神经元被赋予随机权重。以下是每个神经元对所有呈现类别的响应。X 轴表示类别编号，Y 轴表示每次模拟中的脉冲数量。红色条表示该神经元脉冲最多的类别。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_fc501ee64f66.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_85eed3567d2d.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_4128fbb77a1f.jpg\" width=\"250\"\u002F> \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_103c33c4cb87.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_b706232bf849.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_2e5879343961.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_b1efaabe6c4e.jpg\" width=\"250\"\u002F> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_23481d468b93.jpg\" width=\"250\"\u002F>\n\n## [训练 SNN](training)\n在上一节中，我们假设网络已经训练完毕，即通过 STDP 学习到了权重，可以用来对模式进行分类。在这里，我们将探讨 STDP 的工作原理，以及在实现这一训练算法时需要注意的各项事项。\n\n### 时间依赖性突触可塑性\nSTDP 实际上是一种生物过程，大脑利用它来调整自身的神经连接（突触）。由于大脑无与伦比的学习效率已被人们认可数十年之久，这一规则被引入人工神经网络以训练神经网络。权重的调整基于以下两条规则：\n- 任何有助于突触后神经元发放脉冲的突触都应加强，即其权重应增加。\n- 不有助于突触后神经元发放脉冲的突触则应减弱，即其权重应减少。\n\n下面解释一下该算法的工作方式：\n\n考虑下图所示的情景\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_cd05ed395c78.jpg\" width=\"350\"\u002F>\n\u003C\u002Fp>\n\n四个人工神经元通过突触连接到一个神经元。每个突触前神经元以各自的频率发放脉冲，并通过相应的突触将脉冲传递出去。突触后神经元接收到的脉冲强度取决于连接突触的强度。当突触后神经元因输入脉冲而使膜电位升高并超过阈值时，就会发放一个脉冲。在突触后神经元发放脉冲的时刻，我们会监测哪些突触前神经元帮助其发放了脉冲。这可以通过观察哪些突触前神经元在突触后神经元发放脉冲之前就已经发送了脉冲来实现。这样，它们就通过提高膜电位促进了突触后神经元的发放，因此相应的突触会被加强。突触权重增加的幅度与突触后脉冲和突触前脉冲之间的时间差成反比，具体关系如图所示。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_5093b28e921e.jpg\" width=\"400\"\u002F>\n\u003C\u002Fp>\n\n### SNN 的生成特性\n脉冲神经网络的这一特性对于分析训练过程非常有用。如果将连接到某个输出层神经元的所有突触按适当的比例缩放，并重新排列成一幅图像，就能显示出该神经元学到了什么样的模式，以及它能够多么清晰地对该模式进行分类。例如，在用 MNIST 数据集训练网络之后，如果我们把连接到某个特定输出神经元的所有突触权重（共 784 个）按比例放大，并用这些加权后的数值构成一幅 28×28 
的灰度图像，就能得到该神经元所学习到的模式。这一特性将在后续展示结果时被使用。[此](training\u002Freconstruct.py)文件包含了一个从权重重建图像的函数。\n\n### 变量阈值\n在无监督学习中，训练一个模式激活程度（以MNIST为例即白色像素）差异较大的网络是非常困难的。激活程度较高的模式往往会在竞争性学习中占据优势，从而掩盖其他模式（这一问题将在后续演示）。因此，引入了这种归一化方法，将所有模式的激活水平拉低到同一水平。每个模式的阈值是根据其包含的激活数量来计算的：激活数量越多，阈值也就越高。[此](training\u002Fvar_th.py)文件包含了用于计算每张图像阈值的函数。\n\n### 侧向抑制\n在神经生物学中，侧向抑制是指被激活的神经元能够降低其邻近神经元活动的能力。侧向抑制会阻止动作电位从兴奋的神经元向其侧向邻近神经元传播，从而产生刺激上的对比效应，增强感官感知能力。这一特性也被称为“胜者通吃”（WTA）。最先被激活的神经元会抑制（降低膜电位）同一层中的其他神经元。\n\n## 3类数据集的训练\n以下是使用MNIST数据集中的3个类别（0-2），并配备5个输出神经元的SNN训练结果。我们将利用SNN的生成特性，通过连接到每个输出神经元的已训练权重重构图像，以观察网络对各个模式的学习效果。此外，我们还将查看每个输出神经元的膜电位随时间的变化曲线，以了解训练过程如何进行，并使该神经元仅对特定模式敏感。\n\n**神经元1**\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_0c6fe9e3dc4a.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_eff8f6fcb0f2.png\" width=\"70\"\u002F>\n\n**神经元2**\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_cd616acf6cad.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_982032c205a6.png\" width=\"70\"\u002F>\n\n\n**神经元3**\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_ac3885128beb.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_d14190693492.png\" width=\"70\"\u002F>\n\n\n**神经元4**\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_9dd5a672b58f.png\" width=\"300\"\u002F>            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_72ea8dc7a70a.png\" 
width=\"70\"\u002F>\n\n\n从这些结果中可以清楚地看到：神经元1学会了模式“1”，神经元2学会了“0”，神经元3表现为噪声，而神经元4则学会了模式“2”。以神经元1为例，在训练初期，由于权重随机分配，它会对所有模式都有反应。随着训练的进行，它逐渐只对模式“1”产生响应，而对于其他模式则处于抑制状态。观察神经元3时，我们可以得出结论，它对所有模式都有反应，因此可被视为噪声。因此，建议输出神经元的数量比类别数多出约20%。\n\n此外，“2”和“0”之间存在一定的重叠现象，这是竞争性学习中常见的问题。通过适当调整参数，可以有效消除这一问题。\n\n### 不当的训练\n如果不使用变量阈值进行归一化，就会出现某些模式掩盖其他模式的情况。以下是一个示例：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_readme_18fb9412f2a5.png\" width=\"100\"\u002F>\n\u003C\u002Fp>\n\n在此示例中，两种模式使用了相同的阈值电压，导致它们相互重叠。这种情况可以通过选择每张图像激活数量大致相同的数据集，或者对激活数量进行归一化来避免。\n\n## 参数设置\n从头构建脉冲神经网络并非易事。有许多参数需要调整和注意，而这些参数的组合更是复杂。其中对网络动态起关键作用的主要参数包括：\n- 学习率\n- 阈值电位\n- 权重初始化\n- 每个样本的脉冲数量\n- 权重范围\n\n我在[此处](https:\u002F\u002Fgithub.com\u002FShikhargupta\u002Fsnn-brian-mlp\u002Ftree\u002Fmaster\u002Fsimple_demo)的“参数分析”部分，展示了部分参数如何影响网络，以及应如何处理这些参数。\n\n## 贡献\n在本项目中，我得到了印度理工学院古瓦哈蒂分校同事Arpan Vyas的帮助。他进一步设计了一种针对简化SNN的硬件加速器架构，并将其部署在FPGA上，从而大幅缩短了训练时间。他的GitHub主页请见[这里](https:\u002F\u002Fgithub.com\u002Farpanvyas)。","# Spiking-Neural-Network 快速上手指南\n\n本指南旨在帮助开发者快速理解并运行基于 Python 的硬件高效脉冲神经网络（SNN）实现。该项目实现了适用于片上学习和预测的改进型学习规则，核心采用脉冲时间依赖可塑性（STDP）算法进行训练。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：推荐 Python 3.6 及以上版本\n*   **前置依赖**：\n    *   `numpy`：用于数值计算\n    *   `matplotlib`：用于绘制神经元电位和分类结果图表\n    *   `Pillow` (PIL)：用于图像处理（如需处理 MNIST 等图像数据）\n\n> **提示**：国内用户建议使用清华或阿里镜像源加速依赖安装。\n\n## 安装步骤\n\n1.  **克隆项目仓库**\n    首先将代码库克隆到本地：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network.git\n    cd Spiking-Neural-Network\n    ```\n\n2.  
**安装依赖包**\n    如果项目中包含 `requirements.txt`，请直接运行：\n    ```bash\n    pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n    若无该文件，请手动安装核心依赖：\n    ```bash\n    pip install numpy matplotlib pillow -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n本项目主要包含两个核心功能模块：**分类模拟**（假设权重已训练好）和**网络训练**（使用 STDP 算法）。\n\n### 1. 运行分类模拟器 (Classification)\n\n该模块使用“赢家通吃”（Winner-Takes-All）策略，利用预训练的权重对输入模式进行分类。\n\n*   **进入目录**：\n    ```bash\n    cd classification\n    ```\n*   **运行模拟**：\n    执行主脚本（通常为 `main.py` 或 `simulator.py`，具体文件名请查看目录下文件）：\n    ```bash\n    python main.py\n    ```\n    *流程说明*：程序将计算输入神经元的膜电位，生成脉冲序列，更新神经元状态，并通过侧向抑制机制确定最终输出的类别。结果将展示不同类别的神经元放电情况图表。\n\n### 2. 训练 SNN 网络 (Training)\n\n该模块演示如何使用 STDP 算法从头训练网络，使其能够识别特定模式（如 MNIST 数字）。\n\n*   **进入目录**：\n    ```bash\n    cd training\n    ```\n*   **关键组件说明**：\n    *   `stdp` 逻辑：根据突触前和突触后神经元的放电时间差调整权重。\n    *   `var_th.py`：实现可变阈值，用于归一化不同激活程度的输入图案，防止高激活图案主导训练。\n    *   `reconstruct.py`：利用 SNN 的生成特性，将训练后的权重重构为图像，以可视化神经元学到的特征。\n*   **运行训练示例**：\n    ```bash\n    python train_snn.py\n    ```\n    *(注：具体入口脚本名称请以 `training` 目录下的实际文件为准)*\n\n    *预期结果*：训练完成后，您将看到输出神经元的膜电位随时间变化的曲线，以及重构出的图像（例如：某个神经元专门学会了识别数字\"1\"，而另一个识别\"0\"）。建议设置的输出神经元数量比类别数多约 20%，以容纳噪声神经元并获得更好的分类效果。\n\n### 3. 
参数调优建议\n\n构建 SNN 需要精细调整以下参数，它们直接影响网络动力学：\n*   `Learning Rate` (学习率)\n*   `Threshold Potential` (阈值电位)\n*   `Weight Initialization` (权重初始化范围)\n*   `Number of Spikes Per Sample` (每个样本的脉冲数)\n\n如需深入分析参数影响，可参考项目关联仓库中的 `Parameter Analysis` 部分。","某嵌入式开发团队正致力于在低功耗微控制器上部署实时手势识别系统，以用于智能穿戴设备。\n\n### 没有 Spiking-Neural-Network 时\n- **能耗过高**：传统人工神经网络需要持续的浮点运算，导致电池供电的设备续航时间极短，无法满足全天候运行需求。\n- **硬件适配难**：现有的深度学习框架依赖重型后端，难以直接映射到资源受限的芯片上，无法实现真正的“片上学习”。\n- **时序信息丢失**：常规模型将动态手势视频帧视为静态图像处理，忽略了动作发生的时间先后顺序，降低了识别准确率。\n- **开发门槛高**：团队若要复现脉冲神经网络（SNN）的 STDP 学习规则，需从零编写底层神经元和突触逻辑，研发周期漫长。\n\n### 使用 Spiking-Neural-Network 后\n- **极致能效**：利用 Spiking-Neural-Network 的事件驱动特性，神经元仅在接收到脉冲时才计算，大幅降低功耗，显著延长设备续航。\n- **原生硬件友好**：该工具提供的纯 Python 实现包含了可硬件化的学习与预测规则，便于后续直接迁移至 FPGA 或专用神经形态芯片。\n- **精准时序建模**：通过内置的脉冲序列编码（Spike train）机制，自然捕捉手势动作的时间动态特征，提升了对复杂动作的区分度。\n- **快速原型验证**：借助其现成的神经元、突触及 STDP 训练模块，团队无需重复造轮子，即可快速模拟并验证“赢家通吃”分类策略的效果。\n\nSpiking-Neural-Network 让开发者能以低代码成本构建出兼具高能效与时序感知能力的神经形态应用，打通了从算法仿真到端侧部署的关键路径。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShikhargupta_Spiking-Neural-Network_0825ca6c.jpg","Shikhargupta","Shikhar Gupta","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FShikhargupta_769623c0.jpg",null,"Research","San Francisco, California","shikhargupta02","shikhar.gg","https:\u002F\u002Fgithub.com\u002FShikhargupta",[82],{"name":83,"color":84,"percentage":85},"Python","#3572A5",100,1187,296,"2026-04-14T02:44:49","Apache-2.0","","未说明",{"notes":93,"python":91,"dependencies":94},"该项目是一个用于硬件高效脉冲神经网络（SNN）的 Python 实现，旨在支持片上学习和预测。核心算法包括 Spike-Time Dependent Plasticity (STDP) 和侧向抑制（Winner-Takes-All）。README 中未明确列出具体的操作系统、Python 版本或第三方依赖库要求，仅提到项目包含神经元、突触、感受野和脉冲序列编码等模块。作者曾将简化版架构部署在 FPGA 
上以加速训练，但当前代码库主要作为算法仿真器使用。",[],[14,96],"其他",[98,99,100,101,102,103,104,105,106],"neuromorphic-hardware","neuromorphic","spiking-neural-networks","neural-network","spike-time-dependent-plasticity","python","synapse","mnist-classification","spike-trains","2026-03-27T02:49:30.150509","2026-04-16T01:44:45.680548",[110,115,120,125,130,135,140,145],{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},34975,"运行代码时出现 'IndexError: index 16 is out of bounds' 错误怎么办？","这通常是因为加载的图像尺寸与代码预期的不一致。该项目的示例通常使用 28x28 像素的图像。请检查您加载的图像尺寸，并将其调整为 28*28。此外，请确保图像类型（如灰度图）与示例一致，可以在加载后打印图像形状进行对比排查。","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F4",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},34976,"如何利用脉冲计数（spike count）对图像进行分类？","分类逻辑基于输出层中每个神经元的脉冲发放次数。对于特定输入图像，脉冲发放次数最多的那个神经元所对应的类别，即为该图像所属的类别。例如，如果神经元 1 针对模式 'X' 训练，神经元 2 针对 'O' 训练，当输入未知图像时，若神经元 1 发放脉冲最多，则判定图像属于 'X' 类。","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F2",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},34977,"运行 classify.py 时报错 'from receptive_field import rf' 找不到 'rf' 模块？","'rf' 函数确实存在，它位于 classification 目录下的 recep_field.py 文件中（注意文件名是 recep_field 而不是 receptive_field）。请检查您的导入路径是否正确指向了 `classification\u002Frecep_field.py`。","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F17",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},34978,"项目参考的学术论文是哪一篇？","该项目参考的论文链接为：https:\u002F\u002Fjivp-eurasipjournals.springeropen.com\u002Farticles\u002F10.1186\u002Fs13640-015-0059-4","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F16",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},34979,"运行时提示 'No module named numpy' 错误如何解决？","这说明您的系统中未安装 numpy 包。请使用 pip 安装该依赖，命令通常为：`pip install 
numpy`。安装完成后即可正常运行代码。","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F14",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},34980,"代码报错提示 '_neuron_' 类缺少 'Pth' 属性怎么办？","这是因为 neuron 类初始化时未定义阈值属性。请在 `neuron\u002Fneuron.py` 文件的 `__init__()` 方法中添加 `self.Pth = 25` 来初始化该属性，即可解决此错误。","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F8",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},34981,"遇到 'ImportError: cannot import name reconst' 错误如何处理？","这是一个多余的导入语句。`reconstruct` 模块中的 `reconst` 在分类任务中并无用处。解决方法是直接从代码中删除或注释掉 `from reconstruct import reconst` 这一行导入语句。","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F3",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},34982,"在哪里可以找到分类模块（classifier）的使用说明文档？","关于分类模块的所有必要信息和使用说明，都可以直接在项目根目录的 README.md 文件中找到，无需单独的文档。","https:\u002F\u002Fgithub.com\u002FShikhargupta\u002FSpiking-Neural-Network\u002Fissues\u002F1",[]]