[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-fangwei123456--spikingjelly":3,"tool-fangwei123456--spikingjelly":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":76,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":78,"languages":79,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":23,"env_os":96,"env_gpu":97,"env_ram":96,"env_deps":98,"category_tags":109,"github_topics":110,"view_count":23,"oss_zip_url":76,"oss_zip_packed_at":76,"status":16,"created_at":117,"updated_at":118,"faqs":119,"releases":150},3461,"fangwei123456\u002Fspikingjelly","spikingjelly","SpikingJelly is an open-source deep learning framework for Spiking Neural Network (SNN) based on PyTorch.","SpikingJelly 是一个基于 PyTorch 构建的开源深度学习框架，专为脉冲神经网络（SNN）设计。它致力于降低 SNN 的研究与开发门槛，解决了传统 SNN 实现复杂、缺乏统一工具链以及训练效率低等痛点，让开发者能够像搭建普通神经网络一样轻松构建和训练脉冲网络。\n\n这款工具非常适合人工智能研究人员、深度学习工程师以及对类脑计算感兴趣的高校师生使用。无论是进行前沿算法探索，还是复现学术论文，SpikingJelly 都能提供强有力的支持。其独特亮点在于深度集成了现代加速技术：最新版本的神经元模型（如 LIFNode）已引入 Triton 后端，显著提升了 GPU 上的运行效率；同时提供了灵活的 ANN-SNN 转换工具，方便将传统神经网络快速迁移为脉冲形式。此外，框架还内置了显存优化模块（memopt）、操作量计数器以及丰富的神经形态数据集接口，并拥有完善的中英文双语文档与教程，帮助用户高效上手，专注于算法创新而非底层细节。","# SpikingJelly\n\n![GitHub last commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Ffangwei123456\u002Fspikingjelly)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_13d664e1afd7.png)](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002Flatest)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fspikingjelly)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fspikingjelly)\n[![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fspikingjelly)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fspikingjelly)\n![repo size](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frepo-size\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub closed issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed-raw\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub pull requests](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub closed pull 
requests](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr-closed\u002Ffangwei123456\u002Fspikingjelly)\n![Visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_a14c2597124c.png)\n![GitHub forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub Repo stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub contributors](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Ffangwei123456\u002Fspikingjelly)\n\nEnglish | [中文(Chinese)](.\u002FREADME_cn.md)\n\n![demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_dee1d7558319.png)\n\nSpikingJelly is an open-source deep learning framework for Spiking Neural Network (SNN) based on [PyTorch](https:\u002F\u002Fpytorch.org\u002F).\n\nThe [documentation](https:\u002F\u002Fspikingjelly.readthedocs.io) of SpikingJelly is written in both English and Chinese.\n\n- [Changelog](#changelog)\n- [Installation](#installation)\n- [Build SNN In An Unprecedented Simple Way](#build-snn-in-an-unprecedented-simple-way)\n- [Fast And Handy ANN-SNN Conversion](#fast-and-handy-ann-snn-conversion)\n- [CUDA\u002FTriton-Enhanced Neuron](#cudatriton-enhanced-neuron)\n- [Device Supports](#device-supports)\n- [Neuromorphic Datasets Supports](#neuromorphic-datasets-supports)\n- [Tutorials](#tutorials)\n- [Publications and Citation](#publications-and-citation)\n- [Contribution](#contribution)\n- [About](#about)\n\n## Changelog\n\nWe are actively maintaining and improving SpikingJelly. Below are our future plans and highlights of each release.\n\n**Highlights**\n\nOur new work [Towards Lossless Memory-efficient Training of Spiking Neural Networks via Gradient Checkpointing and Spike Compression](https:\u002F\u002Fopenreview.net\u002Fforum?id=nrBJ0Uvj7c) was recently accepted by **ICLR 2026**! The automatic training memory optimization tool is available in `spikingjelly.activation_based.memopt`. 
Read [the tutorial](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh-cn\u002Flatest\u002Ftutorials\u002Fen\u002Fmemopt.html) for more information.\n\nIn the latest version (GitHub version):\n\n- `IFNode`, `LIFNode` and `ParametricLIFNode` are now equipped with [Triton](https:\u002F\u002Fgithub.com\u002Ftriton-lang\u002Ftriton) backends;\n- `FlexSN` is available for converting PyTorch spiking neuronal dynamics to Triton kernels;\n- `SpikingSelfAttention` and `QKAttention` are available;\n- `memopt` is available;\n- `nir_exchange` is available;\n- `op_counter` is available;\n- `spikingjelly.activation_based.layer`, `spikingjelly.activation_based.functional` and `spikingjelly.datasets` are refactored;\n- Dataset implementations are refactored;\n- Docs and tutorials are updated;\n- Conv-bn fusion functions in `spikingjelly.activation_based.functional` are deprecated; use PyTorch's [`fuse_conv_bn_eval`](https:\u002F\u002Fdocs.pytorch.org\u002Fdocs\u002Fstable\u002Fgenerated\u002Ftorch.nn.utils.fuse_conv_bn_eval.html) instead.\n\n**Planned**\n\nWe are going to release version `0.0.0.1.0` soon.\n\n- [x] Add [Triton](https:\u002F\u002Fgithub.com\u002Ftriton-lang\u002Ftriton) backend for further acceleration on GPU.\n- [x] Add a transpiler for converting PyTorch spiking neurons to Triton kernels, which will be more flexible than the existing [`auto_cuda`](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Ftree\u002Fmaster\u002Fspikingjelly\u002Factivation_based\u002Fcuda_kernel\u002Fauto_cuda) subpackage.\n- [x] Add spiking self-attention implementations.\n- [x] Update docs and tutorials.\n\nOther long-term plans include:\n\n- [x] Add [NIR](https:\u002F\u002Fgithub.com\u002Fneuromorphs\u002FNIR) support.\n- [x] Optimize training memory cost.\n- [ ] Accelerate on Huawei NPU.\n\nFor early-stage experimental features, see our companion project [flashsnn](https:\u002F\u002Fgithub.com\u002FAllenYolk\u002Fflash-snn). New ideas are prototyped in flashsnn before merging into SpikingJelly.\n\n**Version notes**\n\n- The odd version number is the developing version, updated with the GitHub\u002FOpenI repository. The even version number is the stable version and is available on PyPI.\n\n- The default doc is for the latest developing version. If you are using the stable version, do not forget to switch to the docs for the corresponding version.\n\n- From version `0.0.0.0.14`, modules including `clock_driven` and `event_driven` are renamed. Please refer to the tutorial [Migrate From Old Versions](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fmigrate_from_legacy.html).\n\n- If you use an old version of SpikingJelly, you may encounter some fatal bugs. 
Refer to [Bugs History with Releases](.\u002Fbugs.md) for more details.\n\n**Docs for different versions:**\n\n- [zero](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002Fzero\u002F)\n- [0.0.0.0.4](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.4\u002F#index-en)\n- [0.0.0.0.6](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.6\u002F#index-en)\n- [0.0.0.0.8](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.8\u002F#index-en)\n- [0.0.0.0.10](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.10\u002F#index-en)\n- [0.0.0.0.12](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.12\u002F#index-en)\n- [0.0.0.0.14](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002F#index-en)\n- [latest](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002Flatest\u002F#index-en)\n\n## Installation\n\nNote that SpikingJelly is based on PyTorch. Please make sure that you have installed [PyTorch, torchvision and torchaudio](https:\u002F\u002Fpytorch.org) before you install SpikingJelly. Note that the latest version of SpikingJelly requires `torch>=2.2.0` and is tested on `torch==2.7.1`.\n\n**Install the latest stable version from** [**PyPI**](https:\u002F\u002Fpypi.org\u002Fproject\u002Fspikingjelly\u002F):\n\n```bash\npip install spikingjelly\n```\n\n**Install the latest developing version from the source code**:\n\nFrom [GitHub](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly):\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly.git\ncd spikingjelly\npip install .\n```\n\nFrom [OpenI](https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenI\u002Fspikingjelly):\n\n```bash\ngit clone https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenI\u002Fspikingjelly.git\ncd spikingjelly\npip install .\n```\n\n**Optional Dependencies**\n\nTo enable the `cupy` backend, install [CuPy](https:\u002F\u002Fdocs.cupy.dev\u002Fen\u002Fstable\u002Finstall.html#installing-cupy).\n\n```bash\npip install cupy-cuda12x # for CUDA 12.x\npip install cupy-cuda11x # for CUDA 11.x\n```\n\nTo enable the `triton` backend, make sure that [Triton](https:\u002F\u002Fgithub.com\u002Ftriton-lang\u002Ftriton) is installed. Typically, `triton` is installed with PyTorch 2.X. We test the `triton` backend on `triton==3.3.1`.\n\n```bash\npip install triton==3.3.1\n```\n\nTo enable `nir_exchange`, install [NIR](https:\u002F\u002Fgithub.com\u002Fneuromorphs\u002FNIR) and [NIRTorch](https:\u002F\u002Fgithub.com\u002Fneuromorphs\u002FNIRTorch).\n\n```bash\npip install nir nirtorch\n```\n\n## Build SNN In An Unprecedented Simple Way\n\nSpikingJelly is user-friendly. Building an SNN with SpikingJelly is as simple as building an ANN in PyTorch:\n\n```python\nnn.Sequential(\n    layer.Flatten(),\n    layer.Linear(28 * 28, 10, bias=False),\n    neuron.LIFNode(tau=tau, surrogate_function=surrogate.ATan())\n)\n```\n\nThis simple network with a Poisson encoder can achieve 92% accuracy on the MNIST test dataset. Refer to the tutorial for more details. You can also run the following command in a terminal to train this model on MNIST:\n\n```bash\npython -m spikingjelly.activation_based.examples.lif_fc_mnist -tau 2.0 -T 100 -device cuda:0 -b 64 -epochs 100 -data-dir \u003CPATH to MNIST> -amp -opt adam -lr 1e-3 -j 8\n```\n\n
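To make the snippet above self-contained, here is a minimal sketch of running the network for `T` time steps (assuming the documented `encoding.PoissonEncoder` and `functional.reset_net` utilities; an inference-only illustration, not the full training script):\n\n```python\nimport torch\nimport torch.nn as nn\nfrom spikingjelly.activation_based import layer, neuron, surrogate, encoding, functional\n\nT, tau = 100, 2.0\nnet = nn.Sequential(\n    layer.Flatten(),\n    layer.Linear(28 * 28, 10, bias=False),\n    neuron.LIFNode(tau=tau, surrogate_function=surrogate.ATan())\n)\nencoder = encoding.PoissonEncoder()\nx = torch.rand([1, 1, 28, 28])  # a dummy MNIST-like image in [0, 1]\nfr = 0.\nfor _ in range(T):\n    fr += net(encoder(x))  # accumulate output spikes over T steps\nfr = fr \u002F T  # the output firing rate serves as the class score\nfunctional.reset_net(net)  # reset neuron states before the next sample\nprint(fr.argmax(1))  # predicted class index\n```\n\n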
## Fast And Handy ANN-SNN Conversion\n\nSpikingJelly implements a relatively general ANN-SNN conversion interface. Users can realize the conversion through PyTorch. What's more, users can customize the conversion mode.\n\n```python\nclass ANN(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.network = nn.Sequential(\n            nn.Conv2d(1, 32, 3, 1),\n            nn.BatchNorm2d(32, eps=1e-3),\n            nn.ReLU(),\n            nn.AvgPool2d(2, 2),\n\n            nn.Conv2d(32, 32, 3, 1),\n            nn.BatchNorm2d(32, eps=1e-3),\n            nn.ReLU(),\n            nn.AvgPool2d(2, 2),\n\n            nn.Conv2d(32, 32, 3, 1),\n            nn.BatchNorm2d(32, eps=1e-3),\n            nn.ReLU(),\n            nn.AvgPool2d(2, 2),\n\n            nn.Flatten(),\n            nn.Linear(32, 10)\n        )\n\n    def forward(self, x):\n        x = self.network(x)\n        return x\n```\n\nThis simple network with analog encoding can achieve 98.44% accuracy after conversion on the MNIST test dataset. Read the tutorial for more details. You can also run the following code in a Python terminal to train and convert a model on MNIST:\n\n```python\n>>> import spikingjelly.activation_based.ann2snn.examples.cnn_mnist as cnn_mnist\n>>> cnn_mnist.main()\n```\n\n## CUDA\u002FTriton-Enhanced Neuron\n\nSpikingJelly provides multiple backends for multi-step neurons. You can use the user-friendly `torch` backend for easy coding and debugging, and use the `cupy` or `triton` backend for faster training.\n\nThe following figure compares the execution time of the `torch` and `cupy` backends of multi-step LIF neurons (`float32`). Generally, the `triton` backend is even more efficient than the `cupy` backend.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_ba5c8ca061d9.png\" alt=\"exe_time_fb\"  \u002F>\n\n`float16` is also provided by the `cupy` and `triton` backends, and can be used in [automatic mixed precision training](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnotes\u002Famp_examples.html).\n\nTo use the `cupy` backend, please install [CuPy](https:\u002F\u002Fdocs.cupy.dev\u002Fen\u002Fstable\u002Finstall.html). To use the `triton` backend, please install [Triton](https:\u002F\u002Ftriton-lang.org\u002Fmain\u002Findex.html). Note that the `cupy` and `triton` backends only support GPU, while the `torch` backend supports both CPU and GPU.\n\n
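As a minimal, hedged sketch (assuming the documented `step_mode` and `backend` arguments of `activation_based` neurons), switching a multi-step LIF neuron between backends looks like this:\n\n```python\nimport torch\nfrom spikingjelly.activation_based import neuron\n\nx_seq = torch.rand([8, 4, 16])  # [T, N, features] input sequence\n# torch backend: easiest to debug, runs on CPU and GPU\nlif = neuron.LIFNode(tau=2.0, step_mode='m', backend='torch')\ny_seq = lif(x_seq)\n# cupy (or triton) backend: fused kernels, GPU only\nif torch.cuda.is_available():\n    lif_fast = neuron.LIFNode(tau=2.0, step_mode='m', backend='cupy').cuda()\n    y_seq_fast = lif_fast(x_seq.cuda())\n```\n\n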
## Device Supports\n\n- [x] Nvidia GPU\n- [x] CPU\n- [ ] Huawei NPU\n\nAs simple as using PyTorch.\n\n```python\n>>> net = nn.Sequential(layer.Flatten(), layer.Linear(28 * 28, 10, bias=False), neuron.LIFNode(tau=tau))\n>>> net = net.to(device) # Can be CPU or CUDA devices\n```\n\n## Neuromorphic Datasets Supports\n\nSpikingJelly includes the following neuromorphic datasets:\n\n| Dataset | Source |\n| -------------- | ------------------------------------------------------------ |\n| ASL-DVS | [Graph-based Object Classification for Neuromorphic Vision Sensing](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FBi_Graph-Based_Object_Classification_for_Neuromorphic_Vision_Sensing_ICCV_2019_paper.html) |\n| Bullying10K | [Bullying10K: A Large-Scale Neuromorphic Dataset towards Privacy-Preserving Bullying Recognition](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002F05ffe69463062b7f9fb506c8351ffdd7-Paper-Datasets_and_Benchmarks.pdf) |\n| CIFAR10-DVS | [CIFAR10-DVS: An Event-Stream Dataset for Object Classification](https:\u002F\u002Finternal-journal.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2017.00309\u002Ffull) |\n| DVS-Lip | [Multi-Grained Spatio-Temporal Features Perceived Network for Event-Based Lip-Reading](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FTan_Multi-Grained_Spatio-Temporal_Features_Perceived_Network_for_Event-Based_Lip-Reading_CVPR_2022_paper.html) |\n| DVS128 Gesture | [A Low Power, Fully Event-Based Gesture Recognition System](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FAmir_A_Low_Power_CVPR_2017_paper.html) |\n| ES-ImageNet | [ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2021.726582\u002Ffull) |\n| HARDVS | [HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.09648) |\n| N-Caltech101 | [Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2015.00437\u002Ffull) |\n| N-MNIST | [Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2015.00437\u002Ffull) |\n| Nav Gesture | [Event-Based Gesture Recognition With Dynamic Background Suppression Using Smartphone Computational Capabilities](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2020.00275\u002Ffull) |\n| Spiking Heidelberg Digits (SHD) | [The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTNNLS.2020.3044364) |\n| Spiking Speech Commands (SSC) | [The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTNNLS.2020.3044364) |\n| Speech Commands | [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03209) |\n\nUsers can use both the original event data and frame data integrated by SpikingJelly:\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader\nfrom spikingjelly.datasets.utils import pad_sequence_collate, 
padded_sequence_mask\nfrom spikingjelly.datasets import DVS128Gesture\n\n# Set the root directory for the dataset\nroot_dir = 'D:\u002Fdatasets\u002FDVS128Gesture'\n# Load event dataset\nevent_set = DVS128Gesture(root_dir, train=True, data_type='event')\nevent, label = event_set[0]\n# Print the keys and their corresponding values in the event data\nfor k in event.keys():\n    print(k, event[k])\n\n# t [80048267 80048277 80048278 ... 85092406 85092538 85092700]\n# x [49 55 55 ... 60 85 45]\n# y [82 92 92 ... 96 86 90]\n# p [1 0 0 ... 1 0 0]\n# label 0\n\n# Load a dataset with fixed frame numbers\nfixed_frames_number_set = DVS128Gesture(root_dir, train=True, data_type='frame', frames_number=20, split_by='number')\n# Randomly select two frames and print their shapes\nrand_index = torch.randint(low=0, high=fixed_frames_number_set.__len__(), size=[2])\nfor i in rand_index:\n    frame, label = fixed_frames_number_set[i]\n    print(f'frame[{i}].shape=[T, C, H, W]={frame.shape}')\n\n# frame[308].shape=[T, C, H, W]=(20, 2, 128, 128)\n# frame[453].shape=[T, C, H, W]=(20, 2, 128, 128)\n\n# Load a dataset with a fixed duration and print the shapes of the first 5 samples\nfixed_duration_frame_set = DVS128Gesture(root_dir, data_type='frame', duration=1000000, train=True)\nfor i in range(5):\n    x, y = fixed_duration_frame_set[i]\n    print(f'x[{i}].shape=[T, C, H, W]={x.shape}')\n\n# x[0].shape=[T, C, H, W]=(6, 2, 128, 128)\n# x[1].shape=[T, C, H, W]=(6, 2, 128, 128)\n# x[2].shape=[T, C, H, W]=(5, 2, 128, 128)\n# x[3].shape=[T, C, H, W]=(5, 2, 128, 128)\n# x[4].shape=[T, C, H, W]=(7, 2, 128, 128)\n\n# Create a data loader for the fixed duration frame dataset and print the shapes and sequence lengths\ntrain_data_loader = DataLoader(fixed_duration_frame_set, collate_fn=pad_sequence_collate, batch_size=5)\nfor x, y, x_len in train_data_loader:\n    print(f'x.shape=[N, T, C, H, W]={tuple(x.shape)}')\n    print(f'x_len={x_len}')\n    mask = padded_sequence_mask(x_len)  # mask.shape = [T, N]\n    print(f'mask=\\n{mask.t().int()}')\n    break\n\n# x.shape=[N, T, C, H, W]=(5, 7, 2, 128, 128)\n# x_len=tensor([6, 6, 5, 5, 7])\n# mask=\n# tensor([[1, 1, 1, 1, 1, 1, 0],\n#         [1, 1, 1, 1, 1, 1, 0],\n#         [1, 1, 1, 1, 1, 0, 0],\n#         [1, 1, 1, 1, 1, 0, 0],\n#         [1, 1, 1, 1, 1, 1, 1]], dtype=torch.int32)\n```\n\nMore datasets will be included in the future.\n\nIf a dataset's download link is not available, users can download it from the [OpenI mirror](https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenI\u002Fspikingjelly\u002Fdatasets?type=0).\n\nAll datasets saved in the OpenI mirror are permitted by their licenses or the authors' agreements.\n\n## Tutorials\n\nSpikingJelly provides elaborate tutorials. 
Here are some tutorials:\n\n| Figure | Tutorial |\n| ------------------------------------------------------------ | ------------------------------------------------------------ |\n| ![basic_concept](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_71ff43fcf60d.png) | [Basic Conception](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fbasic_concept.html) |\n| ![neuron](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_be7f416878e7.png) | [Neuron](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fneuron.html) |\n| ![lif_fc_mnist](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_8bfd521d9797.png) | [Single Fully Connected Layer SNN to Classify MNIST](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Flif_fc_mnist.html) |\n| ![conv_fashion_mnist](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_03a5d0e60449.png) | [Convolutional SNN to Classify FMNIST](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fconv_fashion_mnist.html) |\n| ![ann2snn](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_64b4daa96681.png) | [ANN2SNN](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fann2snn.html) |\n| ![neuromorphic_datasets](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_8a8a6c0401c1.gif) | [Neuromorphic Datasets Processing](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fneuromorphic_datasets.html) |\n| ![classify_dvsg](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_2c4550f9d20c.png) | [Classify DVS Gesture](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fclassify_dvsg.html) |\n| ![recurrent_connection_and_stateful_synapse](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_294a64be83ab.png) | [Recurrent Connection and Stateful Synapse](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Frecurrent_connection_and_stateful_synapse.html) |\n| ![stdp_learning](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_d992898a60b6.png) | [STDP Learning](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fstdp.html) |\n| ![reinforcement_learning](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_6b23d3d62047.png) | [Reinforcement Learning](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh-cn\u002Flatest\u002Ftutorials\u002Fcn\u002Filc_san.html) |\n\nOther tutorials that are not listed here are also available in the [documentation](https:\u002F\u002Fspikingjelly.readthedocs.io).\n\n[ZhenyuZhao](https:\u002F\u002Fgithub.com\u002F15947470421) provides [jupyter tutorial notebooks in Chinese](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Ftree\u002F8932ac0668fe19b3efd0afedb3ca454cd8c126d3\u002Fcommunity_tutorials\u002Fjupyter\u002Fchinese).\n\n## Publications and Citation\n\nPublications using SpikingJelly are recorded in 
[Publications](.\u002Fpublications.md). If you use SpikingJelly in your paper, you can also add it to this table via a pull request.\n\nIf you use SpikingJelly in your work, please cite it as follows:\n\n```bibtex\n@article{\ndoi:10.1126\u002Fsciadv.adi1480,\nauthor = {Wei Fang  and Yanqi Chen  and Jianhao Ding  and Zhaofei Yu  and Timothée Masquelier  and Ding Chen  and Liwei Huang  and Huihui Zhou  and Guoqi Li  and Yonghong Tian },\ntitle = {SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence},\njournal = {Science Advances},\nvolume = {9},\nnumber = {40},\npages = {eadi1480},\nyear = {2023},\ndoi = {10.1126\u002Fsciadv.adi1480},\nURL = {https:\u002F\u002Fwww.science.org\u002Fdoi\u002Fabs\u002F10.1126\u002Fsciadv.adi1480},\neprint = {https:\u002F\u002Fwww.science.org\u002Fdoi\u002Fpdf\u002F10.1126\u002Fsciadv.adi1480},\nabstract = {Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and high integration of processing neuromorphic datasets and deployment. In this work, we present the SpikingJelly framework to address the aforementioned dilemma. We contribute a full-stack toolkit for preprocessing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low costs through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing. Motivation and introduction of the software framework SpikingJelly for spiking deep learning.}}\n```\n\n## Contribution\n\nYou can read the issues to learn about the problems to be solved and the latest development plans. We welcome all users to join the discussion of development plans, solve issues, and send pull requests.\n\nNot all API documents are written in both English and Chinese. 
We welcome users to complete translation (from English to Chinese or from Chinese to English).\n\nRead the [Contributing Guide](.\u002FCONTRIBUTING.md) for more information.\n\n## About\n\n### Institutions\n\n[Multimedia Learning Group, Institute of Digital Media (NELVT), Peking University](https:\u002F\u002Fpkuml.org\u002F) and [Peng Cheng Laboratory](http:\u002F\u002Fwww.szpclab.com\u002F) are the main institutions behind the development of SpikingJelly.\n\n\u003Cp align=\"left\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_5752f60375c8.png\" alt=\"PKU\" width=\"160\" \u002F>\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_467155d44732.png\" alt=\"PCL\" width=\"160\" \u002F>\n\u003C\u002Fp>\n\n### Main Developers\n\nSpikingJelly has been developed and maintained by multiple main developers over time.\n\n**2024.07~Now**\n\n[Yifan Huang](https:\u002F\u002Fgithub.com\u002FAllenYolk), [Peng Xue](https:\u002F\u002Fgithub.com\u002FPengXue0812)\n\n**2019.12~2024.06**\n\n[Wei Fang](https:\u002F\u002Fgithub.com\u002Ffangwei123456), [Yanqi Chen](https:\u002F\u002Fgithub.com\u002FYanqi-Chen), [Jianhao Ding](https:\u002F\u002Fgithub.com\u002FDingJianhao), [Ding Chen](https:\u002F\u002Fgithub.com\u002Flucifer2859), [Liwei Huang](https:\u002F\u002Fgithub.com\u002FGrasshlw)\n\n### All Thanks to Our Contributors\n\nThe list of contributors can be found [in the contributor page](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fgraphs\u002Fcontributors).\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_8fae2cb3e844.png\" alt=\"contributors\" \u002F>\n\u003C\u002Fa>\n\n\u003Cp align=\"right\">\u003Ca href=\"#top\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_246ec82a988f.png\" height=\"50px\" alt=\"to-top\" \u002F>\u003C\u002Fa>\u003C\u002Fp>\n","# SpikingJelly\n\n![GitHub 最后提交](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Ffangwei123456\u002Fspikingjelly)\n[![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_13d664e1afd7.png)](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002Flatest)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fspikingjelly)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fspikingjelly)\n[![PyPI - Python 版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fspikingjelly)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fspikingjelly)\n![仓库大小](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frepo-size\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub 问题数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub 已关闭问题数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed-raw\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub 拉取请求数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub 已关闭拉取请求数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr-closed\u002Ffangwei123456\u002Fspikingjelly)\n![访问量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_a14c2597124c.png)\n![GitHub 
分支数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub 仓库星标数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ffangwei123456\u002Fspikingjelly)\n![GitHub 贡献者数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Ffangwei123456\u002Fspikingjelly)\n\nEnglish | [中文(Chinese)](.\u002FREADME_cn.md)\n\n![demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_dee1d7558319.png)\n\nSpikingJelly 是一个基于 [PyTorch](https:\u002F\u002Fpytorch.org\u002F) 的开源脉冲神经网络（SNN）深度学习框架。\n\nSpikingJelly 的 [文档](https:\u002F\u002Fspikingjelly.readthedocs.io) 同时提供英文和中文版本。\n\n- [更新日志](#changelog)\n- [安装](#installation)\n- [以空前简单的方式构建 SNN](#build-snn-in-an-unprecedented-simple-way)\n- [快速便捷的 ANN-SNN 转换](#fast-and-handy-ann-snn-conversion)\n- [CUDA\u002FTriton 加速的神经元](#cudatriton-enhanced-neuron)\n- [设备支持](#device-supports)\n- [神经形态数据集支持](#neuromorphic-datasets-supports)\n- [教程](#tutorials)\n- [论文与引用](#publications-and-citation)\n- [贡献](#contribution)\n- [关于](#about)\n\n## 更新日志\n\n我们正在积极维护和改进 SpikingJelly。以下是我们的未来计划以及每次发布的主要亮点。\n\n**亮点**\n\n我们的新工作 [通过梯度检查点和脉冲压缩实现无损、高效的脉冲神经网络训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=nrBJ0Uvj7c) 最近被 **ICLR 2026** 接受！自动训练内存优化工具现已在 `spikingjelly.activation_based.memopt` 中可用。更多信息请参阅 [教程](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh-cn\u002Flatest\u002Ftutorials\u002Fen\u002Fmemopt.html)。\n\n在最新版本（GitHub 版本）中：\n\n- `IFNode`、`LIFNode` 和 `ParametricLIFNode` 现已配备 [Triton](https:\u002F\u002Fgithub.com\u002Ftriton-lang\u002Ftriton) 后端；\n- `FlexSN` 可用于将 PyTorch 脉冲神经元动力学转换为 Triton 内核；\n- `SpikingSelfAttention` 和 `QKAttention` 已上线；\n- `memopt` 已上线；\n- `nir_exchange` 已上线；\n- `op_counter` 已上线；\n- `spikingjelly.activation_based.layer`、`spikingjelly.activation_based.functional` 和 `spikingjelly.datasets` 已重构；\n- 数据集实现已重构；\n- 文档和教程已更新；\n- `spikingjelly.activation_based.functional` 中的卷积-批归一化融合函数已被弃用；请改用 PyTorch 的 [`fuse_conv_bn_eval`](https:\u002F\u002Fdocs.pytorch.org\u002Fdocs\u002Fstable\u002Fgenerated\u002Ftorch.nn.utils.fuse_conv_bn_eval.html)。\n\n**计划中**\n\n我们即将发布版本 `0.0.0.1.0`。\n\n- [x] 添加 [Triton](https:\u002F\u002Fgithub.com\u002Ftriton-lang\u002Ftriton) 后端，进一步加速 GPU 上的计算。\n- [x] 添加一个编译器，用于将 PyTorch 脉冲神经元转换为 Triton 内核，这将比现有的 [`auto_cuda`](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Ftree\u002Fmaster\u002Fspikingjelly\u002Factivation_based\u002Fcuda_kernel\u002Fauto_cuda) 子包更加灵活。\n- [x] 添加脉冲自注意力实现。\n- [x] 更新文档和教程。\n\n其他长期计划包括：\n\n- [x] 添加 [NIR](https:\u002F\u002Fgithub.com\u002Fneuromorphs\u002FNIR) 支持。\n- [x] 优化训练内存消耗。\n- [ ] 在华为 NPU 上加速。\n\n对于早期实验性功能，请参阅我们的姊妹项目 [flashsnn](https:\u002F\u002Fgithub.com\u002FAllenYolk\u002Fflash-snn)。新的想法会在 flashsnn 中进行原型验证，然后再合并到 SpikingJelly 中。\n\n**版本说明**\n\n- 奇数版本号为开发版，随 GitHub\u002FOpenI 仓库同步更新。偶数版本号为稳定版，可在 PyPI 上获取。\n\n- 默认文档适用于最新的开发版。如果您使用的是稳定版，请务必切换到对应版本的文档。\n\n- 自版本 `0.0.0.0.14` 起，`clock_driven` 和 `event_driven` 等模块已被重命名。请参考教程 [从旧版本迁移](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fmigrate_from_legacy.html)。\n\n- 如果您使用的是较旧版本的 SpikingJelly，可能会遇到一些严重错误。更多详情请参阅 [发行版中的错误历史](.\u002Fbugs.md)。\n\n**不同版本的文档：**\n\n- [zero](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002Fzero\u002F)\n- [0.0.0.0.4](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.4\u002F#index-en)\n- [0.0.0.0.6](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.6\u002F#index-en)\n- 
[0.0.0.0.8](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.8\u002F#index-en)\n- [0.0.0.0.10](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.10\u002F#index-en)\n- [0.0.0.0.12](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.12\u002F#index-en)\n- [0.0.0.0.14](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002F#index-en)\n- [最新版](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002Flatest\u002F#index-en)\n\n## 安装\n\n请注意，SpikingJelly 基于 PyTorch。在安装 SpikingJelly 之前，请确保已安装 [PyTorch、torchvision 和 torchaudio](https:\u002F\u002Fpytorch.org)。需要注意的是，SpikingJelly 的最新版本要求 `torch>=2.2.0`，并在 `torch==2.7.1` 上进行了测试。\n\n**从 [PyPI](https:\u002F\u002Fpypi.org\u002Fproject\u002Fspikingjelly\u002F) 安装最新稳定版**：\n\n```bash\npip install spikingjelly\n```\n\n**从源代码安装最新开发版**：\n\n从 [GitHub](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly)：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly.git\ncd spikingjelly\npip install .\n```\n\n从 [OpenI](https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenI\u002Fspikingjelly)：\n\n```bash\ngit clone https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenI\u002Fspikingjelly.git\ncd spikingjelly\npip install .\n```\n\n**可选依赖**\n\n要启用 `cupy` 后端，请安装 [CuPy](https:\u002F\u002Fdocs.cupy.dev\u002Fen\u002Fstable\u002Finstall.html#installing-cupy)。\n\n```bash\npip install cupy-cuda12x # 适用于 CUDA 12.x\npip install cupy-cuda11x # 适用于 CUDA 11.x\n```\n\n要启用 `triton` 后端，请确保已安装 [Triton](https:\u002F\u002Fgithub.com\u002Ftriton-lang\u002Ftriton)。通常，Triton 会随 PyTorch 2.X 一起安装。我们已在 `triton==3.3.1` 上测试了 Triton 后端。\n\n```bash\npip install triton==3.3.1\n```\n\n要启用 `nir_exchange`，请安装 [NIR](https:\u002F\u002Fgithub.com\u002Fneuromorphs\u002FNIR) 和 [NIRTorch](https:\u002F\u002Fgithub.com\u002Fneuromorphs\u002FNIRTorch)。\n\n```bash\npip install nir nirtorch\n```\n\n## 以空前简单的方式构建 SNN\n\nSpikingJelly 使用起来非常友好。使用 SpikingJelly 构建 SNN 就像在 PyTorch 中构建 ANN 一样简单：\n\n```python\nnn.Sequential(\n    layer.Flatten(),\n    layer.Linear(28 * 28, 10, bias=False),\n    neuron.LIFNode(tau=tau, surrogate_function=surrogate.ATan())\n)\n```\n\n这个简单的网络配合泊松编码器，在 MNIST 测试数据集上可以达到 92% 的准确率。更多详细信息请参阅教程。您也可以在终端中运行以下命令来训练 MNIST 分类任务：\n\n```bash\npython -m spikingjelly.activation_based.examples.lif_fc_mnist -tau 2.0 -T 100 -device cuda:0 -b 64 -epochs 100 -data-dir \u003CMNIST 数据路径> -amp -opt adam -lr 1e-3 -j 8\n```\n\n## 快速便捷的 ANN-SNN 转换\n\nSpikingJelly 实现了一个相对通用的 ANN-SNN 转换接口。用户可以通过 PyTorch 实现转换，并且还可以自定义转换模式。\n\n```python\nclass ANN(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.network = nn.Sequential(\n            nn.Conv2d(1, 32, 3, 1),\n            nn.BatchNorm2d(32, eps=1e-3),\n            nn.ReLU(),\n            nn.AvgPool2d(2, 2),\n\n            nn.Conv2d(32, 32, 3, 1),\n            nn.BatchNorm2d(32, eps=1e-3),\n            nn.ReLU(),\n            nn.AvgPool2d(2, 2),\n\n            nn.Conv2d(32, 32, 3, 1),\n            nn.BatchNorm2d(32, eps=1e-3),\n            nn.ReLU(),\n            nn.AvgPool2d(2, 2),\n\n            nn.Flatten(),\n            nn.Linear(32, 10)\n        )\n\n    def forward(self, x):\n        x = self.network(x)\n        return x\n```\n\n这个简单的网络采用模拟编码，在转换后于 MNIST 测试数据集上可达到 98.44% 的准确率。更多细节请参阅教程。您也可以在 Python 终端中运行以下代码来使用转换后的模型进行 MNIST 分类训练：\n\n```python\n>>> import spikingjelly.activation_based.ann2snn.examples.cnn_mnist as cnn_mnist\n>>> cnn_mnist.main()\n```\n\n## CUDA\u002FTriton 加速的神经元\n\nSpikingJelly 
为多步神经元提供了多种后端。您可以使用友好的 `torch` 后端进行轻松的编码和调试，也可以使用 `cupy` 或 `triton` 后端以获得更快的训练速度。\n\n下图比较了多步 LIF 神经元（`float32`）的 `torch` 和 `cupy` 后端的执行时间。通常情况下，`triton` 后端比 `cupy` 后端更加高效。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_ba5c8ca061d9.png\" alt=\"exe_time_fb\"  \u002F>\n\n`cupy` 和 `triton` 后端还支持 `float16` 数据类型，可用于 [自动混合精度训练](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnotes\u002Famp_examples.html)。\n\n要使用 `cupy` 后端，请安装 [CuPy](https:\u002F\u002Fdocs.cupy.dev\u002Fen\u002Fstable\u002Finstall.html)。要使用 `triton` 后端，请安装 [Triton](https:\u002F\u002Ftriton-lang.org\u002Fmain\u002Findex.html)。需要注意的是，`cupy` 和 `triton` 后端仅支持 GPU，而 `torch` 后端则同时支持 CPU 和 GPU。\n\n## 设备支持\n\n- [x] Nvidia GPU\n- [x] CPU\n- [ ] Huawei NPU\n\n使用起来就像使用 PyTorch 一样简单。\n\n```python\n>>> net = nn.Sequential(layer.Flatten(), layer.Linear(28 * 28, 10, bias=False), neuron.LIFNode(tau=tau))\n>>> net = net.to(device) # 可以是 CPU 或 CUDA 设备\n```\n\n## 神经形态数据集支持\n\nSpikingJelly 包含以下神经形态数据集：\n\n| 数据集 | 来源 |\n| -------------- | ------------------------------------------------------------ |\n| ASL-DVS | [基于图的神经形态视觉传感目标分类](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FBi_Graph-Based_Object_Classification_for_Neuromorphic_Vision_Sensing_ICCV_2019_paper.html) |\n| Bullying10K | [Bullying10K：面向隐私保护的欺凌识别的大规模神经形态数据集](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002F05ffe69463062b7f9fb506c8351ffdd7-Paper-Datasets_and_Benchmarks.pdf) |\n| CIFAR10-DVS | [CIFAR10-DVS：用于目标分类的事件流数据集](https:\u002F\u002Finternal-journal.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2017.00309\u002Ffull) |\n| DVS-Lip | [基于事件的唇读多粒度时空特征感知网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FTan_Multi-Grained_Spatio-Temporal_Features_Perceived_Network_for_Event-Based_Lip-Reading_CVPR_2022_paper.html) |\n| DVS128 Gesture | [一种低功耗、完全基于事件的手势识别系统](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FAmir_A_Low_Power_CVPR_2017_paper.html) |\n| ES-ImageNet | [ES-ImageNet：用于脉冲神经网络的百万级事件流分类数据集](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2021.726582\u002Ffull) |\n| HARDVS | [HARDVS：利用动态视觉传感器重新审视人类活动识别](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.09648) |\n| N-Caltech101 | [使用扫视将静态图像数据集转换为脉冲神经形态数据集](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2015.00437\u002Ffull) |\n| N-MNIST | [使用扫视将静态图像数据集转换为脉冲神经形态数据集](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2015.00437\u002Ffull) |\n| Nav Gesture | [利用智能手机计算能力实现背景抑制的基于事件的手势识别](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffnins.2020.00275\u002Ffull) |\n| 海德堡脉冲数字数据集（SHD） | [海德堡脉冲数据集：用于脉冲神经网络系统性评估的数据集](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTNNLS.2020.3044364) |\n| 脉冲语音命令数据集（SSC） | [海德堡脉冲数据集：用于脉冲神经网络系统性评估的数据集](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTNNLS.2020.3044364) |\n| 语音命令 | [语音命令：用于有限词汇量语音识别的数据集](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03209) |\n\n用户可以使用原始事件数据以及由 SpikingJelly 整合的帧数据：\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader\nfrom spikingjelly.datasets.utils import pad_sequence_collate, padded_sequence_mask\nfrom spikingjelly.datasets import DVS128Gesture\n\n# 设置数据集的根目录\nroot_dir = 'D:\u002Fdatasets\u002FDVS128Gesture'\n# 加载事件数据集\nevent_set = DVS128Gesture(root_dir, train=True, data_type='event')\nevent, label = event_set[0]\n# 打印事件数据中各键及其对应的值\nfor k in event.keys():\n  
  print(k, event[k])\n\n# t [80048267 80048277 80048278 ... 85092406 85092538 85092700]\n# x [49 55 55 ... 60 85 45]\n# y [82 92 92 ... 96 86 90]\n# p [1 0 0 ... 1 0 0]\n# label 0\n\n# 加载固定帧数的数据集\nfixed_frames_number_set = DVS128Gesture(root_dir, train=True, data_type='frame', frames_number=20, split_by='number')\n# 随机选取两帧并打印其形状\nrand_index = torch.randint(low=0, high=fixed_frames_number_set.__len__(), size=[2])\nfor i in rand_index:\n    frame, label = fixed_frames_number_set[i]\n    print(f'frame[{i}].shape=[T, C, H, W]={frame.shape}')\n\n# frame[308].shape=[T, C, H, W]=(20, 2, 128, 128)\n# frame[453].shape=[T, C, H, W]=(20, 2, 128, 128)\n\n# 加载固定时长的帧数据集，并打印前5个样本的形状\nfixed_duration_frame_set = DVS128Gesture(root_dir, data_type='frame', duration=1000000, train=True)\nfor i in range(5):\n    x, y = fixed_duration_frame_set[i]\n    print(f'x[{i}].shape=[T, C, H, W]={x.shape}')\n\n# x[0].shape=[T, C, H, W]=(6, 2, 128, 128)\n# x[1].shape=[T, C, H, W]=(6, 2, 128, 128)\n# x[2].shape=[T, C, H, W]=(5, 2, 128, 128)\n# x[3].shape=[T, C, H, W]=(5, 2, 128, 128)\n# x[4].shape=[T, C, H, W]=(7, 2, 128, 128)\n\n# 为固定时长的帧数据集创建数据加载器，并打印形状和序列长度\ntrain_data_loader = DataLoader(fixed_duration_frame_set, collate_fn=pad_sequence_collate, batch_size=5)\nfor x, y, x_len in train_data_loader:\n    print(f'x.shape=[N, T, C, H, W]={tuple(x.shape)}')\n    print(f'x_len={x_len}')\n    mask = padded_sequence_mask(x_len)  # mask.shape = [T, N]\n    print(f'mask=\\n{mask.t().int()}')\n    break\n\n# x.shape=[N, T, C, H, W]=(5, 7, 2, 128, 128)\n# x_len=tensor([6, 6, 5, 5, 7])\n# mask=\n# tensor([[1, 1, 1, 1, 1, 1, 0],\n#         [1, 1, 1, 1, 1, 1, 0],\n#         [1, 1, 1, 1, 1, 0, 0],\n#         [1, 1, 1, 1, 1, 0, 0],\n#         [1, 1, 1, 1, 1, 1, 1]], dtype=torch.int32)\n```\n\n未来还将加入更多数据集。\n\n如果部分用户无法访问某些数据集的下载链接，可以从 [OpenI 镜像站](https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenI\u002Fspikingjelly\u002Fdatasets?type=0) 下载。\n\nOpenI 镜像站中保存的所有数据集均已获得许可或作者同意。\n\n## 教程\n\nSpikingJelly 提供了详尽的教程。以下是一些教程：\n\n| 图片 | 教程 |\n| ------------------------------------------------------------ | ------------------------------------------------------------ |\n| ![basic_concept](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_71ff43fcf60d.png) | [基本概念](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fbasic_concept.html) |\n| ![neuron](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_be7f416878e7.png) | [神经元](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fneuron.html) |\n| ![lif_fc_mnist](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_8bfd521d9797.png) | [单层全连接 SNN 用于分类 MNIST](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Flif_fc_mnist.html) |\n| ![conv_fashion_mnist](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_03a5d0e60449.png) | [卷积 SNN 用于分类 FMNIST](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fconv_fashion_mnist.html) |\n| ![ann2snn](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_64b4daa96681.png) | [ANN2SNN](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fann2snn.html) |\n| 
![neuromorphic_datasets](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_8a8a6c0401c1.gif) | [神经形态数据集处理](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fneuromorphic_datasets.html) |\n| ![classify_dvsg](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_2c4550f9d20c.png) | [分类 DVS 手势](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fclassify_dvsg.html) |\n| ![recurrent_connection_and_stateful_synapse](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_294a64be83ab.png) | [循环连接与状态突触](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Frecurrent_connection_and_stateful_synapse.html) |\n| ![stdp_learning](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_d992898a60b6.png) | [STDP 学习](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002F0.0.0.0.14\u002Factivation_based_en\u002Fstdp.html) |\n| ![reinforcement_learning](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_6b23d3d62047.png) | [强化学习](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh-cn\u002Flatest\u002Ftutorials\u002Fcn\u002Filc_san.html) |\n\n此处未列出的其他教程也可在 [文档](https:\u002F\u002Fspikingjelly.readthedocs.io) 中找到。\n\n[ZhenyuZhao](https:\u002F\u002Fgithub.com\u002F15947470421) 提供了 [中文 Jupyter 教程笔记本](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Ftree\u002F8932ac0668fe19b3efd0afedb3ca454cd8c126d3\u002Fcommunity_tutorials\u002Fjupyter\u002Fchinese)。\n\n## 出版物与引用\n\n使用 SpikingJelly 的出版物记录在 [出版物](.\u002Fpublications.md) 中。如果您在论文中使用了 SpikingJelly，也可以通过拉取请求将其添加到此表格中。\n\n如果您在工作中使用了 SpikingJelly，请按以下方式引用：\n\n```bibtex\n@article{\ndoi:10.1126\u002Fsciadv.adi1480,\nauthor = {Wei Fang  and Yanqi Chen  and Jianhao Ding  and Zhaofei Yu  and Timothée Masquelier  and Ding Chen  and Liwei Huang  and Huihui Zhou  and Guoqi Li  and Yonghong Tian },\ntitle = {SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence},\njournal = {Science Advances},\nvolume = {9},\nnumber = {40},\npages = {eadi1480},\nyear = {2023},\ndoi = {10.1126\u002Fsciadv.adi1480},\nURL = {https:\u002F\u002Fwww.science.org\u002Fdoi\u002Fabs\u002F10.1126\u002Fsciadv.adi1480},\neprint = {https:\u002F\u002Fwww.science.org\u002Fdoi\u002Fpdf\u002F10.1126\u002Fsciadv.adi1480},\nabstract = {Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and high integration of processing neuromorphic datasets and deployment. In this work, we present the SpikingJelly framework to address the aforementioned dilemma. We contribute a full-stack toolkit for preprocessing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low costs through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for 
synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing. Motivation and introduction of the software framework SpikingJelly for spiking deep learning.}}\n```\n\n## 贡献\n\n您可以阅读问题列表，了解待解决的问题及最新的开发计划。我们欢迎所有用户参与开发计划的讨论、解决问题并提交拉取请求。\n\n并非所有 API 文档都同时提供英文和中文版本。我们欢迎用户完成翻译工作（从英文到中文或从中文到英文）。\n\n更多信息请参阅 [贡献指南](.\u002FCONTRIBUTING.md)。\n\n## 关于\n\n### 机构\n\n[北京大学数字媒体研究所多媒体学习组 (NELVT)](https:\u002F\u002Fpkuml.org\u002F) 和 [鹏城实验室](http:\u002F\u002Fwww.szpclab.com\u002F) 是 SpikingJelly 开发的主要机构。\n\n\u003Cp align=\"left\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_5752f60375c8.png\" alt=\"PKU\" width=\"160\" \u002F>\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_467155d44732.png\" alt=\"PCL\" width=\"160\" \u002F>\n\u003C\u002Fp>\n\n### 主要开发者\n\nSpikingJelly 历年来由多位主要开发者负责开发和维护。\n\n**2024.07~至今**\n\n[Yifan Huang](https:\u002F\u002Fgithub.com\u002FAllenYolk), [Peng Xue](https:\u002F\u002Fgithub.com\u002FPengXue0812)\n\n**2019.12~2024.06**\n\n[Wei Fang](https:\u002F\u002Fgithub.com\u002Ffangwei123456), [Yanqi Chen](https:\u002F\u002Fgithub.com\u002FYanqi-Chen), [Jianhao Ding](https:\u002F\u002Fgithub.com\u002FDingJianhao), [Ding Chen](https:\u002F\u002Fgithub.com\u002Flucifer2859), [Liwei Huang](https:\u002F\u002Fgithub.com\u002FGrasshlw)\n\n### 感谢所有贡献者\n\n贡献者名单可在 [贡献者页面](https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fgraphs\u002Fcontributors) 中找到。\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_8fae2cb3e844.png\" alt=\"contributors\" \u002F>\n\u003C\u002Fa>\n\n\u003Cp align=\"right\">\u003Ca href=\"#top\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_readme_246ec82a988f.png\" height=\"50px\" alt=\"回到顶部\" \u002F>\u003C\u002Fa>\u003C\u002Fp>","# SpikingJelly 快速上手指南\n\nSpikingJelly 是一个基于 PyTorch 的开源脉冲神经网络（SNN）深度学习框架。它提供了极简的 SNN 构建方式、高效的 ANN-SNN 转换工具以及多种硬件加速后端。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：建议 Python 3.8+\n*   **核心依赖**：\n    *   **PyTorch**: 必须预先安装。最新版本要求 `torch>=2.2.0` (已在 `torch==2.7.1` 上测试通过)。\n    *   **相关库**: 建议同时安装 `torchvision` 和 `torchaudio`。\n*   **可选加速依赖**（根据需求安装）：\n    *   **CuPy**: 用于启用 GPU 加速后端 (`cupy-cuda11x` 或 `cupy-cuda12x`)。\n    *   **Triton**: 用于启用更高效的 Triton 后端 (通常随 PyTorch 2.x 自动安装，建议版本 `triton==3.3.1`)。\n    *   **NIR**: 如需使用神经形态中间表示交换功能，需安装 `nir` 和 `nirtorch`。\n\n> **提示**：国内用户安装 PyTorch 时，推荐使用清华或中科大镜像源以加速下载。\n\n## 安装步骤\n\n您可以选择安装稳定的发布版或最新的开发版。\n\n### 1. 安装稳定版（推荐）\n\n从 PyPI 安装最新稳定版本：\n\n```bash\npip install spikingjelly\n```\n\n*国内加速方案：*\n```bash\npip install spikingjelly -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2. 安装开发版（源码安装）\n\n如果您需要体验最新特性（如 Triton 后端优化、新算子等），可以从 GitHub 克隆源码安装：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly.git\ncd spikingjelly\npip install .\n```\n\n*国内加速方案（使用 Gitee 镜像或代理）：*\n若直接克隆 GitHub 较慢，可配置 git 代理或使用国内镜像源（如有）。安装依赖时同样建议使用国内 pip 源。\n\n
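安装完成后，可以先用下面的最小片段做一次冒烟测试，确认 SpikingJelly 能正常导入并前向运行（`__version__` 属性以实际发行版为准，若不存在也可改用 `pip show spikingjelly` 查看版本）：\n\n```python\nimport torch\nimport spikingjelly\nfrom spikingjelly.activation_based import neuron\n\n# 打印版本号（假设包提供 __version__ 属性，否则输出 unknown）\nprint(getattr(spikingjelly, '__version__', 'unknown'))\n# 构造一个默认参数的 LIF 神经元并执行一次前向传播\nlif = neuron.LIFNode()\nprint(lif(torch.rand(4)))  # 输出由 0 和 1 组成的脉冲张量\n```\n\n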
### 3. 安装可选后端\n\n如需启用高性能后端，请执行以下命令：\n\n**启用 CuPy 后端：**\n```bash\npip install cupy-cuda12x  # 适用于 CUDA 12.x\n# 或\npip install cupy-cuda11x  # 适用于 CUDA 11.x\n```\n\n**启用 Triton 后端：**\n```bash\npip install triton==3.3.1\n```\n\n**启用 NIR 支持：**\n```bash\npip install nir nirtorch\n```\n\n## 基本使用\n\nSpikingJelly 的设计哲学是让构建 SNN 像构建普通 ANN 一样简单。以下是最小化的使用示例。\n\n### 1. 构建一个简单的 SNN 模型\n\n使用 `nn.Sequential` 即可快速搭建网络，只需将普通层替换为 SpikingJelly 提供的层和神经元节点。\n\n```python\nimport torch\nimport torch.nn as nn\nfrom spikingjelly.activation_based import layer, neuron, surrogate\n\n# LIF 神经元的膜时间常数\ntau = 2.0\n\n# 构建网络：展平层 -> 全连接层 -> LIF 神经元\nnet = nn.Sequential(\n    layer.Flatten(),\n    layer.Linear(28 * 28, 10, bias=False),\n    neuron.LIFNode(tau=tau, surrogate_function=surrogate.ATan())\n)\n\n# 将模型移动到设备 (CPU 或 CUDA)\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\nnet = net.to(device)\n\nprint(net)\n```\n\n### 2. 运行训练示例\n\n框架内置了丰富的示例脚本。您可以直接在命令行运行以下命令，在 MNIST 数据集上训练一个基于 LIF 神经元的分类网络：\n\n```bash\npython -m spikingjelly.activation_based.examples.lif_fc_mnist -tau 2.0 -T 100 -device cuda:0 -b 64 -epochs 100 -data-dir \u003CPATH to MNIST> -amp -opt adam -lr 1e-3 -j 8\n```\n\n*参数说明：*\n*   `-data-dir`: 替换为您的 MNIST 数据集存放路径。\n*   `-device`: 指定运行设备，如 `cuda:0` 或 `cpu`。\n*   `-amp`: 启用自动混合精度训练（需 GPU 支持）。\n\n### 3. ANN 转 SNN (可选)\n\nSpikingJelly 支持将训练好的普通 ANN 快速转换为 SNN。以下是一个简单的调用示例：\n\n```python\nimport spikingjelly.activation_based.ann2snn.examples.cnn_mnist as cnn_mnist\n\n# 运行内置的 CNN-MNIST 转换示例\ncnn_mnist.main()\n```\n\n更多详细教程、API 文档及高级用法（如自定义神经元、多步长优化等），请访问 [SpikingJelly 官方文档](https:\u002F\u002Fspikingjelly.readthedocs.io\u002Fzh_CN\u002Flatest\u002F)。","某边缘计算团队正在为低功耗无人机开发基于脉冲神经网络（SNN）的实时目标检测系统，以利用神经形态摄像头的低延迟特性。\n\n### 没有 spikingjelly 时\n- **开发门槛极高**：研究人员需从零手动推导并编写复杂的 SNN 反向传播算法及 CUDA 核函数，极易出错且耗时数周。\n- **硬件加速困难**：缺乏现成的高性能算子支持，模型在 GPU 上推理速度缓慢，无法满足无人机毫秒级响应需求。\n- **生态割裂严重**：难以直接复用成熟的 PyTorch 预训练模型（ANN），必须重新设计网络结构从头训练，数据利用率低。\n- **内存优化缺失**：训练深层脉冲网络时显存占用爆炸式增长，导致无法在有限资源下探索更优的网络架构。\n\n### 使用 spikingjelly 后\n- **构建极简高效**：依托其基于 PyTorch 的封装，开发者仅需几行代码即可调用 `LIFNode` 等神经元模块，将原型验证周期从数周缩短至数天。\n- **性能显著提升**：利用内置的 Triton 后端和自动 CUDA 加速功能，模型推理吞吐量大幅提升，完美适配边缘端实时任务。\n- **无缝转换迁移**：通过一键式 ANN-SNN 转换工具，直接将现有的高精度视觉模型转化为脉冲网络，保留了原有精度优势。\n- **训练显存优化**：借助最新的梯度检查点与脉冲压缩技术（memopt），在不损失精度的前提下大幅降低训练显存占用，使深层网络训练成为可能。\n\nspikingjelly 通过打通从算法理论到高性能部署的全链路，让开发者能像搭建普通深度学习模型一样轻松构建高效的脉冲神经网络。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffangwei123456_spikingjelly_dee1d755.png","fangwei123456",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffangwei123456_d504c005.jpg","https:\u002F\u002Fgithub.com\u002Ffangwei123456",[80,84,88],{"name":81,"color":82,"percentage":83},"Python","#3572A5",65.2,{"name":85,"color":86,"percentage":87},"Jupyter Notebook","#DA5B0B",30.6,{"name":89,"color":90,"percentage":91},"Cuda","#3A4E3A",4.2,1961,304,"2026-04-02T11:56:00","NOASSERTION","未说明","NVIDIA GPU 非必需（支持 CPU），但使用 CuPy 或 Triton 加速后端时必需；需安装对应版本的 CUDA（文档提及测试环境为 CUDA 11.x\u002F12.x）；显存大小未说明",{"notes":99,"python":100,"dependencies":101},"该工具基于 PyTorch。默认后端支持 CPU 和 GPU；若需高性能训练，需额外安装 CuPy 或 Triton（仅支持 GPU）。华为 NPU 目前尚在计划中，暂不支持。奇数版本号为开发版，偶数版本号为稳定版。从 v0.0.0.0.14 起部分模块名称已变更，旧版本用户需注意迁移。","3.8+ (依据 PyPI badge 推断，文档明确需 torch>=2.2.0)",[102,103,104,105,106,107,108],"torch>=2.2.0","torchvision","torchaudio","cupy-cuda11x 或 cupy-cuda12x (可选)","triton==3.3.1 (可选)","nir (可选)","nirtorch 
(可选)",[13],[111,112,113,114,115,116],"pytorch","spiking-neural-networks","snn","deep-learning","machine-learning","dvs","2026-03-27T02:49:30.150509","2026-04-06T07:14:58.878625",[120,125,130,135,140,145],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},15878,"如何估算脉冲神经网络的平均发放频率？","SpikingJelly 中平均发放率的计算公式为：#Spike \u002F (#Neuron * T)。其中，T 指的是仿真步长（即时间步的数量），而不是训练过程中的总推理时间。例如，如果设置 T=8，则分母中的 T 即为 8。","https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fissues\u002F59",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},15879,"加载数据集时出现 'Cannot load file containing pickled data when allow_pickle=False' 错误怎么办？","这通常是因为旧版本框架预处理的数据集与新版本框架不兼容导致的。解决方法是：删除数据文件夹中除了 'download' 以外的所有文件夹，然后重新运行代码，让新框架重新生成一次数据集即可。","https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fissues\u002F94",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},15880,"如何降低反向传播时的显存消耗？","可以通过配置将脉冲存储为布尔类型（bool tensor）来减少显存占用。具体操作是设置：\nconfigure.save_spike_as_bool_in_neuron_kernel = True\nconfigure.save_bool_spike_level = 1\n实测在 A100 和 2080Ti 上，开启此选项并结合 SpikeConv\u002FLinear 模块，可显著降低显存使用并提升运行速度。","https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fissues\u002F163",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},15881,"ANN 转 SNN (ann2snn) 后如何获取最终的权重？","转换过程中每一层前后会分别乘以缩放系数 1\u002Fs 和 s。要得到最终权重，需要将每一层及其之前所有层的缩放系数累乘，然后再乘以该层的原始权重。具体算法可参考论文 \"Parameter Normalization\" (Frontiers in Neuroscience, 2017) 的 2.2.2 节。","https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fissues\u002F153",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},15882,"可视化特征图时为什么无法正确生成脉冲或输出错误？","这是因为 SpikingJelly 将脉冲神经元视为一个层（Layer），而部分用户代码将其视为函数。如果在可视化代码中没有正确调用更新函数，就无法产生脉冲。需要手动运行 `mem_update` 函数来创建脉冲，或者确保网络前向传播逻辑符合 SpikingJelly 的层级调用规范。","https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fissues\u002F51",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},15883,"安装 GPU 版本的 SpikingJelly 时遇到 CUDA 编译错误或版本不匹配怎么办？","建议直接从 PyPI 安装最新版本的 SpikingJelly（使用 pip install spikingjelly）。最新版本不再需要预先编译 CUDA 代码，而是通过 CuPy 实现，CUDA 代码会在首次运行时自动编译，从而避免了手动配置 CUDA_HOME 和处理 PyTorch 与 CUDA 版本不匹配的问题。","https:\u002F\u002Fgithub.com\u002Ffangwei123456\u002Fspikingjelly\u002Fissues\u002F73",[]]