[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Chen-Cai-OSU--awesome-equivariant-network":3,"tool-Chen-Cai-OSU--awesome-equivariant-network":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,2,"2026-04-10T11:13:16",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 
等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75508,"2026-04-13T20:37:22",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":71,"readme_en":72,"readme_zh":73,"quickstart_zh":74,"use_case_zh":75,"hero_image_url":76,"owner_login":77,"owner_name":78,"owner_avatar_url":79,"owner_bio":80,"owner_company":81,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":82,"owner_url":83,"languages":78,"stars":84,"forks":85,"last_commit_at":86,"license":78,"difficulty_score":87,"env_os":88,"env_gpu":89,"env_ram":89,"env_deps":90,"category_tags":93,"github_topics":78,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":22,"created_at":94,"updated_at":95,"faqs":96,"releases":97},7346,"Chen-Cai-OSU\u002Fawesome-equivariant-network","awesome-equivariant-network","Paper list for equivariant neural network","awesome-equivariant-network 是一个专注于“等变神经网络”（Equivariant Neural Networks）的开源论文清单项目。它系统性地收集并整理了该领域从奠基之作到前沿进展的核心学术文献，涵盖群等变卷积、可操纵 CNN、球面 CNN 以及针对 3D 点云和密度估计的应用等多个方向。\n\n在深度学习中，传统模型往往需要大量数据才能学会识别旋转或平移后的物体，而等变网络通过数学上的对称性约束，让模型天生具备对几何变换的适应能力。awesome-equivariant-network 正是为了解决研究者难以全面追踪这一快速演进领域的痛点而生，它将分散的顶会论文（如 ICML、NeurIPS、CVPR 等）按理论、应用、置换等变性等主题分类，提供了清晰的学习路径。\n\n该项目特别适合人工智能研究人员、算法工程师以及对几何深度学习感兴趣的学生使用。无论是想要入门该领域的新手，还是希望查找特定群表示（如 SO(3)、SE(3)）实现细节的资深开发者，都能从中快速定位关键资源。其独特亮点在于不仅罗列论文，还附带了简要的技术注释，指出每篇工作的核心贡献（如离散群处理、傅里叶空间加速等），并持续收录最新的讲座与教程链接，是探索几何先验与神经网络结合趋势的实用指南。","awesome-equivariant-network 是一个专注于“等变神经网络”（Equivariant Neural Networks）的开源论文清单项目。它系统性地收集并整理了该领域从奠基之作到前沿进展的核心学术文献，涵盖群等变卷积、可操纵 CNN、球面 CNN 以及针对 3D 点云和密度估计的应用等多个方向。\n\n在深度学习中，传统模型往往需要大量数据才能学会识别旋转或平移后的物体，而等变网络通过数学上的对称性约束，让模型天生具备对几何变换的适应能力。awesome-equivariant-network 正是为了解决研究者难以全面追踪这一快速演进领域的痛点而生，它将分散的顶会论文（如 ICML、NeurIPS、CVPR 等）按理论、应用、置换等变性等主题分类，提供了清晰的学习路径。\n\n该项目特别适合人工智能研究人员、算法工程师以及对几何深度学习感兴趣的学生使用。无论是想要入门该领域的新手，还是希望查找特定群表示（如 SO(3)、SE(3)）实现细节的资深开发者，都能从中快速定位关键资源。其独特亮点在于不仅罗列论文，还附带了简要的技术注释，指出每篇工作的核心贡献（如离散群处理、傅里叶空间加速等），并持续收录最新的讲座与教程链接，是探索几何先验与神经网络结合趋势的实用指南。","# awesome-equivariant-network\n\nPaper list for equivariant neural network. Work-in-progress. \n\nFeel free to suggest relevant papers in the following format. \n\n```markdown\n**Group Equivariant Convolutional Networks**  \nTaco S. Cohen, Max Welling ICML 2016 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.07576.pdf)   \n```\n\n*Acknowledgement*: I would like to thank Maurice Weiler, Fabian Fuchs, Tess Smidt, Rui Wang, David Pfau, Jonas Köhler, Taco Cohen, Gregor Simm, Erik J Bekkers, Jean-Baptiste Cordonnier, David W. Romero, Ivan Sosnovik, Kostas Daniilidis for paper suggestions! Thanks to Weihao Xia for helping with the typesetting! 
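\n\n*What does \"equivariant\" mean, concretely?* A self-contained numerical sketch (our own illustration, not taken from any paper below): a map f is equivariant to a group G if f(g·x) = g·f(x) for every g in G. The snippet below checks this for a DeepSets-style layer under the permutation group acting on the rows of X.\n\n```python\nimport torch\n\n# Minimal sketch (illustrative only): a DeepSets-style layer\n# f(X) = X W + mean(X) V commutes with permutations of the rows of X.\ntorch.manual_seed(0)\nn, d = 5, 3\nX = torch.randn(n, d)\nW, V = torch.randn(d, d), torch.randn(d, d)\n\ndef layer(X):\n    # row-wise linear map plus a permutation-invariant pooled term\n    return X @ W + X.mean(dim=0, keepdim=True) @ V\n\nperm = torch.randperm(n)\nlhs = layer(X[perm])   # act with g, then apply f\nrhs = layer(X)[perm]   # apply f, then act with g\nprint(torch.allclose(lhs, rhs, atol=1e-5))  # True: f(g.x) == g.f(x)\n```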
\n\n### Table of Contents\n- [Equivariance and Group convolution](#equivariance-and-group-convolution)\n- [Theory](#theory)\n- [Equivariant Density Estimation and Sampling](#equivariant-density-estimation-and-sampling)\n- [Application](#application)\n- [Permutation Equivariance](#permutation-equivariance)\n- [Talk and Tutorial](#talk-and-tutorial)\n- [Background](#background)\n- [Theses \u002F Dissertations](#theses--dissertations)\n- [TO READ](#to-read)\n\n### [Equivariance and Group convolution](#content)\n\n1. **Group Equivariant Convolutional Networks**  \n   Taco S. Cohen, Max Welling ICML 2016 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.07576.pdf)   \n   Note: first paper; discrete group; \n2. **Steerable CNNs**  \n  Taco S. Cohen, Max Welling ICLR 2017 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08498)\n3. **Harmonic Networks: Deep Translation and Rotation Equivariance**  \n  Daniel E. Worrall, Stephan J. Garbin, Daniyar Turmukhambetov, Gabriel J. Brostow CVPR 2017 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.04642)   \n4. **Spherical CNNs**  \n  Taco S. Cohen, Mario Geiger, Jonas Koehler, Max Welling ICLR 2018 best paper  [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.10130)  \n  Note: use generalized FFT to speed up convolution on $S^2$ and $SO(3)$\n5. **Clebsch–Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network**  \n  Risi Kondor, Zhen Lin, Shubhendu Trivedi NeurIPS 2018 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.09231)  \n  Note: perform equivariant nonlinearity in Fourier space; \n6. **General E(2)-Equivariant Steerable CNNs**  \n  Maurice Weiler, Gabriele Cesa NeurIPS 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08251)  \n  Note: nice benchmark on different representations\n7. **Learning Steerable Filters for Rotation Equivariant CNNs**  \n   Maurice Weiler, Fred A. Hamprecht, Martin Storath CVPR 2018 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.07289)   \n   Note: group convolutions, kernels parameterized in circular harmonic basis (steerable filters);\n8. **Learning SO(3) Equivariant Representations with Spherical CNNs**  \n   Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, Kostas Daniilidis ECCV 2018 [paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FCarlos_Esteves_Learning_SO3_Equivariant_ECCV_2018_paper.html)  \n   Note: SO(3) equivariance; zonal filter\n9. **Polar Transformer Networks**  \n  Carlos Esteves, Christine Allen-Blanchette, Xiaowei Zhou, Kostas Daniilidis ICLR 2018 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01889)  \n10. **3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data**  \n  Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, Taco Cohen  NeurIPS 2018 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.02547)  \n  Note: SE(3) equivariance; characterize the basis of steerable kernel\n11. **Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds**  \n   Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, Patrick Riley  [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.08219)  \n   Note: SE(3) equivariance for point clouds\n12. **Equivariant Multi-View Networks**  \n   Carlos Esteves, Yinshuang Xu, Christine Allen-Blanchette, Kostas Daniilidis  ICCV 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00993)   \n13. **Gauge Equivariant Convolutional Networks and the Icosahedral CNN**  \n   Taco S. 
Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling ICML 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04615), [talk](https:\u002F\u002Fslideslive.com\u002F38915809\u002Fgauge-equivariant-convolutional-networks?locale=de)  \n   Note: gauge equivariance on general manifold\n14. **Cormorant: Covariant Molecular Neural Networks**  \n   Brandon Anderson, Truong-Son Hy, Risi Kondor NeurIPS 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.04015)\n15. **Deep Scale-spaces: Equivariance Over Scale**  \n   Daniel Worrall, Max Welling NeurIPS 2019 [paper](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002Ff04cd7399b2b0128970efb6d20b5c551-Abstract.html)\n16. **Scale-Equivariant Steerable Networks**  \n   Ivan Sosnovik, Michał Szmaja, Arnold Smeulders ICLR 2020 [paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=HJgpugrKPS)\n17. **B-Spline CNNs on Lie Groups**  \n   Erik J Bekkers ICLR 2020 [paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=H1gBhkBFDH)    \n18. **SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks**  \n   Fabian B. Fuchs, Daniel E. Worrall, Volker Fischer, Max Welling NeurIPS 2020  [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10503), [blog](https:\u002F\u002Ffabianfuchsml.github.io\u002Fse3transformer\u002F)  \n   Note: TFN + equivariant self-attention; improved spherical harmonics computation\n19. **Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs**  \n   Pim de Haan, Maurice Weiler, Taco Cohen, Max Welling ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.05425)  \n   Note: anisotropic gauge equivariant kernels + message passing by parallel transporting features over mesh edges\n20. **Lorentz Group Equivariant Neural Network for Particle Physics**  \n   Alexander Bogatskiy, Brandon Anderson, Jan T. Offermann, Marwah Roussi, David W. Miller, Risi Kondor ICML 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.04780)  \n   Note: SO(1, 3) equivariance\n21. **CNNs on Surfaces using Rotation-Equivariant Features**  \n      Ruben Wiersma, Elmar Eisemann, Klaus Hildebrandt SIGGRAPH 2020 [paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3386569.3392437), [code](https:\u002F\u002Fgithub.com\u002Frubenwiersma\u002Fhsn)  \n22. **Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data**  \n   Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson ICML 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.12880)  \n   Note: fairly generic architecture; use Monte Carlo sampling to achieve equivariance in expectation; \n23. **Spin-Weighted Spherical CNNs**  \n   Carlos Esteves, Ameesh Makadia, Kostas Daniilidis NeurIPS 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10731)  \n   Note: anisotropic filter for vector field on sphere\n24. **Learning Invariances in Neural Networks**  \n   Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson NeurIPS 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11882)   \n   Note: very interesting approach; enforce \"soft\" invariance via learning over both model parameters and distributions over augmentations\n25. 
**LieTransformer: Equivariant self-attention for Lie Groups**  \n   Michael Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.10885)  \n   Note: equivariant self attention to arbitrary Lie groups and their discrete subgroups\n26. **Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring In Data**  \n   David W. Romero, Mark Hoogendoorn ICLR 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07849)\n27. **Attentive Group Equivariant Convolutional Networks**  \n   David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn ICML 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03830)\n28. **Wavelet Networks: Scale Equivariant Learning From Raw Waveforms**  \n   David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05259)\n29. **Group Equivariant Stand-Alone Self-Attention For Vision**  \n   David W. Romero, Jean-Baptiste Cordonnier ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.00977)\n30. **Incorporating Symmetry into Deep Dynamics Models for Improved Generalization**  \n   Rui Wang, Robin Walters, Rose Yu ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03061)\n31. **MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning**  \n   Elise van der Pol, Daniel E. Worrall, Herke van Hoof, Frans A. Oliehoek, Max Welling NeurIPS 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16908)\n32. **Isometric Transformation Invariant and Equivariant Graph Convolutional Networks**  \n   Masanobu Horie, Naoki Morita, Toshiaki Hishinuma, Yu Ihara, Naoto Mitsume ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.06316)\n33. **E(n) Equivariant Graph Neural Networks**  \n   Victor Garcia Satorras, Emiel Hoogeboom, Max Welling ICML 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09844)  \n   Note: a simple alternative that achieves E(n) equivariance\n34. **Vector Neurons: A General Framework for SO(3)-Equivariant Networks**  \n   Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, Leonidas Guibas  [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12229)  \n   Note: a simple MLP for type-1 features  \n35. **Equivariant message passing for the prediction of tensorial properties and molecular spectra**  \n   Kristof T. Schütt, Oliver T. Unke, Michael Gastegger ICML 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.03150)  \n36. **Field Convolutions For Surface CNNs**  \n   Thomas W. Mitchel, Vladimir G. Kim, Michael Kazhdan ICCV 2021 (Oral) [paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FMitchel_Field_Convolutions_for_Surface_CNNs_ICCV_2021_paper.html)   \n37. **Scalars are universal: Equivariant machine learning, structured like classical physics**  \n   Soledad Villar, David W. Hogg, Kate Storey-Fisher, Weichi Yao, Ben Blum-Smith NeurIPS 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06610)\n38. **Efficient Equivariant Network**  \n   Lingshen He, Yuxuan Chen, Zhengyang Shen, Yiming Dong, Yisen Wang, Zhouchen Lin NeurIPS 2021 [paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2021\u002Fhash\u002F2a79ea27c279e471f4d180b08d62b00a-Abstract.html), [code](https:\u002F\u002Fgithub.com\u002FLingshenHe\u002FEfficient-Equivariant-Network)\n39. 
**GemNet: Universal Directional Graph Neural Networks for Molecules**  \n   Johannes Klicpera, Florian Becker, Stephan Günnemann NeurIPS 2021 [paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F35cf8659cfcb13224cbd47863a34fc58-Abstract.html)\n40. **Automatic Symmetry Discovery with Lie Algebra Convolutional Network**  \n   Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, Rose Yu NeurIPS 2021 [paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NPOWF_ZLfC5)\n41. **Geometric and Physical Quantities improve E(3) Equivariant Message Passing**  \n   Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, Max Welling ICLR 2022 (spotlight) [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.02905), [code](https:\u002F\u002Fgithub.com\u002FRobDHess\u002FSteerable-E3-GNN)\n42. **Frame Averaging for Invariant and Equivariant Network Design**  \n   Omri Puny, Matan Atzmon, Heli Ben-Hamu, Ishan Misra, Aditya Grover, Edward J. Smith, Yaron Lipman [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.03336) ICLR 2022\n43. **Learning Local Equivariant Representations for Large-Scale Atomistic Dynamics**  \n   Albert Musaelian, Simon Batzner, Anders Johansson, Lixin Sun, Cameron J. Owen, Mordechai Kornbluth, Boris Kozinsky [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05249) \n44. **Möbius Convolutions for Spherical CNNs**  \n   Thomas W. Mitchel, Noam Aigerman, Vladimir G. Kim, Michael Kazhdan SIGGRAPH 2022 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12212)  \n   (Note: Equivariance to the action of SL(2, C) on the sphere. To our knowledge this is the first *conformally* equivariant convolutional surface network)\n45. **DeltaConv: Anisotropic Operators for Geometric Deep Learning on Point Clouds**  \n   Ruben Wiersma, Ahmad Nasikun, Elmar Eisemann, Klaus Hildebrandt SIGGRAPH 2022 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.08799), [code](https:\u002F\u002Fgithub.com\u002Frubenwiersma\u002Fdeltaconv)  \n   Note: rotation equivariance by using differential operators.\n46. **Neural ePDOs: Spatially Adaptive Equivariant Partial Differential Operator Based Networks**  \n   Lingshen He*, Yuxuan Chen*, Zhengyang Shen, Yibo Yang, Zhouchen Lin ICLR 2023 (spotlight) [paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=D1Iqfm7WTkk), [code](https:\u002F\u002Fgithub.com\u002FYuxuanChen21\u002FNeural_ePDOs)\n47. **Steerable Partial Differential Operators for Equivariant Neural Networks**  \n   Erik Jenner, Maurice Weiler ICLR 2022 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.10163)\n48. **A Program to build E(N)-Equivariant Steerable CNNs**  \n   Gabriele Cesa, Leon Lang, Maurice Weiler ICLR 2022 [paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=WE4qe9xlnQw)\n49. **Clifford-Steerable Convolutional Neural Networks**  \n   Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré ICML 2024 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14730)\n\n### [Theory](#content)\n\n1. **On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups**  \n  Risi Kondor, Shubhendu Trivedi ICML 2018 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.03690)  \n  Note: convolution is all you need (for scalar fields)\n  \n3. 
**A General Theory of Equivariant CNNs on Homogeneous Spaces**  \n  Taco Cohen, Mario Geiger, Maurice Weiler NeurIPS 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.02017)  \n  Note: convolution is all you need (for general fields)\n  \n4. **Equivariance Through Parameter-Sharing**  \n  Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos ICML 2017 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08389)\n  \n5. **Universal approximations of invariant maps by neural networks**  \n  Dmitry Yarotsky [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10306)\n  \n6. **A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels**  \n  Leon Lang, Maurice Weiler ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.10952)  \n  Note: steerable kernel spaces are fully understood and parameterized in terms of 1) generalized reduced matrix elements, 2) Clebsch-Gordan coefficients, and 3) harmonic basis functions on homogeneous spaces.\n  \n7. **On the Universality of Rotation Equivariant Point Cloud Networks**  \n  Nadav Dym, Haggai Maron ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02449)  \n  Note: universality for TFN and se3-transformer \n  \n8. **Universal Equivariant Multilayer Perceptrons**  \nSiamak Ravanbakhsh [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.02912)\n\n9. **Provably Strict Generalisation Benefit for Equivariant Models**  \nBryn Elesedy, Sheheryar Zaidi ICML 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10333)\n\n10. **Implicit Bias of Linear Equivariant Networks**  \nHannah Lawrence, Kristian Georgiev, Andrew Dienes, Bobak T. Kiani ICML 2022 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06084)\n\n11. **On the Expressive Power of Geometric Graph Neural Networks**  \nChaitanya K. Joshi, Cristian Bodnar, Simon V. Mathis, Taco Cohen, Pietro Liò ICML 2023 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09308)\n\n12. **Equivariant and Coordinate Independent Convolutional Networks - A Gauge Field Theory of Neural Networks**  \nMaurice Weiler, Patrick Forré, Erik Verlinde, Max Welling [book](https:\u002F\u002Fmaurice-weiler.gitlab.io\u002Fcnn_book\u002FEquivariantAndCoordinateIndependentCNNs.pdf)\n\n\n### [Equivariant Density Estimation and Sampling](#content)\n\n1. **Equivariant Flows: Exact Likelihood Generative Learning for Symmetric Densities**    \n  Jonas Köhler, Leon Klein, Frank Noé ICML 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02425)  \n  Note: general framework for constructing equivariant normalizing flows on Euclidean spaces. Instantiation for particle systems\u002Fpoint clouds = simultaneous SE(3) and permutation equivariance.\n2. **Equivariant Hamiltonian Flows**    \n  Danilo Jimenez Rezende, Sébastien Racanière, Irina Higgins, Peter Toth NeurIPS 2019 ML4Phys workshop [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.13739)  \n  Note: general framework for constructing equivariant normalizing flows in phase space utilizing Hamiltonian dynamics. Instantiation for SE(2) equivariance.\n3. **Sampling using SU(N) gauge equivariant flows**    \n  Denis Boyda, Gurtej Kanwar, Sébastien Racanière, Danilo Jimenez Rezende, Michael S. Albergo, Kyle Cranmer, Daniel C. Hackett, Phiala E. Shanahan [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05456)    \n  Note: normalizing flows for lattice gauge theory. Instantiation for SU(2)\u002FSU(3) equivariance.\n4. **Exchangeable Neural ODE for Set Modeling**    \n  Yang Li, Haidong Yi, Christopher M. 
Bender, Siyuan Shan, Junier B. Oliva NeurIPS 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02676)  \n  Note: framework for permutation equivariant flows for set data. Instantiation for permutation equivariance.\n5. **Equivariant Normalizing Flows for Point Processes and Sets**    \n  Marin Biloš, Stephan Günnemann NeurIPS 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.03242)  \n  Note: framework for permutation equivariant flows for set data.  Instantiation for permutation equivariance.\n6. **The Convolution Exponential and Generalized Sylvester Flows**    \n  Emiel Hoogeboom, Victor Garcia Satorras, Jakub M. Tomczak, Max Welling NeurIPS 2020 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.01910)  \n  Note: invertible convolution operators. Instantiation for permutation equivariance.\n7. **Targeted free energy estimation via learned mappings**    \n  Peter Wirnsberger, Andrew J. Ballard, George Papamakarios, Stuart Abercrombie, Sébastien Racanière, Alexander Pritzel, Danilo Jimenez Rezende, Charles Blundell J Chem Phys. 2020 Oct 14;153(14):144112. [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.04913)  \n  Note: normalizing flows for particle systems on a torus. Instantiation for permutation equivariance.\n8. **Temperature-steerable flows**    \n  Manuel Dibak, Leon Klein, Frank Noé NeurIPS 2020 ML4Phys workshops [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.00429)  \n  Note: normalizing flows in phase space with equivariance with respect to changes in temperature.\n9. **Equivariant Manifold Flows**  \n  Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser-Nam Lim, Christopher De Sa NeurIPS 2021 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.08596.pdf)\n  Note: normalizing flows that allow for learning over any Riemannian manifold with respect to any symmetry (isometry subgroup action invariance).\n\n### [Application](#content)\n1. **Trajectory Prediction using Equivariant Continuous Convolution**  \n  Robin Walters, Jinxi Li, Rose Yu ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11344)\n2. **SE(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials**  \n  Simon Batzner, Tess E. Smidt, Lixin Sun, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Boris Kozinsky [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03164)\n4. **Finding Symmetry Breaking Order Parameters with Euclidean Neural Networks**  \n  Tess E. Smidt, Mario Geiger, Benjamin Kurt Miller [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.02005)\n5. **Group Equivariant Generative Adversarial Networks**  \n  Neel Dey, Antong Chen, Soheil Ghafurian ICLR 2021  [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.01683)   \n6. **Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks**  \n  David Pfau, James S. Spencer, Alexander G. de G. Matthews, W. M. C. Foulkes [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.02487)  \n7. **Symmetry-Aware Actor-Critic for 3D Molecular Design**    \n  Gregor N. C. Simm, Robert Pinsler, Gábor Csányi, José Miguel Hernández-Lobato ICLR 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.12747)\n8. **Roto-translation equivariant convolutional networks: Application to histopathology image analysis**  \n  Maxime W. Lafarge, Erik J. Bekkers, Josien P.W. Pluim, Remco Duits, Mitko Veta MedIA [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08725)\n9. 
**Scale Equivariance Improves Siamese Tracking**  \n  Ivan Sosnovik\*, Artem Moskalev\*, Arnold Smeulders WACV 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09115)\n10. **3D G-CNNs for Pulmonary Nodule Detection**  \n  Marysia Winkels, Taco S. Cohen [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.04656)  \n  International Conference on Medical Imaging with Deep Learning (MIDL), 2018.\n11. **Roto-translation covariant convolutional networks for medical image analysis**  \n  Erik J. Bekkers, Maxime W. Lafarge, Mitko Veta, Koen A.J. Eppenhof, Josien P.W. Pluim, Remco Duits MICCAI 2018 Young Scientist Award [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03393)\n12. **Equivariant Spherical Deconvolution: Learning Sparse Orientation Distribution Functions from Spherical Data**  \n  Axel Elaldi\*, Neel Dey\*, Heejong Kim, Guido Gerig, Information Processing in Medical Imaging (IPMI) 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09462)\n13. **Rotation-Equivariant Deep Learning for Diffusion MRI**  \n  Philip Müller, Vladimir Golkov, Valentina Tomassini, Daniel Cremers [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.06942)\n14. **Equivariant geometric learning for digital rock physics: estimating formation factor and effective permeability tensors from Morse graph**  \nChen Cai, Nikolaos Vlassis, Lucas Magee, Ran Ma, Zeyu Xiong, Bahador Bahmani, Teng-Fong Wong, Yusu Wang, WaiChing Sun [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.05608)  \nNote: equivariant nets + Morse graph for permeability tensor prediction\n15. **Direct prediction of phonon density of states with Euclidean neural network**  \nZhantao Chen, Nina Andrejevic, Tess Smidt, Zhiwei Ding, Yen-Ting Chi, Quynh T. Nguyen, Ahmet Alatas, Jing Kong, Mingda Li, Advanced Science (2021) [paper](https:\u002F\u002Fonlinelibrary.wiley.com\u002Fdoi\u002F10.1002\u002Fadvs.202004214) [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05163)\n16. **SE(3)-equivariant prediction of molecular wavefunctions and electronic densities**  \nOliver T. Unke, Mihail Bogojeski, Michael Gastegger, Mario Geiger, Tess Smidt, Klaus-Robert Müller [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02347)\n17. **Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking**  \nOctavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi Jaakkola, Andreas Krause, under review, 2022 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07786)\n18. **Roto-translated Local Coordinate Frames For Interacting Dynamical Systems**  \nMiltiadis Kofinas, Naveen Shankar Nagaraja, Efstratios Gavves NeurIPS 2021 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14961)\n19. **MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields**\u003Cbr>\nIlyes Batatia, Dávid Péter Kovács, Gregor N. C. Simm, Christoph Ortner, Gábor Csányi, under review, 2022 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07697), [code](https:\u002F\u002Fgithub.com\u002FACEsuit\u002Fmace)\n20. **Equivariant Q Learning in Spatial Action Spaces**  \nDian Wang, Robin Walters, Xupeng Zhu, Robert Platt, CoRL 2021 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.15443.pdf)\n21. **SO(2)-Equivariant Reinforcement Learning**  \nDian Wang, Robin Walters, Robert Platt, ICLR 2022 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04439.pdf)\n22. 
**Sample Efficient Grasp Learning Using Equivariant Models**  \nXupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su, Robin Walters, Robert Platt, RSS 2022 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.09468.pdf)\n23. **Equivariant Transporter Network**  \nHaojie Huang, Dian Wang, Robin Walters, Robert Platt, RSS 2022 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.09400.pdf)\n24. **On-Robot Learning With Equivariant Models**  \nDian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, Robert Platt, CoRL 2022 [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04923.pdf)\n25. **Edge Grasp Network: Graph-Based SE(3)-invariant Approach to Grasp Detection**  \nHaojie Huang, Dian Wang, Xupeng Zhu, Robin Walters, Robert Platt, Under Review [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.00191.pdf)\n26. **SEIL: Simulation-augmented Equivariant Imitation Learning**  \nMingxi Jia, Dian Wang, Guanang Su, David Klee, Xupeng Zhu, Robin Walters, Robert Platt, Under Review [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.00194.pdf)\n27. **The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry**  \nDian Wang, Jung Yeon Park, Neel Sortur, Lawson L.S. Wong, Robin Walters, Robert Platt, Under Review [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.09231.pdf)\n\n### [Permutation Equivariance](#content)\nThere are many papers on this topic; I have only added a few of them.\n\n1. **PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation**  \n  Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas CVPR 2017 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.00593) \n2. **Deep Sets**  \nManzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola NeurIPS 2017   [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06114)\n3. **Invariant and Equivariant Graph Networks**  \n  Haggai Maron, Heli Ben-Hamu, Nadav Shamir, Yaron Lipman ICLR 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.09902)  \n4. **Provably Powerful Graph Networks**  \n  Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, Yaron Lipman NeurIPS 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11136)  \n5. **Universal Invariant and Equivariant Graph Neural Networks**  \n  Nicolas Keriven, Gabriel Peyré NeurIPS 2019 [paper](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fea9268cb43f55d1d12380fb6ea5bf572-Abstract.html)\n6. **On Learning Sets of Symmetric Elements**  \n  Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya [ICML 2020 best paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08599)\n7. **On the Universality of Invariant Networks**  \n  Haggai Maron, Ethan Fetaya, Nimrod Segol, Yaron Lipman [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.09342)\n8. 
**Transformers Generalize DeepSets and Can be Extended to Graphs and Hypergraphs**  \n  Jinwoo Kim, Saeyoon Oh, Seunghoon Hong [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14416)\n\n\n### [Talk and Tutorial](#content)\n\nIAS: [Graph Nets: The Next Generation - Max Welling - YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Wx8J-Kw3fTA&t=3602s)\n\n[Equivariance and Data Augmentation workshop](https:\u002F\u002Fsites.google.com\u002Fview\u002Fequiv-data-aug\u002Fhome): many nice talks\n\nIPAM: [Tess Smidt: \"Euclidean Neural Networks for Emulating Ab Initio Calculations and Generating Atomi...\" - YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8CF8Grb_brE)\n\nIPAM: [E(3) Equivariant Neural Network Tutorial](https:\u002F\u002Fblondegeek.github.io\u002Fe3nn_tutorial\u002F)\n\nIPAM: [Risi Kondor: \"Fourier space neural networks\"](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=-PVyi0Keiec)\n\n[NeurIPS 2020 tutorial: Equivariant Networks](https:\u002F\u002Fnips.cc\u002Fvirtual\u002F2020\u002Fpublic\u002Ftutorial_3e267ff3c8b6621e5ad4d0f26142892b.html)\n\n[Yaron Lipman - Deep Learning of Irregular and Geometric Data - YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fveyx5zKReo&feature=youtu.be)\n\nMath-ML: [Erik J Bekkers: Group Equivariant CNNs beyond Roto-Translations: B-Spline CNNs on Lie Groups](https:\u002F\u002Fyoutu.be\u002FrakcnrgX4oo)\n\nKostas Daniilidis: [Geometry-aware deep learning: A brief history of equivariant representations and recent results](https:\u002F\u002Fmathinstitutes.org\u002Fvideos\u002Fvideos\u002Fview\u002F15146)\n\nAndrew White: [Deep Learning for Molecules and Materials](https:\u002F\u002Fwhitead.github.io\u002Fdmol-book\u002Fdl\u002FEquivariant.html)\n\nErik Bekkers: [An Introduction to Group Equivariant Deep Learning](https:\u002F\u002Fuvagedl.github.io), a course offered at UvA\n\nMichael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković: [Geometric Deep Learning Course](https:\u002F\u002Fgeometricdeeplearning.com\u002Flectures\u002F)\n\n\n\n### [Background](#content)\n\nI am by no means an expert in this field. Here are books and articles suggested by Taco Cohen when asked for references to learn group theory and representation theory.\n\n1. [Carter, Visual Group Theory](https:\u002F\u002Fwww.amazon.com\u002FVisual-Group-Theory-Problem-Book\u002Fdp\u002F088385757X)   \n  Note: very basic intro to group theory\n\n2. [Theoretical Aspects of Group Equivariant Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.05154)  \nCarlos Esteves  \nNote: covers all the math you need for equivariant nets in a fairly compact and accessible manner.\n\n3. [Serre, Linear Representations of Finite Groups](http:\u002F\u002Fwww.math.tau.ac.il\u002F~borovoi\u002Fcourses\u002FReprFG\u002FHatzagot.pdf)   \nNote: classic text on representations of finite groups. First few chapters are relevant to equivariant nets.\n\n4. [G B Folland. A Course in Abstract Harmonic Analysis](https:\u002F\u002Fsv.20file.org\u002Fup1\u002F1415_0.pdf)   \nNote: covers representations of locally compact groups; induced representations.\n\n5. [David Gurarie. Symmetries and Laplacians: Introduction to Harmonic Analysis, Group Representations and Applications.](https:\u002F\u002Fwww.amazon.com\u002FSymmetries-Laplacians-Introduction-Representations-Applications\u002Fdp\u002F0486462889)  \n\n6. [Mark Hamilton. 
Mathematical Gauge Theory: With Applications to the Standard Model of Particle Physics](https:\u002F\u002Fwww.amazon.com\u002FMathematical-Gauge-Theory-Applications-Universitext\u002Fdp\u002F3319684388)   \nNote: covers fiber bundles, useful for understanding homogeneous G-CNNs and Gauge CNNs.\n\n\n### Theses \u002F Dissertations\n1. Taco Cohen, Equivariant Convolutional Networks, PhD Thesis, University of Amsterdam, 2021 [pdf] (Note: Part II contains a lot of new material, not published before)\n\n2. **Extending Convolution Through Spatially Adaptive Alignment**  \nThomas W. Mitchel, PhD Thesis, Johns Hopkins University, 2022 [pdf](https:\u002F\u002Fwww.mitchel.computer\u002Fdoc\u002Fthesis.pdf)  \n   *Presents a novel, unified theoretical framework for transformation-equivariant convolutions on arbitrary homogeneous spaces and 2D Riemannian manifolds. Can handle high-dimensional, non-compact transformation groups.*\n\n\n### [TO READ](#content)\nThere are many papers I haven't read carefully yet. \n\n1. **Making Convolutional Networks Shift-Invariant Again**   \n  Richard Zhang ICML 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.11486)\n2. **Probabilistic symmetries and invariant neural networks**  \nBenjamin Bloem-Reddy, Yee Whye Teh JMLR [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.06082)\n3. **On Representing (Anti)Symmetric Functions**  \nMarcus Hutter [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.15298)\n4. **PDE-based Group Equivariant Convolutional Neural Networks**  \n  Bart M.N. Smets, Jim Portegies, Erik J. Bekkers, Remco Duits [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.09046)\n\n","# awesome-equivariant-network\n\n等变神经网络论文列表。仍在编写中。\n\n欢迎以以下格式推荐相关论文。\n\n```markdown\n**群等变卷积网络**  \nTaco S. Cohen, Max Welling ICML 2016 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.07576.pdf)   \n```\n\n*致谢*: 我要感谢 Maurice Weiler、Fabian Fuchs、Tess Smidt、Rui Wang、David Pfau、Jonas Köhler、Taco Cohen、Gregor Simm、Erik J Bekkers、Jean-Baptiste Cordonnier、David W. Romero、Ivan Sosnovik、Kostas Daniilidis 提供的论文建议！同时感谢 Weihao Xia 在排版方面的帮助！\n\n### 目录\n- [等变性与群卷积](#等变性与群卷积)\n- [理论](#理论)\n- [等变密度估计与采样](#等变密度估计与采样)\n- [应用](#应用)\n- [排列等变性](#排列等变性)\n- [讲座与教程](#讲座与教程)\n- [背景知识](#背景知识)\n- [学位论文](#学位论文)\n- [待读](#待读)\n\n### [等变性与群卷积](#content)\n\n1. **群等变卷积网络**  \n  Taco S. Cohen, Max Welling ICML 2016 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.07576.pdf)   \n  注：首篇论文；离散群；\n2. **可导向CNN**  \n  Taco S. Cohen, Max Welling ICLR 2017 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08498)\n3. **谐波网络：深度平移与旋转等变性**  \n  Daniel E. Worrall, Stephan J. Garbin, Daniyar Turmukhambetov, Gabriel J. Brostow CVPR 2017 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.04642)   \n4. **球面CNN**  \n  Taco S. Cohen, Mario Geiger, Jonas Koehler, Max Welling ICLR 2018 最佳论文  [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.10130)  \n  注：使用广义FFT加速在$S^2$和$SO(3)$上的卷积\n5. **克莱布什-高登网络：全傅里叶空间的球面卷积神经网络**  \n  Risi Kondor, Zhen Lin, Shubhendu Trivedi NeurIPS 2018 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.09231)  \n  注：在傅里叶空间中执行等变非线性；\n6. **一般E(2)等变可导向CNN**  \n  Maurice Weiler, Gabriele Cesa NeurIPS 2019 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.08251)  \n  注：对不同表示进行了良好的基准测试\n7. **学习用于旋转等变CNN的可导向滤波器**  \n   Maurice Weiler, Fred A. Hamprecht, Martin Storath CVPR 2018 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.07289)   \n   注：群卷积，核在圆谐基中参数化（可导向滤波器）；\n8. 
**利用球面CNN学习SO(3)等变表示**  \n   Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, Kostas Daniilidis ECCV 2018 [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FCarlos_Esteves_Learning_SO3_Equivariant_ECCV_2018_paper.html)  \n   注：SO(3)等变性；区域滤波器\n9. **极坐标变换网络**  \n  Carlos Esteves, Christine Allen-Blanchette, Xiaowei Zhou, Kostas Daniilidis ICLR 2018 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01889)  \n10. **3D可导向CNN：在体数据中学习旋转等变特征**  \n  Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, Taco Cohen  NeurIPS 2018 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.02547)  \n  注：SE(3)等变性；刻画了可导向核的基底\n11. **张量场网络：针对3D点云的旋转和平移等变神经网络**  \n   Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, Patrick Riley  [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.08219)  \n   注：点云的SE(3)等变性\n12. **等变多视角网络**  \n   Carlos Esteves, Yinshuang Xu, Christine Allen-Blanchette, Kostas Daniilidis  ICCV 2019 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00993)   \n13. **规范等变卷积网络与二十面体CNN**  \n   Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling ICML 2019 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04615), [演讲](https:\u002F\u002Fslideslive.com\u002F38915809\u002Fgauge-equivariant-convolutional-networks?locale=de)  \n   注：一般流形上的规范等变性\n14. **Cormorant：协变分子神经网络**  \n   Brandon Anderson, Truong-Son Hy, Risi Kondor NeurIPS 2019 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.04015)\n15. **深度尺度空间：尺度上的等变性**  \n   Daniel Worrall, Max Welling NeurIPS 2019 [论文](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002Ff04cd7399b2b0128970efb6d20b5c551-Abstract.html)\n16. **尺度等变可导向网络**  \n   Ivan Sosnovik, Michał Szmaja, Arnold Smeulders ICLR 2020 [论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=HJgpugrKPS)\n17. **李群上的B样条CNN**  \n   Erik J Bekkers ICLR 2020 [论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=H1gBhkBFDH)    \n18. **SE(3)-Transformer：3D旋转-平移等变注意力网络**  \n   Fabian B. Fuchs, Daniel E. Worrall, Volker Fischer, Max Welling NeurIPS 2020  [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10503), [博客](https:\u002F\u002Ffabianfuchsml.github.io\u002Fse3transformer\u002F)  \n   注：TFN + 等变自注意力；改进了球谐函数计算\n19. **规范等变网格CNN：几何图上的各向异性卷积**  \n   Pim de Haan, Maurice Weiler, Taco Cohen, Max Welling ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.05425)  \n   注：各向异性规范等变核 + 通过沿网格边平行传输特征进行消息传递\n20. **适用于粒子物理的洛伦兹群等变神经网络**  \n   Alexander Bogatskiy, Brandon Anderson, Jan T. Offermann, Marwah Roussi, David W. Miller, Risi Kondor ICML 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.04780)  \n   注：SO(1, 3)等变性\n21. **利用旋转等变特征的曲面上的CNN**  \n      Ruben Wiersma, Elmar Eisemann, Klaus Hildebrandt SIGGRAPH 2020 [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3386569.3392437), [代码](https:\u002F\u002Fgithub.com\u002Frubenwiersma\u002Fhsn)  \n22. **将卷积神经网络泛化以实现对任意连续数据上李群的等变性**  \n   Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson ICML 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.12880)  \n   注：较为通用的架构；采用蒙特卡洛采样以在期望意义上实现等变性；\n23. **自旋加权球面CNN**  \n   Carlos Esteves, Ameesh Makadia, Kostas Daniilidis NeurIPS 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10731)  \n   注：用于球面上矢量场的各向异性滤波器\n24. **神经网络中的不变性学习**  \n   Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson NeurIPS 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11882)   \n   注：非常有趣的方法；通过同时学习模型参数和增强分布来强制实现“软”不变性\n25. 
**LieTransformer：李群的等变自注意力**  \n   Michael Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.10885)  \n   注：对任意李群及其离散子群的等变自注意力\n26. **协同注意力等变神经网络：将等变性聚焦于数据中同时发生的变换**  \n   David W. Romero, Mark Hoogendoorn ICLR 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07849)\n27. **注意力型群等变卷积网络**  \n   David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn ICML 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03830)\n28. **小波网络：从原始波形中学习尺度等变性**  \n   David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05259)\n29. **面向视觉的独立群等变自注意力**  \n   David W. Romero, Jean-Baptiste Cordonnier ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.00977)\n30. **将对称性融入深度动力学模型以提升泛化能力**  \n   Rui Wang, Robin Walters, Rose Yu ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03061)\n31. **MDP同态网络：强化学习中的群对称性**  \n   Elise van der Pol, Daniel E. Worrall, Herke van Hoof, Frans A. Oliehoek, Max Welling NeurIPS 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16908)\n32. **等距变换不变且等变的图卷积网络**  \n   Masanobu Horie, Naoki Morita, Toshiaki Hishinuma, Yu Ihara, Naoto Mitsume ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.06316)\n33. **E(n)等变图神经网络**  \n   Victor Garcia Satorras, Emiel Hoogeboom, Max Welling ICML 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09844)  \n   注：一种简单的替代方案，可实现E(n)等变性\n34. **向量神经元：SO(3)等变网络的一般框架**  \n   Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, Leonidas Guibas  [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12229)  \n   注：一种用于1型特征的简单MLP\n35. **用于预测张量性质和分子光谱的等变消息传递**  \n   Kristof T. Schütt, Oliver T. Unke, Michael Gastegger ICML 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.03150)  \n36. **面向表面CNN的场卷积**  \n   Thomas W. Mitchel, Vladimir G. Kim, Michael Kazhdan ICCV 2021（口头报告）[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FMitchel_Field_Convolutions_for_Surface_CNNs_ICCV_2021_paper.html)   \n37. **标量是普适的：等变机器学习，结构如同经典物理学**  \n   Soledad Villar, David W. Hogg, Kate Storey-Fisher, Weichi Yao, Ben Blum-Smith NeurIPS 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06610)\n38. **高效等变网络**  \n   Lingshen He, Yuxuan Chen, Zhengyang Shen, Yiming Dong, Yisen Wang, Zhouchen Lin NeurIPS 2021 [论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2021\u002Fhash\u002F2a79ea27c279e471f4d180b08d62b00a-Abstract.html), [代码](https:\u002F\u002Fgithub.com\u002FLingshenHe\u002FEfficient-Equivariant-Network)\n39. **GemNet：适用于分子的通用方向图神经网络**  \n   Johannes Klicpera, Florian Becker, Stephan Günnemann NeurIPS 2021 [论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F35cf8659cfcb13224cbd47863a34fc58-Abstract.html)\n40. **利用李代数卷积网络自动发现对称性**  \n   Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, Rose Yu NeurIPS 2021 [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NPOWF_ZLfC5)\n41. **几何与物理量改善E(3)等变消息传递**  \n   Johannes Brandstetter、Rob Hesselink、Elise van der Pol、Erik J Bekkers、Max Welling ICLR 2022（亮点论文）[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.02905)，[代码](https:\u002F\u002Fgithub.com\u002FRobDHess\u002FSteerable-E3-GNN)\n42. **用于不变与等变网络设计的帧平均法**  \n   Omri Puny、Matan Atzmon、Heli Ben-Hamu、Ishan Misra、Aditya Grover、Edward J. Smith、Yaron Lipman [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.03336) ICLR 2022\n43. 
**为大规模原子级动力学学习局部等变表示**  \n   Albert Musaelian、Simon Batzner、Anders Johansson、Lixin Sun、Cameron J. Owen、Mordechai Kornbluth、Boris Kozinsky [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05249) \n44. **面向球面CNN的莫比乌斯卷积**  \n   Thomas W. Mitchel、Noam Aigerman、Vladimir G. Kim、Michael Kazhdan SIGGRAPH 2022 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12212)  \n   （注：对球面上SL(2, C)的作用具有等变性。据我们所知，这是首个*共形*等变的表面卷积网络）\n45. **DeltaConv：用于点云几何深度学习的各向异性算子**  \n   Ruben Wiersma、Ahmad Nasikun、Elmar Eisemann、Klaus Hildebrandt SIGGRAPH 2022 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.08799)，[代码](https:\u002F\u002Fgithub.com\u002Frubenwiersma\u002Fdeltaconv)  \n   通过使用微分算子实现旋转等变性。\n46. **神经ePDOs：基于空间自适应等变偏微分算子的网络**  \n Lingshen He*、Yuxuan Chen*、Zhengyang Shen、Yibo Yang、Zhouchen Lin  ICLR 2023（亮点论文）[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=D1Iqfm7WTkk)，[代码](https:\u002F\u002Fgithub.com\u002FYuxuanChen21\u002FNeural_ePDOs)\n47. **用于等变神经网络的可导向偏微分算子**  \nErik Jenner、Maurice Weiler ICLR 2022 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.10163)\n48. **构建E(N)等变可导向CNN的程序**  \nGabriele Cesa、Leon Lang、Maurice Weiler ICLR 2022 [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=WE4qe9xlnQw)\n49. **克利福德-可导向卷积神经网络**  \nMaksim Zhdanov、David Ruhe、Maurice Weiler、Ana Lucic、Johannes Brandstetter、Patrick Forré ICML 2024 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14730)\n\n### [理论](#content)\n\n1. **关于神经网络中等变性和卷积向紧致群作用的推广**  \n  Risi Kondor, Shubhendu Trivedi ICML 2018 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.03690)  \n  注：对于标量场，卷积就足够了。\n\n3. **齐次空间上等变卷积神经网络的一般理论**  \n  Taco Cohen, Mario Geiger, Maurice Weiler NeurIPS 2019 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.02017)  \n  注：对于一般场，卷积就足够了。\n\n4. **通过参数共享实现等变性**  \n  Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos ICML 2017 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08389)\n\n5. **神经网络对不变映射的通用逼近**  \n  Dmitry Yarotsky [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10306)\n\n6. **群等变卷积核的Wigner-Eckart定理**  \n  Leon Lang, Maurice Weiler ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.10952)  \n  注：可导向核空间已被完全理解，并以以下三类参数化表示：1) 广义约化矩阵元，2) Clebsch-Gordan系数，以及 3) 齐次空间上的调和基函数。\n\n7. **旋转等变点云网络的通用性研究**  \n  Nadav Dym, Haggai Maron ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02449)  \n  注：TFN和se3-transformer具有通用性。\n\n8. **通用等变多层感知机**  \n  Siamak Ravanbakhsh [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.02912)\n\n9. **等变模型在泛化能力上的严格优势证明**  \n  Bryn Elesedy, Sheheryar Zaidi ICML 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10333)\n\n10. **线性等变网络的隐式偏差**  \n  Hannah Lawrence, Kristian Georgiev, Andrew Dienes, Bobak T. Kiani ICML 2022 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06084)\n\n11. **几何图神经网络的表达能力研究**  \n  Chaitanya K. Joshi, Cristian Bodnar, Simon V. Mathis, Taco Cohen, Pietro Liò ICML 2023 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09308)\n\n12. **等变且坐标无关的卷积神经网络——神经网络的规范场论**  \n  Maurice Weiler, Patrick Forré, Erik Verlinde, Max Welling [书籍](https:\u002F\u002Fmaurice-weiler.gitlab.io\u002Fcnn_book\u002FEquivariantAndCoordinateIndependentCNNs.pdf)\n\n\n### [等变密度估计与采样](#content)\n\n1. **等变流：针对对称密度的精确似然生成学习**    \n  Jonas Köhler, Leon Klein, Frank Noé ICML 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02425)  \n  注：在欧几里得空间上构建等变归一化流的一般框架。应用于粒子系统\u002F点云时，同时具备SE(3)和排列等变性。\n2. 
**等变哈密顿流**    \n  Danilo Jimenez Rezende, Sébastien Racanière, Irina Higgins, Peter Toth NeurIPS 2019 ML4Phys研讨会 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.13739)  \n  注：利用哈密顿动力学构建相空间中等变归一化流的一般框架。应用于SE(2)等变性。\n3. **使用SU(N)规范等变流进行采样**    \n  Denis Boyda, Gurtej Kanwar, Sébastien Racanière, Danilo Jimenez Rezende, Michael S. Albergo, Kyle Cranmer, Daniel C. Hackett, Phiala E. Shanahan [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05456)    \n  注：用于格点规范场论的归一化流。应用于SU(2)\u002FSU(3)等变性。\n4. **用于集合建模的交换神经ODE**    \n  Yang Li, Haidong Yi, Christopher M. Bender, Siyuan Shan, Junier B. Oliva NeurIPS 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02676)  \n  注：用于集合数据的排列等变流框架。应用于排列等变性。\n5. **用于点过程和集合的等变归一化流**    \n  Marin Biloš, Stephan Günnemann NeurIPS 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.03242)  \n  注：用于集合数据的排列等变流框架。应用于排列等变性。\n6. **卷积指数与广义Sylvester流**    \n  Emiel Hoogeboom, Victor Garcia Satorras, Jakub M. Tomczak, Max Welling NeurIPS 2020 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.01910)  \n  注：可逆卷积算子。应用于排列等变性。\n7. **通过学习映射进行目标自由能估计**    \n  Peter Wirnsberger, Andrew J. Ballard, George Papamakarios, Stuart Abercrombie, Sébastien Racanière, Alexander Pritzel, Danilo Jimenez Rezende, Charles Blundell J Chem Phys. 2020 Oct 14;153(14):144112. [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.04913)  \n  注：用于环面上粒子系统的归一化流。应用于排列等变性。\n8. **温度可控流**    \n  Manuel Dibak, Leon Klein, Frank Noé NeurIPS 2020 ML4Phys研讨会 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.00429)  \n  注：相空间中的归一化流，对温度变化具有等变性。\n9. **等变流形流**  \n  Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser-Nam Lim, Christopher De Sa NeurIPS 2021 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.08596.pdf)\n  注：允许在任意黎曼流形上学习的归一化流，能够适应任意对称性（即等距子群作用不变性）。\n\n### [应用](#content)\n1. **使用等变连续卷积进行轨迹预测**  \n  Robin Walters、Jinxi Li、Rose Yu ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11344)\n2. **用于数据高效且精确的原子间势的SE(3)等变图神经网络**  \n  Simon Batzner、Tess E. Smidt、Lixin Sun、Jonathan P. Mailoa、Mordechai Kornbluth、Nicola Molinari、Boris Kozinsky [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03164)\n4. **利用欧几里得神经网络寻找对称性破缺序参量**  \n  Tess E. Smidt、Mario Geiger、Benjamin Kurt Miller [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.02005)\n5. **群等变生成对抗网络**  \n  Neel Dey、Antong Chen、Soheil Ghafurian ICLR 2021  [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.01683)   \n6. **用深度神经网络从头计算求解多电子薛定谔方程**  \n  David Pfau、James S. Spencer、Alexander G. de G. Matthews、W. M. C. Foulkes [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.02487)  \n7. **面向3D分子设计的对称性感知演员-评论家方法**    \n  Gregor N. C. Simm、Robert Pinsler、Gábor Csányi、José Miguel Hernández-Lobato ICLR 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.12747)\n8. **旋转平移等变卷积网络：在组织病理学图像分析中的应用**  \n  Maxime W. Lafarge、Erik J. Bekkers、Josien P.W. Pluim、Remco Duits、Mitko Veta MedIA [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08725)\n9. **尺度等变性提升孪生跟踪性能**  \n  Ivan Sosnovik\*、Artem Moskalev\*、Arnold Smeulders WACV 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09115)\n10. **用于肺结节检测的3D G-CNN**  \n  Marysia Winkels、Taco S. Cohen [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.04656)  \n  国际深度学习医学影像会议（MIDL），2018年。\n11. **用于医学图像分析的旋转平移协变卷积网络**  \n  Erik J. Bekkers、Maxime W. Lafarge、Mitko Veta、Koen A.J. Eppenhof、Josien P.W. Pluim、Remco Duits MICCAI 2018 青年科学家奖 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03393)\n12. 
**等变球面反卷积：从球面数据中学习稀疏的方向分布函数**  \n  Axel Elaldi\*、Neel Dey\*、Heejong Kim、Guido Gerig，医学影像信息处理会议（IPMI）2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09462)\n13. **用于扩散磁共振成像的旋转等变深度学习**  \n  Philip Müller、Vladimir Golkov、Valentina Tomassini、Daniel Cremers [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.06942)\n14. **数字岩石物理学中的等变几何学习：基于莫尔斯图估计地层因子和有效渗透率张量**  \n  Chen Cai、Nikolaos Vlassis、Lucas Magee、Ran Ma、Zeyu Xiong、Bahador Bahmani、Teng-Fong Wong、Yusu Wang、WaiChing Sun [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.05608)  \n  注：等变网络结合莫尔斯图用于渗透率张量预测\n15. **用欧几里得神经网络直接预测声子态密度**  \n  Zhantao Chen、Nina Andrejevic、Tess Smidt、Zhiwei Ding、Yen-Ting Chi、Quynh T. Nguyen、Ahmet Alatas、Jing Kong、Mingda Li，《先进科学》（2021）[论文](https:\u002F\u002Fonlinelibrary.wiley.com\u002Fdoi\u002F10.1002\u002Fadvs.202004214) [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05163)\n16. **SE(3)等变预测分子波函数和电子密度**  \n  Oliver T. Unke、Mihail Bogojeski、Michael Gastegger、Mario Geiger、Tess Smidt、Klaus-Robert Müller [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02347)\n17. **独立的SE(3)等变模型用于端到端刚性蛋白质对接**  \n  Octavian-Eugen Ganea、Xinyuan Huang、Charlotte Bunne、Yatao Bian、Regina Barzilay、Tommi Jaakkola、Andreas Krause，正在审稿中，2022年 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07786)\n18. **用于相互作用动力系统的旋转平移局部坐标系**  \n  Miltiadis Kofinas、Naveen Shankar Nagaraja、Efstratios Gavves NeurIPS 2021 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14961)\n19. **MACE：高阶等变消息传递神经网络，用于快速准确的力场计算**\u003Cbr>\n  Ilyes Batatia、Dávid Péter Kovács、Gregor N. C. Simm、Christoph Ortner、Gábor Csányi，正在审稿中，2022年 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07697)，[代码](https:\u002F\u002Fgithub.com\u002FACEsuit\u002Fmace)\n20. **空间动作空间中的等变Q学习**  \n  Dian Wang、Robin Walters、Xupeng Zhu、Robert Platt，CoRL 2021 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.15443.pdf)\n21. **SO(2)等变强化学习**  \n  Dian Wang、Robin Walters、Robert Platt，ICLR 2022 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04439.pdf)\n22. **利用等变模型实现高效抓取学习**  \n  Xupeng Zhu、Dian Wang、Ondrej Biza、Guanang Su、Robin Walters、Robert Platt，RSS 2022 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.09468.pdf)\n23. **等变搬运网络**  \n  Haojie Huang、Dian Wang、Robin Walters、Robert Platt，RSS 2022 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.09400.pdf)\n24. **基于等变模型的机器人本体学习**  \n  Dian Wang、Mingxi Jia、Xupeng Zhu、Robin Walters、Robert Platt，CoRL 2022 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04923.pdf)\n25. **边缘抓取网络：基于图的SE(3)不变抓取检测方法**  \n  Haojie Huang、Dian Wang、Xupeng Zhu、Robin Walters、Robert Platt，正在审稿中 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.00191.pdf)\n26. **SEIL：仿真增强的等变模仿学习**  \n  Mingxi Jia、Dian Wang、Guanang Su、David Klee、Xupeng Zhu、Robin Walters、Robert Platt，正在审稿中 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.00194.pdf)\n27. **等变模型在具有潜在对称性的领域中的惊人有效性**  \n  Dian Wang、Jung Yeon Park、Neel Sortur、Lawson L.S. Wong、Robin Walters、Robert Platt，正在审稿中 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.09231.pdf)\n\n### [排列等变性](#content)\n关于这一主题的论文有很多。我只添加了其中很少的一部分。\n\n1. **PointNet：用于3D分类与分割的点云深度学习**  \n  Charles R. Qi、Hao Su、Kaichun Mo、Leonidas J. Guibas CVPR 2017 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.00593) \n2. **深度集合**  \nManzil Zaheer、Satwik Kottur、Siamak Ravanbakhsh、Barnabas Poczos、Ruslan Salakhutdinov、Alexander Smola NeurIPS 2017   [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06114)\n3. 
### [Talks and tutorials](#content)\n\nIAS: [Graph Nets: The Next Generation - Max Welling - YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Wx8J-Kw3fTA&t=3602s)\n\n[Workshop on Equivariance and Data Augmentation](https:\u002F\u002Fsites.google.com\u002Fview\u002Fequiv-data-aug\u002Fhome): many great talks\n\nIPAM: [Tess Smidt: “Euclidean Neural Networks for Emulating Ab Initio Calculations and Generating Atom…” - YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8CF8Grb_brE)\n\nIPAM: [E(3) Equivariant Neural Network Tutorial](https:\u002F\u002Fblondegeek.github.io\u002Fe3nn_tutorial\u002F)\n\nIPAM: [Risi Kondor: “Fourier space neural networks”](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=-PVyi0Keiec)\n\n[NeurIPS 2020 tutorial: Equivariant Networks](https:\u002F\u002Fnips.cc\u002Fvirtual\u002F2020\u002Fpublic\u002Ftutorial_3e267ff3c8b6621e5ad4d0f26142892b.html)\n\n[Yaron Lipman - Deep Learning of Irregular and Geometric Data - YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fveyx5zKReo&feature=youtu.be)\n\nMath-ML: [Erik J Bekkers: Group Equivariant CNNs beyond Roto-Translations: B-Spline CNNs on Lie Groups](https:\u002F\u002Fyoutu.be\u002FrakcnrgX4oo)\n\nKostas Daniilidis: [Geometry-aware deep learning: a brief history of equivariant representations and recent results](https:\u002F\u002Fmathinstitutes.org\u002Fvideos\u002Fvideos\u002Fview\u002F15146)\n\nAndrew White: [Deep Learning for Molecules and Materials](https:\u002F\u002Fwhitead.github.io\u002Fdmol-book\u002Fdl\u002FEquivariant.html)\n\nErik Bekkers: [An Introduction to Group Equivariant Deep Learning](https:\u002F\u002Fuvagedl.github.io), a course taught at the University of Amsterdam\n\nMichael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković: [Geometric Deep Learning course](https:\u002F\u002Fgeometricdeeplearning.com\u002Flectures\u002F)\n\n\n### [Background](#content)\n\nI am by no means an expert in this area. The books and articles below are what Taco Cohen recommended when asked for references on learning group theory and representation theory.\n\n1. [Carter, Visual Group Theory](https:\u002F\u002Fwww.amazon.com\u002FVisual-Group-Theory-Problem-Book\u002Fdp\u002F088385757X)  \n  Note: a very elementary introduction to group theory.\n\n2. [Theoretical Aspects of Group Equivariant Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.05154)  \n  Carlos Esteves  \n  Note: covers all the mathematics needed for equivariant networks in a fairly compact and accessible way.\n\n3. [Serre, Linear Representations of Finite Groups](http:\u002F\u002Fwww.math.tau.ac.il\u002F~borovoi\u002Fcourses\u002FReprFG\u002FHatzagot.pdf)  \n  Note: the classic reference on representations of finite groups; the first few chapters are relevant to equivariant networks.\n\n4. [G. B. Folland, A Course in Abstract Harmonic Analysis](https:\u002F\u002Fsv.20file.org\u002Fup1\u002F1415_0.pdf)  \n  Note: covers representations of locally compact groups and induced representations.\n\n5. [David Gurarie, Symmetries and Laplacians: Introduction to Harmonic Analysis, Group Representations and Applications](https:\u002F\u002Fwww.amazon.com\u002FSymmetries-Laplacians-Introduction-Representations-Applications\u002Fdp\u002F0486462889)\n\n6. [Mark Hamilton, Mathematical Gauge Theory: With Applications to the Standard Model of Particle Physics](https:\u002F\u002Fwww.amazon.com\u002FMathematical-Gauge-Theory-Applications-Universitext\u002Fdp\u002F3319684388)  \n  Note: covers fiber bundles, which are helpful for understanding homogeneous G-CNNs and gauge CNNs.\n\n
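The texts above supply the general theory of group actions and representations; the smallest concrete instance of what they formalize is that convolution with circular padding commutes with cyclic shifts, i.e., it is equivariant to the translation group Z\u002FnZ. A minimal numeric check (an illustrative sketch, not taken from any reference above):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\n# Random 1D signal (batch=1, channel=1, length=8) and a size-3 kernel\nx = torch.randn(1, 1, 8)\nw = torch.randn(1, 1, 3)\n\ndef circ_conv(x, w):\n    # Circular padding makes the convolution exactly shift-equivariant\n    return F.conv1d(F.pad(x, (1, 1), mode='circular'), w)\n\ndef shift(t, s):\n    return torch.roll(t, shifts=s, dims=-1)\n\n# Equivariance: conv(shift(x)) == shift(conv(x))\nprint(torch.allclose(circ_conv(shift(x, 3), w), shift(circ_conv(x, w), 3), atol=1e-6))\n```\n\n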
### Theses \u002F Dissertations\n1. Taco Cohen, *Equivariant Convolutional Networks*, PhD thesis, University of Amsterdam, 2021 [pdf] (Note: Part II contains a substantial amount of new, previously unpublished material)\n\n2. **Extending Convolution Through Spatially Adaptive Alignment**  \n  Thomas W. Mitchel, PhD thesis, Johns Hopkins University, 2022 [pdf](https:\u002F\u002Fwww.mitchel.computer\u002Fdoc\u002Fthesis.pdf)  \n   *Proposes a novel, unified theoretical framework for transformation-equivariant convolutions on arbitrary homogeneous spaces and 2D Riemannian manifolds. Can handle high-dimensional, non-compact transformation groups.*\n\n\n### [To read](#content)\nThere are many papers that I have not yet read carefully.\n\n1. **Making Convolutional Networks Shift-Invariant Again**  \n  Richard Zhang ICML 2019 [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.11486)\n2. **Probabilistic symmetries and invariant neural networks**  \n  Benjamin Bloem-Reddy, Yee Whye Teh JMLR [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.06082)\n3. **On Representing (Anti)Symmetric Functions**  \n  Marcus Hutter [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.15298)\n4. **PDE-based Group Equivariant Convolutional Neural Networks**  \n  Bart M.N. Smets, Jim Portegies, Erik J. Bekkers, Remco Duits [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.09046)","# awesome-equivariant-network Quickstart Guide\n\n**Note**: `awesome-equivariant-network` is a **paper and resource list**, not a single installable package or library. It collects the core research results in the field of equivariant neural networks.\n\nTo use a specific model mentioned in the list (e.g., SE(3)-Transformers, E(n) GNNs, or Spherical CNNs), you need to find and install the corresponding standalone open-source repository. This guide walks through setting up a general development environment and demonstrates basic usage with **E(n) Equivariant Graph Neural Networks (EGNN)**, a popular entry on the list.\n\n## Environment\n\nBefore starting, make sure your system meets the following requirements:\n\n*   **Operating system**: Linux (Ubuntu 18.04\u002F20.04 recommended) or macOS. Windows users should use WSL2.\n*   **Python version**: 3.7 - 3.9 (support for Python 3.10+ in many geometric deep learning libraries is still maturing).\n*   **Hardware**: an NVIDIA GPU is recommended to speed up training (requires the CUDA Toolkit).\n*   **Prerequisites**:\n    *   `pip` (Python package manager)\n    *   `git` (version control)\n    *   `CUDA` (for GPU acceleration; the version must match your PyTorch build)\n\n## Installation\n\nSince the repository itself contains no code, we install a general deep learning framework and clone a representative project from the list as an example.\n\n### 1. Create a virtual environment (recommended)\n\n```bash\npython -m venv equivariant-env\nsource equivariant-env\u002Fbin\u002Factivate  # On Windows use: equivariant-env\\Scripts\\activate\n```\n\n### 2. Install PyTorch\n\nVisit the [PyTorch website](https:\u002F\u002Fpytorch.org\u002F) for the command matching your environment. For example (CUDA 11.8):\n\n```bash\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n# Or, inside mainland China, via the Tsinghua mirror\npip install torch torchvision torchaudio -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. Get the model code (EGNN as the example)\n\n**[E(n) Equivariant Graph Neural Networks](https:\u002F\u002Fgithub.com\u002Fvgsatorras\u002Fegnn)**, one of the entries on the list, is a classic implementation.\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvgsatorras\u002Fegnn.git\ncd egnn\n```\n\n### 4. Install the project dependencies\n\nFrom inside the project directory, install its dependencies (if the checkout provides a `requirements.txt`; otherwise a plain PyTorch environment is usually enough):\n\n```bash\n# Optionally use the Tsinghua mirror to speed up installation\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n# If the project ships a setup.py, you can instead run\npip install -e . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n*(Note: other papers on the list, such as SE(3)-Transformers and GemNet, have their own GitHub repositories; installation is analogous, clone each as needed.)*\n\n## Basic usage\n\nThe following uses **EGNN** to show how to build a simple equivariant graph neural network.\n\n### 1. Imports\n\n```python\nimport torch\n\n# In the vgsatorras\u002Fegnn checkout the reference implementation lives in\n# models\u002Fegnn_clean\u002Fegnn_clean.py; adjust the import to your local layout.\nfrom models.egnn_clean.egnn_clean import EGNN, get_edges_batch\n```\n\n### 2. Define the model\n\nSuppose we have a 3D point cloud task (e.g., molecular property prediction) with input node feature dimension `in_features` and hidden dimension `hidden_features`. The constructor arguments below follow the `egnn_clean` reference implementation; verify the exact signature against your checkout.\n\n```python\n# Settings\nin_features = 5       # input node feature dimension\nhidden_features = 64  # hidden dimension\nout_features = 1      # output dimension (e.g., a predicted energy)\nn_layers = 4          # number of message-passing layers\n\n# Initialize the model (argument names as in egnn_clean)\nmodel = EGNN(\n    in_node_nf=in_features,\n    in_edge_nf=1,           # edge feature dimension; get_edges_batch returns 1-dim edge attributes\n    hidden_nf=hidden_features,\n    out_node_nf=out_features,\n    n_layers=n_layers,\n    act_fn=torch.nn.SiLU(),\n    normalize=True,         # normalize relative distances, which can help stability\n    tanh=False              # whether to squash coordinate updates with tanh\n)\n```\n\n### 3. Forward pass\n\nConstruct a batch of dummy data for testing. The defining property of an equivariant network is that **when the input coordinates are rotated or translated, the outputs transform accordingly (scalar features stay invariant)**.\n\n```python\nbatch_size = 8\nnum_nodes = 20\n\n# egnn_clean flattens the batch: tensors have shape (batch_size * num_nodes, dim)\nh = torch.randn(batch_size * num_nodes, in_features)  # node features\nx = torch.randn(batch_size * num_nodes, 3)            # 3D coordinates\n\n# Fully connected edges within each graph; get_edges_batch is the helper\n# shipped with egnn_clean and also returns 1-dim edge attributes\nedges, edge_attr = get_edges_batch(num_nodes, batch_size)\n\n# Forward pass\n# h_final: updated node features (scalars, rotation-invariant)\n# x_final: updated coordinates (rotation-equivariant)\nh_final, x_final = model(h, x, edges, edge_attr)\n\nprint(f'features:    {h.shape} -> {h_final.shape}')\nprint(f'coordinates: {x.shape} -> {x_final.shape}')\n```\n\n
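To make the equivariance claim concrete, the short check below (a sketch reusing `model`, `h`, `x`, `edges`, `edge_attr`, `h_final`, and `x_final` from the example above) applies a random rotation to the input coordinates and confirms that scalar outputs are unchanged while output coordinates rotate along:\n\n```python\n# Build a random proper rotation matrix via QR decomposition\nQ, R = torch.linalg.qr(torch.randn(3, 3))\nQ = Q * torch.sign(torch.diagonal(R))  # make the factorization unique\nif torch.det(Q) < 0:\n    Q[:, 0] = -Q[:, 0]                 # force det(Q) = +1 (a rotation, not a reflection)\n\n# Run the model on rotated coordinates (each row of x is rotated by Q)\nh_rot, x_rot = model(h, x @ Q.T, edges, edge_attr)\n\n# Scalar features should be invariant; coordinates should rotate with the input\nprint('h invariant:  ', torch.allclose(h_rot, h_final, atol=1e-4))\nprint('x equivariant:', torch.allclose(x_rot, x_final @ Q.T, atol=1e-4))\n```\n\n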
### Next steps\n\n1.  **Browse the paper list**: go back to the `awesome-equivariant-network` README and pick the papers that best match your application (3D molecules, spherical images, particle physics, ...).\n2.  **Find the code**: follow the `[paper]` link next to each title for details; official code can usually be found on the paper's landing page or by searching the title on GitHub.\n3.  **Read the docs**: each project's `README.md` carries detailed training and evaluation instructions for that model.","The algorithms team at a biopharmaceutical company is building a protein binding-site prediction model on 3D point cloud data and must handle molecular structures with complex spatial rotation behavior.\n\n### Without awesome-equivariant-network\n- **Slow literature review**: researchers must sift through arXiv and conference proceedings, making it hard to systematically trace the field's evolution from basic group convolutions to the latest gauge equivariant CNNs.\n- **High barrier to reproduction**: the mathematics behind $SO(3)$ and $SE(3)$ equivariance (spherical harmonics, Fourier transforms) is demanding, and without authoritative references the team makes repeated mistakes when reproducing classic models such as Tensor Field Networks.\n- **Blind architecture choices**: with no quick way to compare architectures (e.g., steerable CNNs vs. spherical CNNs) on a given 3D task, it is easy to pick the wrong baseline and waste months of compute on unproductive training.\n- **Missed optimizations**: overlooking techniques such as generalized-FFT acceleration of spherical convolutions leaves model inference far slower than the state of the art.\n\n### With awesome-equivariant-network\n- **One-stop knowledge index**: the team uses the list's “theory”, “applications”, and “permutation equivariance” categories to locate the core papers, building a complete map of the field within half a day.\n- **A precise reproduction path**: the short annotations attached to entries (e.g., notes on discrete groups or steerable filters) quickly narrow the search to $SE(3)$-equivariant models suited to protein structures, sharply reducing trial and error in both derivation and implementation.\n- **Informed architecture decisions**: benchmark papers collected in the list let the team shortlist the $SE(3)$-equivariant architectures that perform best on 3D point cloud tasks, ensuring the model is rotation-aware by construction.\n- **Staying current**: links to the newest work, such as gauge equivariant networks, let the team fold recent geometric deep learning ideas into the existing pipeline, noticeably improving accuracy in the small-sample regime.\n\nBy systematizing the scattered equivariant neural network literature, awesome-equivariant-network compressed an exploration cycle of months into weeks, taking the team from blind trial and error to targeted modeling.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChen-Cai-OSU_awesome-equivariant-network_34d73a01.png","Chen-Cai-OSU",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FChen-Cai-OSU_42aadb1f.png","CS Ph.D. student at UCSD","UCSD","https:\u002F\u002Fchen-cai-osu.github.io\u002F","https:\u002F\u002Fgithub.com\u002FChen-Cai-OSU",1088,108,"2026-04-10T10:38:02",5,"","Not specified",{"notes":91,"python":89,"dependencies":92},"This repository (awesome-equivariant-network) is a curated list of papers on equivariant neural networks, not an executable software tool or code library, so the README contains no operating system, hardware, Python version, or dependency installation requirements. Users only need to browse the listed paper links.",[],[18],"2026-03-27T02:49:30.150509","2026-04-14T12:35:50.879705",[],[]]