[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ritchieng--the-incredible-pytorch":3,"tool-ritchieng--the-incredible-pytorch":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":86,"forks":87,"last_commit_at":88,"license":89,"difficulty_score":90,"env_os":91,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":95,"github_topics":96,"view_count":103,"oss_zip_url":85,"oss_zip_packed_at":85,"status":16,"created_at":104,"updated_at":105,"faqs":106,"releases":157},505,"ritchieng\u002Fthe-incredible-pytorch","the-incredible-pytorch","The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch. 
","the-incredible-pytorch 是一份精心策划的 PyTorch 资源导航清单，旨在成为深度学习爱好者的必备指南。它将分散在网上的教程、论文、项目、社区动态及视频书籍整合在一起，解决了学习者面对海量资料时难以筛选和查找的痛点。\n\n这份清单特别适合 AI 开发者、算法研究人员以及对 PyTorch 框架感兴趣的技术人员使用。其内容覆盖面极广，不仅包含基础的卷积神经网络、RNN 和 Transformer 架构，还紧跟技术潮流，囊括了大语言模型、Agent AI、量子机器学习及医疗影像分析等前沿方向。此外，它还提供了关于模型优化、量化、可解释性等专业主题的深入资源。\n\n作为一个开源社区驱动的项目，the-incredible-pytorch 采用 MIT 协议，欢迎全球贡献者通过 Pull Request 持续更新内容。对于希望在 PyTorch 生态中快速成长的用户而言，这相当于拥有一张实时更新的技术地图，能极大提升学习与研究效率。","\u003Cp align=\"center\">\u003Cimg width=\"40%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fritchieng_the-incredible-pytorch_readme_64b2d93e8e24.png\" \u002F>\u003C\u002Fp>\n\n--------------------------------------------------------------------------------\n\u003Cp align=\"center\">\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fstars-10000+-blue.svg\"\u002F>\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fforks-1900+-blue.svg\"\u002F>\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg\"\u002F>\n\u003C\u002Fp>\n\nThis is a curated list of tutorials, projects, libraries, videos, papers, books and anything related to the incredible [PyTorch](http:\u002F\u002Fpytorch.org\u002F). 
Feel free to make a pull request to contribute to this list.\n\n\n# Table Of Contents\n\u003C!-- vscode-markdown-toc -->\n\n- [Table Of Contents](#table-of-contents)\n  - [Tutorials](#tutorials)\n  - [Large Language Models (LLMs)](#large-language-models-llms)\n  - [Agentic AI](#agentic-ai)\n  - [Guardrails and AI Safety](#guardrails-and-ai-safety)\n  - [Tabular Data](#tabular-data)\n  - [Visualization](#visualization)\n  - [Explainability](#explainability)\n  - [Object Detection](#object-detection)\n  - [Long-Tailed \u002F Out-of-Distribution Recognition](#long-tailed--out-of-distribution-recognition)\n  - [Activation Functions](#activation-functions)\n  - [Energy-Based Learning](#energy-based-learning)\n  - [Missing Data](#missing-data)\n  - [Architecture Search](#architecture-search)\n  - [Continual Learning](#continual-learning)\n  - [Optimization](#optimization)\n  - [Quantization](#quantization)\n  - [Quantum Machine Learning](#quantum-machine-learning)\n  - [Neural Network Compression](#neural-network-compression)\n  - [Facial, Action and Pose Recognition](#facial-action-and-pose-recognition)\n  - [Super resolution](#super-resolution)\n  - [Synthesizing Views](#synthetesizing-views)\n  - [Voice](#voice)\n  - [Medical](#medical)\n  - [3D Segmentation, Classification and Regression](#3d-segmentation-classification-and-regression)\n  - [Video Recognition](#video-recognition)\n  - [Recurrent Neural Networks (RNNs)](#recurrent-neural-networks-rnns)\n  - [Convolutional Neural Networks (CNNs)](#convolutional-neural-networks-cnns)\n  - [Segmentation](#segmentation)\n  - [Geometric Deep Learning: Graph \\& Irregular Structures](#geometric-deep-learning-graph--irregular-structures)\n  - [Sorting](#sorting)\n  - [Ordinary Differential Equations Networks](#ordinary-differential-equations-networks)\n  - [Multi-task Learning](#multi-task-learning)\n  - [GANs, VAEs, and AEs](#gans-vaes-and-aes)\n  - [Unsupervised Learning](#unsupervised-learning)\n  - [Adversarial 
Attacks](#adversarial-attacks)\n  - [Style Transfer](#style-transfer)\n  - [Image Captioning](#image-captioning)\n  - [Transformers](#transformers)\n  - [Similarity Networks and Functions](#similarity-networks-and-functions)\n  - [Reasoning](#reasoning)\n  - [General NLP](#general-nlp)\n  - [Question and Answering](#question-and-answering)\n  - [Speech Generation and Recognition](#speech-generation-and-recognition)\n  - [Document and Text Classification](#document-and-text-classification)\n  - [Text Generation](#text-generation)\n  - [Text to Image](#text-to-image)\n  - [Translation](#translation)\n  - [Sentiment Analysis](#sentiment-analysis)\n  - [Deep Reinforcement Learning](#deep-reinforcement-learning)\n  - [Deep Bayesian Learning and Probabilistic Programming](#deep-bayesian-learning-and-probabilistic-programmming)\n  - [Spiking Neural Networks](#spiking-neural-networks)\n  - [Anomaly Detection](#anomaly-detection)\n  - [Regression Types](#regression-types)\n  - [Time Series](#time-series)\n  - [Synthetic Datasets](#synthetic-datasets)\n  - [Neural Network General Improvements](#neural-network-general-improvements)\n  - [DNN Applications in Chemistry and Physics](#dnn-applications-in-chemistry-and-physics)\n  - [New Thinking on General Neural Network Architecture](#new-thinking-on-general-neural-network-architecture)\n  - [Linear Algebra](#linear-algebra)\n  - [API Abstraction](#api-abstraction)\n  - [Low Level Utilities](#low-level-utilities)\n  - [PyTorch Utilities](#pytorch-utilities)\n  - [PyTorch Video Tutorials](#pytorch-video-tutorials)\n  - [Community](#community)\n  - [To be Classified](#to-be-classified)\n  - [Links to This Repository](#links-to-this-repository)\n  - [Contributions](#contributions)\n  - [New Special Dedicated List to AI Agents | The Incredible AI Agents](#new-special-dedicated-list-to-ai-agents--the-incredible-ai-agents)\n\n\u003C!-- vscode-markdown-toc-config\n\tnumbering=false\n\tautoSave=true\n\t\u002Fvscode-markdown-toc-config 
-->\n\u003C!-- \u002Fvscode-markdown-toc -->\n\n## \u003Ca name='Tutorials'>\u003C\u002Fa>Tutorials\n- [Official PyTorch Tutorials](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftutorials)\n- [Official PyTorch Examples](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples)\n- [The Math Behind Artificial Intelligence: A Guide to AI Foundations [Full Book]](https:\u002F\u002Fwww.freecodecamp.org\u002Fnews\u002Fthe-math-behind-artificial-intelligence-book\u002F)\n- [Dive Into Deep Learning with PyTorch](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en)\n- [How to Read Pytorch](https:\u002F\u002Fgithub.com\u002Fdavidbau\u002Fhow-to-read-pytorch)\n- [Minicourse in Deep Learning with PyTorch (Multi-language)](https:\u002F\u002Fgithub.com\u002FAtcold\u002Fpytorch-Deep-Learning-Minicourse)\n- [Practical Deep Learning with PyTorch](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fdeep-learning-wizard)\n- [Deep Learning Models](https:\u002F\u002Fgithub.com\u002Frasbt\u002Fdeeplearning-models)\n- [C++ Implementation of PyTorch Tutorial](https:\u002F\u002Fgithub.com\u002Fprabhuomkar\u002Fpytorch-cpp)\n- [Simple Examples to Introduce PyTorch](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fpytorch-examples)\n- [Mini Tutorials in PyTorch](https:\u002F\u002Fgithub.com\u002Fvinhkhuc\u002FPyTorch-Mini-Tutorials)\n- [Deep Learning for NLP](https:\u002F\u002Fgithub.com\u002Frguthrie3\u002FDeepLearningForNLPInPytorch)\n- [Deep Learning Tutorial for Researchers](https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fpytorch-tutorial)\n- [Fully Convolutional Networks implemented with PyTorch](https:\u002F\u002Fgithub.com\u002Fwkentaro\u002Fpytorch-fcn)\n- [Simple PyTorch Tutorials Zero to ALL](https:\u002F\u002Fgithub.com\u002Fhunkim\u002FPyTorchZeroToAll)\n- [DeepNLP-models-Pytorch](https:\u002F\u002Fgithub.com\u002FDSKSD\u002FDeepNLP-models-Pytorch)\n- [MILA PyTorch Welcome Tutorials](https:\u002F\u002Fgithub.com\u002Fmila-udem\u002Fwelcome_tutorials)\n- [Effective PyTorch, Optimizing 
Runtime with TorchScript and Numerical Stability Optimization](https:\u002F\u002Fgithub.com\u002Fvahidk\u002FEffectivePyTorch)\n- [Practical PyTorch](https:\u002F\u002Fgithub.com\u002Fspro\u002Fpractical-pytorch)\n- [PyTorch Project Template](https:\u002F\u002Fgithub.com\u002Fmoemen95\u002FPyTorch-Project-Template)\n- [Semantic Search with PyTorch](https:\u002F\u002Fgithub.com\u002Fkuutsav\u002Finformation-retrieval)\n\n## \u003Ca name='LargeLanguageModels'>\u003C\u002Fa>Large Language Models (LLMs)\n- LLM Tutorials\n  - [Build a Large Language Model (From Scratch)](https:\u002F\u002Fgithub.com\u002Frasbt\u002FLLMs-from-scratch)\n  - [Huggingface LLM Training Book, a collection of methodologies to help with successful training of large language models](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fllm_training_handbook)\n- General\n  - [Starcoder 2, family of code generation models](https:\u002F\u002Fgithub.com\u002Fbigcode-project\u002Fstarcoder2)\n  - [GPT Fast, fast and hackable PyTorch-native transformer inference](https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Fgpt-fast)\n  - [Mixtral Offloading, run Mixtral-8x7B models in Colab or consumer desktops](https:\u002F\u002Fgithub.com\u002Fdvmazur\u002Fmixtral-offloading)\n  - [Llama](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama)\n  - [Llama Recipes](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama-recipes)\n  - [TinyLlama](https:\u002F\u002Fgithub.com\u002Fjzhang38\u002FTinyLlama)\n  - [Mosaic Pretrained Transformers (MPT)](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fllm-foundry)\n  - [VLLM, high-throughput and memory-efficient inference and serving engine for LLMs](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm)\n  - [Dolly](https:\u002F\u002Fgithub.com\u002Fdatabrickslabs\u002Fdolly)\n  - [Vicuna](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat)\n  - [Mistral 7B](https:\u002F\u002Fgithub.com\u002Fmistralai\u002Fmistral-src)\n  - [BigDL LLM, library for 
running LLMs (large language models) on Intel XPU (from Laptop to GPU to Cloud) using INT4 with very low latency (for any PyTorch model)](https:\u002F\u002Fgithub.com\u002Fintel-analytics\u002FBigDL)\n  - [Simple LLM Finetuner](https:\u002F\u002Fgithub.com\u002Flxe\u002Fsimple-llm-finetuner)\n  - [Petals, run LLMs at home, BitTorrent-style, fine-tuning and inference up to 10x faster than offloading](https:\u002F\u002Fgithub.com\u002Fbigscience-workshop\u002Fpetals)\n  - [Gemma, Google's family of lightweight, state-of-the-art open models](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fgemma_pytorch)\n  - [Qwen, Alibaba Cloud's large language model](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen)\n  - [CodeT5, code-aware encoder-decoder model for code understanding and generation](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002FCodeT5)\n  - [OpenLLaMA, permissively licensed open source reproduction of Meta AI's LLaMA](https:\u002F\u002Fgithub.com\u002Fopenlm-research\u002Fopen_llama)\n  - [RedPajama, leading open-source models with package to reproduce LLaMA training dataset](https:\u002F\u002Fgithub.com\u002Ftogethercomputer\u002FRedPajama-Data)\n  - [MosaicML LLM Foundry, codebase for training, finetuning, and deploying LLMs](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fllm-foundry)\n  - [TECS-L (Golden MoE), dense-to-MoE conversion framework with optimal inhibition ratio I≈1\u002Fe for PyTorch LLMs](https:\u002F\u002Fgithub.com\u002Fneed-singularity\u002FTECS-L)\n- Japanese\n  - [Japanese Llama](https:\u002F\u002Fgithub.com\u002Fmasa3141\u002Fjapanese-alpaca-lora)\n  - [Japanese GPT Neox and Open Calm](https:\u002F\u002Fgithub.com\u002FhppRC\u002Fllm-lora-classification)\n- Chinese\n  - [Chinese Llama-2 7B](https:\u002F\u002Fgithub.com\u002FLinkSoul-AI\u002FChinese-Llama-2-7b)\n  - [Chinese Vicuna](https:\u002F\u002Fgithub.com\u002FFacico\u002FChinese-Vicuna)\n- Retrieval Augmented Generation (RAG)\n  - [LlamaIndex, data framework for your LLM 
application](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index)\n- Embeddings\n  - [ChromaDB, open-source embedding database](https:\u002F\u002Fgithub.com\u002Fchroma-core\u002Fchroma)\n- Applications\n  - [Langchain, building applications with LLMs through composability](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flangchain)\n  - [LangSmith, platform for building production-grade LLM applications](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flangsmith-sdk)\n  - [LiteLLM, call all LLM APIs using the OpenAI format](https:\u002F\u002Fgithub.com\u002FBerriAI\u002Flitellm)\n  - [OpenAI Python, official Python library for the OpenAI API](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python)\n  - [Guidance, library for controlling large language models](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fguidance)\n- Finetuning\n  - [Huggingface PEFT, State-of-the-art Parameter-Efficient Fine-Tuning](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpeft)\n  - [Unsloth, finetune LLMs 2-5x faster with 80% less memory](https:\u002F\u002Fgithub.com\u002Funslothai\u002Funsloth)\n  - [LoRA, Low-Rank Adaptation of Large Language Models](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FLoRA)\n  - [QLoRA, efficient finetuning of quantized LLMs](https:\u002F\u002Fgithub.com\u002Fartidoro\u002Fqlora)\n  - [Axolotl, tool designed to streamline the fine-tuning of various AI models](https:\u002F\u002Fgithub.com\u002FOpenAccess-AI-Collective\u002Faxolotl)\n  - [LLaMA-Factory, unified efficient fine-tuning of 100+ LLMs](https:\u002F\u002Fgithub.com\u002Fhiyouga\u002FLLaMA-Factory)\n- Training\n  - [Higgsfield, Fault-tolerant, highly scalable GPU orchestration, and a machine learning framework designed for training models with billions to trillions of parameters](https:\u002F\u002Fgithub.com\u002Fhiggsfield-ai\u002Fhiggsfield)\n  - [DeepSpeed, deep learning optimization library](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed)\n  - [FairScale, 
PyTorch extensions for high performance and large scale training](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffairscale)\n  - [Accelerate, simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Faccelerate)\n  - [ColossalAI, unified deep learning system for large-scale model training and inference](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI)\n- Quantization\n  - [AutoGPTQ, easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm](https:\u002F\u002Fgithub.com\u002FPanQiWei\u002FAutoGPTQ)\n  - [BitsAndBytes, accessible large language models via k-bit quantization](https:\u002F\u002Fgithub.com\u002FTimDettmers\u002Fbitsandbytes)\n  - [GPTQ-for-LLaMa, 4 bits quantization of LLaMA using GPTQ](https:\u002F\u002Fgithub.com\u002Fqwopqwop200\u002FGPTQ-for-LLaMa)\n  - [Optimum, acceleration of 🤗 Transformers and 🤗 Diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Foptimum)\n\n## \u003Ca name='AgenticAI'>\u003C\u002Fa>Agentic AI\n- Multi-Agent Systems\n  - [LangGraph, library for building stateful, multi-actor applications with LLMs](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraph)\n  - [AutoGen, library that enables the creation of applications using multiple agents that can converse with each other](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fautogen)\n  - [CrewAI, framework for orchestrating role-playing, autonomous AI agents](https:\u002F\u002Fgithub.com\u002Fjoaomdmoura\u002FcrewAI)\n  - [MetaGPT, multi-agent framework for software company simulation](https:\u002F\u002Fgithub.com\u002Fgeekan\u002FMetaGPT)\n  - [AgentScope, user-friendly multi-agent platform](https:\u002F\u002Fgithub.com\u002Fmodelscope\u002Fagentscope)\n  - [Swarm, educational framework for building and deploying multi-agent systems](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fswarm)\n- Autonomous Agents\n  - [AutoGPT, autonomous GPT-4 
experiment to make GPT-4 fully autonomous](https:\u002F\u002Fgithub.com\u002FSignificant-Gravitas\u002FAutoGPT)\n  - [BabyAGI, example of an AI-powered task management system](https:\u002F\u002Fgithub.com\u002Fyoheinakajima\u002Fbabyagi)\n  - [LangChain Agents, building agents with LangChain](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flangchain\u002Ftree\u002Fmaster\u002Flibs\u002Flangchain\u002Flangchain\u002Fagents)\n  - [ReAct: Reasoning and Acting with Language Models](https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Freact)\n  - [Voyager, open-ended embodied agent with large language models](https:\u002F\u002Fgithub.com\u002FMineDojo\u002FVoyager)\n- Agent Orchestration and Frameworks  \n  - [Semantic Kernel, lightweight SDK for integrating AI services](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fsemantic-kernel)\n  - [OpenAI Function Calling, tools for function calling with OpenAI models](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python)\n  - [LlamaIndex Agents, data agents with LlamaIndex](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index\u002Ftree\u002Fmain\u002Fllama-index-core\u002Fllama_index\u002Fcore\u002Fagent)\n  - [Haystack Agents, building search and QA agents](https:\u002F\u002Fgithub.com\u002Fdeepset-ai\u002Fhaystack)\n  - [DSPy, framework for algorithmically optimizing LM prompts and weights](https:\u002F\u002Fgithub.com\u002Fstanfordnlp\u002Fdspy)\n- Planning and Reasoning\n  - [Tree of Thoughts, deliberate problem solving with large language models](https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm)\n  - [ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models](https:\u002F\u002Fgithub.com\u002Fbillxbf\u002FReWOO)\n  - [Plan-and-Solve Prompting](https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FPlan-and-Solve-Prompting)\n- Memory and Learning\n  - [MemGPT, creating LLM agents with long-term 
memory](https:\u002F\u002Fgithub.com\u002Fcpacker\u002FMemGPT)\n  - [Zep, fast, scalable building blocks for production LLM apps](https:\u002F\u002Fgithub.com\u002Fgetzep\u002Fzep)\n\n## \u003Ca name='GuardrailsandAISafety'>\u003C\u002Fa>Guardrails and AI Safety\n- Content Filtering and Moderation\n  - [Guardrails AI, framework for building reliable AI applications](https:\u002F\u002Fgithub.com\u002Fguardrails-ai\u002Fguardrails)\n  - [NeMo Guardrails, toolkit for building trustworthy, safe and secure LLM applications](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNeMo-Guardrails)\n  - [OpenAI Moderation API Tools](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fmoderation-api-release)\n  - [Detoxify, toxic comment classification using transformer models](https:\u002F\u002Fgithub.com\u002Funitaryai\u002Fdetoxify)\n  - [Perspective API PyTorch Implementation, toxicity detection](https:\u002F\u002Fgithub.com\u002Fconversationai\u002Fperspectiveapi)\n- Prompt Injection Defense\n  - [Prompt Injection Detector, detecting prompt injection attacks](https:\u002F\u002Fgithub.com\u002Fprotectai\u002Frebuff)\n  - [LLM Guard, security toolkit for LLM interactions](https:\u002F\u002Fgithub.com\u002Fprotectai\u002Fllm-guard)\n  - [Garak, LLM vulnerability scanner](https:\u002F\u002Fgithub.com\u002Fleondz\u002Fgarak)\n- Bias Detection and Mitigation\n  - [FairLearn, toolkit for assessing and improving fairness](https:\u002F\u002Fgithub.com\u002Ffairlearn\u002Ffairlearn)\n  - [AIF360, comprehensive set of fairness metrics and bias mitigation algorithms](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360)\n  - [What-If Tool, tool for analyzing and understanding ML models](https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fwhat-if-tool)\n- Privacy and Security\n  - [Opacus, library for training PyTorch models with differential privacy](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fopacus)\n  - [PySyft, secure and private Deep Learning 
framework](https:\u002F\u002Fgithub.com\u002FOpenMined\u002FPySyft)\n  - [CrypTen, framework for Privacy Preserving Machine Learning](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FCrypTen)\n  - [Adversarial Robustness Toolbox, library for adversarial attacks and defenses](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002Fadversarial-robustness-toolbox)\n- Model Interpretability and Explainability\n  - [LIME, explaining the predictions of machine learning classifiers](https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Flime)\n  - [SHAP, unified approach to explain the output of machine learning models](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap)\n  - [InterpretML, interpret and understand machine learning models](https:\u002F\u002Fgithub.com\u002Finterpretml\u002Finterpret)\n  - [Alibi, algorithms for explaining machine learning models](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi)\n- Safety Evaluation and Testing\n  - [AI Safety Gym, environments and tools for AI safety research](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fsafety-gym)\n  - [Anthropic's Constitutional AI implementations](https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fconstitutional-ai)\n  - [HarmBench, standardized evaluation framework for automated red teaming](https:\u002F\u002Fgithub.com\u002Fcenterforaisafety\u002FHarmBench)\n\n## \u003Ca name='TabularData'>\u003C\u002Fa>Tabular Data\n- [PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning](https:\u002F\u002Fgithub.com\u002Fpyg-team\u002Fpytorch-frame)\n- [PyTorch Tabular, standard framework for modelling Deep Learning Models for tabular data](https:\u002F\u002Fgithub.com\u002Fmanujosephv\u002Fpytorch_tabular)\n- [Tab Transformer](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Ftab-transformer-pytorch)\n- [PyTorch-TabNet: Attentive Interpretable Tabular Learning](https:\u002F\u002Fgithub.com\u002Fdreamquark-ai\u002Ftabnet)\n- [carefree-learn: A minimal Automatic Machine Learning (AutoML) solution 
for tabular datasets based on PyTorch](https:\u002F\u002Fgithub.com\u002Fcarefree0910\u002Fcarefree-learn)\n\n## \u003Ca name='Visualization'>\u003C\u002Fa>Visualization\n- [Loss Visualization](https:\u002F\u002Fgithub.com\u002Ftomgoldstein\u002Floss-landscape)\n- [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam)\n- [Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [SmoothGrad: removing noise by adding noise](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [DeepDream: dream-like hallucinogenic visuals](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fneural-dream)\n- [FlashTorch: Visualization toolkit for neural networks in PyTorch](https:\u002F\u002Fgithub.com\u002FMisaOgura\u002Fflashtorch)\n- [Lucent: Lucid adapted for PyTorch](https:\u002F\u002Fgithub.com\u002Fgreentfrapp\u002Flucent)\n- [DreamCreator: Training GoogleNet models for DeepDream with custom datasets made simple](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fdream-creator)\n- [CNN Feature Map Visualisation](https:\u002F\u002Fgithub.com\u002Flewis-morris\u002Fmapextrackt)\n\n## \u003Ca name='Explainability'>\u003C\u002Fa>Explainability\n- [Neural-Backed Decision Trees](https:\u002F\u002Fgithub.com\u002Falvinwan\u002Fneural-backed-decision-trees)\n- [Efficient Covariance Estimation from Temporal Data](https:\u002F\u002Fgithub.com\u002Fhrayrhar\u002FT-CorEx)\n- [Hierarchical interpretations for neural network predictions](https:\u002F\u002Fgithub.com\u002Fcsinva\u002Fhierarchical-dnn-interpretations)\n- [Shap, a unified approach to explain the output of any machine learning model](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap)\n- [Visualizing PyTorch saved .pth deep learning models with 
netron](https:\u002F\u002Fgithub.com\u002Flutzroeder\u002Fnetron)\n- [Distilling a Neural Network Into a Soft Decision Tree](https:\u002F\u002Fgithub.com\u002Fkimhc6028\u002Fsoft-decision-tree)\n- [Captum, A unified model interpretability library for PyTorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fcaptum)\n\n## \u003Ca name='ObjectDetection'>\u003C\u002Fa>Object Detection\n- [MMDetection Object Detection Toolbox](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection)\n- [Mask R-CNN Benchmark: Faster R-CNN and Mask R-CNN in PyTorch 1.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmaskrcnn-benchmark)\n- [YOLO-World](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World)\n- [YOLOS](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FYOLOS)\n- [YOLOF](https:\u002F\u002Fgithub.com\u002Fmegvii-model\u002FYOLOF)\n- [YOLOX](https:\u002F\u002Fgithub.com\u002FMegvii-BaseDetection\u002FYOLOX)\n- [YOLOv12: Attention-Centric Real-Time Object Detectors](https:\u002F\u002Fgithub.com\u002Fsunsmarterjie\u002Fyolov12)\n- [YOLOv11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n- [YOLOv10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10)\n- [YOLOv9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9)\n- [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n- [Yolov7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)\n- [YOLOv6](https:\u002F\u002Fgithub.com\u002Fmeituan\u002FYOLOv6)\n- [Yolov5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5)\n- [Yolov4](https:\u002F\u002Fgithub.com\u002FAlexeyAB\u002Fdarknet)\n- [YOLOv3](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov3)\n- [YOLOv2: Real-Time Object Detection](https:\u002F\u002Fgithub.com\u002Flongcw\u002Fyolo2-pytorch)\n- [SSD: Single Shot MultiBox Detector](https:\u002F\u002Fgithub.com\u002Famdegroot\u002Fssd.pytorch)\n- [Detectron models for Object Detection](https:\u002F\u002Fgithub.com\u002Fignacio-rocco\u002Fdetectorch)\n- 
[Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks](https:\u002F\u002Fgithub.com\u002Fpotterhsu\u002FSVHNClassifier-PyTorch)\n- [Whale Detector](https:\u002F\u002Fgithub.com\u002FTarinZ\u002Fwhale-detector)\n- [Catalyst.Detection](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fdetection)\n\n## \u003Ca name='Long-TailedOut-of-DistributionRecognition'>\u003C\u002Fa>Long-Tailed \u002F Out-of-Distribution Recognition\n- [Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization](https:\u002F\u002Fgithub.com\u002Fkohpangwei\u002Fgroup_DRO)\n- [Invariant Risk Minimization](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FInvariantRiskMinimization)\n- [Training Confidence-Calibrated Classifier for Detecting Out-of-Distribution Samples](https:\u002F\u002Fgithub.com\u002Falinlab\u002FConfident_classifier)\n- [Deep Anomaly Detection with Outlier Exposure](https:\u002F\u002Fgithub.com\u002Fhendrycks\u002Foutlier-exposure)\n- [Large-Scale Long-Tailed Recognition in an Open World](https:\u002F\u002Fgithub.com\u002Fzhmiao\u002FOpenLongTailRecognition-OLTR)\n- [Principled Detection of Out-of-Distribution Examples in Neural Networks](https:\u002F\u002Fgithub.com\u002FShiyuLiang\u002Fodin-pytorch)\n- [Learning Confidence for Out-of-Distribution Detection in Neural Networks](https:\u002F\u002Fgithub.com\u002Fuoguelph-mlrg\u002Fconfidence_estimation)\n- [PyTorch Imbalanced Class Sampler](https:\u002F\u002Fgithub.com\u002Fufoym\u002Fimbalanced-dataset-sampler)\n\n## \u003Ca name='ActivationFunctions'>\u003C\u002Fa>Activation Functions\n- [Rational Activations - Learnable Rational Activation Functions](https:\u002F\u002Fgithub.com\u002Fml-research\u002Frational_activations)\n- [FreeGrad, PyTorch library for custom backward passes, straight-through estimators and gradient transforms.](https:\u002F\u002Fgithub.com\u002Ftbox98\u002FFreeGrad)\n\n## \u003Ca 
name='Energy-BasedLearning'>\u003C\u002Fa>Energy-Based Learning\n- [EBGAN, Energy-Based GANs](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FPyTorch-GAN\u002Fblob\u002Fmaster\u002Fimplementations\u002Febgan\u002Febgan.py)\n- [Maximum Entropy Generators for Energy-based Models](https:\u002F\u002Fgithub.com\u002Fritheshkumar95\u002Fenergy_based_generative_models)\n\n## \u003Ca name='MissingData'>\u003C\u002Fa>Missing Data\n- [BRITS: Bidirectional Recurrent Imputation for Time Series](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7911-brits-bidirectional-recurrent-imputation-for-time-series)\n\n## \u003Ca name='ArchitectureSearch'>\u003C\u002Fa>Architecture Search\n- [EfficientNetV2](https:\u002F\u002Fgithub.com\u002Flukemelas\u002FEfficientNet-PyTorch)\n- [DenseNAS](https:\u002F\u002Fgithub.com\u002FJaminFong\u002FDenseNAS)\n- [DARTS: Differentiable Architecture Search](https:\u002F\u002Fgithub.com\u002Fquark0\u002Fdarts)\n- [Efficient Neural Architecture Search (ENAS)](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FENAS-pytorch)\n- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https:\u002F\u002Fgithub.com\u002Fzsef123\u002FEfficientNets-PyTorch)\n\n## \u003Ca name='ContinualLearning'>\u003C\u002Fa>Continual Learning\n- [Renate, Automatic Retraining of Neural Networks](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Frenate)\n\n## \u003Ca name='Optimization'>\u003C\u002Fa>Optimization\n- [AccSGD, AdaBound, AdaMod, DiffGrad, Lamb, NovoGrad, RAdam, SGDW, Yogi and more](https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer)\n- [Lookahead Optimizer: k steps forward, 1 step back](https:\u002F\u002Fgithub.com\u002Falphadl\u002Flookahead.pytorch)\n- [RAdam, On the Variance of the Adaptive Learning Rate and Beyond](https:\u002F\u002Fgithub.com\u002FLiyuanLucasLiu\u002FRAdam)\n- [Over9000, Comparison of RAdam, Lookahead, Novograd, and combinations](https:\u002F\u002Fgithub.com\u002Fmgrankin\u002Fover9000)\n- [AdaBound, 
Train As Fast as Adam As Good as SGD](https:\u002F\u002Fgithub.com\u002FLuolc\u002FAdaBound)\n- [Riemannian Adaptive Optimization Methods](https:\u002F\u002Fgithub.com\u002Fferrine\u002Fgeoopt)\n- [L-BFGS](https:\u002F\u002Fgithub.com\u002Fhjmshi\u002FPyTorch-LBFGS)\n- [OptNet: Differentiable Optimization as a Layer in Neural Networks](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Foptnet)\n- [Learning to learn by gradient descent by gradient descent](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-meta-optimizer)\n- [Surrogate Gradient Learning in Spiking Neural Networks](https:\u002F\u002Fgithub.com\u002Ffzenke\u002Fspytorch)\n- [TorchOpt: An Efficient Library for Differentiable Optimization](https:\u002F\u002Fgithub.com\u002Fmetaopt\u002Ftorchopt)\n- [ph-training: Automatic Training with Persistent Homology](https:\u002F\u002Fgithub.com\u002Fneed-singularity\u002Fph-training) - Uses topological data analysis (H0 persistence) to predict difficulty, find an optimal LR, and detect overfitting in real time (r=0.998).\n\n## \u003Ca name='Quantization'>\u003C\u002Fa>Quantization\n- [Additive Power-of-Two Quantization: An Efficient Non-uniform Discretization For Neural Networks](https:\u002F\u002Fgithub.com\u002Fyhhhli\u002FAPoT_Quantization)\n\n## \u003Ca name='QuantumMachineLearning'>\u003C\u002Fa>Quantum Machine Learning\n- [Tor10, generic tensor-network library for quantum simulation in PyTorch](https:\u002F\u002Fgithub.com\u002Fkaihsin\u002FTor10)\n- [PennyLane, cross-platform Python library for quantum machine learning with PyTorch interface](https:\u002F\u002Fgithub.com\u002FXanaduAI\u002Fpennylane)\n\n## \u003Ca name='NeuralNetworkCompression'>\u003C\u002Fa>Neural Network Compression\n- [Bayesian Compression for Deep Learning](https:\u002F\u002Fgithub.com\u002FKarenUllrich\u002FTutorial_BayesianCompressionForDL)\n- [Neural Network Distiller by Intel AI Lab: a Python package for neural network compression 
research](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fdistiller)\n- [Learning Sparse Neural Networks through L0 regularization](https:\u002F\u002Fgithub.com\u002FAMLab-Amsterdam\u002FL0_regularization)\n- [Energy-constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking](https:\u002F\u002Fgithub.com\u002Fhyang1990\u002Fmodel_based_energy_constrained_compression)\n- [EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis](https:\u002F\u002Fgithub.com\u002Falecwangcq\u002FEigenDamage-Pytorch)\n- [Pruning Convolutional Neural Networks for Resource Efficient Inference](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-pruning)\n- [Pruning neural networks: is it time to nip it in the bud? (showing reduced networks work better)](https:\u002F\u002Fgithub.com\u002FBayesWatch\u002Fpytorch-prunes)\n\n## \u003Ca name='FacialActionandPoseRecognition'>\u003C\u002Fa>Facial, Action and Pose Recognition\n- [Facenet: Pretrained Pytorch face detection and recognition models](https:\u002F\u002Fgithub.com\u002Ftimesler\u002Ffacenet-pytorch)\n- [DGC-Net: Dense Geometric Correspondence Network](https:\u002F\u002Fgithub.com\u002FAaltoVision\u002FDGC-Net)\n- [High performance facial recognition library on PyTorch](https:\u002F\u002Fgithub.com\u002FZhaoJ9014\u002Fface.evoLVe.PyTorch)\n- [FaceBoxes, a CPU real-time face detector with high accuracy](https:\u002F\u002Fgithub.com\u002Fzisianw\u002FFaceBoxes.PyTorch)\n- [How far are we from solving the 2D & 3D Face Alignment problem? 
(and a dataset of 230,000 3D facial landmarks)](https:\u002F\u002Fgithub.com\u002F1adrianb\u002Fface-alignment)\n- [Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition](https:\u002F\u002Fgithub.com\u002Fkenshohara\u002F3D-ResNets-PyTorch)\n- [PyTorch Realtime Multi-Person Pose Estimation](https:\u002F\u002Fgithub.com\u002FDavexPro\u002Fpytorch-pose-estimation)\n- [SphereFace: Deep Hypersphere Embedding for Face Recognition](https:\u002F\u002Fgithub.com\u002Fclcarwin\u002Fsphereface_pytorch)\n- [GANimation: Anatomically-aware Facial Animation from a Single Image](https:\u002F\u002Fgithub.com\u002Falbertpumarola\u002FGANimation)\n- [Shufflenet V2 by Face++ with better results than the paper](https:\u002F\u002Fgithub.com\u002Fericsun99\u002FShufflenet-v2-Pytorch)\n- [Towards 3D Human Pose Estimation in the Wild: a Weakly-supervised Approach](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002Fpytorch-pose-hg-3d)\n- [Unsupervised Learning of Depth and Ego-Motion from Video](https:\u002F\u002Fgithub.com\u002FClementPinard\u002FSfmLearner-Pytorch)\n- [FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fflownet2-pytorch)\n- [FlowNet: Learning Optical Flow with Convolutional Networks](https:\u002F\u002Fgithub.com\u002FClementPinard\u002FFlowNetPytorch)\n- [Optical Flow Estimation using a Spatial Pyramid Network](https:\u002F\u002Fgithub.com\u002Fsniklaus\u002Fpytorch-spynet)\n- [OpenFace in PyTorch](https:\u002F\u002Fgithub.com\u002Fthnkim\u002FOpenFacePytorch)\n- [Deep Face Recognition in PyTorch](https:\u002F\u002Fgithub.com\u002Fgrib0ed0v\u002Fface_recognition.pytorch)\n\n## \u003Ca name='Superresolution'>\u003C\u002Fa>Super Resolution\n- [Enhanced Deep Residual Networks for Single Image Super-Resolution](https:\u002F\u002Fgithub.com\u002Fthstkdgus35\u002FEDSR-PyTorch)\n- [Superresolution using an efficient sub-pixel convolutional neural 
network](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples\u002Ftree\u002Fmaster\u002Fsuper_resolution)\n- [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https:\u002F\u002Fgithub.com\u002Fbengxy\u002FFastNeuralStyle)\n\n## \u003Ca name='SynthetesizingViews'>\u003C\u002Fa>Synthesizing Views\n- [NeRF, Neural Radiance Fields, Synthesizing Novel Views of Complex Scenes](https:\u002F\u002Fgithub.com\u002Fyenchenlin\u002Fnerf-pytorch)\n\n## \u003Ca name='Voice'>\u003C\u002Fa>Voice\n- [Google AI VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking](https:\u002F\u002Fgithub.com\u002Fmindslab-ai\u002Fvoicefilter)\n\n## \u003Ca name='Medical'>\u003C\u002Fa>Medical\n- [Medical Zoo, 3D multi-modal medical image segmentation library in PyTorch](https:\u002F\u002Fgithub.com\u002Fblack0017\u002FMedicalZooPytorch)\n- [U-Net for FLAIR Abnormality Segmentation in Brain MRI](https:\u002F\u002Fgithub.com\u002Fmateuszbuda\u002Fbrain-segmentation-pytorch)\n- [Genomic Classification via ULMFiT](https:\u002F\u002Fgithub.com\u002Fkheyer\u002FGenomic-ULMFiT)\n- [Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening](https:\u002F\u002Fgithub.com\u002Fnyukat\u002Fbreast_cancer_classifier)\n- [Delira, lightweight framework for medical imaging prototyping](https:\u002F\u002Fgithub.com\u002Fjustusschock\u002Fdelira)\n- [V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation](https:\u002F\u002Fgithub.com\u002Fmattmacy\u002Fvnet.pytorch)\n- [Medical Torch, medical imaging framework for PyTorch](https:\u002F\u002Fgithub.com\u002Fperone\u002Fmedicaltorch)\n- [TorchXRayVision - A library for chest X-ray datasets and models. 
Including pre-trained models.](https:\u002F\u002Fgithub.com\u002Fmlmed\u002Ftorchxrayvision)\n\n## \u003Ca name='DSegmentationClassificationandRegression'>\u003C\u002Fa>3D Segmentation, Classification and Regression\n- [Kaolin, Library for Accelerating 3D Deep Learning Research](https:\u002F\u002Fgithub.com\u002FNVIDIAGameWorks\u002Fkaolin)\n- [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fpointnet.pytorch)\n- [3D segmentation with MONAI and Catalyst](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F15wJus5WZPYxTYE51yBhIBNhk9Tj4k3BT?usp=sharing)\n\n## \u003Ca name='VideoRecognition'>\u003C\u002Fa>Video Recognition\n- [Dancing to Music](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FDancing2Music)\n- [Devil Is in the Edges: Learning Semantic Boundaries from Noisy Annotations](https:\u002F\u002Fgithub.com\u002Fnv-tlabs\u002FSTEAL)\n- [Deep Video Analytics](https:\u002F\u002Fgithub.com\u002FAKSHAYUBHAT\u002FDeepVideoAnalytics)\n- [PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs](https:\u002F\u002Fgithub.com\u002Fthuml\u002Fpredrnn-pytorch)\n\n## \u003Ca name='RecurrentNeuralNetworksRNNs'>\u003C\u002Fa>Recurrent Neural Networks (RNNs)\n- [SRU: training RNNs as fast as CNNs](https:\u002F\u002Fgithub.com\u002Fasappresearch\u002Fsru)\n- [Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks](https:\u002F\u002Fgithub.com\u002Fyikangshen\u002FOrdered-Neurons)\n- [Averaged Stochastic Gradient Descent with Weight Dropped LSTM](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fawd-lstm-lm)\n- [Training RNNs as Fast as CNNs](https:\u002F\u002Fgithub.com\u002Ftaolei87\u002Fsru)\n- [Quasi-Recurrent Neural Network (QRNN)](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fpytorch-qrnn)\n- [ReSeg: A Recurrent Neural Network-based Model for Semantic Segmentation](https:\u002F\u002Fgithub.com\u002FWizaron\u002Freseg-pytorch)\n- [A 
Recurrent Latent Variable Model for Sequential Data (VRNN)](https:\u002F\u002Fgithub.com\u002Femited\u002FVariationalRecurrentNeuralNetwork)\n- [Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks](https:\u002F\u002Fgithub.com\u002Fdasguptar\u002Ftreelstm.pytorch)\n- [Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling](https:\u002F\u002Fgithub.com\u002FDSKSD\u002FRNN-for-Joint-NLU)\n- [Attentive Recurrent Comparators](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Farc-pytorch)\n- [Collection of Sequence to Sequence Models with PyTorch](https:\u002F\u002Fgithub.com\u002FMaximumEntropy\u002FSeq2Seq-PyTorch)\n\t1. Vanilla Sequence to Sequence models\n\t2. Attention based Sequence to Sequence models\n\t3. Faster attention mechanisms using dot products between the final encoder and decoder hidden states\n\n## \u003Ca name='ConvolutionalNeuralNetworksCNNs'>\u003C\u002Fa>Convolutional Neural Networks (CNNs)\n- [LegoNet: Efficient Convolutional Neural Networks with Lego Filters](https:\u002F\u002Fgithub.com\u002Fhuawei-noah\u002FLegoNet)\n- [MeshCNN, a convolutional neural network designed specifically for triangular meshes](https:\u002F\u002Fgithub.com\u002Franahanocka\u002FMeshCNN)\n- [Octave Convolution](https:\u002F\u002Fgithub.com\u002Fd-li14\u002Foctconv.pytorch)\n- [PyTorch Image Models, ResNet\u002FResNeXT, DPN, MobileNet-V3\u002FV2\u002FV1, MNASNet, Single-Path NAS, FBNet](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-image-models)\n- [Deep Neural Networks with Box Convolutions](https:\u002F\u002Fgithub.com\u002Fshrubb\u002Fbox-convolutions)\n- [Invertible Residual Networks](https:\u002F\u002Fgithub.com\u002Fjarrelscy\u002FiResnet)\n- [Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks ](https:\u002F\u002Fgithub.com\u002Fxternalz\u002FSDPoint)\n- [Faster Faster R-CNN 
Implementation](https:\u002F\u002Fgithub.com\u002Fjwyang\u002Ffaster-rcnn.pytorch)\n\t- [Faster R-CNN Another Implementation](https:\u002F\u002Fgithub.com\u002Flongcw\u002Ffaster_rcnn_pytorch)\n- [Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fattention-transfer)\n- [Wide ResNet model in PyTorch](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Ffunctional-zoo)\n\t- [DiracNets: Training Very Deep Neural Networks Without Skip-Connections](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fdiracnets)\n- [An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition](https:\u002F\u002Fgithub.com\u002Fbgshih\u002Fcrnn)\n- [Efficient Densenet](https:\u002F\u002Fgithub.com\u002Fgpleiss\u002Fefficient_densenet_pytorch)\n- [Video Frame Interpolation via Adaptive Separable Convolution](https:\u002F\u002Fgithub.com\u002Fsniklaus\u002Fpytorch-sepconv)\n- [Learning local feature descriptors with triplets and shallow convolutional neural networks](https:\u002F\u002Fgithub.com\u002Fedgarriba\u002Fexamples\u002Ftree\u002Fmaster\u002Ftriplet)\n- [Densely Connected Convolutional Networks](https:\u002F\u002Fgithub.com\u002Fbamos\u002Fdensenet.pytorch)\n- [Very Deep Convolutional Networks for Large-Scale Image Recognition](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fpytorch-vgg)\n- [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and \\\u003C0.5MB model size](https:\u002F\u002Fgithub.com\u002Fgsp-27\u002Fpytorch_Squeezenet)\n- [Deep Residual Learning for Image Recognition](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Ffunctional-zoo)\n- [Training Wide ResNets for CIFAR-10 and CIFAR-100 in PyTorch](https:\u002F\u002Fgithub.com\u002Fxternalz\u002FWideResNet-pytorch)\n- [Deformable Convolutional Network](https:\u002F\u002Fgithub.com\u002Foeway\u002Fpytorch-deform-conv)\n- [Convolutional 
Neural Fabrics](https:\u002F\u002Fgithub.com\u002Fvabh\u002Fconvolutional-neural-fabrics)\n- [Deformable Convolutional Networks in PyTorch](https:\u002F\u002Fgithub.com\u002F1zb\u002Fdeformable-convolution-pytorch)\n- [Dilated ResNet combination with Dilated Convolutions](https:\u002F\u002Fgithub.com\u002Ffyu\u002Fdrn)\n- [Striving for Simplicity: The All Convolutional Net](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [Convolutional LSTM Network](https:\u002F\u002Fgithub.com\u002Fautoman000\u002FConvolution_LSTM_pytorch)\n- [Big collection of pretrained classification models](https:\u002F\u002Fgithub.com\u002Fosmr\u002Fimgclsmob)\n- [PyTorch Image Classification with Kaggle Dogs vs Cats Dataset](https:\u002F\u002Fgithub.com\u002Frdcolema\u002Fpytorch-image-classification)\n- [CIFAR-10 on Pytorch with VGG, ResNet and DenseNet](https:\u002F\u002Fgithub.com\u002Fkuangliu\u002Fpytorch-cifar)\n- [Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)](https:\u002F\u002Fgithub.com\u002Faaron-xichen\u002Fpytorch-playground)\n- [NVIDIA\u002Funsupervised-video-interpolation](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Funsupervised-video-interpolation)\n\n## \u003Ca name='Segmentation'>\u003C\u002Fa>Segmentation\n- [Detectron2 by FAIR](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2)\n- [Pixel-wise Segmentation on VOC2012 Dataset using PyTorch](https:\u002F\u002Fgithub.com\u002Fbodokaiser\u002Fpiwise)\n- [Pywick - High-level batteries-included neural network training library for Pytorch](https:\u002F\u002Fgithub.com\u002Fachaiah\u002Fpywick)\n- [Improving Semantic Segmentation via Video Propagation and Label Relaxation](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fsemantic-segmentation)\n- [Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation](https:\u002F\u002Fgithub.com\u002FJianqiangWan\u002FSuper-BPD)\n- 
[Catalyst.Segmentation](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fsegmentation)\n- [Segmentation models with pretrained backbones](https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models.pytorch)\n\n## \u003Ca name='GeometricDeepLearning:GraphIrregularStructures'>\u003C\u002Fa>Geometric Deep Learning: Graph & Irregular Structures\n- [PyTorch Geometric, Deep Learning Extension](https:\u002F\u002Fgithub.com\u002Frusty1s\u002Fpytorch_geometric)\n- [PyTorch Geometric Temporal: A Temporal Extension Library for PyTorch Geometric](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fpytorch_geometric_temporal)\n- [PyTorch Geometric Signed Directed: A Signed & Directed Extension Library for PyTorch Geometric](https:\u002F\u002Fgithub.com\u002FSherylHYX\u002Fpytorch_geometric_signed_directed)\n- [ChemicalX: A PyTorch Based Deep Learning Library for Drug Pair Scoring](https:\u002F\u002Fgithub.com\u002FAstraZeneca\u002Fchemicalx)\n- [Self-Attention Graph Pooling](https:\u002F\u002Fgithub.com\u002Finyeoplee77\u002FSAGPool)\n- [Position-aware Graph Neural Networks](https:\u002F\u002Fgithub.com\u002FJiaxuanYou\u002FP-GNN)\n- [Signed Graph Convolutional Neural Network](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSGCN)\n- [Graph U-Nets](https:\u002F\u002Fgithub.com\u002FHongyangGao\u002Fgunet)\n- [Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FClusterGCN)\n- [MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FMixHop-and-N-GCN)\n- [Semi-Supervised Graph Classification: A Hierarchical Graph Perspective](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSEAL-CI)\n- [PyTorch BigGraph by FAIR for Generating Embeddings From Large-scale Graph Data](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FPyTorch-BigGraph)\n- [Capsule 
Graph Neural Network](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FCapsGNN)\n- [Splitter: Learning Node Representations that Capture Multiple Social Contexts](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSplitter)\n- [A Higher-Order Graph Convolutional Layer](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FMixHop-and-N-GCN)\n- [Predict then Propagate: Graph Neural Networks meet Personalized PageRank](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FAPPNP)\n- [Lorentz Embeddings: Learn Continuous Hierarchies in Hyperbolic Space](https:\u002F\u002Fgithub.com\u002FtheSage21\u002Florentz-embeddings)\n- [Graph Wavelet Neural Network](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FGraphWaveletNeuralNetwork)\n- [Watch Your Step: Learning Node Embeddings via Graph Attention](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FAttentionWalk)\n- [Signed Graph Convolutional Network](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSGCN)\n- [Graph Classification Using Structural Attention](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FGAM)\n- [SimGNN: A Neural Network Approach to Fast Graph Similarity Computation](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSimGNN)\n- [SINE: Scalable Incomplete Network Embedding](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSINE)\n- [HypER: Hypernetwork Knowledge Graph Embeddings](https:\u002F\u002Fgithub.com\u002Fibalazevic\u002FHypER)\n- [TuckER: Tensor Factorization for Knowledge Graph Completion](https:\u002F\u002Fgithub.com\u002Fibalazevic\u002FTuckER)\n- [PyKEEN: A Python library for learning and evaluating knowledge graph embeddings](https:\u002F\u002Fgithub.com\u002Fpykeen\u002Fpykeen\u002F)\n- [Pathfinder Discovery Networks for Neural Message Passing](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FPDN)\n- [SSSNET: Semi-Supervised Signed Network 
Clustering](https:\u002F\u002Fgithub.com\u002FSherylHYX\u002FSSSNET_Signed_Clustering)\n- [MagNet: A Neural Network for Directed Graphs](https:\u002F\u002Fgithub.com\u002Fmatthew-hirn\u002Fmagnet)\n- [PyTorch Geopooling: Geospatial Pooling Modules for Neural Networks in PyTorch](https:\u002F\u002Fgithub.com\u002Fybubnov\u002Ftorch_geopooling)\n\n## \u003Ca name='Sorting'>\u003C\u002Fa>Sorting\n- [Stochastic Optimization of Sorting Networks via Continuous Relaxations](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fneuralsort)\n\n## \u003Ca name='OrdinaryDifferentialEquationsNetworks'>\u003C\u002Fa>Ordinary Differential Equations Networks\n- [Latent ODEs for Irregularly-Sampled Time Series](https:\u002F\u002Fgithub.com\u002FYuliaRubanova\u002Flatent_ode)\n- [GRU-ODE-Bayes: continuous modelling of sporadically-observed time series](https:\u002F\u002Fgithub.com\u002Fedebrouwer\u002Fgru_ode_bayes)\n\n## \u003Ca name='Multi-taskLearning'>\u003C\u002Fa>Multi-task Learning\n- [Hierarchical Multi-Task Learning Model](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fhmtl)\n- [Task-based End-to-end Model Learning](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Fe2e-model-learning)\n- [torchMTL: A lightweight module for Multi-Task Learning in pytorch](https:\u002F\u002Fgithub.com\u002Fchrisby\u002FtorchMTL)\n\n## \u003Ca name='GANsVAEsandAEs'>\u003C\u002Fa>GANs, VAEs, and AEs\n- [BigGAN: Large Scale GAN Training for High Fidelity Natural Image Synthesis](https:\u002F\u002Fgithub.com\u002Fajbrock\u002FBigGAN-PyTorch)\n- [High Fidelity Performance Metrics for Generative Models in PyTorch](https:\u002F\u002Fgithub.com\u002Ftoshas\u002Ftorch-fidelity)\n- [Mimicry, PyTorch Library for Reproducibility of GAN Research](https:\u002F\u002Fgithub.com\u002Fkwotsin\u002Fmimicry)\n- [Clean Readable CycleGAN](https:\u002F\u002Fgithub.com\u002Faitorzip\u002FPyTorch-CycleGAN)\n- [StarGAN](https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fstargan)\n- [Block Neural Autoregressive 
Flow](https:\u002F\u002Fgithub.com\u002Fnicola-decao\u002FBNAF)\n- [High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fpix2pixHD)\n- [A Style-Based Generator Architecture for Generative Adversarial Networks](https:\u002F\u002Fgithub.com\u002Frosinality\u002Fstyle-based-gan-pytorch)\n- [GANDissect, PyTorch Tool for Visualizing Neurons in GANs](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002Fgandissect)\n- [Learning deep representations by mutual information estimation and maximization](https:\u002F\u002Fgithub.com\u002FDuaneNielsen\u002FDeepInfomaxPytorch)\n- [Variational Laplace Autoencoders](https:\u002F\u002Fgithub.com\u002Fyookoon\u002FVLAE)\n- [VeGANS, library for easily training GANs](https:\u002F\u002Fgithub.com\u002Funit8co\u002Fvegans)\n- [Progressive Growing of GANs for Improved Quality, Stability, and Variation](https:\u002F\u002Fgithub.com\u002Fgithub-pengge\u002FPyTorch-progressive_growing_of_gans)\n- [Conditional GAN](https:\u002F\u002Fgithub.com\u002Fkmualim\u002FCGAN-Pytorch\u002F)\n- [Wasserstein GAN](https:\u002F\u002Fgithub.com\u002Fmartinarjovsky\u002FWassersteinGAN)\n- [Adversarial Generator-Encoder Network](https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002FAGE)\n- [Image-to-Image Translation with Conditional Adversarial Networks](https:\u002F\u002Fgithub.com\u002Fjunyanz\u002Fpytorch-CycleGAN-and-pix2pix)\n- [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https:\u002F\u002Fgithub.com\u002Fjunyanz\u002Fpytorch-CycleGAN-and-pix2pix)\n- [On the Effects of Batch and Weight Normalization in Generative Adversarial Networks](https:\u002F\u002Fgithub.com\u002Fstormraiser\u002FGAN-weight-norm)\n- [Improved Training of Wasserstein GANs](https:\u002F\u002Fgithub.com\u002Fjalola\u002Fimproved-wgan-pytorch)\n- [Collection of Generative Models with PyTorch](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n\t- Generative 
Adversarial Nets (GAN)\n\t\t1. [Vanilla GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661)\n\t\t2. [Conditional GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1411.1784)\n\t\t3. [InfoGAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03657)\n\t\t4. [Wasserstein GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.07875)\n\t\t5. [Mode Regularized GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.02136)\n\t- Variational Autoencoder (VAE)\n\t\t1. [Vanilla VAE](https:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6114)\n\t\t2. [Conditional VAE](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.5298)\n\t\t3. [Denoising VAE](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06406)\n\t\t4. [Adversarial Autoencoder](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644)\n\t\t5. [Adversarial Variational Bayes](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.04722)\n- [Improved Training of Wasserstein GANs](https:\u002F\u002Fgithub.com\u002Fcaogang\u002Fwgan-gp)\n- [CycleGAN and Semi-Supervised GAN](https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fmnist-svhn-transfer)\n- [Improving Variational Auto-Encoders using Householder Flow and using convex combination linear Inverse Autoregressive Flow](https:\u002F\u002Fgithub.com\u002Fjmtomczak\u002Fvae_vpflows)\n- [PyTorch GAN Collection](https:\u002F\u002Fgithub.com\u002Fznxlwm\u002Fpytorch-generative-model-collections)\n- [Generative Adversarial Networks, focusing on anime face drawing](https:\u002F\u002Fgithub.com\u002Fjayleicn\u002FanimeGAN)\n- [Simple Generative Adversarial Networks](https:\u002F\u002Fgithub.com\u002Fmailmahee\u002Fpytorch-generative-adversarial-networks)\n- [Adversarial Auto-encoders](https:\u002F\u002Fgithub.com\u002Ffducau\u002FAAE_pytorch)\n- [torchgan: Framework for modelling Generative Adversarial Networks in Pytorch](https:\u002F\u002Fgithub.com\u002Ftorchgan\u002Ftorchgan)\n- [Evaluating Lossy Compression Rates of Deep Generative Models](https:\u002F\u002Fgithub.com\u002Fhuangsicong\u002Frate_distortion)\n- 
[Catalyst.GAN](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fgan)\n    1. [Vanilla GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661)\n    2. [Conditional GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1411.1784)\n    3. [Wasserstein GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.07875)\n    4. [Improved Training of Wasserstein GANs](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.00028)\n\n## \u003Ca name='UnsupervisedLearning'>\u003C\u002Fa>Unsupervised Learning\n- [Unsupervised Embedding Learning via Invariant and Spreading Instance Feature](https:\u002F\u002Fgithub.com\u002Fmangye16\u002FUnsupervised_Embedding_Learning)\n- [AND: Anchor Neighbourhood Discovery](https:\u002F\u002Fgithub.com\u002FRaymond-sci\u002FAND)\n\n## \u003Ca name='AdversarialAttacks'>\u003C\u002Fa>Adversarial Attacks\n- [Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-adversarial-attacks)\n- [Explaining and Harnessing Adversarial Examples](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-adversarial-attacks)\n- [Detecting Adversarial Examples via Neural Fingerprinting](https:\u002F\u002Fgithub.com\u002FStephanZheng\u002Fneural-fingerprinting)\n- [AdverTorch - A Toolbox for Adversarial Robustness Research](https:\u002F\u002Fgithub.com\u002FBorealisAI\u002Fadvertorch)\n\n## \u003Ca name='StyleTransfer'>\u003C\u002Fa>Style Transfer\n- [Pystiche: Framework for Neural Style Transfer](https:\u002F\u002Fgithub.com\u002Fpystiche\u002Fpystiche)\n- [A Neural Algorithm of Artistic Style](https:\u002F\u002Fgithub.com\u002Falexis-jacq\u002FPytorch-Tutorials)\n- [Multi-style Generative Network for Real-time Transfer](https:\u002F\u002Fgithub.com\u002Fzhanghang1989\u002FPyTorch-Style-Transfer)\n- [DeOldify, Coloring Old Images](https:\u002F\u002Fgithub.com\u002Fjantic\u002FDeOldify)\n- [Neural Style 
Transfer](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fneural-style-pt)\n- [Fast Neural Style Transfer](https:\u002F\u002Fgithub.com\u002Fdarkstar112358\u002Ffast-neural-style)\n- [Draw like Bob Ross](https:\u002F\u002Fgithub.com\u002Fkendricktan\u002Fdrawlikebobross)\n\n## \u003Ca name='ImageCaptioning'>\u003C\u002Fa>Image Captioning\n- [CLIP (Contrastive Language-Image Pre-Training)](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP)\n- [Neuraltalk 2, Image Captioning Model, in PyTorch](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002Fneuraltalk2.pytorch)\n- [Generate captions from an image with PyTorch](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FcaptionGen)\n- [DenseCap: Fully Convolutional Localization Networks for Dense Captioning](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fdensecap)\n\n## \u003Ca name='Transformers'>\u003C\u002Fa>Transformers\n- [Attention is all you need](https:\u002F\u002Fgithub.com\u002Fjadore801120\u002Fattention-is-all-you-need-pytorch)\n- [Spatial Transformer Networks](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fstn.pytorch)\n\n## \u003Ca name='SimilarityNetworksandFunctions'>\u003C\u002Fa>Similarity Networks and Functions\n- [Conditional Similarity Networks](https:\u002F\u002Fgithub.com\u002Fandreasveit\u002Fconditional-similarity-networks)\n\n## \u003Ca name='Reasoning'>\u003C\u002Fa>Reasoning\n- [Inferring and Executing Programs for Visual Reasoning](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fclevr-iep)\n\n## \u003Ca name='GeneralNLP'>\u003C\u002Fa>General NLP\n- [nanoGPT, fastest repository for training\u002Ffinetuning medium-sized GPTs](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002FnanoGPT)\n- [minGPT, Re-implementation of GPT to be small, clean, interpretable and educational](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002FminGPT)\n- [Espresso, Module Neural Automatic Speech Recognition Toolkit](https:\u002F\u002Fgithub.com\u002Ffreewym\u002Fespresso)\n- [Label-aware Document Representation 
via Hybrid Attention for Extreme Multi-Label Text Classification](https:\u002F\u002Fgithub.com\u002FHX-idiot\u002FHybrid_Attention_XML)\n- [XLNet](https:\u002F\u002Fgithub.com\u002Fgraykode\u002Fxlnet-Pytorch)\n- [Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading](https:\u002F\u002Fgithub.com\u002Fqkaren\u002Fconverse_reading_cmr)\n- [Cross-lingual Language Model Pretraining](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FXLM)\n- [Libre Office Translate via PyTorch NMT](https:\u002F\u002Fgithub.com\u002Flernapparat\u002Flotranslate)\n- [BERT](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpytorch-pretrained-BERT)\n- [VSE++: Improved Visual-Semantic Embeddings](https:\u002F\u002Fgithub.com\u002Ffartashf\u002Fvsepp)\n- [A Structured Self-Attentive Sentence Embedding](https:\u002F\u002Fgithub.com\u002FExplorerFreda\u002FStructured-Self-Attentive-Sentence-Embedding)\n- [Neural Sequence labeling model](https:\u002F\u002Fgithub.com\u002Fjiesutd\u002FPyTorchSeqLabel)\n- [Skip-Thought Vectors](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Fskip-thoughts)\n- [Complete Suite for Training Seq2Seq Models in PyTorch](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002Fseq2seq.pytorch)\n- [MUSE: Multilingual Unsupervised and Supervised Embeddings](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FMUSE)\n- [TorchMoji: PyTorch Implementation of DeepMoji to understand Language used to Express Emotions](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FtorchMoji)\n\n## \u003Ca name='QuestionandAnswering'>\u003C\u002Fa>Question and Answering\n- [Visual Question Answering in Pytorch](https:\u002F\u002Fgithub.com\u002FCadene\u002Fvqa.pytorch)\n- [Reading Wikipedia to Answer Open-Domain Questions](https:\u002F\u002Ffacebookresearch\u002FDrQA)\n- [Deal or No Deal? 
End-to-End Learning for Negotiation Dialogues](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fend-to-end-negotiator)\n- [Interpretable Counting for Visual Question Answering](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Firlc-vqa)\n- [Open Source Chatbot with PyTorch](https:\u002F\u002Fgithub.com\u002Fjinfagang\u002Fpytorch_chatbot)\n\n## \u003Ca name='SpeechGenerationandRecognition'>\u003C\u002Fa>Speech Generation and Recognition\n- [PyTorch-Kaldi Speech Recognition Toolkit](https:\u002F\u002Fgithub.com\u002Fmravanelli\u002Fpytorch-kaldi)\n- [WaveGlow: A Flow-based Generative Network for Speech Synthesis](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fwaveglow)\n- [OpenNMT](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-py)\n- [Deep Speech 2: End-to-End Speech Recognition in English and Mandarin](https:\u002F\u002Fgithub.com\u002FSeanNaren\u002Fdeepspeech.pytorch)\n- [WeNet: Production First and Production Ready End-to-End Speech Recognition Toolkit](https:\u002F\u002Fgithub.com\u002Fmobvoi\u002Fwenet)\n\n## \u003Ca name='DocumentandTextClassification'>\u003C\u002Fa>Document and Text Classification\n- [Hierarchical Attention Network for Document Classification](https:\u002F\u002Fgithub.com\u002Fcedias\u002FHAN-pytorch)\n- [Hierarchical Attention Networks for Document Classification](https:\u002F\u002Fgithub.com\u002FEdGENetworks\u002Fattention-networks-for-classification)\n- [CNN Based Text Classification](https:\u002F\u002Fgithub.com\u002Fxiayandi\u002FPytorch_text_classification)\n\n## \u003Ca name='TextGeneration'>\u003C\u002Fa>Text Generation\n- [Pytorch Poetry Generation](https:\u002F\u002Fgithub.com\u002Fjhave\u002Fpytorch-poetry-generation)\n\n## \u003Ca name='TexttoImage'>\u003C\u002Fa>Text to Image\n- [Stable Diffusion](https:\u002F\u002Fgithub.com\u002FCompVis\u002Fstable-diffusion)\n- [Dall-E 2](https:\u002F\u002Fgithub.com\u002Flucidrains\u002FDALLE2-pytorch)\n- 
[Dall-E](https:\u002F\u002Fgithub.com\u002Flucidrains\u002FDALLE-pytorch)\n\n## \u003Ca name='Translation'>\u003C\u002Fa>Translation\n- [Open-source (MIT) Neural Machine Translation (NMT) System](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-py)\n\n## \u003Ca name='SentimentAnalysis'>\u003C\u002Fa>Sentiment Analysis\n- [Recurrent Neural Networks for Sentiment Analysis (Aspect-Based) on SemEval 2014](https:\u002F\u002Fgithub.com\u002Fvanzytay\u002Fpytorch_sentiment_rnn)\n- [Seq2Seq Intent Parsing](https:\u002F\u002Fgithub.com\u002Fspro\u002Fpytorch-seq2seq-intent-parsing)\n- [Finetuning BERT for Sentiment Analysis](https:\u002F\u002Fgithub.com\u002Fbarissayil\u002FSentimentAnalysis)\n\n## \u003Ca name='DeepReinforcementLearning'>\u003C\u002Fa>Deep Reinforcement Learning\n- [Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels](https:\u002F\u002Fgithub.com\u002Fdenisyarats\u002Fdrq)\n- [Exploration by Random Network Distillation](https:\u002F\u002Fgithub.com\u002Fopenai\u002Frandom-network-distillation)\n- [EGG: Emergence of lanGuage in Games, quickly implement multi-agent games with discrete channel communication](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FEGG)\n- [Temporal Difference VAE](https:\u002F\u002Fopenreview.net\u002Fpdf?id=S1x4ghC9tQ)\n- [High-performance Atari A3C Agent in 180 Lines PyTorch](https:\u002F\u002Fgithub.com\u002Fgreydanus\u002Fbaby-a3c)\n- [Learning when to communicate at scale in multiagent cooperative and competitive tasks](https:\u002F\u002Fgithub.com\u002FIC3Net\u002FIC3Net)\n- [Actor-Attention-Critic for Multi-Agent Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fshariqiqbal2810\u002FMAAC)\n- [PPO in PyTorch C++](https:\u002F\u002Fgithub.com\u002Fmhubii\u002Fppo_pytorch_cpp)\n- [Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback](https:\u002F\u002Fgithub.com\u002Fkhanhptnk\u002Fbandit-nmt)\n- [Asynchronous Methods for Deep 
Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-a3c)\n- [Continuous Deep Q-Learning with Model-based Acceleration](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-naf)\n- [Asynchronous Methods for Deep Reinforcement Learning for Atari 2600](https:\u002F\u002Fgithub.com\u002Fdgriff777\u002Frl_a3c_pytorch)\n- [Trust Region Policy Optimization](https:\u002F\u002Fgithub.com\u002Fmjacar\u002Fpytorch-trpo)\n- [Neural Combinatorial Optimization with Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fpemami4911\u002Fneural-combinatorial-rl-pytorch)\n- [Noisy Networks for Exploration](https:\u002F\u002Fgithub.com\u002FKaixhin\u002FNoisyNet-A3C)\n- [Distributed Proximal Policy Optimization](https:\u002F\u002Fgithub.com\u002Falexis-jacq\u002FPytorch-DPPO)\n- [Reinforcement learning models in ViZDoom environment with PyTorch](https:\u002F\u002Fgithub.com\u002Fakolishchak\u002Fdoom-net-pytorch)\n- [Reinforcement learning models using Gym and Pytorch](https:\u002F\u002Fgithub.com\u002Fjingweiz\u002Fpytorch-rl)\n- [SLM-Lab: Modular Deep Reinforcement Learning framework in PyTorch](https:\u002F\u002Fgithub.com\u002Fkengz\u002FSLM-Lab)\n- [Catalyst.RL](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst-rl)\n\n## \u003Ca name='DeepBayesianLearningandProbabilisticProgrammming'>\u003C\u002Fa>Deep Bayesian Learning and Probabilistic Programmming\n- [BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning](https:\u002F\u002Fgithub.com\u002FBlackHC\u002FBatchBALD)\n- [Subspace Inference for Bayesian Deep Learning](https:\u002F\u002Fgithub.com\u002Fwjmaddox\u002Fdrbayes)\n- [Bayesian Deep Learning with Variational Inference Package](https:\u002F\u002Fgithub.com\u002Fctallec\u002Fpyvarinf)\n- [Probabilistic Programming and Statistical Inference in PyTorch](https:\u002F\u002Fgithub.com\u002Fstepelu\u002Fptstat)\n- [Bayesian CNN with Variational Inference in 
PyTorch](https:\u002F\u002Fgithub.com\u002Fkumar-shridhar\u002FPyTorch-BayesianCNN)\n\n## \u003Ca name='SpikingNeuralNetworks'>\u003C\u002Fa>Spiking Neural Networks\n- [Norse, Library for Deep Learning with Spiking Neural Networks](https:\u002F\u002Fgithub.com\u002Fnorse\u002Fnorse)\n\n## \u003Ca name='AnomalyDetection'>\u003C\u002Fa>Anomaly Detection\n- [Detection of Accounting Anomalies using Deep Autoencoder Neural Networks](https:\u002F\u002Fgithub.com\u002FGitiHubi\u002FdeepAI)\n\n## \u003Ca name='RegressionTypes'>\u003C\u002Fa>Regression Types\n- [Quantile Regression DQN](https:\u002F\u002Fgithub.com\u002Fars-ashuha\u002Fquantile-regression-dqn-pytorch)\n\n## \u003Ca name='TimeSeries'>\u003C\u002Fa>Time Series\n- [Dual Self-Attention Network for Multivariate Time Series Forecasting](https:\u002F\u002Fgithub.com\u002Fbighuang624\u002FDSANet)\n- [DILATE: DIstortion Loss with shApe and tImE](https:\u002F\u002Fgithub.com\u002Fvincent-leguen\u002FDILATE)\n- [Variational Recurrent Autoencoder for Timeseries Clustering](https:\u002F\u002Fgithub.com\u002Ftejaslodaya\u002Ftimeseries-clustering-vae)\n- [Spatio-Temporal Neural Networks for Space-Time Series Modeling and Relations Discovery](https:\u002F\u002Fgithub.com\u002Fedouardelasalles\u002Fstnn)\n- [Flow Forecast: A deep learning framework for time series forecasting built in PyTorch](https:\u002F\u002Fgithub.com\u002FAIStream-Peelout\u002Fflow-forecast)\n\n## \u003Ca name='SyntheticDatasets'>\u003C\u002Fa>Synthetic Datasets\n- [Meta-Sim: Learning to Generate Synthetic Datasets](https:\u002F\u002Fgithub.com\u002Fnv-tlabs\u002Fmeta-sim)\n\n## \u003Ca name='NeuralNetworkGeneralImprovements'>\u003C\u002Fa>Neural Network General Improvements\n- [PH Training, persistent homology-based training monitor that detects overfitting early using topological data analysis](https:\u002F\u002Fgithub.com\u002Fneed-singularity\u002Fph-training)\n- [The Artificial Dendrite Network Library for 
PyTorch](https:\u002F\u002Fgithub.com\u002FPerforatedAI\u002FPerforatedAI)\n- [In-Place Activated BatchNorm for Memory-Optimized Training of DNNs](https:\u002F\u002Fgithub.com\u002Fmapillary\u002Finplace_abn)\n- [Train longer, generalize better: closing the generalization gap in large batch training of neural networks](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FbigBatch)\n- [FreezeOut: Accelerate Training by Progressively Freezing Layers](https:\u002F\u002Fgithub.com\u002Fajbrock\u002FFreezeOut)\n- [Binary Stochastic Neurons](https:\u002F\u002Fgithub.com\u002FWizaron\u002Fbinary-stochastic-neurons)\n- [Compact Bilinear Pooling](https:\u002F\u002Fgithub.com\u002FDeepInsight-PCALab\u002FCompactBilinearPooling-Pytorch)\n- [Mixed Precision Training in PyTorch](https:\u002F\u002Fgithub.com\u002Fsuvojit-0x55aa\u002Fmixed-precision-pytorch)\n\n## \u003Ca name='DNNApplicationsinChemistryandPhysics'>\u003C\u002Fa>DNN Applications in Chemistry and Physics\n- [Wave Physics as an Analog Recurrent Neural Network](https:\u002F\u002Fgithub.com\u002Ffancompute\u002Fwavetorch)\n- [Neural Message Passing for Quantum Chemistry](https:\u002F\u002Fgithub.com\u002Fpriba\u002Fnmp_qc)\n- [Automatic chemical design using a data-driven continuous representation of molecules](https:\u002F\u002Fgithub.com\u002Fcxhernandez\u002Fmolencoder)\n- [Deep Learning for Physical Processes: Integrating Prior Scientific Knowledge](https:\u002F\u002Fgithub.com\u002Femited\u002Fflow)\n- [Differentiable Molecular Simulation for Learning and Control](https:\u002F\u002Fgithub.com\u002Fwwang2\u002Ftorchmd)\n\n## \u003Ca name='NewThinkingonGeneralNeuralNetworkArchitecture'>\u003C\u002Fa>New Thinking on General Neural Network Architecture\n- [Complement Objective Training](https:\u002F\u002Fgithub.com\u002Fhenry8527\u002FCOT)\n- [Decoupled Neural Interfaces using Synthetic Gradients](https:\u002F\u002Fgithub.com\u002Fandrewliao11\u002Fdni.pytorch)\n\n## \u003Ca name='LinearAlgebra'>\u003C\u002Fa>Linear 
Algebra\n- [Eigenvectors from Eigenvalues](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Feigenvectors-from-eigenvalues)\n\n## \u003Ca name='APIAbstraction'>\u003C\u002Fa>API Abstraction\n- [Torch Layers, Shape inference for PyTorch, SOTA Layers](https:\u002F\u002Fgithub.com\u002Fszymonmaszke\u002Ftorchlayers)\n- [Hummingbird, run trained scikit-learn models on GPU with PyTorch](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fhummingbird)\n\n## \u003Ca name='LowLevelUtilities'>\u003C\u002Fa>Low Level Utilities\n- [TorchSharp, .NET API with access to underlying library powering PyTorch](https:\u002F\u002Fgithub.com\u002Finteresaaat\u002FTorchSharp)\n\n## \u003Ca name='PyTorchUtilities'>\u003C\u002Fa>PyTorch Utilities\n- [Functorch: prototype of JAX-like composable Function transformers for PyTorch](https:\u002F\u002Fgithub.com\u002Fzou3519\u002Ffunctorch)\n- [Poutyne: Simplified Framework for Training Neural Networks](https:\u002F\u002Fgithub.com\u002FGRAAL-Research\u002Fpoutyne)\n- [PyTorch Metric Learning](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning)\n- [Kornia: an Open Source Differentiable Computer Vision Library for PyTorch](https:\u002F\u002Fkornia.org\u002F)\n- [BackPACK to easily Extract Variance, Diagonal of Gauss-Newton, and KFAC](https:\u002F\u002Ff-dangel.github.io\u002Fbackpack\u002F)\n- [PyHessian for Computing Hessian Eigenvalues, trace of matrix, and ESD](https:\u002F\u002Fgithub.com\u002Famirgholami\u002FPyHessian)\n- [Hessian in PyTorch](https:\u002F\u002Fgithub.com\u002Fmariogeiger\u002Fhessian)\n- [Differentiable Convex Layers](https:\u002F\u002Fgithub.com\u002Fcvxgrp\u002Fcvxpylayers)\n- [Albumentations: Fast Image Augmentation Library](https:\u002F\u002Fgithub.com\u002Falbu\u002Falbumentations)\n- [Higher, obtain higher order gradients over losses spanning training loops](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhigher)\n- [Neural Pipeline, Training Pipeline for 
PyTorch](https:\u002F\u002Fgithub.com\u002Ftoodef\u002Fneural-pipeline)\n- [Layer-by-layer PyTorch Model Profiler for Checking Model Time Consumption](https:\u002F\u002Fgithub.com\u002Fawwong1\u002Ftorchprof)\n- [Sparse Distributions](https:\u002F\u002Fgithub.com\u002Fprobabll\u002Fsparse-distributions)\n- [Diffdist, Adds Support for Differentiable Communication allowing distributed model parallelism](https:\u002F\u002Fgithub.com\u002Fag14774\u002Fdiffdist)\n- [HessianFlow, Library for Hessian Based Algorithms](https:\u002F\u002Fgithub.com\u002Famirgholami\u002FHessianFlow)\n- [Texar, PyTorch Toolkit for Text Generation](https:\u002F\u002Fgithub.com\u002Fasyml\u002Ftexar-pytorch)\n- [PyTorch FLOPs counter](https:\u002F\u002Fgithub.com\u002FLyken17\u002Fpytorch-OpCounter)\n- [PyTorch Inference on C++ in Windows](https:\u002F\u002Fgithub.com\u002Fzccyman\u002Fpytorch-inference)\n- [EuclidesDB, Multi-Model Machine Learning Feature Database](https:\u002F\u002Fgithub.com\u002Fperone\u002Feuclidesdb)\n- [Data Augmentation and Sampling for Pytorch](https:\u002F\u002Fgithub.com\u002Fncullen93\u002Ftorchsample)\n- [PyText, deep learning based NLP modelling framework officially maintained by FAIR](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytext)\n- [Torchstat for Statistics on PyTorch Models](https:\u002F\u002Fgithub.com\u002FSwall0w\u002Ftorchstat)\n- [Load Audio files directly into PyTorch Tensors](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Faudio)\n- [Weight Initializations](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\u002Fblob\u002Fmaster\u002Ftorch\u002Fnn\u002Finit.py)\n- [Spatial transformer implemented in PyTorch](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fstn.pytorch)\n- [PyTorch AWS AMI, run PyTorch with GPU support in less than 5 minutes](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fdlami)\n- [Use tensorboard with PyTorch](https:\u002F\u002Fgithub.com\u002Flanpa\u002Ftensorboard-pytorch)\n- [Simple Fit Module in PyTorch, similar 
to Keras](https:\u002F\u002Fgithub.com\u002Fhenryre\u002Fpytorch-fitmodule)\n- [torchbearer: A model fitting library for PyTorch](https:\u002F\u002Fgithub.com\u002Fecs-vlc\u002Ftorchbearer)\n- [PyTorch to Keras model converter](https:\u002F\u002Fgithub.com\u002Fnerox8664\u002Fpytorch2keras)\n- [Gluon to PyTorch model converter with code generation](https:\u002F\u002Fgithub.com\u002Fnerox8664\u002Fgluon2pytorch)\n- [Catalyst: High-level utils for PyTorch DL & RL research](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n- [PyTorch Lightning: Scalable and lightweight deep learning research framework](https:\u002F\u002Fgithub.com\u002FPyTorchLightning\u002Fpytorch-lightning)\n- [Determined: Scalable deep learning platform with PyTorch support](https:\u002F\u002Fgithub.com\u002Fdetermined-ai\u002Fdetermined)\n- [PyTorch-Ignite: High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fignite)\n- [torchvision: A package consisting of popular datasets, model architectures, and common image transformations for computer vision.](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision)\n- [Poutyne: A Keras-like framework for PyTorch and handles much of the boilerplating code needed to train neural networks.](https:\u002F\u002Fgithub.com\u002FGRAAL-Research\u002Fpoutyne)\n- [torchensemble: Scikit-Learn like ensemble methods in PyTorch](https:\u002F\u002Fgithub.com\u002FAaronX121\u002FEnsemble-Pytorch)\n- [TorchFix - a linter for PyTorch-using code with autofix support](https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Ftorchfix)\n- [pytorch360convert - Differentiable image conversions between 360° equirectangular images, cubemaps, and perspective projections](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fpytorch360convert)\n- [torchcurves - differentiable parametric curve modules for 
PyTorch](https:\u002F\u002Fgithub.com\u002Falexshtf\u002Ftorchcurves)\n\n\n## \u003Ca name='PyTorchVideoTutorials'>\u003C\u002Fa>PyTorch Video Tutorials\n- [PyTorch Zero to All Lectures](http:\u002F\u002Fbit.ly\u002FPyTorchVideo)\n- [PyTorch For Deep Learning Full Course](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GIsg-ZUy0MY)\n- [PyTorch Lightning 101 with Alfredo Canziani and William Falcon](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)\n- [Practical Deep Learning with PyTorch](https:\u002F\u002Fwww.udemy.com\u002Fpractical-deep-learning-with-pytorch)\n\n\n## \u003Ca name='Community'>\u003C\u002Fa>Community\n- [PyTorch Discussion Forum](https:\u002F\u002Fdiscuss.pytorch.org\u002F)\n- [StackOverflow PyTorch Tags](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Fpytorch)\n- [Catalyst.Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fcatalyst-team-core\u002Fshared_invite\u002Fzt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)\n\n\n## \u003Ca name='TobeClassified'>\u003C\u002Fa>To be Classified\n- [Perturbative Neural Networks](https:\u002F\u002Fgithub.com\u002Fmichaelklachko\u002Fpnn.pytorch)\n- [Accurate Neural Network Potential](https:\u002F\u002Fgithub.com\u002Faiqm\u002Ftorchani)\n- [Scaling the Scattering Transform: Deep Hybrid Networks](https:\u002F\u002Fgithub.com\u002Fedouardoyallon\u002Fpyscatwave)\n- [CortexNet: a Generic Network Family for Robust Visual Temporal Representations](https:\u002F\u002Fgithub.com\u002Fe-lab\u002Fpytorch-CortexNet)\n- [Oriented Response Networks](https:\u002F\u002Fgithub.com\u002FZhouYanzhao\u002FORN)\n- [Associative Compression Networks](https:\u002F\u002Fgithub.com\u002Fjalexvig\u002Fassociative_compression_networks)\n- [Clarinet](https:\u002F\u002Fgithub.com\u002Fksw0306\u002FClariNet)\n- [Continuous Wavelet Transforms](https:\u002F\u002Fgithub.com\u002Ftomrunia\u002FPyTorchWavelets)\n- [mixup: Beyond Empirical Risk 
Minimization](https:\u002F\u002Fgithub.com\u002Fleehomyc\u002Fmixup_pytorch)\n- [Network In Network](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Ffunctional-zoo)\n- [Highway Networks](https:\u002F\u002Fgithub.com\u002Fc0nn3r\u002Fpytorch_highway_networks)\n- [Hybrid computing using a neural network with dynamic external memory](https:\u002F\u002Fgithub.com\u002Fypxie\u002Fpytorch-NeuCom)\n- [Value Iteration Networks](https:\u002F\u002Fgithub.com\u002Fonlytailei\u002FPyTorch-value-iteration-networks)\n- [Differentiable Neural Computer](https:\u002F\u002Fgithub.com\u002Fjingweiz\u002Fpytorch-dnc)\n- [A Neural Representation of Sketch Drawings](https:\u002F\u002Fgithub.com\u002Falexis-jacq\u002FPytorch-Sketch-RNN)\n- [Understanding Deep Image Representations by Inverting Them](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [NIMA: Neural Image Assessment](https:\u002F\u002Fgithub.com\u002Ftruskovskiyk\u002Fnima.pytorch)\n- [NASNet-A-Mobile. Ported weights](https:\u002F\u002Fgithub.com\u002Fveronikayurchuk\u002Fpretrained-models.pytorch)\n- [Graphics code generating model using Processing](https:\u002F\u002Fgithub.com\u002Fjtoy\u002Fsketchnet)\n\n## \u003Ca name='LinkstoThisRepository'>\u003C\u002Fa>Links to This Repository\n- [Github Repository](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch)\n- [Website](https:\u002F\u002Fwww.ritchieng.com\u002Fthe-incredible-pytorch\u002F)\n\n\n## \u003Ca name='Contributions'>\u003C\u002Fa>Contributions\nDo feel free to contribute!\n\nYou can raise an issue or submit a pull request, whichever is more convenient for you. 
The guideline is simple: just follow the format of the previous bullet point or create a new section if it's a new category.\n\n## New Special Dedicated List to AI Agents | The Incredible AI Agents\nFeel free to visit [The Incredible AI Agents](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-ai-agents), a curated list of resources on building, evaluating, deploying, monitoring AI Agents. Feel free to view, star, share and\u002For contribute!\n","\u003Cp align=\"center\">\u003Cimg width=\"40%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fritchieng_the-incredible-pytorch_readme_64b2d93e8e24.png\" \u002F>\u003C\u002Fp>\n\n--------------------------------------------------------------------------------\n\u003Cp align=\"center\">\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fstars-10000+-blue.svg\"\u002F>\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fforks-1900+-blue.svg\"\u002F>\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.svg\"\u002F>\n\u003C\u002Fp>\n\n这是一个精心整理的列表，包含与令人难以置信的 [PyTorch](http:\u002F\u002Fpytorch.org\u002F)（一种强大的开源机器学习框架）相关的教程、项目、库、视频、论文、书籍以及其他所有内容。欢迎随时提交 pull request（拉取请求）来为此列表做出贡献。\n\n\n# Table Of Contents\n\u003C!-- vscode-markdown-toc -->\n\n- [Table Of Contents](#table-of-contents)\n  - [Tutorials](#tutorials)\n  - [大型语言模型（LLMs）](#large-language-models-llms)\n  - [代理人工智能（Agentic AI）](#agentic-ai)\n  - [护栏与人工智能安全](#guardrails-and-ai-safety)\n  - [表格数据](#tabular-data)\n  - [可视化](#visualization)\n  - [可解释性](#explainability)\n  - [目标检测](#object-detection)\n  - [长尾分布 \u002F 分布外识别](#long-tailed--out-of-distribution-recognition)\n  - [激活函数](#activation-functions)\n  - [基于能量的学习](#energy-based-learning)\n  - [缺失数据](#missing-data)\n  - [架构搜索](#architecture-search)\n  - [持续学习](#continual-learning)\n  - [优化](#optimization)\n  - [量化](#quantization)\n  - [量子机器学习](#quantum-machine-learning)\n  - [神经网络压缩](#neural-network-compression)\n  - 
[面部、动作和姿态识别](#facial-action-and-pose-recognition)\n  - [超分辨率](#super-resolution)\n  - [视图合成](#synthetesizing-views)\n  - [语音](#voice)\n  - [医疗](#medical)\n  - [3D 分割、分类与回归](#3d-segmentation-classification-and-regression)\n  - [视频识别](#video-recognition)\n  - [循环神经网络（RNNs）](#recurrent-neural-networks-rnns)\n  - [卷积神经网络（CNNs）](#convolutional-neural-networks-cnns)\n  - [分割](#segmentation)\n  - [几何深度学习：图与非规则结构](#geometric-deep-learning-graph--irregular-structures)\n  - [排序](#sorting)\n  - [常微分方程网络](#ordinary-differential-equations-networks)\n  - [多任务学习](#multi-task-learning)\n  - [生成对抗网络（GANs）、变分自编码器（VAEs）与自编码器（AEs）](#gans-vaes-and-aes)\n  - [无监督学习](#unsupervised-learning)\n  - [对抗攻击](#adversarial-attacks)\n  - [风格迁移](#style-transfer)\n  - [图像描述](#image-captioning)\n  - [Transformer 模型](#transformers)\n  - [相似性网络与函数](#similarity-networks-and-functions)\n  - [推理](#reasoning)\n  - [通用自然语言处理](#general-nlp)\n  - [问答](#question-and-answering)\n  - [语音生成与识别](#speech-generation-and-recognition)\n  - [文档与文本分类](#document-and-text-classification)\n  - [文本生成](#text-generation)\n  - [文生图](#text-to-image)\n  - [翻译](#translation)\n  - [情感分析](#sentiment-analysis)\n  - [深度强化学习](#deep-reinforcement-learning)\n  - [深度贝叶斯学习与概率编程](#deep-bayesian-learning-and-probabilistic-programmming)\n  - [脉冲神经网络](#spiking-neural-networks)\n  - [异常检测](#anomaly-detection)\n  - [回归类型](#regression-types)\n  - [时间序列](#time-series)\n  - [合成数据集](#synthetic-datasets)\n  - [神经网络通用改进](#neural-network-general-improvements)\n  - [深度神经网络在化学与物理中的应用](#dnn-applications-in-chemistry-and-physics)\n  - [关于通用神经网络架构的新思考](#new-thinking-on-general-neural-network-architecture)\n  - [线性代数](#linear-algebra)\n  - [API 抽象](#api-abstraction)\n  - [底层工具](#low-level-utilities)\n  - [PyTorch 工具](#pytorch-utilities)\n  - [PyTorch 视频教程](#pytorch-video-tutorials)\n  - [社区](#community)\n  - [待分类](#to-be-classified)\n  - [指向本仓库的链接](#links-to-this-repository)\n  - [贡献](#contributions)\n  - [面向 AI 代理的全新专用列表 | 令人难以置信的 AI 
代理](#new-special-dedicated-list-to-ai-agents--the-incredible-ai-agents)\n\n\u003C!-- vscode-markdown-toc-config\n\tnumbering=false\n\tautoSave=true\n\t\u002Fvscode-markdown-toc-config -->\n\u003C!-- \u002Fvscode-markdown-toc -->\n\n## \u003Ca name='Tutorials'>\u003C\u002Fa>Tutorials\n- [官方 PyTorch 教程](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftutorials)\n- [官方 PyTorch 示例](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples)\n- [人工智能背后的数学：AI 基础指南 [全书]](https:\u002F\u002Fwww.freecodecamp.org\u002Fnews\u002Fthe-math-behind-artificial-intelligence-book\u002F)\n- [使用 PyTorch 深入深度学习](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en)\n- [如何阅读 PyTorch 源码](https:\u002F\u002Fgithub.com\u002Fdavidbau\u002Fhow-to-read-pytorch)\n- [PyTorch 深度学习迷你课程（多语言）](https:\u002F\u002Fgithub.com\u002FAtcold\u002Fpytorch-Deep-Learning-Minicourse)\n- [使用 PyTorch 进行实用深度学习](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fdeep-learning-wizard)\n- [深度学习模型](https:\u002F\u002Fgithub.com\u002Frasbt\u002Fdeeplearning-models)\n- [PyTorch 教程的 C++ 实现](https:\u002F\u002Fgithub.com\u002Fprabhuomkar\u002Fpytorch-cpp)\n- [介绍 PyTorch 的简单示例](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fpytorch-examples)\n- [PyTorch 迷你教程](https:\u002F\u002Fgithub.com\u002Fvinhkhuc\u002FPyTorch-Mini-Tutorials)\n- [用于 NLP 的深度学习](https:\u002F\u002Fgithub.com\u002Frguthrie3\u002FDeepLearningForNLPInPytorch)\n- [研究人员深度学习教程](https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fpytorch-tutorial)\n- [使用 PyTorch 实现的完全卷积网络](https:\u002F\u002Fgithub.com\u002Fwkentaro\u002Fpytorch-fcn)\n- [从零到全能的 PyTorch 简单教程](https:\u002F\u002Fgithub.com\u002Fhunkim\u002FPyTorchZeroToAll)\n- [DeepNLP 模型 - PyTorch](https:\u002F\u002Fgithub.com\u002FDSKSD\u002FDeepNLP-models-Pytorch)\n- [MILA PyTorch 欢迎教程](https:\u002F\u002Fgithub.com\u002Fmila-udem\u002Fwelcome_tutorials)\n- [高效 PyTorch：使用 TorchScript 优化运行时及数值稳定性优化](https:\u002F\u002Fgithub.com\u002Fvahidk\u002FEffectivePyTorch)\n- [实用 
PyTorch](https:\u002F\u002Fgithub.com\u002Fspro\u002Fpractical-pytorch)\n- [PyTorch 项目模板](https:\u002F\u002Fgithub.com\u002Fmoemen95\u002FPyTorch-Project-Template)\n- [使用 PyTorch 进行语义搜索](https:\u002F\u002Fgithub.com\u002Fkuutsav\u002Finformation-retrieval)\n\n## \u003Ca name='LargeLanguageModels'>\u003C\u002Fa>大型语言模型 (LLMs)\n- LLM 教程\n  - [构建大型语言模型 (LLM)（从零开始）](https:\u002F\u002Fgithub.com\u002Frasbt\u002FLLMs-from-scratch)\n  - [Hugging Face LLM 训练手册，一套有助于成功训练大型语言模型的方法论](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fllm_training_handbook)\n- 通用\n  - [Starcoder 2，代码生成模型系列](https:\u002F\u002Fgithub.com\u002Fbigcode-project\u002Fstarcoder2)\n  - [GPT Fast，快速且可修改的 PyTorch 原生 Transformer 推理 (Transformer 架构)](https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Fgpt-fast)\n  - [Mixtral Offloading，在 Colab 或消费级桌面运行 Mixtral-8x7B 模型](https:\u002F\u002Fgithub.com\u002Fdvmazur\u002Fmixtral-offloading)\n  - [Llama](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama)\n  - [Llama Recipes](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama-recipes)\n  - [TinyLlama](https:\u002F\u002Fgithub.com\u002Fjzhang38\u002FTinyLlama)\n  - [Mosaic 预训练 Transformer (MPT)](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fllm-foundry)\n  - [vLLM，面向 LLM 的高吞吐量、内存高效的推理和服务引擎](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm)\n  - [Dolly](https:\u002F\u002Fgithub.com\u002Fdatabrickslabs\u002Fdolly)\n  - [Vicuna](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat)\n  - [Mistral 7B](https:\u002F\u002Fgithub.com\u002Fmistralai\u002Fmistral-src)\n  - [BigDL LLM，用于在 Intel XPU（从笔记本到 GPU 再到云端）上使用 INT4 以极低延迟运行 LLM（大型语言模型）的库（适用于任何 PyTorch 模型）](https:\u002F\u002Fgithub.com\u002Fintel-analytics\u002FBigDL)\n  - [Simple LLM 微调器](https:\u002F\u002Fgithub.com\u002Flxe\u002Fsimple-llm-finetuner)\n  - [Petals，以 BitTorrent 方式在家用设备上协同运行 LLM，微调与推理速度比 offloading 快多达 10 倍](https:\u002F\u002Fgithub.com\u002Fbigscience-workshop\u002Fpetals)\n  - [Gemma，Google 
的轻量级、最先进开源模型系列](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fgemma_pytorch)\n  - [Qwen，阿里云的大型语言模型](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen)\n  - [CodeT5，面向代码理解与生成的代码感知编码器-解码器模型](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002FCodeT5)\n  - [OpenLLaMA，Meta AI 的 LLaMA 的宽松许可开源复刻版](https:\u002F\u002Fgithub.com\u002Fopenlm-research\u002Fopen_llama)\n  - [RedPajama，领先的开源模型及复现 LLaMA 训练数据集的工具包](https:\u002F\u002Fgithub.com\u002Ftogethercomputer\u002FRedPajama-Data)\n  - [MosaicML LLM Foundry，用于训练、微调和部署 LLM 的代码库](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fllm-foundry)\n  - [TECS-L (Golden MoE)，面向 PyTorch LLM 的稠密到混合专家 (MoE) 转换框架，具有最优抑制率 I≈1\u002Fe](https:\u002F\u002Fgithub.com\u002Fneed-singularity\u002FTECS-L)\n- 日语\n  - [Japanese Llama](https:\u002F\u002Fgithub.com\u002Fmasa3141\u002Fjapanese-alpaca-lora)\n  - [Japanese GPT Neox 和 Open Calm](https:\u002F\u002Fgithub.com\u002FhppRC\u002Fllm-lora-classification)\n- 中文\n  - [Chinese Llama-2 7B](https:\u002F\u002Fgithub.com\u002FLinkSoul-AI\u002FChinese-Llama-2-7b)\n  - [Chinese Vicuna](https:\u002F\u002Fgithub.com\u002FFacico\u002FChinese-Vicuna)\n- 检索增强生成 (RAG)\n  - [LlamaIndex，用于你的 LLM 应用的数据框架](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index)\n- 嵌入 (Embeddings)\n  - [ChromaDB，开源嵌入数据库](https:\u002F\u002Fgithub.com\u002Fchroma-core\u002Fchroma)\n- 应用\n  - [Langchain，通过组合性使用 LLM 构建应用](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flangchain)\n  - [LangSmith，构建生产级 LLM 应用的平台](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flangsmith-sdk)\n  - [LiteLLM，使用 OpenAI 格式调用所有 LLM API](https:\u002F\u002Fgithub.com\u002FBerriAI\u002Flitellm)\n  - [OpenAI Python，OpenAI API 的官方 Python 库](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python)\n  - [Guidance，控制大型语言模型的库](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fguidance)\n- 微调\n  - [Huggingface PEFT，最先进的参数高效微调 (PEFT)](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpeft)\n  - [Unsloth，以 80% 更少的内存将 LLM 微调速度提升 2-5 
倍](https:\u002F\u002Fgithub.com\u002Funslothai\u002Funsloth)\n  - [LoRA，大型语言模型的低秩自适应 (LoRA)](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FLoRA)\n  - [QLoRA，量化 LLM 的高效微调](https:\u002F\u002Fgithub.com\u002Fartidoro\u002Fqlora)\n  - [Axolotl，旨在简化各种 AI 模型微调的工具](https:\u002F\u002Fgithub.com\u002FOpenAccess-AI-Collective\u002Faxolotl)\n  - [LLaMA-Factory，100+ LLM 的统一高效微调](https:\u002F\u002Fgithub.com\u002Fhiyouga\u002FLLaMA-Factory)\n- 训练\n  - [Higgsfield，容错、高可扩展的 GPU 编排及专为训练数十亿至万亿参数模型设计的机器学习框架](https:\u002F\u002Fgithub.com\u002Fhiggsfield-ai\u002Fhiggsfield)\n  - [DeepSpeed，深度学习优化库](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed)\n  - [FairScale，用于高性能和大规模训练的 PyTorch 扩展](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffairscale)\n  - [Accelerate，以多 GPU、TPU 和混合精度训练及使用 PyTorch 模型的简单方法](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Faccelerate)\n  - [ColossalAI，大规模模型训练和推理的统一深度学习系统](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI)\n- 量化 (Quantization)\n  - [AutoGPTQ，基于 GPTQ 算法的易用 LLM 量化包，提供用户友好的 API](https:\u002F\u002Fgithub.com\u002FPanQiWei\u002FAutoGPTQ)\n  - [BitsAndBytes，通过 k-bit 量化使大型语言模型更易访问](https:\u002F\u002Fgithub.com\u002FTimDettmers\u002Fbitsandbytes)\n  - [GPTQ-for-LLaMa，使用 GPTQ 对 LLaMA 进行 4 位量化](https:\u002F\u002Fgithub.com\u002Fqwopqwop200\u002FGPTQ-for-LLaMa)\n  - [Optimum，🤗 Transformers 和 🤗 Diffusers 的加速](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Foptimum)\n\n## \u003Ca name='AgenticAI'>\u003C\u002Fa>代理式人工智能 (Agentic AI)\n- 多智能体系统\n  - [LangGraph，用于构建具有状态的多角色 LLMs (大语言模型) 应用程序的库](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraph)\n  - [AutoGen，支持使用多个可以相互对话的智能体创建应用程序的库](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fautogen)\n  - [CrewAI，用于编排角色扮演、自主 AI 智能体的框架](https:\u002F\u002Fgithub.com\u002Fjoaomdmoura\u002FcrewAI)\n  - [MetaGPT，用于软件公司模拟的多智能体框架](https:\u002F\u002Fgithub.com\u002Fgeekan\u002FMetaGPT)\n  - [AgentScope，用户友好的多智能体平台](https:\u002F\u002Fgithub.com\u002Fmodelscope\u002Fagentscope)\n  - 
[Swarm，用于构建和部署多智能体系统的教育框架](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fswarm)\n- 自主智能体\n  - [AutoGPT，使 GPT-4 完全自主的实验性项目](https:\u002F\u002Fgithub.com\u002FSignificant-Gravitas\u002FAutoGPT)\n  - [BabyAGI，AI 驱动的任务管理系统示例](https:\u002F\u002Fgithub.com\u002Fyoheinakajima\u002Fbabyagi)\n  - [LangChain Agents，使用 LangChain 构建智能体](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flangchain\u002Ftree\u002Fmaster\u002Flibs\u002Flangchain\u002Flangchain\u002Fagents)\n  - [ReAct：利用语言模型进行推理和行动](https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Freact)\n  - [Voyager，基于大型语言模型的开放式具身智能体](https:\u002F\u002Fgithub.com\u002FMineDojo\u002FVoyager)\n- 智能体编排与框架\n  - [Semantic Kernel，集成 AI 服务的轻量级 SDK (软件开发工具包)](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fsemantic-kernel)\n  - [OpenAI Function Calling，用于 OpenAI 模型函数调用的工具](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python)\n  - [LlamaIndex Agents，使用 LlamaIndex 的数据智能体](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index\u002Ftree\u002Fmain\u002Fllama-index-core\u002Fllama_index\u002Fcore\u002Fagent)\n  - [Haystack Agents，构建搜索和问答智能体](https:\u002F\u002Fgithub.com\u002Fdeepset-ai\u002Fhaystack)\n  - [DSPy，用于算法优化 LM (语言模型) 提示词和权重的框架](https:\u002F\u002Fgithub.com\u002Fstanfordnlp\u002Fdspy)\n- 规划与推理\n  - [Tree of Thoughts，利用大型语言模型进行深思熟虑的问题解决](https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm)\n  - [ReWOO：解耦推理与观测以实现高效增强语言模型](https:\u002F\u002Fgithub.com\u002Fbillxbf\u002FReWOO)\n  - [Plan-and-Solve Prompting（计划与解决提示法）](https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FPlan-and-Solve-Prompting)\n- 记忆与学习\n  - [MemGPT，创建具有长期记忆的 LLM 智能体](https:\u002F\u002Fgithub.com\u002Fcpacker\u002FMemGPT)\n  - [Zep，用于生产环境 LLM 应用的快速、可扩展构建块](https:\u002F\u002Fgithub.com\u002Fgetzep\u002Fzep)\n\n## \u003Ca name='GuardrailsandAISafety'>\u003C\u002Fa>护栏与 AI 安全\n- 内容过滤与审核\n  - [Guardrails AI，构建可靠 AI 应用程序的框架](https:\u002F\u002Fgithub.com\u002Fguardrails-ai\u002Fguardrails)\n  - [NeMo Guardrails，构建可信、安全且可靠的 
LLM 应用程序的工具包](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNeMo-Guardrails)\n  - [OpenAI Moderation API 工具](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fmoderation-api-release)\n  - [Detoxify，使用 Transformer 模型进行有毒评论分类](https:\u002F\u002Fgithub.com\u002Funitaryai\u002Fdetoxify)\n  - [Perspective API，毒性检测](https:\u002F\u002Fgithub.com\u002Fconversationai\u002Fperspectiveapi)\n- 提示注入防御\n  - [Prompt Injection Detector，检测提示注入攻击](https:\u002F\u002Fgithub.com\u002Fprotectai\u002Frebuff)\n  - [LLM Guard，LLM 交互的安全工具包](https:\u002F\u002Fgithub.com\u002Fprotectai\u002Fllm-guard)\n  - [Garak，LLM 漏洞扫描器](https:\u002F\u002Fgithub.com\u002Fleondz\u002Fgarak)\n- 偏见检测与缓解\n  - [FairLearn，评估和改进公平性的工具包](https:\u002F\u002Fgithub.com\u002Ffairlearn\u002Ffairlearn)\n  - [AIF360，全面的公平性指标和偏见缓解算法集](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360)\n  - [What-If Tool，用于分析和理解 ML (机器学习) 模型的工具](https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fwhat-if-tool)\n- 隐私与安全\n  - [Opacus，使用差分隐私训练 PyTorch 模型的库](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fopacus)\n  - [PySyft，安全和私有的深度学习框架](https:\u002F\u002Fgithub.com\u002FOpenMined\u002FPySyft)\n  - [CrypTen，隐私保护机器学习框架](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FCrypTen)\n  - [对抗鲁棒性工具箱，用于对抗攻击和防御的库](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002Fadversarial-robustness-toolbox)\n- 模型可解释性与可说明性\n  - [LIME，解释机器学习分类器的预测](https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Flime)\n  - [SHAP，解释机器学习模型输出的统一方法](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap)\n  - [InterpretML，解释和理解机器学习模型](https:\u002F\u002Fgithub.com\u002Finterpretml\u002Finterpret)\n  - [Alibi，解释机器学习模型的算法](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi)\n- 安全评估与测试\n  - [Safety Gym，AI 安全研究的环境和工具](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fsafety-gym)\n  - [Anthropic 的宪法 AI 实现](https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fconstitutional-ai)\n  - 
[HarmBench，自动化红队测试的标准评估框架](https:\u002F\u002Fgithub.com\u002Fcenterforaisafety\u002FHarmBench)\n\n## \u003Ca name='TabularData'>\u003C\u002Fa>表格数据\n- [PyTorch Frame：用于多模态表格学习的模块化框架](https:\u002F\u002Fgithub.com\u002Fpyg-team\u002Fpytorch-frame)\n- [Pytorch Tabular，用于为表格数据建模深度学习模型的标准框架](https:\u002F\u002Fgithub.com\u002Fmanujosephv\u002Fpytorch_tabular)\n- [Tab Transformer](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Ftab-transformer-pytorch)\n- [PyTorch-TabNet：具有注意力的可解释表格学习](https:\u002F\u002Fgithub.com\u002Fdreamquark-ai\u002Ftabnet)\n- [carefree-learn：基于 PyTorch 的用于表格数据集的最小自动机器学习 (AutoML) 解决方案](https:\u002F\u002Fgithub.com\u002Fcarefree0910\u002Fcarefree-learn)\n\n## \u003Ca name='Visualization'>\u003C\u002Fa>可视化\n- [Loss 可视化](https:\u002F\u002Fgithub.com\u002Ftomgoldstein\u002Floss-landscape)\n- [Grad-CAM：通过基于梯度的定位从深度网络获取视觉解释](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-grad-cam)\n- [深入卷积网络内部：可视化图像分类模型和显著性图](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [SmoothGrad：通过添加噪声去除噪声](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [DeepDream：梦幻般的致幻视觉效果](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fneural-dream)\n- [FlashTorch：PyTorch 神经网络的可视化工具包](https:\u002F\u002Fgithub.com\u002FMisaOgura\u002Fflashtorch)\n- [Lucent：Lucid 适配于 PyTorch](https:\u002F\u002Fgithub.com\u002Fgreentfrapp\u002Flucent)\n- [DreamCreator：使用自定义数据集简单训练 GoogleNet 模型以进行 DeepDream](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fdream-creator)\n- [CNN 特征图可视化](https:\u002F\u002Fgithub.com\u002Flewis-morris\u002Fmapextrackt)\n\n## \u003Ca name='Explainability'>\u003C\u002Fa>可解释性\n- [Neural-Backed Decision Trees](https:\u002F\u002Fgithub.com\u002Falvinwan\u002Fneural-backed-decision-trees)\n- [Efficient Covariance Estimation from Temporal Data](https:\u002F\u002Fgithub.com\u002Fhrayrhar\u002FT-CorEx)\n- [Hierarchical interpretations for neural network 
predictions](https:\u002F\u002Fgithub.com\u002Fcsinva\u002Fhierarchical-dnn-interpretations)\n- [Shap，一种解释任何机器学习模型输出的统一方法](https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap)\n- [使用 netron 可视化 PyTorch 保存的 .pth 深度学习模型](https:\u002F\u002Fgithub.com\u002Flutzroeder\u002Fnetron)\n- [Distilling a Neural Network Into a Soft Decision Tree](https:\u002F\u002Fgithub.com\u002Fkimhc6028\u002Fsoft-decision-tree)\n- [Captum，PyTorch 的统一模型可解释性库](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fcaptum)\n\n## \u003Ca name='ObjectDetection'>\u003C\u002Fa>目标检测\n- [MMDetection 目标检测工具箱](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection)\n- [Mask R-CNN 基准：PyTorch 1.0 中的 Faster R-CNN 和 Mask R-CNN](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmaskrcnn-benchmark)\n- [YOLO-World](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World)\n- [YOLOS](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FYOLOS)\n- [YOLOF](https:\u002F\u002Fgithub.com\u002Fmegvii-model\u002FYOLOF)\n- [YOLOX](https:\u002F\u002Fgithub.com\u002FMegvii-BaseDetection\u002FYOLOX)\n- [YOLOv12：以注意力为中心的实时目标检测器](https:\u002F\u002Fgithub.com\u002Fsunsmarterjie\u002Fyolov12)\n- [YOLOv11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n- [YOLOv10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10)\n- [YOLOv9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9)\n- [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n- [Yolov7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)\n- [YOLOv6](https:\u002F\u002Fgithub.com\u002Fmeituan\u002FYOLOv6)\n- [Yolov5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5)\n- [Yolov4](https:\u002F\u002Fgithub.com\u002FAlexeyAB\u002Fdarknet)\n- [YOLOv3](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov3)\n- [YOLOv2：实时目标检测](https:\u002F\u002Fgithub.com\u002Flongcw\u002Fyolo2-pytorch)\n- [SSD：单次多框检测器](https:\u002F\u002Fgithub.com\u002Famdegroot\u002Fssd.pytorch)\n- [Detectron 
目标检测模型](https:\u002F\u002Fgithub.com\u002Fignacio-rocco\u002Fdetectorch)\n- [使用深度卷积神经网络从街景图像中识别多位数字](https:\u002F\u002Fgithub.com\u002Fpotterhsu\u002FSVHNClassifier-PyTorch)\n- [鲸鱼检测器](https:\u002F\u002Fgithub.com\u002FTarinZ\u002Fwhale-detector)\n- [Catalyst.Detection](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fdetection)\n\n## \u003Ca name='Long-TailedOut-of-DistributionRecognition'>\u003C\u002Fa>长尾 \u002F 分布外识别\n- [用于组偏移的分布鲁棒神经网络：关于最坏情况泛化中正则化的重要性](https:\u002F\u002Fgithub.com\u002Fkohpangwei\u002Fgroup_DRO)\n- [不变风险最小化](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FInvariantRiskMinimization)\n- [训练置信度校准分类器以检测分布外样本](https:\u002F\u002Fgithub.com\u002Falinlab\u002FConfident_classifier)\n- [利用异常值暴露进行深度异常检测](https:\u002F\u002Fgithub.com\u002Fhendrycks\u002Foutlier-exposure)\n- [开放世界中的大规模长尾识别](https:\u002F\u002Fgithub.com\u002Fzhmiao\u002FOpenLongTailRecognition-OLTR)\n- [神经网络中分布外示例的原则性检测](https:\u002F\u002Fgithub.com\u002FShiyuLiang\u002Fodin-pytorch)\n- [学习神经网络中分布外检测的置信度](https:\u002F\u002Fgithub.com\u002Fuoguelph-mlrg\u002Fconfidence_estimation)\n- [PyTorch 不平衡类别采样器](https:\u002F\u002Fgithub.com\u002Fufoym\u002Fimbalanced-dataset-sampler)\n\n## \u003Ca name='ActivationFunctions'>\u003C\u002Fa>激活函数\n- [有理激活函数 - 可学习的有理激活函数](https:\u002F\u002Fgithub.com\u002Fml-research\u002Frational_activations)\n- [FreeGrad，用于自定义反向传播、直通估计器和梯度变换的 PyTorch 库](https:\u002F\u002Fgithub.com\u002Ftbox98\u002FFreeGrad)\n\n## \u003Ca name='Energy-BasedLearning'>\u003C\u002Fa>基于能量的学习\n- [EBGAN，基于能量的生成对抗网络](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FPyTorch-GAN\u002Fblob\u002Fmaster\u002Fimplementations\u002Febgan\u002Febgan.py)\n- [基于能量模型的最大熵生成器](https:\u002F\u002Fgithub.com\u002Fritheshkumar95\u002Fenergy_based_generative_models)\n\n\n## \u003Ca name='MissingData'>\u003C\u002Fa>缺失数据\n - [BRITS：时间序列的双向循环插补](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7911-brits-bidirectional-recurrent-imputation-for-time-series)\n\n## \u003Ca 
name='ArchitectureSearch'>\u003C\u002Fa>架构搜索\n- [EfficientNetV2](https:\u002F\u002Fgithub.com\u002Flukemelas\u002FEfficientNet-PyTorch)\n- [DenseNAS](https:\u002F\u002Fgithub.com\u002FJaminFong\u002FDenseNAS)\n- [DARTS：可微架构搜索](https:\u002F\u002Fgithub.com\u002Fquark0\u002Fdarts)\n- [高效神经架构搜索 (ENAS)](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FENAS-pytorch)\n- [EfficientNet：重新思考卷积神经网络的模型缩放](https:\u002F\u002Fgithub.com\u002Fzsef123\u002FEfficientNets-PyTorch)\n\n## \u003Ca name='ContinualLearning'>\u003C\u002Fa>持续学习\n- [Renate，神经网络的自动重训练](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Frenate)\n\n## \u003Ca name='Optimization'>\u003C\u002Fa>优化\n- [AccSGD, AdaBound, AdaMod, DiffGrad, Lamb, NovoGrad, RAdam, SGDW, Yogi 等更多优化器](https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer)\n- [Lookahead 优化器：前进 k 步，后退 1 步](https:\u002F\u002Fgithub.com\u002Falphadl\u002Flookahead.pytorch)\n- [RAdam，关于自适应学习率的方差及其他](https:\u002F\u002Fgithub.com\u002FLiyuanLucasLiu\u002FRAdam)\n- [Over9000，RAdam、Lookahead、Novograd 及其组合的比较](https:\u002F\u002Fgithub.com\u002Fmgrankin\u002Fover9000)\n- [AdaBound，像 Adam 一样快，像 SGD 一样好地进行训练](https:\u002F\u002Fgithub.com\u002FLuolc\u002FAdaBound)\n- [黎曼自适应优化方法](https:\u002F\u002Fgithub.com\u002Fferrine\u002Fgeoopt)\n- [L-BFGS](https:\u002F\u002Fgithub.com\u002Fhjmshi\u002FPyTorch-LBFGS)\n- [OptNet：将可微优化作为神经网络中的一层](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Foptnet)\n- [通过梯度下降学习梯度下降](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-meta-optimizer)\n- [脉冲神经网络中的代理梯度学习](https:\u002F\u002Fgithub.com\u002Ffzenke\u002Fspytorch)\n- [TorchOpt：一个高效的微分优化库](https:\u002F\u002Fgithub.com\u002Fmetaopt\u002Ftorchopt)\n- [ph-training：使用持久同调进行自动训练](https:\u002F\u002Fgithub.com\u002Fneed-singularity\u002Fph-training) - 使用拓扑数据分析（H0 持久性）来预测难度、寻找最佳学习率并实时检测过拟合（r=0.998）。\n\n## \u003Ca name='Quantization'>\u003C\u002Fa>量化\n- [加性 2 的幂次量化：神经网络的高效非均匀离散化](https:\u002F\u002Fgithub.com\u002Fyhhhli\u002FAPoT_Quantization)\n\n## \u003Ca 
name='QuantumMachineLearning'>\u003C\u002Fa>量子机器学习\n- [Tor10，PyTorch 中用于量子模拟的通用张量网络库](https:\u002F\u002Fgithub.com\u002Fkaihsin\u002FTor10)\n- [PennyLane，带有 PyTorch 接口的跨平台 Python 量子机器学习库](https:\u002F\u002Fgithub.com\u002FXanaduAI\u002Fpennylane)\n\n## \u003Ca name='NeuralNetworkCompression'>\u003C\u002Fa>神经网络压缩\n- [面向深度学习的贝叶斯压缩](https:\u002F\u002Fgithub.com\u002FKarenUllrich\u002FTutorial_BayesianCompressionForDL)\n- [Intel AI Lab 的神经网络蒸馏器：一个用于神经网络压缩研究的 Python 包](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fdistiller)\n- [通过 L0 正则化学习稀疏神经网络](https:\u002F\u002Fgithub.com\u002FAMLab-Amsterdam\u002FL0_regularization)\n- [基于加权稀疏投影和层输入掩码的深度神经网络能量约束压缩](https:\u002F\u002Fgithub.com\u002Fhyang1990\u002Fmodel_based_energy_constrained_compression)\n- [EigenDamage：在克罗内克分解特征基中的结构化剪枝](https:\u002F\u002Fgithub.com\u002Falecwangcq\u002FEigenDamage-Pytorch)\n- [为资源高效推理而剪枝卷积神经网络](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-pruning)\n- [剪枝神经网络：是时候扼杀它于萌芽了吗？（展示缩减后的网络表现更好）](https:\u002F\u002Fgithub.com\u002FBayesWatch\u002Fpytorch-prunes)\n\n## \u003Ca name='FacialActionandPoseRecognition'>\u003C\u002Fa>面部、动作与姿态识别\n- [Facenet：预训练的 PyTorch 人脸检测和识别模型](https:\u002F\u002Fgithub.com\u002Ftimesler\u002Ffacenet-pytorch)\n- [DGC-Net：密集几何对应网络](https:\u002F\u002Fgithub.com\u002FAaltoVision\u002FDGC-Net)\n- [基于 PyTorch 的高性能人脸识别库](https:\u002F\u002Fgithub.com\u002FZhaoJ9014\u002Fface.evoLVe.PyTorch)\n- [FaceBoxes：一款高精度的 CPU 实时人脸检测器](https:\u002F\u002Fgithub.com\u002Fzisianw\u002FFaceBoxes.PyTorch)\n- [我们距离解决 2D 和 3D 人脸对齐问题还有多远？（以及包含 230,000 个 3D 面部标记点的数据集）](https:\u002F\u002Fgithub.com\u002F1adrianb\u002Fface-alignment)\n- [使用 3D 残差网络学习时空特征以进行动作识别](https:\u002F\u002Fgithub.com\u002Fkenshohara\u002F3D-ResNets-PyTorch)\n- [PyTorch 实时多人姿态估计](https:\u002F\u002Fgithub.com\u002FDavexPro\u002Fpytorch-pose-estimation)\n- [SphereFace：用于人脸识别的深度超球面嵌入](https:\u002F\u002Fgithub.com\u002Fclcarwin\u002Fsphereface_pytorch)\n- 
[GANimation：基于单张图像的解剖学感知面部动画](https:\u002F\u002Fgithub.com\u002Falbertpumarola\u002FGANimation)\n- [Face++ 的 Shufflenet V2，效果优于论文](https:\u002F\u002Fgithub.com\u002Fericsun99\u002FShufflenet-v2-Pytorch)\n- [迈向野外 3D 人体姿态估计：一种弱监督方法](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002Fpytorch-pose-hg-3d)\n- [从视频中无监督学习深度和自运动](https:\u002F\u002Fgithub.com\u002FClementPinard\u002FSfmLearner-Pytorch)\n- [FlowNet 2.0：利用深度网络演进的光流估计](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fflownet2-pytorch)\n- [FlowNet：利用卷积网络学习光流](https:\u002F\u002Fgithub.com\u002FClementPinard\u002FFlowNetPytorch)\n- [使用空间金字塔网络进行光流估计](https:\u002F\u002Fgithub.com\u002Fsniklaus\u002Fpytorch-spynet)\n- [PyTorch 版本的 OpenFace](https:\u002F\u002Fgithub.com\u002Fthnkim\u002FOpenFacePytorch)\n- [PyTorch 中的深度人脸识别](https:\u002F\u002Fgithub.com\u002Fgrib0ed0v\u002Fface_recognition.pytorch)\n\n## \u003Ca name='Superresolution'>\u003C\u002Fa>超分辨率\n- [用于单图像超分辨率的增强型深度残差网络](https:\u002F\u002Fgithub.com\u002Fthstkdgus35\u002FEDSR-PyTorch)\n- [使用高效的亚像素卷积神经网络进行超分辨率](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples\u002Ftree\u002Fmaster\u002Fsuper_resolution)\n- [用于实时风格迁移和超分辨率的感知损失](https:\u002F\u002Fgithub.com\u002Fbengxy\u002FFastNeuralStyle)\n\n## \u003Ca name='SynthetesizingViews'>\u003C\u002Fa>合成视图\n- [NeRF，神经辐射场，合成复杂场景的新颖视图](https:\u002F\u002Fgithub.com\u002Fyenchenlin\u002Fnerf-pytorch)\n\n## \u003Ca name='Voice'>\u003C\u002Fa>语音\n- [Google AI VoiceFilter：通过说话人条件声谱图掩码进行目标语音分离](https:\u002F\u002Fgithub.com\u002Fmindslab-ai\u002Fvoicefilter)\n\n## \u003Ca name='Medical'>\u003C\u002Fa>医疗\n- [Medical Zoo，PyTorch 中的 3D 多模态医学图像分割库]( https:\u002F\u002Fgithub.com\u002Fblack0017\u002FMedicalZooPytorch)\n- [用于脑部 MRI 中 FLAIR 异常分割的 U-Net](https:\u002F\u002Fgithub.com\u002Fmateuszbuda\u002Fbrain-segmentation-pytorch)\n- [通过 ULMFiT 进行基因组分类](https:\u002F\u002Fgithub.com\u002Fkheyer\u002FGenomic-ULMFiT)\n- [深度神经网络提高放射科医生在乳腺癌筛查中的表现](https:\u002F\u002Fgithub.com\u002Fnyukat\u002Fbreast_cancer_classifier)\n- 
[Delira，用于医学成像原型的轻量级框架](https:\u002F\u002Fgithub.com\u002Fjustusschock\u002Fdelira)\n- [V-Net：用于体积医学图像分割的全卷积神经网络](https:\u002F\u002Fgithub.com\u002Fmattmacy\u002Fvnet.pytorch)\n- [Medical Torch，PyTorch 的医学成像框架](https:\u002F\u002Fgithub.com\u002Fperone\u002Fmedicaltorch)\n- [TorchXRayVision - 用于胸部 X 射线数据集和模型的库。包括预训练模型。](https:\u002F\u002Fgithub.com\u002Fmlmed\u002Ftorchxrayvision)\n\n## \u003Ca name='DSegmentationClassificationandRegression'>\u003C\u002Fa>3D 分割、分类与回归\n- [Kaolin，加速 3D 深度学习研究的库](https:\u002F\u002Fgithub.com\u002FNVIDIAGameWorks\u002Fkaolin)\n- [PointNet：用于 3D 分类和分割的点集深度学习](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fpointnet.pytorch)\n- [使用 MONAI 和 Catalyst 进行 3D 分割](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F15wJus5WZPYxTYE51yBhIBNhk9Tj4k3BT?usp=sharing)\n\n## \u003Ca name='VideoRecognition'>\u003C\u002Fa>视频识别\n- [随音乐起舞](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FDancing2Music)\n- [魔鬼在边缘：从无注释中学习语义边界](https:\u002F\u002Fgithub.com\u002Fnv-tlabs\u002FSTEAL)\n- [深度视频分析](https:\u002F\u002Fgithub.com\u002FAKSHAYUBHAT\u002FDeepVideoAnalytics)\n- [PredRNN：使用时空 LSTM 进行预测学习的循环神经网络](https:\u002F\u002Fgithub.com\u002Fthuml\u002Fpredrnn-pytorch)\n\n## \u003Ca name='RecurrentNeuralNetworksRNNs'>\u003C\u002Fa>循环神经网络 (RNNs)\n- [SRU: training RNNs as fast as CNNs](https:\u002F\u002Fgithub.com\u002Fasappresearch\u002Fsru)\n- [Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks](https:\u002F\u002Fgithub.com\u002Fyikangshen\u002FOrdered-Neurons)\n- [Averaged Stochastic Gradient Descent with Weight Dropped LSTM](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fawd-lstm-lm)\n- [Training RNNs as Fast as CNNs](https:\u002F\u002Fgithub.com\u002Ftaolei87\u002Fsru)\n- [Quasi-Recurrent Neural Network (QRNN)](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fpytorch-qrnn)\n- [ReSeg: A Recurrent Neural Network-based Model for Semantic Segmentation](https:\u002F\u002Fgithub.com\u002FWizaron\u002Freseg-pytorch)\n- [A Recurrent Latent 
Variable Model for Sequential Data (VRNN)](https:\u002F\u002Fgithub.com\u002Femited\u002FVariationalRecurrentNeuralNetwork)\n- [Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks](https:\u002F\u002Fgithub.com\u002Fdasguptar\u002Ftreelstm.pytorch)\n- [Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling](https:\u002F\u002Fgithub.com\u002FDSKSD\u002FRNN-for-Joint-NLU)\n- [Attentive Recurrent Comparators](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Farc-pytorch)\n- [Collection of Sequence to Sequence Models with PyTorch](https:\u002F\u002Fgithub.com\u002FMaximumEntropy\u002FSeq2Seq-PyTorch)\n\t1. Vanilla Sequence to Sequence models\n\t2. Attention based Sequence to Sequence models\n\t3. Faster attention mechanisms using dot products between the final encoder and decoder hidden states\n\n## \u003Ca name='ConvolutionalNeuralNetworksCNNs'>\u003C\u002Fa>卷积神经网络 (CNNs)\n- [LegoNet: Efficient Convolutional Neural Networks with Lego Filters](https:\u002F\u002Fgithub.com\u002Fhuawei-noah\u002FLegoNet)\n- [MeshCNN, a convolutional neural network designed specifically for triangular meshes](https:\u002F\u002Fgithub.com\u002Franahanocka\u002FMeshCNN)\n- [Octave Convolution](https:\u002F\u002Fgithub.com\u002Fd-li14\u002Foctconv.pytorch)\n- [PyTorch Image Models, ResNet\u002FResNeXT, DPN, MobileNet-V3\u002FV2\u002FV1, MNASNet, Single-Path NAS, FBNet](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-image-models)\n- [Deep Neural Networks with Box Convolutions](https:\u002F\u002Fgithub.com\u002Fshrubb\u002Fbox-convolutions)\n- [Invertible Residual Networks](https:\u002F\u002Fgithub.com\u002Fjarrelscy\u002FiResnet)\n- [Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks ](https:\u002F\u002Fgithub.com\u002Fxternalz\u002FSDPoint)\n- [Faster Faster R-CNN Implementation](https:\u002F\u002Fgithub.com\u002Fjwyang\u002Ffaster-rcnn.pytorch)\n\t- [Faster 
R-CNN Another Implementation](https:\u002F\u002Fgithub.com\u002Flongcw\u002Ffaster_rcnn_pytorch)\n- [Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fattention-transfer)\n- [Wide ResNet model in PyTorch](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Ffunctional-zoo)\n\t- [DiracNets: Training Very Deep Neural Networks Without Skip-Connections](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fdiracnets)\n- [An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition](https:\u002F\u002Fgithub.com\u002Fbgshih\u002Fcrnn)\n- [Efficient Densenet](https:\u002F\u002Fgithub.com\u002Fgpleiss\u002Fefficient_densenet_pytorch)\n- [Video Frame Interpolation via Adaptive Separable Convolution](https:\u002F\u002Fgithub.com\u002Fsniklaus\u002Fpytorch-sepconv)\n- [Learning local feature descriptors with triplets and shallow convolutional neural networks](https:\u002F\u002Fgithub.com\u002Fedgarriba\u002Fexamples\u002Ftree\u002Fmaster\u002Ftriplet)\n- [Densely Connected Convolutional Networks](https:\u002F\u002Fgithub.com\u002Fbamos\u002Fdensenet.pytorch)\n- [Very Deep Convolutional Networks for Large-Scale Image Recognition](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fpytorch-vgg)\n- [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and \\\u003C0.5MB model size](https:\u002F\u002Fgithub.com\u002Fgsp-27\u002Fpytorch_Squeezenet)\n- [Deep Residual Learning for Image Recognition](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Ffunctional-zoo)\n- [Training Wide ResNets for CIFAR-10 and CIFAR-100 in PyTorch](https:\u002F\u002Fgithub.com\u002Fxternalz\u002FWideResNet-pytorch)\n- [Deformable Convolutional Network](https:\u002F\u002Fgithub.com\u002Foeway\u002Fpytorch-deform-conv)\n- [Convolutional Neural Fabrics](https:\u002F\u002Fgithub.com\u002Fvabh\u002Fconvolutional-neural-fabrics)\n- 
[Deformable Convolutional Networks in PyTorch](https:\u002F\u002Fgithub.com\u002F1zb\u002Fdeformable-convolution-pytorch)\n- [Dilated ResNet combination with Dilated Convolutions](https:\u002F\u002Fgithub.com\u002Ffyu\u002Fdrn)\n- [Striving for Simplicity: The All Convolutional Net](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [Convolutional LSTM Network](https:\u002F\u002Fgithub.com\u002Fautoman000\u002FConvolution_LSTM_pytorch)\n- [Big collection of pretrained classification models](https:\u002F\u002Fgithub.com\u002Fosmr\u002Fimgclsmob)\n- [PyTorch Image Classification with Kaggle Dogs vs Cats Dataset](https:\u002F\u002Fgithub.com\u002Frdcolema\u002Fpytorch-image-classification)\n- [CIFAR-10 on Pytorch with VGG, ResNet and DenseNet](https:\u002F\u002Fgithub.com\u002Fkuangliu\u002Fpytorch-cifar)\n- [Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)](https:\u002F\u002Fgithub.com\u002Faaron-xichen\u002Fpytorch-playground)\n- [NVIDIA\u002Funsupervised-video-interpolation](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Funsupervised-video-interpolation)\n\n## \u003Ca name='Segmentation'>\u003C\u002Fa>分割\n- [Detectron2 by FAIR](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2)\n- [Pixel-wise Segmentation on VOC2012 Dataset using PyTorch](https:\u002F\u002Fgithub.com\u002Fbodokaiser\u002Fpiwise)\n- [Pywick - High-level batteries-included neural network training library for Pytorch](https:\u002F\u002Fgithub.com\u002Fachaiah\u002Fpywick)\n- [Improving Semantic Segmentation via Video Propagation and Label Relaxation](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fsemantic-segmentation)\n- [Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation](https:\u002F\u002Fgithub.com\u002FJianqiangWan\u002FSuper-BPD)\n- [Catalyst.Segmentation](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fsegmentation)\n- [Segmentation 
models with pretrained backbones](https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models.pytorch)\n\n## \u003Ca name='GeometricDeepLearning:GraphIrregularStructures'>\u003C\u002Fa>几何深度学习：图与不规则结构\n- [PyTorch Geometric，深度学习扩展](https:\u002F\u002Fgithub.com\u002Frusty1s\u002Fpytorch_geometric)\n- [PyTorch Geometric Temporal：PyTorch Geometric 的时间序列扩展库](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fpytorch_geometric_temporal)\n- [PyTorch Geometric Signed Directed：PyTorch Geometric 的符号与有向扩展库](https:\u002F\u002Fgithub.com\u002FSherylHYX\u002Fpytorch_geometric_signed_directed)\n- [ChemicalX：基于 PyTorch 的药物配对评分深度学习库](https:\u002F\u002Fgithub.com\u002FAstraZeneca\u002Fchemicalx)\n- [自注意力图池化](https:\u002F\u002Fgithub.com\u002Finyeoplee77\u002FSAGPool)\n- [位置感知图神经网络](https:\u002F\u002Fgithub.com\u002FJiaxuanYou\u002FP-GNN)\n- [符号图卷积神经网络](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSGCN)\n- [图 U-Nets](https:\u002F\u002Fgithub.com\u002FHongyangGao\u002Fgunet)\n- [Cluster-GCN：用于训练深度和大型图卷积网络的高效算法](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FClusterGCN)\n- [MixHop：通过稀疏邻域混合实现高阶图卷积架构](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FMixHop-and-N-GCN)\n- [半监督图分类：层次图视角](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSEAL-CI)\n- [FAIR 的 PyTorch BigGraph：用于从大规模图数据生成嵌入](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FPyTorch-BigGraph)\n- [胶囊图神经网络](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FCapsGNN)\n- [Splitter：学习捕捉多种社交上下文的节点表示](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSplitter)\n- [高阶图卷积层](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FMixHop-and-N-GCN)\n- [预测然后传播：图神经网络遇见个性化 PageRank](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FAPPNP)\n- [洛伦兹嵌入：在双曲空间中学习连续层次](https:\u002F\u002Fgithub.com\u002FtheSage21\u002Florentz-embeddings)\n- [图小波神经网络](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FGraphWaveletNeuralNetwork)\n- 
[注意你的步骤：通过图注意力学习节点嵌入](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FAttentionWalk)\n- [符号图卷积网络](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSGCN)\n- [使用结构注意力进行图分类](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FGAM)\n- [SimGNN：一种快速图相似度计算的神经网络方法](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSimGNN)\n- [SINE：可扩展的不完整网络嵌入](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSINE)\n- [HypER：超网络知识图谱嵌入](https:\u002F\u002Fgithub.com\u002Fibalazevic\u002FHypER)\n- [TuckER：用于知识图谱补全的张量分解](https:\u002F\u002Fgithub.com\u002Fibalazevic\u002FTuckER)\n- [PyKEEN：用于学习和评估知识图谱嵌入的 Python 库](https:\u002F\u002Fgithub.com\u002Fpykeen\u002Fpykeen\u002F)\n- [Pathfinder Discovery Networks 用于神经消息传递](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FPDN)\n- [SSSNET：半监督符号网络聚类](https:\u002F\u002Fgithub.com\u002FSherylHYX\u002FSSSNET_Signed_Clustering)\n- [MagNet：用于有向图的神经网络](https:\u002F\u002Fgithub.com\u002Fmatthew-hirn\u002Fmagnet)\n- [PyTorch Geopooling：PyTorch 中神经网络的地理空间池化模块](https:\u002F\u002Fgithub.com\u002Fybubnov\u002Ftorch_geopooling)\n\n## \u003Ca name='Sorting'>\u003C\u002Fa>排序\n- [通过连续松弛对排序网络进行随机优化](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fneuralsort)\n\n## \u003Ca name='OrdinaryDifferentialEquationsNetworks'>\u003C\u002Fa>常微分方程网络\n- [Latent ODEs 用于不规则采样时间序列](https:\u002F\u002Fgithub.com\u002FYuliaRubanova\u002Flatent_ode)\n- [GRU-ODE-Bayes：间歇观测时间序列的连续建模](https:\u002F\u002Fgithub.com\u002Fedebrouwer\u002Fgru_ode_bayes)\n\n## \u003Ca name='Multi-taskLearning'>\u003C\u002Fa>多任务学习\n- [层次多任务学习模型](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fhmtl)\n- [基于任务的端到端模型学习](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Fe2e-model-learning)\n- [torchMTL：PyTorch 中轻量级多任务学习模块](https:\u002F\u002Fgithub.com\u002Fchrisby\u002FtorchMTL)\n\n## \u003Ca name='GANsVAEsandAEs'>\u003C\u002Fa>生成对抗网络 (GANs)、变分自编码器 (VAEs) 和自编码器 (AEs)\n- [BigGAN：用于高保真自然图像合成的大规模 GAN 
训练](https:\u002F\u002Fgithub.com\u002Fajbrock\u002FBigGAN-PyTorch)\n- [PyTorch 中生成模型的高保真性能指标](https:\u002F\u002Fgithub.com\u002Ftoshas\u002Ftorch-fidelity)\n- [Mimicry，用于 GAN 研究复现的 PyTorch 库](https:\u002F\u002Fgithub.com\u002Fkwotsin\u002Fmimicry)\n- [Clean Readable CycleGAN](https:\u002F\u002Fgithub.com\u002Faitorzip\u002FPyTorch-CycleGAN)\n- [StarGAN](https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fstargan)\n- [Block Neural Autoregressive Flow](https:\u002F\u002Fgithub.com\u002Fnicola-decao\u002FBNAF)\n- [使用条件 GAN 进行高分辨率图像合成与语义操作](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fpix2pixHD)\n- [一种基于样式的生成器架构用于生成对抗网络](https:\u002F\u002Fgithub.com\u002Frosinality\u002Fstyle-based-gan-pytorch)\n- [GANDissect，用于可视化 GAN 中神经元的 PyTorch 工具](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002Fgandissect)\n- [通过互信息估计和最大化学习深度表示](https:\u002F\u002Fgithub.com\u002FDuaneNielsen\u002FDeepInfomaxPytorch)\n- [变分拉普拉斯自编码器](https:\u002F\u002Fgithub.com\u002Fyookoon\u002FVLAE)\n- [VeGANS，易于训练 GAN 的库](https:\u002F\u002Fgithub.com\u002Funit8co\u002Fvegans)\n- [渐进式增长 GAN 以提高质量、稳定性和变化性](https:\u002F\u002Fgithub.com\u002Fgithub-pengge\u002FPyTorch-progressive_growing_of_gans)\n- [条件 GAN](https:\u002F\u002Fgithub.com\u002Fkmualim\u002FCGAN-Pytorch\u002F)\n- [Wasserstein GAN](https:\u002F\u002Fgithub.com\u002Fmartinarjovsky\u002FWassersteinGAN)\n- [对抗生成 - 编码器网络](https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002FAGE)\n- [使用条件对抗网络进行图像到图像转换](https:\u002F\u002Fgithub.com\u002Fjunyanz\u002Fpytorch-CycleGAN-and-pix2pix)\n- [使用循环一致对抗网络进行非配对图像到图像转换](https:\u002F\u002Fgithub.com\u002Fjunyanz\u002Fpytorch-CycleGAN-and-pix2pix)\n- [关于批归一化和权重归一化在生成对抗网络中的影响](https:\u002F\u002Fgithub.com\u002Fstormraiser\u002FGAN-weight-norm)\n- [改进的 Wasserstein GAN 训练](https:\u002F\u002Fgithub.com\u002Fjalola\u002Fimproved-wgan-pytorch)\n- [PyTorch 生成模型集合](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n\t- 生成对抗网络 (GAN)\n\t\t1. [基础 GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661)\n\t\t2. 
[条件 GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1411.1784)\n\t\t3. [InfoGAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03657)\n\t\t4. [Wasserstein GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.07875)\n\t\t5. [模式正则化 GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.02136)\n\t- 变分自编码器 (VAE)\n\t\t1. [基础 VAE](https:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6114)\n\t\t2. [条件 VAE](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.5298)\n\t\t3. [去噪 VAE](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06406)\n\t\t4. [对抗自编码器](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644)\n\t\t5. [对抗变分贝叶斯](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.04722)\n- [改进的 Wasserstein GAN 训练](https:\u002F\u002Fgithub.com\u002Fcaogang\u002Fwgan-gp)\n- [CycleGAN 和半监督 GAN](https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fmnist-svhn-transfer)\n- [使用 Householder Flow 和改进凸组合线性逆自回归流来改进变分自编码器](https:\u002F\u002Fgithub.com\u002Fjmtomczak\u002Fvae_vpflows)\n- [PyTorch GAN 集合](https:\u002F\u002Fgithub.com\u002Fznxlwm\u002Fpytorch-generative-model-collections)\n- [生成对抗网络，专注于动漫人脸绘制](https:\u002F\u002Fgithub.com\u002Fjayleicn\u002FanimeGAN)\n- [简单的生成对抗网络](https:\u002F\u002Fgithub.com\u002Fmailmahee\u002Fpytorch-generative-adversarial-networks)\n- [对抗自编码器](https:\u002F\u002Fgithub.com\u002Ffducau\u002FAAE_pytorch)\n- [torchgan：用于在 PyTorch 中建模生成对抗网络的框架](https:\u002F\u002Fgithub.com\u002Ftorchgan\u002Ftorchgan)\n- [评估深度生成模型的有损压缩率](https:\u002F\u002Fgithub.com\u002Fhuangsicong\u002Frate_distortion)\n- [Catalyst.GAN](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fgan)\n    1. [基础 GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661)\n    2. [条件 GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1411.1784)\n    3. [Wasserstein GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.07875)\n    4. 
[改进的 Wasserstein GAN 训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.00028)\n\n## \u003Ca name='UnsupervisedLearning'>\u003C\u002Fa>无监督学习\n- [通过不变性和散布实例特征进行无监督嵌入学习](https:\u002F\u002Fgithub.com\u002Fmangye16\u002FUnsupervised_Embedding_Learning)\n- [AND：锚点邻域发现](https:\u002F\u002Fgithub.com\u002FRaymond-sci\u002FAND)\n\n## \u003Ca name='AdversarialAttacks'>\u003C\u002Fa>对抗攻击\n- [深度神经网络容易被欺骗：不可识别图像的高置信度预测](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-adversarial-attacks)\n- [解释和利用对抗样本](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-adversarial-attacks)\n- [AdverTorch - 对抗鲁棒性研究的工具箱](https:\u002F\u002Fgithub.com\u002FBorealisAI\u002Fadvertorch)\n- [通过神经指纹检测对抗样本](https:\u002F\u002Fgithub.com\u002FStephanZheng\u002Fneural-fingerprinting)\n\n## \u003Ca name='StyleTransfer'>\u003C\u002Fa>风格迁移\n- [Pystiche：神经风格迁移框架](https:\u002F\u002Fgithub.com\u002Fpystiche\u002Fpystiche)\n- [艺术风格的神经算法](https:\u002F\u002Fgithub.com\u002Falexis-jacq\u002FPytorch-Tutorials)\n- [用于实时风格迁移的多风格生成网络](https:\u002F\u002Fgithub.com\u002Fzhanghang1989\u002FPyTorch-Style-Transfer)\n- [DeOldify，老照片上色](https:\u002F\u002Fgithub.com\u002Fjantic\u002FDeOldify)\n- [神经风格迁移](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fneural-style-pt)\n- [快速神经风格迁移](https:\u002F\u002Fgithub.com\u002Fdarkstar112358\u002Ffast-neural-style)\n- [像 Bob Ross 一样绘画](https:\u002F\u002Fgithub.com\u002Fkendricktan\u002Fdrawlikebobross)\n\n## \u003Ca name='ImageCaptioning'>\u003C\u002Fa>图像描述\n- [CLIP (对比语言 - 图像预训练)](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP)\n- [Neuraltalk 2，PyTorch 中的图像描述模型](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002Fneuraltalk2.pytorch)\n- [使用 PyTorch 从图像生成描述](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FcaptionGen)\n- [DenseCap：用于密集描述的完全卷积定位网络](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fdensecap)\n\n## \u003Ca name='Transformers'>\u003C\u002Fa>Transformer 架构\n- [Attention is all you 
need](https:\u002F\u002Fgithub.com\u002Fjadore801120\u002Fattention-is-all-you-need-pytorch)\n- [空间变换网络](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fstn.pytorch)\n\n## \u003Ca name='SimilarityNetworksandFunctions'>\u003C\u002Fa>相似度网络与函数\n- [条件相似度网络](https:\u002F\u002Fgithub.com\u002Fandreasveit\u002Fconditional-similarity-networks)\n\n## \u003Ca name='Reasoning'>\u003C\u002Fa>推理\n- [推断和执行视觉推理程序](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fclevr-iep)\n\n## \u003Ca name='GeneralNLP'>\u003C\u002Fa>通用自然语言处理 (NLP)\n- [nanoGPT，用于训练\u002F微调中等规模 GPT 的最快仓库](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002FnanoGPT)\n- [minGPT，重新实现 GPT 使其小巧、简洁、可解释且具教育意义](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002FminGPT)\n- [Espresso，模块化神经自动语音识别工具包](https:\u002F\u002Fgithub.com\u002Ffreewym\u002Fespresso)\n- [基于混合注意力的标签感知文档表示，用于极端多标签文本分类](https:\u002F\u002Fgithub.com\u002FHX-idiot\u002FHybrid_Attention_XML)\n- [XLNet](https:\u002F\u002Fgithub.com\u002Fgraykode\u002Fxlnet-Pytorch)\n- [通过阅读进行对话：按需机器阅读的有内容神经网络对话](https:\u002F\u002Fgithub.com\u002Fqkaren\u002Fconverse_reading_cmr)\n- [跨语言语言模型预训练](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FXLM)\n- [通过 PyTorch(深度学习框架) NMT(神经机器翻译) 的 Libre Office 翻译](https:\u002F\u002Fgithub.com\u002Flernapparat\u002Flotranslate)\n- [BERT](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpytorch-pretrained-BERT)\n- [VSE++：改进的视觉 - 语义嵌入](https:\u002F\u002Fgithub.com\u002Ffartashf\u002Fvsepp)\n- [一种结构化自注意力句子嵌入](https:\u002F\u002Fgithub.com\u002FExplorerFreda\u002FStructured-Self-Attentive-Sentence-Embedding)\n- [神经序列标注模型](https:\u002F\u002Fgithub.com\u002Fjiesutd\u002FPyTorchSeqLabel)\n- [Skip-Thought 向量](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Fskip-thoughts)\n- [PyTorch 中训练 Seq2Seq(序列到序列) 模型的完整套件](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002Fseq2seq.pytorch)\n- [MUSE：多语言无监督和有监督嵌入](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FMUSE)\n- [TorchMoji：PyTorch 实现的 
DeepMoji，用于理解表达情感的语言](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FtorchMoji)\n\n## \u003Ca name='QuestionandAnswering'>\u003C\u002Fa>问答\n- [PyTorch 中的视觉问答 (VQA)](https:\u002F\u002Fgithub.com\u002FCadene\u002Fvqa.pytorch)\n- [阅读维基百科以回答开放域问题](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDrQA)\n- [成交还是不成交？谈判对话的端到端学习](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fend-to-end-negotiator)\n- [视觉问答 (VQA) 的可解释计数](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Firlc-vqa)\n- [使用 PyTorch 的开源聊天机器人](https:\u002F\u002Fgithub.com\u002Fjinfagang\u002Fpytorch_chatbot)\n\n## \u003Ca name='SpeechGenerationandRecognition'>\u003C\u002Fa>语音生成与识别\n- [PyTorch-Kaldi 语音识别工具包](https:\u002F\u002Fgithub.com\u002Fmravanelli\u002Fpytorch-kaldi)\n- [WaveGlow：一种基于流的语音合成生成网络](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fwaveglow)\n- [OpenNMT](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-py)\n- [Deep Speech 2：英语和普通话的端到端语音识别](https:\u002F\u002Fgithub.com\u002FSeanNaren\u002Fdeepspeech.pytorch)\n- [WeNet：面向生产且就绪的端到端语音识别工具包](https:\u002F\u002Fgithub.com\u002Fmobvoi\u002Fwenet)\n\n## \u003Ca name='DocumentandTextClassification'>\u003C\u002Fa>文档与文本分类\n- [用于文档分类的层次注意力网络](https:\u002F\u002Fgithub.com\u002Fcedias\u002FHAN-pytorch)\n- [用于文档分类的层次注意力网络](https:\u002F\u002Fgithub.com\u002FEdGENetworks\u002Fattention-networks-for-classification)\n- [基于 CNN(卷积神经网络) 的文本分类](https:\u002F\u002Fgithub.com\u002Fxiayandi\u002FPytorch_text_classification)\n\n## \u003Ca name='TextGeneration'>\u003C\u002Fa>文本生成\n- [Pytorch 诗歌生成](https:\u002F\u002Fgithub.com\u002Fjhave\u002Fpytorch-poetry-generation)\n\n## \u003Ca name='TexttoImage'>\u003C\u002Fa>文生图\n- [Stable Diffusion](https:\u002F\u002Fgithub.com\u002FCompVis\u002Fstable-diffusion)\n- [Dall-E 2](https:\u002F\u002Fgithub.com\u002Flucidrains\u002FDALLE2-pytorch)\n- [Dall-E](https:\u002F\u002Fgithub.com\u002Flucidrains\u002FDALLE-pytorch)\n\n## \u003Ca name='Translation'>\u003C\u002Fa>翻译\n- [开源 (MIT) 神经机器翻译 (NMT) 
系统](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-py)\n\n## \u003Ca name='SentimentAnalysis'>\u003C\u002Fa>情感分析\n- [用于 SemEval 2014 情感分析（基于方面）的 RNN(循环神经网络)](https:\u002F\u002Fgithub.com\u002Fvanzytay\u002Fpytorch_sentiment_rnn)\n- [Seq2Seq(序列到序列) 意图解析](https:\u002F\u002Fgithub.com\u002Fspro\u002Fpytorch-seq2seq-intent-parsing)\n- [微调 BERT 用于情感分析](https:\u002F\u002Fgithub.com\u002Fbarissayil\u002FSentimentAnalysis)\n\n## \u003Ca name='DeepReinforcementLearning'>\u003C\u002Fa>深度强化学习 (RL)\n- [图像增强是你所需的一切：从像素正则化深度强化学习](https:\u002F\u002Fgithub.com\u002Fdenisyarats\u002Fdrq)\n- [通过随机网络蒸馏进行探索](https:\u002F\u002Fgithub.com\u002Fopenai\u002Frandom-network-distillation)\n- [EGG：游戏中语言的涌现，快速实现具有离散通道通信的多智能体游戏](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FEGG)\n- [时序差分 VAE(变分自编码器)](https:\u002F\u002Fopenreview.net\u002Fpdf?id=S1x4ghC9tQ)\n- [180 行 PyTorch 代码的高性能 Atari A3C(异步优势演员 - 评论家) 代理](https:\u002F\u002Fgithub.com\u002Fgreydanus\u002Fbaby-a3c)\n- [在大规模多智能体合作与竞争任务中学习何时通信](https:\u002F\u002Fgithub.com\u002FIC3Net\u002FIC3Net)\n- [用于多智能体强化学习的 Actor-Attention-Critic](https:\u002F\u002Fgithub.com\u002Fshariqiqbal2810\u002FMAAC)\n- [PyTorch C++ 中的 PPO(近端策略优化)](https:\u002F\u002Fgithub.com\u002Fmhubii\u002Fppo_pytorch_cpp)\n- [结合模拟人类反馈的 Bandit 神经机器翻译强化学习](https:\u002F\u002Fgithub.com\u002Fkhanhptnk\u002Fbandit-nmt)\n- [深度强化学习 (RL) 的异步方法](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-a3c)\n- [基于模型加速的连续深度 Q-Learning(Q 学习)](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-naf)\n- [用于 Atari 2600 的深度强化学习 (RL) 异步方法](https:\u002F\u002Fgithub.com\u002Fdgriff777\u002Frl_a3c_pytorch)\n- [信任区域策略优化](https:\u002F\u002Fgithub.com\u002Fmjacar\u002Fpytorch-trpo)\n- [结合强化学习的神经组合优化](https:\u002F\u002Fgithub.com\u002Fpemami4911\u002Fneural-combinatorial-rl-pytorch)\n- [用于探索的噪声网络](https:\u002F\u002Fgithub.com\u002FKaixhin\u002FNoisyNet-A3C)\n- [分布式近端策略优化](https:\u002F\u002Fgithub.com\u002Falexis-jacq\u002FPytorch-DPPO)\n- [ViZDoom 环境中的强化学习模型，使用 
PyTorch](https:\u002F\u002Fgithub.com\u002Fakolishchak\u002Fdoom-net-pytorch)\n- [使用 Gym 和 Pytorch 的强化学习模型](https:\u002F\u002Fgithub.com\u002Fjingweiz\u002Fpytorch-rl)\n- [SLM-Lab：PyTorch 中的模块化深度强化学习框架](https:\u002F\u002Fgithub.com\u002Fkengz\u002FSLM-Lab)\n- [Catalyst.RL](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst-rl)\n\n## \u003Ca name='DeepBayesianLearningandProbabilisticProgrammming'>\u003C\u002Fa>深度贝叶斯学习与概率编程\n- [BatchBALD：高效且多样化的深度贝叶斯主动学习批次采集](https:\u002F\u002Fgithub.com\u002FBlackHC\u002FBatchBALD)\n- [贝叶斯深度学习的子空间推断](https:\u002F\u002Fgithub.com\u002Fwjmaddox\u002Fdrbayes)\n- [带有变分推断包的贝叶斯深度学习](https:\u002F\u002Fgithub.com\u002Fctallec\u002Fpyvarinf)\n- [PyTorch 中的概率编程与统计推断](https:\u002F\u002Fgithub.com\u002Fstepelu\u002Fptstat)\n- [PyTorch 中具有变分推理的贝叶斯 CNN(卷积神经网络)](https:\u002F\u002Fgithub.com\u002Fkumar-shridhar\u002FPyTorch-BayesianCNN)\n\n## \u003Ca name='SpikingNeuralNetworks'>\u003C\u002Fa>脉冲神经网络 (SNN)\n- [Norse，用于脉冲神经网络深度学习的库](https:\u002F\u002Fgithub.com\u002Fnorse\u002Fnorse)\n\n## \u003Ca name='AnomalyDetection'>\u003C\u002Fa>异常检测\n- [使用深度自编码器神经网络 (Deep Autoencoder Neural Networks) 进行会计异常检测](https:\u002F\u002Fgithub.com\u002FGitiHubi\u002FdeepAI)\n\n## \u003Ca name='RegressionTypes'>\u003C\u002Fa>回归类型\n- [分位数回归 DQN](https:\u002F\u002Fgithub.com\u002Fars-ashuha\u002Fquantile-regression-dqn-pytorch)\n\n## \u003Ca name='TimeSeries'>\u003C\u002Fa>时间序列\n- [用于多变量时间序列预测的双重自注意力网络 (Dual Self-Attention Network)](https:\u002F\u002Fgithub.com\u002Fbighuang624\u002FDSANet)\n- [DILATE: 带有形状和时间的失真损失 (DIstortion Loss with shApe and tImE)](https:\u002F\u002Fgithub.com\u002Fvincent-leguen\u002FDILATE)\n- [用于时间序列聚类的变分循环自编码器 (Variational Recurrent Autoencoder)](https:\u002F\u002Fgithub.com\u002Ftejaslodaya\u002Ftimeseries-clustering-vae)\n- [用于时空系列建模和关系发现的时空神经网络 (Spatio-Temporal Neural Networks)](https:\u002F\u002Fgithub.com\u002Fedouardelasalles\u002Fstnn)\n- [Flow Forecast: 基于 PyTorch 
构建的时间序列预测深度学习框架](https:\u002F\u002Fgithub.com\u002FAIStream-Peelout\u002Fflow-forecast)\n\n## \u003Ca name='SyntheticDatasets'>\u003C\u002Fa>合成数据集\n- [Meta-Sim: 学习生成合成数据集](https:\u002F\u002Fgithub.com\u002Fnv-tlabs\u002Fmeta-sim)\n\n## \u003Ca name='NeuralNetworkGeneralImprovements'>\u003C\u002Fa>神经网络通用改进\n- [PH Training: 基于持久同调 (Persistent Homology) 的训练监控器，利用拓扑数据分析 (Topological Data Analysis) 早期检测过拟合 (Overfitting)](https:\u002F\u002Fgithub.com\u002Fneed-singularity\u002Fph-training)\n- [PyTorch 的人工树突网络库 (The Artificial Dendrite Network Library)](https:\u002F\u002Fgithub.com\u002FPerforatedAI\u002FPerforatedAI)\n- [用于 DNN 内存优化训练的原地激活批归一化 (In-Place Activated BatchNorm)](https:\u002F\u002Fgithub.com\u002Fmapillary\u002Finplace_abn)\n- [训练更久，泛化更好：缩小神经网络大批次训练中的泛化差距 (Generalization Gap)](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FbigBatch)\n- [FreezeOut: 通过逐步冻结层来加速训练](https:\u002F\u002Fgithub.com\u002Fajbrock\u002FFreezeOut)\n- [二元随机神经元 (Binary Stochastic Neurons)](https:\u002F\u002Fgithub.com\u002FWizaron\u002Fbinary-stochastic-neurons)\n- [紧凑双线性池化 (Compact Bilinear Pooling)](https:\u002F\u002Fgithub.com\u002FDeepInsight-PCALab\u002FCompactBilinearPooling-Pytorch)\n- [PyTorch 中的混合精度训练 (Mixed Precision Training)](https:\u002F\u002Fgithub.com\u002Fsuvojit-0x55aa\u002Fmixed-precision-pytorch)\n\n## \u003Ca name='DNNApplicationsinChemistryandPhysics'>\u003C\u002Fa>深度神经网络 (DNN) 在化学和物理中的应用\n- [波物理作为模拟式循环神经网络 (Analog Recurrent Neural Network)](https:\u002F\u002Fgithub.com\u002Ffancompute\u002Fwavetorch)\n- [用于量子化学的神经消息传递 (Neural Message Passing)](https:\u002F\u002Fgithub.com\u002Fpriba\u002Fnmp_qc)\n- [使用数据驱动的分子连续表示进行自动化学设计](https:\u002F\u002Fgithub.com\u002Fcxhernandez\u002Fmolencoder)\n- [物理过程的深度学习：整合先验科学知识 (Deep Learning for Physical Processes)](https:\u002F\u002Fgithub.com\u002Femited\u002Fflow)\n- [用于学习和控制的可微分子模拟 (Differentiable Molecular Simulation)](https:\u002F\u002Fgithub.com\u002Fwwang2\u002Ftorchmd)\n\n## \u003Ca
name='NewThinkingonGeneralNeuralNetworkArchitecture'>\u003C\u002Fa>关于通用神经网络架构的新思考\n- [互补目标训练 (Complement Objective Training)](https:\u002F\u002Fgithub.com\u002Fhenry8527\u002FCOT)\n- [使用合成梯度 (Synthetic Gradients) 的解耦神经接口 (Decoupled Neural Interfaces)](https:\u002F\u002Fgithub.com\u002Fandrewliao11\u002Fdni.pytorch)\n\n## \u003Ca name='LinearAlgebra'>\u003C\u002Fa>线性代数\n- [从特征值获取特征向量 (Eigenvectors from Eigenvalues)](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Feigenvectors-from-eigenvalues)\n\n## \u003Ca name='APIAbstraction'>\u003C\u002Fa>API 抽象\n- [Torch Layers, PyTorch 的形状推断 (Shape inference), 最先进 (SOTA) 层](https:\u002F\u002Fgithub.com\u002Fszymonmaszke\u002Ftorchlayers)\n- [Hummingbird, 使用 PyTorch 在 GPU (图形处理器) 上运行训练好的 scikit-learn 模型](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fhummingbird)\n\n## \u003Ca name='LowLevelUtilities'>\u003C\u002Fa>底层工具\n- [TorchSharp, 提供访问驱动 PyTorch 的底层库的 .NET API](https:\u002F\u002Fgithub.com\u002Finteresaaat\u002FTorchSharp)\n\n## \u003Ca name='PyTorchUtilities'>\u003C\u002Fa>PyTorch 工具集\n- [Functorch：用于 PyTorch 的类似 JAX 的可组合函数转换器原型](https:\u002F\u002Fgithub.com\u002Fzou3519\u002Ffunctorch)\n- [Poutyne：简化神经网络训练框架](https:\u002F\u002Fgithub.com\u002FGRAAL-Research\u002Fpoutyne)\n- [PyTorch 度量学习](https:\u002F\u002Fgithub.com\u002FKevinMusgrave\u002Fpytorch-metric-learning)\n- [Kornia：用于 PyTorch 的开源可微计算机视觉库](https:\u002F\u002Fkornia.org\u002F)\n- [BackPACK：轻松提取方差、高斯 - 牛顿对角线和 KFAC](https:\u002F\u002Ff-dangel.github.io\u002Fbackpack\u002F)\n- [PyHessian：用于计算海森矩阵特征值、矩阵迹和 ESD](https:\u002F\u002Fgithub.com\u002Famirgholami\u002FPyHessian)\n- [Hessian in PyTorch](https:\u002F\u002Fgithub.com\u002Fmariogeiger\u002Fhessian)\n- [可微凸层](https:\u002F\u002Fgithub.com\u002Fcvxgrp\u002Fcvxpylayers)\n- [Albumentations：快速图像增强库](https:\u002F\u002Fgithub.com\u002Falbu\u002Falbumentations)\n- [Higher：获取跨越训练循环损失的高阶梯度](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fhigher)\n- [Neural Pipeline，PyTorch 
训练流水线](https:\u002F\u002Fgithub.com\u002Ftoodef\u002Fneural-pipeline)\n- [逐层 PyTorch 模型分析器，用于检查模型时间消耗](https:\u002F\u002Fgithub.com\u002Fawwong1\u002Ftorchprof)\n- [稀疏分布](https:\u002F\u002Fgithub.com\u002Fprobabll\u002Fsparse-distributions)\n- [Diffdist：添加对可微通信的支持，以实现分布式模型并行](https:\u002F\u002Fgithub.com\u002Fag14774\u002Fdiffdist)\n- [HessianFlow：基于海森矩阵算法的库](https:\u002F\u002Fgithub.com\u002Famirgholami\u002FHessianFlow)\n- [Texar，用于文本生成的 PyTorch 工具包](https:\u002F\u002Fgithub.com\u002Fasyml\u002Ftexar-pytorch)\n- [PyTorch FLOPs 计数器](https:\u002F\u002Fgithub.com\u002FLyken17\u002Fpytorch-OpCounter)\n- [Windows 下 C++ 上的 PyTorch 推理](https:\u002F\u002Fgithub.com\u002Fzccyman\u002Fpytorch-inference)\n- [EuclidesDB：多模型机器学习特征数据库](https:\u002F\u002Fgithub.com\u002Fperone\u002Feuclidesdb)\n- [PyTorch 的数据增强与采样](https:\u002F\u002Fgithub.com\u002Fncullen93\u002Ftorchsample)\n- [PyText，由 FAIR 官方维护的基于深度学习的自然语言处理 (NLP) 建模框架](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytext)\n- [Torchstat：用于 PyTorch 模型的统计信息](https:\u002F\u002Fgithub.com\u002FSwall0w\u002Ftorchstat)\n- [直接将音频文件加载到 PyTorch 张量中](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Faudio)\n- [权重初始化](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\u002Fblob\u002Fmaster\u002Ftorch\u002Fnn\u002Finit.py)\n- [在 PyTorch 中实现的空间变换器](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fstn.pytorch)\n- [PyTorch AWS AMI，在不到 5 分钟内运行支持 GPU 的 PyTorch](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fdlami)\n- [将 TensorBoard 与 PyTorch 配合使用](https:\u002F\u002Fgithub.com\u002Flanpa\u002Ftensorboard-pytorch)\n- [PyTorch 中的简单拟合模块，类似于 Keras](https:\u002F\u002Fgithub.com\u002Fhenryre\u002Fpytorch-fitmodule)\n- [torchbearer：用于 PyTorch 的模型拟合库](https:\u002F\u002Fgithub.com\u002Fecs-vlc\u002Ftorchbearer)\n- [PyTorch 到 Keras 模型转换器](https:\u002F\u002Fgithub.com\u002Fnerox8664\u002Fpytorch2keras)\n- [带代码生成的 Gluon 到 PyTorch 模型转换器](https:\u002F\u002Fgithub.com\u002Fnerox8664\u002Fgluon2pytorch)\n- [Catalyst：用于 PyTorch 深度学习 (DL) 与强化学习 (RL) 
研究的高级工具](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n- [PyTorch Lightning：可扩展且轻量级的深度学习研究框架](https:\u002F\u002Fgithub.com\u002FPyTorchLightning\u002Fpytorch-lightning)\n- [Determined：支持 PyTorch 的可扩展深度学习平台](https:\u002F\u002Fgithub.com\u002Fdetermined-ai\u002Fdetermined)\n- [PyTorch-Ignite：灵活透明地帮助在 PyTorch 中训练和评估神经网络的高级库](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fignite)\n- [torchvision：一个包含流行数据集、模型架构和计算机视觉常用图像转换的包。](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision)\n- [Poutyne：一个类似 Keras 的 PyTorch 框架，处理训练神经网络所需的大部分样板代码。](https:\u002F\u002Fgithub.com\u002FGRAAL-Research\u002Fpoutyne)\n- [torchensemble：PyTorch 中类似 Scikit-Learn 的集成方法](https:\u002F\u002Fgithub.com\u002FAaronX121\u002FEnsemble-Pytorch)\n- [TorchFix - 带有自动修复支持的 PyTorch 代码检查器](https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Ftorchfix)\n- [pytorch360convert - 360°等距柱状投影图像、立方图和透视投影之间的可微图像转换](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fpytorch360convert)\n- [torchcurves - 用于 PyTorch 的可微参数曲线模块](https:\u002F\u002Fgithub.com\u002Falexshtf\u002Ftorchcurves)\n\n\n## \u003Ca name='PyTorchVideoTutorials'>\u003C\u002Fa>PyTorch 视频教程\n- [PyTorch 从零开始讲座](http:\u002F\u002Fbit.ly\u002FPyTorchVideo)\n- [PyTorch 深度学习完整课程](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GIsg-ZUy0MY)\n- [PyTorch Lightning 101 (Alfredo Canziani 和 William Falcon)](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)\n- [PyTorch 实用深度学习](https:\u002F\u002Fwww.udemy.com\u002Fpractical-deep-learning-with-pytorch)\n\n\n## \u003Ca name='Community'>\u003C\u002Fa>社区\n- [PyTorch 讨论论坛](https:\u002F\u002Fdiscuss.pytorch.org\u002F)\n- [StackOverflow PyTorch 标签](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Fpytorch)\n- [Catalyst Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fcatalyst-team-core\u002Fshared_invite\u002Fzt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)\n\n\n## \u003Ca name='TobeClassified'>\u003C\u002Fa>待分类\n- 
[扰动神经网络](https:\u002F\u002Fgithub.com\u002Fmichaelklachko\u002Fpnn.pytorch)\n- [精确神经网络势函数](https:\u002F\u002Fgithub.com\u002Faiqm\u002Ftorchani)\n- [扩展散射变换：深度混合网络](https:\u002F\u002Fgithub.com\u002Fedouardoyallon\u002Fpyscatwave)\n- [CortexNet：用于鲁棒视觉时序表示的通用网络家族](https:\u002F\u002Fgithub.com\u002Fe-lab\u002Fpytorch-CortexNet)\n- [定向响应网络](https:\u002F\u002Fgithub.com\u002FZhouYanzhao\u002FORN)\n- [关联压缩网络](https:\u002F\u002Fgithub.com\u002Fjalexvig\u002Fassociative_compression_networks)\n- [Clarinet](https:\u002F\u002Fgithub.com\u002Fksw0306\u002FClariNet)\n- [连续小波变换](https:\u002F\u002Fgithub.com\u002Ftomrunia\u002FPyTorchWavelets)\n- [mixup：超越经验风险最小化](https:\u002F\u002Fgithub.com\u002Fleehomyc\u002Fmixup_pytorch)\n- [网络中的网络](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Ffunctional-zoo)\n- [高速公路网络](https:\u002F\u002Fgithub.com\u002Fc0nn3r\u002Fpytorch_highway_networks)\n- [使用具有动态外部记忆的神经网络进行混合计算](https:\u002F\u002Fgithub.com\u002Fypxie\u002Fpytorch-NeuCom)\n- [价值迭代网络](https:\u002F\u002Fgithub.com\u002Fonlytailei\u002FPyTorch-value-iteration-networks)\n- [可微神经计算机](https:\u002F\u002Fgithub.com\u002Fjingweiz\u002Fpytorch-dnc)\n- [草图的神经表示](https:\u002F\u002Fgithub.com\u002Falexis-jacq\u002FPytorch-Sketch-RNN)\n- [通过反转理解深度图像表示](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fpytorch-cnn-visualizations)\n- [NIMA：神经图像评估](https:\u002F\u002Fgithub.com\u002Ftruskovskiyk\u002Fnima.pytorch)\n- [NASNet-A-Mobile。移植后的权重](https:\u002F\u002Fgithub.com\u002Fveronikayurchuk\u002Fpretrained-models.pytorch)\n- [使用 Processing 生成图形的代码模型](https:\u002F\u002Fgithub.com\u002Fjtoy\u002Fsketchnet)\n\n## \u003Ca name='LinkstoThisRepository'>\u003C\u002Fa>指向本仓库的链接\n- [GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch)\n- [官网](https:\u002F\u002Fwww.ritchieng.com\u002Fthe-incredible-pytorch\u002F)\n\n\n## \u003Ca name='Contributions'>\u003C\u002Fa>贡献\n欢迎随时参与贡献！\n\n您可以提出 issue（问题）或提交 pull request（拉取请求），选择对您更方便的方式即可。指南很简单：只需遵循前一个项目符号的格式，如果是新类别则创建新章节。\n\n## 面向 AI 
Agents（人工智能代理）的全新专属列表 | The Incredible AI Agents\n欢迎访问 [The Incredible AI Agents](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-ai-agents)，这是一个关于构建、评估、部署及监控 AI Agents 的资源精选列表。欢迎查看、Star（星标）、分享和\u002F或贡献！","# the-incredible-pytorch 快速上手指南\n\n`the-incredible-pytorch` 是一个精心策划的 PyTorch 资源导航库，包含教程、项目、库、论文及视频等内容。它本身不是一个可安装的软件包，而是一个供开发者查阅和学习的项目集合。\n\n## 1. 环境准备\n\n由于该仓库收录了大量 PyTorch 相关的实战项目和教程，为了顺利运行其中的示例代码，建议准备以下开发环境：\n\n- **操作系统**：Linux \u002F macOS \u002F Windows\n- **编程语言**：Python 3.8+\n- **核心依赖**：PyTorch (根据硬件配置选择 CPU 或 CUDA 版本)\n- **版本控制**：Git (用于克隆仓库)\n\n## 2. 安装步骤\n\n本工具无需通过 `pip` 安装，直接通过 Git 克隆至本地即可查看和管理资源列表。\n\n```bash\n# 克隆仓库到本地\ngit clone https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch.git\n\n# 进入项目目录\ncd the-incredible-pytorch\n```\n\n> **注意**：国内用户若下载速度较慢，建议使用 GitHub 加速服务或 Gitee 镜像进行克隆。\n\n## 3. 基本使用\n\n克隆完成后，打开根目录下的 `README.md` 文件开始浏览。\n\n### 浏览资源\n利用文档顶部的 **Table Of Contents**（目录）快速定位感兴趣的领域，例如：\n- **入门学习**：[Tutorials](#tutorials)\n- **大语言模型**：[Large Language Models (LLMs)](#large-language-models-llms)\n- **视觉任务**：[Object Detection](#object-detection), [Segmentation](#segmentation)\n- **特定场景**：[Medical](#medical), [Time Series](#time-series)\n\n### 调用资源\n点击目录中的链接跳转至外部项目页面，获取具体的实现代码。您可以直接将推荐的项目集成到您的工作中。\n\n```python\n# 示例：参考仓库中推荐的库进行开发\nimport torch\nfrom transformers import AutoModelForCausalLM\n\n# 加载模型（示例代码，具体请参考对应项目的 README）\nmodel = AutoModelForCausalLM.from_pretrained(\"path\u002Fto\u002Fmodel\")\n```\n\n### 贡献更新\n如果您发现了优质的新资源，欢迎向该仓库提交 Pull Request 以丰富列表内容。","某电商公司算法团队正在开发基于深度学习的商品瑕疵检测系统，工程师小张急需引入注意力机制提升模型精度并解释决策依据。\n\n### 没有 the-incredible-pytorch 时\n- 需要在多个搜索引擎和论文库之间切换，难以辨别哪些 PyTorch 教程适合当前版本。\n- 搜索具体的注意力机制实现代码耗时过长，往往下载到过时的示例。\n- 缺乏可视化工具推荐，无法直观展示模型关注区域，客户质疑模型可信度。\n- 遇到架构设计瓶颈时，找不到相关的几何深度学习或 Transformer 变体资料。\n\n### 使用 the-incredible-pytorch 后\n- 直接在“可视化”和“对象检测”板块找到适配的最新开源项目，节省大量检索时间。\n- 获取经过验证的注意力机制实现代码，确保与当前 PyTorch 版本兼容。\n- 利用精选的解释性工具包快速生成热力图，有效回应业务方对模型透明度的要求。\n- 参考“新神经网络架构思考”章节，成功引入改进型 Transformer
结构提升准确率。\n\nthe-incredible-pytorch 通过聚合高质量资源，帮助开发者从资料搜集转向核心代码实现，显著提升研发效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fritchieng_the-incredible-pytorch_64b2d93e.png","ritchieng","Ritchie Ng","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fritchieng_1edc2eaa.jpg","Notes: ritchieng.com • Guides: deeplearningwizard.com","Imperial College London | NUS","Singapore","ritchieng@u.nus.edu","RitchieNg","ritchieng.com","https:\u002F\u002Fgithub.com\u002Fritchieng",null,12485,2212,"2026-04-05T10:15:57","MIT",1,"未说明",{"notes":93,"python":91,"dependencies":94},"该仓库是一个 PyTorch 相关的资源聚合列表（Curated List），包含教程、项目、库等链接，并非独立的软件包。因此没有统一的运行环境要求，使用其中列出的具体项目时需参考各项目文档。",[91],[13],[97,98,99,100,101,102],"pytorch","python","deep-learning-tutorial","deep-neural-networks","deep-learning-library","deep-learning",22,"2026-03-27T02:49:30.150509","2026-04-06T05:44:26.731888",[107,112,117,122,127,132,137,142,147,152],{"id":108,"question_zh":109,"answer_zh":110,"source_url":111},2010,"哪里可以找到持续学习（Continual Learning）的 PyTorch 资源？","已创建新分类并收录了 AWS Labs 的 Renate 库（https:\u002F\u002Fgithub.com\u002Fawslabs\u002Frenate），这是专门用于持续学习算法的库。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F133",{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},2008,"图像描述（Image Captioning）推荐哪个 PyTorch 实现？","虽然 DenseCap 是经典论文，但其原始实现为 Lua\u002FTorch。本仓库推荐 Salesforce 团队维护的稳定版本（PyTorch），地址为：https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fdensecap。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F118",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},2009,"该资源库是否包含表格数据（Tabular Data）相关的 PyTorch 模型？","是的，已添加 Tabular Data 分类。推荐使用 pytorch-tabnet，它支持基于注意力的 TabNet 模型，可用于二分类、多分类及回归任务。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F102",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},2011,"“视频深度和自运动无监督学习”这篇论文有 PyTorch 原生实现吗？","原始实现是 TensorFlow 版本的（由 tinghuiz 
开发）。本仓库收录的是 ClementPinard 基于 PyTorch 的实现版本（SfmLearner-Pytorch），已在仓库中更正分类。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F42",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},2012,"医学图像分割有哪些推荐的 PyTorch 框架？","推荐 MedicalZoo Pytorch，这是一个用于多模态 2D\u002F3D 医学图像分割的深度学习框架，已在医疗板块收录。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F94",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},2013,"适合初学者的 PyTorch 教程在哪里？","推荐参考 yunjey 编写的 PyTorch 教程（https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fpytorch-tutorial），内容详尽且易于理解，已被添加到列表中。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F8",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},2014,"是否有基于 PyTorch 的 GAN 框架推荐？","可以查看 torchgan，这是一个基于 TFGAN 的框架，允许轻松构建复杂的 GAN 模型，并提供模型动物园（model-zoo）供参考。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F59",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},2015,"序列标注（如命名实体识别）有哪些 PyTorch 实现？","推荐 PyTorchSeqLabel，它实现了 LSTM-CRF 结构，支持字符级 LSTM\u002FCNN 特征输入，复现了 NAACL2016 和 ACL2016 的相关架构。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F46",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},2016,"这个资源库的内容是如何组织的？","为了便于查找，内容已按子类别划分，包括元学习（meta-learning）、深度强化学习、GANs 等具体领域，避免列表过于庞大难以检索。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F61",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},2017,"是否有 PyTorch 相关的书籍推荐？","目前主要收录代码仓库和教程。如果有 PyTorch 相关的书籍，欢迎通过 Pull Request 贡献到 Books 部分，以丰富该分类。","https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch\u002Fissues\u002F115",[]]