[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bharathgs--Awesome-pytorch-list":3,"tool-bharathgs--Awesome-pytorch-list":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":86,"forks":87,"last_commit_at":88,"license":85,"difficulty_score":89,"env_os":90,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":98,"github_topics":99,"view_count":23,"oss_zip_url":85,"oss_zip_packed_at":85,"status":16,"created_at":120,"updated_at":121,"faqs":122,"releases":152},3399,"bharathgs\u002FAwesome-pytorch-list","Awesome-pytorch-list","A comprehensive list of PyTorch-related content on GitHub, such as different models, implementations, helper libraries, tutorials, etc.","Awesome-pytorch-list 是一个专为 PyTorch 开发者打造的开源资源导航库，旨在解决深度学习领域资源分散、难以检索的痛点。它系统性地整理了 GitHub 上成千上万个与 PyTorch 相关的高质量项目，涵盖从基础库、辅助工具到前沿模型实现的方方面面。\n\n无论是自然语言处理（如机器翻译、语音合成）、计算机视觉，还是概率生成模型，用户都能在这里找到对应的成熟框架与代码示例，例如 AllenNLP、Fairseq 以及各类经典论文的复现版本。此外，它还收录了丰富的教程、书籍和技术会议资料，帮助使用者快速上手或深入钻研。\n\n这份清单特别适合 AI 研究人员、算法工程师以及正在学习深度学习的学生使用。对于希望避免重复造轮子、快速寻找可靠代码基底的开发者而言，Awesome-pytorch-list 
提供了极高的参考价值。其核心亮点在于分类清晰、更新及时且社区活跃，将零散的生态资源汇聚成一张详尽的“地图”，让用户能高效定位所需工具，显著提升研发与学习效率。","Awesome-Pytorch-list\n========================\n\n![pytorch-logo-dark](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbharathgs_Awesome-pytorch-list_readme_c47227853472.png)\n\n\u003Cp align=\"center\">\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fstars-12400+-brightgreen.svg?style=flat\"\u002F>\n\t\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\">\n\u003C\u002Fp>\n\n## Contents\n- [Pytorch & related libraries](#pytorch--related-libraries)\n  - [NLP & Speech Processing](#nlp--Speech-Processing)\n  - [Computer Vision](#cv)\n  - [Probabilistic\u002FGenerative Libraries](#probabilisticgenerative-libraries)\n  - [Other libraries](#other-libraries)\n- [Tutorials, books & examples](#tutorials-books--examples)\n- [Paper implementations](#paper-implementations)\n- [Talks & Conferences](#talks--conferences)\n- [Pytorch elsewhere](#pytorch-elsewhere)\n\n## Pytorch & related libraries\n\n1. [pytorch](http:\u002F\u002Fpytorch.org): Tensors and Dynamic neural networks in Python with strong GPU acceleration.\n2. [Captum](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fcaptum): Model interpretability and understanding for PyTorch.\n\n### NLP & Speech Processing:\n\n1. [pytorch text](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftext): Torch text related contents.  \n2. [pytorch-seq2seq](https:\u002F\u002Fgithub.com\u002FIBM\u002Fpytorch-seq2seq): A framework for sequence-to-sequence (seq2seq) models implemented in PyTorch.  \n3. [anuvada](https:\u002F\u002Fgithub.com\u002FSandeep42\u002Fanuvada): Interpretable Models for NLP using PyTorch.\n4. [audio](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Faudio): simple audio I\u002FO for pytorch.\n5. [loop](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Floop): A method to generate speech across multiple speakers\n6. 
[fairseq-py](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffairseq-py): Facebook AI Research Sequence-to-Sequence Toolkit written in Python.\n7. [speech](https:\u002F\u002Fgithub.com\u002Fawni\u002Fspeech): PyTorch ASR Implementation.\n8. [OpenNMT-py](https:\u002F\u002Fgithub.com\u002FOpenNMT\u002FOpenNMT-py): Open-Source Neural Machine Translation in PyTorch http:\u002F\u002Fopennmt.net \n9. [neuralcoref](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fneuralcoref): State-of-the-art coreference resolution based on neural nets and spaCy huggingface.co\u002Fcoref\n10. [sentiment-discovery](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fsentiment-discovery): Unsupervised Language Modeling at scale for robust sentiment classification.\n11. [MUSE](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FMUSE): A library for Multilingual Unsupervised or Supervised word Embeddings\n12. [nmtpytorch](https:\u002F\u002Fgithub.com\u002Flium-lst\u002Fnmtpytorch): Neural Machine Translation Framework in PyTorch.\n13. [pytorch-wavenet](https:\u002F\u002Fgithub.com\u002Fvincentherrmann\u002Fpytorch-wavenet): An implementation of WaveNet with fast generation\n14. [Tacotron-pytorch](https:\u002F\u002Fgithub.com\u002Fsoobinseo\u002FTacotron-pytorch): Tacotron: Towards End-to-End Speech Synthesis.\n15. [AllenNLP](https:\u002F\u002Fgithub.com\u002Fallenai\u002Fallennlp): An open-source NLP research library, built on PyTorch.\n16. [PyTorch-NLP](https:\u002F\u002Fgithub.com\u002FPetrochukM\u002FPyTorch-NLP): Text utilities and datasets for PyTorch pytorchnlp.readthedocs.io\n17. [quick-nlp](https:\u002F\u002Fgithub.com\u002Foutcastofmusic\u002Fquick-nlp): Pytorch NLP library based on FastAI. \n18. [TTS](https:\u002F\u002Fgithub.com\u002Fmozilla\u002FTTS): Deep learning for Text2Speech\n19. [LASER](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FLASER): Language-Agnostic SEntence Representations\n20. 
[pyannote-audio](https:\u002F\u002Fgithub.com\u002Fpyannote\u002Fpyannote-audio): Neural building blocks for speaker diarization: speech activity detection, speaker change detection, speaker embedding\n21. [gensen](https:\u002F\u002Fgithub.com\u002FMaluuba\u002Fgensen): Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning.\n22. [translate](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftranslate): Translate - a PyTorch Language Library.\n23. [espnet](https:\u002F\u002Fgithub.com\u002Fespnet\u002Fespnet): End-to-End Speech Processing Toolkit espnet.github.io\u002Fespnet\n24. [pythia](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpythia): A software suite for Visual Question Answering\n25. [UnsupervisedMT](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FUnsupervisedMT): Phrase-Based & Neural Unsupervised Machine Translation.\n26. [jiant](https:\u002F\u002Fgithub.com\u002Fjsalt18-sentence-repl\u002Fjiant): The jiant sentence representation learning toolkit. \n27. [BERT-PyTorch](https:\u002F\u002Fgithub.com\u002Fcodertimo\u002FBERT-pytorch): Pytorch implementation of Google AI's 2018 BERT, with simple annotation\n28. [InferSent](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FInferSent): Sentence embeddings (InferSent) and training code for NLI.\n29. [uis-rnn](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fuis-rnn): This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper Fully Supervised Speaker Diarization. arxiv.org\u002Fabs\u002F1810.04719 \n30. [flair](https:\u002F\u002Fgithub.com\u002Fzalandoresearch\u002Fflair): A very simple framework for state-of-the-art Natural Language Processing (NLP)\n31. [pytext](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytext): A natural language modeling framework based on PyTorch fb.me\u002Fpytextdocs\n32. 
[voicefilter](https:\u002F\u002Fgithub.com\u002Fmindslab-ai\u002Fvoicefilter): Unofficial PyTorch implementation of Google AI's VoiceFilter system http:\u002F\u002Fswpark.me\u002Fvoicefilter\n33. [BERT-NER](https:\u002F\u002Fgithub.com\u002Fkamalkraj\u002FBERT-NER): Pytorch-Named-Entity-Recognition-with-BERT. \n34. [transfer-nlp](https:\u002F\u002Fgithub.com\u002Ffeedly\u002Ftransfer-nlp): NLP library designed for flexible research and development\n35. [texar-pytorch](https:\u002F\u002Fgithub.com\u002Fasyml\u002Ftexar-pytorch): Toolkit for Machine Learning and Text Generation, in PyTorch texar.io\n36. [pytorch-kaldi](https:\u002F\u002Fgithub.com\u002Fmravanelli\u002Fpytorch-kaldi): pytorch-kaldi is a project for developing state-of-the-art DNN\u002FRNN hybrid speech recognition systems. The DNN part is managed by pytorch, while feature extraction, label computation, and decoding are performed with the kaldi toolkit.\n37. [NeMo](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNeMo): Neural Modules: a toolkit for conversational AI nvidia.github.io\u002FNeMo\n38. [pytorch-struct](https:\u002F\u002Fgithub.com\u002Fharvardnlp\u002Fpytorch-struct): A library of vectorized implementations of core structured prediction algorithms (HMM, Dep Trees, CKY, ...)\n39. [espresso](https:\u002F\u002Fgithub.com\u002Ffreewym\u002Fespresso): Espresso: A Fast End-to-End Neural Speech Recognition Toolkit\n40. [transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers): huggingface Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. huggingface.co\u002Ftransformers\n41. [reformer-pytorch](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Freformer-pytorch): Reformer, the efficient Transformer, in Pytorch\n42. [torch-metrics](https:\u002F\u002Fgithub.com\u002Fenochkan\u002Ftorch-metrics): Metrics for model evaluation in pytorch\n43. 
[speechbrain](https:\u002F\u002Fgithub.com\u002Fspeechbrain\u002Fspeechbrain): SpeechBrain is an open-source and all-in-one speech toolkit based on PyTorch.\n44. [Backprop](https:\u002F\u002Fgithub.com\u002Fbackprop-ai\u002Fbackprop): Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.\n\n### CV:\n\n1. [pytorch vision](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision): Datasets, Transforms and Models specific to Computer Vision.\n2. [pt-styletransfer](https:\u002F\u002Fgithub.com\u002Ftymokvo\u002Fpt-styletransfer): Neural style transfer as a class in PyTorch.\n3. [OpenFacePytorch](https:\u002F\u002Fgithub.com\u002Fthnkim\u002FOpenFacePytorch):  PyTorch module to use OpenFace's nn4.small2.v1.t7 model\n4. [img_classification_pk_pytorch](https:\u002F\u002Fgithub.com\u002Ffelixgwu\u002Fimg_classification_pk_pytorch): Quickly comparing your image classification models with the state-of-the-art models (such as DenseNet, ResNet, ...)\n5. [SparseConvNet](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FSparseConvNet): Submanifold sparse convolutional networks.\n6. [Convolution_LSTM_pytorch](https:\u002F\u002Fgithub.com\u002Fautoman000\u002FConvolution_LSTM_pytorch): A multi-layer convolution LSTM module\n7. [face-alignment](https:\u002F\u002Fgithub.com\u002F1adrianb\u002Fface-alignment): :fire: 2D and 3D Face alignment library built using pytorch adrianbulat.com\n8. [pytorch-semantic-segmentation](https:\u002F\u002Fgithub.com\u002FZijunDeng\u002Fpytorch-semantic-segmentation): PyTorch for Semantic Segmentation.\n9. [RoIAlign.pytorch](https:\u002F\u002Fgithub.com\u002Flongcw\u002FRoIAlign.pytorch): This is a PyTorch version of RoIAlign. This implementation is based on crop_and_resize and supports both forward and backward on CPU and GPU.\n10. [pytorch-cnn-finetune](https:\u002F\u002Fgithub.com\u002Fcreafz\u002Fpytorch-cnn-finetune): Fine-tune pretrained Convolutional Neural Networks with PyTorch.\n11. 
[detectorch](https:\u002F\u002Fgithub.com\u002Fignacio-rocco\u002Fdetectorch): Detectorch - detectron for PyTorch\n12. [Augmentor](https:\u002F\u002Fgithub.com\u002Fmdbloice\u002FAugmentor): Image augmentation library in Python for machine learning. http:\u002F\u002Faugmentor.readthedocs.io\n13. [s2cnn](https:\u002F\u002Fgithub.com\u002Fjonas-koehler\u002Fs2cnn): \nThis library contains a PyTorch implementation of the SO(3) equivariant CNNs for spherical signals (e.g. omnidirectional cameras, signals on the globe)\n14. [TorchCV](https:\u002F\u002Fgithub.com\u002Fdonnyyou\u002Ftorchcv): A PyTorch-Based Framework for Deep Learning in Computer Vision. \n15. [maskrcnn-benchmark](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmaskrcnn-benchmark): Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.\n16. [image-classification-mobile](https:\u002F\u002Fgithub.com\u002Fosmr\u002Fimgclsmob): Collection of classification models pretrained on the ImageNet-1K.\n17. [medicaltorch](https:\u002F\u002Fgithub.com\u002Fperone\u002Fmedicaltorch): A medical imaging framework for Pytorch http:\u002F\u002Fmedicaltorch.readthedocs.io\n18. [albumentations](https:\u002F\u002Fgithub.com\u002Falbu\u002Falbumentations): Fast image augmentation library.\n19. [kornia](https:\u002F\u002Fgithub.com\u002Farraiyopensource\u002Fkornia): Differentiable computer vision library.\n20. [pytorch-text-recognition](https:\u002F\u002Fgithub.com\u002Fs3nh\u002Fpytorch-text-recognition): Text recognition combo - CRAFT + CRNN.\n21. [facenet-pytorch](https:\u002F\u002Fgithub.com\u002Ftimesler\u002Ffacenet-pytorch): Pretrained Pytorch face detection and recognition models ported from davidsandberg\u002Ffacenet.\n22. [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2): Detectron2 is FAIR's next-generation research platform for object detection and segmentation.\n23. 
[vedaseg](https:\u002F\u002Fgithub.com\u002FMedia-Smart\u002Fvedaseg): A semantic segmentation framework by pytorch\n24. [ClassyVision](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FClassyVision): An end-to-end PyTorch framework for image and video classification.\n25. [detecto](https:\u002F\u002Fgithub.com\u002Falankbi\u002Fdetecto): Computer vision in Python with less than 10 lines of code\n26. [pytorch3d](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch3d): PyTorch3D is FAIR's library of reusable components for deep learning with 3D data pytorch3d.org\n27. [MMDetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection): MMDetection is an open source object detection toolbox, a part of the [OpenMMLab project](https:\u002F\u002Fopen-mmlab.github.io\u002F).\n28. [neural-dream](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fneural-dream): A PyTorch implementation of the DeepDream algorithm. Creates dream-like hallucinogenic visuals.\n29. [FlashTorch](https:\u002F\u002Fgithub.com\u002FMisaOgura\u002Fflashtorch): Visualization toolkit for neural networks in PyTorch!\n30. [Lucent](https:\u002F\u002Fgithub.com\u002Fgreentfrapp\u002Flucent): Tensorflow and OpenAI Clarity's Lucid adapted for PyTorch.\n31. [MMDetection3D](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection3d): MMDetection3D is OpenMMLab's next-generation platform for general 3D object detection, a part of the [OpenMMLab project](https:\u002F\u002Fopen-mmlab.github.io\u002F).\n32. [MMSegmentation](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmsegmentation): MMSegmentation is a semantic segmentation toolbox and benchmark, a part of the [OpenMMLab project](https:\u002F\u002Fopen-mmlab.github.io\u002F).\n33. [MMEditing](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmediting): MMEditing is an image and video editing toolbox, a part of the [OpenMMLab project](https:\u002F\u002Fopen-mmlab.github.io\u002F).\n34. 
[MMAction2](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmaction2): MMAction2 is OpenMMLab's next generation action understanding toolbox and benchmark, a part of the [OpenMMLab project](https:\u002F\u002Fopen-mmlab.github.io\u002F).\n35. [MMPose](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmpose): MMPose is a pose estimation toolbox and benchmark, a part of the [OpenMMLab project](https:\u002F\u002Fopen-mmlab.github.io\u002F).\n36. [lightly](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly) - Lightly is a computer vision framework for self-supervised learning.\n37. [RoMa](https:\u002F\u002Fnaver.github.io\u002Froma\u002F): a lightweight and efficient library to deal with 3D rotations.\n\n\n### Probabilistic\u002FGenerative Libraries:\n\n1. [ptstat](https:\u002F\u002Fgithub.com\u002Fstepelu\u002Fptstat): Probabilistic Programming and Statistical Inference in PyTorch\n2. [pyro](https:\u002F\u002Fgithub.com\u002Fuber\u002Fpyro): Deep universal probabilistic programming with Python and PyTorch http:\u002F\u002Fpyro.ai\n3. [probtorch](https:\u002F\u002Fgithub.com\u002Fprobtorch\u002Fprobtorch): Probabilistic Torch is a library for deep generative models that extends PyTorch.\n4. [paysage](https:\u002F\u002Fgithub.com\u002Fdrckf\u002Fpaysage): Unsupervised learning and generative models in python\u002Fpytorch.\n5. [pyvarinf](https:\u002F\u002Fgithub.com\u002Fctallec\u002Fpyvarinf): Python package facilitating the use of Bayesian Deep Learning methods with Variational Inference for PyTorch. \n6. [pyprob](https:\u002F\u002Fgithub.com\u002Fprobprog\u002Fpyprob): A PyTorch-based library for probabilistic programming and inference compilation.\n7. [mia](https:\u002F\u002Fgithub.com\u002Fspring-epfl\u002Fmia): A library for running membership inference attacks against ML models. \n8. [pro_gan_pytorch](https:\u002F\u002Fgithub.com\u002Fakanimax\u002Fpro_gan_pytorch): ProGAN package implemented as an extension of PyTorch nn.Module.\n9. 
[botorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fbotorch): Bayesian optimization in PyTorch\n\n### Other libraries:\n\n1. [pytorch extras](https:\u002F\u002Fgithub.com\u002Fmrdrozdov\u002Fpytorch-extras): Some extra features for pytorch.    \n2. [functional zoo](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Ffunctional-zoo): PyTorch, unlike lua torch, has autograd in its core, so using the modular structure of torch.nn modules is not necessary; one can easily allocate needed Variables and write a function that utilizes them, which is sometimes more convenient. This repo contains model definitions in this functional way, with pretrained weights for some models. \n3. [torch-sampling](https:\u002F\u002Fgithub.com\u002Fncullen93\u002Ftorchsample): This package provides a set of transforms and data structures for sampling from in-memory or out-of-memory data. \n4. [torchcraft-py](https:\u002F\u002Fgithub.com\u002Fdeepcraft\u002Ftorchcraft-py): Python wrapper for TorchCraft, a bridge between Torch and StarCraft for AI research.\n5. [aorun](https:\u002F\u002Fgithub.com\u002Framon-oliveira\u002Faorun): Aorun intends to be a Keras with PyTorch as backend. \n6. [logger](https:\u002F\u002Fgithub.com\u002Foval-group\u002Flogger): A simple logger for experiments.\n7. [PyTorch-docset](https:\u002F\u002Fgithub.com\u002Fiamaziz\u002FPyTorch-docset): PyTorch docset! use with Dash, Zeal, Velocity, or LovelyDocs.  \n8. [convert_torch_to_pytorch](https:\u002F\u002Fgithub.com\u002Fclcarwin\u002Fconvert_torch_to_pytorch): Convert torch t7 model to pytorch model and source.\n9. [pretrained-models.pytorch](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch): The goal of this repo is to help reproduce research paper results.  \n10. [pytorch_fft](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Fpytorch_fft): PyTorch wrapper for FFTs\n11. [caffe_to_torch_to_pytorch](https:\u002F\u002Fgithub.com\u002Ffanq15\u002Fcaffe_to_torch_to_pytorch)\n12. 
[pytorch-extension](https:\u002F\u002Fgithub.com\u002Fsniklaus\u002Fpytorch-extension): This is a CUDA extension for PyTorch which computes the Hadamard product of two tensors.\n13. [tensorboard-pytorch](https:\u002F\u002Fgithub.com\u002Flanpa\u002Ftensorboard-pytorch): This module saves PyTorch tensors in tensorboard format for inspection. Currently supports scalar, image, audio, histogram features in tensorboard.\n14. [gpytorch](https:\u002F\u002Fgithub.com\u002Fjrg365\u002Fgpytorch): GPyTorch is a Gaussian Process library, implemented using PyTorch. It is designed for creating flexible and modular Gaussian Process models with ease, so that you don't have to be an expert to use GPs.\n15. [spotlight](https:\u002F\u002Fgithub.com\u002Fmaciejkula\u002Fspotlight): Deep recommender models using PyTorch.\n16. [pytorch-cns](https:\u002F\u002Fgithub.com\u002Fawentzonline\u002Fpytorch-cns): Compressed Network Search with PyTorch\n17. [pyinn](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fpyinn): CuPy fused PyTorch neural networks ops\n18. [inferno](https:\u002F\u002Fgithub.com\u002Fnasimrahaman\u002Finferno): A utility library around PyTorch\n19. [pytorch-fitmodule](https:\u002F\u002Fgithub.com\u002Fhenryre\u002Fpytorch-fitmodule): Super simple fit method for PyTorch modules\n20. [inferno-sklearn](https:\u002F\u002Fgithub.com\u002Fdnouri\u002Finferno): A scikit-learn compatible neural network library that wraps pytorch.\n21. [pytorch-caffe-darknet-convert](https:\u002F\u002Fgithub.com\u002Fmarvis\u002Fpytorch-caffe-darknet-convert): convert between pytorch, caffe prototxt\u002Fweights and darknet cfg\u002Fweights\n22. [pytorch2caffe](https:\u002F\u002Fgithub.com\u002Flongcw\u002Fpytorch2caffe): Convert PyTorch model to Caffemodel\n23. [pytorch-tools](https:\u002F\u002Fgithub.com\u002Fnearai\u002Fpytorch-tools): Tools for PyTorch\n24. [sru](https:\u002F\u002Fgithub.com\u002Ftaolei87\u002Fsru): Training RNNs as Fast as CNNs (arxiv.org\u002Fabs\u002F1709.02755)\n25. 
[torch2coreml](https:\u002F\u002Fgithub.com\u002Fprisma-ai\u002Ftorch2coreml): Torch7 -> CoreML\n26. [PyTorch-Encoding](https:\u002F\u002Fgithub.com\u002Fzhanghang1989\u002FPyTorch-Encoding): PyTorch Deep Texture Encoding Network http:\u002F\u002Fhangzh.com\u002FPyTorch-Encoding\n27. [pytorch-ctc](https:\u002F\u002Fgithub.com\u002Fryanleary\u002Fpytorch-ctc): PyTorch-CTC is an implementation of CTC (Connectionist Temporal Classification) beam search decoding for PyTorch. C++ code borrowed liberally from TensorFlow with some improvements to increase flexibility.\n28. [candlegp](https:\u002F\u002Fgithub.com\u002Ft-vi\u002Fcandlegp): Gaussian Processes in Pytorch. \n29. [dpwa](https:\u002F\u002Fgithub.com\u002Floudinthecloud\u002Fdpwa): Distributed Learning by Pair-Wise Averaging. \n30. [dni-pytorch](https:\u002F\u002Fgithub.com\u002Fkoz4k\u002Fdni-pytorch): Decoupled Neural Interfaces using Synthetic Gradients for PyTorch.\n31. [skorch](https:\u002F\u002Fgithub.com\u002Fdnouri\u002Fskorch): A scikit-learn compatible neural network library that wraps pytorch\n32. [ignite](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fignite): Ignite is a high-level library to help with training neural networks in PyTorch.\n33. [Arnold](https:\u002F\u002Fgithub.com\u002Fglample\u002FArnold): Arnold - DOOM Agent\n34. [pytorch-mcn](https:\u002F\u002Fgithub.com\u002Falbanie\u002Fpytorch-mcn): Convert models from MatConvNet to PyTorch\n35. [simple-faster-rcnn-pytorch](https:\u002F\u002Fgithub.com\u002Fchenyuntc\u002Fsimple-faster-rcnn-pytorch): A simplified implementation of Faster R-CNN with competitive performance.\n36. [generative_zoo](https:\u002F\u002Fgithub.com\u002FDL-IT\u002Fgenerative_zoo): generative_zoo is a repository that provides working implementations of some generative models in PyTorch.\n37. [pytorchviz](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fpytorchviz): A small package to create visualizations of PyTorch execution graphs. \n38. 
[cogitare](https:\u002F\u002Fgithub.com\u002Fcogitare-ai\u002Fcogitare): Cogitare - A Modern, Fast, and Modular Deep Learning and Machine Learning framework in Python. \n39. [pydlt](https:\u002F\u002Fgithub.com\u002Fdmarnerides\u002Fpydlt): PyTorch based Deep Learning Toolbox\n40. [semi-supervised-pytorch](https:\u002F\u002Fgithub.com\u002Fwohlert\u002Fsemi-supervised-pytorch): Implementations of different VAE-based semi-supervised and generative models in PyTorch. \n41. [pytorch_cluster](https:\u002F\u002Fgithub.com\u002Frusty1s\u002Fpytorch_cluster): PyTorch Extension Library of Optimised Graph Cluster Algorithms.\n42. [neural-assembly-compiler](https:\u002F\u002Fgithub.com\u002Faditya-khant\u002Fneural-assembly-compiler): A neural assembly compiler for pyTorch based on adaptive-neural-compilation. \n43. [caffemodel2pytorch](https:\u002F\u002Fgithub.com\u002Fvadimkantorov\u002Fcaffemodel2pytorch): Convert Caffe models to PyTorch.\n44. [extension-cpp](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fextension-cpp): C++ extensions in PyTorch\n45. [pytoune](https:\u002F\u002Fgithub.com\u002FGRAAL-Research\u002Fpytoune): A Keras-like framework and utilities for PyTorch\n46. [jetson-reinforcement](https:\u002F\u002Fgithub.com\u002Fdusty-nv\u002Fjetson-reinforcement): Deep reinforcement learning libraries for NVIDIA Jetson TX1\u002FTX2 with PyTorch, OpenAI Gym, and Gazebo robotics simulator.\n47. [matchbox](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fmatchbox): Write PyTorch code at the level of individual examples, then run it efficiently on minibatches.\n48. [torch-two-sample](https:\u002F\u002Fgithub.com\u002Fjosipd\u002Ftorch-two-sample): A PyTorch library for two-sample tests\n49. [pytorch-summary](https:\u002F\u002Fgithub.com\u002Fsksq96\u002Fpytorch-summary): Model summary in PyTorch similar to `model.summary()` in Keras\n50. [mpl.pytorch](https:\u002F\u002Fgithub.com\u002FBelBES\u002Fmpl.pytorch): Pytorch implementation of MaxPoolingLoss.\n51. 
[scVI-dev](https:\u002F\u002Fgithub.com\u002FYosefLab\u002FscVI-dev): Development branch of the scVI project in PyTorch\n52. [apex](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fapex): An Experimental PyTorch Extension (will be deprecated at a later point)\n53. [ELF](https:\u002F\u002Fgithub.com\u002Fpytorch\u002FELF): ELF: a platform for game research.\n54. [Torchlite](https:\u002F\u002Fgithub.com\u002FEKami\u002FTorchlite): A high level library on top of (not only) Pytorch\n55. [joint-vae](https:\u002F\u002Fgithub.com\u002FSchlumberger\u002Fjoint-vae): Pytorch implementation of JointVAE, a framework for disentangling continuous and discrete factors of variation\n56. [SLM-Lab](https:\u002F\u002Fgithub.com\u002Fkengz\u002FSLM-Lab): Modular Deep Reinforcement Learning framework in PyTorch.\n57. [bindsnet](https:\u002F\u002Fgithub.com\u002FHananel-Hazan\u002Fbindsnet): A Python package used for simulating spiking neural networks (SNNs) on CPUs or GPUs using PyTorch\n58. [pro_gan_pytorch](https:\u002F\u002Fgithub.com\u002Fakanimax\u002Fpro_gan_pytorch): ProGAN package implemented as an extension of PyTorch nn.Module\n59. [pytorch_geometric](https:\u002F\u002Fgithub.com\u002Frusty1s\u002Fpytorch_geometric): Geometric Deep Learning Extension Library for PyTorch\n60. [torchplus](https:\u002F\u002Fgithub.com\u002Fknighton\u002Ftorchplus): Implements the + operator on PyTorch modules, returning sequences.\n61. [lagom](https:\u002F\u002Fgithub.com\u002Fzuoxingdong\u002Flagom): lagom: A light PyTorch infrastructure to quickly prototype reinforcement learning algorithms.\n62. [torchbearer](https:\u002F\u002Fgithub.com\u002Fecs-vlc\u002Ftorchbearer): torchbearer: A model training library for researchers using PyTorch.\n63. [pytorch-maml-rl](https:\u002F\u002Fgithub.com\u002Ftristandeleu\u002Fpytorch-maml-rl): Reinforcement Learning with Model-Agnostic Meta-Learning in Pytorch. \n64. 
[NALU](https:\u002F\u002Fgithub.com\u002Fbharathgs\u002FNALU): Basic pytorch implementation of NAC\u002FNALU from the Neural Arithmetic Logic Units paper by Trask et al. arxiv.org\u002Fpdf\u002F1808.00508.pdf\n66. [QuCumber](https:\u002F\u002Fgithub.com\u002FPIQuIL\u002FQuCumber): Neural Network Many-Body Wavefunction Reconstruction\n67. [magnet](https:\u002F\u002Fgithub.com\u002FMagNet-DL\u002Fmagnet): Deep Learning Projects that Build Themselves http:\u002F\u002Fmagnet-dl.readthedocs.io\u002F\n68. [opencv_transforms](https:\u002F\u002Fgithub.com\u002Fjbohnslav\u002Fopencv_transforms): OpenCV implementation of Torchvision's image augmentations\n69. [fastai](https:\u002F\u002Fgithub.com\u002Ffastai\u002Ffastai): The fast.ai deep learning library, lessons, and tutorials\n70. [pytorch-dense-correspondence](https:\u002F\u002Fgithub.com\u002FRobotLocomotion\u002Fpytorch-dense-correspondence): Code for \"Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation\" arxiv.org\u002Fpdf\u002F1806.08756.pdf\n71. [colorization-pytorch](https:\u002F\u002Fgithub.com\u002Frichzhang\u002Fcolorization-pytorch): PyTorch reimplementation of Interactive Deep Colorization richzhang.github.io\u002Fideepcolor\n72. [beauty-net](https:\u002F\u002Fgithub.com\u002Fcms-flash\u002Fbeauty-net): A simple, flexible, and extensible template for PyTorch. It's beautiful.\n73. [OpenChem](https:\u002F\u002Fgithub.com\u002FMariewelt\u002FOpenChem): OpenChem: Deep Learning toolkit for Computational Chemistry and Drug Design Research mariewelt.github.io\u002FOpenChem \n74. [torchani](https:\u002F\u002Fgithub.com\u002Faiqm\u002Ftorchani): Accurate Neural Network Potential on PyTorch aiqm.github.io\u002Ftorchani\n75. [PyTorch-LBFGS](https:\u002F\u002Fgithub.com\u002Fhjmshi\u002FPyTorch-LBFGS): A PyTorch implementation of L-BFGS.\n76. 
[gpytorch](https:\u002F\u002Fgithub.com\u002Fcornellius-gp\u002Fgpytorch): A highly efficient and modular implementation of Gaussian Processes in PyTorch.\n77. [hessian](https:\u002F\u002Fgithub.com\u002Fmariogeiger\u002Fhessian): hessian in pytorch. \n78. [vel](https:\u002F\u002Fgithub.com\u002FMillionIntegrals\u002Fvel): Velocity in deep-learning research.\n79. [nonechucks](https:\u002F\u002Fgithub.com\u002Fmsamogh\u002Fnonechucks): Skip bad items in your PyTorch DataLoader, use Transforms as Filters, and more!\n80. [torchstat](https:\u002F\u002Fgithub.com\u002FSwall0w\u002Ftorchstat): Model analyzer in PyTorch.\n81. [QNNPACK](https:\u002F\u002Fgithub.com\u002Fpytorch\u002FQNNPACK): Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators.\n82. [torchdiffeq](https:\u002F\u002Fgithub.com\u002Frtqichen\u002Ftorchdiffeq): Differentiable ODE solvers with full GPU support and O(1)-memory backpropagation.\n83. [redner](https:\u002F\u002Fgithub.com\u002FBachiLi\u002Fredner): A differentiable Monte Carlo path tracer\n84. [pixyz](https:\u002F\u002Fgithub.com\u002Fmasa-su\u002Fpixyz): a library for developing deep generative models in a more concise, intuitive and extendable way. \n85. [euclidesdb](https:\u002F\u002Fgithub.com\u002Fperone\u002Feuclidesdb): A multi-model machine learning feature embedding database http:\u002F\u002Feuclidesdb.readthedocs.io \n86. [pytorch2keras](https:\u002F\u002Fgithub.com\u002Fnerox8664\u002Fpytorch2keras): Convert PyTorch dynamic graph to Keras model.\n87. [salad](https:\u002F\u002Fgithub.com\u002Fdomainadaptation\u002Fsalad): Semi-Supervised Learning and Domain Adaptation.\n88. [netharn](https:\u002F\u002Fgithub.com\u002FErotemic\u002Fnetharn): Parameterized fit and prediction harnesses for pytorch.\n89. [dgl](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fdgl): Python package built to ease deep learning on graph, on top of existing DL frameworks. http:\u002F\u002Fdgl.ai. \n90. 
[gandissect](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002Fgandissect): Pytorch-based tools for visualizing and understanding the neurons of a GAN. gandissect.csail.mit.edu \n91. [delira](https:\u002F\u002Fgithub.com\u002Fjustusschock\u002Fdelira): Lightweight framework for fast prototyping and training deep neural networks in medical imaging delira.rtfd.io\n92. [mushroom](https:\u002F\u002Fgithub.com\u002FAIRLab-POLIMI\u002Fmushroom): Python library for Reinforcement Learning experiments.\n93. [Xlearn](https:\u002F\u002Fgithub.com\u002Fthuml\u002FXlearn): Transfer Learning Library\n94. [geoopt](https:\u002F\u002Fgithub.com\u002Fferrine\u002Fgeoopt): Riemannian Adaptive Optimization Methods with pytorch optim\n95. [vegans](https:\u002F\u002Fgithub.com\u002Funit8co\u002Fvegans): A library providing various existing GANs in PyTorch.\n96. [torchgeometry](https:\u002F\u002Fgithub.com\u002Farraiyopensource\u002Ftorchgeometry): TGM: PyTorch Geometry\n97. [AdverTorch](https:\u002F\u002Fgithub.com\u002FBorealisAI\u002Fadvertorch): A Toolbox for Adversarial Robustness (attack\u002Fdefense\u002Ftraining) Research\n98. [AdaBound](https:\u002F\u002Fgithub.com\u002FLuolc\u002FAdaBound): An optimizer that trains as fast as Adam and as good as SGD.a\n99. [fenchel-young-losses](https:\u002F\u002Fgithub.com\u002Fmblondel\u002Ffenchel-young-losses): Probabilistic classification in PyTorch\u002FTensorFlow\u002Fscikit-learn with Fenchel-Young losses\n100. [pytorch-OpCounter](https:\u002F\u002Fgithub.com\u002FLyken17\u002Fpytorch-OpCounter): Count the FLOPs of your PyTorch model.\n101. [Tor10](https:\u002F\u002Fgithub.com\u002Fkaihsin\u002FTor10): A Generic Tensor-Network library that is designed for quantum simulation, base on the pytorch.\n102. [Catalyst](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst): High-level utils for PyTorch DL & RL research. It was developed with a focus on reproducibility, fast experimentation and code\u002Fideas reusing. 
Being able to research\u002Fdevelop something new, rather than write another regular train loop.\n103. [Ax](https:\u002F\u002Fgithub.com\u002Ffacebook\u002FAx): Adaptive Experimentation Platform\n104. [pywick](https:\u002F\u002Fgithub.com\u002Fachaiah\u002Fpywick): High-level batteries-included neural network training library for Pytorch\n105. [torchgpipe](https:\u002F\u002Fgithub.com\u002Fkakaobrain\u002Ftorchgpipe): A GPipe implementation in PyTorch torchgpipe.readthedocs.io\n106. [hub](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fhub): Pytorch Hub is a pre-trained model repository designed to facilitate research reproducibility.\n107. [pytorch-lightning](https:\u002F\u002Fgithub.com\u002FwilliamFalcon\u002Fpytorch-lightning): Rapid research framework for Pytorch. The researcher's version of keras.\n108. [Tor10](https:\u002F\u002Fgithub.com\u002Fkaihsin\u002FTor10): A Generic Tensor-Network library that is designed for quantum simulation, base on the pytorch.\n109. [tensorwatch](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftensorwatch): Debugging, monitoring and visualization for Deep Learning and Reinforcement Learning from Microsoft Research.\n110. [wavetorch](https:\u002F\u002Fgithub.com\u002Ffancompute\u002Fwavetorch): Numerically solving and backpropagating through the wave equation arxiv.org\u002Fabs\u002F1904.12831\n111. [diffdist](https:\u002F\u002Fgithub.com\u002Fag14774\u002Fdiffdist): diffdist is a python library for pytorch. It extends the default functionality of torch.autograd and adds support for differentiable communication between processes. \n112. [torchprof](https:\u002F\u002Fgithub.com\u002Fawwong1\u002Ftorchprof): A minimal dependency library for layer-by-layer profiling of Pytorch models.\n113. [osqpth](https:\u002F\u002Fgithub.com\u002Foxfordcontrol\u002Fosqpth): The differentiable OSQP solver layer for PyTorch. \n114. [mctorch](https:\u002F\u002Fgithub.com\u002Fmctorch\u002Fmctorch): A manifold optimization library for deep learning. 
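The torchgpipe entry above implements GPipe-style pipeline parallelism, whose core trick is splitting a mini-batch into micro-batches so that successive model partitions can work concurrently. A minimal pure-Python sketch of just the micro-batch bookkeeping (no real devices or gradients; all names here are illustrative, not the torchgpipe API):

```python
# Toy sketch of GPipe-style micro-batch pipelining. Stages are plain
# functions standing in for model partitions; a real implementation
# overlaps stages across devices, which this sketch deliberately omits.

def split_into_microbatches(batch, n):
    """Split a batch (list of samples) into at most n micro-batches."""
    size = (len(batch) + n - 1) // n  # ceil(len / n)
    return [batch[i:i + size] for i in range(0, len(batch), size)]

def pipeline_forward(stages, batch, n_microbatches):
    """Run each micro-batch through every stage in order, then
    re-assemble the micro-batch outputs into one batch."""
    outputs = []
    for mb in split_into_microbatches(batch, n_microbatches):
        for stage in stages:
            mb = stage(mb)
        outputs.append(mb)
    return [x for mb in outputs for x in mb]

# Two toy "partitions": add 1, then double.
stages = [
    lambda mb: [x + 1 for x in mb],
    lambda mb: [x * 2 for x in mb],
]
result = pipeline_forward(stages, [1, 2, 3, 4], n_microbatches=2)
# result == [4, 6, 8, 10]
```

The payoff in the real library is that while stage 2 processes micro-batch 1, stage 1 can already start on micro-batch 2, keeping all partitions busy.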
115. [pytorch-hessian-eigenthings](https://github.com/noahgolmant/pytorch-hessian-eigenthings): Efficient PyTorch Hessian eigendecomposition using the Hessian-vector product and stochastic power iteration.
116. [MinkowskiEngine](https://github.com/StanfordVL/MinkowskiEngine): An auto-diff library for generalized sparse convolutions and high-dimensional sparse tensors.
117. [pytorch-cpp-rl](https://github.com/Omegastick/pytorch-cpp-rl): PyTorch C++ reinforcement learning
118. [pytorch-toolbelt](https://github.com/BloodAxe/pytorch-toolbelt): PyTorch extensions for fast R&D prototyping and Kaggle farming
119. [argus-tensor-stream](https://github.com/Fonbet/argus-tensor-stream): A library for real-time video stream decoding to CUDA memory tensorstream.argus-ai.com
120. [macarico](https://github.com/hal3/macarico): Learning to search in PyTorch
121. [rlpyt](https://github.com/astooke/rlpyt): Reinforcement learning in PyTorch
122. [pywarm](https://github.com/blue-season/pywarm): A cleaner way to build neural networks for PyTorch. blue-season.github.io/pywarm
123. [learn2learn](https://github.com/learnables/learn2learn): PyTorch meta-learning framework for researchers http://learn2learn.net
124. [torchbeast](https://github.com/facebookresearch/torchbeast): A PyTorch platform for distributed RL
125. [higher](https://github.com/facebookresearch/higher): A PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual training steps.
126. [Torchelie](https://github.com/Vermeille/Torchelie/): Torchélie is a set of utility functions, layers, losses, models, trainers and other things for PyTorch. torchelie.readthedocs.org
127. [CrypTen](https://github.com/facebookresearch/CrypTen): A privacy-preserving machine learning framework written in PyTorch that allows researchers and developers to train models on encrypted data. CrypTen currently supports secure multi-party computation as its encryption mechanism.
128. [cvxpylayers](https://github.com/cvxgrp/cvxpylayers): A Python library for constructing differentiable convex optimization layers in PyTorch
129. [RepDistiller](https://github.com/HobbitLong/RepDistiller): Contrastive Representation Distillation (CRD) and a benchmark of recent knowledge distillation methods
130. [kaolin](https://github.com/NVIDIAGameWorks/kaolin): PyTorch library aimed at accelerating 3D deep learning research
131. [PySNN](https://github.com/BasBuller/PySNN): Efficient spiking neural network framework built on top of PyTorch for GPU acceleration.
132. [sparktorch](https://github.com/dmmiller612/sparktorch): Train and run PyTorch models on Apache Spark.
133. [pytorch-metric-learning](https://github.com/KevinMusgrave/pytorch-metric-learning): The easiest way to use metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
134. [autonomous-learning-library](https://github.com/cpnota/autonomous-learning-library): A PyTorch library for building deep reinforcement learning agents.
135. [flambe](https://github.com/asappresearch/flambe): An ML framework to accelerate research and its path to production. flambe.ai
136. [pytorch-optimizer](https://github.com/jettify/pytorch-optimizer): Collection of modern optimization algorithms for PyTorch, including AccSGD, AdaBound, AdaMod, DiffGrad, Lamb, RAdam and Yogi.
137. [PyTorch-VAE](https://github.com/AntixK/PyTorch-VAE): A collection of Variational Autoencoders (VAEs) in PyTorch.
138. [ray](https://github.com/ray-project/ray): A fast and simple framework for building and running distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library. ray.io
139. [Pytorch Geometric Temporal](https://github.com/benedekrozemberczki/pytorch_geometric_temporal): A temporal extension library for PyTorch Geometric
140. [Poutyne](https://github.com/GRAAL-Research/poutyne): A Keras-like framework for PyTorch that handles much of the boilerplate code needed to train neural networks.
141. [Pytorch-Toolbox](https://github.com/PistonY/torch-toolbox): A toolbox project for PyTorch, aiming to make your PyTorch code easier to write, more readable and concise.
142. [Pytorch-contrib](https://github.com/pytorch/contrib): Contains reviewed implementations of ideas from recent machine learning papers.
143. [EfficientNet PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch): An op-for-op PyTorch reimplementation of EfficientNet, along with pre-trained models and examples.
144. [PyTorch/XLA](https://github.com/pytorch/xla): A Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework and Cloud TPUs.
145. [webdataset](https://github.com/tmbdev/webdataset): A PyTorch Dataset (IterableDataset) implementation providing efficient access to datasets stored in POSIX tar archives.
146. [volksdep](https://github.com/Media-Smart/volksdep): An open-source toolbox for deploying and accelerating PyTorch, ONNX and TensorFlow models with TensorRT.
147. [PyTorch-StudioGAN](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN): A PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze new ideas.
148. [torchdrift](https://github.com/torchdrift/torchdrift/): Drift detection library
149. [accelerate](https://github.com/huggingface/accelerate): A simple way to train and use PyTorch models with multi-GPU, TPU and mixed precision
150. [lightning-transformers](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
151. [Flower](https://flower.dev/): A unified approach to federated learning, analytics, and evaluation. It allows you to federate any machine learning workload.
152. [lightning-flash](https://github.com/PyTorchLightning/lightning-flash): Flash is a collection of tasks for fast prototyping, baselining and fine-tuning scalable deep learning models, built on PyTorch Lightning.
153. [Pytorch Geometric Signed Directed](https://github.com/SherylHYX/pytorch_geometric_signed_directed): A signed and directed extension library for PyTorch Geometric.
154. [Koila](https://github.com/rentruewang/koila): A simple wrapper around PyTorch that prevents CUDA out-of-memory issues.
155. [Renate](https://github.com/awslabs/renate): A library for real-world continual learning.
156. [ANEE](https://github.com/abkmystery/ANEE) – Adaptive Neural Execution Engine for PyTorch transformers.
Provides per-token dynamic layer skipping, profiler-based gating, and KV-cache-safe sparse inference.

## Tutorials, books, & examples

1. **[Practical Pytorch](https://github.com/spro/practical-pytorch)**: Tutorials explaining different RNN models
2. [DeepLearningForNLPInPytorch](https://pytorch.org/tutorials/beginner/deep_learning_nlp_tutorial.html): An IPython Notebook tutorial on deep learning, with an emphasis on natural language processing.
3. [pytorch-tutorial](https://github.com/yunjey/pytorch-tutorial): Tutorial for researchers to learn deep learning with PyTorch.
4. [pytorch-exercises](https://github.com/keon/pytorch-exercises): pytorch-exercises collection.
5. [pytorch tutorials](https://github.com/pytorch/tutorials): Various PyTorch tutorials.
6. [pytorch examples](https://github.com/pytorch/examples): A repository showcasing examples of using PyTorch
7. [pytorch practice](https://github.com/napsternxg/pytorch-practice): Some example scripts on PyTorch.
8. [pytorch mini tutorials](https://github.com/vinhkhuc/PyTorch-Mini-Tutorials): Minimal tutorials for PyTorch adapted from Alec Radford's Theano tutorials.
9. [pytorch text classification](https://github.com/xiayandi/Pytorch_text_classification): A simple implementation of CNN-based text classification in PyTorch
10. [cats vs dogs](https://github.com/desimone/pytorch-cat-vs-dogs): Example of network fine-tuning in PyTorch for the Kaggle competition Dogs vs. Cats Redux: Kernels Edition. Currently #27 (0.05074) on the leaderboard.
11. [convnet](https://github.com/eladhoffer/convNet.pytorch): A complete training example for deep convolutional networks on various datasets (ImageNet, CIFAR10, CIFAR100, MNIST).
12. [pytorch-generative-adversarial-networks](https://github.com/mailmahee/pytorch-generative-adversarial-networks): A simple generative adversarial network (GAN) using PyTorch.
13. [pytorch containers](https://github.com/amdegroot/pytorch-containers): This repository aims to help former Torchies more seamlessly transition to the "Containerless" world of PyTorch by providing a list of PyTorch implementations of Torch Table Layers.
14. [T-SNE in pytorch](https://github.com/cemoody/topicsne): t-SNE experiments in PyTorch
15. [AAE_pytorch](https://github.com/fducau/AAE_pytorch): Adversarial Autoencoders (with PyTorch).
16. [Kind_PyTorch_Tutorial](https://github.com/GunhoChoi/Kind_PyTorch_Tutorial): Kind PyTorch tutorial for beginners.
17. [pytorch-poetry-gen](https://github.com/justdark/pytorch-poetry-gen): A char-RNN based on PyTorch.
18. [pytorch-REINFORCE](https://github.com/JamesChuanggg/pytorch-REINFORCE): PyTorch implementation of REINFORCE. This repo supports both continuous and discrete environments in OpenAI Gym.
19. **[PyTorch-Tutorial](https://github.com/MorvanZhou/PyTorch-Tutorial)**: Build your neural network easily and fast https://morvanzhou.github.io/tutorials/
20. [pytorch-intro](https://github.com/joansj/pytorch-intro): A couple of scripts to illustrate how to do CNNs and RNNs in PyTorch
21. [pytorch-classification](https://github.com/bearpaw/pytorch-classification): A unified framework for the image classification task on CIFAR-10/100 and ImageNet.
22. [pytorch_notebooks - hardmaru](https://github.com/hardmaru/pytorch_notebooks): Random tutorials created in NumPy and PyTorch.
23. [pytorch_tutoria-quick](https://github.com/soravux/pytorch_tutorial): Quick PyTorch introduction and tutorial. Targets computer vision, graphics and machine learning researchers eager to try a new framework.
24. [Pytorch_fine_tuning_Tutorial](https://github.com/Spandan-Madan/Pytorch_fine_tuning_Tutorial): A short tutorial on performing fine-tuning or transfer learning in PyTorch.
25. [pytorch_exercises](https://github.com/Kyubyong/pytorch_exercises): pytorch-exercises
26. [traffic-sign-detection](https://github.com/soumith/traffic-sign-detection-homework): nyu-cv-fall-2017 example
27. [mss_pytorch](https://github.com/Js-Mim/mss_pytorch): Singing Voice Separation via Recurrent Inference and Skip-Filtering Connections - PyTorch implementation. Demo: js-mim.github.io/mss_pytorch
28. [DeepNLP-models-Pytorch](https://github.com/DSKSD/DeepNLP-models-Pytorch): PyTorch implementations of various deep NLP models from cs-224n (Stanford University: NLP with Deep Learning)
29. [Mila introductory tutorials](https://github.com/mila-udem/welcome_tutorials): Various tutorials given for welcoming new students at MILA.
30. [pytorch.rl.learning](https://github.com/moskomule/pytorch.rl.learning): For learning reinforcement learning using PyTorch.
31. [minimal-seq2seq](https://github.com/keon/seq2seq): Minimal Seq2Seq model with attention for neural machine translation in PyTorch
32. [tensorly-notebooks](https://github.com/JeanKossaifi/tensorly-notebooks): Tensor methods in Python with TensorLy tensorly.github.io/dev
33. [pytorch_bits](https://github.com/jpeg729/pytorch_bits): Time-series prediction related examples.
34. [skip-thoughts](https://github.com/sanyam5/skip-thoughts): An implementation of Skip-Thought Vectors in PyTorch.
35. [video-caption-pytorch](https://github.com/xiadingZ/video-caption-pytorch): PyTorch code for video captioning.
36. [Capsule-Network-Tutorial](https://github.com/higgsfield/Capsule-Network-Tutorial): Easy-to-follow Capsule Network tutorial in PyTorch.
37. [code-of-learn-deep-learning-with-pytorch](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch): Code for the book "Learn Deep Learning with PyTorch" item.jd.com/17915495606.html
38. [RL-Adventure](https://github.com/higgsfield/RL-Adventure): Easy-to-follow, step-by-step Deep Q-Learning tutorial in PyTorch with clean, readable code.
39. [accelerated_dl_pytorch](https://github.com/hpcgarage/accelerated_dl_pytorch): Accelerated Deep Learning with PyTorch at Jupyter Day Atlanta II.
40. [RL-Adventure-2](https://github.com/higgsfield/RL-Adventure-2): PyTorch 0.4 tutorial of: actor critic / proximal policy optimization / ACER / DDPG / twin dueling DDPG / soft actor critic / generative adversarial imitation learning / hindsight experience replay
41. [Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch)](https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f)
42. [adversarial-autoencoders-with-pytorch](https://blog.paperspace.com/adversarial-autoencoders-with-pytorch/)
43. [transfer learning using pytorch](https://medium.com/@vishnuvig/transfer-learning-using-pytorch-4c3475f4495)
44. [how-to-implement-a-yolo-object-detector-in-pytorch](https://blog.paperspace.com/how-to-implement-a-yolo-object-detector-in-pytorch/)
45. [pytorch-for-recommenders-101](http://blog.fastforwardlabs.com/2018/04/10/pytorch-for-recommenders-101.html)
46. [pytorch-for-numpy-users](https://github.com/wkentaro/pytorch-for-numpy-users)
47. [PyTorch Tutorial](http://www.pytorchtutorial.com/): PyTorch tutorials in Chinese.
48. [grokking-pytorch](https://github.com/Kaixhin/grokking-pytorch): The Hitchhiker's Guide to PyTorch
49. [PyTorch-Deep-Learning-Minicourse](https://github.com/Atcold/PyTorch-Deep-Learning-Minicourse): Minicourse in deep learning with PyTorch.
50. [pytorch-custom-dataset-examples](https://github.com/utkuozbulak/pytorch-custom-dataset-examples): Some custom dataset examples for PyTorch
51. [Multiplicative LSTM for sequence-based Recommenders](https://florianwilhelm.info/2018/08/multiplicative_LSTM_for_sequence_based_recos/)
52. [deeplearning.ai-pytorch](https://github.com/furkanu/deeplearning.ai-pytorch): PyTorch implementations of Coursera's Deep Learning (deeplearning.ai) specialization.
53. [MNIST_Pytorch_python_and_capi](https://github.com/tobiascz/MNIST_Pytorch_python_and_capi): An example of how to train an MNIST network in Python and run it in C++ with PyTorch 1.0
54. [torch_light](https://github.com/ne7ermore/torch_light): Tutorials and examples covering reinforcement learning, NLP and CV
55. [portrain-gan](https://github.com/dribnet/portrain-gan): Torch code to decode (and almost encode) latents from art-DCGAN's Portrait GAN.
56. [mri-analysis-pytorch](https://github.com/omarsar/mri-analysis-pytorch): MRI analysis using PyTorch and MedicalTorch
57. [cifar10-fast](https://github.com/davidcpage/cifar10-fast): Demonstration of training a small ResNet on CIFAR10 to 94% test accuracy in 79 seconds, as described in this [blog series](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet/).
58. [Intro to Deep Learning with PyTorch](https://in.udacity.com/course/deep-learning-pytorch--ud188): A free course by Udacity and Facebook, with a good intro to PyTorch and an interview with Soumith Chintala, one of the original authors of PyTorch.
59. [pytorch-sentiment-analysis](https://github.com/bentrevett/pytorch-sentiment-analysis): Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
60. [pytorch-image-models](https://github.com/rwightman/pytorch-image-models): PyTorch image models, scripts, pretrained weights -- (SE)ResNet/ResNeXt, DPN, EfficientNet, MobileNet-V3/V2/V1, MNASNet, Single-Path NAS, FBNet, and more.
61. [CIFAR-ZOO](https://github.com/BIGBALLON/CIFAR-ZOO): PyTorch implementations of multiple CNN architectures and improvement methods with state-of-the-art results.
62. [d2l-pytorch](https://github.com/dsgiitr/d2l-pytorch): An attempt to port the code of the Dive into Deep Learning, Berkeley STAT 157 (Spring 2019) textbook into PyTorch.
63. [thinking-in-tensors-writing-in-pytorch](https://github.com/stared/thinking-in-tensors-writing-in-pytorch): Thinking in tensors, writing in PyTorch (a hands-on deep learning intro).
64. [NER-BERT-pytorch](https://github.com/lemonhu/NER-BERT-pytorch): PyTorch solution of the named entity recognition task using Google AI's pre-trained BERT model.
65. [pytorch-sync-batchnorm-example](https://github.com/dougsouza/pytorch-sync-batchnorm-example): How to use Cross Replica / Synchronized Batchnorm in PyTorch.
66. [SentimentAnalysis](https://github.com/barissayil/SentimentAnalysis): Sentiment analysis neural network trained by fine-tuning BERT on the Stanford Sentiment Treebank, thanks to [Hugging Face](https://huggingface.co/transformers/)'s Transformers library.
67. [pytorch-cpp](https://github.com/prabhuomkar/pytorch-cpp): C++ implementations of PyTorch tutorials for deep learning researchers (based on the Python tutorials from [pytorch-tutorial](https://github.com/yunjey/pytorch-tutorial)).
68. [Deep Learning with PyTorch: Zero to GANs](https://jovian.ml/aakashns/collections/deep-learning-with-pytorch): Interactive and coding-focused tutorial series introducing deep learning with PyTorch ([video](https://www.youtube.com/watch?v=GIsg-ZUy0MY)).
69. [Deep Learning with PyTorch](https://www.manning.com/books/deep-learning-with-pytorch): Teaches you how to implement deep learning algorithms with Python and PyTorch; the book includes a case study: building an algorithm capable of detecting malignant lung tumors using CT scans.
70. [Serverless Machine Learning in Action with PyTorch and AWS](https://www.manning.com/books/serverless-machine-learning-in-action): A guide to bringing your experimental PyTorch machine learning code to production using serverless capabilities from major cloud providers like AWS, Azure, or GCP.
71. [LabML NN](https://github.com/lab-ml/nn): A collection of PyTorch implementations of neural network architectures and algorithms with side-by-side notes.
72. [Run your PyTorch Example Federated with Flower](https://github.com/adap/flower/tree/main/examples/pytorch_from_centralized_to_federated): Demonstrates how an existing centralized PyTorch machine learning project can be federated with Flower. The CIFAR-10 dataset is used together with a convolutional neural network (CNN).
[The Math Behind Artificial Intelligence](https:\u002F\u002Fwww.freecodecamp.org\u002Fnews\u002Fthe-math-behind-artificial-intelligence-book): A free FreeCodeCamp book teaching the math behind AI in plain English from an engineering point of view. It covers linear algebra, calculus, probability & statistics, and optimization theory with analogies, real-life applications, and Python code examples.\n\n## Paper implementations\n\n1. [google_evolution](https:\u002F\u002Fgithub.com\u002Fneuralix\u002Fgoogle_evolution): This implements one of result networks from Large-scale evolution of image classifiers by Esteban Real, et. al. \n2. [pyscatwave](https:\u002F\u002Fgithub.com\u002Fedouardoyallon\u002Fpyscatwave): Fast Scattering Transform with CuPy\u002FPyTorch,read the paper [here](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.08961)\n3. [scalingscattering](https:\u002F\u002Fgithub.com\u002Fedouardoyallon\u002Fscalingscattering): Scaling The Scattering Transform : Deep Hybrid Networks.  \n4. [deep-auto-punctuation](https:\u002F\u002Fgithub.com\u002Fepisodeyang\u002Fdeep-auto-punctuation): a pytorch implementation of auto-punctuation learned character by character.  \n5. [Realtime_Multi-Person_Pose_Estimation](https:\u002F\u002Fgithub.com\u002Ftensorboy\u002Fpytorch_Realtime_Multi-Person_Pose_Estimation): This is a pytorch version of Realtime_Multi-Person_Pose_Estimation, origin code is [here](https:\u002F\u002Fgithub.com\u002FZheC\u002FRealtime_Multi-Person_Pose_Estimation) .\n6. [PyTorch-value-iteration-networks](https:\u002F\u002Fgithub.com\u002Fonlytailei\u002FPyTorch-value-iteration-networks): PyTorch implementation of the Value Iteration Networks (NIPS '16) paper  \n7. [pytorch_Highway](https:\u002F\u002Fgithub.com\u002Fanalvikingur\u002Fpytorch_Highway): Highway network implemented in pytorch.\n8. [pytorch_NEG_loss](https:\u002F\u002Fgithub.com\u002Fanalvikingur\u002Fpytorch_NEG_loss): NEG loss implemented in pytorch.  \n9. 
[pytorch_RVAE](https:\u002F\u002Fgithub.com\u002Fanalvikingur\u002Fpytorch_RVAE): Recurrent Variational Autoencoder that generates sequential data implemented in pytorch.   \n10. [pytorch_TDNN](https:\u002F\u002Fgithub.com\u002Fanalvikingur\u002Fpytorch_TDNN): Time Delayed NN implemented in pytorch.  \n11. [eve.pytorch](https:\u002F\u002Fgithub.com\u002Fmoskomule\u002Feve.pytorch): An implementation of Eve Optimizer, proposed in Imploving Stochastic Gradient Descent with Feedback, Koushik and Hayashi, 2016.  \n12. [e2e-model-learning](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Fe2e-model-learning): Task-based end-to-end model learning.  \n13. [pix2pix-pytorch](https:\u002F\u002Fgithub.com\u002Fmrzhu-cool\u002Fpix2pix-pytorch): PyTorch implementation of \"Image-to-Image Translation Using Conditional Adversarial Networks\".   \n14. [Single Shot MultiBox Detector](https:\u002F\u002Fgithub.com\u002Famdegroot\u002Fssd.pytorch): A PyTorch Implementation of Single Shot MultiBox Detector.  \n15. [DiscoGAN](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FDiscoGAN-pytorch): PyTorch implementation of \"Learning to Discover Cross-Domain Relations with Generative Adversarial Networks\"  \n16. [official DiscoGAN implementation](https:\u002F\u002Fgithub.com\u002FSKTBrain\u002FDiscoGAN): Official implementation of \"Learning to Discover Cross-Domain Relations with Generative Adversarial Networks\".  \n17. [pytorch-es](https:\u002F\u002Fgithub.com\u002Fatgambardella\u002Fpytorch-es): This is a PyTorch implementation of [Evolution Strategies](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.03864) .  \n18. [piwise](https:\u002F\u002Fgithub.com\u002Fbodokaiser\u002Fpiwise): Pixel-wise segmentation on VOC2012 dataset using pytorch.  \n19. [pytorch-dqn](https:\u002F\u002Fgithub.com\u002Ftransedward\u002Fpytorch-dqn): Deep Q-Learning Network in pytorch.  \n20. 
[neuraltalk2-pytorch](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002Fneuraltalk2.pytorch): image captioning model in pytorch(finetunable cnn in branch with_finetune)\n21. [vnet.pytorch](https:\u002F\u002Fgithub.com\u002Fmattmacy\u002Fvnet.pytorch): A Pytorch implementation for V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation.    \n22. [pytorch-fcn](https:\u002F\u002Fgithub.com\u002Fwkentaro\u002Fpytorch-fcn): PyTorch implementation of Fully Convolutional Networks.  \n23. [WideResNets](https:\u002F\u002Fgithub.com\u002Fxternalz\u002FWideResNet-pytorch): WideResNets for CIFAR10\u002F100 implemented in PyTorch. This implementation requires less GPU memory than what is required by the official Torch implementation: https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fwide-residual-networks .\n24. [pytorch_highway_networks](https:\u002F\u002Fgithub.com\u002Fc0nn3r\u002Fpytorch_highway_networks): Highway networks implemented in PyTorch.  \n25. [pytorch-NeuCom](https:\u002F\u002Fgithub.com\u002Fypxie\u002Fpytorch-NeuCom): Pytorch implementation of DeepMind's differentiable neural computer paper.  \n26. [captionGen](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FcaptionGen): Generate captions for an image using PyTorch.  \n27. [AnimeGAN](https:\u002F\u002Fgithub.com\u002Fjayleicn\u002FanimeGAN): A simple PyTorch Implementation of Generative Adversarial Networks, focusing on anime face drawing. \n28. [Cnn-text classification](https:\u002F\u002Fgithub.com\u002FShawn1993\u002Fcnn-text-classification-pytorch): This is the implementation of Kim's Convolutional Neural Networks for Sentence Classification paper in PyTorch.  \n29. [deepspeech2](https:\u002F\u002Fgithub.com\u002FSeanNaren\u002Fdeepspeech.pytorch): Implementation of DeepSpeech2 using Baidu Warp-CTC. Creates a network based on the DeepSpeech2 architecture, trained with the CTC activation function.\n30. 
30. [seq2seq](https://github.com/MaximumEntropy/Seq2Seq-PyTorch): Implementations of Sequence to Sequence (Seq2Seq) models in PyTorch.
31. [Asynchronous Advantage Actor-Critic in PyTorch](https://github.com/rarilurelo/pytorch_a3c): A PyTorch implementation of A3C as described in "Asynchronous Methods for Deep Reinforcement Learning". Since PyTorch makes it easy to share memory across processes, asynchronous methods like A3C are straightforward to implement.
32. [densenet](https://github.com/bamos/densenet.pytorch): A PyTorch implementation of the DenseNet-BC architecture as described in the paper "Densely Connected Convolutional Networks" by G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten. This implementation reaches a CIFAR-10+ error rate of 4.77 with a 100-layer DenseNet-BC and a growth rate of 12. The official implementation and links to many other third-party implementations are available in the liuzhuang13/DenseNet repo on GitHub.
33. [nninit](https://github.com/alykhantejani/nninit): Weight initialization schemes for PyTorch nn.Modules. A port of the popular nninit for Torch7 by @kaixhin.
34. [faster rcnn](https://github.com/longcw/faster_rcnn_pytorch): A PyTorch implementation of Faster R-CNN, based mainly on py-faster-rcnn and TFFRCNN. For details about R-CNN please refer to the paper "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" by Shaoqing Ren, Kaiming He, Ross Girshick and Jian Sun.
35. [doomnet](https://github.com/akolishchak/doom-net-pytorch): PyTorch version of Doom-net, implementing some RL models in the ViZDoom environment.
36. [flownet](https://github.com/ClementPinard/FlowNetPytorch): Pytorch implementation of FlowNet by Dosovitskiy et al.
37. [squeezenet](https://github.com/gsp-27/pytorch_Squeezenet): Implementation of SqueezeNet in pytorch. Pretrained models on CIFAR10, plus block connections, are planned.
38. [WassersteinGAN](https://github.com/martinarjovsky/WassersteinGAN): WassersteinGAN in pytorch.
39. [optnet](https://github.com/locuslab/optnet): PyTorch source code by Brandon Amos and J. Zico Kolter to reproduce the experiments in their paper "OptNet: Differentiable Optimization as a Layer in Neural Networks".
40. [qp solver](https://github.com/locuslab/qpth): A fast and differentiable QP solver for PyTorch, by Brandon Amos and J. Zico Kolter.
41. [Continuous Deep Q-Learning with Model-based Acceleration](https://github.com/ikostrikov/pytorch-naf): Reimplementation of "Continuous Deep Q-Learning with Model-based Acceleration".
42. [Learning to learn by gradient descent by gradient descent](https://github.com/ikostrikov/pytorch-meta-optimizer): PyTorch implementation of "Learning to learn by gradient descent by gradient descent".
43. [fast-neural-style](https://github.com/darkstar112358/fast-neural-style): pytorch implementation of fast-neural-style. The model uses the method described in [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://arxiv.org/abs/1603.08155) along with Instance Normalization.
44. [PytorchNeuralStyleTransfer](https://github.com/leongatys/PytorchNeuralStyleTransfer): Implementation of Neural Style Transfer in Pytorch.
45. [Fast Neural Style for Image Style Transform by Pytorch](https://github.com/bengxy/FastNeuralStyle): Fast neural style for image style transfer, in Pytorch.
46. [neural style transfer](https://github.com/alexis-jacq/Pytorch-Tutorials): An introduction to PyTorch through the Neural-Style algorithm (https://arxiv.org/abs/1508.06576) developed by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge.
47. [VIN_PyTorch_Visdom](https://github.com/zuoxingdong/VIN_PyTorch_Visdom): PyTorch implementation of Value Iteration Networks (VIN): clean, simple and modular, with visualization in Visdom.
48. [YOLO2](https://github.com/longcw/yolo2-pytorch): YOLOv2 in PyTorch.
49. [attention-transfer](https://github.com/szagoruyko/attention-transfer): Attention transfer in pytorch; read the paper [here](https://arxiv.org/abs/1612.03928).
50. [SVHNClassifier](https://github.com/potterhsu/SVHNClassifier-PyTorch): A PyTorch implementation of [Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks](https://arxiv.org/pdf/1312.6082.pdf).
51. [pytorch-deform-conv](https://github.com/oeway/pytorch-deform-conv): PyTorch implementation of Deformable Convolution.
52. [BEGAN-pytorch](https://github.com/carpedm20/BEGAN-pytorch): PyTorch implementation of [BEGAN](https://arxiv.org/abs/1703.10717): Boundary Equilibrium Generative Adversarial Networks.
53. [treelstm.pytorch](https://github.com/dasguptar/treelstm.pytorch): Tree LSTM implementation in PyTorch.
54. [AGE](https://github.com/DmitryUlyanov/AGE): Code for the paper "Adversarial Generator-Encoder Networks" by Dmitry Ulyanov, Andrea Vedaldi and Victor Lempitsky, which can be found [here](http://sites.skoltech.ru/app/data/uploads/sites/25/2017/04/AGE.pdf).
55. [ResNeXt.pytorch](https://github.com/prlz77/ResNeXt.pytorch): Reproduces ResNeXt ("Aggregated Residual Transformations for Deep Neural Networks") with pytorch.
56. [pytorch-rl](https://github.com/jingweiz/pytorch-rl): Deep Reinforcement Learning with pytorch & visdom.
57. [Deep-Leafsnap](https://github.com/sujithv28/Deep-Leafsnap): LeafSnap replicated using deep neural networks, to compare accuracy against traditional computer vision methods.
58. [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix): PyTorch implementation for both unpaired and paired image-to-image translation.
59. [A3C-PyTorch](https://github.com/onlytailei/A3C-PyTorch): PyTorch implementation of Advantage Async Actor-Critic (A3C) algorithms.
60. [pytorch-value-iteration-networks](https://github.com/kentsommer/pytorch-value-iteration-networks): Pytorch implementation of Value Iteration Networks (NIPS 2016 best paper).
61. [PyTorch-Style-Transfer](https://github.com/zhanghang1989/PyTorch-Style-Transfer): PyTorch implementation of "Multi-style Generative Network for Real-time Transfer".
62. [pytorch-deeplab-resnet](https://github.com/isht7/pytorch-deeplab-resnet): pytorch-deeplab-resnet model.
63. [pointnet.pytorch](https://github.com/fxia22/pointnet.pytorch): pytorch implementation of "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" (https://arxiv.org/abs/1612.00593).
64. **[pytorch-playground](https://github.com/aaron-xichen/pytorch-playground): Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet).**
65. [pytorch-dnc](https://github.com/jingweiz/pytorch-dnc): Neural Turing Machine (NTM) and Differentiable Neural Computer (DNC) with pytorch & visdom.
66. [pytorch_image_classifier](https://github.com/jinfagang/pytorch_image_classifier): Minimal but practical image classifier pipeline using pytorch; finetunes ResNet18 to 99% accuracy on a small custom dataset.
67. [mnist-svhn-transfer](https://github.com/yunjey/mnist-svhn-transfer): PyTorch implementation of CycleGAN and SGAN for domain transfer (minimal).
68. [pytorch-yolo2](https://github.com/marvis/pytorch-yolo2): pytorch-yolo2.
69. [dni](https://github.com/andrewliao11/dni.pytorch): Implementation of "Decoupled Neural Interfaces using Synthetic Gradients" in Pytorch.
70. [wgan-gp](https://github.com/caogang/wgan-gp): A pytorch implementation of the paper "Improved Training of Wasserstein GANs".
71. [pytorch-seq2seq-intent-parsing](https://github.com/spro/pytorch-seq2seq-intent-parsing): Intent parsing and slot filling in PyTorch with seq2seq + attention.
72. [pyTorch_NCE](https://github.com/demelin/pyTorch_NCE): An implementation of the Noise Contrastive Estimation algorithm for pyTorch. Working, yet not very efficient.
73. [molencoder](https://github.com/cxhernandez/molencoder): Molecular AutoEncoder in PyTorch.
74. [GAN-weight-norm](https://github.com/stormraiser/GAN-weight-norm): Code for "On the Effects of Batch and Weight Normalization in Generative Adversarial Networks".
75. [lgamma](https://github.com/rachtsingh/lgamma): Implementations of polygamma, lgamma, and beta functions for PyTorch.
76. [bigBatch](https://github.com/eladhoffer/bigBatch): Code used to generate the results appearing in "Train longer, generalize better: closing the generalization gap in large batch training of neural networks".
77. [rl_a3c_pytorch](https://github.com/dgriff777/rl_a3c_pytorch): Reinforcement learning with an implementation of A3C LSTM for Atari 2600.
78. [pytorch-retraining](https://github.com/ahirner/pytorch-retraining): Transfer learning shootout for PyTorch's model zoo (torchvision).
79. [nmp_qc](https://github.com/priba/nmp_qc): Neural Message Passing for Computer Vision.
80. [grad-cam](https://github.com/jacobgil/pytorch-grad-cam): Pytorch implementation of Grad-CAM.
81. [pytorch-trpo](https://github.com/mjacar/pytorch-trpo): PyTorch implementation of Trust Region Policy Optimization (TRPO).
82. [pytorch-explain-black-box](https://github.com/jacobgil/pytorch-explain-black-box): PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation".
83. [vae_vpflows](https://github.com/jmtomczak/vae_vpflows): Code in PyTorch for the convex combination linear IAF and the Householder Flow, J.M. Tomczak & M. Welling (https://jmtomczak.github.io/deebmed.html).
84. [relational-networks](https://github.com/kimhc6028/relational-networks): Pytorch implementation of "A simple neural network module for relational reasoning" (Relational Networks, https://arxiv.org/pdf/1706.01427.pdf).
85. [vqa.pytorch](https://github.com/Cadene/vqa.pytorch): Visual Question Answering in Pytorch.
86. [end-to-end-negotiator](https://github.com/facebookresearch/end-to-end-negotiator): "Deal or No Deal? End-to-End Learning for Negotiation Dialogues".
87. [odin-pytorch](https://github.com/ShiyuLiang/odin-pytorch): Principled Detection of Out-of-Distribution Examples in Neural Networks.
88. [FreezeOut](https://github.com/ajbrock/FreezeOut): Accelerate neural net training by progressively freezing layers.
89. [ARAE](https://github.com/jakezhaojb/ARAE): Code for the paper "Adversarially Regularized Autoencoders for Generating Discrete Structures" by Zhao, Kim, Zhang, Rush and LeCun.
90. [forward-thinking-pytorch](https://github.com/kimhc6028/forward-thinking-pytorch): Pytorch implementation of "Forward Thinking: Building and Training Neural Networks One Layer at a Time" (https://arxiv.org/pdf/1706.02480.pdf).
91. [context_encoder_pytorch](https://github.com/BoyuanJiang/context_encoder_pytorch): PyTorch implementation of Context Encoders.
92. [attention-is-all-you-need-pytorch](https://github.com/jadore801120/attention-is-all-you-need-pytorch): A PyTorch implementation of the Transformer model in "Attention is All You Need".
93. [OpenFacePytorch](https://github.com/thnkim/OpenFacePytorch): PyTorch module to use OpenFace's nn4.small2.v1.t7 model.
94. [neural-combinatorial-rl-pytorch](https://github.com/pemami4911/neural-combinatorial-rl-pytorch): PyTorch implementation of "Neural Combinatorial Optimization with Reinforcement Learning".
95. [pytorch-nec](https://github.com/mjacar/pytorch-nec): PyTorch implementation of Neural Episodic Control (NEC).
96. [seq2seq.pytorch](https://github.com/eladhoffer/seq2seq.pytorch): Sequence-to-Sequence learning using PyTorch.
97. [Pytorch-Sketch-RNN](https://github.com/alexis-jacq/Pytorch-Sketch-RNN): A pytorch implementation of arxiv.org/abs/1704.03477.
98. [pytorch-pruning](https://github.com/jacobgil/pytorch-pruning): PyTorch implementation of [1611.06440] "Pruning Convolutional Neural Networks for Resource Efficient Inference".
99. [DrQA](https://github.com/hitvoice/DrQA): A pytorch implementation of "Reading Wikipedia to Answer Open-Domain Questions".
100. [YellowFin_Pytorch](https://github.com/JianGoForIt/YellowFin_Pytorch): Auto-tuning momentum SGD optimizer.
101. [samplernn-pytorch](https://github.com/deepsound-project/samplernn-pytorch): PyTorch implementation of "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model".
102. [AEGeAN](https://github.com/tymokvo/AEGeAN): Deeper DCGAN with AE stabilization.
103. [pytorch-SRResNet](https://github.com/twtygqyy/pytorch-SRResNet): pytorch implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" (arXiv:1609.04802v2).
104. [vsepp](https://github.com/fartashf/vsepp): Code for the paper "VSE++: Improved Visual Semantic Embeddings".
105. [Pytorch-DPPO](https://github.com/alexis-jacq/Pytorch-DPPO): Pytorch implementation of Distributed Proximal Policy Optimization (arxiv.org/abs/1707.02286).
106. [UNIT](https://github.com/mingyuliutw/UNIT): PyTorch implementation of the coupled VAE-GAN algorithm for Unsupervised Image-to-Image Translation.
107. [efficient_densenet_pytorch](https://github.com/gpleiss/efficient_densenet_pytorch): A memory-efficient implementation of DenseNets.
108. [tsn-pytorch](https://github.com/yjxiong/tsn-pytorch): Temporal Segment Networks (TSN) in PyTorch.
109. [SMASH](https://github.com/ajbrock/SMASH): An experimental technique for efficiently exploring neural architectures.
110. [pytorch-retinanet](https://github.com/kuangliu/pytorch-retinanet): RetinaNet in PyTorch.
111. [biogans](https://github.com/aosokin/biogans): Implementation supporting the ICCV 2017 paper "GANs for Biological Image Synthesis".
112. [Semantic Image Synthesis via Adversarial Learning](https://github.com/woozzu/dong_iccv_2017): A PyTorch implementation of the ICCV 2017 paper "Semantic Image Synthesis via Adversarial Learning".
113. [fmpytorch](https://github.com/jmhessel/fmpytorch): A PyTorch implementation of a Factorization Machine module in cython.
114. [ORN](https://github.com/ZhouYanzhao/ORN): A PyTorch implementation of the CVPR 2017 paper "Oriented Response Networks".
115. [pytorch-maml](https://github.com/katerakelly/pytorch-maml): PyTorch implementation of MAML (arxiv.org/abs/1703.03400).
116. [pytorch-generative-model-collections](https://github.com/znxlwm/pytorch-generative-model-collections): Collection of generative models in Pytorch.
117. [vqa-winner-cvprw-2017](https://github.com/markdtw/vqa-winner-cvprw-2017): Pytorch implementation of the winner of the VQA Challenge Workshop at CVPR'17.
118. [tacotron_pytorch](https://github.com/r9y9/tacotron_pytorch): PyTorch implementation of the Tacotron speech synthesis model.
119. [pspnet-pytorch](https://github.com/Lextal/pspnet-pytorch): PyTorch implementation of the PSPNet segmentation network.
120. [LM-LSTM-CRF](https://github.com/LiyuanLucasLiu/LM-LSTM-CRF): "Empower Sequence Labeling with Task-Aware Language Model" (http://arxiv.org/abs/1709.04109).
121. [face-alignment](https://github.com/1adrianb/face-alignment): Pytorch implementation of the ICCV 2017 paper "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)".
122. [DepthNet](https://github.com/ClementPinard/DepthNet): PyTorch DepthNet training on the Still Box dataset.
123. [EDSR-PyTorch](https://github.com/thstkdgus35/EDSR-PyTorch): PyTorch version of the paper "Enhanced Deep Residual Networks for Single Image Super-Resolution" (CVPRW 2017).
124. [e2c-pytorch](https://github.com/ethanluoyc/e2c-pytorch): Embed to Control implementation in PyTorch.
125. [3D-ResNets-PyTorch](https://github.com/kenshohara/3D-ResNets-PyTorch): 3D ResNets for Action Recognition.
126. [bandit-nmt](https://github.com/khanhptnk/bandit-nmt): Code for the EMNLP 2017 paper "Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback", which implements the A2C algorithm on top of a neural encoder-decoder model and benchmarks the combination under simulated noisy rewards.
127. [pytorch-a2c-ppo-acktr](https://github.com/ikostrikov/pytorch-a2c-ppo-acktr): PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO) and a scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR).
128. [zalando-pytorch](https://github.com/baldassarreFe/zalando-pytorch): Various experiments on the [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset from Zalando.
129. [sphereface_pytorch](https://github.com/clcarwin/sphereface_pytorch): A PyTorch implementation of SphereFace.
130. [Categorical DQN](https://github.com/floringogianu/categorical-dqn): A PyTorch implementation of Categorical DQN from [A Distributional Perspective on Reinforcement Learning](https://arxiv.org/abs/1707.06887).
131. [pytorch-ntm](https://github.com/loudinthecloud/pytorch-ntm): pytorch NTM implementation.
132. [mask_rcnn_pytorch](https://github.com/felixgwu/mask_rcnn_pytorch): Mask RCNN in PyTorch.
133. [graph_convnets_pytorch](https://github.com/xbresson/graph_convnets_pytorch): PyTorch implementation of graph ConvNets, NIPS'16.
134. [pytorch-faster-rcnn](https://github.com/ruotianluo/pytorch-faster-rcnn): A pytorch implementation of the faster RCNN detection framework, based on Xinlei Chen's tf-faster-rcnn.
135. [torchMoji](https://github.com/huggingface/torchMoji): A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc.
136. [semantic-segmentation-pytorch](https://github.com/hangzhaomit/semantic-segmentation-pytorch): Pytorch implementation of semantic segmentation / scene parsing on the [MIT ADE20K dataset](http://sceneparsing.csail.mit.edu).
137. [pytorch-qrnn](https://github.com/salesforce/pytorch-qrnn): PyTorch implementation of the Quasi-Recurrent Neural Network, up to 16 times faster than NVIDIA's cuDNN LSTM.
138. [pytorch-sgns](https://github.com/theeluwin/pytorch-sgns): Skipgram negative sampling in PyTorch.
139. [SfmLearner-Pytorch](https://github.com/ClementPinard/SfmLearner-Pytorch): Pytorch version of SfmLearner from Tinghui Zhou et al.
140. [deformable-convolution-pytorch](https://github.com/1zb/deformable-convolution-pytorch): PyTorch implementation of Deformable Convolution.
141. [skip-gram-pytorch](https://github.com/fanglanting/skip-gram-pytorch): A complete pytorch implementation of the skip-gram model (with subsampling and negative sampling). The embedding result is tested with Spearman's rank correlation.
142. [stackGAN-v2](https://github.com/hanzhanggit/StackGAN-v2): Pytorch implementation for reproducing StackGAN-v2 results in the paper "StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks" by Han Zhang*, Tao Xu*, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang and Dimitris Metaxas.
143. [self-critical.pytorch](https://github.com/ruotianluo/self-critical.pytorch): Unofficial pytorch implementation of "Self-critical Sequence Training for Image Captioning".
144. [pygcn](https://github.com/tkipf/pygcn): Graph Convolutional Networks in PyTorch.
145. [dnc](https://github.com/ixaxaar/pytorch-dnc): Differentiable Neural Computers, for Pytorch.
146. [prog_gans_pytorch_inference](https://github.com/ptrblck/prog_gans_pytorch_inference): PyTorch inference for "Progressive Growing of GANs" with a CelebA snapshot.
147. [pytorch-capsule](https://github.com/timomernick/pytorch-capsule): Pytorch implementation of Hinton's Dynamic Routing Between Capsules.
148. [PyramidNet-PyTorch](https://github.com/dyhan0920/PyramidNet-PyTorch): A PyTorch implementation of PyramidNets (Deep Pyramidal Residual Networks, arxiv.org/abs/1610.02915).
149. [radio-transformer-networks](https://github.com/gram-ai/radio-transformer-networks): A PyTorch implementation of Radio Transformer Networks from the paper "An Introduction to Deep Learning for the Physical Layer" (arxiv.org/abs/1702.00832).
150. [honk](https://github.com/castorini/honk): PyTorch reimplementation of Google's TensorFlow CNNs for keyword spotting.
151. [DeepCORAL](https://github.com/SSARCandy/DeepCORAL): A PyTorch implementation of "Deep CORAL: Correlation Alignment for Deep Domain Adaptation", ECCV 2016.
152. [pytorch-pose](https://github.com/bearpaw/pytorch-pose): A PyTorch toolkit for 2D Human Pose Estimation.
153. [lang-emerge-parlai](https://github.com/karandesai-96/lang-emerge-parlai): Implementation of the EMNLP 2017 paper "Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog" using PyTorch and ParlAI.
154. [Rainbow](https://github.com/Kaixhin/Rainbow): Rainbow: Combining Improvements in Deep Reinforcement Learning.
155. [pytorch_compact_bilinear_pooling v1](https://github.com/gdlg/pytorch_compact_bilinear_pooling): A pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.
156. [CompactBilinearPooling-Pytorch v2](https://github.com/DeepInsight-PCALab/CompactBilinearPooling-Pytorch): A Pytorch implementation of Compact Bilinear Pooling (Yang Gao, et al.).
157. [FewShotLearning](https://github.com/gitabcworld/FewShotLearning): Pytorch implementation of the paper "Optimization as a Model for Few-Shot Learning".
158. [meProp](https://github.com/jklj077/meProp): Code for "meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting".
159. [SFD_pytorch](https://github.com/clcarwin/SFD_pytorch): A PyTorch implementation of the Single Shot Scale-invariant Face Detector.
160. [GradientEpisodicMemory](https://github.com/facebookresearch/GradientEpisodicMemory): Continuum learning with GEM: Gradient Episodic Memory (https://arxiv.org/abs/1706.08840).
161. [DeblurGAN](https://github.com/KupynOrest/DeblurGAN): Pytorch implementation of the paper "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks".
162. [StarGAN](https://github.com/yunjey/StarGAN): StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation.
163. [CapsNet-pytorch](https://github.com/adambielski/CapsNet-pytorch): PyTorch implementation of the NIPS 2017 paper "Dynamic Routing Between Capsules".
164. [CondenseNet](https://github.com/ShichenLiu/CondenseNet): CondenseNet: An Efficient DenseNet using Learned Group Convolutions.
165. [deep-image-prior](https://github.com/DmitryUlyanov/deep-image-prior): Image restoration with neural networks but without learning.
166. [deep-head-pose](https://github.com/natanielruiz/deep-head-pose): Deep learning head pose estimation using PyTorch.
167. [Random-Erasing](https://github.com/zhunzhong07/Random-Erasing): Source code for the paper "Random Erasing Data Augmentation".
168. [FaderNetworks](https://github.com/facebookresearch/FaderNetworks): Fader Networks: Manipulating Images by Sliding Attributes, NIPS 2017.
169. [FlowNet 2.0](https://github.com/NVIDIA/flownet2-pytorch): FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks.
170. [pix2pixHD](https://github.com/NVIDIA/pix2pixHD): Synthesizing and manipulating 2048x1024 images with conditional GANs (tcwang0509.github.io/pix2pixHD).
171. [pytorch-smoothgrad](https://github.com/pkdn/pytorch-smoothgrad): SmoothGrad implementation in PyTorch.
172. [RetinaNet](https://github.com/c0nn3r/RetinaNet): An implementation of RetinaNet in PyTorch.
173. [faster-rcnn.pytorch](https://github.com/jwyang/faster-rcnn.pytorch): A faster implementation of Faster R-CNN, aimed at accelerating the training of Faster R-CNN object detection models.
174. [mixup_pytorch](https://github.com/leehomyc/mixup_pytorch): A PyTorch implementation of the paper "Mixup: Beyond Empirical Risk Minimization".
175. [inplace_abn](https://github.com/mapillary/inplace_abn): In-Place Activated BatchNorm for Memory-Optimized Training of DNNs.
176. [pytorch-pose-hg-3d](https://github.com/xingyizhou/pytorch-pose-hg-3d): PyTorch implementation for 3D human pose estimation.
177. [nmn-pytorch](https://github.com/HarshTrivedi/nmn-pytorch): Neural Module Network for VQA in Pytorch.
178. [bytenet](https://github.com/kefirski/bytenet): Pytorch implementation of ByteNet from the "Neural Machine Translation in Linear Time" paper.
179. [bottom-up-attention-vqa](https://github.com/hengyuan-hu/bottom-up-attention-vqa): vqa, bottom-up-attention, pytorch.
180. [yolo2-pytorch](https://github.com/ruiminshen/yolo2-pytorch): YOLOv2 is one of the most popular one-stage object detectors. This project adopts PyTorch as the development framework to increase productivity, and uses ONNX to convert models into Caffe2 to benefit engineering deployment.
181. [reseg-pytorch](https://github.com/Wizaron/reseg-pytorch): PyTorch implementation of ReSeg (arxiv.org/pdf/1511.07053.pdf).
182. [binary-stochastic-neurons](https://github.com/Wizaron/binary-stochastic-neurons): Binary Stochastic Neurons in PyTorch.
183. [pytorch-pose-estimation](https://github.com/DavexPro/pytorch-pose-estimation): PyTorch implementation of the Realtime Multi-Person Pose Estimation project.
184. [interaction_network_pytorch](https://github.com/higgsfield/interaction_network_pytorch): Pytorch implementation of "Interaction Networks for Learning about Objects, Relations and Physics".
185. [NoisyNaturalGradient](https://github.com/wlwkgus/NoisyNaturalGradient): Pytorch implementation of the paper "Noisy Natural Gradient as Variational Inference".
186. [ewc.pytorch](https://github.com/moskomule/ewc.pytorch): An implementation of Elastic Weight Consolidation (EWC), proposed in James Kirkpatrick et al., "Overcoming catastrophic forgetting in neural networks", 2016 (10.1073/pnas.1611835114).
187. [pytorch-zssr](https://github.com/jacobgil/pytorch-zssr): PyTorch implementation of 1712.06087, "Zero-Shot" Super-Resolution using Deep Internal Learning.
188. [deep_image_prior](https://github.com/atiyo/deep_image_prior): An implementation of image reconstruction methods from Deep Image Prior (Ulyanov et al., 2017) in PyTorch.
189. [pytorch-transformer](https://github.com/leviswind/pytorch-transformer): pytorch implementation of "Attention is All You Need".
190. [DeepRL-Grounding](https://github.com/devendrachaplot/DeepRL-Grounding): A PyTorch implementation of the AAAI-18 paper "Gated-Attention Architectures for Task-Oriented Language Grounding".
191. [deep-forecast-pytorch](https://github.com/Wizaron/deep-forecast-pytorch): Wind speed prediction using LSTMs in PyTorch (arxiv.org/pdf/1707.08110.pdf).
192. [cat-net](https://github.com/utiasSTARS/cat-net): Canonical Appearance Transformations.
193. [minimal_glo](https://github.com/tneumann/minimal_glo): Minimal PyTorch implementation of Generative Latent Optimization from the paper "Optimizing the Latent Space of Generative Networks".
194. [LearningToCompare-Pytorch](https://github.com/dragen1860/LearningToCompare-Pytorch): Pytorch implementation of the paper "Learning to Compare: Relation Network for Few-Shot Learning".
195. [poincare-embeddings](https://github.com/facebookresearch/poincare-embeddings): PyTorch implementation of the NIPS-17 paper "Poincaré Embeddings for Learning Hierarchical Representations".
196. [pytorch-trpo (Hessian-vector product version)](https://github.com/ikostrikov/pytorch-trpo): A PyTorch implementation of Trust Region Policy Optimization (TRPO) with an exact Hessian-vector product instead of a finite-differences approximation.
197. [ggnn.pytorch](https://github.com/JamesChuanggg/ggnn.pytorch): A PyTorch implementation of Gated Graph Sequence Neural Networks (GGNN).
198. [visual-interaction-networks-pytorch](https://github.com/Mrgemy95/visual-interaction-networks-pytorch): An implementation of DeepMind's Visual Interaction Networks paper using pytorch.
199. [adversarial-patch](https://github.com/jhayes14/adversarial-patch): PyTorch implementation of adversarial patch.
200. [Prototypical-Networks-for-Few-shot-Learning-PyTorch](https://github.com/orobix/Prototypical-Networks-for-Few-shot-Learning-PyTorch): Implementation of Prototypical Networks for Few-Shot Learning (arxiv.org/abs/1703.05175) in Pytorch.
201. [Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch](https://github.com/orobix/Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch): Implementation of Visual Feature Attribution using Wasserstein GANs (arxiv.org/abs/1711.08998) in PyTorch.
202. [PhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch](https://github.com/Blade6570/PhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch): Photographic Image Synthesis with Cascaded Refinement Networks, Pytorch implementation.
203. [ENAS-pytorch](https://github.com/carpedm20/ENAS-pytorch): PyTorch implementation of "Efficient Neural Architecture Search via Parameter Sharing".
204. [Neural-IMage-Assessment](https://github.com/kentsyx/Neural-IMage-Assessment): A PyTorch implementation of Neural IMage Assessment.
205. [proxprop](https://github.com/tfrerix/proxprop): Proximal Backpropagation, a neural network training algorithm that takes implicit rather than explicit gradient steps.
206. [FastPhotoStyle](https://github.com/NVIDIA/FastPhotoStyle): A Closed-form Solution to Photorealistic Image Stylization.
207. [Deep-Image-Analogy-PyTorch](https://github.com/Ben-Louis/Deep-Image-Analogy-PyTorch): A Python implementation of Deep Image Analogy based on PyTorch.
208. [Person-reID_pytorch](https://github.com/layumi/Person_reID_baseline_pytorch): A PyTorch baseline for person re-identification.
209. [pt-dilate-rnn](https://github.com/zalandoresearch/pt-dilate-rnn): Dilated RNNs in PyTorch.
210. [pytorch-i-revnet](https://github.com/jhjacobsen/pytorch-i-revnet): PyTorch implementation of i-RevNets.
211. [OrthNet](https://github.com/Orcuslc/OrthNet): TensorFlow and PyTorch layers for generating orthogonal polynomials.
212. [DRRN-pytorch](https://github.com/jt827859032/DRRN-pytorch): An implementation of the Deep Recursive Residual Network for Super-Resolution (DRRN), CVPR 2017.
213. [shampoo.pytorch](https://github.com/moskomule/shampoo.pytorch): An implementation of the Shampoo optimizer.
214. [Neural-IMage-Assessment 2](https://github.com/truskovskiyk/nima.pytorch): A PyTorch implementation of Neural IMage Assessment.
215. [TCN](https://github.com/locuslab/TCN): Sequence modeling benchmarks and temporal convolutional networks (TCNs).
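The TCN entry (215) builds on dilated causal convolutions: pad only on the left so each output sees the present and past but never the future. A minimal sketch of that building block; the repo's actual blocks additionally use weight normalization, dropout, and residual connections, so this is a simplification, not its API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Dilated 1-D convolution that never looks at future timesteps."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        # Left-pad by (k - 1) * d so the output length matches the input
        # and position t depends only on inputs at positions <= t.
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                      # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.left_pad, 0)))
```

Stacking such layers with exponentially growing dilations gives the receptive field that lets TCNs compete with RNNs on long sequences.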
216. [DCC](https://github.com/shahsohil/DCC): Source code and data for reproducing the results of the Deep Continuous Clustering paper.
217. [packnet](https://github.com/arunmallya/packnet): Code for "PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning" (arxiv.org/abs/1711.05769).
218. [PyTorch-progressive_growing_of_gans](https://github.com/github-pengge/PyTorch-progressive_growing_of_gans): PyTorch implementation of "Progressive Growing of GANs for Improved Quality, Stability, and Variation".
219. [nonauto-nmt](https://github.com/salesforce/nonauto-nmt): PyTorch implementation of "Non-Autoregressive Neural Machine Translation".
220. [PyTorch-GAN](https://github.com/eriklindernoren/PyTorch-GAN): PyTorch implementations of Generative Adversarial Networks.
221. [PyTorchWavelets](https://github.com/tomrunia/PyTorchWavelets): PyTorch implementation of the wavelet analysis from Torrence and Compo (1998).
222. [pytorch-made](https://github.com/karpathy/pytorch-made): MADE (Masked Autoencoder for Distribution Estimation) implementation in PyTorch.
223. [VRNN](https://github.com/emited/VariationalRecurrentNeuralNetwork): PyTorch implementation of the Variational RNN (VRNN) from "A Recurrent Latent Variable Model for Sequential Data".
224. [flow](https://github.com/emited/flow): PyTorch implementation of the ICLR 2018 paper "Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge".
225. [deepvoice3_pytorch](https://github.com/r9y9/deepvoice3_pytorch): PyTorch implementation of convolutional-network-based text-to-speech synthesis models.
226. [psmm](https://github.com/elanmart/psmm): Implementation of the Pointer Sentinel Mixture Model, as described in the paper by Stephen Merity et al.
227. [tacotron2](https://github.com/NVIDIA/tacotron2): Tacotron 2, PyTorch implementation with faster-than-realtime inference.
228. [AccSGD](https://github.com/rahulkidambi/AccSGD): PyTorch code for the Accelerated SGD algorithm.
229. [QANet-pytorch](https://github.com/hengruo/QANet-pytorch): An implementation of QANet in PyTorch (EM/F1 = 70.5/77.2 after 20 epochs, about 20 hours on one 1080Ti card).
230. [ConvE](https://github.com/TimDettmers/ConvE): Convolutional 2D Knowledge Graph Embeddings.
231. [Structured-Self-Attention](https://github.com/kaushalshetty/Structured-Self-Attention): Implementation of the ICLR 2017 paper "A Structured Self-Attentive Sentence Embedding" (arxiv.org/abs/1703.03130).
232. [graphsage-simple](https://github.com/williamleif/graphsage-simple): Simple reference implementation of GraphSAGE.
233. [Detectron.pytorch](https://github.com/roytseng-tw/Detectron.pytorch): A PyTorch implementation of Detectron. Supports both training from scratch and inference directly from pretrained Detectron weights.
234. [R2Plus1D-PyTorch](https://github.com/irhumshafkat/R2Plus1D-PyTorch): PyTorch implementation of the R(2+1)D convolution-based ResNet architecture described in "A Closer Look at Spatiotemporal Convolutions for Action Recognition".
235. [StackNN](https://github.com/viking-sudo-rm/StackNN): A PyTorch implementation of differentiable stacks for use in neural networks.
236. [translagent](https://github.com/facebookresearch/translagent): Code for "Emergent Translation in Multi-Agent Communication".
237. [ban-vqa](https://github.com/jnhwkim/ban-vqa): Bilinear attention networks for visual question answering.
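Entry 232's GraphSAGE generates a node's representation by aggregating its neighbors' features and mixing them with its own. A minimal mean-aggregator layer over a dense adjacency matrix, as a sketch of the idea only; the reference implementation samples neighborhoods rather than using dense matrices, and these names are not its API:

```python
import torch

def sage_mean_layer(x, adj, w_self, w_neigh):
    """One GraphSAGE layer with mean aggregation.
    x: (n, d) node features; adj: (n, n) 0/1 adjacency without self-loops."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
    neigh_mean = (adj @ x) / deg                     # mean of neighbor features
    return torch.relu(x @ w_self + neigh_mean @ w_neigh)
```

Stacking two such layers lets each node see its 2-hop neighborhood, which is the depth used in the paper's experiments.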
238. [pytorch-openai-transformer-lm](https://github.com/huggingface/pytorch-openai-transformer-lm): A PyTorch implementation of the TensorFlow code provided with OpenAI's paper "Improving Language Understanding by Generative Pre-Training" by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
239. [T2F](https://github.com/akanimax/T2F): Text-to-face generation using deep learning. Combines two recent architectures, StackGAN and ProGAN, to synthesize faces from textual descriptions.
240. [pytorch-fid](https://github.com/mseitzer/pytorch-fid): A port of the Fréchet Inception Distance (FID score) to PyTorch.
241. [vae_vpflows](https://github.com/jmtomczak/vae_vpflows): Code in PyTorch for the convex combination linear IAF and the Householder Flow, J. M. Tomczak & M. Welling (jmtomczak.github.io/deebmed.html).
242. [CoordConv-pytorch](https://github.com/mkocabas/CoordConv-pytorch): PyTorch implementation of CoordConv, introduced in "An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution" (arxiv.org/pdf/1807.03247.pdf).
243. [SDPoint](https://github.com/xternalz/SDPoint): Implementation of "Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks", published in CVPR 2018.
244. [SRDenseNet-pytorch](https://github.com/wxywhu/SRDenseNet-pytorch): SRDenseNet in PyTorch (ICCV 2017).
245. [GAN_stability](https://github.com/LMescheder/GAN_stability): Code for the paper "Which Training Methods for GANs do actually Converge?" (ICML 2018).
246. [Mask-RCNN](https://github.com/wannabeOG/Mask-RCNN): A PyTorch implementation of the Mask R-CNN architecture; serves as an introduction to working with PyTorch.
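Entry 242's CoordConv fix is tiny: concatenate normalized row and column coordinate channels to the input so convolutions can condition on position. A hedged sketch of that preprocessing step (illustrative, not the repo's exact layer):

```python
import torch

def add_coord_channels(x):
    """Append normalized row (i) and column (j) coordinate channels,
    each spanning [-1, 1], to an NCHW tensor."""
    n, _, h, w = x.shape
    rows = torch.linspace(-1.0, 1.0, h).view(1, 1, h, 1).expand(n, 1, h, w)
    cols = torch.linspace(-1.0, 1.0, w).view(1, 1, 1, w).expand(n, 1, h, w)
    return torch.cat([x, rows, cols], dim=1)
```

Feeding the result into an ordinary `nn.Conv2d` (with two extra input channels) recovers the CoordConv layer of the paper.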
247. [pytorch-coviar](https://github.com/chaoyuaw/pytorch-coviar): Compressed Video Action Recognition.
248. [PNASNet.pytorch](https://github.com/chenxi116/PNASNet.pytorch): PyTorch implementation of PNASNet-5 on ImageNet.
249. [NALU-pytorch](https://github.com/kevinzakka/NALU-pytorch): Basic PyTorch implementation of NAC/NALU from "Neural Arithmetic Logic Units" (arxiv.org/pdf/1808.00508.pdf).
250. [LOLA_DiCE](https://github.com/alexis-jacq/LOLA_DiCE): PyTorch implementation of LOLA (arxiv.org/abs/1709.04326) using DiCE (arxiv.org/abs/1802.05098).
251. [generative-query-network-pytorch](https://github.com/wohlert/generative-query-network-pytorch): Generative Query Network (GQN) in PyTorch, as described in "Neural Scene Representation and Rendering".
252. [pytorch_hmax](https://github.com/wmvanvliet/pytorch_hmax): Implementation of the HMAX model of vision in PyTorch.
253. [FCN-pytorch-easiest](https://github.com/yunlongdong/FCN-pytorch-easiest): Aims to be the easiest, ready-to-use PyTorch implementation of FCN (Fully Convolutional Networks).
254. [transducer](https://github.com/awni/transducer): A fast sequence transducer implementation with PyTorch bindings.
255. [AVO-pytorch](https://github.com/artix41/AVO-pytorch): Implementation of Adversarial Variational Optimization in PyTorch.
256. [HCN-pytorch](https://github.com/huguyuehuhu/HCN-pytorch): A PyTorch reimplementation of "Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation".
257. [binary-wide-resnet](https://github.com/szagoruyko/binary-wide-resnet): PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018).
258. [piggyback](https://github.com/arunmallya/piggyback): Code for "Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights" (arxiv.org/abs/1801.06519).
259. [vid2vid](https://github.com/NVIDIA/vid2vid): PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.
260. [poisson-convolution-sum](https://github.com/cranmer/poisson-convolution-sum): Implements an infinite sum of Poisson-weighted convolutions.
261. [tbd-nets](https://github.com/davidmascharka/tbd-nets): PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning" (arxiv.org/abs/1803.05268).
262. [attn2d](https://github.com/elbayadm/attn2d): Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction.
263. [yolov3](https://github.com/ultralytics/yolov3): YOLOv3: training and inference in PyTorch (pjreddie.com/darknet/yolo).
264. [deep-dream-in-pytorch](https://github.com/duc0/deep-dream-in-pytorch): PyTorch implementation of the DeepDream computer vision algorithm.
265. [pytorch-flows](https://github.com/ikostrikov/pytorch-flows): PyTorch implementations of algorithms for density estimation.
266. [quantile-regression-dqn-pytorch](https://github.com/ars-ashuha/quantile-regression-dqn-pytorch): Quantile Regression DQN, a minimal working example.
267. [relational-rnn-pytorch](https://github.com/L0SG/relational-rnn-pytorch): An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch.
268. [DEXTR-PyTorch](https://github.com/scaelles/DEXTR-PyTorch): Deep Extreme Cut (http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr).
269. [PyTorch_GBW_LM](https://github.com/rdspring1/PyTorch_GBW_LM): PyTorch language model for the Google Billion Word dataset.
270. [Pytorch-NCE](https://github.com/Stonesjtu/Pytorch-NCE): Noise-Contrastive Estimation for softmax outputs, written in PyTorch.
271. [generative-models](https://github.com/shayneobrien/generative-models): Annotated, understandable, and visually interpretable PyTorch implementations of VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, and FisherGAN.
272. [convnet-aig](https://github.com/andreasveit/convnet-aig): PyTorch implementation of Convolutional Networks with Adaptive Inference Graphs.
273. [integrated-gradient-pytorch](https://github.com/TianhongDai/integrated-gradient-pytorch): PyTorch implementation of the paper "Axiomatic Attribution for Deep Networks".
274. [MalConv-Pytorch](https://github.com/Alexander-H-Liu/MalConv-Pytorch): PyTorch implementation of MalConv.
275. [trellisnet](https://github.com/locuslab/trellisnet): Trellis Networks for Sequence Modeling.
276. [Learning to Communicate with Deep Multi-Agent Reinforcement Learning](https://github.com/minqi/learning-to-communicate-pytorch): PyTorch implementation of the "Learning to Communicate with Deep Multi-Agent Reinforcement Learning" paper.
277. [pnn.pytorch](https://github.com/michaelklachko/pnn.pytorch): PyTorch implementation of CVPR'18 Perturbative Neural Networks (http://xujuefei.com/pnn.html).
278. [Face_Attention_Network](https://github.com/rainofmine/Face_Attention_Network): PyTorch implementation of "Face Attention Network: An Effective Face Detector for the Occluded Faces".
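Entry 273 implements integrated gradients from "Axiomatic Attribution for Deep Networks": average the input gradient along a straight path from a baseline to the input, then scale by the input difference. A minimal Riemann-sum sketch for a scalar-output function, as an illustration of the method rather than that repo's interface:

```python
import torch

def integrated_gradients(f, x, baseline=None, steps=50):
    """Approximate IG attributions: (x - baseline) times the average
    gradient of f along the straight line from baseline to x."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    grad_sum = torch.zeros_like(x)
    for k in range(1, steps + 1):
        # Point at fraction k/steps along the baseline -> x path.
        xi = (baseline + (k / steps) * (x - baseline)).detach().requires_grad_(True)
        f(xi).backward()
        grad_sum += xi.grad
    return (x - baseline) * grad_sum / steps
```

For a linear model the approximation is exact, and the completeness axiom holds: the attributions sum to f(x) minus f(baseline).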
279. [waveglow](https://github.com/NVIDIA/waveglow): A flow-based generative network for speech synthesis.
280. [deepfloat](https://github.com/facebookresearch/deepfloat): The SystemVerilog RTL, C++, HLS (Intel FPGA OpenCL to wrap RTL code) and Python needed to reproduce the numerical results in "Rethinking Floating Point for Deep Learning".
281. [EPSR](https://github.com/subeeshvasu/2018_subeesh_epsr_eccvw): PyTorch implementation of [Analyzing Perception-Distortion Tradeoff using Enhanced Perceptual Super-resolution Network](https://arxiv.org/pdf/1811.00344.pdf). This work won first place in the PIRM 2018 SR competition (region 1), held as part of ECCV 2018.
282. [ClariNet](https://github.com/ksw0306/ClariNet): A PyTorch implementation of ClariNet (arxiv.org/abs/1807.07281).
283. [pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT): PyTorch version of Google AI's BERT model, with a script to load Google's pre-trained models.
284. [torch_waveglow](https://github.com/npuichigo/waveglow): A PyTorch implementation of WaveGlow: A Flow-based Generative Network for Speech Synthesis.
285. [3DDFA](https://github.com/cleardusk/3DDFA): An improved PyTorch re-implementation of the TPAMI 2017 paper "Face Alignment in Full Pose Range: A 3D Total Solution".
286. [loss-landscape](https://github.com/tomgoldstein/loss-landscape): Code for visualizing the loss landscape of neural nets.
287. [famos](https://github.com/zalandoresearch/famos): PyTorch implementation of the paper "Copy the Old or Paint Anew? An Adversarial Framework for (non-) Parametric Image Stylization", available at http://arxiv.org/abs/1811.09236.
288. [back2future.pytorch](https://github.com/anuragranj/back2future.pytorch): A PyTorch implementation of Janai, J., Güney, F., Ranjan, A., Black, M. and Geiger, A., "Unsupervised Learning of Multi-Frame Optical Flow with Occlusions", ECCV 2018.
289. [FFTNet](https://github.com/mozilla/FFTNet): Unofficial implementation of the FFTNet vocoder paper.
290. [FaceBoxes.PyTorch](https://github.com/zisianw/FaceBoxes.PyTorch): A PyTorch implementation of FaceBoxes.
291. [Transformer-XL](https://github.com/kimiyoung/transformer-xl): Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context.
292. [associative_compression_networks](https://github.com/jalexvig/associative_compression_networks): Associative Compression Networks for Representation Learning.
293. [fluidnet_cxx](https://github.com/jolibrain/fluidnet_cxx): FluidNet rewritten with the ATen tensor library.
294. [Deep-Reinforcement-Learning-Algorithms-with-PyTorch](https://github.com/p-christ/Deep-Reinforcement-Learning-Algorithms-with-PyTorch): PyTorch implementations of deep reinforcement learning algorithms.
295. [Shufflenet-v2-Pytorch](https://github.com/ericsun99/Shufflenet-v2-Pytorch): A PyTorch implementation of Face++'s ShuffleNet v2.
296. [GraphWaveletNeuralNetwork](https://github.com/benedekrozemberczki/GraphWaveletNeuralNetwork): A PyTorch implementation of Graph Wavelet Neural Network (ICLR 2019).
297. [AttentionWalk](https://github.com/benedekrozemberczki/AttentionWalk): A PyTorch implementation of "Watch Your Step: Learning Node Embeddings via Graph Attention" (NIPS 2018).
298. [SGCN](https://github.com/benedekrozemberczki/SGCN): A PyTorch implementation of Signed Graph Convolutional Network (ICDM 2018).
299. [SINE](https://github.com/benedekrozemberczki/SINE): A PyTorch implementation of "SINE: Scalable Incomplete Network Embedding" (ICDM 2018).
300. [GAM](https://github.com/benedekrozemberczki/GAM): A PyTorch implementation of "Graph Classification using Structural Attention" (KDD 2018).
301. [neural-style-pt](https://github.com/ProGamerGov/neural-style-pt): A PyTorch implementation of Justin Johnson's neural-style.
302. [TuckER](https://github.com/ibalazevic/TuckER): TuckER: Tensor Factorization for Knowledge Graph Completion.
303. [pytorch-prunes](https://github.com/BayesWatch/pytorch-prunes): Pruning neural networks: is it time to nip it in the bud?
304. [SimGNN](https://github.com/benedekrozemberczki/SimGNN): SimGNN: A Neural Network Approach to Fast Graph Similarity Computation.
305. [Character CNN](https://github.com/ahmedbesbes/character-based-cnn): PyTorch implementation of the "Character-level Convolutional Networks for Text Classification" paper.
306. [XLM](https://github.com/facebookresearch/XLM): Original PyTorch implementation of Cross-lingual Language Model Pretraining.
307. [DiffAI](https://github.com/eth-sri/diffai): A provable defense against adversarial examples, and a library for building compatible PyTorch models.
308. [APPNP](https://github.com/benedekrozemberczki/APPNP): Combining Neural Networks with Personalized PageRank for Classification on Graphs (ICLR 2019).
309. [NGCN](https://github.com/benedekrozemberczki/MixHop-and-N-GCN): A Higher-Order Graph Convolutional Layer (NeurIPS 2018).
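Entry 302's TuckER scores a knowledge-graph triple by contracting a learned core tensor with the subject, relation, and object embeddings. For a single triple, that scoring function reduces to one `einsum`; this is a sketch of the factorization only, not the repo's batched implementation:

```python
import torch

def tucker_score(core, e_s, w_r, e_o):
    """TuckER triple score: the core tensor W contracted with the subject,
    relation, and object embeddings (W x1 e_s x2 w_r x3 e_o)."""
    return torch.einsum('abc,a,b,c->', core, e_s, w_r, e_o)
```

In training, the score is passed through a sigmoid and fitted with a binary cross-entropy objective over candidate objects.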
310. [gpt-2-Pytorch](https://github.com/graykode/gpt-2-Pytorch): A simple text generator using OpenAI's GPT-2, implemented in PyTorch.
311. [Splitter](https://github.com/benedekrozemberczki/Splitter): Splitter: Learning Node Representations that Capture Multiple Social Contexts (WWW 2019).
312. [CapsGNN](https://github.com/benedekrozemberczki/CapsGNN): Capsule Graph Neural Network (ICLR 2019).
313. [BigGAN-PyTorch](https://github.com/ajbrock/BigGAN-PyTorch): The author's officially unofficial PyTorch BigGAN implementation.
314. [ppo_pytorch_cpp](https://github.com/mhubii/ppo_pytorch_cpp): An implementation of the Proximal Policy Optimization algorithm for the PyTorch C++ API.
315. [RandWireNN](https://github.com/seungwonpark/RandWireNN): Implementation of "Exploring Randomly Wired Neural Networks for Image Recognition".
316. [Zero-shot Intent CapsNet](https://github.com/joel-huang/zeroshot-capsnet-pytorch): GPU-accelerated PyTorch implementation of "Zero-shot User Intent Detection via Capsule Neural Networks".
317. [SEAL-CI](https://github.com/benedekrozemberczki/SEAL-CI): Semi-Supervised Graph Classification: A Hierarchical Graph Perspective (WWW 2019).
318. [MixHop](https://github.com/benedekrozemberczki/MixHop-and-N-GCN): MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing (ICML 2019).
319. [densebody_pytorch](https://github.com/Lotayou/densebody_pytorch): PyTorch implementation of CloudWalk's recent paper DenseBody.
320. [voicefilter](https://github.com/mindslab-ai/voicefilter): Unofficial PyTorch implementation of Google AI's VoiceFilter system (http://swpark.me/voicefilter).
321. [NVIDIA/semantic-segmentation](https://github.com/NVIDIA/semantic-segmentation): A PyTorch implementation of [Improving Semantic Segmentation via Video Propagation and Label Relaxation](https://arxiv.org/abs/1812.01593), CVPR 2019.
322. [ClusterGCN](https://github.com/benedekrozemberczki/ClusterGCN): A PyTorch implementation of "Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks" (KDD 2019).
323. [NVlabs/DG-Net](https://github.com/NVlabs/DG-Net): A PyTorch implementation of "Joint Discriminative and Generative Learning for Person Re-identification" (CVPR 2019 oral).
324. [NCRF](https://github.com/baidu-research/NCRF): Cancer metastasis detection with neural conditional random field (NCRF).
325. [pytorch-sift](https://github.com/ducha-aiki/pytorch-sift): PyTorch implementation of the SIFT descriptor.
326. [brain-segmentation-pytorch](https://github.com/mateuszbuda/brain-segmentation-pytorch): U-Net implementation in PyTorch for FLAIR abnormality segmentation in brain MRI.
327. [glow-pytorch](https://github.com/rosinality/glow-pytorch): PyTorch implementation of "Glow: Generative Flow with Invertible 1x1 Convolutions" (arxiv.org/abs/1807.03039).
328. [EfficientNets-PyTorch](https://github.com/zsef123/EfficientNets-PyTorch): A PyTorch implementation of "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks".
329. [STEAL](https://github.com/nv-tlabs/STEAL): STEAL: Learning Semantic Boundaries from Noisy Annotations (nv-tlabs.github.io/STEAL).
330. [EigenDamage-Pytorch](https://github.com/alecwangcq/EigenDamage-Pytorch): Official implementation of the ICML'19 paper "EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis".
331. [Aspect-level-sentiment](https://github.com/ruidan/Aspect-level-sentiment): Code and dataset for the ACL 2018 paper "Exploiting Document Knowledge for Aspect-level Sentiment Classification".
332. [breast_cancer_classifier](https://github.com/nyukat/breast_cancer_classifier): Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening (arxiv.org/abs/1903.08297).
333. [DGC-Net](https://github.com/AaltoVision/DGC-Net): A PyTorch implementation of "DGC-Net: Dense Geometric Correspondence Network".
334. [universal-triggers](https://github.com/Eric-Wallace/universal-triggers): Universal Adversarial Triggers for Attacking and Analyzing NLP (EMNLP 2019).
335. [simple-effective-text-matching-pytorch](https://github.com/alibaba-edu/simple-effective-text-matching-pytorch): A PyTorch implementation of the ACL 2019 paper "Simple and Effective Text Matching with Richer Alignment Features".
336. [Adaptive-segmentation-mask-attack (ASMA)](https://github.com/utkuozbulak/adaptive-segmentation-mask-attack): A PyTorch implementation of the MICCAI 2019 paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation".
337. [NVIDIA/unsupervised-video-interpolation](https://github.com/NVIDIA/unsupervised-video-interpolation): A PyTorch implementation of [Unsupervised Video Interpolation Using Cycle Consistency](https://arxiv.org/abs/1906.05928), ICCV 2019.
338. [Seg-Uncertainty](https://github.com/layumi/Seg-Uncertainty): Unsupervised Scene Adaptation with Memory Regularization in vivo (IJCAI 2020).
339. [pulse](https://github.com/adamian98/pulse): Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models.
340. [distance-encoding](https://github.com/snap-stanford/distance-encoding): Distance Encoding: Design Provably More Powerful GNNs for Structural Representation Learning.
341. [Pathfinder Discovery Networks](https://github.com/benedekrozemberczki/PDN): Pathfinder Discovery Networks for Neural Message Passing.
342. [PyKEEN](https://github.com/pykeen/pykeen): A Python library for learning and evaluating knowledge graph embeddings.
343. [SSSNET](https://github.com/SherylHYX/SSSNET_Signed_Clustering): Official implementation of the SDM 2022 paper "SSSNET: Semi-Supervised Signed Network Clustering".
344. [MagNet](https://github.com/matthew-hirn/magnet): Official implementation of the NeurIPS 2021 paper "MagNet: A Neural Network for Directed Graphs".
345. [Semantic Search](https://github.com/kuutsav/information-retrieval): The latest in neural information retrieval / semantic search.
346. [FreeGrad](https://github.com/tbox98/FreeGrad): PyTorch library for custom backward passes, straight-through estimators and gradient transforms.

## Talks & conferences

1. [PyTorch Conference 2018](https://developers.facebook.com/videos/2018/pytorch-developer-conference/): The first PyTorch developer conference, held in 2018.

## Pytorch elsewhere

1. **[the-incredible-pytorch](https://github.com/ritchieng/the-incredible-pytorch)**: The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch.
2. [generative models](https://github.com/wiseodd/generative-models): Collection of generative models, e.g. GAN and VAE, in TensorFlow, Keras, and PyTorch (http://wiseodd.github.io).
3. [pytorch vs tensorflow](https://www.reddit.com/r/MachineLearning/comments/5w3q74/d_so_pytorch_vs_tensorflow_whats_the_verdict_on/): An informative thread on Reddit.
4. [Pytorch discussion forum](https://discuss.pytorch.org/)
5. [pytorch notebook: docker-stack](https://hub.docker.com/r/escong/pytorch-notebook/): A project similar to the [Jupyter Notebook Scientific Python Stack](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook).
6. [drawlikebobross](https://github.com/kendricktan/drawlikebobross): Draw like Bob Ross using the power of neural networks (with PyTorch)!
7. [pytorch-tvmisc](https://github.com/t-vi/pytorch-tvmisc): Totally Versatile Miscellanea for PyTorch.
8. [pytorch-a3c-mujoco](https://github.com/andrewliao11/pytorch-a3c-mujoco): Implementation of A3C for MuJoCo gym environments.
9. [PyTorch in 5 Minutes](https://www.youtube.com/watch?v=nbJ-2G2GXL0&list=WL&index=9).
10. [pytorch_chatbot](https://github.com/jinfagang/pytorch_chatbot): A marvelous chatbot implemented in PyTorch.
11. [malmo-challenge](https://github.com/Kaixhin/malmo-challenge): Malmo Collaborative AI Challenge, Team Pig Catcher.
12. [sketchnet](https://github.com/jtoy/sketchnet): A model that takes an image and generates Processing source code to regenerate that image.
13. [Deep-Learning-Boot-Camp](https://github.com/QuantScientist/Deep-Learning-Boot-Camp): A nonprofit, community-run 5-day deep learning bootcamp (http://deep-ml.com).
14. [Amazon_Forest_Computer_Vision](https://github.com/mratsim/Amazon_Forest_Computer_Vision): Satellite image tagging code using PyTorch/Keras, with lots of PyTorch tricks, from a Kaggle competition.
15. [AlphaZero_Gomoku](https://github.com/junxiaosong/AlphaZero_Gomoku): An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row).
16. [pytorch-cv](https://github.com/youansheng/pytorch-cv): Repo for object detection, segmentation and pose estimation.
17. [deep-person-reid](https://github.com/KaiyangZhou/deep-person-reid): PyTorch implementation of deep person re-identification approaches.
18. [pytorch-template](https://github.com/victoresque/pytorch-template): PyTorch template project.
19. [Deep Learning With Pytorch TextBook](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-pytorch): A practical guide to building neural network models in text and vision using PyTorch. [Purchase on Amazon](https://www.amazon.in/Deep-Learning-PyTorch-practical-approach/dp/1788624335/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=1523853954&sr=8-1); [GitHub code repo](https://github.com/svishnu88/DLwithPyTorch).
20. [compare-tensorflow-pytorch](https://github.com/jalola/compare-tensorflow-pytorch): Compare outputs between layers written in TensorFlow and layers written in PyTorch.
21. [hasktorch](https://github.com/hasktorch/hasktorch): Tensors and neural networks in Haskell.
22. [Deep Learning With Pytorch](https://www.manning.com/books/deep-learning-with-pytorch): Teaches you how to implement deep learning algorithms with Python and PyTorch.
23. [nimtorch](https://github.com/fragcolor-xyz/nimtorch): PyTorch - Python + Nim.
24. [derplearning](https://github.com/John-Ellis/derplearning): Self-driving RC car code.
25. [pytorch-saltnet](https://github.com/tugstugi/pytorch-saltnet): Kaggle 9th-place single-model solution for the TGS Salt Identification Challenge.
26. [pytorch-scripts](https://github.com/peterjc123/pytorch-scripts): A few Windows-specific scripts for PyTorch.
27. [pytorch_misc](https://github.com/ptrblck/pytorch_misc): Code snippets created for the PyTorch discussion board.
28. [awesome-pytorch-scholarship](https://github.com/arnas/awesome-pytorch-scholarship): A list of awesome PyTorch scholarship articles, guides, blogs, courses and other resources.
29. [MentisOculi](https://github.com/mmirman/MentisOculi): A raytracer written in PyTorch (raynet?).
30. [DoodleMaster](https://github.com/karanchahal/DoodleMaster): "Don't code your UI, draw it!"
31. [ocaml-torch](https://github.com/LaurentMazare/ocaml-torch): OCaml bindings for PyTorch.
32. [extension-script](https://github.com/pytorch/extension-script): Example repository for custom C++/CUDA operators for TorchScript.
33. [pytorch-inference](https://github.com/zccyman/pytorch-inference): PyTorch 1.0 inference in C++ on Windows 10 platforms.
34. [pytorch-cpp-inference](https://github.com/Wizaron/pytorch-cpp-inference): Serving PyTorch 1.0 models as a web server in C++.
35. [tch-rs](https://github.com/LaurentMazare/tch-rs): Rust bindings for PyTorch.
36. [TorchSharp](https://github.com/interesaaat/TorchSharp): .NET bindings for the PyTorch engine.
37. [ML Workspace](https://github.com/ml-tooling/ml-workspace): All-in-one web IDE for machine learning and data science.
Combines Jupyter, VS Code, PyTorch, and many other tools/libraries into one Docker image.
38. [PyTorch Style Guide](https://github.com/IgorSusmelj/pytorch-styleguide): Style guide for PyTorch code. Consistent and good code style helps collaboration and prevents errors!

##### Feedback: If you have any ideas or you want any other content to be added to this list, feel free to contribute.

Awesome PyTorch List
========================

![pytorch-logo-dark](https://oss.gittoolsai.com/images/bharathgs_Awesome-pytorch-list_readme_c47227853472.png)

<p align="center">
	<img src="https://img.shields.io/badge/stars-12400+-brightgreen.svg?style=flat"/>
	<img src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat">
</p>

## Table of Contents
- [Pytorch & related libraries](#pytorch--related-libraries)
  - [NLP & Speech Processing](#nlp--Speech-Processing)
  - [Computer Vision](#cv)
  - [Probabilistic/Generative Libraries](#probabilisticgenerative-libraries)
  - [Other libraries](#other-libraries)
- [Tutorials, books, & examples](#tutorials-books--examples)
- [Paper implementations](#paper-implementations)
- [Talks & conferences](#talks--conferences)
- [Pytorch elsewhere](#pytorch-elsewhere)

## Pytorch & related libraries

1. [pytorch](http://pytorch.org): Tensors and dynamic neural networks in Python with strong GPU acceleration.
2. [Captum](https://github.com/pytorch/captum): Model interpretability and understanding for PyTorch.

### NLP & Speech Processing:

1. [pytorch text](https://github.com/pytorch/text): Torch text-related contents.
2. [pytorch-seq2seq](https://github.com/IBM/pytorch-seq2seq): A framework for sequence-to-sequence (seq2seq) models implemented in PyTorch.
3. [anuvada](https://github.com/Sandeep42/anuvada): Interpretable models for NLP using PyTorch.
4. [audio](https://github.com/pytorch/audio): Simple audio I/O for PyTorch.
5. [loop](https://github.com/facebookresearch/loop): A method to generate speech across multiple speakers.
6. 
[fairseq-py](https://github.com/facebookresearch/fairseq-py): Facebook AI Research's sequence-to-sequence toolkit written in Python.
7. [speech](https://github.com/awni/speech): Automatic speech recognition implemented in PyTorch.
8. [OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py): Open-source neural machine translation in PyTorch http://opennmt.net
9. [neuralcoref](https://github.com/huggingface/neuralcoref): State-of-the-art coreference resolution based on neural nets and spaCy huggingface.co/coref
10. [sentiment-discovery](https://github.com/NVIDIA/sentiment-discovery): Unsupervised language modeling at scale for robust sentiment classification.
11. [MUSE](https://github.com/facebookresearch/MUSE): A library for multilingual unsupervised or supervised word embeddings.
12. [nmtpytorch](https://github.com/lium-lst/nmtpytorch): Neural machine translation framework in PyTorch.
13. [pytorch-wavenet](https://github.com/vincentherrmann/pytorch-wavenet): An implementation of WaveNet with fast generation.
14. [Tacotron-pytorch](https://github.com/soobinseo/Tacotron-pytorch): Tacotron: Towards End-to-End Speech Synthesis.
15. [AllenNLP](https://github.com/allenai/allennlp): An open-source NLP research library, built on PyTorch.
16. [PyTorch-NLP](https://github.com/PetrochukM/PyTorch-NLP): Text utilities and datasets for PyTorch pytorchnlp.readthedocs.io
17. [quick-nlp](https://github.com/outcastofmusic/quick-nlp): PyTorch NLP library based on FastAI.
18. [TTS](https://github.com/mozilla/TTS): Deep learning for text-to-speech.
19. [LASER](https://github.com/facebookresearch/LASER): Language-agnostic sentence representations.
20. [pyannote-audio](https://github.com/pyannote/pyannote-audio): Neural building blocks for speaker diarization: speech activity detection, speaker change detection, speaker embedding.
21. [gensen](https://github.com/Maluuba/gensen): Learning general-purpose distributed sentence representations via large-scale multi-task learning.
22. [translate](https://github.com/pytorch/translate): Translate - a PyTorch language library.
23. [espnet](https://github.com/espnet/espnet): End-to-end speech processing toolkit espnet.github.io/espnet
24. 
[pythia](https://github.com/facebookresearch/pythia): A software suite for visual question answering.
25. [UnsupervisedMT](https://github.com/facebookresearch/UnsupervisedMT): Phrase-based & neural unsupervised machine translation.
26. [jiant](https://github.com/jsalt18-sentence-repl/jiant): The jiant toolkit for sentence representation learning.
27. [BERT-PyTorch](https://github.com/codertimo/BERT-pytorch): PyTorch implementation of Google AI's 2018 BERT, with simple annotations.
28. [InferSent](https://github.com/facebookresearch/InferSent): Sentence embeddings (InferSent) and training code for natural language inference.
29. [uis-rnn](https://github.com/google/uis-rnn): The library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper "Fully Supervised Speaker Diarization". arxiv.org/abs/1810.04719
30. [flair](https://github.com/zalandoresearch/flair): A very simple framework for state-of-the-art natural language processing (NLP).
31. [pytext](https://github.com/facebookresearch/pytext): A natural language modeling framework based on PyTorch fb.me/pytextdocs
32. [voicefilter](https://github.com/mindslab-ai/voicefilter): Unofficial PyTorch implementation of Google AI's VoiceFilter system http://swpark.me/voicefilter
33. [BERT-NER](https://github.com/kamalkraj/BERT-NER): PyTorch named entity recognition with BERT.
34. [transfer-nlp](https://github.com/feedly/transfer-nlp): NLP library designed for flexible research and development.
35. [texar-pytorch](https://github.com/asyml/texar-pytorch): Toolkit for machine learning and text generation, in PyTorch texar.io
36. [pytorch-kaldi](https://github.com/mravanelli/pytorch-kaldi): pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by PyTorch, while feature extraction, label computation, and decoding are performed with the Kaldi toolkit.
37. [NeMo](https://github.com/NVIDIA/NeMo): Neural Modules: a toolkit for conversational AI nvidia.github.io/NeMo
38. [pytorch-struct](https://github.com/harvardnlp/pytorch-struct): A library of vectorized implementations of core structured prediction algorithms (HMM, dependency parsing, CKY, etc.).
39. [espresso](https://github.com/freewym/espresso): Espresso: a fast end-to-end neural speech recognition toolkit.
40. 
[transformers](https://github.com/huggingface/transformers): Hugging Face Transformers: state-of-the-art natural language processing for TensorFlow 2.0 and PyTorch. huggingface.co/transformers
41. [reformer-pytorch](https://github.com/lucidrains/reformer-pytorch): Reformer, the efficient Transformer, in PyTorch.
42. [torch-metrics](https://github.com/enochkan/torch-metrics): Metrics for model evaluation in PyTorch.
43. [speechbrain](https://github.com/speechbrain/speechbrain): SpeechBrain is an open-source, all-in-one speech toolkit based on PyTorch.
44. [Backprop](https://github.com/backprop-ai/backprop): Backprop makes it simple to use, fine-tune, and deploy state-of-the-art machine learning models.

### CV:

1. [pytorch vision](https://github.com/pytorch/vision): Datasets, transforms, and models specific to computer vision.
2. [pt-styletransfer](https://github.com/tymokvo/pt-styletransfer): Neural style transfer as a class in PyTorch.
3. [OpenFacePytorch](https://github.com/thnkim/OpenFacePytorch): PyTorch module to use OpenFace's nn4.small2.v1.t7 model.
4. [img_classification_pk_pytorch](https://github.com/felixgwu/img_classification_pk_pytorch): Quickly compare your image classification models with state-of-the-art models such as DenseNet, ResNet, etc.
5. [SparseConvNet](https://github.com/facebookresearch/SparseConvNet): Submanifold sparse convolutional networks.
6. [Convolution_LSTM_pytorch](https://github.com/automan000/Convolution_LSTM_pytorch): A multi-layer convolutional LSTM module.
7. [face-alignment](https://github.com/1adrianb/face-alignment): :fire: 2D and 3D face alignment library built using PyTorch adrianbulat.com
8. [pytorch-semantic-segmentation](https://github.com/ZijunDeng/pytorch-semantic-segmentation): PyTorch implementations for semantic segmentation.
9. [RoIAlign.pytorch](https://github.com/longcw/RoIAlign.pytorch): A PyTorch version of RoIAlign. The implementation is based on crop_and_resize and supports both forward and backward passes on CPU and GPU.
10. [pytorch-cnn-finetune](https://github.com/creafz/pytorch-cnn-finetune): Fine-tune pretrained convolutional neural networks with PyTorch.
11. 
[detectorch](https://github.com/ignacio-rocco/detectorch): Detectorch - Detectron for PyTorch.
12. [Augmentor](https://github.com/mdbloice/Augmentor): Image augmentation library in Python for machine learning. http://augmentor.readthedocs.io
13. [s2cnn](https://github.com/jonas-koehler/s2cnn): This library contains a PyTorch implementation of the SO(3)-equivariant CNNs for spherical signals (e.g. omnidirectional cameras, signals on the globe).
14. [TorchCV](https://github.com/donnyyou/torchcv): A PyTorch-based framework for deep learning in computer vision.
15. [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark): Fast, modular reference implementation of instance segmentation and object detection algorithms in PyTorch.
16. [image-classification-mobile](https://github.com/osmr/imgclsmob): Collection of classification models pretrained on ImageNet-1K.
17. [medicaltorch](https://github.com/perone/medicaltorch): A medical imaging framework for PyTorch http://medicaltorch.readthedocs.io
18. [albumentations](https://github.com/albu/albumentations): Fast image augmentation library.
19. [kornia](https://github.com/arraiyopensource/kornia): Differentiable computer vision library.
20. [pytorch-text-recognition](https://github.com/s3nh/pytorch-text-recognition): Text recognition combo - CRAFT + CRNN.
21. [facenet-pytorch](https://github.com/timesler/facenet-pytorch): Pretrained PyTorch face detection and recognition models ported from davidsandberg/facenet.
22. [detectron2](https://github.com/facebookresearch/detectron2): Detectron2 is FAIR's next-generation research platform for object detection and segmentation.
23. [vedaseg](https://github.com/Media-Smart/vedaseg): A semantic segmentation framework in PyTorch.
24. [ClassyVision](https://github.com/facebookresearch/ClassyVision): An end-to-end PyTorch framework for image and video classification.
25. [detecto](https://github.com/alankbi/detecto): Computer vision in Python with fewer than 10 lines of code.
26. [pytorch3d](https://github.com/facebookresearch/pytorch3d): PyTorch3D is FAIR's library of reusable components for deep learning with 3D data pytorch3d.org
27. 
[MMDetection](https://github.com/open-mmlab/mmdetection): MMDetection is an open-source object detection toolbox, part of the [OpenMMLab project](https://open-mmlab.github.io/).
28. [neural-dream](https://github.com/ProGamerGov/neural-dream): A PyTorch implementation of the DeepDream algorithm. Creates dream-like hallucinogenic visuals.
29. [FlashTorch](https://github.com/MisaOgura/flashtorch): Visualization toolkit for neural networks in PyTorch!
30. [Lucent](https://github.com/greentfrapp/lucent): TensorFlow and OpenAI Clarity's Lucid adapted for PyTorch.
31. [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): MMDetection3D is OpenMMLab's next-generation platform for general 3D object detection, part of the [OpenMMLab project](https://open-mmlab.github.io/).
32. [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): MMSegmentation is a semantic segmentation toolbox and benchmark, part of the [OpenMMLab project](https://open-mmlab.github.io/).
33. [MMEditing](https://github.com/open-mmlab/mmediting): MMEditing is an image and video editing toolbox, part of the [OpenMMLab project](https://open-mmlab.github.io/).
34. [MMAction2](https://github.com/open-mmlab/mmaction2): MMAction2 is OpenMMLab's next-generation action understanding toolbox and benchmark, part of the [OpenMMLab project](https://open-mmlab.github.io/).
35. [MMPose](https://github.com/open-mmlab/mmpose): MMPose is a pose estimation toolbox and benchmark, part of the [OpenMMLab project](https://open-mmlab.github.io/).
36. [lightly](https://github.com/lightly-ai/lightly): Lightly is a computer vision framework for self-supervised learning.
37. [RoMa](https://naver.github.io/roma/): A lightweight and efficient library for dealing with 3D rotations.

### Probabilistic/Generative Libraries:

1. [ptstat](https://github.com/stepelu/ptstat): Probabilistic programming and statistical inference in PyTorch.
2. [pyro](https://github.com/uber/pyro): Deep universal probabilistic programming with Python and PyTorch http://pyro.ai
3. [probtorch](https://github.com/probtorch/probtorch): Probabilistic Torch is a library for deep generative models that extends PyTorch.
4. 
[paysage](https://github.com/drckf/paysage): Unsupervised learning and generative models in Python/PyTorch.
5. [pyvarinf](https://github.com/ctallec/pyvarinf): A Python package facilitating the use of Bayesian deep learning methods with variational inference in PyTorch.
6. [pyprob](https://github.com/probprog/pyprob): A PyTorch-based library for probabilistic programming and inference compilation.
7. [mia](https://github.com/spring-epfl/mia): A library for running membership inference attacks against machine learning models.
8. [pro_gan_pytorch](https://github.com/akanimax/pro_gan_pytorch): ProGAN package implemented as an extension of PyTorch's nn.Module.
9. [botorch](https://github.com/pytorch/botorch): Bayesian optimization in PyTorch.

### Other libraries:

1. [pytorch extras](https://github.com/mrdrozdov/pytorch-extras): Some extra features for PyTorch.
2. [functional zoo](https://github.com/szagoruyko/functional-zoo): PyTorch, unlike Lua Torch, has autograd built into its core, so using the modular structure of `torch.nn` modules is not necessary; one can simply create the required variables and write functions that operate on them, which is sometimes more convenient. This repo contains model definitions in this functional way, with pretrained weights for some models.
3. [torch-sampling](https://github.com/ncullen93/torchsample): This package provides a set of transforms and data structures for sampling from in-memory or out-of-memory data.
4. [torchcraft-py](https://github.com/deepcraft/torchcraft-py): Python wrapper for TorchCraft, a bridge between Torch and StarCraft designed for AI research.
5. [aorun](https://github.com/ramon-oliveira/aorun): Aorun intends to be a Keras with a PyTorch backend.
6. [logger](https://github.com/oval-group/logger): A simple logger for experiments.
7. [PyTorch-docset](https://github.com/iamaziz/PyTorch-docset): PyTorch docset! Use with Dash, Zeal, Velocity, or LovelyDocs.
8. [convert_torch_to_pytorch](https://github.com/clcarwin/convert_torch_to_pytorch): Convert Torch t7 models into PyTorch models and source code.
9. [pretrained-models.pytorch](https://github.com/Cadene/pretrained-models.pytorch): The goal of this repo is to help reproduce research papers' results.
10. [pytorch_fft](https://github.com/locuslab/pytorch_fft): PyTorch wrapper for FFTs.
11. 
[caffe_to_torch_to_pytorch](https://github.com/fanq15/caffe_to_torch_to_pytorch): Tools for converting models between Caffe, Torch, and PyTorch.
12. [pytorch-extension](https://github.com/sniklaus/pytorch-extension): A CUDA extension for PyTorch that computes the Hadamard product of two tensors.
13. [tensorboard-pytorch](https://github.com/lanpa/tensorboard-pytorch): This module saves PyTorch tensors in TensorBoard format for inspection. It currently supports scalar, image, audio, and histogram data in TensorBoard.
14. [gpytorch](https://github.com/jrg365/gpytorch): GPyTorch is a Gaussian process library implemented using PyTorch. It is designed for creating flexible and modular Gaussian process models with ease, so that even non-experts can use them.
15. [spotlight](https://github.com/maciejkula/spotlight): Deep recommender models using PyTorch.
16. [pytorch-cns](https://github.com/awentzonline/pytorch-cns): Compressed Network Search with PyTorch.
17. [pyinn](https://github.com/szagoruyko/pyinn): CuPy-fused PyTorch neural network ops.
18. [inferno](https://github.com/nasimrahaman/inferno): A utility library around PyTorch.
19. [pytorch-fitmodule](https://github.com/henryre/pytorch-fitmodule): A super simple fit method for PyTorch modules.
20. [inferno-sklearn](https://github.com/dnouri/inferno): A scikit-learn compatible neural network library that wraps PyTorch.
21. [pytorch-caffe-darknet-convert](https://github.com/marvis/pytorch-caffe-darknet-convert): Convert between PyTorch, Caffe prototxt/weights, and Darknet cfg/weights.
22. [pytorch2caffe](https://github.com/longcw/pytorch2caffe): Convert PyTorch models to Caffe models.
23. [pytorch-tools](https://github.com/nearai/pytorch-tools): A collection of tools for PyTorch.
24. [sru](https://github.com/taolei87/sru): Training RNNs as fast as CNNs (arxiv.org/abs/1709.02755).
25. [torch2coreml](https://github.com/prisma-ai/torch2coreml): Convert Torch7 models to CoreML.
26. 
[PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding): PyTorch deep texture encoding network http://hangzh.com/PyTorch-Encoding
27. [pytorch-ctc](https://github.com/ryanleary/pytorch-ctc): PyTorch-CTC is an implementation of CTC (Connectionist Temporal Classification) beam search decoding for PyTorch. The C++ code borrows liberally from TensorFlow, with improvements for increased flexibility.
28. [candlegp](https://github.com/t-vi/candlegp): Gaussian processes in PyTorch.
29. [dpwa](https://github.com/loudinthecloud/dpwa): Distributed learning via pair-wise averaging.
30. [dni-pytorch](https://github.com/koz4k/dni-pytorch): Decoupled neural interfaces using synthetic gradients for PyTorch.
31. [skorch](https://github.com/dnouri/skorch): A scikit-learn compatible neural network library that wraps PyTorch.
32. [ignite](https://github.com/pytorch/ignite): Ignite is a high-level library to help train neural networks in PyTorch.
33. [Arnold](https://github.com/glample/Arnold): Arnold - DOOM agent.
34. [pytorch-mcn](https://github.com/albanie/pytorch-mcn): Convert MatConvNet models to PyTorch.
35. [simple-faster-rcnn-pytorch](https://github.com/chenyuntc/simple-faster-rcnn-pytorch): A simplified implementation of Faster R-CNN with competitive performance.
36. [generative_zoo](https://github.com/DL-IT/generative_zoo): This repo provides working implementations of several generative models in PyTorch.
37. [pytorchviz](https://github.com/szagoruyko/pytorchviz): A small package to create visualizations of PyTorch execution graphs.
38. [cogitare](https://github.com/cogitare-ai/cogitare): Cogitare - a modern, fast, and modular deep learning and machine learning framework in Python.
39. [pydlt](https://github.com/dmarnerides/pydlt): PyTorch-based deep learning toolbox.
40. [semi-supervised-pytorch](https://github.com/wohlert/semi-supervised-pytorch): Implementations of various VAE-based semi-supervised and generative models in PyTorch.
41. [pytorch_cluster](https://github.com/rusty1s/pytorch_cluster): PyTorch extension library of optimized graph clustering algorithms.
42. 
[neural-assembly-compiler](https://github.com/aditya-khant/neural-assembly-compiler): A neural assembly compiler for PyTorch based on adaptive neural compilation.
43. [caffemodel2pytorch](https://github.com/vadimkantorov/caffemodel2pytorch): Convert Caffe models to PyTorch.
44. [extension-cpp](https://github.com/pytorch/extension-cpp): C++ extensions in PyTorch.
45. [pytoune](https://github.com/GRAAL-Research/pytoune): A Keras-like framework and utilities for PyTorch.
46. [jetson-reinforcement](https://github.com/dusty-nv/jetson-reinforcement): Deep reinforcement learning libraries for NVIDIA Jetson TX1/TX2 with PyTorch, OpenAI Gym, and the Gazebo robotics simulator.
47. [matchbox](https://github.com/salesforce/matchbox): Write PyTorch code at the level of individual examples, then run it efficiently on minibatches.
48. [torch-two-sample](https://github.com/josipd/torch-two-sample): A PyTorch library for two-sample tests.
49. [pytorch-summary](https://github.com/sksq96/pytorch-summary): Model summaries in PyTorch, similar to `model.summary()` in Keras.
50. [mpl.pytorch](https://github.com/BelBES/mpl.pytorch): PyTorch implementation of MaxPoolingLoss.
51. [scVI-dev](https://github.com/YosefLab/scVI-dev): Development branch of the scVI project in PyTorch.
52. [apex](https://github.com/NVIDIA/apex): An experimental PyTorch extension (to be deprecated at a later point).
53. [ELF](https://github.com/pytorch/ELF): ELF - a platform for game research.
54. [Torchlite](https://github.com/EKami/Torchlite): A high-level library on top of PyTorch.
55. [joint-vae](https://github.com/Schlumberger/joint-vae): PyTorch implementation of JointVAE, a framework for disentangling continuous and discrete factors of variation.
56. [SLM-Lab](https://github.com/kengz/SLM-Lab): Modular deep reinforcement learning framework in PyTorch.
57. [bindsnet](https://github.com/Hananel-Hazan/bindsnet): A Python package for simulating spiking neural networks (SNNs) on CPUs or GPUs using PyTorch.
58. [pro_gan_pytorch](https://github.com/akanimax/pro_gan_pytorch): ProGAN package implemented as an extension of PyTorch's nn.Module.
59. 
[pytorch_geometric](https://github.com/rusty1s/pytorch_geometric): Geometric deep learning extension library for PyTorch.
60. [torchplus](https://github.com/knighton/torchplus): Implements the `+` operator on PyTorch modules, returning sequences.
61. [lagom](https://github.com/zuoxingdong/lagom): lagom: a light PyTorch infrastructure to quickly prototype reinforcement learning algorithms.
62. [torchbearer](https://github.com/ecs-vlc/torchbearer): A model training library for researchers using PyTorch.
63. [pytorch-maml-rl](https://github.com/tristandeleu/pytorch-maml-rl): Reinforcement learning with model-agnostic meta-learning in PyTorch.
64. [NALU](https://github.com/bharathgs/NALU): Basic PyTorch implementation of NAC/NALU from the paper "Neural Arithmetic Logic Units" by Trask et al. arxiv.org/pdf/1808.00508.pdf
66. [QuCumber](https://github.com/PIQuIL/QuCumber): Neural network many-body wavefunction reconstruction.
67. [magnet](https://github.com/MagNet-DL/magnet): Deep learning projects that build themselves http://magnet-dl.readthedocs.io/
68. [opencv_transforms](https://github.com/jbohnslav/opencv_transforms): OpenCV implementation of torchvision's image augmentations.
69. [fastai](https://github.com/fastai/fastai): The fast.ai deep learning library, lessons, and tutorials.
70. [pytorch-dense-correspondence](https://github.com/RobotLocomotion/pytorch-dense-correspondence): Code for "Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation" arxiv.org/pdf/1806.08756.pdf
71. [colorization-pytorch](https://github.com/richzhang/colorization-pytorch): PyTorch reimplementation of Interactive Deep Colorization richzhang.github.io/ideepcolor
72. [beauty-net](https://github.com/cms-flash/beauty-net): A simple, flexible, and extensible template for PyTorch. It's beautiful.
73. [OpenChem](https://github.com/Mariewelt/OpenChem): A deep learning toolkit for computational chemistry and drug design research mariewelt.github.io/OpenChem
74. [torchani](https://github.com/aiqm/torchani): Accurate neural network potentials on PyTorch aiqm.github.io/torchani
75. 
[PyTorch-LBFGS](https://github.com/hjmshi/PyTorch-LBFGS): A PyTorch implementation of L-BFGS.
76. [gpytorch](https://github.com/cornellius-gp/gpytorch): A highly efficient and modular implementation of Gaussian processes in PyTorch.
77. [hessian](https://github.com/mariogeiger/hessian): Hessian computation in PyTorch.
78. [vel](https://github.com/MillionIntegrals/vel): Velocity in deep-learning research.
79. [nonechucks](https://github.com/msamogh/nonechucks): Skip bad samples in your PyTorch DataLoader, use transforms as filters, and more!
80. [torchstat](https://github.com/Swall0w/torchstat): Model analyzer in PyTorch.
81. [QNNPACK](https://github.com/pytorch/QNNPACK): Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators.
82. [torchdiffeq](https://github.com/rtqichen/torchdiffeq): Differentiable ODE solvers with full GPU support and O(1)-memory backpropagation.
83. [redner](https://github.com/BachiLi/redner): A differentiable Monte Carlo path tracer.
84. [pixyz](https://github.com/masa-su/pixyz): A library for developing deep generative models in a more concise, intuitive, and extendable way.
85. [euclidesdb](https://github.com/perone/euclidesdb): A multi-model machine learning feature embedding database http://euclidesdb.readthedocs.io
86. [pytorch2keras](https://github.com/nerox8664/pytorch2keras): Convert PyTorch dynamic graphs to Keras models.
87. [salad](https://github.com/domainadaptation/salad): Semi-supervised learning and domain adaptation.
88. [netharn](https://github.com/Erotemic/netharn): Parameterized fit and prediction harnesses for PyTorch.
89. [dgl](https://github.com/dmlc/dgl): A Python package built on existing deep learning frameworks to ease deep learning on graphs. http://dgl.ai
90. [gandissect](https://github.com/CSAILVision/gandissect): PyTorch-based tools for visualizing and understanding the neurons of a GAN. gandissect.csail.mit.edu
91. [delira](https://github.com/justusschock/delira): A lightweight framework for fast prototyping and training of deep neural networks in medical imaging delira.rtfd.io
92. [mushroom](https://github.com/AIRLab-POLIMI/mushroom): A Python library for reinforcement learning experiments.
93. 
[Xlearn](https://github.com/thuml/Xlearn): Transfer learning library.
94. [geoopt](https://github.com/ferrine/geoopt): Riemannian adaptive optimization methods with PyTorch optimizers.
95. [vegans](https://github.com/unit8co/vegans): A PyTorch library providing various existing GANs.
96. [torchgeometry](https://github.com/arraiyopensource/torchgeometry): TGM: PyTorch geometry.
97. [AdverTorch](https://github.com/BorealisAI/advertorch): A toolbox for adversarial robustness (attack/defense/training) research.
98. [AdaBound](https://github.com/Luolc/AdaBound): An optimizer that trains as fast as Adam and as good as SGD.
99. [fenchel-young-losses](https://github.com/mblondel/fenchel-young-losses): Probabilistic classification in PyTorch/TensorFlow/scikit-learn with Fenchel-Young losses.
100. [pytorch-OpCounter](https://github.com/Lyken17/pytorch-OpCounter): Count the FLOPs of your PyTorch model.
101. [Tor10](https://github.com/kaihsin/Tor10): A generic tensor-network library designed for quantum simulation, based on PyTorch.
102. [Catalyst](https://github.com/catalyst-team/catalyst): High-level utils for PyTorch DL & RL research. Focused on reproducibility, fast experimentation, and code/idea reuse, so researchers can concentrate on innovation rather than rewriting training loops.
103. [Ax](https://github.com/facebook/Ax): Adaptive experimentation platform.
104. [pywick](https://github.com/achaiah/pywick): High-level, batteries-included neural network training library for PyTorch.
105. [torchgpipe](https://github.com/kakaobrain/torchgpipe): A GPipe implementation in PyTorch torchgpipe.readthedocs.io
106. [hub](https://github.com/pytorch/hub): PyTorch Hub, a pretrained model repository designed to facilitate research reproducibility.
107. [pytorch-lightning](https://github.com/williamFalcon/pytorch-lightning): Rapid research framework for PyTorch; the researcher's version of Keras.
109. [tensorwatch](https://github.com/microsoft/tensorwatch): Debugging, monitoring, and visualization for deep learning and reinforcement learning from Microsoft Research.
110. 
[wavetorch](https://github.com/fancompute/wavetorch): Numerically solving and backpropagating through the wave equation arxiv.org/abs/1904.12831
111. [diffdist](https://github.com/ag14774/diffdist): A Python library for PyTorch extending the default `torch.autograd` functionality with support for differentiable communication between processes.
112. [torchprof](https://github.com/awwong1/torchprof): A minimal-dependency library for layer-by-layer profiling of PyTorch models.
113. [osqpth](https://github.com/oxfordcontrol/osqpth): The differentiable OSQP solver layer for PyTorch.
114. [mctorch](https://github.com/mctorch/mctorch): A manifold optimization library for deep learning.
115. [pytorch-hessian-eigenthings](https://github.com/noahgolmant/pytorch-hessian-eigenthings): Efficient PyTorch Hessian eigendecomposition using Hessian-vector products and stochastic power iteration.
116. [MinkowskiEngine](https://github.com/StanfordVL/MinkowskiEngine): Minkowski Engine is an auto-differentiation library for generalized sparse convolutions and high-dimensional sparse tensors.
117. [pytorch-cpp-rl](https://github.com/Omegastick/pytorch-cpp-rl): PyTorch C++ reinforcement learning.
118. [pytorch-toolbelt](https://github.com/BloodAxe/pytorch-toolbelt): PyTorch extensions for fast R&D prototyping and Kaggle competitions.
119. [argus-tensor-stream](https://github.com/Fonbet/argus-tensor-stream): A library for real-time video stream decoding to CUDA memory tensorstream.argus-ai.com
120. [macarico](https://github.com/hal3/macarico): Learning to search in PyTorch.
121. [rlpyt](https://github.com/astooke/rlpyt): Reinforcement learning in PyTorch.
122. [pywarm](https://github.com/blue-season/pywarm): A cleaner way to build neural networks for PyTorch. blue-season.github.io/pywarm
123. [learn2learn](https://github.com/learnables/learn2learn): A PyTorch meta-learning framework for researchers http://learn2learn.net
124. [torchbeast](https://github.com/facebookresearch/torchbeast): A PyTorch platform for distributed reinforcement learning.
125. [higher](https://github.com/facebookresearch/higher): A PyTorch library allowing users to obtain higher-order gradients over losses spanning entire training loops rather than individual training steps.
126. 
[Torchelie](https://github.com/Vermeille/Torchelie/): Torchelie is a set of utility functions, layers, losses, models, trainers, and more for PyTorch. torchelie.readthedocs.org
127. [CrypTen](https://github.com/facebookresearch/CrypTen): CrypTen is a privacy-preserving machine learning framework built on PyTorch that lets researchers and developers train models using encrypted data. CrypTen currently supports secure multi-party computation as its encryption mechanism.
128. [cvxpylayers](https://github.com/cvxgrp/cvxpylayers): A Python library for constructing differentiable convex optimization layers in PyTorch.
129. [RepDistiller](https://github.com/HobbitLong/RepDistiller): Contrastive Representation Distillation (CRD), and benchmarks of recent knowledge distillation methods.
130. [kaolin](https://github.com/NVIDIAGameWorks/kaolin): A PyTorch library aimed at accelerating 3D deep learning research.
131. [PySNN](https://github.com/BasBuller/PySNN): Efficient spiking neural network framework, built on PyTorch for GPU acceleration.
132. [sparktorch](https://github.com/dmmiller612/sparktorch): Train and run PyTorch models on Apache Spark.
133. [pytorch-metric-learning](https://github.com/KevinMusgrave/pytorch-metric-learning): The easiest way to use metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
134. [autonomous-learning-library](https://github.com/cpnota/autonomous-learning-library): A PyTorch library for building deep reinforcement learning agents.
135. [flambe](https://github.com/asappresearch/flambe): An ML framework to accelerate research and its path to production. flambe.ai
136. [pytorch-optimizer](https://github.com/jettify/pytorch-optimizer): A collection of modern optimization algorithms, including AccSGD, AdaBound, AdaMod, DiffGrad, Lamb, RAdam, and Yogi.
137. [PyTorch-VAE](https://github.com/AntixK/PyTorch-VAE): A collection of variational autoencoders (VAEs) in PyTorch.
138. [ray](https://github.com/ray-project/ray): A fast and simple framework for building and running distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library. ray.io
139. [Pytorch Geometric Temporal](https://github.com/benedekrozemberczki/pytorch_geometric_temporal): A temporal extension library for PyTorch Geometric.
140. [Poutyne](https://github.com/GRAAL-Research/poutyne): A Keras-like framework for PyTorch that handles much of the boilerplate code needed to train neural networks.
141. 
[Pytorch-Toolbox](https://github.com/PistonY/torch-toolbox): A toolbox for PyTorch aimed at making PyTorch code easier to write, more readable, and more concise.
142. [Pytorch-contrib](https://github.com/pytorch/contrib): Contains reviewed implementations of ideas from recent machine learning papers.
143. [EfficientNet PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch): A PyTorch reimplementation of EfficientNet that is consistent with the original layer by layer, with pretrained models and examples.
144. [PyTorch/XLA](https://github.com/pytorch/xla): A Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework with Cloud TPUs.
145. [webdataset](https://github.com/tmbdev/webdataset): WebDataset is a PyTorch Dataset (IterableDataset) implementation providing efficient access to datasets stored in POSIX tar archives.
146. [volksdep](https://github.com/Media-Smart/volksdep): An open-source toolbox for deploying and accelerating PyTorch, ONNX, and TensorFlow models with TensorRT.
147. [PyTorch-StudioGAN](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN): StudioGAN is a PyTorch library providing implementations of representative generative adversarial networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer a unified platform for comparing and analyzing modern GANs, so that machine learning researchers can readily evaluate new ideas.
148. [torchdrift](https://github.com/torchdrift/torchdrift/): Drift detection library.
149. [accelerate](https://github.com/huggingface/accelerate): A simple way to train and use PyTorch models with multi-GPU, TPU, or mixed precision.
150. [lightning-transformers](https://github.com/PyTorchLightning/lightning-transformers): A flexible interface for high-performance research using SOTA transformers with PyTorch Lightning, Transformers, and Hydra.
151. [Flower](https://flower.dev/): A unified approach to federated learning, analytics, and evaluation that allows federating any machine learning workload.
152. [lightning-flash](https://github.com/PyTorchLightning/lightning-flash): Flash is a collection of tasks for fast prototyping, baselining, and fine-tuning scalable deep learning models, built on PyTorch Lightning.
153. [Pytorch Geometric Signed Directed](https://github.com/SherylHYX/pytorch_geometric_signed_directed): A signed and directed extension library for PyTorch Geometric.
154. [Koila](https://github.com/rentruewang/koila): A simple PyTorch wrapper that prevents CUDA out-of-memory errors.
155. 
[Renate](https://github.com/awslabs/renate): A library for real-world continual learning.
156. [ANEE](https://github.com/abkmystery/ANEE): Adaptive Neural Execution Engine for PyTorch transformers. Provides per-token dynamic layer skipping, profiling-based gating, and KV-cache-safe sparse inference.

## Tutorials, books, & examples

1. **[Practical Pytorch](https://github.com/spro/practical-pytorch)**: Tutorials explaining different RNN models.
2. [DeepLearningForNLPInPytorch](https://pytorch.org/tutorials/beginner/deep_learning_nlp_tutorial.html): An IPython Notebook tutorial on deep learning, with an emphasis on natural language processing.
3. [pytorch-tutorial](https://github.com/yunjey/pytorch-tutorial): A tutorial on deep learning with PyTorch for researchers.
4. [pytorch-exercises](https://github.com/keon/pytorch-exercises): A collection of PyTorch exercises.
5. [pytorch tutorials](https://github.com/pytorch/tutorials): Various PyTorch tutorials.
6. [pytorch examples](https://github.com/pytorch/examples): A repository showcasing examples of using PyTorch.
7. [pytorch practice](https://github.com/napsternxg/pytorch-practice): Some example scripts in PyTorch.
8. [pytorch mini tutorials](https://github.com/vinhkhuc/PyTorch-Mini-Tutorials): Minimal tutorials for PyTorch adapted from Alec Radford's Theano tutorials.
9. [pytorch text classification](https://github.com/xiayandi/Pytorch_text_classification): A simple CNN-based text classification model implemented in PyTorch.
10. [cats vs dogs](https://github.com/desimone/pytorch-cat-vs-dogs): Example of network fine-tuning in PyTorch for the Kaggle competition "Dogs vs. Cats Redux: Kernels Edition". Currently #27 (0.05074) on the leaderboard.
11. [convnet](https://github.com/eladhoffer/convNet.pytorch): A complete training example of deep convolutional networks on various datasets (ImageNet, CIFAR-10, CIFAR-100, MNIST).
12. [pytorch-generative-adversarial-networks](https://github.com/mailmahee/pytorch-generative-adversarial-networks): A simple generative adversarial network (GAN) implemented in PyTorch.
13. [pytorch containers](https://github.com/amdegroot/pytorch-containers): This repository aims to help former Torch users transition more smoothly to the container-free world of PyTorch by providing a list of PyTorch implementations of Torch Table Layers.
14. 
[T-SNE in pytorch](https://github.com/cemoody/topicsne): t-SNE experiments in PyTorch.
15. [AAE_pytorch](https://github.com/fducau/AAE_pytorch): Adversarial autoencoders implemented in PyTorch.
16. [Kind_PyTorch_Tutorial](https://github.com/GunhoChoi/Kind_PyTorch_Tutorial): A kind PyTorch tutorial for beginners.
17. [pytorch-poetry-gen](https://github.com/justdark/pytorch-poetry-gen): A character-level RNN based on PyTorch.
18. [pytorch-REINFORCE](https://github.com/JamesChuanggg/pytorch-REINFORCE): PyTorch implementation of the REINFORCE algorithm. The repo supports both continuous and discrete environments in OpenAI Gym.
19. **[PyTorch-Tutorial](https://github.com/MorvanZhou/PyTorch-Tutorial)**: Build your neural network easily and quickly. https://morvanzhou.github.io/tutorials/
20. [pytorch-intro](https://github.com/joansj/pytorch-intro): A couple of scripts illustrating how to do CNNs and RNNs in PyTorch.
21. [pytorch-classification](https://github.com/bearpaw/pytorch-classification): A unified framework for image classification on CIFAR-10/100 and ImageNet.
22. [pytorch_notebooks - hardmaru](https://github.com/hardmaru/pytorch_notebooks): Random tutorials written in NumPy and PyTorch.
23. [pytorch_tutoria-quick](https://github.com/soravux/pytorch_tutorial): Quick PyTorch introduction and tutorial. Targets computer vision, graphics, and machine learning researchers eager to try a new framework.
24. [Pytorch_fine_tuning_Tutorial](https://github.com/Spandan-Madan/Pytorch_fine_tuning_Tutorial): A short tutorial on fine-tuning and transfer learning in PyTorch.
25. [pytorch_exercises](https://github.com/Kyubyong/pytorch_exercises): PyTorch exercises.
26. [traffic-sign-detection](https://github.com/soumith/traffic-sign-detection-homework): nyu-cv-fall-2017 example.
27. [mss_pytorch](https://github.com/Js-Mim/mss_pytorch): Singing voice separation via recurrent inference and skip-filtering connections, implemented in PyTorch. Demo: js-mim.github.io/mss_pytorch
28. [DeepNLP-models-Pytorch](https://github.com/DSKSD/DeepNLP-models-Pytorch): PyTorch implementations of various deep NLP models from Stanford's CS-224n course.
29. 
[Mila introductory tutorials](https://github.com/mila-udem/welcome_tutorials): Various tutorials for welcoming new students at MILA.
30. [pytorch.rl.learning](https://github.com/moskomule/pytorch.rl.learning): For learning reinforcement learning using PyTorch.
31. [minimal-seq2seq](https://github.com/keon/seq2seq): A minimal sequence-to-sequence model with attention for neural machine translation, implemented in PyTorch.
32. [tensorly-notebooks](https://github.com/JeanKossaifi/tensorly-notebooks): Tensor methods in Python with the TensorLy library. tensorly.github.io/dev
33. [pytorch_bits](https://github.com/jpeg729/pytorch_bits): Examples related to time-series prediction.
34. [skip-thoughts](https://github.com/sanyam5/skip-thoughts): Skip-Thought vectors implemented in PyTorch.
35. [video-caption-pytorch](https://github.com/xiadingZ/video-caption-pytorch): PyTorch code for video captioning.
36. [Capsule-Network-Tutorial](https://github.com/higgsfield/Capsule-Network-Tutorial): An easy-to-follow PyTorch capsule network tutorial.
37. [code-of-learn-deep-learning-with-pytorch](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch): Companion code for the book "Learn Deep Learning with PyTorch". item.jd.com/17915495606.html
38. [RL-Adventure](https://github.com/higgsfield/RL-Adventure): An easy-to-follow, step-by-step deep Q-learning tutorial in PyTorch with clean, readable code.
39. [accelerated_dl_pytorch](https://github.com/hpcgarage/accelerated_dl_pytorch): Accelerated deep learning with PyTorch at Jupyter Day Atlanta II.
40. [RL-Adventure-2](https://github.com/higgsfield/RL-Adventure-2): PyTorch tutorials covering: actor-critic / proximal policy optimization / ACER / DDPG / twin dueling DDPG / soft actor-critic / generative adversarial imitation learning / hindsight experience replay.
41. [Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch)](https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f)
42. [Adversarial Autoencoders with PyTorch](https://blog.paperspace.com/adversarial-autoencoders-with-pytorch/)
43. 
[Transfer Learning using PyTorch](https://medium.com/@vishnuvig/transfer-learning-using-pytorch-4c3475f4495)
44. [How to implement a YOLO object detector in PyTorch](https://blog.paperspace.com/how-to-implement-a-yolo-object-detector-in-pytorch/)
45. [PyTorch for Recommenders 101](http://blog.fastforwardlabs.com/2018/04/10/pytorch-for-recommenders-101.html)
46. [PyTorch for NumPy users](https://github.com/wkentaro/pytorch-for-numpy-users)
47. [PyTorch Tutorial](http://www.pytorchtutorial.com/): PyTorch tutorials in Chinese.
48. [grokking-pytorch](https://github.com/Kaixhin/grokking-pytorch): A guide to PyTorch.
49. [PyTorch-Deep-Learning-Minicourse](https://github.com/Atcold/PyTorch-Deep-Learning-Minicourse): A minicourse in deep learning with PyTorch.
50. [pytorch-custom-dataset-examples](https://github.com/utkuozbulak/pytorch-custom-dataset-examples): Some custom dataset examples for PyTorch.
51. [Multiplicative LSTM for sequence-based recommenders](https://florianwilhelm.info/2018/08/multiplicative_LSTM_for_sequence_based_recos/)
52. [deeplearning.ai-pytorch](https://github.com/furkanu/deeplearning.ai-pytorch): PyTorch implementations of Coursera's Deep Learning (deeplearning.ai) specialization.
53. [MNIST_Pytorch_python_and_capi](https://github.com/tobiascz/MNIST_Pytorch_python_and_capi): An example of how to train an MNIST network in Python and run it in C++ with PyTorch 1.0.
54. [torch_light](https://github.com/ne7ermore/torch_light): Tutorials and examples covering reinforcement learning, NLP, and CV.
55. [portrain-gan](https://github.com/dribnet/portrain-gan): PyTorch code to decode (and almost encode) latents from art-DCGAN's Portrait GAN.
56. [mri-analysis-pytorch](https://github.com/omarsar/mri-analysis-pytorch): MRI analysis using PyTorch and MedicalTorch.
57. 
[cifar10-fast](https://github.com/davidcpage/cifar10-fast): Demonstrates how to train a small ResNet on CIFAR10 to 94% test accuracy in 79 seconds, as described in [this blog series](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet/).
58. [Intro to Deep Learning with PyTorch](https://in.udacity.com/course/deep-learning-pytorch--ud188): A free course by Udacity and Facebook, with a good introduction to PyTorch and an interview with Soumith Chintala, one of the original authors of PyTorch.
59. [pytorch-sentiment-analysis](https://github.com/bentrevett/pytorch-sentiment-analysis): Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
60. [pytorch-image-models](https://github.com/rwightman/pytorch-image-models): PyTorch image models, scripts, and pretrained weights: (SE)ResNet/ResNeXT, DPN, EfficientNet, MobileNet-V3/V2/V1, MNASNet, Single-Path NAS, FBNet, and more.
61. [CIFAR-ZOO](https://github.com/BIGBALLON/CIFAR-ZOO): PyTorch implementations of several CNN architectures and improvement methods, with state-of-the-art results.
62. [d2l-pytorch](https://github.com/dsgiitr/d2l-pytorch): An attempt to adapt the code from the book "Dive into Deep Learning" (Berkeley STAT 157, Spring 2019) to PyTorch.
63. [thinking-in-tensors-writing-in-pytorch](https://github.com/stared/thinking-in-tensors-writing-in-pytorch): Thinking in tensors, writing in PyTorch (a hands-on deep learning introduction).
64. [NER-BERT-pytorch](https://github.com/lemonhu/NER-BERT-pytorch): A PyTorch solution to the named entity recognition task using Google AI's pretrained BERT model.
65. [pytorch-sync-batchnorm-example](https://github.com/dougsouza/pytorch-sync-batchnorm-example): How to use cross-replica / synchronized batch normalization in PyTorch.
66. [SentimentAnalysis](https://github.com/barissayil/SentimentAnalysis): A sentiment analysis neural network trained by fine-tuning BERT on the Stanford Sentiment Treebank, thanks to [Hugging Face](https://huggingface.co/transformers/)'s Transformers library.
67. [pytorch-cpp](https://github.com/prabhuomkar/pytorch-cpp): C++ implementations of the PyTorch tutorials for deep learning researchers (based on the Python tutorials in [yunjey/pytorch-tutorial](https://github.com/yunjey/pytorch-tutorial)).
68. 
[Deep Learning with PyTorch: Zero to GANs](https://jovian.ml/aakashns/collections/deep-learning-with-pytorch): Interactive and coding-focused tutorials introducing deep learning with PyTorch ([video](https://www.youtube.com/watch?v=GIsg-ZUy0MY)).
69. [Deep Learning with PyTorch](https://www.manning.com/books/deep-learning-with-pytorch): This book teaches you how to implement deep learning algorithms with Python and PyTorch, and includes a case study: building an algorithm capable of detecting malignant lung tumors using CT scans.
70. [Serverless Machine Learning in Action with PyTorch and AWS](https://www.manning.com/books/serverless-machine-learning-in-action): A guide to bringing your experimental PyTorch machine learning code to production using serverless capabilities from major cloud providers like AWS, Azure, or GCP.
71. [LabML NN](https://github.com/lab-ml/nn): A collection of PyTorch implementations of neural network architectures and algorithms with side-by-side annotations.
72. [Federate your PyTorch example with Flower](https://github.com/adap/flower/tree/main/examples/pytorch_from_centralized_to_federated): This example shows how an existing centralized PyTorch machine learning project can be federated with Flower. It uses the Cifar-10 dataset and a convolutional neural network (CNN).
73. [The Math Behind Artificial Intelligence](https://www.freecodecamp.org/news/the-math-behind-artificial-intelligence-book): A free freeCodeCamp book explaining the math behind AI in plain language from an engineering perspective. It covers linear algebra, calculus, probability and statistics, and optimization, with analogies, real-world applications, and Python code examples.

## Paper implementations

1. [google_evolution](https://github.com/neuralix/google_evolution): This implements one of the result networks from "Large-scale evolution of image classifiers" by Esteban Real et al.
2. [pyscatwave](https://github.com/edouardoyallon/pyscatwave): Fast Scattering Transform with CuPy/PyTorch; read the paper [here](https://arxiv.org/abs/1703.08961).
3. [scalingscattering](https://github.com/edouardoyallon/scalingscattering): Scaling the Scattering Transform: Deep Hybrid Networks.
4. [deep-auto-punctuation](https://github.com/episodeyang/deep-auto-punctuation): A pytorch implementation of auto-punctuation learned character by character.
5. 
[Realtime_Multi-Person_Pose_Estimation](https://github.com/tensorboy/pytorch_Realtime_Multi-Person_Pose_Estimation): A pytorch version of Realtime_Multi-Person_Pose_Estimation; the original code is [here](https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation).
6. [PyTorch-value-iteration-networks](https://github.com/onlytailei/PyTorch-value-iteration-networks): PyTorch implementation of the Value Iteration Networks (NIPS '16) paper.
7. [pytorch_Highway](https://github.com/analvikingur/pytorch_Highway): Highway network implemented in pytorch.
8. [pytorch_NEG_loss](https://github.com/analvikingur/pytorch_NEG_loss): NEG loss implemented in pytorch.
9. [pytorch_RVAE](https://github.com/analvikingur/pytorch_RVAE): Recurrent Variational Autoencoder that generates sequential data, implemented in pytorch.
10. [pytorch_TDNN](https://github.com/analvikingur/pytorch_TDNN): Time Delayed NN implemented in pytorch.
11. [eve.pytorch](https://github.com/moskomule/eve.pytorch): An implementation of the Eve optimizer, proposed in "Improving Stochastic Gradient Descent with Feedback" (Koushik and Hayashi, 2016).
12. [e2e-model-learning](https://github.com/locuslab/e2e-model-learning): Task-based end-to-end model learning.
13. [pix2pix-pytorch](https://github.com/mrzhu-cool/pix2pix-pytorch): PyTorch implementation of "Image-to-Image Translation Using Conditional Adversarial Networks".
14. [Single Shot MultiBox Detector](https://github.com/amdegroot/ssd.pytorch): A PyTorch implementation of Single Shot MultiBox Detector.
15. [DiscoGAN](https://github.com/carpedm20/DiscoGAN-pytorch): PyTorch implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks".
16. 
[official DiscoGAN implementation](https://github.com/SKTBrain/DiscoGAN): Official implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks".
17. [pytorch-es](https://github.com/atgambardella/pytorch-es): A PyTorch implementation of [Evolution Strategies](https://arxiv.org/abs/1703.03864).
18. [piwise](https://github.com/bodokaiser/piwise): Pixel-wise segmentation on the VOC2012 dataset using pytorch.
19. [pytorch-dqn](https://github.com/transedward/pytorch-dqn): Deep Q-Learning Network in pytorch.
20. [neuraltalk2-pytorch](https://github.com/ruotianluo/neuraltalk2.pytorch): Image captioning model in pytorch (finetunable CNN in branch with_finetune).
21. [vnet.pytorch](https://github.com/mattmacy/vnet.pytorch): A Pytorch implementation of "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation".
22. [pytorch-fcn](https://github.com/wkentaro/pytorch-fcn): PyTorch implementation of Fully Convolutional Networks.
23. [WideResNets](https://github.com/xternalz/WideResNet-pytorch): WideResNets for CIFAR10/100 implemented in PyTorch. This implementation requires less GPU memory than the official Torch implementation: https://github.com/szagoruyko/wide-residual-networks.
24. [pytorch_highway_networks](https://github.com/c0nn3r/pytorch_highway_networks): Highway networks implemented in PyTorch.
25. [pytorch-NeuCom](https://github.com/ypxie/pytorch-NeuCom): Pytorch implementation of DeepMind's differentiable neural computer paper.
26. [captionGen](https://github.com/eladhoffer/captionGen): Generate captions for an image using PyTorch.
27. 
[AnimeGAN](https://github.com/jayleicn/animeGAN): A simple PyTorch implementation of Generative Adversarial Networks, focused on anime face drawing.
28. [Cnn-text classification](https://github.com/Shawn1993/cnn-text-classification-pytorch): An implementation of Kim's "Convolutional Neural Networks for Sentence Classification" paper in PyTorch.
29. [deepspeech2](https://github.com/SeanNaren/deepspeech.pytorch): Implementation of DeepSpeech2 using Baidu Warp-CTC. Creates a network based on the DeepSpeech2 architecture, trained with the CTC activation function.
30. [seq2seq](https://github.com/MaximumEntropy/Seq2Seq-PyTorch): This repository contains implementations of Sequence to Sequence (Seq2Seq) models in PyTorch.
31. [Asynchronous Advantage Actor-Critic in PyTorch](https://github.com/rarilurelo/pytorch_a3c): A PyTorch implementation of A3C as described in "Asynchronous Methods for Deep Reinforcement Learning". Since PyTorch has an easy way to control shared memory across processes, asynchronous methods like A3C can be implemented easily.
32. [densenet](https://github.com/bamos/densenet.pytorch): A PyTorch implementation of the DenseNet-BC architecture as described in the paper "Densely Connected Convolutional Networks" by G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten. This implementation gets a CIFAR-10+ error rate of 4.77 with a 100-layer DenseNet-BC and a growth rate of 12. The official implementation and links to many other third-party implementations are available in the liuzhuang13/DenseNet repo on GitHub.
33. [nninit](https://github.com/alykhantejani/nninit): Weight initialization schemes for PyTorch nn.Modules. This is a port of the popular nninit for Torch7 by @kaixhin.
34. 
[faster rcnn](https://github.com/longcw/faster_rcnn_pytorch): A PyTorch implementation of Faster RCNN, based mainly on py-faster-rcnn and TFFRCNN. For details about R-CNN, please refer to the paper "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" by Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.
35. [doomnet](https://github.com/akolishchak/doom-net-pytorch): A PyTorch version of Doom-net, implementing some RL models in the ViZDoom environment.
36. [flownet](https://github.com/ClementPinard/FlowNetPytorch): Pytorch implementation of FlowNet by Dosovitskiy et al.
37. [squeezenet](https://github.com/gsp-27/pytorch_Squeezenet): Implementation of SqueezeNet in pytorch; pretrained models on CIFAR-10 and block connections are planned.
38. [WassersteinGAN](https://github.com/martinarjovsky/WassersteinGAN): WassersteinGAN in pytorch.
39. [optnet](https://github.com/locuslab/optnet): This repository, by Brandon Amos and J. Zico Kolter, contains the PyTorch source code to reproduce the experiments in the paper "OptNet: Differentiable Optimization as a Layer in Neural Networks".
40. [qp solver](https://github.com/locuslab/qpth): A fast and differentiable QP solver for PyTorch, crafted by Brandon Amos and J. Zico Kolter.
41. [Continuous Deep Q-Learning with Model-based Acceleration](https://github.com/ikostrikov/pytorch-naf): Reimplementation of "Continuous Deep Q-Learning with Model-based Acceleration".
42. [Learning to learn by gradient descent by gradient descent](https://github.com/ikostrikov/pytorch-meta-optimizer): PyTorch implementation of "Learning to learn by gradient descent by gradient descent".
43. 
[fast-neural-style](https://github.com/darkstar112358/fast-neural-style): A pytorch implementation of fast-neural-style. The model uses the method described in [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://arxiv.org/abs/1603.08155) along with Instance Normalization.
44. [PytorchNeuralStyleTransfer](https://github.com/leongatys/PytorchNeuralStyleTransfer): Implementation of Neural Style Transfer in Pytorch.
45. [Fast Neural Style for Image Style Transform by Pytorch](https://github.com/bengxy/FastNeuralStyle): Fast neural style for image style transfer in Pytorch.
46. [neural style transfer](https://github.com/alexis-jacq/Pytorch-Tutorials): An introduction to PyTorch through the Neural-Style algorithm (https://arxiv.org/abs/1508.06576) developed by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge.
47. [VIN_PyTorch_Visdom](https://github.com/zuoxingdong/VIN_PyTorch_Visdom): PyTorch implementation of Value Iteration Networks (VIN): clean, simple, and modular, with visualization in Visdom.
48. [YOLO2](https://github.com/longcw/yolo2-pytorch): YOLOv2 in PyTorch.
49. [attention-transfer](https://github.com/szagoruyko/attention-transfer): Attention transfer in pytorch; read the paper [here](https://arxiv.org/abs/1612.03928).
50. [SVHNClassifier](https://github.com/potterhsu/SVHNClassifier-PyTorch): A PyTorch implementation of [Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks](https://arxiv.org/pdf/1312.6082.pdf).
51. [pytorch-deform-conv](https://github.com/oeway/pytorch-deform-conv): PyTorch implementation of Deformable Convolution.
52. 
[BEGAN-pytorch](https://github.com/carpedm20/BEGAN-pytorch): PyTorch implementation of [BEGAN](https://arxiv.org/abs/1703.10717): Boundary Equilibrium Generative Adversarial Networks.
53. [treelstm.pytorch](https://github.com/dasguptar/treelstm.pytorch): Tree LSTM implementation in PyTorch.
54. [AGE](https://github.com/DmitryUlyanov/AGE): Code for the paper "Adversarial Generator-Encoder Networks" by Dmitry Ulyanov, Andrea Vedaldi and Victor Lempitsky, which can be found [here](http://sites.skoltech.ru/app/data/uploads/sites/25/2017/04/AGE.pdf).
55. [ResNeXt.pytorch](https://github.com/prlz77/ResNeXt.pytorch): Reproduces ResNeXt ("Aggregated Residual Transformations for Deep Neural Networks") with pytorch.
56. [pytorch-rl](https://github.com/jingweiz/pytorch-rl): Deep reinforcement learning with pytorch & visdom.
57. [Deep-Leafsnap](https://github.com/sujithv28/Deep-Leafsnap): LeafSnap replicated using deep neural networks, to compare accuracy against traditional computer vision methods.
58. [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix): PyTorch implementation for both unpaired and paired image-to-image translation.
59. [A3C-PyTorch](https://github.com/onlytailei/A3C-PyTorch): PyTorch implementation of Advantage Async Actor-Critic algorithms (A3C).
60. [pytorch-value-iteration-networks](https://github.com/kentsommer/pytorch-value-iteration-networks): Pytorch implementation of Value Iteration Networks (NIPS 2016 best paper).
61. [PyTorch-Style-Transfer](https://github.com/zhanghang1989/PyTorch-Style-Transfer): PyTorch implementation of "Multi-style Generative Network for Real-time Transfer".
62. 
[pytorch-deeplab-resnet](https://github.com/isht7/pytorch-deeplab-resnet): DeepLab ResNet model in pytorch.
63. [pointnet.pytorch](https://github.com/fxia22/pointnet.pytorch): Pytorch implementation of "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" (https://arxiv.org/abs/1612.00593).
64. **[pytorch-playground](https://github.com/aaron-xichen/pytorch-playground): Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)**.
65. [pytorch-dnc](https://github.com/jingweiz/pytorch-dnc): Neural Turing Machine (NTM) & Differentiable Neural Computer (DNC) with pytorch & visdom.
66. [pytorch_image_classifier](https://github.com/jinfagang/pytorch_image_classifier): A minimal but practical image classifier pipeline in pytorch; fine-tunes ResNet18 to 99% accuracy on a small custom dataset.
67. [mnist-svhn-transfer](https://github.com/yunjey/mnist-svhn-transfer): PyTorch implementation of CycleGAN and SGAN for domain transfer (minimal).
68. [pytorch-yolo2](https://github.com/marvis/pytorch-yolo2): YOLOv2 in pytorch.
69. [dni](https://github.com/andrewliao11/dni.pytorch): Implementation of "Decoupled Neural Interfaces using Synthetic Gradients" in Pytorch.
70. [wgan-gp](https://github.com/caogang/wgan-gp): A pytorch implementation of the paper "Improved Training of Wasserstein GANs".
71. [pytorch-seq2seq-intent-parsing](https://github.com/spro/pytorch-seq2seq-intent-parsing): Intent parsing and slot filling in PyTorch with seq2seq + attention.
72. [pyTorch_NCE](https://github.com/demelin/pyTorch_NCE): An implementation of the Noise Contrastive Estimation algorithm for pyTorch. Working, yet not very efficient.
73. 
[molencoder](https://github.com/cxhernandez/molencoder): Molecular AutoEncoder in PyTorch.
74. [GAN-weight-norm](https://github.com/stormraiser/GAN-weight-norm): Code for "On the Effects of Batch and Weight Normalization in Generative Adversarial Networks".
75. [lgamma](https://github.com/rachtsingh/lgamma): Implementations of polygamma, lgamma, and beta functions for PyTorch.
76. [bigBatch](https://github.com/eladhoffer/bigBatch): Code used to generate the results in "Train longer, generalize better: closing the generalization gap in large batch training of neural networks".
77. [rl_a3c_pytorch](https://github.com/dgriff777/rl_a3c_pytorch): Reinforcement learning with an A3C LSTM implementation for Atari 2600.
78. [pytorch-retraining](https://github.com/ahirner/pytorch-retraining): Transfer learning shootout for PyTorch's model zoo (torchvision).
79. [nmp_qc](https://github.com/priba/nmp_qc): Neural Message Passing for Computer Vision.
80. [grad-cam](https://github.com/jacobgil/pytorch-grad-cam): Pytorch implementation of Grad-CAM.
81. [pytorch-trpo](https://github.com/mjacar/pytorch-trpo): PyTorch implementation of Trust Region Policy Optimization (TRPO).
82. [pytorch-explain-black-box](https://github.com/jacobgil/pytorch-explain-black-box): PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation".
83. [vae_vpflows](https://github.com/jmtomczak/vae_vpflows): Code in PyTorch for the convex combination linear IAF and the Householder Flow, J.M. Tomczak & M. Welling (https://jmtomczak.github.io/deebmed.html).
84. 
[relational-networks](https://github.com/kimhc6028/relational-networks): Pytorch implementation of "A simple neural network module for relational reasoning" (Relational Networks) (https://arxiv.org/pdf/1706.01427.pdf).
85. [vqa.pytorch](https://github.com/Cadene/vqa.pytorch): Visual Question Answering in Pytorch.
86. [end-to-end-negotiator](https://github.com/facebookresearch/end-to-end-negotiator): "Deal or No Deal? End-to-End Learning for Negotiation Dialogues".
87. [odin-pytorch](https://github.com/ShiyuLiang/odin-pytorch): Principled Detection of Out-of-Distribution Examples in Neural Networks.
88. [FreezeOut](https://github.com/ajbrock/FreezeOut): Accelerate Neural Net Training by Progressively Freezing Layers.
89. [ARAE](https://github.com/jakezhaojb/ARAE): Code for the paper "Adversarially Regularized Autoencoders for Generating Discrete Structures" by Zhao, Kim, Zhang, Rush and LeCun.
90. [forward-thinking-pytorch](https://github.com/kimhc6028/forward-thinking-pytorch): Pytorch implementation of "Forward Thinking: Building and Training Neural Networks One Layer at a Time" (https://arxiv.org/pdf/1706.02480.pdf).
91. [context_encoder_pytorch](https://github.com/BoyuanJiang/context_encoder_pytorch): PyTorch implementation of Context Encoders.
92. [attention-is-all-you-need-pytorch](https://github.com/jadore801120/attention-is-all-you-need-pytorch): A PyTorch implementation of the Transformer model in "Attention is All You Need".
93. [OpenFacePytorch](https://github.com/thnkim/OpenFacePytorch): PyTorch module to use OpenFace's nn4.small2.v1.t7 model.
94. 
[neural-combinatorial-rl-pytorch](https://github.com/pemami4911/neural-combinatorial-rl-pytorch): PyTorch implementation of "Neural Combinatorial Optimization with Reinforcement Learning".
95. [pytorch-nec](https://github.com/mjacar/pytorch-nec): PyTorch implementation of Neural Episodic Control (NEC).
96. [seq2seq.pytorch](https://github.com/eladhoffer/seq2seq.pytorch): Sequence-to-Sequence learning using PyTorch.
97. [Pytorch-Sketch-RNN](https://github.com/alexis-jacq/Pytorch-Sketch-RNN): A pytorch implementation of arxiv.org/abs/1704.03477.
98. [pytorch-pruning](https://github.com/jacobgil/pytorch-pruning): PyTorch implementation of [1611.06440] "Pruning Convolutional Neural Networks for Resource Efficient Inference".
99. [DrQA](https://github.com/hitvoice/DrQA): A pytorch implementation of "Reading Wikipedia to Answer Open-Domain Questions".
100. [YellowFin_Pytorch](https://github.com/JianGoForIt/YellowFin_Pytorch): Auto-tuning momentum SGD optimizer.
101. [samplernn-pytorch](https://github.com/deepsound-project/samplernn-pytorch): PyTorch implementation of "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model".
102. [AEGeAN](https://github.com/tymokvo/AEGeAN): Deeper DCGAN with AE stabilization.
103. [pytorch-SRResNet](https://github.com/twtygqyy/pytorch-SRResNet): Pytorch implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" (arXiv:1609.04802v2).
104. [vsepp](https://github.com/fartashf/vsepp): Code for the paper "VSE++: Improved Visual Semantic Embeddings".
105. [Pytorch-DPPO](https://github.com/alexis-jacq/Pytorch-DPPO): Pytorch implementation of Distributed Proximal Policy Optimization (arxiv.org/abs/1707.02286).
106. 
[UNIT](https://github.com/mingyuliutw/UNIT): PyTorch implementation of the coupled VAE-GAN algorithm for unsupervised image-to-image translation.
107. [efficient_densenet_pytorch](https://github.com/gpleiss/efficient_densenet_pytorch): A memory-efficient implementation of DenseNets.
108. [tsn-pytorch](https://github.com/yjxiong/tsn-pytorch): Temporal Segment Networks (TSN) in PyTorch.
109. [SMASH](https://github.com/ajbrock/SMASH): An experimental technique for efficiently exploring neural architectures.
110. [pytorch-retinanet](https://github.com/kuangliu/pytorch-retinanet): RetinaNet in PyTorch.
111. [biogans](https://github.com/aosokin/biogans): Implementation supporting the ICCV 2017 paper "GANs for Biological Image Synthesis".
112. [Semantic Image Synthesis via Adversarial Learning](https://github.com/woozzu/dong_iccv_2017): A PyTorch implementation of the ICCV 2017 paper "Semantic Image Synthesis via Adversarial Learning".
113. [fmpytorch](https://github.com/jmhessel/fmpytorch): A PyTorch implementation of a Factorization Machine module in Cython.
114. [ORN](https://github.com/ZhouYanzhao/ORN): A PyTorch implementation of the CVPR 2017 paper "Oriented Response Networks".
115. [pytorch-maml](https://github.com/katerakelly/pytorch-maml): PyTorch implementation of MAML (arxiv.org/abs/1703.03400).
116. [pytorch-generative-model-collections](https://github.com/znxlwm/pytorch-generative-model-collections): A collection of generative models in Pytorch.
117. [vqa-winner-cvprw-2017](https://github.com/markdtw/vqa-winner-cvprw-2017): Pytorch implementation of the winner of the VQA Challenge Workshop at CVPR'17.
118. 
[tacotron_pytorch](https://github.com/r9y9/tacotron_pytorch): PyTorch implementation of the Tacotron speech synthesis model.
119. [pspnet-pytorch](https://github.com/Lextal/pspnet-pytorch): PyTorch implementation of the PSPNet segmentation network.
120. [LM-LSTM-CRF](https://github.com/LiyuanLucasLiu/LM-LSTM-CRF): "Empower Sequence Labeling with Task-Aware Language Model" (http://arxiv.org/abs/1709.04109).
121. [face-alignment](https://github.com/1adrianb/face-alignment): Pytorch implementation of the ICCV 2017 paper "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)".
122. [DepthNet](https://github.com/ClementPinard/DepthNet): PyTorch DepthNet training on the Still Box dataset.
123. [EDSR-PyTorch](https://github.com/thstkdgus35/EDSR-PyTorch): PyTorch version of the paper "Enhanced Deep Residual Networks for Single Image Super-Resolution" (CVPRW 2017).
124. [e2c-pytorch](https://github.com/ethanluoyc/e2c-pytorch): Embed to Control implementation in PyTorch.
125. [3D-ResNets-PyTorch](https://github.com/kenshohara/3D-ResNets-PyTorch): 3D ResNets for action recognition.
126. [bandit-nmt](https://github.com/khanhptnk/bandit-nmt): Code for the EMNLP 2017 paper "Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback", which implements the A2C algorithm on top of a neural encoder-decoder model and benchmarks the combination under simulated noisy rewards.
127. [pytorch-a2c-ppo-acktr](https://github.com/ikostrikov/pytorch-a2c-ppo-acktr): PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), and ACKTR, a scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation.
128. 
[zalando-pytorch](https:\u002F\u002Fgithub.com\u002FbaldassarreFe\u002Fzalando-pytorch): Various experiments on the [Fashion-MNIST](zalandoresearch\u002Ffashion-mnist) dataset from Zalando.\n129. [sphereface_pytorch](https:\u002F\u002Fgithub.com\u002Fclcarwin\u002Fsphereface_pytorch): A PyTorch Implementation of SphereFace.\n130. [Categorical DQN](https:\u002F\u002Fgithub.com\u002Ffloringogianu\u002Fcategorical-dqn): A PyTorch Implementation of Categorical DQN from [A Distributional Perspective on Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.06887).\n131. [pytorch-ntm](https:\u002F\u002Fgithub.com\u002Floudinthecloud\u002Fpytorch-ntm): pytorch ntm implementation. \n132. [mask_rcnn_pytorch](https:\u002F\u002Fgithub.com\u002Ffelixgwu\u002Fmask_rcnn_pytorch): Mask RCNN in PyTorch.\n133. [graph_convnets_pytorch](https:\u002F\u002Fgithub.com\u002Fxbresson\u002Fgraph_convnets_pytorch): PyTorch implementation of graph ConvNets, NIPS’16\n134. [pytorch-faster-rcnn](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002Fpytorch-faster-rcnn): A pytorch implementation of faster RCNN detection framework based on Xinlei Chen's tf-faster-rcnn.\n135. [torchMoji](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002FtorchMoji): A pyTorch implementation of the DeepMoji model: state-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc.\n136. [semantic-segmentation-pytorch](https:\u002F\u002Fgithub.com\u002Fhangzhaomit\u002Fsemantic-segmentation-pytorch): Pytorch implementation for Semantic Segmentation\u002FScene Parsing on [MIT ADE20K dataset](http:\u002F\u002Fsceneparsing.csail.mit.edu)\n137. [pytorch-qrnn](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fpytorch-qrnn): PyTorch implementation of the Quasi-Recurrent Neural Network - up to 16 times faster than NVIDIA's cuDNN LSTM\n138. [pytorch-sgns](https:\u002F\u002Fgithub.com\u002Ftheeluwin\u002Fpytorch-sgns): Skipgram Negative Sampling in PyTorch.\n139. 
[SfmLearner-Pytorch ](https:\u002F\u002Fgithub.com\u002FClementPinard\u002FSfmLearner-Pytorch): Pytorch version of SfmLearner from Tinghui Zhou et al.\n140. [deformable-convolution-pytorch](https:\u002F\u002Fgithub.com\u002F1zb\u002Fdeformable-convolution-pytorch): PyTorch implementation of Deformable Convolution. \n141. [skip-gram-pytorch](https:\u002F\u002Fgithub.com\u002Ffanglanting\u002Fskip-gram-pytorch): A complete pytorch implementation of skipgram model (with subsampling and negative sampling). The embedding result is tested with Spearman's rank correlation.\n142. [stackGAN-v2](https:\u002F\u002Fgithub.com\u002Fhanzhanggit\u002FStackGAN-v2): Pytorch implementation for reproducing StackGAN_v2 results in the paper StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks by Han Zhang*, Tao Xu*, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris Metaxas.\n143. [self-critical.pytorch](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002Fself-critical.pytorch): Unofficial pytorch implementation for Self-critical Sequence Training for Image Captioning. \n144. [pygcn](https:\u002F\u002Fgithub.com\u002Ftkipf\u002Fpygcn): Graph Convolutional Networks in PyTorch.\n145. [dnc](https:\u002F\u002Fgithub.com\u002Fixaxaar\u002Fpytorch-dnc): Differentiable Neural Computers, for Pytorch\n146. [prog_gans_pytorch_inference](https:\u002F\u002Fgithub.com\u002Fptrblck\u002Fprog_gans_pytorch_inference): PyTorch inference for \"Progressive Growing of GANs\" with CelebA snapshot.\n147. [pytorch-capsule](https:\u002F\u002Fgithub.com\u002Ftimomernick\u002Fpytorch-capsule): Pytorch implementation of Hinton's Dynamic Routing Between Capsules.\n148. [PyramidNet-PyTorch](https:\u002F\u002Fgithub.com\u002Fdyhan0920\u002FPyramidNet-PyTorch): A PyTorch implementation for PyramidNets (Deep Pyramidal Residual Networks, arxiv.org\u002Fabs\u002F1610.02915)\n149. 
[radio-transformer-networks](https:\u002F\u002Fgithub.com\u002Fgram-ai\u002Fradio-transformer-networks): A PyTorch implementation of Radio Transformer Networks from the paper \"An Introduction to Deep Learning for the Physical Layer\". arxiv.org\u002Fabs\u002F1702.00832\n150. [honk](https:\u002F\u002Fgithub.com\u002Fcastorini\u002Fhonk): PyTorch reimplementation of Google's TensorFlow CNNs for keyword spotting.\n151. [DeepCORAL](https:\u002F\u002Fgithub.com\u002FSSARCandy\u002FDeepCORAL): A PyTorch implementation of 'Deep CORAL: Correlation Alignment for Deep Domain Adaptation.', ECCV 2016\n152. [pytorch-pose](https:\u002F\u002Fgithub.com\u002Fbearpaw\u002Fpytorch-pose): A PyTorch toolkit for 2D Human Pose Estimation.\n153. [lang-emerge-parlai](https:\u002F\u002Fgithub.com\u002Fkarandesai-96\u002Flang-emerge-parlai): Implementation of EMNLP 2017 Paper \"Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog\" using PyTorch and ParlAI\n154. [Rainbow](https:\u002F\u002Fgithub.com\u002FKaixhin\u002FRainbow): Rainbow: Combining Improvements in Deep Reinforcement Learning \n155. [pytorch_compact_bilinear_pooling v1](https:\u002F\u002Fgithub.com\u002Fgdlg\u002Fpytorch_compact_bilinear_pooling): This repository has a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.\n156. [CompactBilinearPooling-Pytorch v2](https:\u002F\u002Fgithub.com\u002FDeepInsight-PCALab\u002FCompactBilinearPooling-Pytorch): (Yang Gao, et al.) A Pytorch Implementation for Compact Bilinear Pooling.\n157. [FewShotLearning](https:\u002F\u002Fgithub.com\u002Fgitabcworld\u002FFewShotLearning): Pytorch implementation of the paper \"Optimization as a Model for Few-Shot Learning\"\n158. [meProp](https:\u002F\u002Fgithub.com\u002Fjklj077\u002FmeProp): Codes for \"meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting\".\n159. 
[SFD_pytorch](https:\u002F\u002Fgithub.com\u002Fclcarwin\u002FSFD_pytorch): A PyTorch Implementation of Single Shot Scale-invariant Face Detector.\n160. [GradientEpisodicMemory](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FGradientEpisodicMemory): Continuum Learning with GEM: Gradient Episodic Memory. https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08840\n161. [DeblurGAN](https:\u002F\u002Fgithub.com\u002FKupynOrest\u002FDeblurGAN): Pytorch implementation of the paper DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks.\n162. [StarGAN](https:\u002F\u002Fgithub.com\u002Fyunjey\u002FStarGAN): StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tranlsation.\n163. [CapsNet-pytorch](https:\u002F\u002Fgithub.com\u002Fadambielski\u002FCapsNet-pytorch): PyTorch implementation of NIPS 2017 paper Dynamic Routing Between Capsules.\n164. [CondenseNet](https:\u002F\u002Fgithub.com\u002FShichenLiu\u002FCondenseNet): CondenseNet: An Efficient DenseNet using Learned Group Convolutions.\n165. [deep-image-prior](https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002Fdeep-image-prior): Image restoration with neural networks but without learning.\n166. [deep-head-pose](https:\u002F\u002Fgithub.com\u002Fnatanielruiz\u002Fdeep-head-pose): Deep Learning Head Pose Estimation using PyTorch.\n167. [Random-Erasing](https:\u002F\u002Fgithub.com\u002Fzhunzhong07\u002FRandom-Erasing): This code has the source code for the paper \"Random Erasing Data Augmentation\".\n168. [FaderNetworks](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FFaderNetworks): Fader Networks: Manipulating Images by Sliding Attributes - NIPS 2017\n169. [FlowNet 2.0](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fflownet2-pytorch): FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks\n170. 
[pix2pixHD](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fpix2pixHD): Synthesizing and manipulating 2048x1024 images with conditional GANs tcwang0509.github.io\u002Fpix2pixHD \n171. [pytorch-smoothgrad](https:\u002F\u002Fgithub.com\u002Fpkdn\u002Fpytorch-smoothgrad): SmoothGrad implementation in PyTorch\n172. [RetinaNet](https:\u002F\u002Fgithub.com\u002Fc0nn3r\u002FRetinaNet): An implementation of RetinaNet in PyTorch.\n173. [faster-rcnn.pytorch](https:\u002F\u002Fgithub.com\u002Fjwyang\u002Ffaster-rcnn.pytorch): This project is a faster faster R-CNN implementation, aimed to accelerating the training of faster R-CNN object detection models. \n174. [mixup_pytorch](https:\u002F\u002Fgithub.com\u002Fleehomyc\u002Fmixup_pytorch): A PyTorch implementation of the paper Mixup: Beyond Empirical Risk Minimization in PyTorch.\n175. [inplace_abn](https:\u002F\u002Fgithub.com\u002Fmapillary\u002Finplace_abn): In-Place Activated BatchNorm for Memory-Optimized Training of DNNs\n176. [pytorch-pose-hg-3d](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002Fpytorch-pose-hg-3d): PyTorch implementation for 3D human pose estimation\n177. [nmn-pytorch](https:\u002F\u002Fgithub.com\u002FHarshTrivedi\u002Fnmn-pytorch): Neural Module Network for VQA in Pytorch.\n178. [bytenet](https:\u002F\u002Fgithub.com\u002Fkefirski\u002Fbytenet): Pytorch implementation of bytenet from \"Neural Machine Translation in Linear Time\" paper\n179. [bottom-up-attention-vqa](https:\u002F\u002Fgithub.com\u002Fhengyuan-hu\u002Fbottom-up-attention-vqa): vqa, bottom-up-attention, pytorch\n180. [yolo2-pytorch](https:\u002F\u002Fgithub.com\u002Fruiminshen\u002Fyolo2-pytorch): The YOLOv2 is one of the most popular one-stage object detector. This project adopts PyTorch as the developing framework to increase productivity, and utilize ONNX to convert models into Caffe 2 to benifit engineering deployment.\n181. 
[reseg-pytorch](https:\u002F\u002Fgithub.com\u002FWizaron\u002Freseg-pytorch): PyTorch Implementation of ReSeg (arxiv.org\u002Fpdf\u002F1511.07053.pdf)\n182. [binary-stochastic-neurons](https:\u002F\u002Fgithub.com\u002FWizaron\u002Fbinary-stochastic-neurons): Binary Stochastic Neurons in PyTorch.\n183. [pytorch-pose-estimation](https:\u002F\u002Fgithub.com\u002FDavexPro\u002Fpytorch-pose-estimation): PyTorch Implementation of Realtime Multi-Person Pose Estimation project.\n184. [interaction_network_pytorch](https:\u002F\u002Fgithub.com\u002Fhiggsfield\u002Finteraction_network_pytorch): Pytorch Implementation of Interaction Networks for Learning about Objects, Relations and Physics.\n185. [NoisyNaturalGradient](https:\u002F\u002Fgithub.com\u002Fwlwkgus\u002FNoisyNaturalGradient): Pytorch Implementation of paper \"Noisy Natural Gradient as Variational Inference\". \n186. [ewc.pytorch](https:\u002F\u002Fgithub.com\u002Fmoskomule\u002Fewc.pytorch): An implementation of Elastic Weight Consolidation (EWC), proposed in James Kirkpatrick et al. Overcoming catastrophic forgetting in neural networks 2016(10.1073\u002Fpnas.1611835114).\n187. [pytorch-zssr](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-zssr): PyTorch implementation of 1712.06087 \"Zero-Shot\" Super-Resolution using Deep Internal Learning\n188. [deep_image_prior](https:\u002F\u002Fgithub.com\u002Fatiyo\u002Fdeep_image_prior): An implementation of image reconstruction methods from Deep Image Prior (Ulyanov et al., 2017) in PyTorch.\n189. [pytorch-transformer](https:\u002F\u002Fgithub.com\u002Fleviswind\u002Fpytorch-transformer): pytorch implementation of Attention is all you need.\n190. [DeepRL-Grounding](https:\u002F\u002Fgithub.com\u002Fdevendrachaplot\u002FDeepRL-Grounding): This is a PyTorch implementation of the AAAI-18 paper Gated-Attention Architectures for Task-Oriented Language Grounding\n191. 
[deep-forecast-pytorch](https:\u002F\u002Fgithub.com\u002FWizaron\u002Fdeep-forecast-pytorch): Wind Speed Prediction using LSTMs in PyTorch (arxiv.org\u002Fpdf\u002F1707.08110.pdf)\n192. [cat-net](https:\u002F\u002Fgithub.com\u002FutiasSTARS\u002Fcat-net):  Canonical Appearance Transformations\n193. [minimal_glo](https:\u002F\u002Fgithub.com\u002Ftneumann\u002Fminimal_glo): Minimal PyTorch implementation of Generative Latent Optimization from the paper \"Optimizing the Latent Space of Generative Networks\"\n194. [LearningToCompare-Pytorch](https:\u002F\u002Fgithub.com\u002Fdragen1860\u002FLearningToCompare-Pytorch): Pytorch Implementation for Paper: Learning to Compare: Relation Network for Few-Shot Learning. \n195. [poincare-embeddings](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpoincare-embeddings): PyTorch implementation of the NIPS-17 paper \"Poincaré Embeddings for Learning Hierarchical Representations\". \n196. [pytorch-trpo(Hessian-vector product version)](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-trpo): This is a PyTorch implementation of \"Trust Region Policy Optimization (TRPO)\" with exact Hessian-vector product instead of finite differences approximation.\n197. [ggnn.pytorch](https:\u002F\u002Fgithub.com\u002FJamesChuanggg\u002Fggnn.pytorch): A PyTorch Implementation of Gated Graph Sequence Neural Networks (GGNN). \n198. [visual-interaction-networks-pytorch](https:\u002F\u002Fgithub.com\u002FMrgemy95\u002Fvisual-interaction-networks-pytorch): This's an implementation of deepmind Visual Interaction Networks paper using pytorch\n199. [adversarial-patch](https:\u002F\u002Fgithub.com\u002Fjhayes14\u002Fadversarial-patch): PyTorch implementation of adversarial patch. \n200. 
[Prototypical-Networks-for-Few-shot-Learning-PyTorch](https:\u002F\u002Fgithub.com\u002Forobix\u002FPrototypical-Networks-for-Few-shot-Learning-PyTorch): Implementation of Prototypical Networks for Few Shot Learning (arxiv.org\u002Fabs\u002F1703.05175) in Pytorch\n201. [Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch](https:\u002F\u002Fgithub.com\u002Forobix\u002FVisual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch): Implementation of Visual Feature Attribution using Wasserstein GANs (arxiv.org\u002Fabs\u002F1711.08998) in PyTorch.\n202. [PhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch](https:\u002F\u002Fgithub.com\u002FBlade6570\u002FPhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch): Photographic Image Synthesis with Cascaded Refinement Networks - Pytorch Implementation\n203. [ENAS-pytorch](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FENAS-pytorch): PyTorch implementation of \"Efficient Neural Architecture Search via Parameters Sharing\". \n204. [Neural-IMage-Assessment](https:\u002F\u002Fgithub.com\u002Fkentsyx\u002FNeural-IMage-Assessment): A PyTorch Implementation of Neural IMage Assessment. \n205. [proxprop](https:\u002F\u002Fgithub.com\u002Ftfrerix\u002Fproxprop): Proximal Backpropagation - a neural network training algorithm that takes implicit instead of explicit gradient steps.\n206. [FastPhotoStyle](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FFastPhotoStyle): A Closed-form Solution to Photorealistic Image Stylization\n207. [Deep-Image-Analogy-PyTorch](https:\u002F\u002Fgithub.com\u002FBen-Louis\u002FDeep-Image-Analogy-PyTorch): A python implementation of Deep-Image-Analogy based on pytorch.\n208. [Person-reID_pytorch](https:\u002F\u002Fgithub.com\u002Flayumi\u002FPerson_reID_baseline_pytorch): PyTorch for Person re-ID. \n209. [pt-dilate-rnn](https:\u002F\u002Fgithub.com\u002Fzalandoresearch\u002Fpt-dilate-rnn): Dilated RNNs in pytorch. \n210. 
[pytorch-i-revnet](https:\u002F\u002Fgithub.com\u002Fjhjacobsen\u002Fpytorch-i-revnet): Pytorch implementation of i-RevNets.\n211. [OrthNet](https:\u002F\u002Fgithub.com\u002FOrcuslc\u002FOrthNet): TensorFlow and PyTorch layers for generating Orthogonal Polynomials.\n212. [DRRN-pytorch](https:\u002F\u002Fgithub.com\u002Fjt827859032\u002FDRRN-pytorch): An implementation of Deep Recursive Residual Network for Super Resolution (DRRN), CVPR 2017\n213. [shampoo.pytorch](https:\u002F\u002Fgithub.com\u002Fmoskomule\u002Fshampoo.pytorch): An implementation of shampoo.\n214. [Neural-IMage-Assessment 2](https:\u002F\u002Fgithub.com\u002Ftruskovskiyk\u002Fnima.pytorch): A PyTorch Implementation of Neural IMage Assessment.\n215. [TCN](https:\u002F\u002Fgithub.com\u002Flocuslab\u002FTCN): Sequence modeling benchmarks and temporal convolutional networks locuslab\u002FTCN\n216. [DCC](https:\u002F\u002Fgithub.com\u002Fshahsohil\u002FDCC): This repository contains the source code and data for reproducing results of Deep Continuous Clustering paper.\n217. [packnet](https:\u002F\u002Fgithub.com\u002Farunmallya\u002Fpacknet): Code for PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning arxiv.org\u002Fabs\u002F1711.05769\n218. [PyTorch-progressive_growing_of_gans](https:\u002F\u002Fgithub.com\u002Fgithub-pengge\u002FPyTorch-progressive_growing_of_gans): PyTorch implementation of Progressive Growing of GANs for Improved Quality, Stability, and Variation.\n219. [nonauto-nmt](https:\u002F\u002Fgithub.com\u002Fsalesforce\u002Fnonauto-nmt): PyTorch Implementation of \"Non-Autoregressive Neural Machine Translation\"\n220. [PyTorch-GAN](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FPyTorch-GAN): PyTorch implementations of Generative Adversarial Networks.\n221. [PyTorchWavelets](https:\u002F\u002Fgithub.com\u002Ftomrunia\u002FPyTorchWavelets): PyTorch implementation of the wavelet analysis found in Torrence and Compo (1998)\n222. 
[pytorch-made](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fpytorch-made): MADE (Masked Autoencoder Density Estimation) implementation in PyTorch\n223. [VRNN](https:\u002F\u002Fgithub.com\u002Femited\u002FVariationalRecurrentNeuralNetwork): Pytorch implementation of the Variational RNN (VRNN), from A Recurrent Latent Variable Model for Sequential Data.\n224. [flow](https:\u002F\u002Fgithub.com\u002Femited\u002Fflow): Pytorch implementation of ICLR 2018 paper Deep Learning for Physical Processes: Integrating Prior Scientific Knowledge.\n225. [deepvoice3_pytorch](https:\u002F\u002Fgithub.com\u002Fr9y9\u002Fdeepvoice3_pytorch): PyTorch implementation of convolutional networks-based text-to-speech synthesis models\n226. [psmm](https:\u002F\u002Fgithub.com\u002Felanmart\u002Fpsmm): imlementation of the the Pointer Sentinel Mixture Model, as described in the paper by Stephen Merity et al.\n227. [tacotron2](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Ftacotron2): Tacotron 2 - PyTorch implementation with faster-than-realtime inference.\n228. [AccSGD](https:\u002F\u002Fgithub.com\u002Frahulkidambi\u002FAccSGD): Implements pytorch code for the Accelerated SGD algorithm.\n229. [QANet-pytorch](https:\u002F\u002Fgithub.com\u002Fhengruo\u002FQANet-pytorch): an implementation of QANet with PyTorch (EM\u002FF1 = 70.5\u002F77.2 after 20 epoches for about 20 hours on one 1080Ti card.)\n230. [ConvE](https:\u002F\u002Fgithub.com\u002FTimDettmers\u002FConvE): Convolutional 2D Knowledge Graph Embeddings\n231. [Structured-Self-Attention](https:\u002F\u002Fgithub.com\u002Fkaushalshetty\u002FStructured-Self-Attention): Implementation for the paper A Structured Self-Attentive Sentence Embedding, which is published in ICLR 2017: arxiv.org\u002Fabs\u002F1703.03130 .\n232. [graphsage-simple](https:\u002F\u002Fgithub.com\u002Fwilliamleif\u002Fgraphsage-simple): Simple reference implementation of GraphSAGE.\n233. 
[Detectron.pytorch](https:\u002F\u002Fgithub.com\u002Froytseng-tw\u002FDetectron.pytorch): A pytorch implementation of Detectron. Both training from scratch and inferring directly from pretrained Detectron weights are available.\n234. [R2Plus1D-PyTorch](https:\u002F\u002Fgithub.com\u002Firhumshafkat\u002FR2Plus1D-PyTorch): PyTorch implementation of the R2Plus1D convolution based ResNet architecture described in the paper \"A Closer Look at Spatiotemporal Convolutions for Action Recognition\"\n235. [StackNN](https:\u002F\u002Fgithub.com\u002Fviking-sudo-rm\u002FStackNN): A PyTorch implementation of differentiable stacks for use in neural networks.\n236. [translagent](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ftranslagent): Code for Emergent Translation in Multi-Agent Communication.\n237. [ban-vqa](https:\u002F\u002Fgithub.com\u002Fjnhwkim\u002Fban-vqa): Bilinear attention networks for visual question answering. \n238. [pytorch-openai-transformer-lm](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpytorch-openai-transformer-lm): This is a PyTorch implementation of the TensorFlow code provided with OpenAI's paper \"Improving Language Understanding by Generative Pre-Training\" by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.\n239. [T2F](https:\u002F\u002Fgithub.com\u002Fakanimax\u002FT2F): Text-to-Face generation using Deep Learning. This project combines two of the recent architectures StackGAN and ProGAN for synthesizing faces from textual descriptions.\n240. [pytorch - fid](https:\u002F\u002Fgithub.com\u002Fmseitzer\u002Fpytorch-fid): A Port of Fréchet Inception Distance (FID score) to PyTorch\n241. [vae_vpflows](https:\u002F\u002Fgithub.com\u002Fjmtomczak\u002Fvae_vpflows):Code in PyTorch for the convex combination linear IAF and the Householder Flow, J.M. Tomczak & M. Welling jmtomczak.github.io\u002Fdeebmed.html\n242. 
[CoordConv-pytorch](https:\u002F\u002Fgithub.com\u002Fmkocabas\u002FCoordConv-pytorch): Pytorch implementation of CoordConv introduced in 'An intriguing failing of convolutional neural networks and the CoordConv solution' paper. (arxiv.org\u002Fpdf\u002F1807.03247.pdf)\n243. [SDPoint](https:\u002F\u002Fgithub.com\u002Fxternalz\u002FSDPoint): Implementation of \"Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks\", published in CVPR 2018. \n244. [SRDenseNet-pytorch](https:\u002F\u002Fgithub.com\u002Fwxywhu\u002FSRDenseNet-pytorch): SRDenseNet-pytorch（ICCV_2017）\n245. [GAN_stability](https:\u002F\u002Fgithub.com\u002FLMescheder\u002FGAN_stability): Code for paper \"Which Training Methods for GANs do actually Converge? (ICML 2018)\"\n246. [Mask-RCNN](https:\u002F\u002Fgithub.com\u002FwannabeOG\u002FMask-RCNN): A PyTorch implementation of the architecture of Mask RCNN, serves as an introduction to working with PyTorch\n247. [pytorch-coviar](https:\u002F\u002Fgithub.com\u002Fchaoyuaw\u002Fpytorch-coviar): Compressed Video Action Recognition\n248. [PNASNet.pytorch](https:\u002F\u002Fgithub.com\u002Fchenxi116\u002FPNASNet.pytorch): PyTorch implementation of PNASNet-5 on ImageNet. \n249. [NALU-pytorch](https:\u002F\u002Fgithub.com\u002Fkevinzakka\u002FNALU-pytorch): Basic pytorch implementation of NAC\u002FNALU from Neural Arithmetic Logic Units arxiv.org\u002Fpdf\u002F1808.00508.pdf\n250. [LOLA_DiCE](https:\u002F\u002Fgithub.com\u002Falexis-jacq\u002FLOLA_DiCE): Pytorch implementation of LOLA (arxiv.org\u002Fabs\u002F1709.04326) using DiCE (arxiv.org\u002Fabs\u002F1802.05098)\n251. [generative-query-network-pytorch](https:\u002F\u002Fgithub.com\u002Fwohlert\u002Fgenerative-query-network-pytorch): Generative Query Network (GQN) in PyTorch as described in \"Neural Scene Representation and Rendering\"\n252. 
[pytorch_hmax](https:\u002F\u002Fgithub.com\u002Fwmvanvliet\u002Fpytorch_hmax): Implementation of the HMAX model of vision in PyTorch.\n253. [FCN-pytorch-easiest](https:\u002F\u002Fgithub.com\u002Fyunlongdong\u002FFCN-pytorch-easiest): trying to be the most easiest and just get-to-use pytorch implementation of FCN (Fully Convolotional Networks)\n254. [transducer](https:\u002F\u002Fgithub.com\u002Fawni\u002Ftransducer): A Fast Sequence Transducer Implementation with PyTorch Bindings.\n255. [AVO-pytorch](https:\u002F\u002Fgithub.com\u002Fartix41\u002FAVO-pytorch): Implementation of Adversarial Variational Optimization in PyTorch.\n256. [HCN-pytorch](https:\u002F\u002Fgithub.com\u002Fhuguyuehuhu\u002FHCN-pytorch): A pytorch reimplementation of { Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation }.\n257. [binary-wide-resnet](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fbinary-wide-resnet): PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnel (ICLR 2018)\n258. [piggyback](https:\u002F\u002Fgithub.com\u002Farunmallya\u002Fpiggyback): Code for Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights arxiv.org\u002Fabs\u002F1801.06519\n259. [vid2vid](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fvid2vid): Pytorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.\n260. [poisson-convolution-sum](https:\u002F\u002Fgithub.com\u002Fcranmer\u002Fpoisson-convolution-sum): Implements an infinite sum of poisson-weighted convolutions\n261. [tbd-nets](https:\u002F\u002Fgithub.com\u002Fdavidmascharka\u002Ftbd-nets): PyTorch implementation of \"Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning\" arxiv.org\u002Fabs\u002F1803.05268 \n262. 
[attn2d](https:\u002F\u002Fgithub.com\u002Felbayadm\u002Fattn2d): Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction\n263. [yolov3](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov3): YOLOv3: Training and inference in PyTorch pjreddie.com\u002Fdarknet\u002Fyolo\n264. [deep-dream-in-pytorch](https:\u002F\u002Fgithub.com\u002Fduc0\u002Fdeep-dream-in-pytorch): Pytorch implementation of the DeepDream computer vision algorithm. \n265. [pytorch-flows](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-flows): PyTorch implementations of algorithms for density estimation\n266. [quantile-regression-dqn-pytorch](https:\u002F\u002Fgithub.com\u002Fars-ashuha\u002Fquantile-regression-dqn-pytorch): Quantile Regression DQN a Minimal Working Example\n267. [relational-rnn-pytorch](https:\u002F\u002Fgithub.com\u002FL0SG\u002Frelational-rnn-pytorch): An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch.\n268. [DEXTR-PyTorch](https:\u002F\u002Fgithub.com\u002Fscaelles\u002FDEXTR-PyTorch): Deep Extreme Cut http:\u002F\u002Fwww.vision.ee.ethz.ch\u002F~cvlsegmentation\u002Fdextr\n269. [PyTorch_GBW_LM](https:\u002F\u002Fgithub.com\u002Frdspring1\u002FPyTorch_GBW_LM): PyTorch Language Model for Google Billion Word Dataset.\n270. [Pytorch-NCE](https:\u002F\u002Fgithub.com\u002FStonesjtu\u002FPytorch-NCE): The Noise Contrastive Estimation for softmax output written in Pytorch\n271. [generative-models](https:\u002F\u002Fgithub.com\u002Fshayneobrien\u002Fgenerative-models): Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN. \n272. [convnet-aig](https:\u002F\u002Fgithub.com\u002Fandreasveit\u002Fconvnet-aig): PyTorch implementation for Convolutional Networks with Adaptive Inference Graphs.\n273. 
[integrated-gradient-pytorch](https:\u002F\u002Fgithub.com\u002FTianhongDai\u002Fintegrated-gradient-pytorch): This is the pytorch implementation of the paper - Axiomatic Attribution for Deep Networks.\n274. [MalConv-Pytorch](https:\u002F\u002Fgithub.com\u002FAlexander-H-Liu\u002FMalConv-Pytorch): Pytorch implementation of MalConv. \n275. [trellisnet](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Ftrellisnet): Trellis Networks for Sequence Modeling\n276. [Learning to Communicate with Deep Multi-Agent Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fminqi\u002Flearning-to-communicate-pytorch): pytorch implementation of  Learning to Communicate with Deep Multi-Agent Reinforcement Learning paper.\n277. [pnn.pytorch](https:\u002F\u002Fgithub.com\u002Fmichaelklachko\u002Fpnn.pytorch): PyTorch implementation of CVPR'18 - Perturbative Neural Networks http:\u002F\u002Fxujuefei.com\u002Fpnn.html.\n278. [Face_Attention_Network](https:\u002F\u002Fgithub.com\u002Frainofmine\u002FFace_Attention_Network): Pytorch implementation of face attention network as described in Face Attention Network: An Effective Face Detector for the Occluded Faces.\n279. [waveglow](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fwaveglow): A Flow-based Generative Network for Speech Synthesis.\n280. [deepfloat](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeepfloat): This repository contains the SystemVerilog RTL, C++, HLS (Intel FPGA OpenCL to wrap RTL code) and Python needed to reproduce the numerical results in \"Rethinking floating point for deep learning\" \n281. [EPSR](https:\u002F\u002Fgithub.com\u002Fsubeeshvasu\u002F2018_subeesh_epsr_eccvw): Pytorch implementation of [Analyzing Perception-Distortion Tradeoff using Enhanced Perceptual Super-resolution Network](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.00344.pdf). This work has won the first place in PIRM2018-SR competition (region 1) held as part of the ECCV 2018.\n282. 
[ClariNet](https:\u002F\u002Fgithub.com\u002Fksw0306\u002FClariNet): A Pytorch Implementation of ClariNet arxiv.org\u002Fabs\u002F1807.07281\n283. [pytorch-pretrained-BERT](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpytorch-pretrained-BERT): PyTorch version of Google AI's BERT model with script to load Google's pre-trained models\n284. [torch_waveglow](https:\u002F\u002Fgithub.com\u002Fnpuichigo\u002Fwaveglow): A PyTorch implementation of the WaveGlow: A Flow-based Generative Network for Speech Synthesis. \n285. [3DDFA](https:\u002F\u002Fgithub.com\u002Fcleardusk\u002F3DDFA): The pytorch improved re-implementation of TPAMI 2017 paper: Face Alignment in Full Pose Range: A 3D Total Solution.\n286. [loss-landscape](https:\u002F\u002Fgithub.com\u002Ftomgoldstein\u002Floss-landscape): loss-landscape Code for visualizing the loss landscape of neural nets.\n287. [famos](https:\u002F\u002Fgithub.com\u002Fzalandoresearch\u002Ffamos): \nPytorch implementation of the paper \"Copy the Old or Paint Anew? An Adversarial Framework for (non-) Parametric Image Stylization\" available at http:\u002F\u002Farxiv.org\u002Fabs\u002F1811.09236.\n288. [back2future.pytorch](https:\u002F\u002Fgithub.com\u002Fanuragranj\u002Fback2future.pytorch): This is a Pytorch implementation of\nJanai, J., Güney, F., Ranjan, A., Black, M. and Geiger, A., Unsupervised Learning of Multi-Frame Optical Flow with Occlusions. ECCV 2018.\n289. [FFTNet](https:\u002F\u002Fgithub.com\u002Fmozilla\u002FFFTNet): Unofficial Implementation of FFTNet vocode paper.\n290. [FaceBoxes.PyTorch](https:\u002F\u002Fgithub.com\u002Fzisianw\u002FFaceBoxes.PyTorch): A PyTorch Implementation of FaceBoxes.\n291. [Transformer-XL](https:\u002F\u002Fgithub.com\u002Fkimiyoung\u002Ftransformer-xl): Transformer-XL: Attentive Language Models Beyond a Fixed-Length Contexthttps:\u002F\u002Fgithub.com\u002Fkimiyoung\u002Ftransformer-xl\n292. 
[associative_compression_networks](https:\u002F\u002Fgithub.com\u002Fjalexvig\u002Fassociative_compression_networks): Associative Compression Networks for Representation Learning. \n293. [fluidnet_cxx](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Ffluidnet_cxx): FluidNet re-written with ATen tensor lib. \n294. [Deep-Reinforcement-Learning-Algorithms-with-PyTorch](https:\u002F\u002Fgithub.com\u002Fp-christ\u002FDeep-Reinforcement-Learning-Algorithms-with-PyTorch): This repository contains PyTorch implementations of deep reinforcement learning algorithms.\n295. [Shufflenet-v2-Pytorch](https:\u002F\u002Fgithub.com\u002Fericsun99\u002FShufflenet-v2-Pytorch): This is a Pytorch implementation of faceplusplus's ShuffleNet-v2. \n296. [GraphWaveletNeuralNetwork](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FGraphWaveletNeuralNetwork): This is a Pytorch implementation of Graph Wavelet Neural Network. ICLR 2019. \n297. [AttentionWalk](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FAttentionWalk): This is a Pytorch implementation of Watch Your Step: Learning Node Embeddings via Graph Attention. NIPS 2018.\n298. [SGCN](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSGCN): This is a Pytorch implementation of Signed Graph Convolutional Network. ICDM 2018.\n299. [SINE](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSINE): This is a Pytorch implementation of SINE: Scalable Incomplete Network Embedding. ICDM 2018.\n300. [GAM](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FGAM): This is a Pytorch implementation of Graph Classification using Structural Attention. KDD 2018.\n301. [neural-style-pt](https:\u002F\u002Fgithub.com\u002FProGamerGov\u002Fneural-style-pt): A PyTorch implementation of Justin Johnson's Neural-style.\n302. [TuckER](https:\u002F\u002Fgithub.com\u002Fibalazevic\u002FTuckER): TuckER: Tensor Factorization for Knowledge Graph Completion.\n303. 
[pytorch-prunes](https:\u002F\u002Fgithub.com\u002FBayesWatch\u002Fpytorch-prunes): Pruning neural networks: is it time to nip it in the bud?\n304. [SimGNN](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSimGNN): SimGNN: A Neural Network Approach to Fast Graph Similarity Computation.\n305. [Character CNN](https:\u002F\u002Fgithub.com\u002Fahmedbesbes\u002Fcharacter-based-cnn): PyTorch implementation of the Character-level Convolutional Networks for Text Classification paper. \n306. [XLM](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FXLM): PyTorch original implementation of Cross-lingual Language Model Pretraining.\n307. [DiffAI](https:\u002F\u002Fgithub.com\u002Feth-sri\u002Fdiffai): A provable defense against adversarial examples and library for building compatible PyTorch models.\n308. [APPNP](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FAPPNP): Combining Neural Networks with Personalized PageRank for Classification on Graphs. ICLR 2019.\n309. [NGCN](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FMixHop-and-N-GCN): A Higher-Order Graph Convolutional Layer. NeurIPS 2018.\n310. [gpt-2-Pytorch](https:\u002F\u002Fgithub.com\u002Fgraykode\u002Fgpt-2-Pytorch): Simple Text-Generator with OpenAI gpt-2 Pytorch Implementation\n311. [Splitter](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSplitter): Splitter: Learning Node Representations that Capture Multiple Social Contexts. (WWW 2019).\n312. [CapsGNN](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FCapsGNN): Capsule Graph Neural Network. (ICLR 2019).\n313. [BigGAN-PyTorch](https:\u002F\u002Fgithub.com\u002Fajbrock\u002FBigGAN-PyTorch): The author's officially unofficial PyTorch BigGAN implementation.\n314. [ppo_pytorch_cpp](https:\u002F\u002Fgithub.com\u002Fmhubii\u002Fppo_pytorch_cpp): This is an implementation of the proximal policy optimization algorithm for the C++ API of Pytorch.\n315. 
[RandWireNN](https:\u002F\u002Fgithub.com\u002Fseungwonpark\u002FRandWireNN): Implementation of \"Exploring Randomly Wired Neural Networks for Image Recognition\".\n316. [Zero-shot Intent CapsNet](https:\u002F\u002Fgithub.com\u002Fjoel-huang\u002Fzeroshot-capsnet-pytorch): GPU-accelerated PyTorch implementation of \"Zero-shot User Intent Detection via Capsule Neural Networks\".\n317. [SEAL-CI](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FSEAL-CI): Semi-Supervised Graph Classification: A Hierarchical Graph Perspective. (WWW 2019).\n318. [MixHop](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FMixHop-and-N-GCN): MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. ICML 2019.\n319. [densebody_pytorch](https:\u002F\u002Fgithub.com\u002FLotayou\u002Fdensebody_pytorch): PyTorch implementation of CloudWalk's recent paper DenseBody.\n320. [voicefilter](https:\u002F\u002Fgithub.com\u002Fmindslab-ai\u002Fvoicefilter): Unofficial PyTorch implementation of Google AI's VoiceFilter system. http:\u002F\u002Fswpark.me\u002Fvoicefilter\n321. [NVIDIA\u002Fsemantic-segmentation](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fsemantic-segmentation): A PyTorch implementation of [Improving Semantic Segmentation via Video Propagation and Label Relaxation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.01593), in CVPR 2019.\n322. [ClusterGCN](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FClusterGCN): A PyTorch implementation of \"Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks\" (KDD 2019).\n323. [NVlabs\u002FDG-Net](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FDG-Net): A PyTorch implementation of \"Joint Discriminative and Generative Learning for Person Re-identification\" (CVPR 2019 Oral).\n324. [NCRF](https:\u002F\u002Fgithub.com\u002Fbaidu-research\u002FNCRF): Cancer metastasis detection with neural conditional random field (NCRF).\n325. 
[pytorch-sift](https:\u002F\u002Fgithub.com\u002Fducha-aiki\u002Fpytorch-sift): PyTorch implementation of SIFT descriptor. \n326. [brain-segmentation-pytorch](https:\u002F\u002Fgithub.com\u002Fmateuszbuda\u002Fbrain-segmentation-pytorch): U-Net implementation in PyTorch for FLAIR abnormality segmentation in brain MRI. \n327. [glow-pytorch](https:\u002F\u002Fgithub.com\u002Frosinality\u002Fglow-pytorch): PyTorch implementation of Glow, Generative Flow with Invertible 1x1 Convolutions (arxiv.org\u002Fabs\u002F1807.03039) \n328. [EfficientNets-PyTorch](https:\u002F\u002Fgithub.com\u002Fzsef123\u002FEfficientNets-PyTorch): A PyTorch implementation of EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.\n329. [STEAL](https:\u002F\u002Fgithub.com\u002Fnv-tlabs\u002FSTEAL): STEAL - Learning Semantic Boundaries from Noisy Annotations nv-tlabs.github.io\u002FSTEAL\n330. [EigenDamage-Pytorch](https:\u002F\u002Fgithub.com\u002Falecwangcq\u002FEigenDamage-Pytorch): Official implementation of the ICML'19 paper \"EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis\".\n331. [Aspect-level-sentiment](https:\u002F\u002Fgithub.com\u002Fruidan\u002FAspect-level-sentiment): Code and dataset for ACL2018 paper \"Exploiting Document Knowledge for Aspect-level Sentiment Classification\"\n332. [breast_cancer_classifier](https:\u002F\u002Fgithub.com\u002Fnyukat\u002Fbreast_cancer_classifier): Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening arxiv.org\u002Fabs\u002F1903.08297\n333. [DGC-Net](https:\u002F\u002Fgithub.com\u002FAaltoVision\u002FDGC-Net): A PyTorch implementation of \"DGC-Net: Dense Geometric Correspondence Network\".\n334. [universal-triggers](https:\u002F\u002Fgithub.com\u002FEric-Wallace\u002Funiversal-triggers): Universal Adversarial Triggers for Attacking and Analyzing NLP (EMNLP 2019)\n335. 
[simple-effective-text-matching-pytorch](https:\u002F\u002Fgithub.com\u002Falibaba-edu\u002Fsimple-effective-text-matching-pytorch): A PyTorch implementation of the ACL 2019 paper \"Simple and Effective Text Matching with Richer Alignment Features\".\n336. [Adaptive-segmentation-mask-attack (ASMA)](https:\u002F\u002Fgithub.com\u002Futkuozbulak\u002Fadaptive-segmentation-mask-attack): A PyTorch implementation of the MICCAI 2019 paper \"Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation\".\n337. [NVIDIA\u002Funsupervised-video-interpolation](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Funsupervised-video-interpolation): A PyTorch implementation of [Unsupervised Video Interpolation Using Cycle Consistency](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.05928), in ICCV 2019.\n338. [Seg-Uncertainty](https:\u002F\u002Fgithub.com\u002Flayumi\u002FSeg-Uncertainty): Unsupervised Scene Adaptation with Memory Regularization in vivo, in IJCAI 2020.\n339. [pulse](https:\u002F\u002Fgithub.com\u002Fadamian98\u002Fpulse): Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models.\n340. [distance-encoding](https:\u002F\u002Fgithub.com\u002Fsnap-stanford\u002Fdistance-encoding): Distance Encoding - Design Provably More Powerful GNNs for Structural Representation Learning.\n341. [Pathfinder Discovery Networks](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002FPDN): Pathfinder Discovery Networks for Neural Message Passing.\n342. [PyKEEN](https:\u002F\u002Fgithub.com\u002Fpykeen\u002Fpykeen): A Python library for learning and evaluating knowledge graph embeddings.\n343. 
[SSSNET](https:\u002F\u002Fgithub.com\u002FSherylHYX\u002FSSSNET_Signed_Clustering): Official implementation of the SDM 2022 paper \"SSSNET: Semi-Supervised Signed Network Clustering\".\n344. [MagNet](https:\u002F\u002Fgithub.com\u002Fmatthew-hirn\u002Fmagnet): Official implementation of the NeurIPS 2021 paper \"MagNet: A Neural Network for Directed Graphs\".\n345. [Semantic Search](https:\u002F\u002Fgithub.com\u002Fkuutsav\u002Finformation-retrieval): Latest in the field of neural information retrieval \u002F semantic search.\n346. [FreeGrad](https:\u002F\u002Fgithub.com\u002Ftbox98\u002FFreeGrad): PyTorch library for custom backward passes, straight-through estimators and gradient transforms.\n\n## Talks & Conferences\n\n1. [PyTorch Developer Conference 2018](https:\u002F\u002Fdevelopers.facebook.com\u002Fvideos\u002F2018\u002Fpytorch-developer-conference\u002F): The first PyTorch developer conference, held in 2018.\n\n## PyTorch-Related Resources\n\n1. **[the-incredible-pytorch](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fthe-incredible-pytorch)**: The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch.\n2. [Generative models](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models): A collection of generative models, such as GANs and VAEs, implemented in TensorFlow, Keras and PyTorch. http:\u002F\u002Fwiseodd.github.io\n3. [PyTorch vs TensorFlow](https:\u002F\u002Fwww.reddit.com\u002Fr\u002FMachineLearning\u002Fcomments\u002F5w3q74\u002Fd_so_pytorch_vs_tensorflow_whats_the_verdict_on\u002F): An informative discussion thread on Reddit.\n4. [PyTorch discussion forum](https:\u002F\u002Fdiscuss.pytorch.org\u002F)\n5. [PyTorch notebook: Docker stack](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fescong\u002Fpytorch-notebook\u002F): A project similar to the [Jupyter Notebook Scientific Python Stack](https:\u002F\u002Fgithub.com\u002Fjupyter\u002Fdocker-stacks\u002Ftree\u002Fmaster\u002Fscipy-notebook).\n6. [drawlikebobross](https:\u002F\u002Fgithub.com\u002Fkendricktan\u002Fdrawlikebobross): Paint like Bob Ross with the power of neural networks, using PyTorch!\n7. [pytorch-tvmisc](https:\u002F\u002Fgithub.com\u002Ft-vi\u002Fpytorch-tvmisc): Totally versatile miscellanea for PyTorch.\n8. 
[pytorch-a3c-mujoco](https:\u002F\u002Fgithub.com\u002Fandrewliao11\u002Fpytorch-a3c-mujoco): An implementation of the A3C algorithm for Mujoco Gym environments.\n9. [PyTorch in 5 Minutes](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=nbJ-2G2GXL0&list=WL&index=9).\n10. [pytorch_chatbot](https:\u002F\u002Fgithub.com\u002Fjinfagang\u002Fpytorch_chatbot): A marvelous chatbot implemented in PyTorch.\n11. [malmo-challenge](https:\u002F\u002Fgithub.com\u002FKaixhin\u002Fmalmo-challenge): Malmo Collaborative AI Challenge - Team Pig Catcher.\n12. [sketchnet](https:\u002F\u002Fgithub.com\u002Fjtoy\u002Fsketchnet): A model that takes an image and generates Processing source code to regenerate that image.\n13. [Deep-Learning-Boot-Camp](https:\u002F\u002Fgithub.com\u002FQuantScientist\u002FDeep-Learning-Boot-Camp): A 5-day deep learning bootcamp run by a nonprofit community. http:\u002F\u002Fdeep-ml.com\n14. [Amazon_Forest_Computer_Vision](https:\u002F\u002Fgithub.com\u002Fmratsim\u002FAmazon_Forest_Computer_Vision): Satellite image tagging code in PyTorch\u002FKeras with lots of PyTorch tricks, written for a Kaggle competition.\n15. [AlphaZero_Gomoku](https:\u002F\u002Fgithub.com\u002Fjunxiaosong\u002FAlphaZero_Gomoku): An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row).\n16. [pytorch-cv](https:\u002F\u002Fgithub.com\u002Fyouansheng\u002Fpytorch-cv): A repository for object detection, segmentation and pose estimation.\n17. [deep-person-reid](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fdeep-person-reid): A PyTorch implementation of deep person re-identification approaches.\n18. [pytorch-template](https:\u002F\u002Fgithub.com\u002Fvictoresque\u002Fpytorch-template): A PyTorch project template.\n19. [Deep Learning with PyTorch (Packt)](https:\u002F\u002Fwww.packtpub.com\u002Fbig-data-and-business-intelligence\u002Fdeep-learning-pytorch): A practical guide to building neural network models in text and vision with PyTorch. [Amazon link](https:\u002F\u002Fwww.amazon.in\u002FDeep-Learning-PyTorch-practical-approach\u002Fdp\u002F1788624335\u002Fref=tmm_pap_swatch_0?_encoding=UTF8&qid=1523853954&sr=8-1) [GitHub code repository](https:\u002F\u002Fgithub.com\u002Fsvishnu88\u002FDLwithPyTorch)\n20. [compare-tensorflow-pytorch](https:\u002F\u002Fgithub.com\u002Fjalola\u002Fcompare-tensorflow-pytorch): Compare outputs between layers written in TensorFlow and in PyTorch.\n21. [hasktorch](https:\u002F\u002Fgithub.com\u002Fhasktorch\u002Fhasktorch): Tensors and neural networks in Haskell.\n22. 
[Deep Learning with PyTorch (Manning)](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-pytorch): This book teaches you to implement deep learning algorithms with Python and PyTorch.\n23. [nimtorch](https:\u002F\u002Fgithub.com\u002Ffragcolor-xyz\u002Fnimtorch): PyTorch - Python + Nim.\n24. [derplearning](https:\u002F\u002Fgithub.com\u002FJohn-Ellis\u002Fderplearning): Self-driving RC car code.\n25. [pytorch-saltnet](https:\u002F\u002Fgithub.com\u002Ftugstugi\u002Fpytorch-saltnet): Kaggle: 9th place single-model solution for the TGS Salt Identification Challenge.\n26. [pytorch-scripts](https:\u002F\u002Fgithub.com\u002Fpeterjc123\u002Fpytorch-scripts): A few PyTorch scripts designed specifically for Windows.\n27. [pytorch_misc](https:\u002F\u002Fgithub.com\u002Fptrblck\u002Fpytorch_misc): Code snippets created for the PyTorch discussion board.\n28. [awesome-pytorch-scholarship](https:\u002F\u002Fgithub.com\u002Farnas\u002Fawesome-pytorch-scholarship): A curated list of awesome PyTorch scholarship articles, guides, blogs, courses and other resources.\n29. [MentisOculi](https:\u002F\u002Fgithub.com\u002Fmmirman\u002FMentisOculi): A raytracer written in PyTorch (raynet?).\n30. [DoodleMaster](https:\u002F\u002Fgithub.com\u002Fkaranchahal\u002FDoodleMaster): \"Don't code your UI, draw it!\"\n31. [ocaml-torch](https:\u002F\u002Fgithub.com\u002FLaurentMazare\u002Focaml-torch): OCaml bindings for PyTorch.\n32. [extension-script](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fextension-script): An example repository for custom C++\u002FCUDA operators for TorchScript.\n33. [pytorch-inference](https:\u002F\u002Fgithub.com\u002Fzccyman\u002Fpytorch-inference): PyTorch 1.0 inference in C++ on Windows 10.\n34. [pytorch-cpp-inference](https:\u002F\u002Fgithub.com\u002FWizaron\u002Fpytorch-cpp-inference): Serving PyTorch 1.0 models as a web server in C++.\n35. [tch-rs](https:\u002F\u002Fgithub.com\u002FLaurentMazare\u002Ftch-rs): Rust bindings for PyTorch.\n36. [TorchSharp](https:\u002F\u002Fgithub.com\u002Finteresaaat\u002FTorchSharp): .NET bindings for the PyTorch engine.\n37. [ML Workspace](https:\u002F\u002Fgithub.com\u002Fml-tooling\u002Fml-workspace): An all-in-one web IDE for machine learning and data science, combining Jupyter, VS Code, PyTorch and many other tools\u002Flibraries into one Docker image.\n38. 
[pytorch-styleguide](https:\u002F\u002Fgithub.com\u002FIgorSusmelj\u002Fpytorch-styleguide): A style guide for PyTorch code. Consistent and good code style helps collaboration and prevents errors!\n\n\n##### Feedback: If you have ideas or want any other content added to this list, feel free to contribute.","# Awesome-PyTorch-List Quick Start Guide\n\n`Awesome-PyTorch-List` is not a single Python package but a curated collection of open-source projects from the PyTorch ecosystem, covering libraries, tutorials and paper implementations for natural language processing (NLP), computer vision (CV), speech processing and more. This guide helps you set up a basic environment and shows how to pick and use mainstream tools from the list.\n\n## Environment Setup\n\nBefore you start, make sure your development environment meets the following requirements:\n\n*   **Operating system**: Linux (Ubuntu 18.04+ recommended), macOS, or Windows 10\u002F11.\n*   **Python version**: Python 3.8 - 3.11 recommended.\n*   **Hardware acceleration**: An NVIDIA GPU (with the matching CUDA driver installed) is recommended for best performance.\n*   **Prerequisites**:\n    *   The `pip` or `conda` package manager.\n    *   Git (for cloning individual project repositories).\n\n> **Mirror tip for mainland China**:\n> In mainland China, configure the Tsinghua or Alibaba mirror to speed up dependency downloads.\n> *   **Use a mirror with pip ad hoc**: append `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple` to the command\n> *   **Configure mirrors for conda**:\n>   ```bash\n>   conda config --add channels https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fanaconda\u002Fpkgs\u002Fmain\u002F\n>   conda config --add channels https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fanaconda\u002Fpkgs\u002Ffree\u002F\n>   ```\n\n## Installation\n\nBecause this is a resource list, you do not install \"Awesome-PyTorch-List\" itself; instead, install the specific libraries from the list that you need. The steps below install the core framework plus two typical domain libraries.\n\n### 1. Install the PyTorch core framework\nVisit the [PyTorch website](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) for the latest command. The following pip example installs the CUDA 11.8 build. Note that `-i` is simply an alias for `--index-url`, so a PyPI mirror cannot also be appended to this command without overriding the PyTorch wheel index:\n\n```bash\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\nIf you do not need GPU support (users in China can append the mirror parameter here):\n```bash\npip install torch torchvision torchaudio -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2. Install an NLP tool (Transformers as an example)\nThe list includes Hugging Face's `transformers`, one of the most popular NLP libraries today.\n\n```bash\npip install transformers accelerate -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. 
Install a CV tool (MMDetection as an example)\nThe list includes OpenMMLab's `MMDetection`, suited to object detection tasks.\n\n```bash\n# Install mmcv first (prebuilt wheels speed up installation; the cu118\u002Ftorch1.13 path must match your installed CUDA and PyTorch versions)\npip install mmcv-full -f https:\u002F\u002Fdownload.openmmlab.com\u002Fmmcv\u002Fdist\u002Fcu118\u002Ftorch1.13\u002Findex.html -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# Install MMDetection\ngit clone https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection.git\ncd mmdetection\npip install -v -e . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## Basic Usage\n\nThe following shows the simplest inference demos using two representative libraries from the list.\n\n### Scenario 1: Text sentiment analysis with Transformers (NLP)\n\n```python\nfrom transformers import pipeline\n\n# Load a pre-trained sentiment-analysis model\nclassifier = pipeline(\"sentiment-analysis\")\n\n# Run a prediction\nresult = classifier(\"I love using PyTorch for deep learning projects!\")\n\nprint(result)\n# Example output: [{'label': 'POSITIVE', 'score': 0.9998}]\n```\n\n### Scenario 2: Image classification with TorchVision (CV)\n\n```python\nimport torch\nfrom torchvision import models, transforms\nfrom PIL import Image\n\n# Load a pre-trained ResNet18 (use pretrained=True on torchvision < 0.13)\nmodel = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)\nmodel.eval()\n\n# Image preprocessing\npreprocess = transforms.Compose([\n    transforms.Resize(256),\n    transforms.CenterCrop(224),\n    transforms.ToTensor(),\n    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n])\n\n# Assumes an image named 'dog.jpg' in the current directory\nimg_path = \"dog.jpg\"\ninput_image = Image.open(img_path)\ninput_tensor = preprocess(input_image)\ninput_batch = input_tensor.unsqueeze(0)\n\n# Inference\nwith torch.no_grad():\n    output = model(input_batch)\n\n# Get the five most probable classes\nprobabilities = torch.nn.functional.softmax(output[0], dim=0)\ntop5_prob, top5_catid = torch.topk(probabilities, 5)\n\nprint(\"Top 5 predictions:\")\nfor i in range(top5_prob.size(0)):\n    print(f\"Category ID: {top5_catid[i].item()}, Probability: {top5_prob[i].item()}\")\n```\n\n### Scenario 3: Cloning and running a specific project (general approach)\n\nFor projects on the list that are not packaged on PyPI (e.g. some paper reproduction code), clone the repository with Git and run the scripts it provides.\n\n```bash\n# Using pytorch-seq2seq as an example\ngit clone 
https:\u002F\u002Fgithub.com\u002FIBM\u002Fpytorch-seq2seq.git\ncd pytorch-seq2seq\n\n# Install this project's own dependencies\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# Run the sample script (see the project's own README for the exact command)\npython examples\u002Fsample.py\n```","An algorithm engineering team at a startup is building an intelligent customer-service voice system that supports multiple dialects, and urgently needs to integrate state-of-the-art speech recognition and synthesis modules under the PyTorch framework.\n\n### Without Awesome-pytorch-list\n- **Finding resources is like searching for a needle in a haystack**: engineers must search blindly through GitHub's vast sea of repositories, struggling to tell outdated demos from well-maintained, production-grade code (e.g. distinguishing the basic `pytorch-seq2seq` from the mature `fairseq-py`).\n- **High trial-and-error cost in technology selection**: lacking an authoritative comparison, the team may pick a library that is no longer maintained (such as an outdated NLP toolkit), wasting weeks on later rework.\n- **Hard to reproduce cutting-edge papers**: concrete implementations of SOTA (state-of-the-art) models are hard to find, e.g. quickly locating end-to-end speech toolkits such as `Tacotron-pytorch` or `espnet`.\n- **A scattered ecosystem**: auxiliary libraries for data preprocessing, model interpretability (e.g. Captum) and multilingual embeddings (e.g. MUSE) must each be hunted down separately, hurting collaboration efficiency.\n\n### With Awesome-pytorch-list\n- **One-stop, precise navigation**: high-quality resources are located directly through the categorized index; the team quickly selects `espnet` as the core speech engine and uses `pyannote-audio` to handle speaker diarization.\n- **Avoiding abandoned projects**: based on the star counts and community activity shown in the list, outdated options are ruled out immediately, keeping the baseline on the robust `AllenNLP` or `Hugging Face` ecosystems.\n- **Faster paper-to-production**: the \"Paper implementations\" section quickly surfaces official reproductions for neural machine translation and speech synthesis, shortening the R&D cycle from months to weeks.\n- **An integrated toolchain**: the complete chain, from audio I\u002FO (`pytorch\u002Faudio`) to model interpretability, is assembled straight from the list, greatly speeding up system iteration.\n\nAwesome-pytorch-list turns the fragmented PyTorch ecosystem into one clear treasure map, freeing developers from tedious searching so they can focus on core algorithm innovation and delivery.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbharathgs_Awesome-pytorch-list_c4722785.png","bharathgs","bharath g s","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbharathgs_62658f8a.png","Machine Learning Engineer","@betterup","☁️","royalkingpin@gmail.com","iambharathgs","iambharathgs.in","https:\u002F\u002Fgithub.com\u002Fbharathgs",null,16460,2830,"2026-04-04T11:22:54",1,"","Not specified",{"notes":93,"python":91,"dependencies":94},"This repository is a curated list (an Awesome List) of PyTorch-related libraries, tutorials and paper implementations, not a standalone runnable software tool, so there is no single set of runtime requirements. The actual environment requirements depend on which specific sub-project from the list the user chooses (e.g. transformers, detectron2, fairseq). These projects usually require PyTorch to be installed, with the matching CUDA version and dependencies configured per each sub-project's documentation.",[95,96,97],"torch","torchvision","torchaudio",[51,14,13,26,54],[100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119],"pytorch","python","machine-learning","deep-learning","tutorials","papers","awesome","awesome-list","pytorch-tutorials","data-science","nlp","nlp-library","cv","computer-vision","natural-language-processing","facebook","probabilistic-programming","utility-library","neural-network","pytorch-model","2026-03-27T02:49:30.150509","2026-04-06T08:46:25.987215",[123,128,133,138,143,148],{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},15622,"Can I create a Chinese version based on this list?","Yes. A user has already created a Chinese version named 'Awesome-pytorch-list-CNVersion'. It uses the GitHub API to show each repository's star count, and a large amount of translation work is still in progress.","https:\u002F\u002Fgithub.com\u002Fbharathgs\u002FAwesome-pytorch-list\u002Fissues\u002F76",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},15623,"How do I recommend a new PyTorch project or library for the list?","The maintainer suggests adding new projects via a Pull Request (PR). You can first raise the suggestion in an issue; the maintainer usually replies asking you to open a PR and add the entry (as in the RoMa and GeomLoss cases).","https:\u002F\u002Fgithub.com\u002Fbharathgs\u002FAwesome-pytorch-list\u002Fissues\u002F125",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},15624,"What should I do if a link appears twice in the list?","If you find a duplicate entry (for example, tensorboard-pytorch was once listed twice), point it out directly in an issue and the maintainer will confirm it and remove the duplicate.","https:\u002F\u002Fgithub.com\u002Fbharathgs\u002FAwesome-pytorch-list\u002Fissues\u002F12",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},15625,"Can the list automatically show each repository's last update date?","This is a feature request. The maintainer is not sure how to automate it (i.e. updating the date whenever a repository is updated), but welcomes implementation ideas from the community or a Pull Request that adds the feature.","https:\u002F\u002Fgithub.com\u002Fbharathgs\u002FAwesome-pytorch-list\u002Fissues\u002F72",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},15626,"I want to reproduce a paper's experiments but hit a data-generation problem; can I get the code through GitHub?","GitHub issues are meant for project discussion, not private code requests. Asking in an issue for specific experiment code (such as .npy generation code) after e-mails bounce usually gets no direct response, and may go nowhere because the wrong party is being contacted. Check the original paper's repository or try contacting the original authors.","https:\u002F\u002Fgithub.com\u002Fbharathgs\u002FAwesome-pytorch-list\u002Fissues\u002F59",{"id":149,"question_zh":150,"answer_zh":151,"source_url":127},15627,"What should I do if a notebook in the list leaks credentials?","If you find that a notebook linked from the list exposes sensitive information such as GitHub credentials, notify the maintainer immediately. The maintainer will appreciate the heads-up and remove it, or ask the responsible party to delete the sensitive content, as soon as possible.",[]]