[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ChristosChristofidis--awesome-deep-learning":3,"tool-ChristosChristofidis--awesome-deep-learning":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要满足了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你的专属智能伙伴，OpenClaw 值得一试。",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上下文、会积累经验的开发伙伴。",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析流水线的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG)系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器可读性而优化，服务于下游的 AI 处理流程。",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":75,"owner_location":75,"owner_email":75,"owner_twitter":75,"owner_website":75,"owner_url":76,"languages":75,"stars":77,"forks":78,"last_commit_at":79,"license":75,"difficulty_score":80,"env_os":81,"env_gpu":82,"env_ram":82,"env_deps":83,"category_tags":86,"github_topics":87,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":97,"updated_at":98,"faqs":99,"releases":100},9844,"ChristosChristofidis\u002Fawesome-deep-learning","awesome-deep-learning","A curated list of awesome Deep Learning tutorials, projects and communities.","awesome-deep-learning 是一份精心整理的深度学习资源清单，旨在为学习者与从业者提供一站式的知识导航。面对深度学习领域海量且分散的教程、论文、框架及数据集，用户往往难以快速筛选出高质量内容，而这份清单有效解决了信息过载与检索困难的问题。\n\n它系统地涵盖了从入门到精通的全方位资料，包括经典书籍（如 Yoshua Bengio 的《Deep Learning》）、名校课程（如吴恩达的机器学习课）、前沿论文、视频教程、主流开发框架以及关键数据集等。其独特亮点在于“精选”机制，由社区共同维护，确保收录内容的权威性与时效性，并细分为书籍、课程、工具、会议等十余个类别，结构清晰，便于按需查找。\n\n无论是刚踏入 AI 
领域的学生、需要追踪最新进展的研究人员，还是致力于模型落地的开发者，都能从中找到适合自己的学习路径或参考项目。设计师若希望了解技术边界以辅助创作，亦可在此获取灵感。awesome-deep-learning 不直接提供代码运行环境，而是作为一份可靠的“地图”，帮助用户高效构建知识体系，避免在繁杂的信息海洋中迷失方向，是深度学习爱好者不可或缺的案头指南。","# Awesome Deep Learning [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\n## Table of Contents\n\n* **[Books](#books)**\n\n* **[Courses](#courses)**\n\n* **[Videos and Lectures](#videos-and-lectures)**\n\n* **[Papers](#papers)**\n\n* **[Tutorials](#tutorials)**\n\n* **[Researchers](#researchers)**\n\n* **[Websites](#websites)**\n\n* **[Datasets](#datasets)**\n\n* **[Conferences](#conferences)**\n\n* **[Frameworks](#frameworks)**\n\n* **[Tools](#tools)**\n\n* **[Miscellaneous](#miscellaneous)**\n\n* **[Contributing](#contributing)**\n\n\n### Books\n\n1.  [Deep Learning](http:\u002F\u002Fwww.deeplearningbook.org\u002F) by Ian Goodfellow, Yoshua Bengio and Aaron Courville (05\u002F07\u002F2015)\n2.  [Neural Networks and Deep Learning](http:\u002F\u002Fneuralnetworksanddeeplearning.com\u002F) by Michael Nielsen (Dec 2014)\n3.  [Deep Learning](http:\u002F\u002Fresearch.microsoft.com\u002Fpubs\u002F209355\u002FDeepLearning-NowPublishing-Vol7-SIG-039.pdf) by Microsoft Research (2013)\n4.  [Deep Learning Tutorial](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Fdeeplearning.pdf) by LISA lab, University of Montreal (Jan 6 2015)\n5.  [neuraltalk](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fneuraltalk) by Andrej Karpathy: numpy-based RNN\u002FLSTM implementation\n6.  [An introduction to genetic algorithms](http:\u002F\u002Fwww.boente.eti.br\u002Ffuzzy\u002Febook-fuzzy-mitchell.pdf)\n7.  [Artificial Intelligence: A Modern Approach](http:\u002F\u002Faima.cs.berkeley.edu\u002F)\n8.  [Deep Learning in Neural Networks: An Overview](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1404.7828v4.pdf)\n9.  
[Artificial intelligence and machine learning: Topic wise explanation](https:\u002F\u002Fleonardoaraujosantos.gitbooks.io\u002Fartificial-inteligence\u002F)\n10. [Grokking Deep Learning for Computer Vision](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fgrokking-deep-learning-for-computer-vision)\n11. [Dive into Deep Learning](https:\u002F\u002Fd2l.ai\u002F) - numpy based interactive Deep Learning book\n12. [Practical Deep Learning for Cloud, Mobile, and Edge](https:\u002F\u002Fwww.oreilly.com\u002Flibrary\u002Fview\u002Fpractical-deep-learning\u002F9781492034858\u002F) - A book for optimization techniques during production.\n13. [Math and Architectures of Deep Learning](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fmath-and-architectures-of-deep-learning) - by Krishnendu Chaudhury\n14. [TensorFlow 2.0 in Action](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Ftensorflow-in-action) - by Thushan Ganegedara\n15. [Deep Learning for Natural Language Processing](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-for-natural-language-processing) - by Stephan Raaijmakers\n16. [Deep Learning Patterns and Practices](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-patterns-and-practices) - by Andrew Ferlitsch\n17. [Inside Deep Learning](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Finside-deep-learning) - by Edward Raff\n18. [Deep Learning with Python, Second Edition](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-python-second-edition) - by François Chollet\n19. [Evolutionary Deep Learning](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fevolutionary-deep-learning) - by Micheal Lanham\n20. [Engineering Deep Learning Platforms](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fengineering-deep-learning-platforms) - by Chi Wang and Donald Szeto\n21. 
[Deep Learning with R, Second Edition](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-r-second-edition) - by François Chollet with Tomasz Kalinowski and J. J. Allaire\n22. [Regularization in Deep Learning](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fregularization-in-deep-learning) - by Liu Peng\n23. [Jax in Action](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fjax-in-action) - by Grigory Sapunov\n24. [Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow](https:\u002F\u002Fwww.knowledgeisle.com\u002Fwp-content\u002Fuploads\u002F2019\u002F12\u002F2-Aur%C3%A9lien-G%C3%A9ron-Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O%E2%80%99Reilly-Media-2019.pdf) by Aurélien Géron  | Oct 15, 2019\n\n### Courses\n\n1.  [Machine Learning - Stanford](https:\u002F\u002Fclass.coursera.org\u002Fml-005) by Andrew Ng in Coursera (2010-2014)\n2.  [Machine Learning - Caltech](http:\u002F\u002Fwork.caltech.edu\u002Flectures.html) by Yaser Abu-Mostafa (2012-2014)\n3.  [Machine Learning - Carnegie Mellon](http:\u002F\u002Fwww.cs.cmu.edu\u002F~tom\u002F10701_sp11\u002Flectures.shtml) by Tom Mitchell (Spring 2011)\n2.  [Neural Networks for Machine Learning](https:\u002F\u002Fclass.coursera.org\u002Fneuralnets-2012-001) by Geoffrey Hinton in Coursera (2012)\n3.  [Neural networks class](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH) by Hugo Larochelle from Université de Sherbrooke (2013)\n4.  [Deep Learning Course](http:\u002F\u002Fcilvr.cs.nyu.edu\u002Fdoku.php?id=deeplearning:slides:start) by CILVR lab @ NYU (2014)\n5.  [A.I - Berkeley](https:\u002F\u002Fcourses.edx.org\u002Fcourses\u002FBerkeleyX\u002FCS188x_1\u002F1T2013\u002Fcourseware\u002F) by Dan Klein and Pieter Abbeel (2013)\n6.  
[A.I - MIT](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-034-artificial-intelligence-fall-2010\u002Flecture-videos\u002F) by Patrick Henry Winston (2010)\n7.  [Vision and learning - computers and brains](http:\u002F\u002Fweb.mit.edu\u002Fcourse\u002Fother\u002Fi2course\u002Fwww\u002Fvision_and_learning_fall_2013.html) by Shimon Ullman, Tomaso Poggio, Ethan Meyers @ MIT (2013)\n9.  [Convolutional Neural Networks for Visual Recognition - Stanford](http:\u002F\u002Fvision.stanford.edu\u002Fteaching\u002Fcs231n\u002Fsyllabus.html) by Fei-Fei Li, Andrej Karpathy (2017)\n10.  [Deep Learning for Natural Language Processing - Stanford](http:\u002F\u002Fcs224d.stanford.edu\u002F)\n11.  [Neural Networks - usherbrooke](http:\u002F\u002Finfo.usherbrooke.ca\u002Fhlarochelle\u002Fneural_networks\u002Fcontent.html)\n12.  [Machine Learning - Oxford](https:\u002F\u002Fwww.cs.ox.ac.uk\u002Fpeople\u002Fnando.defreitas\u002Fmachinelearning\u002F) (2014-2015)\n13.  [Deep Learning - Nvidia](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeep-learning-courses) (2015)\n14.  [Graduate Summer School: Deep Learning, Feature Learning](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLHyI3Fbmv0SdzMHAy0aN59oYnLy5vyyTA) by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, Nando de Freitas and several others @ IPAM, UCLA (2012)\n15.  [Deep Learning - Udacity\u002FGoogle](https:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-learning--ud730) by Vincent Vanhoucke and Arpan Chakraborty (2016)\n16.  [Deep Learning - UWaterloo](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLehuLRPyt1Hyi78UOkMPWCGRxGcA9NVOE) by Prof. Ali Ghodsi at University of Waterloo (2015)\n17.  [Statistical Machine Learning - CMU](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=azaLcvuql_g&list=PLjbUi5mgii6BWEUZf7He6nowWvGne_Y8r) by Prof. Larry Wasserman\n18.  
[Deep Learning Course](https:\u002F\u002Fwww.college-de-france.fr\u002Fsite\u002Fen-yann-lecun\u002Fcourse-2015-2016.htm) by Yann LeCun (2016)\n19. [Designing, Visualizing and Understanding Deep Neural Networks-UC Berkeley](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm)\n20. [UVA Deep Learning Course](http:\u002F\u002Fuvadlc.github.io) MSc in Artificial Intelligence for the University of Amsterdam.\n21. [MIT 6.S094: Deep Learning for Self-Driving Cars](http:\u002F\u002Fselfdrivingcars.mit.edu\u002F)\n22. [MIT 6.S191: Introduction to Deep Learning](http:\u002F\u002Fintrotodeeplearning.com\u002F)\n23. [Berkeley CS 294: Deep Reinforcement Learning](http:\u002F\u002Frll.berkeley.edu\u002Fdeeprlcourse\u002F)\n24. [Keras in Motion video course](https:\u002F\u002Fwww.manning.com\u002Flivevideo\u002Fkeras-in-motion)\n25. [Practical Deep Learning For Coders](http:\u002F\u002Fcourse.fast.ai\u002F) by Jeremy Howard - Fast.ai\n26. [Introduction to Deep Learning](http:\u002F\u002Fdeeplearning.cs.cmu.edu\u002F) by Prof. Bhiksha Raj (2017)\n27. [AI for Everyone](https:\u002F\u002Fwww.deeplearning.ai\u002Fai-for-everyone\u002F) by Andrew Ng (2019)\n28. [MIT Intro to Deep Learning 7 day bootcamp](https:\u002F\u002Fintrotodeeplearning.com) - A seven day bootcamp designed in MIT to introduce deep learning methods and applications (2019)\n29. [Deep Blueberry: Deep Learning](https:\u002F\u002Fmithi.github.io\u002Fdeep-blueberry) - A free five-weekend plan to self-learners to learn the basics of deep-learning architectures like CNNs, LSTMs, RNNs, VAEs, GANs, DQN, A3C and more (2019)\n30. [Spinning Up in Deep Reinforcement Learning](https:\u002F\u002Fspinningup.openai.com\u002F) - A free deep reinforcement learning course by OpenAI (2019)\n31. [Deep Learning Specialization - Coursera](https:\u002F\u002Fwww.coursera.org\u002Fspecializations\u002Fdeep-learning) - Breaking into AI with the best course from Andrew NG.\n32. 
[Deep Learning - UC Berkeley | STAT-157](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLZSO_6-bSqHQHBCoGaObUljoXAyyqhpFW) by Alex Smola and Mu Li (2019)\n33. [Machine Learning for Mere Mortals video course](https:\u002F\u002Fwww.manning.com\u002Flivevideo\u002Fmachine-learning-for-mere-mortals) by Nick Chase\n34. [Machine Learning Crash Course with TensorFlow APIs](https:\u002F\u002Fdevelopers.google.com\u002Fmachine-learning\u002Fcrash-course\u002F) -Google AI\n35. [Deep Learning from the Foundations](https:\u002F\u002Fcourse.fast.ai\u002Fpart2) Jeremy Howard - Fast.ai\n36. [Deep Reinforcement Learning (nanodegree) - Udacity](https:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-reinforcement-learning-nanodegree--nd893) a 3-6 month Udacity nanodegree, spanning multiple courses (2018)\n37. [Grokking Deep Learning in Motion](https:\u002F\u002Fwww.manning.com\u002Flivevideo\u002Fgrokking-deep-learning-in-motion) by Beau Carnes (2018)\n38. [Face Detection with Computer Vision and Deep Learning](https:\u002F\u002Fwww.udemy.com\u002Fshare\u002F1000gAA0QdcV9aQng=\u002F) by Hakan Cebeci\n39. [Deep Learning Online Course list at Classpert](https:\u002F\u002Fclasspert.com\u002Fdeep-learning) List of Deep Learning online courses (some are free) from Classpert Online Course Search\n40. [AWS Machine Learning](https:\u002F\u002Faws.training\u002Fmachinelearning) Machine Learning and Deep Learning Courses from Amazon's Machine Learning university\n41. [Intro to Deep Learning with PyTorch](https:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-learning-pytorch--ud188) - A great introductory course on Deep Learning by Udacity and Facebook AI\n42. [Deep Learning by Kaggle](https:\u002F\u002Fwww.kaggle.com\u002Flearn\u002Fdeep-learning) - Kaggle's  free course on Deep Learning\n43. [Yann LeCun’s Deep Learning Course at CDS](https:\u002F\u002Fcds.nyu.edu\u002Fdeep-learning\u002F) - DS-GA 1008 · SPRING 2021 \n44. 
[Neural Networks and Deep Learning](https:\u002F\u002Fwebcms3.cse.unsw.edu.au\u002FCOMP9444\u002F19T3\u002F) - COMP9444 19T3\n45. [Deep Learning A.I.Shelf](http:\u002F\u002Faishelf.org\u002Fcategory\u002Fia\u002Fdeep-learning\u002F)\n\n### Videos and Lectures\n\n1.  [How To Create A Mind](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RIkxVci-R4k) By Ray Kurzweil\n2.  [Deep Learning, Self-Taught Learning and Unsupervised Feature Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=n1ViNeWhC24) By Andrew Ng\n3.  [Recent Developments in Deep Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vShMxxqtDDs&index=3&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT) By Geoff Hinton\n4.  [The Unreasonable Effectiveness of Deep Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=sc-KbuZqGkI) by Yann LeCun\n5.  [Deep Learning of Representations](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=4xsVFLnHC_0) by Yoshua Bengio\n6.  [Principles of Hierarchical Temporal Memory](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6ufPpZDmPKA) by Jeff Hawkins\n7.  [Machine Learning Discussion Group - Deep Learning w\u002F Stanford AI Lab](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=2QJi0ArLq7s&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT) by Adam Coates\n8.  [Making Sense of the World with Deep Learning](http:\u002F\u002Fvimeo.com\u002F80821560) By Adam Coates\n9.  [Demystifying Unsupervised Feature Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=wZfVBwOO0-k) By Adam Coates\n10.  [Visual Perception with Deep Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=3boKlkPBckA) By Yann LeCun\n11.  [The Next Generation of Neural Networks](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AyzOUbkUf3M) By Geoffrey Hinton at GoogleTechTalks\n12.  
[The wonderful and terrifying implications of computers that can learn](http:\u002F\u002Fwww.ted.com\u002Ftalks\u002Fjeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn) By Jeremy Howard at TEDxBrussels\n13.  [Unsupervised Deep Learning - Stanford](http:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs294a\u002Fhandouts.html) by Andrew Ng in Stanford (2011)\n14.  [Natural Language Processing](http:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs224n\u002Fhandouts\u002F) By Chris Manning in Stanford\n15.  [A beginners Guide to Deep Neural Networks](http:\u002F\u002Fgoogleresearch.blogspot.com\u002F2015\u002F09\u002Fa-beginners-guide-to-deep-neural.html) By Natalie Hammel and Lorraine Yurshansky\n16.  [Deep Learning: Intelligence from Big Data](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=czLI3oLDe8M) by Steve Jurvetson (and panel) at VLAB in Stanford.\n17. [Introduction to Artificial Neural Networks and Deep Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FoO8qDB8gUU) by Leo Isikdogan at Motorola Mobility HQ\n18. [NIPS 2016 lecture and workshop videos](https:\u002F\u002Fnips.cc\u002FConferences\u002F2016\u002FSchedule) - NIPS 2016\n19. [Deep Learning Crash Course](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=oS5fz_mHVz0&list=PLWKotBjTDoLj3rXBL-nEIPRN9V3a9Cx07): a series of mini-lectures by Leo Isikdogan on YouTube (2018)\n20. [Deep Learning Crash Course](https:\u002F\u002Fwww.manning.com\u002Flivevideo\u002Fdeep-learning-crash-course) By Oliver Zeigermann\n21. [Deep Learning with R in Motion](https:\u002F\u002Fwww.manning.com\u002Flivevideo\u002Fdeep-learning-with-r-in-motion): a live video course that teaches how to apply deep learning to text and images using the powerful Keras library and its R language interface.\n22. 
[Medical Imaging with Deep Learning Tutorial](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLheiZMDg_8ufxEx9cNVcOYXsT3BppJP4b): This tutorial is styled as a graduate lecture about medical imaging with deep learning. This will cover the background of popular medical image domains (chest X-ray and histology) as well as methods to tackle multi-modality\u002Fview, segmentation, and counting tasks.\n23. [Deepmind x UCL Deeplearning](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLqYmG7hTraZCDxZ44o4p3N5Anz3lLRVZF): 2020 version \n24. [Deepmind x UCL Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLqYmG7hTraZBKeNJ-JE_eyJHZ7XgBoAyb): Deep Reinforcement Learning\n25. [CMU 11-785 Intro to Deep learning Spring 2020](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLp-0K3kfddPzCnS4CqKphh-zT3aDwybDe) Course: 11-785, Intro to Deep Learning by Bhiksha Raj \n26. [Machine Learning CS 229](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU) : End part focuses on deep learning By Andrew Ng\n27. [What is Neural Structured Learning by Andrew Ferlitsch](https:\u002F\u002Fyoutu.be\u002FLXWSE_9gHd0)\n28. [Deep Learning Design Patterns by Andrew Ferlitsch](https:\u002F\u002Fyoutu.be\u002F_DaviS6K0Vc)\n29. [Architecture of a Modern CNN: the design pattern approach by Andrew Ferlitsch](https:\u002F\u002Fyoutu.be\u002FQCGSS3kyGo0)\n30. [Metaparameters in a CNN by Andrew Ferlitsch](https:\u002F\u002Fyoutu.be\u002FK1PLeggQ33I)\n31. [Multi-task CNN: a real-world example by Andrew Ferlitsch](https:\u002F\u002Fyoutu.be\u002FdH2nuI-1-qM)\n32. [A friendly introduction to deep reinforcement learning by Luis Serrano](https:\u002F\u002Fyoutu.be\u002F1FyAh07jh0o)\n33. [What are GANs and how do they work? by Edward Raff](https:\u002F\u002Fyoutu.be\u002Ff6ivp84qFUc)\n34. [Coding a basic WGAN in PyTorch by Edward Raff](https:\u002F\u002Fyoutu.be\u002F7VRdaqMDalQ)\n35. 
[Training a Reinforcement Learning Agent by Miguel Morales](https:\u002F\u002Fyoutu.be\u002F8TMT-gHlj_Q)\n36. [Understand what is Deep Learning](https:\u002F\u002Fwww.scaler.com\u002Ftopics\u002Fwhat-is-deep-learning\u002F)\n\n### Papers\n*You can also find the most cited deep learning papers from [here](https:\u002F\u002Fgithub.com\u002Fterryum\u002Fawesome-deep-learning-papers)*\n\n1.  [ImageNet Classification with Deep Convolutional Neural Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)\n2.  [Using Very Deep Autoencoders for Content Based Image Retrieval](http:\u002F\u002Fwww.cs.toronto.edu\u002F~hinton\u002Fabsps\u002Fesann-deep-final.pdf)\n3.  [Learning Deep Architectures for AI](http:\u002F\u002Fwww.iro.umontreal.ca\u002F~lisa\u002Fpointeurs\u002FTR1312.pdf)\n4.  [CMU’s list of papers](http:\u002F\u002Fdeeplearning.cs.cmu.edu\u002F)\n5.  [Neural Networks for Named Entity Recognition](http:\u002F\u002Fnlp.stanford.edu\u002F~socherr\u002Fpa4_ner.pdf) [zip](http:\u002F\u002Fnlp.stanford.edu\u002F~socherr\u002Fpa4-ner.zip)\n6. [Training tricks by YB](http:\u002F\u002Fwww.iro.umontreal.ca\u002F~bengioy\u002Fpapers\u002FYB-tricks.pdf)\n7. [Geoff Hinton's reading list (all papers)](http:\u002F\u002Fwww.cs.toronto.edu\u002F~hinton\u002Fdeeprefs.html)\n8. [Supervised Sequence Labelling with Recurrent Neural Networks](http:\u002F\u002Fwww.cs.toronto.edu\u002F~graves\u002Fpreprint.pdf)\n9.  [Statistical Language Models based on Neural Networks](http:\u002F\u002Fwww.fit.vutbr.cz\u002F~imikolov\u002Frnnlm\u002Fthesis.pdf)\n10.  [Training Recurrent Neural Networks](http:\u002F\u002Fwww.cs.utoronto.ca\u002F~ilya\u002Fpubs\u002Filya_sutskever_phd_thesis.pdf)\n11.  [Recursive Deep Learning for Natural Language Processing and Computer Vision](http:\u002F\u002Fnlp.stanford.edu\u002F~socherr\u002Fthesis.pdf)\n12.  
[Bi-directional RNN](http:\u002F\u002Fwww.di.ufpe.br\u002F~fnj\u002FRNA\u002Fbibliografia\u002FBRNN.pdf)\n13.  [LSTM](http:\u002F\u002Fweb.eecs.utk.edu\u002F~itamar\u002Fcourses\u002FECE-692\u002FBobby_paper1.pdf)\n14.  [GRU - Gated Recurrent Unit](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1406.1078v3.pdf)\n15.  [GFRNN](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1502.02367v3.pdf) [.](http:\u002F\u002Fjmlr.org\u002Fproceedings\u002Fpapers\u002Fv37\u002Fchung15.pdf) [.](http:\u002F\u002Fjmlr.org\u002Fproceedings\u002Fpapers\u002Fv37\u002Fchung15-supp.pdf)\n16.  [LSTM: A Search Space Odyssey](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1503.04069v1.pdf)\n17.  [A Critical Review of Recurrent Neural Networks for Sequence Learning](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.00019v1.pdf)\n18.  [Visualizing and Understanding Recurrent Networks](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.02078v1.pdf)\n19.  [Wojciech Zaremba, Ilya Sutskever, An Empirical Exploration of Recurrent Network Architectures](http:\u002F\u002Fjmlr.org\u002Fproceedings\u002Fpapers\u002Fv37\u002Fjozefowicz15.pdf)\n20.  [Recurrent Neural Network based Language Model](http:\u002F\u002Fwww.fit.vutbr.cz\u002Fresearch\u002Fgroups\u002Fspeech\u002Fpubli\u002F2010\u002Fmikolov_interspeech2010_IS100722.pdf)\n21.  [Extensions of Recurrent Neural Network Language Model](http:\u002F\u002Fwww.fit.vutbr.cz\u002Fresearch\u002Fgroups\u002Fspeech\u002Fpubli\u002F2011\u002Fmikolov_icassp2011_5528.pdf)\n22.  [Recurrent Neural Network based Language Modeling in Meeting Recognition](http:\u002F\u002Fwww.fit.vutbr.cz\u002F~imikolov\u002Frnnlm\u002FApplicationOfRNNinMeetingRecognition_IS2011.pdf)\n23.  [Deep Neural Networks for Acoustic Modeling in Speech Recognition](http:\u002F\u002Fcs224d.stanford.edu\u002Fpapers\u002Fmaas_paper.pdf)\n24.  [Speech Recognition with Deep Recurrent Neural Networks](http:\u002F\u002Fwww.cs.toronto.edu\u002F~fritz\u002Fabsps\u002FRNN13.pdf)\n25.  
[Reinforcement Learning Neural Turing Machines](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1505.00521v1)\n26.  [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1406.1078v3.pdf)\n27. [Google - Sequence to Sequence  Learning with Neural Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5346-sequence-to-sequence-learning-with-neural-networks.pdf)\n28. [Memory Networks](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1410.3916v10)\n29. [Policy Learning with Continuous Memory States for Partially Observed Robotic Control](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1507.01273v1)\n30. [Microsoft - Jointly Modeling Embedding and Translation to Bridge Video and Language](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1505.01861v1.pdf)\n31. [Neural Turing Machines](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1410.5401v2.pdf)\n32. [Ask Me Anything: Dynamic Memory Networks for Natural Language Processing](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.07285v1.pdf)\n33. [Mastering the Game of Go with Deep Neural Networks and Tree Search](http:\u002F\u002Fwww.nature.com\u002Fnature\u002Fjournal\u002Fv529\u002Fn7587\u002Fpdf\u002Fnature16961.pdf)\n34. [Batch Normalization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03167)\n35. [Residual Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1512.03385v1.pdf)\n36. [Image-to-Image Translation with Conditional Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.07004v1.pdf)\n37. [Berkeley AI Research (BAIR) Laboratory](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.07004v1.pdf)\n38. [MobileNets by Google](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861)\n39. [Cross Audio-Visual Recognition in the Wild Using Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.05739)\n40. [Dynamic Routing Between Capsules](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.09829)\n41. 
[Matrix Capsules With Em Routing](https:\u002F\u002Fopenreview.net\u002Fpdf?id=HJWLfGWRb)\n42. [Efficient BackProp](http:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fpublis\u002Fpdf\u002Flecun-98b.pdf)\n43. [Generative Adversarial Nets](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1406.2661v1.pdf)\n44. [Fast R-CNN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1504.08083.pdf)\n45. [FaceNet: A Unified Embedding for Face Recognition and Clustering](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1503.03832.pdf)\n46. [Siamese Neural Networks for One-shot Image Recognition](https:\u002F\u002Fwww.cs.cmu.edu\u002F~rsalakhu\u002Fpapers\u002Foneshot1.pdf)\n47. [Unsupervised Translation of Programming Languages](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.03511.pdf)\n48. [Matching Networks for One Shot Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6385-matching-networks-for-one-shot-learning.pdf)\n49. [VOLO: Vision Outlooker for Visual Recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.13112.pdf)\n50. [ViT: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.11929.pdf)\n51. [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](http:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fioffe15.pdf)\n52. [DeepFaceDrawing: Deep Generation of Face Images from Sketches](http:\u002F\u002Fgeometrylearning.com\u002Fpaper\u002FDeepFaceDrawing.pdf?fbclid=IwAR0colWFHPGBCB1APZq9JVsWeWtmeZd9oCTNQvR52T5PRUJP_dLOwB8pt0I)\n\n### Tutorials\n\n1.  [UFLDL Tutorial 1](http:\u002F\u002Fdeeplearning.stanford.edu\u002Fwiki\u002Findex.php\u002FUFLDL_Tutorial)\n2.  [UFLDL Tutorial 2](http:\u002F\u002Fufldl.stanford.edu\u002Ftutorial\u002Fsupervised\u002FLinearRegression\u002F)\n3.  [Deep Learning for NLP (without Magic)](http:\u002F\u002Fwww.socher.org\u002Findex.php\u002FDeepLearningTutorial\u002FDeepLearningTutorial)\n4.  
[A Deep Learning Tutorial: From Perceptrons to Deep Networks](http:\u002F\u002Fwww.toptal.com\u002Fmachine-learning\u002Fan-introduction-to-deep-learning-from-perceptrons-to-deep-networks)\n5.  [Deep Learning from the Bottom up](http:\u002F\u002Fwww.metacademy.org\u002Froadmaps\u002Frgrosse\u002Fdeep_learning)\n6.  [Theano Tutorial](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Fdeeplearning.pdf)\n7.  [Neural Networks for Matlab](http:\u002F\u002Fuk.mathworks.com\u002Fhelp\u002Fpdf_doc\u002Fnnet\u002Fnnet_ug.pdf)\n8.  [Using convolutional neural nets to detect facial keypoints tutorial](http:\u002F\u002Fdanielnouri.org\u002Fnotes\u002F2014\u002F12\u002F17\u002Fusing-convolutional-neural-nets-to-detect-facial-keypoints-tutorial\u002F)\n9.  [Torch7 Tutorials](https:\u002F\u002Fgithub.com\u002Fclementfarabet\u002Fipam-tutorials\u002Ftree\u002Fmaster\u002Fth_tutorials)\n10.  [The Best Machine Learning Tutorials On The Web](https:\u002F\u002Fgithub.com\u002Fjosephmisiti\u002Fmachine-learning-module)\n11. [VGG Convolutional Neural Networks Practical](http:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fpracticals\u002Fcnn\u002Findex.html)\n12. [TensorFlow tutorials](https:\u002F\u002Fgithub.com\u002Fnlintz\u002FTensorFlow-Tutorials)\n13. [More TensorFlow tutorials](https:\u002F\u002Fgithub.com\u002Fpkmital\u002Ftensorflow_tutorials)\n13. [TensorFlow Python Notebooks](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples)\n14. [Keras and Lasagne Deep Learning Tutorials](https:\u002F\u002Fgithub.com\u002FVict0rSch\u002Fdeep_learning)\n15. [Classification on raw time series in TensorFlow with a LSTM RNN](https:\u002F\u002Fgithub.com\u002Fguillaume-chevalier\u002FLSTM-Human-Activity-Recognition)\n16. [Using convolutional neural nets to detect facial keypoints tutorial](http:\u002F\u002Fdanielnouri.org\u002Fnotes\u002F2014\u002F12\u002F17\u002Fusing-convolutional-neural-nets-to-detect-facial-keypoints-tutorial\u002F)\n17. 
[TensorFlow-World](https:\u002F\u002Fgithub.com\u002Fastorfi\u002FTensorFlow-World)\n18. [Deep Learning with Python](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-python)\n19. [Grokking Deep Learning](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fgrokking-deep-learning)\n20. [Deep Learning for Search](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-for-search)\n21. [Keras Tutorial: Content Based Image Retrieval Using a Convolutional Denoising Autoencoder](https:\u002F\u002Fmedium.com\u002Fsicara\u002Fkeras-tutorial-content-based-image-retrieval-convolutional-denoising-autoencoder-dc91450cc511)\n22. [Pytorch Tutorial by Yunjey Choi](https:\u002F\u002Fgithub.com\u002Fyunjey\u002Fpytorch-tutorial)\n23. [Understanding deep Convolutional Neural Networks with a practical use-case in Tensorflow and Keras](https:\u002F\u002Fahmedbesbes.com\u002Funderstanding-deep-convolutional-neural-networks-with-a-practical-use-case-in-tensorflow-and-keras.html)\n24. [Overview and benchmark of traditional and deep learning models in text classification](https:\u002F\u002Fahmedbesbes.com\u002Foverview-and-benchmark-of-traditional-and-deep-learning-models-in-text-classification.html)\n25. [Hardware for AI: Understanding computer hardware & build your own computer](https:\u002F\u002Fgithub.com\u002FMelAbgrall\u002FHardwareforAI)\n26. [Programming Community Curated Resources](https:\u002F\u002Fhackr.io\u002Ftutorials\u002Flearn-artificial-intelligence-ai)\n27. [The Illustrated Self-Supervised Learning](https:\u002F\u002Famitness.com\u002F2020\u002F02\u002Fillustrated-self-supervised-learning\u002F)\n28. [Visual Paper Summary: ALBERT (A Lite BERT)](https:\u002F\u002Famitness.com\u002F2020\u002F02\u002Falbert-visual-summary\u002F)\n28. [Semi-Supervised Deep Learning with GANs for Melanoma Detection](https:\u002F\u002Fwww.manning.com\u002Fliveproject\u002Fsemi-supervised-deep-learning-with-gans-for-melanoma-detection\u002F)\n29. 
[Named Entity Recognition using Reformers](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FTrax-Examples\u002Fblob\u002Fmain\u002FNLP\u002FNER%20using%20Reformer.ipynb)\n30. [Deep N-Gram Models on Shakespeare’s works](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FTrax-Examples\u002Fblob\u002Fmain\u002FNLP\u002FDeep%20N-Gram.ipynb)\n31. [Wide Residual Networks](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FTrax-Examples\u002Fblob\u002Fmain\u002Fvision\u002Fillustrated-wideresnet.ipynb)\n32. [Fashion MNIST using Flax](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FFlax-Examples)\n33. [Fake News Classification (with streamlit deployment)](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FFake-News-Classification)\n34. [Regression Analysis for Primary Biliary Cirrhosis](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FCoxPH-Model-for-Primary-Biliary-Cirrhosis)\n35. [Cross Matching Methods for Astronomical Catalogs](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FCross-Matching-Methods-for-Astronomical-Catalogs)\n36. [Named Entity Recognition using BiDirectional LSTMs](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FNamed-Entity-Recognition-)\n37. [Image Recognition App using Tflite and Flutter](https:\u002F\u002Fgithub.com\u002FSauravMaheshkar\u002FFlutter_Image-Recognition)\n\n## Researchers\n\n1. [Aaron Courville](http:\u002F\u002Faaroncourville.wordpress.com)\n2. [Abdel-rahman Mohamed](http:\u002F\u002Fwww.cs.toronto.edu\u002F~asamir\u002F)\n3. [Adam Coates](http:\u002F\u002Fcs.stanford.edu\u002F~acoates\u002F)\n4. [Alex Acero](http:\u002F\u002Fresearch.microsoft.com\u002Fen-us\u002Fpeople\u002Falexac\u002F)\n5. [ Alex Krizhevsky ](http:\u002F\u002Fwww.cs.utoronto.ca\u002F~kriz\u002Findex.html)\n6. [ Alexander Ilin ](http:\u002F\u002Fusers.ics.aalto.fi\u002Falexilin\u002F)\n7. [ Amos Storkey ](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Famos\u002F)\n8. 
[ Andrej Karpathy ](https:\u002F\u002Fkarpathy.ai\u002F)\n9. [ Andrew M. Saxe ](http:\u002F\u002Fwww.stanford.edu\u002F~asaxe\u002F)\n10. [ Andrew Ng ](http:\u002F\u002Fwww.cs.stanford.edu\u002Fpeople\u002Fang\u002F)\n11. [ Andrew W. Senior ](http:\u002F\u002Fresearch.google.com\u002Fpubs\u002Fauthor37792.html)\n12. [ Andriy Mnih ](http:\u002F\u002Fwww.gatsby.ucl.ac.uk\u002F~amnih\u002F)\n13. [ Ayse Naz Erkan ](http:\u002F\u002Fwww.cs.nyu.edu\u002F~naz\u002F)\n14. [ Benjamin Schrauwen ](http:\u002F\u002Freslab.elis.ugent.be\u002Fbenjamin)\n15. [ Bernardete Ribeiro ](https:\u002F\u002Fwww.cisuc.uc.pt\u002Fpeople\u002Fshow\u002F2020)\n16. [ Bo David Chen ](http:\u002F\u002Fvision.caltech.edu\u002F~bchen3\u002FSite\u002FBo_David_Chen.html)\n17. [ Boureau Y-Lan ](http:\u002F\u002Fcs.nyu.edu\u002F~ylan\u002F)\n18. [ Brian Kingsbury ](http:\u002F\u002Fresearcher.watson.ibm.com\u002Fresearcher\u002Fview.php?person=us-bedk)\n19. [ Christopher Manning ](http:\u002F\u002Fnlp.stanford.edu\u002F~manning\u002F)\n20. [ Clement Farabet ](http:\u002F\u002Fwww.clement.farabet.net\u002F)\n21. [ Dan Claudiu Cireșan ](http:\u002F\u002Fwww.idsia.ch\u002F~ciresan\u002F)\n22. [ David Reichert ](http:\u002F\u002Fserre-lab.clps.brown.edu\u002Fperson\u002Fdavid-reichert\u002F)\n23. [ Derek Rose ](http:\u002F\u002Fmil.engr.utk.edu\u002Fnmil\u002Fmember\u002F5.html)\n24. [ Dong Yu ](http:\u002F\u002Fresearch.microsoft.com\u002Fen-us\u002Fpeople\u002Fdongyu\u002Fdefault.aspx)\n25. [ Drausin Wulsin ](http:\u002F\u002Fwww.seas.upenn.edu\u002F~wulsin\u002F)\n26. [ Erik M. Schmidt ](http:\u002F\u002Fmusic.ece.drexel.edu\u002Fpeople\u002Feschmidt)\n27. [ Eugenio Culurciello ](https:\u002F\u002Fengineering.purdue.edu\u002FBME\u002FPeople\u002FviewPersonById?resource_id=71333)\n28. [ Frank Seide ](http:\u002F\u002Fresearch.microsoft.com\u002Fen-us\u002Fpeople\u002Ffseide\u002F)\n29. [ Galen Andrew ](http:\u002F\u002Fhomes.cs.washington.edu\u002F~galen\u002F)\n30. 
[ Geoffrey Hinton ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~hinton\u002F)\n31. [ George Dahl ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~gdahl\u002F)\n32. [ Graham Taylor ](http:\u002F\u002Fwww.uoguelph.ca\u002F~gwtaylor\u002F)\n33. [ Grégoire Montavon ](http:\u002F\u002Fgregoire.montavon.name\u002F)\n34. [ Guido Francisco Montúfar ](http:\u002F\u002Fpersonal-homepages.mis.mpg.de\u002Fmontufar\u002F)\n35. [ Guillaume Desjardins ](http:\u002F\u002Fbrainlogging.wordpress.com\u002F)\n36. [ Hannes Schulz ](http:\u002F\u002Fwww.ais.uni-bonn.de\u002F~schulz\u002F)\n37. [ Hélène Paugam-Moisy ](http:\u002F\u002Fwww.lri.fr\u002F~hpaugam\u002F)\n38. [ Honglak Lee ](http:\u002F\u002Fweb.eecs.umich.edu\u002F~honglak\u002F)\n39. [ Hugo Larochelle ](http:\u002F\u002Fwww.dmi.usherb.ca\u002F~larocheh\u002Findex_en.html)\n40. [ Ilya Sutskever ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~ilya\u002F)\n41. [ Itamar Arel ](http:\u002F\u002Fmil.engr.utk.edu\u002Fnmil\u002Fmember\u002F2.html)\n42. [ James Martens ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~jmartens\u002F)\n43. [ Jason Morton ](http:\u002F\u002Fwww.jasonmorton.com\u002F)\n44. [ Jason Weston ](http:\u002F\u002Fwww.thespermwhale.com\u002Fjaseweston\u002F)\n45. [ Jeff Dean ](http:\u002F\u002Fresearch.google.com\u002Fpubs\u002Fjeff.html)\n46. [ Jiquan Mgiam ](http:\u002F\u002Fcs.stanford.edu\u002F~jngiam\u002F)\n47. [ Joseph Turian ](http:\u002F\u002Fwww-etud.iro.umontreal.ca\u002F~turian\u002F)\n48. [ Joshua Matthew Susskind ](http:\u002F\u002Faclab.ca\u002Fusers\u002Fjosh\u002Findex.html)\n49. [ Jürgen Schmidhuber ](http:\u002F\u002Fwww.idsia.ch\u002F~juergen\u002F)\n50. [ Justin A. Blanco ](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fblancousna\u002F)\n51. [ Koray Kavukcuoglu ](http:\u002F\u002Fkoray.kavukcuoglu.org\u002F)\n52. [ KyungHyun Cho ](http:\u002F\u002Fusers.ics.aalto.fi\u002Fkcho\u002F)\n53. [ Li Deng ](http:\u002F\u002Fresearch.microsoft.com\u002Fen-us\u002Fpeople\u002Fdeng\u002F)\n54. 
[ Lucas Theis ](http:\u002F\u002Fwww.kyb.tuebingen.mpg.de\u002Fnc\u002Femployee\u002Fdetails\u002Flucas.html)\n55. [ Ludovic Arnold ](http:\u002F\u002Fludovicarnold.altervista.org\u002Fhome\u002F)\n56. [ Marc'Aurelio Ranzato ](http:\u002F\u002Fwww.cs.nyu.edu\u002F~ranzato\u002F)\n57. [ Martin Längkvist ](http:\u002F\u002Faass.oru.se\u002F~mlt\u002F)\n58. [ Misha Denil ](http:\u002F\u002Fmdenil.com\u002F)\n59. [ Mohammad Norouzi ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~norouzi\u002F)\n60. [ Nando de Freitas ](http:\u002F\u002Fwww.cs.ubc.ca\u002F~nando\u002F)\n61. [ Navdeep Jaitly ](http:\u002F\u002Fwww.cs.utoronto.ca\u002F~ndjaitly\u002F)\n62. [ Nicolas Le Roux ](http:\u002F\u002Fnicolas.le-roux.name\u002F)\n63. [ Nitish Srivastava ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~nitish\u002F)\n64. [ Noel Lopes ](https:\u002F\u002Fwww.cisuc.uc.pt\u002Fpeople\u002Fshow\u002F2028)\n65. [ Oriol Vinyals ](http:\u002F\u002Fwww.cs.berkeley.edu\u002F~vinyals\u002F)\n66. [ Pascal Vincent ](http:\u002F\u002Fwww.iro.umontreal.ca\u002F~vincentp)\n67. [ Patrick Nguyen ](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fdrpngx\u002F)\n68. [ Pedro Domingos ](http:\u002F\u002Fhomes.cs.washington.edu\u002F~pedrod\u002F)\n69. [ Peggy Series ](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fpseries\u002F)\n70. [ Pierre Sermanet ](http:\u002F\u002Fcs.nyu.edu\u002F~sermanet)\n71. [ Piotr Mirowski ](http:\u002F\u002Fwww.cs.nyu.edu\u002F~mirowski\u002F)\n72. [ Quoc V. Le ](http:\u002F\u002Fai.stanford.edu\u002F~quocle\u002F)\n73. [ Reinhold Scherer ](http:\u002F\u002Fbci.tugraz.at\u002Fscherer\u002F)\n74. [ Richard Socher ](http:\u002F\u002Fwww.socher.org\u002F)\n75. [ Rob Fergus ](http:\u002F\u002Fcs.nyu.edu\u002F~fergus\u002Fpmwiki\u002Fpmwiki.php)\n76. [ Robert Coop ](http:\u002F\u002Fmil.engr.utk.edu\u002Fnmil\u002Fmember\u002F19.html)\n77. [ Robert Gens ](http:\u002F\u002Fhomes.cs.washington.edu\u002F~rcg\u002F)\n78. 
[ Roger Grosse ](http:\u002F\u002Fpeople.csail.mit.edu\u002Frgrosse\u002F)\n79. [ Ronan Collobert ](http:\u002F\u002Fronan.collobert.com\u002F)\n80. [ Ruslan Salakhutdinov ](http:\u002F\u002Fwww.utstat.toronto.edu\u002F~rsalakhu\u002F)\n81. [ Sebastian Gerwinn ](http:\u002F\u002Fwww.kyb.tuebingen.mpg.de\u002Fnc\u002Femployee\u002Fdetails\u002Fsgerwinn.html)\n82. [ Stéphane Mallat ](http:\u002F\u002Fwww.cmap.polytechnique.fr\u002F~mallat\u002F)\n83. [ Sven Behnke ](http:\u002F\u002Fwww.ais.uni-bonn.de\u002Fbehnke\u002F)\n84. [ Tapani Raiko ](http:\u002F\u002Fusers.ics.aalto.fi\u002Fpraiko\u002F)\n85. [ Tara Sainath ](https:\u002F\u002Fsites.google.com\u002Fsite\u002Ftsainath\u002F)\n86. [ Tijmen Tieleman ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~tijmen\u002F)\n87. [ Tom Karnowski ](http:\u002F\u002Fmil.engr.utk.edu\u002Fnmil\u002Fmember\u002F36.html)\n88. [ Tomáš Mikolov ](https:\u002F\u002Fresearch.facebook.com\u002Ftomas-mikolov)\n89. [ Ueli Meier ](http:\u002F\u002Fwww.idsia.ch\u002F~meier\u002F)\n90. [ Vincent Vanhoucke ](http:\u002F\u002Fvincent.vanhoucke.com)\n91. [ Volodymyr Mnih ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~vmnih\u002F)\n92. [ Yann LeCun ](http:\u002F\u002Fyann.lecun.com\u002F)\n93. [ Yichuan Tang ](http:\u002F\u002Fwww.cs.toronto.edu\u002F~tang\u002F)\n94. [ Yoshua Bengio ](http:\u002F\u002Fwww.iro.umontreal.ca\u002F~bengioy\u002Fyoshua_en\u002Findex.html)\n95. [ Yotaro Kubo ](http:\u002F\u002Fyota.ro\u002F)\n96. [ Youzhi (Will) Zou ](http:\u002F\u002Fai.stanford.edu\u002F~wzou)\n97. [ Fei-Fei Li ](http:\u002F\u002Fvision.stanford.edu\u002Ffeifeili)\n98. [ Ian Goodfellow ](https:\u002F\u002Fresearch.google.com\u002Fpubs\u002F105214.html)\n99. [ Robert Laganière ](http:\u002F\u002Fwww.site.uottawa.ca\u002F~laganier\u002F)\n100. [Merve Ayyüce Kızrak](http:\u002F\u002Fwww.ayyucekizrak.com\u002F)\n\n\n### Websites\n\n1.  [deeplearning.net](http:\u002F\u002Fdeeplearning.net\u002F)\n2.  
[deeplearning.stanford.edu](http:\u002F\u002Fdeeplearning.stanford.edu\u002F)\n3.  [nlp.stanford.edu](http:\u002F\u002Fnlp.stanford.edu\u002F)\n4.  [ai-junkie.com](http:\u002F\u002Fwww.ai-junkie.com\u002Fann\u002Fevolved\u002Fnnt1.html)\n5.  [cs.brown.edu\u002Fresearch\u002Fai](http:\u002F\u002Fcs.brown.edu\u002Fresearch\u002Fai\u002F)\n6.  [eecs.umich.edu\u002Fai](http:\u002F\u002Fwww.eecs.umich.edu\u002Fai\u002F)\n7.  [cs.utexas.edu\u002Fusers\u002Fai-lab](http:\u002F\u002Fwww.cs.utexas.edu\u002Fusers\u002Fai-lab\u002F)\n8.  [cs.washington.edu\u002Fresearch\u002Fai](http:\u002F\u002Fwww.cs.washington.edu\u002Fresearch\u002Fai\u002F)\n9.  [aiai.ed.ac.uk](http:\u002F\u002Fwww.aiai.ed.ac.uk\u002F)\n10.  [www-aig.jpl.nasa.gov](http:\u002F\u002Fwww-aig.jpl.nasa.gov\u002F)\n11.  [csail.mit.edu](http:\u002F\u002Fwww.csail.mit.edu\u002F)\n12.  [cgi.cse.unsw.edu.au\u002F~aishare](http:\u002F\u002Fcgi.cse.unsw.edu.au\u002F~aishare\u002F)\n13.  [cs.rochester.edu\u002Fresearch\u002Fai](http:\u002F\u002Fwww.cs.rochester.edu\u002Fresearch\u002Fai\u002F)\n14.  [ai.sri.com](http:\u002F\u002Fwww.ai.sri.com\u002F)\n15.  [isi.edu\u002FAI\u002Fisd.htm](http:\u002F\u002Fwww.isi.edu\u002FAI\u002Fisd.htm)\n16.  [nrl.navy.mil\u002Fitd\u002Faic](http:\u002F\u002Fwww.nrl.navy.mil\u002Fitd\u002Faic\u002F)\n17.  [hips.seas.harvard.edu](http:\u002F\u002Fhips.seas.harvard.edu\u002F)\n18.  [AI Weekly](http:\u002F\u002Faiweekly.co)\n19.  [stat.ucla.edu](http:\u002F\u002Fstatistics.ucla.edu\u002F)\n20.  [deeplearning.cs.toronto.edu](http:\u002F\u002Fdeeplearning.cs.toronto.edu\u002Fi2t)\n21.  [jeffdonahue.com\u002Flrcn\u002F](http:\u002F\u002Fjeffdonahue.com\u002Flrcn\u002F)\n22.  [visualqa.org](http:\u002F\u002Fwww.visualqa.org\u002F)\n23.  [www.mpi-inf.mpg.de\u002Fdepartments\u002Fcomputer-vision...](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Fdepartments\u002Fcomputer-vision-and-multimodal-computing\u002F)\n24.  [Deep Learning News](http:\u002F\u002Fnews.startup.ml\u002F)\n25.  
[Machine Learning is Fun! Adam Geitgey's Blog](https:\u002F\u002Fmedium.com\u002F@ageitgey\u002F)\n26.  [Guide to Machine Learning](http:\u002F\u002Fyerevann.com\u002Fa-guide-to-deep-learning\u002F)\n27.  [Deep Learning for Beginners](https:\u002F\u002Fspandan-madan.github.io\u002FDeepLearningProject\u002F)\n28.  [Machine Learning Mastery blog](https:\u002F\u002Fmachinelearningmastery.com\u002Fblog\u002F)\n29.  [ML Compiled](https:\u002F\u002Fml-compiled.readthedocs.io\u002Fen\u002Flatest\u002F)\n30.  [Programming Community Curated Resources](https:\u002F\u002Fhackr.io\u002Ftutorials\u002Flearn-artificial-intelligence-ai)\n31.  [A Beginner's Guide To Understanding Convolutional Neural Networks](https:\u002F\u002Fadeshpande3.github.io\u002FA-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks\u002F)\n32.  [ahmedbesbes.com](http:\u002F\u002Fahmedbesbes.com)\n33.  [amitness.com](https:\u002F\u002Famitness.com\u002F)\n34.  [AI Summer](https:\u002F\u002Ftheaisummer.com\u002F)\n35.  [AI Hub - supported by AAAI, NeurIPS](https:\u002F\u002Faihub.org\u002F)\n36.  [CatalyzeX: Machine Learning Hub for Builders and Makers](https:\u002F\u002Fwww.catalyzeX.com)\n37.  [The Epic Code](https:\u002F\u002Ftheepiccode.com\u002F)\n38.  [all AI news](https:\u002F\u002Fallainews.com\u002F)\n\n### Datasets\n\n1.  [MNIST](http:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fmnist\u002F) Handwritten digits\n2.  [Google House Numbers](http:\u002F\u002Fufldl.stanford.edu\u002Fhousenumbers\u002F) from street view\n3.  [CIFAR-10 and CIFAR-100](http:\u002F\u002Fwww.cs.toronto.edu\u002F~kriz\u002Fcifar.html)\n4.  [IMAGENET](http:\u002F\u002Fwww.image-net.org\u002F)\n5.  [Tiny Images](http:\u002F\u002Fgroups.csail.mit.edu\u002Fvision\u002FTinyImages\u002F) 80 Million tiny images6.  \n6.  [Flickr Data](https:\u002F\u002Fyahooresearch.tumblr.com\u002Fpost\u002F89783581601\u002Fone-hundred-million-creative-commons-flickr-images) 100 Million Yahoo dataset\n7.  
[Berkeley Segmentation Dataset 500](http:\u002F\u002Fwww.eecs.berkeley.edu\u002FResearch\u002FProjects\u002FCS\u002Fvision\u002Fbsds\u002F)\n8.  [UC Irvine Machine Learning Repository](http:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002F)\n9.  [Flickr 8k](http:\u002F\u002Fnlp.cs.illinois.edu\u002FHockenmaierGroup\u002FFraming_Image_Description\u002FKCCA.html)\n10. [Flickr 30k](http:\u002F\u002Fshannon.cs.illinois.edu\u002FDenotationGraph\u002F)\n11. [Microsoft COCO](http:\u002F\u002Fmscoco.org\u002Fhome\u002F)\n12. [VQA](http:\u002F\u002Fwww.visualqa.org\u002F)\n13. [Image QA](http:\u002F\u002Fwww.cs.toronto.edu\u002F~mren\u002Fimageqa\u002Fdata\u002Fcocoqa\u002F)\n14. [AT&T Laboratories Cambridge face database](http:\u002F\u002Fwww.uk.research.att.com\u002Ffacedatabase.html)\n15. [AVHRR Pathfinder](http:\u002F\u002Fxtreme.gsfc.nasa.gov)\n16. [Air Freight](http:\u002F\u002Fwww.anc.ed.ac.uk\u002F~amos\u002Fafreightdata.html) - The Air Freight data set is a ray-traced image sequence along with ground truth segmentation based on textural characteristics. (455 images + GT, each 160x120 pixels). (Formats: PNG)  \n17. [Amsterdam Library of Object Images](http:\u002F\u002Fwww.science.uva.nl\u002F~aloi\u002F) - ALOI is a color image collection of one-thousand small objects, recorded for scientific purposes. In order to capture the sensory variation in object recordings, we systematically varied viewing angle, illumination angle, and illumination color for each object, and additionally captured wide-baseline stereo images. We recorded over a hundred images of each object, yielding a total of 110,250 images for the collection. (Formats: png)\n18. [Annotated face, hand, cardiac & meat images](http:\u002F\u002Fwww.imm.dtu.dk\u002F~aam\u002F) - Most images & annotations are supplemented by various ASM\u002FAAM analyses using the AAM-API. (Formats: bmp,asf)\n19. [Image Analysis and Computer Graphics](http:\u002F\u002Fwww.imm.dtu.dk\u002Fimage\u002F)  \n21. 
[Brown University Stimuli](http:\u002F\u002Fwww.cog.brown.edu\u002F~tarr\u002Fstimuli.html) - A variety of datasets including geons, objects, and \"greebles\". Good for testing recognition algorithms. (Formats: pict)\n22. [CAVIAR video sequences of mall and public space behavior](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Frbf\u002FCAVIARDATA1\u002F) - 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification (Formats: MPEG2 & JPEG)\n23. [Machine Vision Unit](http:\u002F\u002Fwww.ipab.inf.ed.ac.uk\u002Fmvu\u002F)\n25. [CCITT Fax standard images](http:\u002F\u002Fwww.cs.waikato.ac.nz\u002F~singlis\u002Fccitt.html) - 8 images (Formats: gif)\n26. [CMU CIL's Stereo Data with Ground Truth](cil-ster.html) - 3 sets of 11 images, including color tiff images with spectroradiometry (Formats: gif, tiff)\n27. [CMU PIE Database](http:\u002F\u002Fwww.ri.cmu.edu\u002Fprojects\u002Fproject_418.html) - A database of 41,368 face images of 68 people captured under 13 poses, 43 illuminations conditions, and with 4 different expressions.\n28. [CMU VASC Image Database](http:\u002F\u002Fwww.ius.cs.cmu.edu\u002Fidb\u002F) - Images, sequences, stereo pairs (thousands of images) (Formats: Sun Rasterimage)\n29. [Caltech Image Database](http:\u002F\u002Fwww.vision.caltech.edu\u002Fhtml-files\u002Farchive.html) - about 20 images - mostly top-down views of small objects and toys. (Formats: GIF)\n30. [Columbia-Utrecht Reflectance and Texture Database](http:\u002F\u002Fwww.cs.columbia.edu\u002FCAVE\u002Fcuret\u002F) - Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. (Formats: bmp)\n31. [Computational Colour Constancy Data](http:\u002F\u002Fwww.cs.sfu.ca\u002F~colour\u002Fdata\u002Findex.html) - A dataset oriented towards computational color constancy, but useful for computer vision in general. 
It includes synthetic data, camera sensor data, and over 700 images. (Formats: tiff)\n32. [Computational Vision Lab](http:\u002F\u002Fwww.cs.sfu.ca\u002F~colour\u002F)\n34. [Content-based image retrieval database](http:\u002F\u002Fwww.cs.washington.edu\u002Fresearch\u002Fimagedatabase\u002Fgroundtruth\u002F) - 11 sets of color images for testing algorithms for content-based retrieval. Most sets have a description file with names of objects in each image. (Formats: jpg)\n35. [Efficient Content-based Retrieval Group](http:\u002F\u002Fwww.cs.washington.edu\u002Fresearch\u002Fimagedatabase\u002F)\n37. [Densely Sampled View Spheres](http:\u002F\u002Fls7-www.cs.uni-dortmund.de\u002F~peters\u002Fpages\u002Fresearch\u002Fmodeladaptsys\u002Fmodeladaptsys_vba_rov.html) - Densely sampled view spheres - upper half of the view sphere of two toy objects with 2500 images each. (Formats: tiff)\n38. [Computer Science VII (Graphical Systems)](http:\u002F\u002Fls7-www.cs.uni-dortmund.de\u002F)\n40. [Digital Embryos](https:\u002F\u002Fweb-beta.archive.org\u002Fweb\u002F20011216051535\u002Fvision.psych.umn.edu\u002Fwww\u002Fkersten-lab\u002Fdemos\u002Fdigitalembryo.html) - Digital embryos are novel objects which may be used to develop and test object recognition systems. They have an organic appearance. (Formats: various formats are available on request)\n41. [Univerity of Minnesota Vision Lab](http:\u002F\u002Fvision.psych.umn.edu\u002Fusers\u002Fkersten\u002F\u002Fkersten-lab\u002Fkersten-lab.html) \n42. [El Salvador Atlas of Gastrointestinal VideoEndoscopy](http:\u002F\u002Fwww.gastrointestinalatlas.com) - Images and Videos of his-res of studies taken from Gastrointestinal Video endoscopy. (Formats: jpg, mpg, gif)\n43. [FG-NET Facial Aging Database](http:\u002F\u002Fsting.cycollege.ac.cy\u002F~alanitis\u002Ffgnetaging\u002Findex.htm) - Database contains 1002 face images showing subjects at different ages. (Formats: jpg)\n44. 
[FVC2000 Fingerprint Databases](http:\u002F\u002Fbias.csr.unibo.it\u002Ffvc2000\u002F) - FVC2000 is the First International Competition for Fingerprint Verification Algorithms. Four fingerprint databases constitute the FVC2000 benchmark (3520 fingerprints in all).\n45. [Biometric Systems Lab](http:\u002F\u002Fbiolab.csr.unibo.it\u002Fhome.asp) - University of Bologna\n46. [Face and Gesture images and image sequences](http:\u002F\u002Fwww.fg-net.org) - Several image datasets of faces and gestures that are ground truth annotated for benchmarking\n47. [German Fingerspelling Database](http:\u002F\u002Fwww-i6.informatik.rwth-aachen.de\u002F~dreuw\u002Fdatabase.html) - The database contains 35 gestures and consists of 1400 image sequences that contain gestures of 20 different persons recorded under non-uniform daylight lighting conditions. (Formats: mpg,jpg)  \n48. [Language Processing and Pattern Recognition](http:\u002F\u002Fwww-i6.informatik.rwth-aachen.de\u002F)\n50. [Groningen Natural Image Database](http:\u002F\u002Fhlab.phys.rug.nl\u002Farchive.html) - 4000+ 1536x1024 (16 bit) calibrated outdoor images (Formats: homebrew)\n51. [ICG Testhouse sequence](http:\u002F\u002Fwww.icg.tu-graz.ac.at\u002F~schindler\u002FData) -  2 turntable sequences from different viewing heights, 36 images each, resolution 1000x750, color (Formats: PPM)\n52. [Institute of Computer Graphics and Vision](http:\u002F\u002Fwww.icg.tu-graz.ac.at)\n54. [IEN Image Library](http:\u002F\u002Fwww.ien.it\u002Fis\u002Fvislib\u002F) - 1000+ images, mostly outdoor sequences (Formats: raw, ppm)  \n55. [INRIA's Syntim images database](http:\u002F\u002Fwww-rocq.inria.fr\u002F~tarel\u002Fsyntim\u002Fimages.html) - 15 color image of simple objects (Formats: gif)\n56. [INRIA](http:\u002F\u002Fwww.inria.fr\u002F)\n57. [INRIA's Syntim stereo databases](http:\u002F\u002Fwww-rocq.inria.fr\u002F~tarel\u002Fsyntim\u002Fpaires.html) - 34 calibrated color stereo pairs (Formats: gif)\n58. 
[Image Analysis Laboratory](http:\u002F\u002Fwww.ece.ncsu.edu\u002Fimaging\u002FArchives\u002FImageDataBase\u002Findex.html) - Images obtained from a variety of imaging modalities -- raw CFA images, range images and a host of \"medical images\". (Formats: homebrew)\n59. [Image Analysis Laboratory](http:\u002F\u002Fwww.ece.ncsu.edu\u002Fimaging)\n61. [Image Database](http:\u002F\u002Fwww.prip.tuwien.ac.at\u002Fprip\u002Fimage.html) - An image database including some textures  \n62. [JAFFE Facial Expression Image Database](http:\u002F\u002Fwww.mis.atr.co.jp\u002F~mlyons\u002Fjaffe.html) - The JAFFE database consists of 213 images of Japanese female subjects posing 6 basic facial expressions as well as a neutral pose. Ratings on emotion adjectives are also available, free of charge, for research purposes. (Formats: TIFF Grayscale images.)\n63. [ATR Research, Kyoto, Japan](http:\u002F\u002Fwww.mic.atr.co.jp\u002F)\n64. [JISCT Stereo Evaluation](ftp:\u002F\u002Fftp.vislist.com\u002FIMAGERY\u002FJISCT\u002F) - 44 image pairs. These data have been used in an evaluation of stereo analysis, as described in the April 1993 ARPA Image Understanding Workshop paper ``The JISCT Stereo Evaluation'' by R.C.Bolles, H.H.Baker, and M.J.Hannah, 263--274 (Formats: SSI)\n65. [MIT Vision Texture](https:\u002F\u002Fvismod.media.mit.edu\u002Fvismod\u002Fimagery\u002FVisionTexture\u002Fvistex.html) - Image archive (100+ images) (Formats: ppm)\n66. [MIT face images and more](ftp:\u002F\u002Fwhitechapel.media.mit.edu\u002Fpub\u002Fimages) - hundreds of images (Formats: homebrew)\n67. [Machine Vision](http:\u002F\u002Fvision.cse.psu.edu\u002Fbook\u002Ftestbed\u002Fimages\u002F) - Images from the textbook by Jain, Kasturi, Schunck (20+ images) (Formats: GIF TIFF)\n68. [Mammography Image Databases](http:\u002F\u002Fmarathon.csee.usf.edu\u002FMammography\u002FDatabase.html) - 100 or more images of mammograms with ground truth. 
Additional images available by request, and links to several other mammography databases are provided. (Formats: homebrew)\n69. [ftp:\u002F\u002Fftp.cps.msu.edu\u002Fpub\u002Fprip](ftp:\u002F\u002Fftp.cps.msu.edu\u002Fpub\u002Fprip) - many images (Formats: unknown)\n70. [Middlebury Stereo Data Sets with Ground Truth](http:\u002F\u002Fwww.middlebury.edu\u002Fstereo\u002Fdata.html) - Six multi-frame stereo data sets of scenes containing planar regions. Each data set contains 9 color images and subpixel-accuracy ground-truth data. (Formats: ppm)\n71. [Middlebury Stereo Vision Research Page](http:\u002F\u002Fwww.middlebury.edu\u002Fstereo) - Middlebury College\n72. [Modis Airborne simulator, Gallery and data set](http:\u002F\u002Fltpwww.gsfc.nasa.gov\u002FMODIS\u002FMAS\u002F) - High Altitude Imagery from around the world for environmental modeling in support of NASA EOS program (Formats: JPG and HDF)\n73. [NIST Fingerprint and handwriting](ftp:\u002F\u002Fsequoyah.ncsl.nist.gov\u002Fpub\u002Fdatabases\u002Fdata) - datasets - thousands of images (Formats: unknown)\n74. [NIST Fingerprint data](ftp:\u002F\u002Fftp.cs.columbia.edu\u002Fjpeg\u002Fother\u002Fuuencoded) - compressed multipart uuencoded tar file\n75. [NLM HyperDoc Visible Human Project](http:\u002F\u002Fwww.nlm.nih.gov\u002Fresearch\u002Fvisible\u002Fvisible_human.html) - Color, CAT and MRI image samples - over 30 images (Formats: jpeg)\n76. [National Design Repository](http:\u002F\u002Fwww.designrepository.org) - Over 55,000 3D CAD and solid models of (mostly) mechanical\u002Fmachined engineering designs. (Formats: gif,vrml,wrl,stp,sat) \n77. [Geometric & Intelligent Computing Laboratory](http:\u002F\u002Fgicl.mcs.drexel.edu)\n79. [OSU (MSU) 3D Object Model Database](http:\u002F\u002Feewww.eng.ohio-state.edu\u002F~flynn\u002F3DDB\u002FModels\u002F) - several sets of 3D object models collected over several years to use in object recognition research (Formats: homebrew, vrml)\n80. 
[OSU (MSU\u002FWSU) Range Image Database](http:\u002F\u002Feewww.eng.ohio-state.edu\u002F~flynn\u002F3DDB\u002FRID\u002F) - Hundreds of real and synthetic images (Formats: gif, homebrew)\n81. [OSU\u002FSAMPL Database: Range Images, 3D Models, Stills, Motion Sequences](http:\u002F\u002Fsampl.eng.ohio-state.edu\u002F~sampl\u002Fdatabase.htm) - Over 1000 range images, 3D object models, still images and motion sequences (Formats: gif, ppm, vrml, homebrew)\n82. [Signal Analysis and Machine Perception Laboratory](http:\u002F\u002Fsampl.eng.ohio-state.edu)\n84. [Otago Optical Flow Evaluation Sequences](http:\u002F\u002Fwww.cs.otago.ac.nz\u002Fresearch\u002Fvision\u002FResearch\u002FOpticalFlow\u002Fopticalflow.html) - Synthetic and real sequences with machine-readable ground truth optical flow fields, plus tools to generate ground truth for new sequences. (Formats: ppm,tif,homebrew)\n85. [Vision Research Group](http:\u002F\u002Fwww.cs.otago.ac.nz\u002Fresearch\u002Fvision\u002Findex.html)\n87. [ftp:\u002F\u002Fftp.limsi.fr\u002Fpub\u002Fquenot\u002Fopflow\u002Ftestdata\u002Fpiv\u002F](ftp:\u002F\u002Fftp.limsi.fr\u002Fpub\u002Fquenot\u002Fopflow\u002Ftestdata\u002Fpiv\u002F) - Real and synthetic image sequences used for testing a Particle Image Velocimetry application. These images may be used for the test of optical flow and image matching algorithms. (Formats: pgm (raw))\n88. [LIMSI-CNRS\u002FCHM\u002FIMM\u002Fvision](http:\u002F\u002Fwww.limsi.fr\u002FRecherche\u002FIMM\u002FPageIMM.html)\n89. [LIMSI-CNRS](http:\u002F\u002Fwww.limsi.fr\u002F)\n90. [Photometric 3D Surface Texture Database](http:\u002F\u002Fwww.taurusstudio.net\u002Fresearch\u002Fpmtexdb\u002Findex.htm) - This is the first 3D texture database which provides both full real surface rotations and registered photometric stereo data (30 textures, 1680 images). (Formats: TIFF)\n91. 
[SEQUENCES FOR OPTICAL FLOW ANALYSIS (SOFA)](http:\u002F\u002Fwww.cee.hw.ac.uk\u002F~mtc\u002Fsofa) - 9 synthetic sequences designed for testing motion analysis applications, including full ground truth of motion and camera parameters. (Formats: gif)\n92. [Computer Vision Group](http:\u002F\u002Fwww.cee.hw.ac.uk\u002F~mtc\u002Fresearch.html)\n94. [Sequences for Flow Based Reconstruction](http:\u002F\u002Fwww.nada.kth.se\u002F~zucch\u002FCAMERA\u002FPUB\u002Fseq.html) - synthetic sequence for testing structure from motion algorithms (Formats: pgm)\n95. [Stereo Images with Ground Truth Disparity and Occlusion](http:\u002F\u002Fwww-dbv.cs.uni-bonn.de\u002Fstereo_data\u002F) - a small set of synthetic images of a hallway with varying amounts of noise added. Use these images to benchmark your stereo algorithm. (Formats: raw, viff (khoros), or tiff)\n96. [Stuttgart Range Image Database](http:\u002F\u002Frange.informatik.uni-stuttgart.de) - A collection of synthetic range images taken from high-resolution polygonal models available on the web (Formats: homebrew)\n97. [Department Image Understanding](http:\u002F\u002Fwww.informatik.uni-stuttgart.de\u002Fipvr\u002Fbv\u002Fbv_home_engl.html)\n99. [The AR Face Database](http:\u002F\u002Fwww2.ece.ohio-state.edu\u002F~aleix\u002FARdatabase.html) - Contains over 4,000 color images corresponding to 126 people's faces (70 men and 56 women). Frontal views with variations in facial expressions, illumination, and occlusions. (Formats: RAW (RGB 24-bit))\n100. [Purdue Robot Vision Lab](http:\u002F\u002Frvl.www.ecn.purdue.edu\u002FRVL\u002F)\n101. [The MIT-CSAIL Database of Objects and Scenes](http:\u002F\u002Fweb.mit.edu\u002Ftorralba\u002Fwww\u002Fdatabase.html) - Database for testing multiclass object detection and scene recognition algorithms. Over 72,000 images with 2873 annotated frames. More than 50 annotated object classes. (Formats: jpg)\n102. 
[The RVL SPEC-DB (SPECularity DataBase)](http:\u002F\u002Frvl1.ecn.purdue.edu\u002FRVL\u002Fspecularity_database\u002F) - A collection of over 300 real images of 100 objects taken under three different illuminaiton conditions (Diffuse\u002FAmbient\u002FDirected). -- Use these images to test algorithms for detecting and compensating specular highlights in color images. (Formats: TIFF )\n103. [Robot Vision Laboratory](http:\u002F\u002Frvl1.ecn.purdue.edu\u002FRVL\u002F)\n105. [The Xm2vts database](http:\u002F\u002Fxm2vtsdb.ee.surrey.ac.uk) - The XM2VTSDB contains four digital recordings of 295 people taken over a period of four months. This database contains both image and video data of faces.\n106. [Centre for Vision, Speech and Signal Processing](http:\u002F\u002Fwww.ee.surrey.ac.uk\u002FResearch\u002FCVSSP)\n107. [Traffic Image Sequences and 'Marbled Block' Sequence](http:\u002F\u002Fi21www.ira.uka.de\u002Fimage_sequences) - thousands of frames of digitized traffic image sequences as well as the 'Marbled Block' sequence (grayscale images) (Formats: GIF)\n108. [IAKS\u002FKOGS](http:\u002F\u002Fi21www.ira.uka.de)\n110. [U Bern Face images](ftp:\u002F\u002Fftp.iam.unibe.ch\u002Fpub\u002FImages\u002FFaceImages) - hundreds of images (Formats: Sun rasterfile)\n111. [U Michigan textures](ftp:\u002F\u002Ffreebie.engin.umich.edu\u002Fpub\u002Fmisc\u002Ftextures) (Formats: compressed raw)\n112. [U Oulu wood and knots database](http:\u002F\u002Fwww.ee.oulu.fi\u002F~olli\u002FProjects\u002FLumber.Grading.html) - Includes classifications - 1000+ color images (Formats: ppm)\n113. [UCID - an Uncompressed Colour Image Database](http:\u002F\u002Fvision.doc.ntu.ac.uk\u002Fdatasets\u002FUCID\u002Fucid.html) - a benchmark database for image retrieval with predefined ground truth. (Formats: tiff)\n115. [UMass Vision Image Archive](http:\u002F\u002Fvis-www.cs.umass.edu\u002F~vislib\u002F) - Large image database with aerial, space, stereo, medical images and more. 
(Formats: homebrew)\n116. [UNC's 3D image database](ftp:\u002F\u002Fsunsite.unc.edu\u002Fpub\u002Facademic\u002Fcomputer-science\u002Fvirtual-reality\u002F3d) - many images (Formats: GIF)\n117. [USF Range Image Data with Segmentation Ground Truth](http:\u002F\u002Fmarathon.csee.usf.edu\u002Frange\u002Fseg-comp\u002FSegComp.html) - 80 image sets (Formats: Sun rasterimage)\n118. [University of Oulu Physics-based Face Database](http:\u002F\u002Fwww.ee.oulu.fi\u002Fresearch\u002Fimag\u002Fcolor\u002Fpbfd.html) - contains color images of faces under different illuminants and camera calibration conditions as well as skin spectral reflectance measurements of each person.\n119. [Machine Vision and Media Processing Unit](http:\u002F\u002Fwww.ee.oulu.fi\u002Fmvmp\u002F)\n121. [University of Oulu Texture Database](http:\u002F\u002Fwww.outex.oulu.fi) - Database of 320 surface textures, each captured under three illuminants, six spatial resolutions and nine rotation angles. A set of test suites is also provided so that texture segmentation, classification, and retrieval algorithms can be tested in a standard manner. (Formats: bmp, ras, xv)\n122. [Machine Vision Group](http:\u002F\u002Fwww.ee.oulu.fi\u002Fmvg)\n124. [Usenix face database](ftp:\u002F\u002Fftp.uu.net\u002Fpublished\u002Fusenix\u002Ffaces) - Thousands of face images from many different sites (circa 1994)\n125. [View Sphere Database](http:\u002F\u002Fwww-prima.inrialpes.fr\u002FPrima\u002Fhall\u002Fview_sphere.html) - Images of 8 objects seen from many different view points. The view sphere is sampled using a geodesic with 172 images\u002Fsphere. Two sets for training and testing are available. (Formats: ppm)\n126. [PRIMA, GRAVIR](http:\u002F\u002Fwww-prima.inrialpes.fr\u002FPrima\u002F)\n127. [Vision-list Imagery Archive](ftp:\u002F\u002Fftp.vislist.com\u002FIMAGERY\u002F) - Many images, many formats\n128. 
[Wiry Object Recognition Database](http:\u002F\u002Fwww.cs.cmu.edu\u002F~owenc\u002Fword.htm) - Thousands of images of a cart, ladder, stool, bicycle, chairs, and cluttered scenes with ground truth labelings of edges and regions. (Formats: jpg)\n129. [3D Vision Group](http:\u002F\u002Fwww.cs.cmu.edu\u002F~3dvision\u002F)\n131. [Yale Face Database](http:\u002F\u002Fcvc.yale.edu\u002Fprojects\u002Fyalefaces\u002Fyalefaces.html) - 165 images (15 individuals) with different lighting, expression, and occlusion configurations.\n132. [Yale Face Database B](http:\u002F\u002Fcvc.yale.edu\u002Fprojects\u002FyalefacesB\u002FyalefacesB.html) - 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions). (Formats: PGM)\n133. [Center for Computational Vision and Control](http:\u002F\u002Fcvc.yale.edu\u002F)\n134. [DeepMind QA Corpus](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frc-data) - Textual QA corpus from CNN and DailyMail. More than 300K documents in total. [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.03340) for reference.\n135. [YouTube-8M Dataset](https:\u002F\u002Fresearch.google.com\u002Fyoutube8m\u002F) - YouTube-8M is a large-scale labeled video dataset that consists of 8 million YouTube video IDs and associated labels from a diverse vocabulary of 4800 visual entities.\n136. [Open Images dataset](https:\u002F\u002Fgithub.com\u002Fopenimages\u002Fdataset) - Open Images is a dataset of ~9 million URLs to images that have been annotated with labels spanning over 6000 categories.\n137. [Visual Object Classes Challenge 2012 (VOC2012)](http:\u002F\u002Fhost.robots.ox.ac.uk\u002Fpascal\u002FVOC\u002Fvoc2012\u002Findex.html#devkit) - VOC2012 dataset containing 12k images with 20 annotated classes for object detection and segmentation.\n138. 
[Fashion-MNIST](https:\u002F\u002Fgithub.com\u002Fzalandoresearch\u002Ffashion-mnist) - MNIST-like fashion product dataset consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes.\n139. [Large-scale Fashion (DeepFashion) Database](http:\u002F\u002Fmmlab.ie.cuhk.edu.hk\u002Fprojects\u002FDeepFashion.html) - Contains over 800,000 diverse fashion images. Each image in this dataset is labeled with 50 categories, 1,000 descriptive attributes, bounding box and clothing landmarks\n140. [FakeNewsCorpus](https:\u002F\u002Fgithub.com\u002Fseveral27\u002FFakeNewsCorpus) - Contains about 10 million news articles classified using [opensources.co](http:\u002F\u002Fopensources.co) types\n141. [LLVIP](https:\u002F\u002Fgithub.com\u002Fbupt-ai-cz\u002FLLVIP) - 15488 visible-infrared paired images (30976 images) for low-light vision research, [Project_Page](https:\u002F\u002Fbupt-ai-cz.github.io\u002FLLVIP\u002F)\n142. [MSDA](https:\u002F\u002Fgithub.com\u002Fbupt-ai-cz\u002FMeta-SelfLearning) - Over 5 million images from 5 different domains for multi-source OCR\u002Ftext recognition domain adaptation research, [Project_Page](https:\u002F\u002Fbupt-ai-cz.github.io\u002FMeta-SelfLearning\u002F)\n143. [SANAD: Single-Label Arabic News Articles Dataset for Automatic Text Categorization](https:\u002F\u002Fdata.mendeley.com\u002Fdatasets\u002F57zpx667y9\u002F2) - SANAD Dataset is a large collection of Arabic news articles that can be used in different Arabic NLP tasks such as Text Classification and Word Embedding. The articles were collected using Python scripts written specifically for three popular news websites: AlKhaleej, AlArabiya and Akhbarona.\n144. [Referit3D](https:\u002F\u002Freferit3d.github.io) - Two large-scale and complementary visio-linguistic datasets (aka Nr3D and Sr3D) for identifying fine-grained 3D objects in ScanNet scenes. 
Nr3D contains 41.5K natural, free-form utterances, and Sr3D contains 83.5K template-based utterances.\n145. [SQuAD](https:\u002F\u002Frajpurkar.github.io\u002FSQuAD-explorer\u002F) - Stanford released ~100,000 English QA pairs and ~50,000 unanswerable questions\n146. [FQuAD](https:\u002F\u002Ffquad.illuin.tech\u002F) - ~25,000 French QA pairs released by Illuin Technology\n147. [GermanQuAD and GermanDPR](https:\u002F\u002Fwww.deepset.ai\u002Fgermanquad) - deepset released ~14,000 German QA pairs\n148. [SberQuAD](https:\u002F\u002Fgithub.com\u002Fannnyway\u002FQA-for-Russian) - Sberbank released ~90,000 Russian QA pairs\n149. [ArtEmis](http:\u002F\u002Fartemisdataset.org\u002F) - Contains 450K affective annotations of emotional responses and linguistic explanations for 80,000 artworks from WikiArt.\n\n### Conferences\n\n1. [CVPR - IEEE Conference on Computer Vision and Pattern Recognition](http:\u002F\u002Fcvpr2018.thecvf.com)\n2. [AAMAS - International Joint Conference on Autonomous Agents and Multiagent Systems](http:\u002F\u002Fcelweb.vuse.vanderbilt.edu\u002Faamas18\u002F)\n3. [IJCAI - International Joint Conference on Artificial Intelligence](https:\u002F\u002Fwww.ijcai-18.org\u002F)\n4. [ICML - International Conference on Machine Learning](https:\u002F\u002Ficml.cc)\n5. [ECML - European Conference on Machine Learning](http:\u002F\u002Fwww.ecmlpkdd2018.org)\n6. [KDD - Knowledge Discovery and Data Mining](http:\u002F\u002Fwww.kdd.org\u002Fkdd2018\u002F)\n7. [NIPS - Neural Information Processing Systems](https:\u002F\u002Fnips.cc\u002FConferences\u002F2018)\n8. [O'Reilly AI Conference - O'Reilly Artificial Intelligence Conference](https:\u002F\u002Fconferences.oreilly.com\u002Fartificial-intelligence\u002Fai-ny)\n9. [ICDM - International Conference on Data Mining](https:\u002F\u002Fwww.waset.org\u002Fconference\u002F2018\u002F07\u002Fistanbul\u002FICDM)\n10. [ICCV - International Conference on Computer Vision](http:\u002F\u002Ficcv2017.thecvf.com)\n11. 
[AAAI - Association for the Advancement of Artificial Intelligence](https:\u002F\u002Fwww.aaai.org)\n12. [MAIS - Montreal AI Symposium](https:\u002F\u002Fmontrealaisymposium.wordpress.com\u002F)\n\n### Frameworks\n\n1.  [Caffe](http:\u002F\u002Fcaffe.berkeleyvision.org\u002F)  \n2.  [Torch7](http:\u002F\u002Ftorch.ch\u002F)\n3.  [Theano](http:\u002F\u002Fdeeplearning.net\u002Fsoftware\u002Ftheano\u002F)\n4.  [cuda-convnet](https:\u002F\u002Fcode.google.com\u002Fp\u002Fcuda-convnet2\u002F)\n5.  [convnetjs](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fconvnetjs)\n5.  [Ccv](http:\u002F\u002Flibccv.org\u002Fdoc\u002Fdoc-convnet\u002F)\n6.  [NuPIC](http:\u002F\u002Fnumenta.org\u002Fnupic.html)\n7.  [DeepLearning4J](http:\u002F\u002Fdeeplearning4j.org\u002F)\n8.  [Brain](https:\u002F\u002Fgithub.com\u002Fharthur\u002Fbrain)\n9.  [DeepLearnToolbox](https:\u002F\u002Fgithub.com\u002Frasmusbergpalm\u002FDeepLearnToolbox)\n10.  [Deepnet](https:\u002F\u002Fgithub.com\u002Fnitishsrivastava\u002Fdeepnet)\n11.  [Deeppy](https:\u002F\u002Fgithub.com\u002Fandersbll\u002Fdeeppy)\n12.  [JavaNN](https:\u002F\u002Fgithub.com\u002Fivan-vasilev\u002Fneuralnetworks)\n13.  [hebel](https:\u002F\u002Fgithub.com\u002Fhannes-brt\u002Fhebel)\n14.  [Mocha.jl](https:\u002F\u002Fgithub.com\u002Fpluskid\u002FMocha.jl)\n15.  [OpenDL](https:\u002F\u002Fgithub.com\u002Fguoding83128\u002FOpenDL)\n16.  [cuDNN](https:\u002F\u002Fdeveloper.nvidia.com\u002FcuDNN)\n17.  [MGL](http:\u002F\u002Fmelisgl.github.io\u002Fmgl-pax-world\u002Fmgl-manual.html)\n18.  [Knet.jl](https:\u002F\u002Fgithub.com\u002Fdenizyuret\u002FKnet.jl)\n19.  [Nvidia DIGITS - a web app based on Caffe](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FDIGITS)\n20.  [Neon - Python based Deep Learning Framework](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fneon)\n21.  [Keras - Theano based Deep Learning Library](http:\u002F\u002Fkeras.io)\n22.  
[Chainer - A flexible framework of neural networks for deep learning](http:\u002F\u002Fchainer.org\u002F)\n23.  [RNNLM Toolkit](http:\u002F\u002Frnnlm.org\u002F)\n24.  [RNNLIB - A recurrent neural network library](http:\u002F\u002Fsourceforge.net\u002Fp\u002Frnnl\u002Fwiki\u002FHome\u002F)\n25.  [char-rnn](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fchar-rnn)\n26.  [MatConvNet: CNNs for MATLAB](https:\u002F\u002Fgithub.com\u002Fvlfeat\u002Fmatconvnet)\n27.  [Minerva - a fast and flexible tool for deep learning on multi-GPU](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fminerva)\n28.  [Brainstorm - Fast, flexible and fun neural networks.](https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fbrainstorm)\n29.  [Tensorflow - Open source software library for numerical computation using data flow graphs](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow)\n30.  [DMTK - Microsoft Distributed Machine Learning Toolkit](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FDMTK)\n31.  [Scikit Flow - Simplified interface for TensorFlow (mimicking Scikit Learn)](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fskflow)\n32.  [MXnet - Lightweight, Portable, Flexible Distributed\u002FMobile Deep Learning framework](https:\u002F\u002Fgithub.com\u002Fapache\u002Fincubator-mxnet)\n33.  [Veles - Samsung Distributed machine learning platform](https:\u002F\u002Fgithub.com\u002FSamsung\u002Fveles)\n34.  [Marvin - A Minimalist GPU-only N-Dimensional ConvNets Framework](https:\u002F\u002Fgithub.com\u002FPrincetonVision\u002Fmarvin)\n35.  [Apache SINGA - A General Distributed Deep Learning Platform](http:\u002F\u002Fsinga.incubator.apache.org\u002F)\n36.  [DSSTNE - Amazon's library for building Deep Learning models](https:\u002F\u002Fgithub.com\u002Famznlabs\u002Famazon-dsstne)\n37.  [SyntaxNet - Google's syntactic parser - A TensorFlow dependency library](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fsyntaxnet)\n38.  
[mlpack - A scalable Machine Learning library](http:\u002F\u002Fmlpack.org\u002F)\n39.  [Torchnet - Torch based Deep Learning Library](https:\u002F\u002Fgithub.com\u002Ftorchnet\u002Ftorchnet)\n40.  [Paddle - PArallel Distributed Deep LEarning by Baidu](https:\u002F\u002Fgithub.com\u002Fbaidu\u002Fpaddle)\n41.  [NeuPy - Theano based Python library for ANN and Deep Learning](http:\u002F\u002Fneupy.com)\n42.  [Lasagne - a lightweight library to build and train neural networks in Theano](https:\u002F\u002Fgithub.com\u002FLasagne\u002FLasagne)\n43.  [nolearn - wrappers and abstractions around existing neural network libraries, most notably Lasagne](https:\u002F\u002Fgithub.com\u002Fdnouri\u002Fnolearn)\n44.  [Sonnet - a library for constructing neural networks by Google's DeepMind](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fsonnet)\n45.  [PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch)\n46.  [CNTK - Microsoft Cognitive Toolkit](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FCNTK)\n47.  [Serpent.AI - Game agent framework: Use any video game as a deep learning sandbox](https:\u002F\u002Fgithub.com\u002FSerpentAI\u002FSerpentAI)\n48.  [Caffe2 - A New Lightweight, Modular, and Scalable Deep Learning Framework](https:\u002F\u002Fgithub.com\u002Fcaffe2\u002Fcaffe2)\n49.  [deeplearn.js - Hardware-accelerated deep learning and linear algebra (NumPy) library for the web](https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fdeeplearnjs)\n50.  [TVM - End to End Deep Learning Compiler Stack for CPUs, GPUs and specialized accelerators](https:\u002F\u002Ftvm.ai\u002F)\n51.  [Coach - Reinforcement Learning Coach by Intel® AI Lab](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fcoach)\n52.  [albumentations - A fast and framework agnostic image augmentation library](https:\u002F\u002Fgithub.com\u002Falbu\u002Falbumentations)\n53.  
[Neuraxle - A general-purpose ML pipelining framework](https:\u002F\u002Fgithub.com\u002FNeuraxio\u002FNeuraxle)\n54.  [Catalyst: High-level utils for PyTorch DL & RL research. It was developed with a focus on reproducibility, fast experimentation and code\u002Fideas reusing](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n55.  [garage - A toolkit for reproducible reinforcement learning research](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fgarage)\n56.  [Detecto - Train and run object detection models with 5-10 lines of code](https:\u002F\u002Fgithub.com\u002Falankbi\u002Fdetecto)\n57.  [Karate Club - An unsupervised machine learning library for graph structured data](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fkarateclub)\n58.  [Synapses - A lightweight library for neural networks that runs anywhere](https:\u002F\u002Fgithub.com\u002Fmrdimosthenis\u002FSynapses)\n59.  [TensorForce - A TensorFlow library for applied reinforcement learning](https:\u002F\u002Fgithub.com\u002Freinforceio\u002Ftensorforce)\n60.  [Hopsworks - A Feature Store for ML and Data-Intensive AI](https:\u002F\u002Fgithub.com\u002Flogicalclocks\u002Fhopsworks)\n61.  [Feast - A Feature Store for ML for GCP by Gojek\u002FGoogle](https:\u002F\u002Fgithub.com\u002Fgojek\u002Ffeast)\n62.  [PyTorch Geometric Temporal - Representation learning on dynamic graphs](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fpytorch_geometric_temporal)\n63.  [lightly - A computer vision framework for self-supervised learning](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly)\n64.  [Trax - Deep Learning with Clear Code and Speed](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Ftrax)\n65.  [Flax - a neural network ecosystem for JAX that is designed for flexibility](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fflax)\n66.  [QuickVision](https:\u002F\u002Fgithub.com\u002FQuick-AI\u002Fquickvision)\n67.  
[Colossal-AI - An Integrated Large-scale Model Training System with Efficient Parallelization Techniques](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI)\n68.  [haystack: an open-source neural search framework](https:\u002F\u002Fhaystack.deepset.ai\u002Fdocs\u002Fintromd)\n69.  [Maze](https:\u002F\u002Fgithub.com\u002Fenlite-ai\u002Fmaze) - Application-oriented deep reinforcement learning framework addressing real-world decision problems.\n70.  [InsNet - A neural network library for building instance-dependent NLP models with padding-free dynamic batching](https:\u002F\u002Fgithub.com\u002Fchncwang\u002FInsNet)\n\n### Tools\n\n1.  [Nebullvm](https:\u002F\u002Fgithub.com\u002Fnebuly-ai\u002Fnebullvm) - Easy-to-use library to boost deep learning inference leveraging multiple deep learning compilers.\n2.  [Netron](https:\u002F\u002Fgithub.com\u002Flutzroeder\u002Fnetron) - Visualizer for deep learning and machine learning models\n2.  [Jupyter Notebook](http:\u002F\u002Fjupyter.org) - Web-based notebook environment for interactive computing\n3.  [TensorBoard](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorboard) - TensorFlow's Visualization Toolkit\n4.  [Visual Studio Tools for AI](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fproject\u002Fvisual-studio-code-tools-ai\u002F) - Develop, debug and deploy deep learning and AI solutions\n5.  [TensorWatch](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftensorwatch) - Debugging and visualization for deep learning\n6. [ML Workspace](https:\u002F\u002Fgithub.com\u002Fml-tooling\u002Fml-workspace) - All-in-one web-based IDE for machine learning and data science.\n7.  [dowel](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fdowel) - A little logger for machine learning research. Log any object to the console, CSVs, TensorBoard, text log files, and more with just one call to `logger.log()`\n8.  
[Neptune](https:\u002F\u002Fneptune.ai\u002F) - Lightweight tool for experiment tracking and results visualization.\n9.  [CatalyzeX](https:\u002F\u002Fchrome.google.com\u002Fwebstore\u002Fdetail\u002Fcode-finder-for-research\u002Faikkeehnlfpamidigaffhfmgbkdeheil) - Browser extension ([Chrome](https:\u002F\u002Fchrome.google.com\u002Fwebstore\u002Fdetail\u002Fcode-finder-for-research\u002Faikkeehnlfpamidigaffhfmgbkdeheil) and [Firefox](https:\u002F\u002Faddons.mozilla.org\u002Fen-US\u002Ffirefox\u002Faddon\u002Fcode-finder-catalyzex\u002F)) that automatically finds and links to code implementations for ML papers anywhere online: Google, Twitter, Arxiv, Scholar, etc.\n10. [Determined](https:\u002F\u002Fgithub.com\u002Fdetermined-ai\u002Fdetermined) - Deep learning training platform with integrated support for distributed training, hyperparameter tuning, smart GPU scheduling, experiment tracking, and a model registry.\n11. [DAGsHub](https:\u002F\u002Fdagshub.com\u002F) - Community platform for Open Source ML – Manage experiments, data & models and create collaborative ML projects easily.\n12. [hub](https:\u002F\u002Fgithub.com\u002Factiveloopai\u002FHub) - Fastest unstructured dataset management for TensorFlow\u002FPyTorch by activeloop.ai. Stream & version-control data. Converts large data into a single numpy-like array on the cloud, accessible on any machine.\n13. [DVC](https:\u002F\u002Fdvc.org\u002F) - DVC is built to make ML models shareable and reproducible. It is designed to handle large files, data sets, machine learning models, and metrics as well as code.\n14. [CML](https:\u002F\u002Fcml.dev\u002F) - CML helps you bring your favorite DevOps tools to machine learning.\n15. [MLEM](https:\u002F\u002Fmlem.ai\u002F) - MLEM is a tool to easily package, deploy and serve Machine Learning models. It seamlessly supports a variety of scenarios like real-time serving and batch processing.\n16. 
[Maxim AI](https:\u002F\u002Fgetmaxim.ai) - Tool for AI Agent Simulation, Evaluation & Observability.\n\n\n### Miscellaneous\n\n1.  [Caffe Webinar](http:\u002F\u002Fon-demand-gtc.gputechconf.com\u002Fgtcnew\u002Fon-demand-gtc.php?searchByKeyword=shelhamer&searchItems=&sessionTopic=&sessionEvent=4&sessionYear=2014&sessionFormat=&submit=&select=+)\n2.  [100 Best Github Resources in Github for DL](http:\u002F\u002Fmeta-guide.com\u002Fsoftware-meta-guide\u002F100-best-github-deep-learning\u002F)\n3.  [Word2Vec](https:\u002F\u002Fcode.google.com\u002Fp\u002Fword2vec\u002F)\n4.  [Caffe DockerFile](https:\u002F\u002Fgithub.com\u002Ftleyden\u002Fdocker\u002Ftree\u002Fmaster\u002Fcaffe)\n5.  [TorontoDeepLearning convnet](https:\u002F\u002Fgithub.com\u002FTorontoDeepLearning\u002Fconvnet)\n6.  [gfx.js](https:\u002F\u002Fgithub.com\u002Fclementfarabet\u002Fgfx.js)\n7.  [Torch7 Cheat sheet](https:\u002F\u002Fgithub.com\u002Ftorch\u002Ftorch7\u002Fwiki\u002FCheatsheet)\n8. [Misc from MIT's 'Advanced Natural Language Processing' course](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-864-advanced-natural-language-processing-fall-2005\u002F)\n9. [Misc from MIT's 'Machine Learning' course](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-867-machine-learning-fall-2006\u002Flecture-notes\u002F)\n10. [Misc from MIT's 'Networks for Learning: Regression and Classification' course](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Fbrain-and-cognitive-sciences\u002F9-520-a-networks-for-learning-regression-and-classification-spring-2001\u002F)\n11. [Misc from MIT's 'Neural Coding and Perception of Sound' course](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Fhealth-sciences-and-technology\u002Fhst-723j-neural-coding-and-perception-of-sound-spring-2005\u002Findex.htm)\n12. 
[Implementing a Distributed Deep Learning Network over Spark](http:\u002F\u002Fwww.datasciencecentral.com\u002Fprofiles\u002Fblogs\u002Fimplementing-a-distributed-deep-learning-network-over-spark)\n13. [A chess AI that learns to play chess using deep learning.](https:\u002F\u002Fgithub.com\u002Ferikbern\u002Fdeep-pink)\n14. [Reproducing the results of \"Playing Atari with Deep Reinforcement Learning\" by DeepMind](https:\u002F\u002Fgithub.com\u002Fkristjankorjus\u002FReplicating-DeepMind)\n15. [Wiki2Vec. Getting Word2vec vectors for entities and words from Wikipedia dumps](https:\u002F\u002Fgithub.com\u002Fidio\u002Fwiki2vec)\n16. [The original code from the DeepMind article + tweaks](https:\u002F\u002Fgithub.com\u002Fkuz\u002FDeepMind-Atari-Deep-Q-Learner)\n17. [Google deepdream - Neural Network art](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fdeepdream)\n18. [An efficient, batched LSTM.](https:\u002F\u002Fgist.github.com\u002Fkarpathy\u002F587454dc0146a6ae21fc)\n19. [A recurrent neural network designed to generate classical music.](https:\u002F\u002Fgithub.com\u002Fhexahedria\u002Fbiaxial-rnn-music-composition)\n20. [Memory Networks Implementations - Facebook](https:\u002F\u002Fgithub.com\u002Ffacebook\u002FMemNN)\n21. [Face recognition with Google's FaceNet deep neural network.](https:\u002F\u002Fgithub.com\u002Fcmusatyalab\u002Fopenface)\n22. [Basic digit recognition neural network](https:\u002F\u002Fgithub.com\u002Fjoeledenberg\u002FDigitRecognition)\n23. [Emotion Recognition API Demo - Microsoft](https:\u002F\u002Fwww.projectoxford.ai\u002Fdemo\u002Femotion#detection)\n24. [Proof of concept for loading Caffe models in TensorFlow](https:\u002F\u002Fgithub.com\u002Fethereon\u002Fcaffe-tensorflow)\n25. [YOLO: Real-Time Object Detection](http:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F#webcam)\n26. 
[YOLO: Practical Implementation using Python](https:\u002F\u002Fwww.analyticsvidhya.com\u002Fblog\u002F2018\u002F12\u002Fpractical-guide-object-detection-yolo-framewor-python\u002F)\n27. [AlphaGo - A replication of DeepMind's 2016 Nature publication, \"Mastering the game of Go with deep neural networks and tree search\"](https:\u002F\u002Fgithub.com\u002FRochester-NRT\u002FAlphaGo)\n28. [Machine Learning for Software Engineers](https:\u002F\u002Fgithub.com\u002FZuzooVn\u002Fmachine-learning-for-software-engineers)\n29. [Machine Learning is Fun!](https:\u002F\u002Fmedium.com\u002F@ageitgey\u002Fmachine-learning-is-fun-80ea3ec3c471#.oa4rzez3g)\n30. [Siraj Raval's Deep Learning tutorials](https:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCWN3xxRkmTPmbKwht9FuE5A)\n31. [Dockerface](https:\u002F\u002Fgithub.com\u002Fnatanielruiz\u002Fdockerface) - Easy to install and use deep learning Faster R-CNN face detection for images and video in a docker container.\n32. [Awesome Deep Learning Music](https:\u002F\u002Fgithub.com\u002Fybayle\u002Fawesome-deep-learning-music) - Curated list of articles related to deep learning scientific research applied to music\n33. [Awesome Graph Embedding](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fawesome-graph-embedding) - Curated list of articles related to deep learning scientific research on graph structured data at the graph level.\n34. [Awesome Network Embedding](https:\u002F\u002Fgithub.com\u002Fchihming\u002Fawesome-network-embedding) - Curated list of articles related to deep learning scientific research on graph structured data at the node level.\n35. [Microsoft Recommenders](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FRecommenders) contains examples, utilities and best practices for building recommendation systems. Implementations of several state-of-the-art algorithms are provided for self-study and customization in your own applications.\n36. 
[The Unreasonable Effectiveness of Recurrent Neural Networks](http:\u002F\u002Fkarpathy.github.io\u002F2015\u002F05\u002F21\u002Frnn-effectiveness\u002F) - Andrej Karpathy blog post about using RNNs to generate text.\n37. [Ladder Network](https:\u002F\u002Fgithub.com\u002Fdivamgupta\u002Fladder_network_keras) - Keras Implementation of Ladder Network for Semi-Supervised Learning\n38. [toolbox: Curated list of ML libraries](https:\u002F\u002Fgithub.com\u002Famitness\u002Ftoolbox)\n39. [CNN Explainer](https:\u002F\u002Fpoloclub.github.io\u002Fcnn-explainer\u002F)\n40. [AI Expert Roadmap](https:\u002F\u002Fgithub.com\u002FAMAI-GmbH\u002FAI-Expert-Roadmap) - Roadmap to becoming an Artificial Intelligence Expert\n41. [Awesome Drug Interactions, Synergy, and Polypharmacy Prediction](https:\u002F\u002Fgithub.com\u002FAstraZeneca\u002Fawesome-polipharmacy-side-effect-prediction\u002F)\n\n-----\n### Contributing\nHave anything in mind that you think is awesome and would fit in this list? Feel free to send a [pull request](https:\u002F\u002Fgithub.com\u002Fashara12\u002Fawesome-deeplearning\u002Fpulls).\n\n-----\n## License\n\n[![CC0](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChristosChristofidis_awesome-deep-learning_readme_b7657951a0bb.png)](http:\u002F\u002Fcreativecommons.org\u002Fpublicdomain\u002Fzero\u002F1.0\u002F)\n\nTo the extent possible under law, [Christos Christofidis](https:\u002F\u002Flinkedin.com\u002Fin\u002FChristofidis) has waived all copyright and related or neighboring rights to this work.\n","# Awesome Deep Learning [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\n## Table of Contents\n\n* **[Books](#books)**\n\n* **[Courses](#courses)**  \n\n* **[Videos and Lectures](#videos-and-lectures)**  \n\n* **[Papers](#papers)**  \n\n* **[Tutorials](#tutorials)**  \n\n* **[Researchers](#researchers)**  \n\n* **[Websites](#websites)**  \n\n* 
**[Datasets](#datasets)**\n\n* **[Conferences](#Conferences)**\n\n* **[Frameworks](#frameworks)**  \n\n* **[Tools](#tools)**  \n\n* **[Miscellaneous](#miscellaneous)**  \n\n* **[Contributing](#contributing)**  \n\n\n### Books\n\n1.  Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville (July 5, 2015)\n2.  Neural Networks and Deep Learning by Michael Nielsen (December 2014)\n3.  Deep Learning by Microsoft Research (2013)\n4.  Deep Learning Tutorial by the LISA lab, University of Montreal (January 6, 2015)\n5.  [neuraltalk](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fneuraltalk) by Andrej Karpathy: numpy-based RNN\u002FLSTM implementation\n6.  An Introduction to Genetic Algorithms (http:\u002F\u002Fwww.boente.eti.br\u002Ffuzzy\u002Febook-fuzzy-mitchell.pdf)\n7.  Artificial Intelligence: A Modern Approach (http:\u002F\u002Faima.cs.berkeley.edu\u002F)\n8.  Deep Learning in Neural Networks: An Overview (http:\u002F\u002Farxiv.org\u002Fpdf\u002F1404.7828v4.pdf)\n9.  Artificial Intelligence and Machine Learning: Topic Wise Explanation (https:\u002F\u002Fleonardoaraujosantos.gitbooks.io\u002Fartificial-inteligence\u002F)\n10. Grokking Deep Learning for Computer Vision (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fgrokking-deep-learning-for-computer-vision)\n11. Dive into Deep Learning - an interactive, numpy-based deep learning book (https:\u002F\u002Fd2l.ai\u002F)\n12. Practical Deep Learning for Cloud, Mobile, and Edge (https:\u002F\u002Fwww.oreilly.com\u002Flibrary\u002Fview\u002Fpractical-deep-learning\u002F9781492034858\u002F) - a book on optimization techniques for production environments.\n13. Math and Architectures of Deep Learning by Krishnendu Chaudhury (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fmath-and-architectures-of-deep-learning)\n14. Tensorflow 2.0 in Action by Thushan Ganegedara (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Ftensorflow-in-action)\n15. 
Deep Learning for Natural Language Processing by Stephan Raaijmakers (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-for-natural-language-processing)\n16. Deep Learning Patterns and Practices by Andrew Ferlitsch (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-patterns-and-practices)\n17. Inside Deep Learning by Edward Raff (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Finside-deep-learning)\n18. Deep Learning with Python, Second Edition by François Chollet (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-python-second-edition)\n19. Evolutionary Deep Learning by Micheal Lanham (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fevolutionary-deep-learning)\n20. Engineering Deep Learning Platforms by Chi Wang and Donald Szeto (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fengineering-deep-learning-platforms)\n21. Deep Learning with R, Second Edition by François Chollet with Tomasz Kalinowski and J. J. Allaire (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-r-second-edition)\n22. Regularization in Deep Learning by Liu Peng (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fregularization-in-deep-learning)\n23. Jax in Action by Grigory Sapunov (https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fjax-in-action)\n24. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron | October 15, 2019\n\n### Courses\n\n1.  [Machine Learning - Stanford](https:\u002F\u002Fclass.coursera.org\u002Fml-005) by Andrew Ng on Coursera (2010-2014)\n2.  [Machine Learning - Caltech](http:\u002F\u002Fwork.caltech.edu\u002Flectures.html) by Yaser Abu-Mostafa (2012-2014)\n3.  [Machine Learning - Carnegie Mellon](http:\u002F\u002Fwww.cs.cmu.edu\u002F~tom\u002F10701_sp11\u002Flectures.shtml) by Tom Mitchell (Spring 2011)\n2.  [Neural Networks for Machine Learning](https:\u002F\u002Fclass.coursera.org\u002Fneuralnets-2012-001) by Geoffrey Hinton on Coursera (2012)\n3.  
[Neural networks class](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH) by Hugo Larochelle from Université de Sherbrooke (2013)\n4.  [Deep Learning Course](http:\u002F\u002Fcilvr.cs.nyu.edu\u002Fdoku.php?id=deeplearning:slides:start) by the CILVR lab at NYU (2014)\n5.  [A.I - Berkeley](https:\u002F\u002Fcourses.edx.org\u002Fcourses\u002FBerkeleyX\u002FCS188x_1\u002F1T2013\u002Fcourseware\u002F) by Dan Klein and Pieter Abbeel (2013)\n6.  [A.I - MIT](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-034-artificial-intelligence-fall-2010\u002Flecture-videos\u002F) by Patrick Henry Winston (2010)\n7.  [Vision and learning - computers and brains](http:\u002F\u002Fweb.mit.edu\u002Fcourse\u002Fother\u002Fi2course\u002Fwww\u002Fvision_and_learning_fall_2013.html) by Shimon Ullman, Tomaso Poggio, Ethan Meyers and others at MIT (2013)\n9.  [Convolutional Neural Networks for Visual Recognition - Stanford](http:\u002F\u002Fvision.stanford.edu\u002Fteaching\u002Fcs231n\u002Fsyllabus.html) by Fei-Fei Li and Andrej Karpathy (2017)\n10.  [Deep Learning for Natural Language Processing - Stanford](http:\u002F\u002Fcs224d.stanford.edu\u002F)\n11.  [Neural Networks - Université de Sherbrooke](http:\u002F\u002Finfo.usherbrooke.ca\u002Fhlarochelle\u002Fneural_networks\u002Fcontent.html)\n12.  [Machine Learning - Oxford](https:\u002F\u002Fwww.cs.ox.ac.uk\u002Fpeople\u002Fnando.defreitas\u002Fmachinelearning\u002F) (2014-2015)\n13.  [Deep Learning - NVIDIA](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeep-learning-courses) (2015)\n14.  [Graduate Summer School: Deep Learning, Feature Learning](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLHyI3Fbmv0SdzMHAy0aN59oYnLy5vyyTA) by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, Nando de Freitas and others at IPAM, UCLA (2012)\n15.  [Deep Learning - Udacity\u002FGoogle](https:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-learning--ud730) by Vincent Vanhoucke and Arpan Chakraborty (2016)\n16.  [Deep Learning - University of Waterloo](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLehuLRPyt1Hyi78UOkMPWCGRxGcA9NVOE) by Prof. Ali Ghodsi (2015)\n17.  [Statistical Machine Learning - CMU](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=azaLcvuql_g&list=PLjbUi5mgii6BWEUZf7He6nowWvGne_Y8r) by Prof. Larry Wasserman\n18.  
[Deep Learning Course](https://www.college-de-france.fr/site/en-yann-lecun/course-2015-2016.htm) by Yann LeCun (2016)
19.  [Designing, Visualizing and Understanding Deep Neural Networks - UC Berkeley](https://www.youtube.com/playlist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm)
20.  [UVA Deep Learning Course](http://uvadlc.github.io) MSc in Artificial Intelligence at the University of Amsterdam.
21.  [MIT 6.S094: Deep Learning for Self-Driving Cars](http://selfdrivingcars.mit.edu/)
22.  [MIT 6.S191: Introduction to Deep Learning](http://introtodeeplearning.com/)
23.  [Berkeley CS 294: Deep Reinforcement Learning](http://rll.berkeley.edu/deeprlcourse/)
24.  [Keras in Motion video course](https://www.manning.com/livevideo/keras-in-motion)
25.  [Practical Deep Learning For Coders](http://course.fast.ai/) by Jeremy Howard - Fast.ai
26.  [Introduction to Deep Learning](http://deeplearning.cs.cmu.edu/) by Prof. Bhiksha Raj (2017)
27.  [AI for Everyone](https://www.deeplearning.ai/ai-for-everyone/) by Andrew Ng (2019)
28.  [MIT Intro to Deep Learning 7-day bootcamp](https://introtodeeplearning.com) - A seven-day bootcamp designed at MIT to introduce deep learning methods and applications (2019)
29.  [Deep Blueberry: Deep Learning](https://mithi.github.io/deep-blueberry) - A free five-weekend plan for self-learners to learn the basics of deep-learning architectures like CNNs, LSTMs, RNNs, VAEs, GANs, DQN, A3C and more (2019)
30.  [Spinning Up in Deep Reinforcement Learning](https://spinningup.openai.com/) - A free deep reinforcement learning course by OpenAI (2019)
31.  [Deep Learning Specialization - Coursera](https://www.coursera.org/specializations/deep-learning) - Breaking into AI with the best course from Andrew Ng.
32.  [Deep Learning - UC Berkeley | STAT-157](https://www.youtube.com/playlist?list=PLZSO_6-bSqHQHBCoGaObUljoXAyyqhpFW) by Alex Smola and Mu Li (2019)
33.  [Machine Learning for Mere Mortals video course](https://www.manning.com/livevideo/machine-learning-for-mere-mortals) by Nick Chase
34.  [Machine Learning Crash Course with TensorFlow APIs](https://developers.google.com/machine-learning/crash-course/) - Google AI
35.  [Deep Learning from the Foundations](https://course.fast.ai/part2) by Jeremy Howard - Fast.ai
36.  
[Deep Reinforcement Learning (nanodegree) - Udacity](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) a 3-6 month Udacity nanodegree, spanning multiple courses (2018)
37.  [Grokking Deep Learning in Motion](https://www.manning.com/livevideo/grokking-deep-learning-in-motion) by Beau Carnes (2018)
38.  [Face Detection with Computer Vision and Deep Learning](https://www.udemy.com/share/1000gAA0QdcV9aQng=/) by Hakan Cebeci
39.  [Deep Learning Online Course list at Classpert](https://classpert.com/deep-learning) List of deep learning online courses (some are free) from Classpert Online Course Search
40.  [AWS Machine Learning](https://aws.training/machinelearning) Machine learning and deep learning courses from Amazon's Machine Learning University
41.  [Intro to Deep Learning with PyTorch](https://www.udacity.com/course/deep-learning-pytorch--ud188) - A great introductory course on deep learning by Udacity and Facebook AI
42.  [Deep Learning Courses on Kaggle](https://www.kaggle.com/learn/deep-learning) - A free deep learning course from Kaggle
43.  [Yann LeCun's Deep Learning Course at CDS](https://cds.nyu.edu/deep-learning/) - DS-GA 1008 · Spring 2021
44.  [Neural Networks and Deep Learning](https://webcms3.cse.unsw.edu.au/COMP9444/19T3/) - COMP9444 19T3
45.  [Deep Learning A.I.Shelf](http://aishelf.org/category/ia/deep-learning/)

### Videos and Lectures

1.  [How To Create A Mind](https://www.youtube.com/watch?v=RIkxVci-R4k) by Ray Kurzweil
2.  [Deep Learning, Self-Taught Learning and Unsupervised Feature Learning](https://www.youtube.com/watch?v=n1ViNeWhC24) by Andrew Ng
3.  [Recent Developments in Deep Learning](https://www.youtube.com/watch?v=vShMxxqtDDs&index=3&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT) by Geoff Hinton
4.  [The Unreasonable Effectiveness of Deep Learning](https://www.youtube.com/watch?v=sc-KbuZqGkI) by Yann LeCun
5.  [Deep Learning of Representations](https://www.youtube.com/watch?v=4xsVFLnHC_0) by Yoshua Bengio
6.  [Principles of Hierarchical Temporal Memory](https://www.youtube.com/watch?v=6ufPpZDmPKA) by Jeff Hawkins
7.  [Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab](https://www.youtube.com/watch?v=2QJi0ArLq7s&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT) by Adam Coates
8.  [Making Sense of the World with Deep Learning](http://vimeo.com/80821560) by Adam Coates
9.  
[Demystifying Unsupervised Feature Learning](https://www.youtube.com/watch?v=wZfVBwOO0-k) by Adam Coates
10.  [Visual Perception with Deep Learning](https://www.youtube.com/watch?v=3boKlkPBckA) by Yann LeCun
11.  [The Next Generation of Neural Networks](https://www.youtube.com/watch?v=AyzOUbkUf3M) by Geoffrey Hinton at GoogleTechTalks
12.  [The wonderful and terrifying implications of computers that can learn](http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn) by Jeremy Howard at TEDxBrussels
13.  [Unsupervised Deep Learning - Stanford](http://web.stanford.edu/class/cs294a/handouts.html) by Andrew Ng at Stanford (2011)
14.  [Natural Language Processing](http://web.stanford.edu/class/cs224n/handouts/) by Chris Manning at Stanford
15.  [A Beginner's Guide to Deep Neural Networks](http://googleresearch.blogspot.com/2015/09/a-beginners-guide-to-deep-neural.html) by Natalie Hammel and Lorraine Yurshansky
16.  [Deep Learning: Intelligence from Big Data](https://www.youtube.com/watch?v=czLI3oLDe8M) by Steve Jurvetson (and panel) at VLAB in Stanford
17.  [Introduction to Artificial Neural Networks and Deep Learning](https://www.youtube.com/watch?v=FoO8qDB8gUU) by Leo Isikdogan at Motorola Mobility HQ
18.  [NIPS 2016 lecture and workshop videos](https://nips.cc/Conferences/2016/Schedule) - NIPS 2016
19.  [Deep Learning Crash Course](https://www.youtube.com/watch?v=oS5fz_mHVz0&list=PLWKotBjTDoLj3rXBL-nEIPRN9V3a9Cx07): a series of mini-lectures by Leo Isikdogan on YouTube (2018)
20.  [Deep Learning Crash Course](https://www.manning.com/livevideo/deep-learning-crash-course) by Oliver Zeigermann
21.  [Deep Learning with R in Motion](https://www.manning.com/livevideo/deep-learning-with-r-in-motion): a live video course that teaches how to apply deep learning to text and images using the powerful Keras library and its R language interface.
22.  [Medical Imaging with Deep Learning Tutorial](https://www.youtube.com/playlist?list=PLheiZMDg_8ufxEx9cNVcOYXsT3BppJP4b): This tutorial is styled as a graduate lecture on deep learning for medical imaging. It covers background on popular medical image domains (chest X-ray and histology) as well as methods to tackle multi-modality/view, segmentation, and counting tasks.
23.  [DeepMind x UCL Deep Learning](https://www.youtube.com/playlist?list=PLqYmG7hTraZCDxZ44o4p3N5Anz3lLRVZF): 2020 version
24.  
[DeepMind x UCL Reinforcement Learning](https://www.youtube.com/playlist?list=PLqYmG7hTraZBKeNJ-JE_eyJHZ7XgBoAyb): Deep Reinforcement Learning
25.  [CMU 11-785 Introduction to Deep Learning, Spring 2020](https://www.youtube.com/playlist?list=PLp-0K3kfddPzCnS4CqKphh-zT3aDwybDe) Course: 11-785, Intro to Deep Learning, by Bhiksha Raj
26.  [Machine Learning CS 229](https://www.youtube.com/playlist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU): The latter half focuses on deep learning, by Andrew Ng
27.  [Neural Structured Learning with Andrew Ferlitsch](https://youtu.be/LXWSE_9gHd0)
28.  [Deep Learning Design Patterns with Andrew Ferlitsch](https://youtu.be/_DaviS6K0Vc)
29.  [Architecture of a Modern CNN: the design pattern approach, by Andrew Ferlitsch](https://youtu.be/QCGSS3kyGo0)
30.  [Hyperparameters in CNNs, by Andrew Ferlitsch](https://youtu.be/K1PLeggQ33I)
31.  [Multi-task CNNs: a real-world example, by Andrew Ferlitsch](https://youtu.be/dH2nuI-1-qM)
32.  [A friendly introduction to deep reinforcement learning, by Luis Serrano](https://youtu.be/1FyAh07jh0o)
33.  [What are GANs and how do they work?](https://youtu.be/f6ivp84qFUc) by Edward Raff
34.  [Coding a basic WGAN in PyTorch, by Edward Raff](https://youtu.be/7VRdaqMDalQ)
35.  [Training a Reinforcement Learning Agent, by Miguel Morales](https://youtu.be/8TMT-gHlj_Q)
36.  [Understand what is deep learning](https://www.scaler.com/topics/what-is-deep-learning/)

### Papers
*You can also find the most cited deep learning papers [here](https://github.com/terryum/awesome-deep-learning-papers)*

1.  [ImageNet Classification with Deep Convolutional Neural Networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
2.  [Using Very Deep Autoencoders for Content-Based Image Retrieval](http://www.cs.toronto.edu/~hinton/absps/esann-deep-final.pdf)
3.  [Learning Deep Architectures for AI](http://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf)
4.  [CMU's list of papers](http://deeplearning.cs.cmu.edu/)
5.  [Neural Networks for Named Entity Recognition](http://nlp.stanford.edu/~socherr/pa4_ner.pdf) [zip](http://nlp.stanford.edu/~socherr/pa4-ner.zip)
6. [Training tricks by YB](http://www.iro.umontreal.ca/~bengioy/papers/YB-tricks.pdf)
7. 
[Geoff Hinton's reading list (all papers)](http://www.cs.toronto.edu/~hinton/deeprefs.html)
8. [Supervised Sequence Labelling with Recurrent Neural Networks](http://www.cs.toronto.edu/~graves/preprint.pdf)
9.  [Statistical Language Models Based on Neural Networks](http://www.fit.vutbr.cz/~imikolov/rnnlm/thesis.pdf)
10.  [Training Recurrent Neural Networks](http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf)
11.  [Recursive Deep Learning for Natural Language Processing and Computer Vision](http://nlp.stanford.edu/~socherr/thesis.pdf)
12.  [Bi-directional RNN](http://www.di.ufpe.br/~fnj/RNA/bibliografia/BRNN.pdf)
13.  [LSTM](http://web.eecs.utk.edu/~itamar/courses/ECE-692/Bobby_paper1.pdf)
14.  [GRU - Gated Recurrent Unit](http://arxiv.org/pdf/1406.1078v3.pdf)
15.  [GFRNN](http://arxiv.org/pdf/1502.02367v3.pdf) [.](http://jmlr.org/proceedings/papers/v37/chung15.pdf) [.](http://jmlr.org/proceedings/papers/v37/chung15-supp.pdf)
16.  [LSTM: A Search Space Odyssey](http://arxiv.org/pdf/1503.04069v1.pdf)
17.  [A Critical Review of Recurrent Neural Networks for Sequence Learning](http://arxiv.org/pdf/1506.00019v1.pdf)
18.  [Visualizing and Understanding Recurrent Networks](http://arxiv.org/pdf/1506.02078v1.pdf)
19.  [Wojciech Zaremba, Ilya Sutskever, An Empirical Exploration of Recurrent Network Architectures](http://jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
20.  [Recurrent Neural Network based Language Model](http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf)
21.  [Extensions of Recurrent Neural Network Language Model](http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_icassp2011_5528.pdf)
22.  [Recurrent Neural Network based Language Modeling in Meeting Recognition](http://www.fit.vutbr.cz/~imikolov/rnnlm/ApplicationOfRNNinMeetingRecognition_IS2011.pdf)
23.  [Deep Neural Networks for Acoustic Modeling in Speech Recognition](http://cs224d.stanford.edu/papers/maas_paper.pdf)
24.  
[Speech Recognition with Deep Recurrent Neural Networks](http://www.cs.toronto.edu/~fritz/absps/RNN13.pdf)
25.  [Reinforcement Learning Neural Turing Machines](http://arxiv.org/pdf/1505.00521v1.pdf)
26.  [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](http://arxiv.org/pdf/1406.1078v3.pdf)
27. [Google - Sequence to Sequence Learning with Neural Networks](http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf)
28. [Memory Networks](http://arxiv.org/pdf/1410.3916v10.pdf)
29. [Policy Learning with Continuous Memory States for Partially Observed Robotic Control](http://arxiv.org/pdf/1507.01273v1.pdf)
30. [Microsoft - Jointly Modeling Embedding and Translation to Bridge Video and Language](http://arxiv.org/pdf/1505.01861v1.pdf)
31. [Neural Turing Machines](http://arxiv.org/pdf/1410.5401v2.pdf)
32. [Ask Me Anything: Dynamic Memory Networks for Natural Language Processing](http://arxiv.org/pdf/1506.07285v1.pdf)
33. [Mastering the Game of Go with Deep Neural Networks and Tree Search](http://www.nature.com/nature/journal/v529/n7587/pdf/nature16961.pdf)
34. [Batch Normalization](https://arxiv.org/abs/1502.03167)
35. [Residual Learning](https://arxiv.org/pdf/1512.03385v1.pdf)
36. [Image-to-Image Translation with Conditional Adversarial Networks](https://arxiv.org/pdf/1611.07004v1.pdf)
37. [Berkeley AI Research (BAIR) Laboratory](https://arxiv.org/pdf/1611.07004v1.pdf)
38. [MobileNets by Google](https://arxiv.org/abs/1704.04861)
39. [Cross Audio-Visual Recognition in the Wild Using Deep Learning](https://arxiv.org/abs/1706.05739)
40. [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829)
41. [Matrix Capsules with EM Routing](https://openreview.net/pdf?id=HJWLfGWRb)
42. [Efficient BackProp](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)
43. [Generative Adversarial Nets](https://arxiv.org/pdf/1406.2661v1.pdf)
44. [Fast R-CNN](https://arxiv.org/pdf/1504.08083.pdf)
45. [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)
46. [Siamese Neural Networks for One-shot Image Recognition](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf)
47. 
[Unsupervised Translation of Programming Languages](https://arxiv.org/pdf/2006.03511.pdf)
48. [Matching Networks for One Shot Learning](http://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning.pdf)
49. [VOLO: Vision Outlooker for Visual Recognition](https://arxiv.org/pdf/2106.13112.pdf)
50. [ViT: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/pdf/2010.11929.pdf)
51. [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](http://proceedings.mlr.press/v37/ioffe15.pdf)
52. [DeepFaceDrawing: Deep Generation of Face Images from Sketches](http://geometrylearning.com/paper/DeepFaceDrawing.pdf?fbclid=IwAR0colWFHPGBCB1APZq9JVsWeWtmeZd9oCTNQvR52T5PRUJP_dLOwB8pt0I)

### Tutorials

1.  [UFLDL Tutorial 1](http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial)
2.  [UFLDL Tutorial 2](http://ufldl.stanford.edu/tutorial/supervised/LinearRegression/)
3.  [Deep Learning for NLP (without Magic)](http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial)
4.  [A Deep Learning Tutorial: From Perceptrons to Deep Networks](http://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks)
5.  [Deep Learning from the Bottom Up](http://www.metacademy.org/roadmaps/rgrosse/deep_learning)
6.  [Theano Tutorial](http://deeplearning.net/tutorial/deeplearning.pdf)
7.  [Neural Networks for Matlab](http://uk.mathworks.com/help/pdf_doc/nnet/nnet_ug.pdf)
8.  [Using convolutional neural nets to detect facial keypoints tutorial](http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/)
9.  [Torch7 Tutorials](https://github.com/clementfarabet/ipam-tutorials/tree/master/th_tutorials)
10.  [The Best Machine Learning Tutorials On The Web](https://github.com/josephmisiti/machine-learning-module)
11. [VGG Convolutional Neural Networks Practical](http://www.robots.ox.ac.uk/~vgg/practicals/cnn/index.html)
12. 
[TensorFlow tutorials](https://github.com/nlintz/TensorFlow-Tutorials)
13. [More TensorFlow tutorials](https://github.com/pkmital/tensorflow_tutorials)
13. [TensorFlow Python Notebooks](https://github.com/aymericdamien/TensorFlow-Examples)
14. [Keras and Lasagne Deep Learning Tutorials](https://github.com/Vict0rSch/deep_learning)
15. [Classification on raw time series in TensorFlow with an LSTM RNN](https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition)
16. [Using convolutional neural nets to detect facial keypoints tutorial](http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/)
17. [TensorFlow-World](https://github.com/astorfi/TensorFlow-World)
18. [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python)
19. [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning)
20. [Deep Learning for Search](https://www.manning.com/books/deep-learning-for-search)
21. [Keras Tutorial: Content Based Image Retrieval Using a Convolutional Denoising Autoencoder](https://medium.com/sicara/keras-tutorial-content-based-image-retrieval-convolutional-denoising-autoencoder-dc91450cc511)
22. [PyTorch Tutorial by Yunjey Choi](https://github.com/yunjey/pytorch-tutorial)
23. [Understanding deep convolutional neural networks with a practical use-case in TensorFlow and Keras](https://ahmedbesbes.com/understanding-deep-convolutional-neural-networks-with-a-practical-use-case-in-tensorflow-and-keras.html)
24. [Overview and benchmark of traditional and deep learning models in text classification](https://ahmedbesbes.com/overview-and-benchmark-of-traditional-and-deep-learning-models-in-text-classification.html)
25. [Hardware for AI: Understanding computer hardware & build your own computer](https://github.com/MelAbgrall/HardwareforAI)
26. [Programming Community Curated Resources](https://hackr.io/tutorials/learn-artificial-intelligence-ai)
27. [The Illustrated Self-Supervised Learning](https://amitness.com/2020/02/illustrated-self-supervised-learning/)
28. 
[Visual Paper Summary: ALBERT (A Lite BERT)](https://amitness.com/2020/02/albert-visual-summary/)
28. [Semi-Supervised Deep Learning with GANs for Melanoma Detection](https://www.manning.com/liveproject/semi-supervised-deep-learning-with-gans-for-melanoma-detection/)
29. [Named Entity Recognition using Reformers](https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/NER%20using%20Reformer.ipynb)
30. [Deep N-Gram Models on Shakespeare's Works](https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/Deep%20N-Gram.ipynb)
31. [Wide Residual Networks](https://github.com/SauravMaheshkar/Trax-Examples/blob/main/vision/illustrated-wideresnet.ipynb)
32. [Fashion MNIST using Flax](https://github.com/SauravMaheshkar/Flax-Examples)
33. [Fake News Classification (with streamlit deployment)](https://github.com/SauravMaheshkar/Fake-News-Classification)
34. [Regression Analysis for Primary Biliary Cirrhosis](https://github.com/SauravMaheshkar/CoxPH-Model-for-Primary-Biliary-Cirrhosis)
35. [Cross Matching Methods for Astronomical Catalogs](https://github.com/SauravMaheshkar/Cross-Matching-Methods-for-Astronomical-Catalogs)
36. [Named Entity Recognition using Bidirectional LSTMs](https://github.com/SauravMaheshkar/Named-Entity-Recognition-)
37. [Image Recognition App using Tflite and Flutter](https://github.com/SauravMaheshkar/Flutter_Image-Recognition)

## Researchers

1. [Aaron Courville](http://aaroncourville.wordpress.com)
2. [Abdel-rahman Mohamed](http://www.cs.toronto.edu/~asamir/)
3. [Adam Coates](http://cs.stanford.edu/~acoates/)
4. [Alex Acero](http://research.microsoft.com/en-us/people/alexac/)
5. [Alex Krizhevsky](http://www.cs.utoronto.ca/~kriz/index.html)
6. [Alexander Ilin](http://users.ics.aalto.fi/alexilin/)
7. [Amos Storkey](http://homepages.inf.ed.ac.uk/amos/)
8. 
[Andrej Karpathy](https://karpathy.ai/)
9. [Andrew M. Saxe](http://www.stanford.edu/~asaxe/)
10. [Andrew Ng](http://www.cs.stanford.edu/people/ang/)
11. [Andrew W. Senior](http://research.google.com/pubs/author37792.html)
12. [Andriy Mnih](http://www.gatsby.ucl.ac.uk/~amnih/)
13. [Ayse Naz Erkan](http://www.cs.nyu.edu/~naz/)
14. [Benjamin Schrauwen](http://reslab.elis.ugent.be/benjamin)
15. [Bernardete Ribeiro](https://www.cisuc.uc.pt/people/show/2020)
16. [Bo David Chen](http://vision.caltech.edu/~bchen3/Site/Bo_David_Chen.html)
17. [Boureau Y-Lan](http://cs.nyu.edu/~ylan/)
18. [Brian Kingsbury](http://researcher.watson.ibm.com/researcher/view.php?person=us-bedk)
19. [Christopher Manning](http://nlp.stanford.edu/~manning/)
20. [Clement Farabet](http://www.clement.farabet.net/)
21. [Dan Claudiu Cireșan](http://www.idsia.ch/~ciresan/)
22. [David Reichert](http://serre-lab.clps.brown.edu/person/david-reichert/)
23. [Derek Rose](http://mil.engr.utk.edu/nmil/member/5.html)
24. [Dong Yu](http://research.microsoft.com/en-us/people/dongyu/default.aspx)
25. [Drausin Wulsin](http://www.seas.upenn.edu/~wulsin/)
26. [Erik M. Schmidt](http://music.ece.drexel.edu/people/eschmidt)
27. [Eugenio Culurciello](https://engineering.purdue.edu/BME/People/viewPersonById?resource_id=71333)
28. [Frank Seide](http://research.microsoft.com/en-us/people/fseide/)
29. [Galen Andrew](http://homes.cs.washington.edu/~galen/)
30. 
[Geoffrey Hinton](http://www.cs.toronto.edu/~hinton/)
31. [George Dahl](http://www.cs.toronto.edu/~gdahl/)
32. [Graham Taylor](http://www.uoguelph.ca/~gwtaylor/)
33. [Grégoire Montavon](http://gregoire.montavon.name/)
34. [Guido Francisco Montúfar](http://personal-homepages.mis.mpg.de/montufar/)
35. [Guillaume Desjardins](http://brainlogging.wordpress.com/)
36. [Hannes Schulz](http://www.ais.uni-bonn.de/~schulz/)
37. [Hélène Paugam-Moisy](http://www.lri.fr/~hpaugam/)
38. [Honglak Lee](http://web.eecs.umich.edu/~honglak/)
39. [Hugo Larochelle](http://www.dmi.usherb.ca/~larocheh/index_en.html)
40. [Ilya Sutskever](http://www.cs.toronto.edu/~ilya/)
41. [Itamar Arel](http://mil.engr.utk.edu/nmil/member/2.html)
42. [James Martens](http://www.cs.toronto.edu/~jmartens/)
43. [Jason Morton](http://www.jasonmorton.com/)
44. [Jason Weston](http://www.thespermwhale.com/jaseweston/)
45. [Jeff Dean](http://research.google.com/pubs/jeff.html)
46. [Jiquan Mgiam](http://cs.stanford.edu/~jngiam/)
47. [Joseph Turian](http://www-etud.iro.umontreal.ca/~turian/)
48. [Joshua Matthew Susskind](http://aclab.ca/users/josh/index.html)
49. [Jürgen Schmidhuber](http://www.idsia.ch/~juergen/)
50. [Justin A. Blanco](https://sites.google.com/site/blancousna/)
51. [Koray Kavukcuoglu](http://koray.kavukcuoglu.org/)
52. [KyungHyun Cho](http://users.ics.aalto.fi/kcho/)
53. [Li Deng](http://research.microsoft.com/en-us/people/deng/)
54. 
[Lucas Theis](http://www.kyb.tuebingen.mpg.de/nc/employee/details/lucas.html)
55. [Ludovic Arnold](http://ludovicarnold.altervista.org/home/)
56. [Marc'Aurelio Ranzato](http://www.cs.nyu.edu/~ranzato/)
57. [Martin Längkvist](http://aass.oru.se/~mlt/)
58. [Misha Denil](http://mdenil.com/)
59. [Mohammad Norouzi](http://www.cs.toronto.edu/~norouzi/)
60. [Nando de Freitas](http://www.cs.ubc.ca/~nando/)
61. [Navdeep Jaitly](http://www.cs.utoronto.ca/~ndjaitly/)
62. [Nicolas Le Roux](http://nicolas.le-roux.name/)
63. [Nitish Srivastava](http://www.cs.toronto.edu/~nitish/)
64. [Noel Lopes](https://www.cisuc.uc.pt/people/show/2028)
65. [Oriol Vinyals](http://www.cs.berkeley.edu/~vinyals/)
66. [Pascal Vincent](http://www.iro.umontreal.ca/~vincentp)
67. [Patrick Nguyen](https://sites.google.com/site/drpngx/)
68. [Pedro Domingos](http://homes.cs.washington.edu/~pedrod/)
69. [Peggy Series](http://homepages.inf.ed.ac.uk/pseries/)
70. [Pierre Sermanet](http://cs.nyu.edu/~sermanet)
71. [Piotr Mirowski](http://www.cs.nyu.edu/~mirowski/)
72. [Quoc V. Le](http://ai.stanford.edu/~quocle/)
73. [Reinhold Scherer](http://bci.tugraz.at/scherer/)
74. [Richard Socher](http://www.socher.org/)
75. [Rob Fergus](http://cs.nyu.edu/~fergus/pmwiki/pmwiki.php)
76. [Robert Coop](http://mil.engr.utk.edu/nmil/member/19.html)
77. [Robert Gens](http://homes.cs.washington.edu/~rcg/)
78. 
[Roger Grosse](http://people.csail.mit.edu/rgrosse/)
79. [Ronan Collobert](http://ronan.collobert.com/)
80. [Ruslan Salakhutdinov](http://www.utstat.toronto.edu/~rsalakhu/)
81. [Sebastian Gerwinn](http://www.kyb.tuebingen.mpg.de/nc/employee/details/sgerwinn.html)
82. [Stéphane Mallat](http://www.cmap.polytechnique.fr/~mallat/)
83. [Sven Behnke](http://www.ais.uni-bonn.de/behnke/)
84. [Tapani Raiko](http://users.ics.aalto.fi/praiko/)
85. [Tara Sainath](https://sites.google.com/site/tsainath/)
86. [Tijmen Tieleman](http://www.cs.toronto.edu/~tijmen/)
87. [Tom Karnowski](http://mil.engr.utk.edu/nmil/member/36.html)
88. [Tomáš Mikolov](https://research.facebook.com/tomas-mikolov)
89. [Ueli Meier](http://www.idsia.ch/~meier/)
90. [Vincent Vanhoucke](http://vincent.vanhoucke.com)
91. [Volodymyr Mnih](http://www.cs.toronto.edu/~vmnih/)
92. [Yann LeCun](http://yann.lecun.com/)
93. [Yichuan Tang](http://www.cs.toronto.edu/~tang/)
94. [Yoshua Bengio](http://www.iro.umontreal.ca/~bengioy/yoshua_en/index.html)
95. [Yotaro Kubo](http://yota.ro/)
96. [Youzhi (Will) Zou](http://ai.stanford.edu/~wzou)
97. [Fei-Fei Li](http://vision.stanford.edu/feifeili)
98. [Ian Goodfellow](https://research.google.com/pubs/105214.html)
99. [Robert Laganière](http://www.site.uottawa.ca/~laganier/)
100. [Merve Ayyüce Kızrak](http://www.ayyucekizrak.com/)

### Websites

1.  [deeplearning.net](http://deeplearning.net/)
2.  
[deeplearning.stanford.edu](http://deeplearning.stanford.edu/)
3.  [nlp.stanford.edu](http://nlp.stanford.edu/)
4.  [ai-junkie.com](http://www.ai-junkie.com/ann/evolved/nnt1.html)
5.  [cs.brown.edu/research/ai](http://cs.brown.edu/research/ai/)
6.  [eecs.umich.edu/ai](http://www.eecs.umich.edu/ai/)
7.  [cs.utexas.edu/users/ai-lab](http://www.cs.utexas.edu/users/ai-lab/)
8.  [cs.washington.edu/research/ai](http://www.cs.washington.edu/research/ai/)
9.  [aiai.ed.ac.uk](http://www.aiai.ed.ac.uk/)
10.  [www-aig.jpl.nasa.gov](http://www-aig.jpl.nasa.gov/)
11.  [csail.mit.edu](http://www.csail.mit.edu/)
12.  [cgi.cse.unsw.edu.au/~aishare](http://cgi.cse.unsw.edu.au/~aishare/)
13.  [cs.rochester.edu/research/ai](http://www.cs.rochester.edu/research/ai/)
14.  [ai.sri.com](http://www.ai.sri.com/)
15.  [isi.edu/AI/isd.htm](http://www.isi.edu/AI/isd.htm)
16.  [nrl.navy.mil/itd/aic](http://www.nrl.navy.mil/itd/aic/)
17.  [hips.seas.harvard.edu](http://hips.seas.harvard.edu/)
18.  [AI Weekly](http://aiweekly.co)
19.  [stat.ucla.edu](http://statistics.ucla.edu/)
20.  [deeplearning.cs.toronto.edu](http://deeplearning.cs.toronto.edu/i2t)
21.  [jeffdonahue.com/lrcn/](http://jeffdonahue.com/lrcn/)
22.  [visualqa.org](http://www.visualqa.org/)
23.  [www.mpi-inf.mpg.de/departments/computer-vision...](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/)
24.  [Deep Learning News](http://news.startup.ml/)
25.  
[Machine Learning is Fun! Adam Geitgey's Blog](https://medium.com/@ageitgey/)
26.  [Guide to Machine Learning](http://yerevann.com/a-guide-to-deep-learning/)
27.  [Deep Learning for Beginners](https://spandan-madan.github.io/DeepLearningProject/)
28.  [Machine Learning Mastery blog](https://machinelearningmastery.com/blog/)
29.  [ML Compiled](https://ml-compiled.readthedocs.io/en/latest/)
30.  [Programming Community Curated Resources](https://hackr.io/tutorials/learn-artificial-intelligence-ai)
31.  [A Beginner's Guide To Understanding Convolutional Neural Networks](https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/)
32.  [ahmedbesbes.com](http://ahmedbesbes.com)
33.  [amitness.com](https://amitness.com/)
34.  [AI Summer](https://theaisummer.com/)
35.  [AI Hub - supported by AAAI and NeurIPS](https://aihub.org/)
36.  [CatalyzeX: Machine Learning Hub for Builders and Makers](https://www.catalyzeX.com)
37.  [The Epic Code](https://theepiccode.com/)
38.  [all AI news](https://allainews.com/)

### Datasets

1.  [MNIST](http://yann.lecun.com/exdb/mnist/) Handwritten digits
2.  [Google House Numbers](http://ufldl.stanford.edu/housenumbers/) from street view
3.  [CIFAR-10 and CIFAR-100](http://www.cs.toronto.edu/~kriz/cifar.html)
4.  [IMAGENET](http://www.image-net.org/)
5.  [Tiny Images](http://groups.csail.mit.edu/vision/TinyImages/) 80 million tiny images
6.  [Flickr Data](https://yahooresearch.tumblr.com/post/89783581601/one-hundred-million-creative-commons-flickr-images) 100 Million Yahoo dataset
7.  [Berkeley Segmentation Dataset 500](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/)
8.  [UC Irvine Machine Learning Repository](http://archive.ics.uci.edu/ml/)
9.  
[Flickr 8k](http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html)
10. [Flickr 30k](http://shannon.cs.illinois.edu/DenotationGraph/)
11. [Microsoft COCO](http://mscoco.org/home/)
12. [VQA](http://www.visualqa.org/)
13. [Image QA](http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa/)
14. [AT&T Laboratories Cambridge face database](http://www.uk.research.att.com/facedatabase.html)
15. [AVHRR Pathfinder](http://xtreme.gsfc.nasa.gov)
16. [Air Freight](http://www.anc.ed.ac.uk/~amos/afreightdata.html) - The Air Freight data set is a ray-traced image sequence along with ground-truth segmentation based on textural characteristics. (455 images + GT, each 160x120 pixels) (Formats: PNG)
17. [Amsterdam Library of Object Images](http://www.science.uva.nl/~aloi/) - ALOI is a color image collection of one thousand small objects, recorded for scientific purposes. To capture the sensory variation in object recordings, the viewing angle, illumination angle, and illumination color were systematically varied for each object, and wide-baseline stereo images were additionally captured. Over a hundred images of each object were recorded, yielding a total of 110,250 images for the collection. (Formats: png)
18. [Annotated face, hand, cardiac & meat images](http://www.imm.dtu.dk/~aam/) - Most images and annotations are supplemented by various ASM/AAM analyses using the AAM-API. (Formats: bmp, asf)
19. [Image Analysis and Computer Graphics](http://www.imm.dtu.dk/image/)
21. [Brown University Stimuli](http://www.cog.brown.edu/~tarr/stimuli.html) - A variety of datasets including geometric objects, everyday objects, and "greebles". Good for testing recognition algorithms. (Formats: pict)
22. [CAVIAR video sequences of mall and public space behavior](http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/) - 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification. (Formats: MPEG2 & JPEG)
23. [Machine Vision Unit](http://www.ipab.inf.ed.ac.uk/mvu/)
25. [CCITT Fax standard images](http://www.cs.waikato.ac.nz/~singlis/ccitt.html) - 8 images (Formats: gif)
26. [CMU CIL's Stereo Data with Ground Truth](cil-ster.html) - 3 sets of 11 images, including color tiff images with spectroradiometry. (Formats: gif, tiff)
27. [CMU PIE Database](http://www.ri.cmu.edu/projects/project_418.html) - A database of 41,368 face images of 68 people captured under 13 poses, 43 illumination conditions, and with 4 different expressions.
28. [CMU VASC Image Database](http://www.ius.cs.cmu.edu/idb/) - Images, sequences, stereo pairs (thousands of images) (Formats: Sun Rasterimage)
29. 
[Caltech Image Database](http://www.vision.caltech.edu/html-files/archive.html) - about 20 images, mostly top-down views of small objects and toys. (Formats: GIF)
30. [Columbia-Utrecht Reflectance and Texture Database](http://www.cs.columbia.edu/CAVE/curet/) - Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. (Formats: bmp)
31. [Computational Colour Constancy Data](http://www.cs.sfu.ca/~colour/data/index.html) - A dataset oriented towards computational color constancy, but useful for computer vision in general. It includes synthetic data, camera sensor data, and over 700 images. (Formats: tiff)
32. [Computational Vision Lab](http://www.cs.sfu.ca/~colour/)
34. [Content-based image retrieval database](http://www.cs.washington.edu/research/imagedatabase/groundtruth/) - 11 sets of color images for testing content-based retrieval algorithms. Most sets have a description file listing the names of the objects in each image. (Formats: jpg)
35. [Efficient Content-based Retrieval Group](http://www.cs.washington.edu/research/imagedatabase/)
37. [Densely Sampled View Spheres](http://ls7-www.cs.uni-dortmund.de/~peters/pages/research/modeladaptsys/modeladaptsys_vba_rov.html) - Densely sampled view spheres - the upper half of the view sphere of two toy objects, with 2500 images each. (Formats: tiff)
38. [Computer Science VII (Graphical Systems)](http://ls7-www.cs.uni-dortmund.de/)
40. [Digital Embryos](https://web-beta.archive.org/web/20011216051535/vision.psych.umn.edu/www/kersten-lab/demos/digitalembryo.html) - Digital embryos are novel objects which may be used to develop and test object recognition systems. They have an organic appearance. (Formats: various formats are available on request)
41. [University of Minnesota Vision Lab](http://vision.psych.umn.edu/users/kersten//kersten-lab/kersten-lab.html)
42. [El Salvador Atlas of Gastrointestinal VideoEndoscopy](http://www.gastrointestinalatlas.com) - Images and videos from gastrointestinal video endoscopy studies. (Formats: jpg, mpg, gif)
43. [FG-NET Facial Aging Database](http://sting.cycollege.ac.cy/~alanitis/fgnetaging/index.htm) - The database contains 1002 face images showing subjects at different ages. (Formats: jpg)
44. [FVC2000 Fingerprint Databases](http://bias.csr.unibo.it/fvc2000/) - FVC2000 was the first international competition for fingerprint verification algorithms. Four fingerprint databases constitute the FVC2000 benchmark (3520 fingerprints in all).
45. [Biometric Systems Lab](http://biolab.csr.unibo.it/home.asp) - University of Bologna
46. [Face and Gesture images and image sequences](http://www.fg-net.org) - Several image datasets of faces and gestures that are ground-truth annotated for benchmarking
47. 
[德国指拼法数据库](http:\u002F\u002Fwww-i6.informatik.rwth-aachen.de\u002F~dreuw\u002Fdatabase.html) - 该数据库包含35种手势，由1400个图像序列组成，记录了20位不同人士在非均匀日光照明条件下的手势动作。（格式：mpg,jpg）\n48. [语言处理与模式识别](http:\u002F\u002Fwww-i6.informatik.rwth-aachen.de\u002F)\n50. [格罗宁根自然图像数据库](http:\u002F\u002Fhlab.phys.rug.nl\u002Farchive.html) - 4000多张1536x1024（16位）校准过的户外图像（格式：自制）\n51. [ICG 测试台序列](http:\u002F\u002Fwww.icg.tu-graz.ac.at\u002F~schindler\u002FData) - 两个不同高度的转盘序列，每组36张图像，分辨率为1000x750，彩色（格式：PPM）\n52. [计算机图形与视觉研究所](http:\u002F\u002Fwww.icg.tu-graz.ac.at)\n54. [IEN 图像库](http:\u002F\u002Fwww.ien.it\u002Fis\u002Fvislib\u002F) - 1000多张图像，多为户外序列（格式：raw, ppm）\n55. [INRIA 的 Syntim 图像数据库](http:\u002F\u002Fwww-rocq.inria.fr\u002F~tarel\u002Fsyntim\u002Fimages.html) - 15张简单物体的彩色图像（格式：gif）\n56. [INRIA](http:\u002F\u002Fwww.inria.fr\u002F)\n57. [INRIA 的 Syntim 立体数据库](http:\u002F\u002Fwww-rocq.inria.fr\u002F~tarel\u002Fsyntim\u002Fpaires.html) - 34对校准过的彩色立体图像（格式：gif）\n58. [图像分析实验室](http:\u002F\u002Fwww.ece.ncsu.edu\u002Fimaging\u002FArchives\u002FImageDataBase\u002Findex.html) - 从多种成像方式获取的图像——原始CFA图像、距离图像以及大量“医学图像”。（格式：自制）\n59. [图像分析实验室](http:\u002F\u002Fwww.ece.ncsu.edu\u002Fimaging)\n61. [图像数据库](http:\u002F\u002Fwww.prip.tuwien.ac.at\u002Fprip\u002Fimage.html) - 一个包含部分纹理的图像数据库\n62. [JAFFE 面部表情图像数据库](http:\u002F\u002Fwww.mis.atr.co.jp\u002F~mlyons\u002Fjaffe.html) - JAFFE数据库包含213张日本女性受试者的照片，展示了6种基本面部表情以及中性表情。此外，还免费提供了用于研究的情感形容词评分。（格式：灰度TIFF图像）\n63. [ATR 研究所，京都，日本](http:\u002F\u002Fwww.mic.atr.co.jp\u002F)\n64. [JISCT 立体评估](ftp:\u002F\u002Fftp.vislist.com\u002FIMAGERY\u002FJISCT\u002F) - 44对图像。这些数据曾被用于立体分析评估，正如1993年4月ARPA图像理解研讨会论文《JISCT立体评估》中所述，作者为R.C.Bolles、H.H.Baker和M.J.Hannah，第263–274页（格式：SSI）\n65. [MIT 视觉纹理](https:\u002F\u002Fvismod.media.mit.edu\u002Fvismod\u002Fimagery\u002FVisionTexture\u002Fvistex.html) - 图像档案（100多张图像）（格式：ppm）\n66. [MIT 人脸图像等](ftp:\u002F\u002Fwhitechapel.media.mit.edu\u002Fpub\u002Fimages) - 数百张图像（格式：自制）\n67. 
[机器视觉](http:\u002F\u002Fvision.cse.psu.edu\u002Fbook\u002Ftestbed\u002Fimages\u002F) - 来自Jain、Kasturi、Schunck教科书中的图像（20多张图像）（格式：GIF TIFF）\n68. [乳腺X线图像数据库](http:\u002F\u002Fmarathon.csee.usf.edu\u002FMammography\u002FDatabase.html) - 100多张带有真值的乳腺X线图像。还可根据请求提供其他图像，并附有多个其他乳腺X线数据库的链接。（格式：自制）\n69. [ftp:\u002F\u002Fftp.cps.msu.edu\u002Fpub\u002Fprip](ftp:\u002F\u002Fftp.cps.msu.edu\u002Fpub\u002Fprip) - 许多图像（格式：未知）\n70. [米德尔伯里立体数据集及真值](http:\u002F\u002Fwww.middlebury.edu\u002Fstereo\u002Fdata.html) - 六个包含平面区域的多帧立体数据集。每个数据集包含9张彩色图像以及亚像素精度的真值数据。（格式：ppm）\n71. [米德尔伯里立体视觉研究页面](http:\u002F\u002Fwww.middlebury.edu\u002Fstereo) - 米德尔伯里学院\n72. [Modis 机载模拟器、画廊和数据集](http:\u002F\u002Fltpwww.gsfc.nasa.gov\u002FMODIS\u002FMAS\u002F) - 来自世界各地的高空影像，用于支持NASA EOS计划的环境建模（格式：JPG和HDF）\n73. [NIST 指纹和手写数据](ftp:\u002F\u002Fsequoyah.ncsl.nist.gov\u002Fpub\u002Fdatabases\u002Fdata) - 数据集，包含数千张图像（格式：未知）\n74. [NIST 指纹数据](ftp:\u002F\u002Fftp.cs.columbia.edu\u002Fjpeg\u002Fother\u002Fuuencoded) - 压缩后的多部分uu编码tar文件\n75. [NLM 可视化人体项目](http:\u002F\u002Fwww.nlm.nih.gov\u002Fresearch\u002Fvisible\u002Fvisible_human.html) - 彩色、CAT和MRI图像样本——超过30张图像（格式：jpeg）\n76. [国家设计资源库](http:\u002F\u002Fwww.designrepository.org) - 超过55,000个以机械\u002F加工工程设计为主的3D CAD和实体模型。（格式：gif,vrml,wrl,stp,sat）\n77. [几何与智能计算实验室](http:\u002F\u002Fgicl.mcs.drexel.edu)\n79. [OSU (MSU) 3D 物体模型数据库](http:\u002F\u002Feewww.eng.ohio-state.edu\u002F~flynn\u002F3DDB\u002FModels\u002F) - 多年来收集的几组3D物体模型，用于物体识别研究（格式：自制、vrml）\n80. [OSU (MSU\u002FWSU) 距离图像数据库](http:\u002F\u002Feewww.eng.ohio-state.edu\u002F~flynn\u002F3DDB\u002FRID\u002F) - 数百张真实和合成图像（格式：gif、自制）\n81. [OSU\u002FSAMPL 数据库：距离图像、3D模型、静止图像和运动序列](http:\u002F\u002Fsampl.eng.ohio-state.edu\u002F~sampl\u002Fdatabase.htm) - 超过1000张距离图像、3D物体模型、静止图像和运动序列（格式：gif、ppm、vrml、自制）\n82. [信号分析和机器感知实验室](http:\u002F\u002Fsampl.eng.ohio-state.edu)\n84. 
[奥塔哥光学流评估序列](http:\u002F\u002Fwww.cs.otago.ac.nz\u002Fresearch\u002Fvision\u002FResearch\u002FOpticalFlow\u002Fopticalflow.html) - 合成和真实的序列，配有机器可读的光学流真值场，以及生成新序列真值的工具。（格式：ppm,tif、自制）\n85. [视觉研究小组](http:\u002F\u002Fwww.cs.otago.ac.nz\u002Fresearch\u002Fvision\u002Findex.html)\n87. [ftp:\u002F\u002Fftp.limsi.fr\u002Fpub\u002Fquenot\u002Fopflow\u002Ftestdata\u002Fpiv\u002F](ftp:\u002F\u002Fftp.limsi.fr\u002Fpub\u002Fquenot\u002Fopflow\u002Ftestdata\u002Fpiv\u002F) - 用于测试粒子图像测速仪应用的真实和合成图像序列。这些图像也可用于光学流和图像匹配算法的测试。（格式：pgm（原始））\n88. [LIMSI-CNRS\u002FCHM\u002FIMM\u002F视觉](http:\u002F\u002Fwww.limsi.fr\u002FRecherche\u002FIMM\u002FPageIMM.html)\n89. [LIMSI-CNRS](http:\u002F\u002Fwww.limsi.fr\u002F)\n90. [光度法三维表面纹理数据库](http:\u002F\u002Fwww.taurusstudio.net\u002Fresearch\u002Fpmtexdb\u002Findex.htm) - 这是首个同时提供完整真实表面旋转和注册光度立体数据的三维纹理数据库（30种纹理，1680张图像）。（格式：TIFF）\n91. [用于光学流分析的序列（SOFA）](http:\u002F\u002Fwww.cee.hw.ac.uk\u002F~mtc\u002Fsofa) - 9个专为测试运动分析应用而设计的合成序列，包含完整的运动和相机参数真值。（格式：gif）\n92. [计算机视觉小组](http:\u002F\u002Fwww.cee.hw.ac.uk\u002F~mtc\u002Fresearch.html)\n94. [用于基于流重建的序列](http:\u002F\u002Fwww.nada.kth.se\u002F~zucch\u002FCAMERA\u002FPUB\u002Fseq.html) - 用于测试基于运动结构算法的合成序列（格式：pgm）\n95. [带有真值差异和遮挡信息的立体图像](http:\u002F\u002Fwww-dbv.cs.uni-bonn.de\u002Fstereo_data\u002F) - 一小批走廊场景的合成图像，添加了不同水平的噪声。使用这些图像来评估你的立体算法。（格式：原始、viff（khoros）或tiff）\n96. [斯图加特距离图像数据库](http:\u002F\u002Frange.informatik.uni-stuttgart.de) - 一系列从网络上可用的高分辨率多边形模型中提取的合成距离图像（格式：自制）\n97. [图像理解部门](http:\u002F\u002Fwww.informatik.uni-stuttgart.de\u002Fipvr\u002Fbv\u002Fbv_home_engl.html)\n99. [AR 人脸数据库](http:\u002F\u002Fwww2.ece.ohio-state.edu\u002F~aleix\u002FARdatabase.html) - 包含超过4,000张对应于126个人脸的彩色图像（70名男性和56名女性）。正面视图，伴有不同表情、光照和遮挡情况。（格式：RAW（RGB 24位））\n100. [普渡大学机器人视觉实验室](http:\u002F\u002Frvl.www.ecn.purdue.edu\u002FRVL\u002F)\n101. [MIT-CSAIL 物体和场景数据库](http:\u002F\u002Fweb.mit.edu\u002Ftorralba\u002Fwww\u002Fdatabase.html) - 用于测试多类物体检测和场景识别算法的数据库。超过72,000张图像，其中2,873帧已标注。超过50个已标注的对象类别。（格式：jpg）\n102. 
[RVL SPEC-DB（光泽度数据库）](http:\u002F\u002Frvl1.ecn.purdue.edu\u002FRVL\u002Fspecularity_database\u002F) - 收集了300多张100个物体在三种不同光照条件下拍摄的真实图像（漫射\u002F环境\u002F定向）。-- 使用这些图像来测试检测和补偿彩色图像中高光效果的算法。（格式：TIFF）\n103. [机器人视觉实验室](http:\u002F\u002Frvl1.ecn.purdue.edu\u002FRVL\u002F)\n105. [XM2VTS 数据库](http:\u002F\u002Fxm2vtsdb.ee.surrey.ac.uk) - XM2VTSDB包含四次为期四个月的295人的数字录制。该数据库同时包含面部的图像和视频数据。\n106. [视觉、语音和信号处理中心](http:\u002F\u002Fwww.ee.surrey.ac.uk\u002FResearch\u002FCVSSP)\n107. [交通图像序列和‘大理石块’序列](http:\u002F\u002Fi21www.ira.uka.de\u002Fimage_sequences) - 数千帧数字化的交通图像序列以及‘大理石块’序列（灰度图像）（格式：GIF）\n108. [IAKS\u002FKOGS](http:\u002F\u002Fi21www.ira.uka.de)\n110. [伯尔尼大学人脸图像](ftp:\u002F\u002Fftp.iam.unibe.ch\u002Fpub\u002FImages\u002FFaceImages) - 数百张图像（格式：Sun栅格文件）\n111. [密歇根大学纹理](ftp:\u002F\u002Ffreebie.engin.umich.edu\u002Fpub\u002Fmisc\u002Ftextures)（格式：压缩的原始数据）\n112. [奥卢大学木材和节疤数据库](http:\u002F\u002Fwww.ee.oulu.fi\u002F~olli\u002FProjects\u002FLumber.Grading.html) - 包括分类信息——1000多张彩色图像（格式：ppm）\n113. [UCID - 未压缩彩色图像数据库](http:\u002F\u002Fvision.doc.ntu.ac.uk\u002Fdatasets\u002FUCID\u002Fucid.html) - 一个用于图像检索的基准数据库，具有预定义的真值。（格式：tiff）\n115. [马萨诸塞大学视觉图像档案](http:\u002F\u002Fvis-www.cs.umass.edu\u002F~vislib\u002F) - 大型图像数据库，包含航拍、太空、立体、医学等图像。（格式：自制）\n116. [UNC 的3D图像数据库](ftp:\u002F\u002Fsunsite.unc.edu\u002Fpub\u002Facademic\u002Fcomputer-science\u002Fvirtual-reality\u002F3d) - 许多图像（格式：GIF）\n117. [USF 距离图像数据及分割真值](http:\u002F\u002Fmarathon.csee.usf.edu\u002Frange\u002Fseg-comp\u002FSegComp.html) - 80组图像（格式：Sun栅格图像）\n118. [奥卢大学基于物理的脸部数据库](http:\u002F\u002Fwww.ee.oulu.fi\u002Fresearch\u002Fimag\u002Fcolor\u002Fpbfd.html) - 包含在不同光源和相机校准条件下拍摄的彩色脸部图像，以及每个人的皮肤光谱反射率测量。\n119. [机器视觉与媒体处理单元](http:\u002F\u002Fwww.ee.oulu.fi\u002Fmvmp\u002F)\n121. [奥卢大学纹理数据库](http:\u002F\u002Fwww.outex.oulu.fi) - 一个包含320种表面纹理的数据库，每种纹理都在三种光源、六种空间分辨率和九种旋转角度下采集。还提供了一套测试套件，以便以标准化方式测试纹理分割、分类和检索算法。（格式：bmp, ras, xv）\n122. [机器视觉小组](http:\u002F\u002Fwww.ee.oulu.fi\u002Fmvg)\n124. 
[Usenix 脸部数据库](ftp:\u002F\u002Fftp.uu.net\u002Fpublished\u002Fusenix\u002Ffaces) - 来自多个不同来源的数千张脸部图像（约994张）\n125. [视球数据库](http:\u002F\u002Fwww-prima.inrialpes.fr\u002FPrima\u002Fhall\u002Fview_sphere.html) - 8个物体从多个不同视角拍摄的图像。视球采用测地线方法采样，每球172张图像。提供两组用于训练和测试的数据。（格式：ppm）\n126. [PRIMA，GRAVIR](http:\u002F\u002Fwww-prima.inrialpes.fr\u002FPrima\u002F)\n127. [Vision-list 图像档案](ftp:\u002F\u002Fftp.vislist.com\u002FIMAGERY\u002F) - 许多图像，多种格式\n128. [Wiry 物体识别数据库](http:\u002F\u002Fwww.cs.cmu.edu\u002F~owenc\u002Fword.htm) - 数千张购物车、梯子、凳子、自行车、椅子以及杂乱场景的图像，附有边缘和区域的真值标签。（格式：jpg）\n129. [3D视觉小组](http:\u002F\u002Fwww.cs.cmu.edu\u002F0.000000E+003dvision\u002F)\n131. [耶鲁人脸数据库](http:\u002F\u002Fcvc.yale.edu\u002Fprojects\u002Fyalefaces\u002Fyalefaces.html) - 165张图像（15位个体），具有不同的光照、表情和遮挡配置。\n132. [耶鲁人脸数据库B](http:\u002F\u002Fcvc.yale.edu\u002Fprojects\u002FyalefacesB\u002FyalefacesB.html) - 5760张单光源图像，每张图像代表10位受试者，在576种观看条件下拍摄（9种姿势×64种光照条件）。（格式：PGM）\n133. [计算视觉与控制中心](http:\u002F\u002Fcvc.yale.edu\u002F)\n134. [DeepMind QA 语料库](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frc-data) - 来自CNN和DailyMail的文本问答语料库。总共超过30万份文档。[论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.03340)可供参考。\n135. [YouTube-8M 数据集](https:\u002F\u002Fresearch.google.com\u002Fyoutube8m\u002F) - YouTube-8M是一个大规模的标注视频数据集，包含800万个YouTube视频ID及其关联的标签，涵盖4800种视觉实体的多样化词汇。\n136. [Open Images 数据集](https:\u002F\u002Fgithub.com\u002Fopenimages\u002Fdataset) - Open Images是一个包含约900万个已标注图像URL的数据集，标签覆盖超过6000个类别。\n137. [2012年视觉目标类别挑战赛（VOC2012）](http:\u002F\u002Fhost.robots.ox.ac.uk\u002Fpascal\u002FVOC\u002Fvoc2012\u002Findex.html#devkit) - VOC2012数据集包含12,000张图像，标注了20个用于物体检测和分割的类别。\n138. [Fashion-MNIST](https:\u002F\u002Fgithub.com\u002Fzalandoresearch\u002Ffashion-mnist) - 类似MNIST的时尚产品数据集，包含60,000个训练样本和10,000个测试样本。每个样本都是28x28的灰度图像，与10个类别的标签相关联。\n139. [大规模时尚（DeepFashion）数据库](http:\u002F\u002Fmmlab.ie.cuhk.edu.hk\u002Fprojects\u002FDeepFashion.html) - 包含超过80万张多样化的时尚图像。该数据集中每张图像都标注了50个类别、1,000个描述性属性、边界框以及服装关键点。\n140. 
[FakeNewsCorpus](https:\u002F\u002Fgithub.com\u002Fseveral27\u002FFakeNewsCorpus) - 包含约1000万篇新闻文章，按照[opensources.co](http:\u002F\u002Fopensources.co)的类型进行分类。\n141. [LLVIP](https:\u002F\u002Fgithub.com\u002Fbupt-ai-cz\u002FLLVIP) - 15,488张可见光-红外配对图像（共30,976张），用于低光视觉研究，[项目页面](https:\u002F\u002Fbupt-ai-cz.github.io\u002FLLVIP\u002F)。\n142. [MSDA](https:\u002F\u002Fgithub.com\u002Fbupt-ai-cz\u002FMeta-SelfLearning) - 超过500万张来自5个不同领域的图像，用于多源OCR\u002F文本识别DA研究，[项目页面](https:\u002F\u002Fbupt-ai-cz.github.io\u002FMeta-SelfLearning\u002F)。\n143. [SANAD：用于自动文本分类的单标签阿拉伯语新闻文章数据集](https:\u002F\u002Fdata.mendeley.com\u002Fdatasets\u002F57zpx667y9\u002F2) - SANAD数据集是一个大型的阿拉伯语新闻文章集合，可用于多种阿拉伯语NLP任务，如文本分类和词嵌入。这些文章是通过专门编写的Python脚本从三个热门新闻网站——AlKhaleej、AlArabiya和Akhbarona——收集而来的。\n144. [Referit3D](https:\u002F\u002Freferit3d.github.io) - 两个大规模且互补的视觉与语言数据集（即Nr3D和Sr3D），用于在ScanNet场景中识别细粒度的3D物体。Nr3D包含41.5万条自然、自由形式的语句，而Sr3D则包含83.5万条基于模板的语句。\n145. [SQuAD](https:\u002F\u002Frajpurkar.github.io\u002FSQuAD-explorer\u002F) - 斯坦福大学发布了约10万对英语问答以及约5万道无法回答的问题。\n146. [FQuAD](https:\u002F\u002Ffquad.illuin.tech\u002F) - Illuin Technology发布了约2.5万对法语问答。\n147. [GermanQuAD和GermanDPR](https:\u002F\u002Fwww.deepset.ai\u002Fgermanquad) - deepset发布了约1.4万对德语问答。\n148. [SberQuAD](https:\u002F\u002Fgithub.com\u002Fannnyway\u002FQA-for-Russian) - Sberbank发布了约9万对俄语问答。\n149. [ArtEmis](http:\u002F\u002Fartemisdataset.org\u002F) - 包含45万条情感反应的注释以及针对8万件WikiArt作品的语言解释。\n\n### 会议\n\n1. [CVPR - IEEE计算机视觉与模式识别会议](http:\u002F\u002Fcvpr2018.thecvf.com)\n2. [AAMAS - 自主代理与多智能体系统国际联合会议](http:\u002F\u002Fcelweb.vuse.vanderbilt.edu\u002Faamas18\u002F)\n3. [IJCAI - 人工智能国际联合会议](https:\u002F\u002Fwww.ijcai-18.org\u002F)\n4. [ICML - 机器学习国际会议](https:\u002F\u002Ficml.cc)\n5. [ECML - 欧洲机器学习会议](http:\u002F\u002Fwww.ecmlpkdd2018.org)\n6. [KDD - 知识发现与数据挖掘](http:\u002F\u002Fwww.kdd.org\u002Fkdd2018\u002F)\n7. [NIPS - 神经信息处理系统大会](https:\u002F\u002Fnips.cc\u002FConferences\u002F2018)\n8. 
[O'Reilly AI会议 - O'Reilly人工智能会议](https:\u002F\u002Fconferences.oreilly.com\u002Fartificial-intelligence\u002Fai-ny)\n9. [ICDM - 数据挖掘国际会议](https:\u002F\u002Fwww.waset.org\u002Fconference\u002F2018\u002F07\u002Fistanbul\u002FICDM)\n10. [ICCV - 计算机视觉国际会议](http:\u002F\u002Ficcv2017.thecvf.com)\n11. [AAAI - 人工智能促进协会](https:\u002F\u002Fwww.aaai.org)\n12. [MAIS - 蒙特利尔人工智能研讨会](https:\u002F\u002Fmontrealaisymposium.wordpress.com\u002F)\n\n### 框架\n\n1.  [Caffe](http:\u002F\u002Fcaffe.berkeleyvision.org\u002F)  \n2.  [Torch7](http:\u002F\u002Ftorch.ch\u002F)\n3.  [Theano](http:\u002F\u002Fdeeplearning.net\u002Fsoftware\u002Ftheano\u002F)\n4.  [cuda-convnet](https:\u002F\u002Fcode.google.com\u002Fp\u002Fcuda-convnet2\u002F)\n5.  [ConvNetJS](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fconvnetjs)\n5.  [Ccv](http:\u002F\u002Flibccv.org\u002Fdoc\u002Fdoc-convnet\u002F)\n6.  [NuPIC](http:\u002F\u002Fnumenta.org\u002Fnupic.html)\n7.  [DeepLearning4J](http:\u002F\u002Fdeeplearning4j.org\u002F)\n8.  [Brain](https:\u002F\u002Fgithub.com\u002Fharthur\u002Fbrain)\n9.  [DeepLearnToolbox](https:\u002F\u002Fgithub.com\u002Frasmusbergpalm\u002FDeepLearnToolbox)\n10.  [Deepnet](https:\u002F\u002Fgithub.com\u002Fnitishsrivastava\u002Fdeepnet)\n11.  [Deeppy](https:\u002F\u002Fgithub.com\u002Fandersbll\u002Fdeeppy)\n12.  [JavaNN](https:\u002F\u002Fgithub.com\u002Fivan-vasilev\u002Fneuralnetworks)\n13.  [hebel](https:\u002F\u002Fgithub.com\u002Fhannes-brt\u002Fhebel)\n14.  [Mocha.jl](https:\u002F\u002Fgithub.com\u002Fpluskid\u002FMocha.jl)\n15.  [OpenDL](https:\u002F\u002Fgithub.com\u002Fguoding83128\u002FOpenDL)\n16.  [cuDNN](https:\u002F\u002Fdeveloper.nvidia.com\u002FcuDNN)\n17.  [MGL](http:\u002F\u002Fmelisgl.github.io\u002Fmgl-pax-world\u002Fmgl-manual.html)\n18.  [Knet.jl](https:\u002F\u002Fgithub.com\u002Fdenizyuret\u002FKnet.jl)\n19.  [Nvidia DIGITS - 基于 Caffe 的 Web 应用程序](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FDIGITS)\n20.  
[Neon - 基于 Python 的深度学习框架](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fneon)\n21.  [Keras - 基于 Theano 的深度学习库](http:\u002F\u002Fkeras.io)\n22.  [Chainer - 用于深度学习的灵活神经网络框架](http:\u002F\u002Fchainer.org\u002F)\n23.  [RNNLM Toolkit](http:\u002F\u002Frnnlm.org\u002F)\n24.  [RNNLIB - 循环神经网络库](http:\u002F\u002Fsourceforge.net\u002Fp\u002Frnnl\u002Fwiki\u002FHome\u002F)\n25.  [char-rnn](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fchar-rnn)\n26.  [MatConvNet: 用于 MATLAB 的 CNN](https:\u002F\u002Fgithub.com\u002Fvlfeat\u002Fmatconvnet)\n27.  [Minerva - 一种快速且灵活的多 GPU 深度学习工具](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fminerva)\n28.  [Brainstorm - 快速、灵活且有趣的神经网络。](https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fbrainstorm)\n29.  [Tensorflow - 使用数据流图进行数值计算的开源软件库](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow)\n30.  [DMTK - 微软分布式机器学习工具包](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FDMTK)\n31.  [Scikit Flow - TensorFlow 的简化接口（模仿 Scikit Learn）](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fskflow)\n32.  [MXnet - 轻量级、可移植、灵活的分布式\u002F移动深度学习框架](https:\u002F\u002Fgithub.com\u002Fapache\u002Fincubator-mxnet)\n33.  [Veles - 三星分布式机器学习平台](https:\u002F\u002Fgithub.com\u002FSamsung\u002Fveles)\n34.  [Marvin - 一个极简的仅 GPU 的 N 维卷积神经网络框架](https:\u002F\u002Fgithub.com\u002FPrincetonVision\u002Fmarvin)\n35.  [Apache SINGA - 通用分布式深度学习平台](http:\u002F\u002Fsinga.incubator.apache.org\u002F)\n36.  [DSSTNE - 亚马逊用于构建深度学习模型的库](https:\u002F\u002Fgithub.com\u002Famznlabs\u002Famazon-dsstne)\n37.  [SyntaxNet - 谷歌句法分析器 - 一个依赖 TensorFlow 的库](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fsyntaxnet)\n38.  [mlpack - 一个可扩展的机器学习库](http:\u002F\u002Fmlpack.org\u002F)\n39.  [Torchnet - 基于 Torch 的深度学习库](https:\u002F\u002Fgithub.com\u002Ftorchnet\u002Ftorchnet)\n40.  [Paddle - 百度的并行分布式深度学习框架](https:\u002F\u002Fgithub.com\u002Fbaidu\u002Fpaddle)\n41.  [NeuPy - 基于 Theano 的用于人工神经网络和深度学习的 Python 库](http:\u002F\u002Fneupy.com)\n42.  
[Lasagne - 一个轻量级库，用于在 Theano 中构建和训练神经网络](https:\u002F\u002Fgithub.com\u002FLasagne\u002FLasagne)\n43.  [nolearn - 现有神经网络库的封装和抽象，尤其是 Lasagne](https:\u002F\u002Fgithub.com\u002Fdnouri\u002Fnolearn)\n44.  [Sonnet - 谷歌 DeepMind 开发的用于构建神经网络的库](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fsonnet)\n45.  [PyTorch - 在 Python 中使用张量和动态神经网络，并具有强大的 GPU 加速功能](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch)\n46.  [CNTK - 微软认知工具包](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FCNTK)\n47.  [Serpent.AI - 游戏智能体框架：将任何视频游戏用作深度学习的沙盒](https:\u002F\u002Fgithub.com\u002FSerpentAI\u002FSerpentAI)\n48.  [Caffe2 - 一个新的轻量级、模块化且可扩展的深度学习框架](https:\u002F\u002Fgithub.com\u002Fcaffe2\u002Fcaffe2)\n49.  [deeplearn.js - 用于 Web 的硬件加速深度学习和线性代数（NumPy）库](https:\u002F\u002Fgithub.com\u002FPAIR-code\u002Fdeeplearnjs)\n50.  [TVM - 面向 CPU、GPU 和专用加速器的端到端深度学习编译器栈](https:\u002F\u002Ftvm.ai\u002F)\n51.  [Coach - 英特尔® AI 实验室的强化学习教练](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fcoach)\n52.  [albumentations - 一个快速且与框架无关的图像增强库](https:\u002F\u002Fgithub.com\u002Falbu\u002Falbumentations)\n53.  [Neuraxle - 一个通用的 ML 管道框架](https:\u002F\u002Fgithub.com\u002FNeuraxio\u002FNeuraxle)\n54.  [Catalyst：用于 PyTorch DL 和 RL 研究的高级工具。它专注于可重复性、快速实验以及代码\u002F想法的复用](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n55.  [garage - 一个用于可重复强化学习研究的工具包](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fgarage)\n56.  [Detecto - 用 5-10 行代码训练和运行目标检测模型](https:\u002F\u002Fgithub.com\u002Falankbi\u002Fdetecto)\n57.  [Karate Club - 一个用于图结构数据的无监督机器学习库](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fkarateclub)\n58.  [Synapses - 一个轻量级的神经网络库，可在任何地方运行](https:\u002F\u002Fgithub.com\u002Fmrdimosthenis\u002FSynapses)\n59.  [TensorForce - 一个用于应用强化学习的 TensorFlow 库](https:\u002F\u002Fgithub.com\u002Freinforceio\u002Ftensorforce)\n60.  [Hopsworks - 一个面向 ML 和数据密集型 AI 的特征存储](https:\u002F\u002Fgithub.com\u002Flogicalclocks\u002Fhopsworks)\n61.  
[Feast - 由 Gojek\u002FGoogle 为 GCP 提供的 ML 特征存储](https:\u002F\u002Fgithub.com\u002Fgojek\u002Ffeast)\n62.  [PyTorch Geometric Temporal - 动态图上的表示学习](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fpytorch_geometric_temporal)\n63.  [lightly - 一个用于自监督学习的计算机视觉框架](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly)\n64.  [Trax — 清晰代码与速度兼备的深度学习](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Ftrax)\n65.  [Flax - 一个专为 JAX 设计的灵活神经网络生态系统](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fflax)\n66.  [QuickVision](https:\u002F\u002Fgithub.com\u002FQuick-AI\u002Fquickvision)\n67.  [Colossal-AI - 一个集成的大规模模型训练系统，配备高效的并行化技术](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI)\n68.  [haystack：一个开源的神经搜索框架](https:\u002F\u002Fhaystack.deepset.ai\u002Fdocs\u002Fintromd)\n69.  [Maze](https:\u002F\u002Fgithub.com\u002Fenlite-ai\u002Fmaze) - 一个面向应用的深度强化学习框架，解决现实世界的决策问题。\n70.  [InsNet - 一个用于构建实例相关 NLP 模型的神经网络库，支持无填充的动态批处理](https:\u002F\u002Fgithub.com\u002Fchncwang\u002FInsNet)\n\n### 工具\n\n1.  [Nebullvm](https:\u002F\u002Fgithub.com\u002Fnebuly-ai\u002Fnebullvm) - 易于使用的库，利用多种深度学习编译器加速深度学习推理。\n2.  [Netron](https:\u002F\u002Fgithub.com\u002Flutzroeder\u002Fnetron) - 深度学习和机器学习模型的可视化工具\n2.  [Jupyter Notebook](http:\u002F\u002Fjupyter.org) - 基于网页的交互式计算笔记本环境\n3.  [TensorBoard](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorboard) - TensorFlow 的可视化工具包\n4.  [Visual Studio Tools for AI](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fproject\u002Fvisual-studio-code-tools-ai\u002F) - 用于开发、调试和部署深度学习及人工智能解决方案\n5.  [TensorWatch](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftensorwatch) - 针对深度学习的调试与可视化工具\n6. [ML Workspace](https:\u002F\u002Fgithub.com\u002Fml-tooling\u002Fml-workspace) - 面向机器学习和数据科学的一体化Web IDE。\n7.  [dowel](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fdowel) - 一款用于机器学习研究的小型日志记录工具。只需调用一次 `logger.log()`，即可将任意对象记录到控制台、CSV文件、TensorBoard、文本日志文件等多种输出中。\n8.  [Neptune](https:\u002F\u002Fneptune.ai\u002F) - 轻量级的实验跟踪与结果可视化工具。\n9.  
[CatalyzeX](https:\u002F\u002Fchrome.google.com\u002Fwebstore\u002Fdetail\u002Fcode-finder-for-research\u002Faikkeehnlfpamidigaffhfmgbkdeheil) - 浏览器扩展程序（适用于[Chrome](https:\u002F\u002Fchrome.google.com\u002Fwebstore\u002Fdetail\u002Fcode-finder-for-research\u002Faikkeehnlfpamidigaffhfmgbkdeheil) 和 [Firefox](https:\u002F\u002Faddons.mozilla.org\u002Fen-US\u002Ffirefox\u002Faddon\u002Fcode-finder-catalyzex\u002F)），可自动查找并链接在线上任何地方（如Google、Twitter、Arxiv、Scholar等）发布的机器学习论文的代码实现。\n10. [Determined](https:\u002F\u002Fgithub.com\u002Fdetermined-ai\u002Fdetermined) - 深度学习训练平台，集成支持分布式训练、超参数调优、智能GPU调度、实验跟踪以及模型注册表等功能。\n11. [DAGsHub](https:\u002F\u002Fdagshub.com\u002F) - 开源机器学习社区平台——轻松管理实验、数据与模型，并创建协作式机器学习项目。\n12. [hub](https:\u002F\u002Fgithub.com\u002Factiveloopai\u002FHub) - activeloop.ai 提供的面向 TensorFlow\u002FPyTorch 的最快非结构化数据集管理工具。支持数据流式传输与版本控制。可在云端将大规模数据转换为类似 NumPy 的单一数组，从而在任何设备上均可访问。\n13. [DVC](https:\u002F\u002Fdvc.org\u002F) - DVC 的设计目标是使机器学习模型易于共享和复现。它专为处理大型文件、数据集、机器学习模型、指标以及代码而打造。\n14. [CML](https:\u002F\u002Fcml.dev\u002F) - CML 可帮助您将常用的 DevOps 工具引入机器学习工作流程。\n15. [MLEM](https:\u002F\u002Fmlem.ai\u002F) - MLEM 是一款用于轻松打包、部署和提供机器学习模型服务的工具。它无缝支持实时推理和批量处理等多种场景。\n16. [Maxim AI](https:\u002F\u002Fgetmaxim.ai) - 用于AI智能体仿真、评估与可观测性分析的工具。\n\n### 杂项\n\n1.  [Caffe 网络研讨会](http:\u002F\u002Fon-demand-gtc.gputechconf.com\u002Fgtcnew\u002Fon-demand-gtc.php?searchByKeyword=shelhamer&searchItems=&sessionTopic=&sessionEvent=4&sessionYear=2014&sessionFormat=&submit=&select=+)\n2.  [GitHub 上深度学习领域的 100 个最佳资源](http:\u002F\u002Fmeta-guide.com\u002Fsoftware-meta-guide\u002F100-best-github-deep-learning\u002F)\n3.  [Word2Vec](https:\u002F\u002Fcode.google.com\u002Fp\u002Fword2vec\u002F)\n4.  [Caffe Dockerfile](https:\u002F\u002Fgithub.com\u002Ftleyden\u002Fdocker\u002Ftree\u002Fmaster\u002Fcaffe)\n5.  [TorontoDeepLearning 卷积神经网络](https:\u002F\u002Fgithub.com\u002FTorontoDeepLearning\u002Fconvnet)\n6.  
[gfx.js](https:\u002F\u002Fgithub.com\u002Fclementfarabet\u002Fgfx.js)\n7.  [Torch7 备忘录](https:\u002F\u002Fgithub.com\u002Ftorch\u002Ftorch7\u002Fwiki\u002FCheatsheet)\n8. [麻省理工学院“高级自然语言处理”课程的相关资料](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-864-advanced-natural-language-processing-fall-2005\u002F)\n9. [麻省理工学院“机器学习”课程的相关资料](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-867-machine-learning-fall-2006\u002Flecture-notes\u002F)\n10. [麻省理工学院“用于学习的网络：回归与分类”课程的相关资料](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Fbrain-and-cognitive-sciences\u002F9-520-a-networks-for-learning-regression-and-classification-spring-2001\u002F)\n11. [麻省理工学院“神经编码与声音感知”课程的相关资料](http:\u002F\u002Focw.mit.edu\u002Fcourses\u002Fhealth-sciences-and-technology\u002Fhst-723j-neural-coding-and-perception-of-sound-spring-2005\u002Findex.htm)\n12. [在 Spark 上实现分布式深度学习网络](http:\u002F\u002Fwww.datasciencecentral.com\u002Fprofiles\u002Fblogs\u002Fimplementing-a-distributed-deep-learning-network-over-spark)\n13. [使用深度学习学会下棋的国际象棋 AI](https:\u002F\u002Fgithub.com\u002Ferikbern\u002Fdeep-pink)\n14. [复现 DeepMind 的论文《使用深度强化学习玩 Atari 游戏》的结果](https:\u002F\u002Fgithub.com\u002Fkristjankorjus\u002FReplicating-DeepMind)\n15. [Wiki2Vec。从 Wikipedia 数据库中获取实体和单词的 Word2vec 向量](https:\u002F\u002Fgithub.com\u002Fidio\u002Fwiki2vec)\n16. [DeepMind 论文中的原始代码 + 一些改进](https:\u002F\u002Fgithub.com\u002Fkuz\u002FDeepMind-Atari-Deep-Q-Learner)\n17. [Google deepdream - 神经网络艺术](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fdeepdream)\n18. [高效的批量 LSTM](https:\u002F\u002Fgist.github.com\u002Fkarpathy\u002F587454dc0146a6ae21fc)\n19. [设计用于生成古典音乐的循环神经网络](https:\u002F\u002Fgithub.com\u002Fhexahedria\u002Fbiaxial-rnn-music-composition)\n20. [Facebook 的记忆网络实现](https:\u002F\u002Fgithub.com\u002Ffacebook\u002FMemNN)\n21. [使用 Google 的 FaceNet 深度神经网络进行人脸识别](https:\u002F\u002Fgithub.com\u002Fcmusatyalab\u002Fopenface)\n22. 
[基础数字识别神经网络](https:\u002F\u002Fgithub.com\u002Fjoeledenberg\u002FDigitRecognition)\n23. [微软的情感识别 API 演示](https:\u002F\u002Fwww.projectoxford.ai\u002Fdemo\u002Femotion#detection)\n24. [在 TensorFlow 中加载 Caffe 模型的概念验证](https:\u002F\u002Fgithub.com\u002Fethereon\u002Fcaffe-tensorflow)\n25. [YOLO：实时目标检测](http:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F#webcam)\n26. [YOLO：使用 Python 的实用实现](https:\u002F\u002Fwww.analyticsvidhya.com\u002Fblog\u002F2018\u002F12\u002Fpractical-guide-object-detection-yolo-framewor-python\u002F)\n27. [AlphaGo - 复现 DeepMind 2016 年发表于 Nature 的论文《利用深度神经网络和树搜索掌握围棋》](https:\u002F\u002Fgithub.com\u002FRochester-NRT\u002FAlphaGo)\n28. [面向软件工程师的机器学习](https:\u002F\u002Fgithub.com\u002FZuzooVn\u002Fmachine-learning-for-software-engineers)\n29. [机器学习很有趣！](https:\u002F\u002Fmedium.com\u002F@ageitgey\u002Fmachine-learning-is-fun-80ea3ec3c471#.oa4rzez3g)\n30. [Siraj Raval 的深度学习教程](https:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCWN3xxRkmTPmbKwht9FuE5A)\n31. [Dockerface](https:\u002F\u002Fgithub.com\u002Fnatanielruiz\u002Fdockerface) - 在 Docker 容器中轻松安装和使用的深度学习 Faster R-CNN 人脸检测工具，适用于图像和视频。\n32. [超赞的深度学习音乐资源](https:\u002F\u002Fgithub.com\u002Fybayle\u002Fawesome-deep-learning-music) - 精选的关于将深度学习科学研究应用于音乐的文章列表\n33. [超赞的图嵌入资源](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fawesome-graph-embedding) - 精选的关于在图级别上对图结构数据进行深度学习科学研究的文章列表\n34. [超赞的网络嵌入资源](https:\u002F\u002Fgithub.com\u002Fchihming\u002Fawesome-network-embedding) - 精选的关于在节点级别上对图结构数据进行深度学习科学研究的文章列表\n35. [微软推荐系统](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FRecommenders) 包含构建推荐系统的示例、工具和最佳实践。提供了几种最先进的算法实现，可供自学并在自己的应用中定制。\n36. [循环神经网络的不可思议效果](http:\u002F\u002Fkarpathy.github.io\u002F2015\u002F05\u002F21\u002Frnn-effectiveness\u002F) - Andrej Karpathy 关于使用 RNN 生成文本的博客文章\n37. [梯子网络](https:\u002F\u002Fgithub.com\u002Fdivamgupta\u002Fladder_network_keras) - Keras 实现的半监督学习用梯子网络\n38. [toolbox：精选的机器学习库列表](https:\u002F\u002Fgithub.com\u002Famitness\u002Ftoolbox)\n39. 
[CNN 解释器](https:\u002F\u002Fpoloclub.github.io\u002Fcnn-explainer\u002F)\n40. [人工智能专家路线图](https:\u002F\u002Fgithub.com\u002FAMAI-GmbH\u002FAI-Expert-Roadmap) - 成为人工智能专家的路线图\n41. [超赞的药物相互作用、协同效应及多药联用预测资源](https:\u002F\u002Fgithub.com\u002FAstraZeneca\u002Fawesome-polipharmacy-side-effect-prediction\u002F)\n\n-----\n### 贡献\n您是否想到任何很棒的内容，认为适合加入此列表？欢迎随时提交 [pull request](https:\u002F\u002Fgithub.com\u002Fashara12\u002Fawesome-deeplearning\u002Fpulls)。\n\n-----\n## 许可证\n\n[![CC0](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChristosChristofidis_awesome-deep-learning_readme_b7657951a0bb.png)](http:\u002F\u002Fcreativecommons.org\u002Fpublicdomain\u002Fzero\u002F1.0\u002F)\n\n在法律允许的最大范围内，[Christos Christofidis](https:\u002F\u002Flinkedin.com\u002Fin\u002FChristofidis) 已放弃本作品的所有版权及相关或邻接权利。","# Awesome Deep Learning 快速上手指南\n\n`awesome-deep-learning` 并非一个可安装的软件库或框架，而是一个由社区维护的**深度学习资源精选列表**。它汇集了书籍、课程、论文、教程、数据集和框架等高质量学习材料。因此，本指南旨在指导开发者如何利用该列表构建学习环境并开始学习。\n\n## 环境准备\n\n由于该列表涵盖多种技术栈（如 TensorFlow, PyTorch, Keras, JAX 等），建议根据你选择的具体学习路径准备基础环境。以下是通用的推荐配置：\n\n*   **操作系统**: Windows 10\u002F11, macOS, 或 Linux (Ubuntu 20.04+ 推荐)\n*   **编程语言**: Python 3.8 - 3.10 (大多数深度学习资源基于此版本)\n*   **硬件要求**:\n    *   **CPU**: 多核处理器\n    *   **GPU**: 推荐使用 NVIDIA GPU (显存 8GB+) 以加速模型训练，需安装对应的 CUDA Toolkit 和 cuDNN。\n    *   *注：入门阶段可使用 Google Colab 或 Kaggle Kernels 免费使用云端 GPU，无需本地硬件。*\n*   **前置依赖**:\n    *   Git (用于克隆仓库或管理代码)\n    *   Package Manager (pip 或 conda)\n\n## 安装步骤\n\n你不需要“安装”这个列表本身，而是需要获取它并安装列表中推荐的工具。\n\n### 1. 获取资源列表\n通过 Git 克隆仓库到本地，以便随时查阅更新：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FChristosChristofidis\u002Fawesome-deep-learning.git\ncd awesome-deep-learning\n```\n\n*国内加速方案*: 如果访问 GitHub 较慢，可使用 Gitee 镜像（如有）或配置代理。\n\n### 2. 
搭建基础深度学习环境\n根据列表中热门的框架（如 PyTorch 或 TensorFlow），推荐使用 `conda` 创建隔离环境。\n\n**使用 Conda 创建环境 (推荐):**\n\n```bash\n# 创建名为 dl-env 的环境，指定 Python 版本\nconda create -n dl-env python=3.9 -y\n\n# 激活环境\nconda activate dl-env\n\n# 安装基础科学计算包\nconda install numpy pandas matplotlib jupyter -y\n```\n\n**安装主流框架 (二选一):**\n\n*   **选项 A: PyTorch (源自 Facebook AI)**\n    ```bash\n    # 访问 pytorch.org 获取最新命令，以下为 CPU 版本示例\n    pip install torch torchvision torchaudio\n    ```\n    *国内镜像加速*:\n    ```bash\n    pip install torch torchvision torchaudio -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n*   **选项 B: TensorFlow (源自 Google)**\n    ```bash\n    pip install tensorflow\n    ```\n    *国内镜像加速*:\n    ```bash\n    pip install tensorflow -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n`awesome-deep-learning` 的核心用法是作为**导航地图**。以下是利用该列表开始第一个深度学习项目的流程：\n\n### 1. 选择学习路径\n打开本地的 `README.md` 文件或在 GitHub 上浏览，根据需求选择板块：\n*   **初学者**: 查看 **[Courses](#courses)** 部分，推荐 Andrew Ng 的 [Deep Learning Specialization](https:\u002F\u002Fwww.coursera.org\u002Fspecializations\u002Fdeep-learning) 或 Fast.ai 的 [Practical Deep Learning For Coders](http:\u002F\u002Fcourse.fast.ai\u002F)。\n*   **理论深入**: 查看 **[Books](#books)** 部分，推荐阅读 *Deep Learning* (Ian Goodfellow et al.) 或 *Dive into Deep Learning* (交互式书籍)。\n*   **实战项目**: 查看 **[Datasets](#datasets)** 和 **[Tutorials](#tutorials)** 寻找数据和代码示例。\n\n### 2. 运行第一个示例 (基于列表中的教程)\n假设你选择了列表中提到的 \"Dive into Deep Learning\" (d2l.ai) 作为起点，该书提供了基于 PyTorch\u002FTensorFlow 的可运行代码。\n\n**步骤:**\n1.  安装 `d2l` 包：\n    ```bash\n    pip install d2l -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n2.  
创建一个 Python 脚本 `hello_dl.py`，输入以下经典代码（线性回归示例）：\n\n```python\nfrom d2l import torch as d2l\nimport torch\n\n# 定义数据\ntrue_w = torch.tensor([2, -3.4])\ntrue_b = 4.2\nfeatures, labels = d2l.synthetic_data(true_w, true_b, 1000)\n\n# 定义模型\nnet = d2l.linreg\nloss = d2l.squared_loss\n\n# 简单打印验证\nprint(f\"Features shape: {features.shape}\")\nprint(f\"Labels shape: {labels.shape}\")\nprint(\"环境配置成功，已准备好开始深度学习之旅！\")\n```\n\n3.  运行脚本：\n```bash\npython hello_dl.py\n```\n\n### 3. 探索更多资源\n回到 `awesome-deep-learning` 目录，根据你的兴趣领域（如计算机视觉、NLP、强化学习），点击 README 中对应的链接访问原始论文、视频讲座或开源代码库，直接复制其提供的代码片段到你的环境中进行实验。","某初创公司的算法工程师小李需要在两周内为医疗影像项目搭建原型，但他对深度学习领域尚不熟悉，面临技术选型和资源筛选的巨大压力。\n\n### 没有 awesome-deep-learning 时\n- **资源检索低效**：在谷歌和 GitHub 上盲目搜索\"Deep Learning tutorial\"，被大量过时教程、营销文章和低星项目淹没，难以辨别质量。\n- **学习路径混乱**：面对碎片化的博客和零散视频，无法构建从基础理论（如反向传播）到前沿架构（如 Transformer）的系统化知识体系。\n- **框架选型困难**：不清楚 TensorFlow、PyTorch 或 JAX 各自的生态优势及适用场景，容易选错工具导致后期重构成本高昂。\n- **数据获取受阻**：花费数天时间寻找合适的医疗影像公开数据集，却因缺乏权威索引而只能找到格式混乱或标注缺失的数据。\n- **社区连接断裂**：错过相关的顶级会议（如 CVPR、NeurIPS）和核心研究者动态，导致技术方案闭门造车，缺乏行业视野。\n\n### 使用 awesome-deep-learning 后\n- **精准获取高质资源**：直接访问 curated 列表，一键获取由社区验证的经典书籍（如《Deep Learning》花书）和高星实战项目，节省 80% 的筛选时间。\n- **构建系统学习路线**：依据分类清晰的课程和视频板块，快速制定从吴恩达基础课到专项进阶的学习计划，知识吸收效率显著提升。\n- **科学决策技术栈**：参考框架与工具章节的详细对比，结合项目需求迅速锁定 PyTorch 作为开发底座，避免了试错成本。\n- **快速定位标准数据**：通过数据集专区直接找到经过清洗和标注的医疗影像库，当天即可启动模型训练流程。\n- **同步前沿动态**：紧跟列表中推荐的顶尖学者和会议资讯，及时引入最新的正则化技术和优化策略，提升模型竞争力。\n\nawesome-deep-learning 将原本需要数周的信息搜集与甄别工作压缩至几小时，让开发者能专注于核心算法创新而非资源大海捞针。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChristosChristofidis_awesome-deep-learning_68bf91b7.png","ChristosChristofidis","Chris","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FChristosChristofidis_c8a08def.png",null,"https:\u002F\u002Fgithub.com\u002FChristosChristofidis",27936,6297,"2026-04-19T13:58:27",1,"","未说明",{"notes":84,"python":82,"dependencies":85},"该仓库（awesome-deep-learning）是一个深度学习资源的精选列表（包含书籍、课程、视频、论文等链接），本身不是一个可执行的软件工具或框架，因此没有特定的操作系统、硬件配置、Python 
版本或依赖库要求。用户可根据列表中推荐的具体框架（如 TensorFlow, PyTorch 等）单独查阅其运行环境需求。",[],[14,15],[88,89,90,91,92,93,94,95,96],"deep-learning","neural-network","machine-learning","awesome","awesome-list","recurrent-networks","deep-networks","deep-learning-tutorial","face-images","2026-03-27T02:49:30.150509","2026-04-20T07:17:59.588429",[],[]]