[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-astorfi--Deep-Learning-Roadmap":3,"tool-astorfi--Deep-Learning-Roadmap":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":95,"env_ram":95,"env_deps":96,"category_tags":99,"github_topics":100,"view_count":10,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":103,"updated_at":104,"faqs":105,"releases":121},1039,"astorfi\u002FDeep-Learning-Roadmap","Deep-Learning-Roadmap",":satellite: Organized Resources for Deep Learning Researchers and Developers","Deep-Learning-Roadmap 是一个专为深度学习研究者与开发者打造的资源整理项目。面对互联网上庞杂的学习资料，它解决了用户难以快速筛选和定位高质量内容的问题。与其他综合性仓库不同，Deep-Learning-Roadmap 的核心理念在于资源的“针对性”与“结构化”。它将海量资料划分为细致的类别，虽然初期可能显得繁多，但能让用户根据具体需求精准锁定相关论文、代码及模型实现。\n\n项目内容涵盖深度学习领域的核心方向，例如卷积神经网络在图像分类、句子分类及视频分类中的经典论文与对应代码链接。即使是初学者，也能通过提供的通用资源入口顺利入门，避免迷失方向。此外，Deep-Learning-Roadmap 还建立了 Slack 交流群组，促进社区成员间的互动与协作。\n\n无论是希望系统构建知识体系的学生，还是需要查阅特定技术细节的工程师，Deep-Learning-Roadmap 都能提供一条高效的学习捷径。项目基于 Python 生态，采用开源模式，持续欢迎社区贡献，旨在打造一个开放、有序且实用的深度学习知识共享平台，帮助用户在快速发展的技术领域中保持竞争力。",".. image:: _img\u002Fmainpage\u002Flogo.gif\n\n.. figure:: _img\u002Fmainpage\u002Fsubscribe.gif\n  :target: https:\u002F\u002Fmachinelearningmindset.com\u002Fsubscription\u002F\n  \n \n###################################################\nSlack Group\n###################################################\n\n.. raw:: html\n\n   \u003Cdiv align=\"center\">\n\n.. raw:: html\n\n \u003Ca href=\"https:\u002F\u002Fwww.machinelearningmindset.com\u002Fslack-group\u002F\" target=\"_blank\">\n  \u003Cimg width=\"1033\" height=\"350\" align=\"center\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fastorfi_Deep-Learning-Roadmap_readme_2f07fac72f26.png\"\u002F>\n \u003C\u002Fa>\n\n.. raw:: html\n\n   \u003C\u002Fdiv>\n\n\n###################################################\nDeep Learning World\n###################################################\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\n    :target: https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-World\u002Fpulls\n.. image:: https:\u002F\u002Fbadges.frapsoft.com\u002Fos\u002Fv2\u002Fopen-source.png?v=103\n    :target: https:\u002F\u002Fgithub.com\u002Fellerbrock\u002Fopen-source-badge\u002F\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMade%20with-Python-1f425f.svg\n      :target: https:\u002F\u002Fwww.python.org\u002F\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fansicolortags.svg\n      :target: https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-World\u002Fblob\u002Fmaster\u002FLICENSE\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002FNaereen\u002FStrapDown.js.svg\n      :target: https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-World\u002Fgraphs\u002Fcontributors\n.. image:: https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Famirsinatorfi.svg?label=Follow&style=social\n      :target: https:\u002F\u002Ftwitter.com\u002Famirsinatorfi\n\n\n\n##################\nTable of Contents\n##################\n.. contents::\n  :local:\n  :depth: 4\n\n***************\nIntroduction\n***************\n\nThe purpose of this project is to introduce a shortcut to developers and researcher\nfor finding useful resources about Deep Learning.\n\n============\nMotivation\n============\n\nThere are different motivations for this open source project.\n\n.. --------------------\n.. Why Deep Learning?\n.. --------------------\n\n------------------------------------------------------------\nWhat's the point of this open source project?\n------------------------------------------------------------\n\nThere other similar repositories similar to this repository and are very\ncomprehensive and useful and to be honest they made me ponder if there is\na necessity for this repository!\n\n**The point of this repository is that the resources are being targeted**. The organization\nof the resources is such that the user can easily find the things he\u002Fshe is looking for.\nWe divided the resources to a large number of categories that in the beginning one may\nhave a headache!!! However, if someone knows what is being located, it is very easy to find the most related resources.\nEven if someone doesn't know what to look for, in the beginning, the general resources have\nbeen provided.\n\n\n.. ================================================\n.. How to make the most of this effort\n.. ================================================\n\n************\nPapers\n************\n\n.. image:: _img\u002Fmainpage\u002Farticle.jpeg\n\nThis chapter is associated with the papers published in deep learning.\n\n====================\nModels\n====================\n\n-----------------------\nConvolutional Networks\n-----------------------\n\n  .. image:: _img\u002Fmainpage\u002Fconvolutional.png\n\n.. For continuous lines, the lines must be start from the same locations.\n* **Imagenet classification with deep convolutional neural networks** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4824-imagenet-classification-with-deep-convolutional-neural-networks>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fdontfollowmeimcrazy\u002Fimagenet>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Convolutional Neural Networks for Sentence Classification** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5882>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fyoonkim\u002FCNN_sentence>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Large-scale Video Classification with Convolutional Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FKarpathy_Large-scale_Video_Classification_2014_CVPR_paper.html>`_][`Project Page \u003Chttps:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fdeepvideo\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FOquab_Learning_and_Transferring_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n* **Deep convolutional neural networks for LVCSR** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6639347\u002F&hl=zh-CN&sa=T&oi=gsb&ct=res&cd=0&ei=KknXWYbGFMbFjwSsyICADQ&scisig=AAGBfm2F0Zlu0ciUwadzshNNm80IQQhuhA>`_]\n  \n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Face recognition: a convolutional neural-network approach** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F554195\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n\n-----------------------\nRecurrent Networks\n-----------------------\n\n  .. image:: _img\u002Fmainpage\u002FRecurrent_neural_network_unfold.svg\n\n\n.. For continuous lines, the lines must be start from the same locations.\n* **An empirical exploration of recurrent network architectures** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fjozefowicz15.pdf?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=revue>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fdebajyotidatta\u002FRecurrentArchitectures>`_]\n\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **LSTM: A search space odyssey** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7508408\u002F>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Ffomorians\u002Flstm-odyssey>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n\n* **On the difficulty of training recurrent neural networks** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Fpascanu13.pdf>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fpascanur\u002FtrainingRNNs>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Learning to forget: Continual prediction with LSTM** :\n  [`Paper \u003Chttp:\u002F\u002Fdigital-library.theiet.org\u002Fcontent\u002Fconferences\u002F10.1049\u002Fcp_19991218>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n-----------------------\nAutoencoders\n-----------------------\n\n.. image:: _img\u002Fmainpage\u002FAutoencoder_structure.png\n\n* **Extracting and composing robust features with denoising autoencoders** :\n  [`Paper \u003Chttps:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=1390294>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion** :\n  [`Paper \u003Chttp:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv11\u002Fvincent10a.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Frajarsheem\u002Flibsdae-autoencoder-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Adversarial Autoencoders** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fconan7882\u002Fadversarial-autoencoders>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Autoencoders, Unsupervised Learning, and Deep Architectures** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv27\u002Fbaldi12a\u002Fbaldi12a.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Reducing the Dimensionality of Data with Neural Networks** :\n  [`Paper \u003Chttp:\u002F\u002Fscience.sciencemag.org\u002Fcontent\u002F313\u002F5786\u002F504>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fjordn\u002Fautoencoder>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n-----------------------\nGenerative Models\n-----------------------\n\n.. image:: _img\u002Fmainpage\u002Fgenerative.png\n\n* **Exploiting generative models discriminative classifiers** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F1520-exploiting-generative-models-in-discriminative-classifiers.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Semi-supervised Learning with Deep Generative Models** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5352-semi-supervised-learning-with-deep-generative-models>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fwohlert\u002Fsemi-supervised-pytorch>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n\n* **Generative Adversarial Nets** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5423-generative-adversarial-nets>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fgoodfeli\u002Fadversarial>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Generalized Denoising Auto-Encoders as Generative Models** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5023-generalized-denoising-auto-encoders-as-generative-models>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n  \n* **Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FDCGAN-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n-----------------------\nProbabilistic Models\n-----------------------\n\n* **Stochastic Backpropagation and Approximate Inference in Deep Generative Models** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1401.4082>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Probabilistic models of cognition: exploring representations and inductive biases** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS1364661310001129>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **On deep generative models with applications to recognition** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F5995710\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n\n\n\n====================\nCore\n====================\n\n---------------------\nOptimization\n---------------------\n\n.. ################################################################################\n.. For continuous lines, the lines must be start from the same locations.\n* **Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03167>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Dropout: A Simple Way to Prevent Neural Networks from Overfitting** :\n  [`Paper \u003Chttp:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume15\u002Fsrivastava14a\u002Fsrivastava14a.pdf?utm_content=buffer79b43&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Training Very Deep Networks** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5850-training-very-deep-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FHe_Delving_Deep_into_ICCV_2015_paper.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Large Scale Distributed Deep Networks** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4687-large-scale-distributed-deep-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n------------------------\nRepresentation Learning\n------------------------\n\n* **Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002FNewmu\u002Fdcgan_code>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Representation Learning: A Review and New Perspectives** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6472238\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6399-infogan-interpretable-representation>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fopenai\u002FInfoGAN>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n\n------------------------------------\nUnderstanding and Transfer Learning\n------------------------------------\n\n* **Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FOquab_Learning_and_Transferring_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Distilling the Knowledge in a Neural Network** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02531>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fdonahue14.pdf>`_][\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **How transferable are features in deep neural networks?** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5347-how-transferable-are-features-in-deep-n%E2%80%A6>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fyosinski\u002Fconvnet_transfer>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n-----------------------\nReinforcement Learning\n-----------------------\n\n* **Human-level control through deep reinforcement learning** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature14236\u002F>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fdevsisters\u002FDQN-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Playing Atari with Deep Reinforcement Learning** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1312.5602>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fcarpedm20\u002Fdeep-rl-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Continuous control with deep reinforcement learning** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1509.02971>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fstevenpjg\u002Fddpg-aigym>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Deep Reinforcement Learning with Double Q-Learning** :\n  [`Paper \u003Chttp:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI16\u002Fpaper\u002Fdownload\u002F12389\u002F11847>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fcarpedm20\u002Fdeep-rl-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Dueling Network Architectures for Deep Reinforcement Learning** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06581>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fyoosan\u002Fdeeprl>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n\n====================\nApplications\n====================\n\n--------------------\nImage Recognition\n--------------------\n\n* **Deep Residual Learning for Image Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fhtml\u002FHe_Deep_Residual_Learning_CVPR_2016_paper.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fgcr\u002Ftorch-residual-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Very Deep Convolutional Networks for Large-Scale Image Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1409.1556>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Multi-column Deep Neural Networks for Image Classification** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1202.2745>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **DeepID3: Face Recognition with Very Deep Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1502.00873>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6034>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fartvandelay\u002FDeep_Inside_Convolutional_Networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Deep Image: Scaling up Image Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fvc\u002Farxiv\u002Fpapers\u002F1501\u002F1501.02876v1.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Long-Term Recurrent Convolutional Networks for Visual Recognition and Description** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FDonahue_Long-Term_Recurrent_Convolutional_2015_CVPR_paper.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002FJaggerYoung\u002FLRCN-for-Activity-Recognition>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **3D Convolutional Neural Networks for Cross Audio-Visual Matching Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8063416>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fastorfi\u002Flip-reading-deeplearning>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n--------------------\nObject Recognition\n--------------------\n\n* **ImageNet Classification with Deep Convolutional Neural Networks** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4824-imagenet-classification-with-deep-convolutional-neural-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Learning Deep Features for Scene Recognition using Places Database** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5349-learning-deep-features>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Scalable Object Detection using Deep Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FErhan_Scalable_Object_Detection_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Frbgirshick\u002Fpy-faster-rcnn>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6229>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fsermanet\u002FOverFeat>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **CNN Features Off-the-Shelf: An Astounding Baseline for Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_workshops_2014\u002FW15\u002Fhtml\u002FRazavian_CNN_Features_Off-the-Shelf_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **What is the best multi-stage architecture for object recognition?** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F5459469\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_2.png\n\n\n--------------------\nAction Recognition\n--------------------\n\n* **Long-Term Recurrent Convolutional Networks for Visual Recognition and Description** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FDonahue_Long-Term_Recurrent_Convolutional_2015_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Learning Spatiotemporal Features With 3D Convolutional Networks** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fhtml\u002FTran_Learning_Spatiotemporal_Features_ICCV_2015_paper.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002FDavideA\u002Fc3d-pytorch>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Describing Videos by Exploiting Temporal Structure** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fhtml\u002FYao_Describing_Videos_by_ICCV_2015_paper.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Ftsenghungchen\u002FSA-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Convolutional Two-Stream Network Fusion for Video Action Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fhtml\u002FFeichtenhofer_Convolutional_Two-Stream_Network_CVPR_2016_paper.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Ffeichtenhofer\u002Ftwostreamfusion>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Temporal segment networks: Towards good practices for deep action recognition** :\n  [`Paper \u003Chttps:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-46484-8_2>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fyjxiong\u002Ftemporal-segment-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n----------------------------\nCaption Generation\n----------------------------\n\n* **Show, Attend and Tell: Neural Image Caption Generation with Visual Attention** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fxuc15.pdf>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fyunjey\u002Fshow-attend-and-tell>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Mind's Eye: A Recurrent Visual Representation for Image Caption Generation** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FChen_Minds_Eye_A_2015_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_2.png\n\n* **Generative Adversarial Text to Image Synthesis** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Freed16.pdf>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftext-to-image>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Deep Visual-Semantic Al60ignments for Generating Image Descriptions** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FKarpathy_Deep_Visual-Semantic_Alignments_2015_CVPR_paper.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fjonkuo\u002FDeep-Learning-Image-Captioning>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Show and Tell: A Neural Image Caption Generator** :\n  [`Paper \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FVinyals_Show_and_Tell_2015_CVPR_paper.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002FDeepRNN\u002Fimage_captioning>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n----------------------------\nNatural Language Processing\n----------------------------\n\n* **Distributed Representations of Words and Phrases and their Compositionality** :\n  [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf>`_][`Code \u003Chttps:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Efficient Estimation of Word Representations in Vector Space** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fpdf\u002F1301.3781.pdf>`_][`Code \u003Chttps:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Sequence to Sequence Learning with Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fpdf\u002F1409.3215.pdf>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Ffarizrahman4u\u002Fseq2seq>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Neural Machine Translation by Jointly Learning to Align and Translate** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fpdf\u002F1409.0473.pdf>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fnmt>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Get To The Point: Summarization with Pointer-Generator Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04368>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fabisee\u002Fpointer-generator>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Attention Is All You Need** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03762>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fjadore801120\u002Fattention-is-all-you-need-pytorch>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Convolutional Neural Networks for Sentence Classification** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5882>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fyoonkim\u002FCNN_sentence>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n\n----------------------------\nSpeech Technology\n----------------------------\n\n* **Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6296526\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Towards End-to-End Speech Recognition with Recurrent Neural Networks** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fgraves14.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Speech recognition with deep recurrent neural networks** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6638947\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1507.06947>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Famodei16.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FDeepSpeech>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **A novel scheme for speaker recognition using a phonetically-aware deep neural network** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6853887\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n \n* **Text-Independent Speaker Verification Using 3D Convolutional Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09422>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fastorfi\u002F3D-convolutional-speaker-recognition>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n\n\n************\nDatasets\n************\n\n====================\nImage\n====================\n\n\n----------------------------\nGeneral\n----------------------------\n\n* **MNIST** Handwritten digits:\n  [`Link \u003Chttp:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fmnist\u002F>`_]\n\n\n----------------------------\nFace\n----------------------------\n\n* **Face Recognition Technology (FERET)** The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties:\n  [`Link \u003Chttps:\u002F\u002Fwww.nist.gov\u002Fprograms-projects\u002Fface-recognition-technology-feret>`_]\n\n* **The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces** Between October and December 2000 we collected a database of 41,368 images of 68 people:\n  [`Link \u003Chttps:\u002F\u002Fwww.ri.cmu.edu\u002Fpublications\u002Fthe-cmu-pose-illumination-and-expression-pie-database-of-human-faces\u002F>`_]\n\n* **YouTube Faces DB** The data set contains 3,425 videos of 1,595 different people. All the videos were downloaded from YouTube. An average of 2.15 videos are available for each subject:\n  [`Link \u003Chttps:\u002F\u002Fwww.cs.tau.ac.il\u002F~wolf\u002Fytfaces\u002F>`_]\n\n* **Grammatical Facial Expressions Data Set** Developed to assist the the automated analysis of facial expressions:\n  [`Link \u003Chttps:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002FGrammatical+Facial+Expressions>`_]\n\n* **FaceScrub** A Dataset With Over 100,000 Face Images of 530 People:\n  [`Link \u003Chttp:\u002F\u002Fvintage.winklerbros.net\u002Ffacescrub.html>`_]\n\n* **IMDB-WIKI** 500k+ face images with age and gender labels:\n  [`Link \u003Chttps:\u002F\u002Fdata.vision.ee.ethz.ch\u002Fcvl\u002Frrothe\u002Fimdb-wiki\u002F>`_]\n\n* **FDDB** Face Detection Data Set and Benchmark (FDDB):\n  [`Link \u003Chttp:\u002F\u002Fvis-www.cs.umass.edu\u002Ffddb\u002F>`_]\n\n----------------------------\nObject Recognition\n----------------------------\n\n* **COCO** Microsoft COCO: Common Objects in Context:\n  [`Link \u003Chttp:\u002F\u002Fcocodataset.org\u002F#home>`_]\n\n* **ImageNet** The famous ImageNet dataset:\n  [`Link \u003Chttp:\u002F\u002Fwww.image-net.org\u002F>`_]\n\n* **Open Images Dataset** Open Images is a dataset of ~9 million images that have been annotated with image-level labels and object bounding boxes:\n  [`Link \u003Chttps:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html>`_]\n\n* **Caltech-256 Object Category Dataset** A large dataset object classification:\n  [`Link \u003Chttps:\u002F\u002Fauthors.library.caltech.edu\u002F7694\u002F>`_]\n\n* **Pascal VOC dataset** A large dataset for classification tasks:\n  [`Link \u003Chttp:\u002F\u002Fhost.robots.ox.ac.uk\u002Fpascal\u002FVOC\u002F>`_]\n\n* **CIFAR 10 \u002F CIFAR 100** The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes. CIFAR-100 is similar to CIFAR-10 but it has 100 classes containing 600 images each:\n  [`Link \u003Chttps:\u002F\u002Fwww.cs.toronto.edu\u002F~kriz\u002Fcifar.html>`_]\n\n\n----------------------------\nAction recognition\n----------------------------\n\n* **HMDB** a large human motion database:\n  [`Link \u003Chttp:\u002F\u002Fserre-lab.clps.brown.edu\u002Fresource\u002Fhmdb-a-large-human-motion-database\u002F>`_]\n\n* **MHAD** Berkeley Multimodal Human Action Database:\n  [`Link \u003Chttp:\u002F\u002Ftele-immersion.citris-uc.org\u002Fberkeley_mhad>`_]\n\n* **UCF101 - Action Recognition Data Set** UCF101 is an action recognition data set of realistic action videos, collected from YouTube, having 101 action categories. This data set is an extension of UCF50 data set which has 50 action categories:\n  [`Link \u003Chttp:\u002F\u002Fcrcv.ucf.edu\u002Fdata\u002FUCF101.php>`_]\n\n* **THUMOS Dataset** A large dataset for action classification:\n  [`Link \u003Chttp:\u002F\u002Fcrcv.ucf.edu\u002Fdata\u002FTHUMOS.php>`_]\n\n* **ActivityNet** A Large-Scale Video Benchmark for Human Activity Understanding:\n  [`Link \u003Chttp:\u002F\u002Factivity-net.org\u002F>`_]\n\n======================================\nText and Natural Language Processing\n======================================\n\n\n-----------------------\nGeneral\n-----------------------\n\n* **1 Billion Word Language Model Benchmark**: The purpose of the project is to make available a standard training and test setup for language modeling experiments:\n  [`Link \u003Chttp:\u002F\u002Fwww.statmt.org\u002Flm-benchmark\u002F>`_]\n\n* **Common Crawl**: The Common Crawl corpus contains petabytes of data collected over the last 7 years. It contains raw web page data, extracted metadata and text extractions:\n  [`Link \u003Chttp:\u002F\u002Fcommoncrawl.org\u002Fthe-data\u002Fget-started\u002F>`_]\n\n* **Yelp Open Dataset**: A subset of Yelp's businesses, reviews, and user data for use in personal, educational, and academic purposes:\n  [`Link \u003Chttps:\u002F\u002Fwww.yelp.com\u002Fdataset>`_]\n\n\n-----------------------\nText classification\n-----------------------\n\n* **20 newsgroups** The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups:\n  [`Link \u003Chttp:\u002F\u002Fqwone.com\u002F~jason\u002F20Newsgroups\u002F>`_]\n\n* **Broadcast News** The 1996 Broadcast News Speech Corpus contains a total of 104 hours of broadcasts from ABC, CNN and CSPAN television networks and NPR and PRI radio networks with corresponding transcripts:\n  [`Link \u003Chttps:\u002F\u002Fcatalog.ldc.upenn.edu\u002FLDC97S44>`_]\n\n* **The wikitext long term dependency language modeling dataset**: A collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. :\n  [`Link \u003Chttps:\u002F\u002Feinstein.ai\u002Fresearch\u002Fthe-wikitext-long-term-dependency-language-modeling-dataset>`_]\n\n-----------------------\nQuestion Answering\n-----------------------\n\n* **Question Answering Corpus** by Deep Mind and Oxford which is two new corpora of roughly a million news stories with associated queries from the CNN and Daily Mail websites.\n  [`Link \u003Chttps:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frc-data>`_]\n\n* **Stanford Question Answering Dataset (SQuAD)** consisting of questions posed by crowdworkers on a set of Wikipedia articles:\n  [`Link \u003Chttps:\u002F\u002Frajpurkar.github.io\u002FSQuAD-explorer\u002F>`_]\n\n* **Amazon question\u002Fanswer data** contains Question and Answer data from Amazon, totaling around 1.4 million answered questions:\n  [`Link \u003Chttp:\u002F\u002Fjmcauley.ucsd.edu\u002Fdata\u002Famazon\u002Fqa\u002F>`_]\n\n\n\n-----------------------\nSentiment Analysis\n-----------------------\n\n* **Multi-Domain Sentiment Dataset** TThe Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from many product types (domains):\n  [`Link \u003Chttp:\u002F\u002Fwww.cs.jhu.edu\u002F~mdredze\u002Fdatasets\u002Fsentiment\u002F>`_]\n\n* **Stanford Sentiment Treebank Dataset** The Stanford Sentiment Treebank is the first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language:\n  [`Link \u003Chttps:\u002F\u002Fnlp.stanford.edu\u002Fsentiment\u002F>`_]\n\n* **Large Movie Review Dataset**: This is a dataset for binary sentiment classification:\n  [`Link \u003Chttp:\u002F\u002Fai.stanford.edu\u002F~amaas\u002Fdata\u002Fsentiment\u002F>`_]\n\n\n-----------------------\nMachine Translation\n-----------------------\n\n* **Aligned Hansards of the 36th Parliament of Canada** dataset contains 1.3 million pairs of aligned text chunks:\n  [`Link \u003Chttps:\u002F\u002Fwww.isi.edu\u002Fnatural-language\u002Fdownload\u002Fhansard\u002F>`_]\n\n* **Europarl: A Parallel Corpus for Statistical Machine Translation** dataset extracted from the proceedings of the European Parliament:\n  [`Link \u003Chttp:\u002F\u002Fwww.statmt.org\u002Feuroparl\u002F>`_]\n\n\n-----------------------\nSummarization\n-----------------------\n\n* **Legal Case Reports Data Set** as a textual corpus of 4000 legal cases for automatic summarization and citation analysis.:\n  [`Link \u003Chttps:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002FLegal+Case+Reports>`_]\n\n\n======================================\nSpeech Technology\n======================================\n\n* **TIMIT Acoustic-Phonetic Continuous Speech Corpus** The TIMIT corpus of read speech is designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems:\n  [`Link \u003Chttps:\u002F\u002Fcatalog.ldc.upenn.edu\u002Fldc93s1>`_]\n\n* **LibriSpeech** LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey:\n  [`Link \u003Chttp:\u002F\u002Fwww.openslr.org\u002F12\u002F>`_]\n\n* **VoxCeleb** A large scale audio-visual dataset:\n  [`Link \u003Chttp:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fdata\u002Fvoxceleb\u002F>`_]\n\n* **NIST Speaker Recognition**:\n  [`Link \u003Chttps:\u002F\u002Fwww.nist.gov\u002Fitl\u002Fiad\u002Fmig\u002Fspeaker-recognition>`_]\n\n\n\n\n\n\n************\nCourses\n************\n\n.. image:: _img\u002Fmainpage\u002Fonline.png\n\n* **Machine Learning** by Stanford on Coursera :\n  [`Link \u003Chttps:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning>`_]\n\n* **Neural Networks and Deep Learning** Specialization by Coursera:\n  [`Link \u003Chttps:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fneural-networks-deep-learning>`_]\n\n* **Intro to Deep Learning** by Google:\n  [`Link \u003Chttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-learning--ud730>`_]\n\n* **Introduction to Deep Learning** by CMU:\n  [`Link \u003Chttp:\u002F\u002Fdeeplearning.cs.cmu.edu\u002F>`_]\n\n* **NVIDIA Deep Learning Institute** by NVIDIA:\n  [`Link \u003Chttps:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdeep-learning-ai\u002Feducation\u002F>`_]\n\n* **Convolutional Neural Networks for Visual Recognition** by Stanford:\n  [`Link \u003Chttp:\u002F\u002Fcs231n.stanford.edu\u002F>`_]\n\n* **Deep Learning for Natural Language Processing** by Stanford:\n  [`Link \u003Chttp:\u002F\u002Fcs224d.stanford.edu\u002F>`_]\n\n* **Deep Learning** by fast.ai:\n  [`Link \u003Chttp:\u002F\u002Fwww.fast.ai\u002F>`_]\n\n* **Course on Deep Learning for Visual Computing** by IITKGP:\n  [`Link \u003Chttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLuv3GM6-gsE1Biyakccxb3FAn4wBLyfWf>`_]\n\n\n\n\n************\nBooks\n************\n\n.. image:: _img\u002Fmainpage\u002Fbooks.jpg\n\n* **Deep Learning** by Ian Goodfellow:\n  [`Link \u003Chttp:\u002F\u002Fwww.deeplearningbook.org\u002F>`_]\n\n* **Neural Networks and Deep Learning** :\n  [`Link \u003Chttp:\u002F\u002Fneuralnetworksanddeeplearning.com\u002F>`_]\n\n* **Deep Learning with Python**:\n  [`Link \u003Chttps:\u002F\u002Fwww.amazon.com\u002FDeep-Learning-Python-Francois-Chollet\u002Fdp\u002F1617294438\u002Fref=as_li_ss_tl?s=books&ie=UTF8&qid=1519989624&sr=1-4&keywords=deep+learning+with+python&linkCode=sl1&tag=trndingcom-20&linkId=ec7663329fdb7ace60f39c762e999683>`_]\n\n* **Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems**:\n  [`Link \u003Chttps:\u002F\u002Fwww.amazon.com\u002FHands-Machine-Learning-Scikit-Learn-TensorFlow\u002Fdp\u002F1491962291\u002Fref=as_li_ss_tl?ie=UTF8&qid=1519989725&sr=1-2-ent&linkCode=sl1&tag=trndingcom-20&linkId=71938c9398940c7b0a811dc1cfef7cc3>`_]\n\n\n************\nBlogs\n************\n\n.. image:: _img\u002Fmainpage\u002FBlogger_icon.png\n\n* **Colah's blog**:\n  [`Link \u003Chttp:\u002F\u002Fcolah.github.io\u002F>`_]\n\n* **Andrej Karpathy blog**:\n  [`Link \u003Chttp:\u002F\u002Fkarpathy.github.io\u002F>`_]\n\n* **The Spectator** Shakir's Machine Learning Blog:\n  [`Link \u003Chttp:\u002F\u002Fblog.shakirm.com\u002F>`_]\n\n* **WILDML**:\n  [`Link \u003Chttp:\u002F\u002Fwww.wildml.com\u002Fabout\u002F>`_]\n\n* **Distill blog** It is more like a journal than a blog because it has a peer review process and only accepyed articles will be published on that.:\n  [`Link \u003Chttps:\u002F\u002Fdistill.pub\u002F>`_]\n\n* **BAIR** Berkeley Artificial Inteliigent Research:\n  [`Link \u003Chttp:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F>`_]\n\n* **Sebastian Ruder's blog**:\n  [`Link \u003Chttp:\u002F\u002Fruder.io\u002F>`_]\n\n* **inFERENCe**:\n  [`Link \u003Chttps:\u002F\u002Fwww.inference.vc\u002Fpage\u002F2\u002F>`_]\n\n* **i am trask** A Machine Learning Craftsmanship Blog:\n  [`Link \u003Chttp:\u002F\u002Fiamtrask.github.io>`_]\n\n\n************\nTutorials\n************\n\n.. image:: _img\u002Fmainpage\u002Ftutorial.png\n\n* **Deep Learning Tutorials**:\n  [`Link \u003Chttp:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002F>`_]\n\n* **Deep Learning for NLP with Pytorch** by Pytorch:\n  [`Link \u003Chttps:\u002F\u002Fpytorch.org\u002Ftutorials\u002Fbeginner\u002Fdeep_learning_nlp_tutorial.html>`_]\n\n* **Deep Learning for Natural Language Processing: Tutorials with Jupyter Notebooks** by Jon Krohn:\n  [`Link \u003Chttps:\u002F\u002Finsights.untapt.com\u002Fdeep-learning-for-natural-language-processing-tutorials-with-jupyter-notebooks-ad67f336ce3f>`_]\n\n\n************\nFrameworks\n************\n\n* **Tensorflow**:\n  [`Link \u003Chttps:\u002F\u002Fwww.tensorflow.org\u002F>`_]\n\n* **Pytorch**:\n  [`Link \u003Chttps:\u002F\u002Fpytorch.org\u002F>`_]\n\n* **CNTK**:\n  [`Link \u003Chttps:\u002F\u002Fdocs.microsoft.com\u002Fen-us\u002Fcognitive-toolkit\u002F>`_]\n\n* **MatConvNet**:\n  [`Link \u003Chttp:\u002F\u002Fwww.vlfeat.org\u002Fmatconvnet\u002F>`_]\n\n* **Keras**:\n  [`Link \u003Chttps:\u002F\u002Fkeras.io\u002F>`_]\n\n* **Caffe**:\n  [`Link \u003Chttp:\u002F\u002Fcaffe.berkeleyvision.org\u002F>`_]\n\n* **Theano**:\n  [`Link \u003Chttp:\u002F\u002Fwww.deeplearning.net\u002Fsoftware\u002Ftheano\u002F>`_]\n\n* **CuDNN**:\n  [`Link \u003Chttps:\u002F\u002Fdeveloper.nvidia.com\u002Fcudnn>`_]\n\n* **Torch**:\n  [`Link \u003Chttps:\u002F\u002Fgithub.com\u002Ftorch\u002Ftorch7>`_]\n\n* **Deeplearning4j**:\n  [`Link \u003Chttps:\u002F\u002Fdeeplearning4j.org\u002F>`_]\n\n\n************\nContributing\n************\n\n\n*For typos, unless significant changes, please do not create a pull request. Instead, declare them in issues or email the repository owner*. Please note we have a code of conduct, please follow it in all your interactions with the project.\n\n========================\nPull Request Process\n========================\n\nPlease consider the following criterions in order to help us in a better way:\n\n1. The pull request is mainly expected to be a link suggestion.\n2. Please make sure your suggested resources are not obsolete or broken.\n3. Ensure any install or build dependencies are removed before the end of the layer when doing a\n   build and creating a pull request.\n4. Add comments with details of changes to the interface, this includes new environment\n   variables, exposed ports, useful file locations and container parameters.\n5. You may merge the Pull Request in once you have the sign-off of at least one other developer, or if you\n   do not have permission to do that, you may request the owner to merge it for you if you believe all checks are passed.\n\n========================\nFinal Note\n========================\n\nWe are looking forward to your kind feedback. Please help us to improve this open source project and make our work better.\nFor contribution, please create a pull request and we will investigate it promptly. Once again, we appreciate\nyour kind feedback and support.\n",".. image:: _img\u002Fmainpage\u002Flogo.gif\n\n.. figure:: _img\u002Fmainpage\u002Fsubscribe.gif\n  :target: https:\u002F\u002Fmachinelearningmindset.com\u002Fsubscription\u002F\n  \n \n#########\nSlack 群组\n#########\n\n.. raw:: html\n\n   \u003Cdiv align=\"center\">\n\n.. raw:: html\n\n \u003Ca href=\"https:\u002F\u002Fwww.machinelearningmindset.com\u002Fslack-group\u002F\" target=\"_blank\">\n  \u003Cimg width=\"1033\" height=\"350\" align=\"center\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fastorfi_Deep-Learning-Roadmap_readme_2f07fac72f26.png\"\u002F>\n \u003C\u002Fa>\n\n.. raw:: html\n\n   \u003C\u002Fdiv>\n\n\n######################\n深度学习 (Deep Learning) 世界\n######################\n\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\n    :target: https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-World\u002Fpulls\n.. image:: https:\u002F\u002Fbadges.frapsoft.com\u002Fos\u002Fv2\u002Fopen-source.png?v=103\n    :target: https:\u002F\u002Fgithub.com\u002Fellerbrock\u002Fopen-source-badge\u002F\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMade%20with-Python-1f425f.svg\n      :target: https:\u002F\u002Fwww.python.org\u002F\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fansicolortags.svg\n      :target: https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-World\u002Fblob\u002Fmaster\u002FLICENSE\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002FNaereen\u002FStrapDown.js.svg\n      :target: https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-World\u002Fgraphs\u002Fcontributors\n.. image:: https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Famirsinatorfi.svg?label=Follow&style=social\n      :target: https:\u002F\u002Ftwitter.com\u002Famirsinatorfi\n\n\n\n##\n目录\n##\n.. contents::\n  :local:\n  :depth: 4\n\n****\n简介\n****\n\n本项目的目的是为开发者和研究人员提供一个捷径，以便查找关于深度学习 (Deep Learning) 的有用资源。\n\n==\n动机\n==\n\n这个开源 (Open Source) 项目有不同的动机。\n\n.. --------------------\n.. 为什么选择深度学习 (Deep Learning)？\n.. --------------------\n\n------------------------------------------------------------\n这个开源 (Open Source) 项目的意义是什么？\n------------------------------------------------------------\n\n还有其他类似的仓库 (Repository) 与这个仓库类似，它们非常全面且有用，老实说，它们让我思考是否有必要建立这个仓库！\n\n**本仓库的重点在于资源具有针对性**。资源的组织方式使用户可以轻松找到他\u002F她正在寻找的内容。我们将资源划分为大量类别，起初可能会让人头疼！！！但是，如果有人知道要找的内容位于何处，就非常容易找到最相关的资源。即使有人起初不知道要找什么，我们也提供了通用资源。\n\n\n.. ================================================\n.. 如何充分利用这一努力\n.. ================================================\n\n****\n论文\n****\n\n.. image:: _img\u002Fmainpage\u002Farticle.jpeg\n\n本章与深度学习 (Deep Learning) 领域发表的论文相关。\n\n====\n模型\n====\n\n-----------------------\n卷积神经网络 (Convolutional Networks)\n-----------------------\n\n  .. image:: _img\u002Fmainpage\u002Fconvolutional.png\n\n.. 对于连续的行，行必须从相同的位置开始。\n* **Imagenet classification with deep convolutional neural networks** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4824-imagenet-classification-with-deep-convolutional-neural-networks>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fdontfollowmeimcrazy\u002Fimagenet>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Convolutional Neural Networks for Sentence Classification** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5882>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fyoonkim\u002FCNN_sentence>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Large-scale Video Classification with Convolutional Neural Networks** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FKarpathy_Large-scale_Video_Classification_2014_CVPR_paper.html>`_][`项目页面 \u003Chttps:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fdeepvideo\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FOquab_Learning_and_Transferring_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n* **Deep convolutional neural networks for LVCSR** :\n  [`论文 \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6639347\u002F&hl=zh-CN&sa=T&oi=gsb&ct=res&cd=0&ei=KknXWYbGFMbFjwSsyICADQ&scisig=AAGBfm2F0Zlu0ciUwadzshNNm80IQQhuhA>`_]\n  \n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Face recognition: a convolutional neural-network approach** :\n  [`论文 \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F554195\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n\n-----------------------\n循环神经网络 (Recurrent Networks)\n-----------------------\n\n  .. image:: _img\u002Fmainpage\u002FRecurrent_neural_network_unfold.svg\n\n\n.. 对于连续的行，行必须从相同的位置开始。\n* **An empirical exploration of recurrent network architectures** :\n  [`论文 \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fjozefowicz15.pdf?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=revue>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fdebajyotidatta\u002FRecurrentArchitectures>`_]\n\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **LSTM: A search space odyssey** :\n  [`论文 \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7508408\u002F>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Ffomorians\u002Flstm-odyssey>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n\n* **On the difficulty of training recurrent neural networks** :\n  [`论文 \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Fpascanu13.pdf>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fpascanur\u002FtrainingRNNs>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Learning to forget: Continual prediction with LSTM** :\n  [`论文 \u003Chttp:\u002F\u002Fdigital-library.theiet.org\u002Fcontent\u002Fconferences\u002F10.1049\u002Fcp_19991218>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n-----------------------\n自编码器 (Autoencoders)\n-----------------------\n\n.. image:: _img\u002Fmainpage\u002FAutoencoder_structure.png\n\n* **Extracting and composing robust features with denoising autoencoders** :\n  [`论文 \u003Chttps:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=1390294>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion** :\n  [`论文 \u003Chttp:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv11\u002Fvincent10a.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Frajarsheem\u002Flibsdae-autoencoder-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Adversarial Autoencoders** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fconan7882\u002Fadversarial-autoencoders>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Autoencoders, Unsupervised Learning, and Deep Architectures** :\n  [`论文 \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv27\u002Fbaldi12a\u002Fbaldi12a.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **使用神经网络 (Neural Networks) 降低数据维度** :\n  [`论文 \u003Chttp:\u002F\u002Fscience.sciencemag.org\u002Fcontent\u002F313\u002F5786\u002F504>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fjordn\u002Fautoencoder>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n-----------------------\n生成模型 (Generative Models)\n-----------------------\n\n.. image:: _img\u002Fmainpage\u002Fgenerative.png\n\n* **在判别分类器 (Discriminative Classifiers) 中利用生成模型** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F1520-exploiting-generative-models-in-discriminative-classifiers.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **基于深度生成模型的半监督学习 (Semi-supervised Learning)** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5352-semi-supervised-learning-with-deep-generative-models>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fwohlert\u002Fsemi-supervised-pytorch>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n\n* **生成对抗网络 (Generative Adversarial Nets)** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5423-generative-adversarial-nets>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fgoodfeli\u002Fadversarial>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **作为生成模型的广义去噪自编码器 (Auto-Encoders)** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5023-generalized-denoising-auto-encoders-as-generative-models>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n  \n* **使用深度卷积 (Convolutional) 生成对抗网络进行无监督表示学习** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FDCGAN-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n-----------------------\n概率模型 (Probabilistic Models)\n-----------------------\n\n* **深度生成模型中的随机反向传播 (Stochastic Backpropagation) 与近似推断** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1401.4082>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **认知的概率模型：探索表示与归纳偏差** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS1364661310001129>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **关于应用于识别的深度生成模型** :\n  [`论文 \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F5995710\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n\n\n\n====================\n核心 (Core)\n====================\n\n---------------------\n优化 (Optimization)\n---------------------\n\n.. ################################################################################\n.. For continuous lines, the lines must be start from the same locations.\n* **批量归一化 (Batch Normalization)：通过减少内部协变量偏移 (Internal Covariate Shift) 加速深度网络训练** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03167>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Dropout (随机失活)：防止神经网络过拟合 (Overfitting) 的一种简单方法** :\n  [`论文 \u003Chttp:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume15\u002Fsrivastava14a\u002Fsrivastava14a.pdf?utm_content=buffer79b43&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **训练非常深的网络** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5850-training-very-deep-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **深入探究整流器 (Rectifiers)：在 ImageNet 分类上超越人类水平的性能** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FHe_Delving_Deep_into_ICCV_2015_paper.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **大规模分布式深度网络** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4687-large-scale-distributed-deep-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n------------------------\n表示学习 (Representation Learning)\n------------------------\n\n* **使用深度卷积生成对抗网络进行无监督表示学习** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002FNewmu\u002Fdcgan_code>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **表示学习：回顾与新视角** :\n  [`论文 \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6472238\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **InfoGAN：通过信息最大化生成对抗网络进行可解释表示学习** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6399-infogan-interpretable-representation>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fopenai\u002FInfoGAN>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n\n------------------------------------\n理解与迁移学习 (Transfer Learning)\n------------------------------------\n\n* **使用卷积神经网络 (Convolutional Neural Networks) 学习和迁移中层图像表示** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FOquab_Learning_and_Transferring_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **蒸馏神经网络中的知识** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02531>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **DeCAF：用于通用视觉识别的深度卷积激活特征** :\n  [`论文 \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fdonahue14.pdf>`_][` `\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **深度神经网络中的特征具有多大的可迁移性？** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5347-how-transferable-are-features-in-deep-n%E2%80%A6>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fyosinski\u002Fconvnet_transfer>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n-----------------------\n强化学习 (Reinforcement Learning)\n-----------------------\n\n* **通过深度强化学习实现人类水平的控制** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature14236\u002F>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fdevsisters\u002FDQN-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **使用深度强化学习玩 Atari 游戏** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1312.5602>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fcarpedm20\u002Fdeep-rl-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **使用深度强化学习进行连续控制** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1509.02971>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fstevenpjg\u002Fddpg-aigym>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **使用双重 Q 学习 (Double Q-Learning) 的深度强化学习** :\n  [`论文 \u003Chttp:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI16\u002Fpaper\u002Fdownload\u002F12389\u002F11847>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fcarpedm20\u002Fdeep-rl-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **用于深度强化学习的 Dueling 网络架构** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06581>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fyoosan\u002Fdeeprl>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n\n====================\n应用 (Applications)\n====================\n\n--------------------\n图像识别 (Image Recognition)\n--------------------\n\n* **用于图像识别的深度残差学习 (Deep Residual Learning)** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fhtml\u002FHe_Deep_Residual_Learning_CVPR_2016_paper.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fgcr\u002Ftorch-residual-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **用于大规模图像识别的非常深卷积网络** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1409.1556>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **用于图像分类的多列深度神经网络** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1202.2745>`_]\n\n.. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **DeepID3：使用非常深的神经网络 (Neural Networks) 进行人脸识别** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1502.00873>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **深入卷积网络 (Convolutional Networks) 内部：可视化图像分类模型和显著图 (Saliency Maps)** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6034>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fartvandelay\u002FDeep_Inside_Convolutional_Networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Deep Image：扩展图像识别 (Image Recognition)** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fvc\u002Farxiv\u002Fpapers\u002F1501\u002F1501.02876v1.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **用于视觉识别和描述的长期循环 (Recurrent) 卷积网络** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FDonahue_Long-Term_Recurrent_Convolutional_2015_CVPR_paper.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002FJaggerYoung\u002FLRCN-for-Activity-Recognition>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **用于跨视听匹配识别的 3D 卷积神经网络 (Convolutional Neural Networks)** :\n  [`论文 \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8063416>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fastorfi\u002Flip-reading-deeplearning>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n--------------------\n物体识别 (Object Recognition)\n--------------------\n\n* **使用深度卷积神经网络 (Convolutional Neural Networks) 进行 ImageNet 分类** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4824-imagenet-classification-with-deep-convolutional-neural-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **使用 Places 数据库学习场景识别 (Scene Recognition) 的深度特征** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5349-learning-deep-features>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **使用深度神经网络的可扩展物体检测 (Object Detection)** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2014\u002Fhtml\u002FErhan_Scalable_Object_Detection_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Faster R-CNN：迈向使用区域提议网络 (Region Proposal Networks) 的实时物体检测** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Frbgirshick\u002Fpy-faster-rcnn>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **OverFeat：使用卷积网络集成识别、定位和检测** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6229>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fsermanet\u002FOverFeat>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **即用型 CNN (Convolutional Neural Networks) 特征：一个惊人的识别基线** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_workshops_2014\u002FW15\u002Fhtml\u002FRazavian_CNN_Features_Off-the-Shelf_2014_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **什么是物体识别的最佳多阶段架构？** :\n  [`论文 \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F5459469\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_2.png\n\n\n--------------------\n动作识别 (Action Recognition)\n--------------------\n\n* **用于视觉识别和描述的长期循环卷积网络** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FDonahue_Long-Term_Recurrent_Convolutional_2015_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **使用 3D 卷积网络学习时空特征 (Spatiotemporal Features)** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fhtml\u002FTran_Learning_Spatiotemporal_Features_ICCV_2015_paper.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002FDavideA\u002Fc3d-pytorch>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **利用时间结构描述视频** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fhtml\u002FYao_Describing_Videos_by_ICCV_2015_paper.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Ftsenghungchen\u002FSA-tensorflow>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **用于视频动作识别的卷积双流网络融合 (Convolutional Two-Stream Network Fusion)** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fhtml\u002FFeichtenhofer_Convolutional_Two-Stream_Network_CVPR_2016_paper.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Ffeichtenhofer\u002Ftwostreamfusion>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **时间片段网络 (Temporal Segment Networks)：迈向深度动作识别的良好实践** :\n  [`论文 \u003Chttps:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-46484-8_2>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fyjxiong\u002Ftemporal-segment-networks>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n----------------------------\n字幕生成 (Caption Generation)\n----------------------------\n\n* **Show, Attend and Tell：具有视觉注意力 (Visual Attention) 的神经图像字幕生成** :\n  [`论文 \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fxuc15.pdf>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fyunjey\u002Fshow-attend-and-tell>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Mind's Eye：用于图像字幕生成的循环视觉表示** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FChen_Minds_Eye_A_2015_CVPR_paper.html>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_2.png\n\n* **生成对抗 (Generative Adversarial) 文本到图像合成** :\n  [`论文 \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Freed16.pdf>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftext-to-image>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **用于生成图像描述的深度视觉 - 语义对齐 (Deep Visual-Semantic Alignments)** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FKarpathy_Deep_Visual-Semantic_Alignments_2015_CVPR_paper.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fjonkuo\u002FDeep-Learning-Image-Captioning>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Show and Tell：一个神经图像字幕生成器** :\n  [`论文 \u003Chttps:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fhtml\u002FVinyals_Show_and_Tell_2015_CVPR_paper.html>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002FDeepRNN\u002Fimage_captioning>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n\n----------------------------\n自然语言处理 (Natural Language Processing)\n----------------------------\n\n* **词和短语的分布式表示 (Distributed Representations) 及其组合性** :\n  [`论文 \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf>`_][`代码 \u003Chttps:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **向量空间 (Vector Space) 中词表示的高效估计** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fpdf\u002F1301.3781.pdf>`_][`代码 \u003Chttps:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **使用神经网络进行序列到序列 (Sequence to Sequence) 学习** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fpdf\u002F1409.3215.pdf>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Ffarizrahman4u\u002Fseq2seq>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **通过对齐和翻译联合学习的神经机器翻译 (Neural Machine Translation)** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fpdf\u002F1409.0473.pdf>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fnmt>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Get To The Point：使用指针 - 生成器网络 (Pointer-Generator Networks) 进行摘要** :\n  [`论文 \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04368>`_][`代码 \u003Chttps:\u002F\u002Fgithub.com\u002Fabisee\u002Fpointer-generator>`_]\n\n.. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Attention Is All You Need** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03762>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fjadore801120\u002Fattention-is-all-you-need-pytorch>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Convolutional Neural Networks (卷积神经网络) for Sentence Classification** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5882>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fyoonkim\u002FCNN_sentence>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n\n----------------------------\n语音技术 (Speech Technology)\n----------------------------\n\n* **Deep Neural Networks (深度神经网络) for Acoustic Modeling (声学建模) in Speech Recognition (语音识别): The Shared Views of Four Research Groups** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6296526\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_5.png\n\n* **Towards End-to-End (端到端) Speech Recognition with Recurrent Neural Networks (循环神经网络)** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fgraves14.pdf>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Speech recognition with deep recurrent neural networks** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6638947\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1507.06947>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n\n* **Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin** :\n  [`Paper \u003Chttp:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Famodei16.html>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FDeepSpeech>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n* **A novel scheme for speaker recognition using a phonetically-aware deep neural network** :\n  [`Paper \u003Chttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F6853887\u002F>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_3.png\n \n* **Text-Independent Speaker Verification (说话人验证) Using 3D Convolutional Neural Networks** :\n  [`Paper \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09422>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fastorfi\u002F3D-convolutional-speaker-recognition>`_]\n\n  .. image:: _img\u002Fmainpage\u002Fstar_4.png\n\n\n\n************\n数据集 (Datasets)\n************\n\n====================\n图像 (Image)\n====================\n\n\n----------------------------\n通用 (General)\n----------------------------\n\n* **MNIST** 手写数字：\n  [`Link \u003Chttp:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fmnist\u002F>`_]\n\n\n----------------------------\n人脸 (Face)\n----------------------------\n\n* **Face Recognition (人脸识别) Technology (FERET)** FERET 计划的目标是开发自动人脸识别能力，可用于协助安全、情报和执法人员在履行职责：\n  [`Link \u003Chttps:\u002F\u002Fwww.nist.gov\u002Fprograms-projects\u002Fface-recognition-technology-feret>`_]\n\n* **The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces** 2000 年 10 月至 12 月期间，我们收集了一个包含 68 人的 41,368 张图像的数据库：\n  [`Link \u003Chttps:\u002F\u002Fwww.ri.cmu.edu\u002Fpublications\u002Fthe-cmu-pose-illumination-and-expression-pie-database-of-human-faces\u002F>`_]\n\n* **YouTube Faces DB** 该数据集包含 1,595 不同人的 3,425 个视频。所有视频均从 YouTube 下载。每个主体平均有 2.15 个视频：\n  [`Link \u003Chttps:\u002F\u002Fwww.cs.tau.ac.il\u002F~wolf\u002Fytfaces\u002F>`_]\n\n* **Grammatical Facial Expressions Data Set** 开发用于协助面部表情的自动化分析：\n  [`Link \u003Chttps:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002FGrammatical+Facial+Expressions>`_]\n\n* **FaceScrub** 一个包含 530 人的超过 100,000 张人脸图像的数据集：\n  [`Link \u003Chttp:\u002F\u002Fvintage.winklerbros.net\u002Ffacescrub.html>`_]\n\n* **IMDB-WIKI** 500k+ 带有年龄和性别标签的人脸图像：\n  [`Link \u003Chttps:\u002F\u002Fdata.vision.ee.ethz.ch\u002Fcvl\u002Frrothe\u002Fimdb-wiki\u002F>`_]\n\n* **FDDB** 人脸检测数据集和基准 (FDDB)：\n  [`Link \u003Chttp:\u002F\u002Fvis-www.cs.umass.edu\u002Ffddb\u002F>`_]\n\n----------------------------\n物体识别 (Object Recognition)\n----------------------------\n\n* **COCO** Microsoft COCO: Common Objects in Context：\n  [`Link \u003Chttp:\u002F\u002Fcocodataset.org\u002F#home>`_]\n\n* **ImageNet** 著名的 ImageNet 数据集：\n  [`Link \u003Chttp:\u002F\u002Fwww.image-net.org\u002F>`_]\n\n* **Open Images Dataset** Open Images 是一个包含约 900 万图像的数据集，已标注图像级标签和物体边界框 (object bounding boxes)：\n  [`Link \u003Chttps:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html>`_]\n\n* **Caltech-256 Object Category Dataset** 一个大型物体分类数据集：\n  [`Link \u003Chttps:\u002F\u002Fauthors.library.caltech.edu\u002F7694\u002F>`_]\n\n* **Pascal VOC dataset** 一个用于分类任务的大型数据集：\n  [`Link \u003Chttp:\u002F\u002Fhost.robots.ox.ac.uk\u002Fpascal\u002FVOC\u002F>`_]\n\n* **CIFAR 10 \u002F CIFAR 100** CIFAR-10 数据集由 10 个类别中的 60000 张 32x32 彩色图像组成。CIFAR-100 类似于 CIFAR-10，但它包含 100 个类别，每个类别包含 600 张图像：\n  [`Link \u003Chttps:\u002F\u002Fwww.cs.toronto.edu\u002F~kriz\u002Fcifar.html>`_]\n\n\n----------------------------\n动作识别 (Action recognition)\n----------------------------\n\n* **HMDB** 一个大型人体运动数据库：\n  [`Link \u003Chttp:\u002F\u002Fserre-lab.clps.brown.edu\u002Fresource\u002Fhmdb-a-large-human-motion-database\u002F>`_]\n\n* **MHAD** 伯克利多模态人体动作数据库：\n  [`Link \u003Chttp:\u002F\u002Ftele-immersion.citris-uc.org\u002Fberkeley_mhad>`_]\n\n* **UCF101 - Action Recognition Data Set** UCF101 是一个动作识别数据集，包含从 YouTube 收集的真实动作视频，共有 101 个动作类别。该数据集是拥有 50 个动作类别的 UCF50 数据集的扩展：\n  [`Link \u003Chttp:\u002F\u002Fcrcv.ucf.edu\u002Fdata\u002FUCF101.php>`_]\n\n* **THUMOS Dataset** 一个用于动作分类的大型数据集：\n  [`Link \u003Chttp:\u002F\u002Fcrcv.ucf.edu\u002Fdata\u002FTHUMOS.php>`_]\n\n* **ActivityNet** 一个用于人类活动理解的大规模视频基准：\n  [`Link \u003Chttp:\u002F\u002Factivity-net.org\u002F>`_]\n\n======================================\n文本与自然语言处理 (Text and Natural Language Processing)\n======================================\n\n\n-----------------------\n通用 (General)\n-----------------------\n\n* **1 Billion Word Language Model (语言模型) Benchmark**: 该项目的目的是为语言建模实验提供标准的训练和测试设置：\n  [`Link \u003Chttp:\u002F\u002Fwww.statmt.org\u002Flm-benchmark\u002F>`_]\n\n* **Common Crawl**: Common Crawl 语料库 (Corpus) 包含过去 7 年收集的 PB 级数据。它包含原始网页数据、提取的元数据和文本提取内容：\n  [`Link \u003Chttp:\u002F\u002Fcommoncrawl.org\u002Fthe-data\u002Fget-started\u002F>`_]\n\n* **Yelp Open Dataset**: Yelp 业务、评论和用户数据的子集，用于个人、教育和学术目的：\n  [`Link \u003Chttps:\u002F\u002Fwww.yelp.com\u002Fdataset>`_]\n\n\n-----------------------\n文本分类 (Text classification)\n-----------------------\n\n* **20 newsgroups** 20 Newsgroups 数据集是一个包含约 20,000 个新闻组文档的集合，（几乎）均匀地分布在 20 个不同的新闻组中：\n  [`Link \u003Chttp:\u002F\u002Fqwone.com\u002F~jason\u002F20Newsgroups\u002F>`_]\n\n* **Broadcast News** 1996 年广播新闻语音语料库包含来自 ABC、CNN 和 CSPAN 电视网络以及 NPR 和 PRI 广播网络的总共 104 小时广播及其相应转录：\n  [`Link \u003Chttps:\u002F\u002Fcatalog.ldc.upenn.edu\u002FLDC97S44>`_]\n\n* **The wikitext 长期依赖语言建模数据集 (The wikitext long term dependency language modeling dataset)**: 从维基百科经过验证的优良和特色条目集合中提取的超过 1 亿个词元 (tokens) 的集合。:\n  [`链接 \u003Chttps:\u002F\u002Feinstein.ai\u002Fresearch\u002Fthe-wikitext-long-term-dependency-language-modeling-dataset>`_]\n\n-----------------------\n问答 (Question Answering)\n-----------------------\n\n* **Question Answering Corpus (问答语料库)** 由 Deep Mind 和牛津大学提供，包含两个新的语料库 (Corpora)，约有 100 万篇新闻故事，以及来自 CNN 和 Daily Mail 网站的相关查询。\n  [`链接 \u003Chttps:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frc-data>`_]\n\n* **斯坦福问答数据集 (Stanford Question Answering Dataset, SQuAD)** 由众包工作者 (crowdworkers) 在一组维基百科文章上提出的问题组成：\n  [`链接 \u003Chttps:\u002F\u002Frajpurkar.github.io\u002FSQuAD-explorer\u002F>`_]\n\n* **亚马逊问答数据 (Amazon question\u002Fanswer data)** 包含来自亚马逊的问答数据，总计约 140 万个已回答的问题：\n  [`链接 \u003Chttp:\u002F\u002Fjmcauley.ucsd.edu\u002Fdata\u002Famazon\u002Fqa\u002F>`_]\n\n\n\n-----------------------\n情感分析 (Sentiment Analysis)\n-----------------------\n\n* **多领域情感数据集 (Multi-Domain Sentiment Dataset)** 该多领域情感数据集包含来自 Amazon.com 的许多产品类型（领域）的产品评论：\n  [`链接 \u003Chttp:\u002F\u002Fwww.cs.jhu.edu\u002F~mdredze\u002Fdatasets\u002Fsentiment\u002F>`_]\n\n* **斯坦福情感树库数据集 (Stanford Sentiment Treebank Dataset)** 斯坦福情感树库是第一个具有完全标记解析树 (parse trees) 的语料库，允许对语言中情感的组合效应进行完整分析：\n  [`链接 \u003Chttps:\u002F\u002Fnlp.stanford.edu\u002Fsentiment\u002F>`_]\n\n* **大型电影评论数据集 (Large Movie Review Dataset)**: 这是一个用于二分类情感分类 (binary sentiment classification) 的数据集：\n  [`链接 \u003Chttp:\u002F\u002Fai.stanford.edu\u002F~amaas\u002Fdata\u002Fsentiment\u002F>`_]\n\n\n-----------------------\n机器翻译 (Machine Translation)\n-----------------------\n\n* **加拿大第 36 届议会对齐汉萨德记录 (Aligned Hansards of the 36th Parliament of Canada)** 数据集包含 130 万对对齐的文本块：\n  [`链接 \u003Chttps:\u002F\u002Fwww.isi.edu\u002Fnatural-language\u002Fdownload\u002Fhansard\u002F>`_]\n\n* **Europarl: 统计机器翻译平行语料库 (Europarl: A Parallel Corpus for Statistical Machine Translation)** 数据集提取自欧洲议会的会议记录：\n  [`链接 \u003Chttp:\u002F\u002Fwww.statmt.org\u002Feuroparl\u002F>`_]\n\n\n-----------------------\n摘要生成 (Summarization)\n-----------------------\n\n* **法律案例报告数据集 (Legal Case Reports Data Set)** 作为包含 4000 个法律案例的文本语料库，用于自动摘要和引用分析。:\n  [`链接 \u003Chttps:\u002F\u002Farchive.ics.uci.edu\u002Fml\u002Fdatasets\u002FLegal+Case+Reports>`_]\n\n\n======================================\n语音技术 (Speech Technology)\n======================================\n\n* **TIMIT 声学 - 语音连续语音语料库 (TIMIT Acoustic-Phonetic Continuous Speech Corpus)** TIMIT 朗读语音语料库旨在为声学 - 语音研究 (acoustic-phonetic studies) 以及自动语音识别系统 (automatic speech recognition systems) 的开发和评估提供语音数据：\n  [`链接 \u003Chttps:\u002F\u002Fcatalog.ldc.upenn.edu\u002Fldc93s1>`_]\n\n* **LibriSpeech** LibriSpeech 是一个约 1000 小时的 16kHz 朗读英语语音语料库，由 Vassil Panayotov 准备，Daniel Povey 协助：\n  [`链接 \u003Chttp:\u002F\u002Fwww.openslr.org\u002F12\u002F>`_]\n\n* **VoxCeleb** 一个大规模视听数据集 (audio-visual dataset)：\n  [`链接 \u003Chttp:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fdata\u002Fvoxceleb\u002F>`_]\n\n* **NIST 说话人识别 (NIST Speaker Recognition)**:\n  [`链接 \u003Chttps:\u002F\u002Fwww.nist.gov\u002Fitl\u002Fiad\u002Fmig\u002Fspeaker-recognition>`_]\n\n\n\n\n\n\n************\n课程\n************\n\n.. image:: _img\u002Fmainpage\u002Fonline.png\n\n* **机器学习 (Machine Learning)** 由斯坦福大学在 Coursera 上提供 :\n  [`链接 \u003Chttps:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning>`_]\n\n* **神经网络与深度学习 (Neural Networks and Deep Learning)** 专项课程由 Coursera 提供：\n  [`链接 \u003Chttps:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fneural-networks-deep-learning>`_]\n\n* **深度学习入门 (Intro to Deep Learning)** 由 Google 提供：\n  [`链接 \u003Chttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-learning--ud730>`_]\n\n* **深度学习导论 (Introduction to Deep Learning)** 由 CMU 提供：\n  [`链接 \u003Chttp:\u002F\u002Fdeeplearning.cs.cmu.edu\u002F>`_]\n\n* **NVIDIA 深度学习研究所 (NVIDIA Deep Learning Institute)** 由 NVIDIA 提供：\n  [`链接 \u003Chttps:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdeep-learning-ai\u002Feducation\u002F>`_]\n\n* **视觉识别卷积神经网络 (Convolutional Neural Networks for Visual Recognition)** 由斯坦福大学提供：\n  [`链接 \u003Chttp:\u002F\u002Fcs231n.stanford.edu\u002F>`_]\n\n* **自然语言处理深度学习 (Deep Learning for Natural Language Processing)** 由斯坦福大学提供：\n  [`链接 \u003Chttp:\u002F\u002Fcs224d.stanford.edu\u002F>`_]\n\n* **深度学习 (Deep Learning)** 由 fast.ai 提供：\n  [`链接 \u003Chttp:\u002F\u002Fwww.fast.ai\u002F>`_]\n\n* **视觉计算深度学习课程 (Course on Deep Learning for Visual Computing)** 由 IITKGP 提供：\n  [`链接 \u003Chttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLuv3GM6-gsE1Biyakccxb3FAn4wBLyfWf>`_]\n\n\n\n\n************\n书籍\n************\n\n.. image:: _img\u002Fmainpage\u002Fbooks.jpg\n\n* **深度学习 (Deep Learning)** 作者 Ian Goodfellow：\n  [`链接 \u003Chttp:\u002F\u002Fwww.deeplearningbook.org\u002F>`_]\n\n* **神经网络与深度学习 (Neural Networks and Deep Learning)** :\n  [`链接 \u003Chttp:\u002F\u002Fneuralnetworksanddeeplearning.com\u002F>`_]\n\n* **Python 深度学习 (Deep Learning with Python)**:\n  [`链接 \u003Chttps:\u002F\u002Fwww.amazon.com\u002FDeep-Learning-Python-Francois-Chollet\u002Fdp\u002F1617294438\u002Fref=as_li_ss_tl?s=books&ie=UTF8&qid=1519989624&sr=1-4&keywords=deep+learning+with+python&linkCode=sl1&tag=trndingcom-20&linkId=ec7663329fdb7ace60f39c762e999683>`_]\n\n* **使用 Scikit-Learn 和 TensorFlow 动手实践机器学习：构建智能系统的概念、工具和技术 (Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems)**:\n  [`链接 \u003Chttps:\u002F\u002Fwww.amazon.com\u002FHands-Machine-Learning-Scikit-Learn-TensorFlow\u002Fdp\u002F1491962291\u002Fref=as_li_ss_tl?ie=UTF8&qid=1519989725&sr=1-2-ent&linkCode=sl1&tag=trndingcom-20&linkId=71938c9398940c7b0a811dc1cfef7cc3>`_]\n\n\n************\n博客\n************\n\n.. image:: _img\u002Fmainpage\u002FBlogger_icon.png\n\n* **Colah 的博客 (Colah's blog)**:\n  [`链接 \u003Chttp:\u002F\u002Fcolah.github.io\u002F>`_]\n\n* **Andrej Karpathy 博客 (Andrej Karpathy blog)**:\n  [`链接 \u003Chttp:\u002F\u002Fkarpathy.github.io\u002F>`_]\n\n* **The Spectator** Shakir 的机器学习博客：\n  [`链接 \u003Chttp:\u002F\u002Fblog.shakirm.com\u002F>`_]\n\n* **WILDML**:\n  [`链接 \u003Chttp:\u002F\u002Fwww.wildml.com\u002Fabout\u002F>`_]\n\n* **Distill 博客 (Distill blog)** 它更像是一本期刊而不是博客，因为它有同行评审流程 (peer review process)，只有接受的文章才会发布在上面。:\n  [`链接 \u003Chttps:\u002F\u002Fdistill.pub\u002F>`_]\n\n* **BAIR** 伯克利人工智能研究 (Berkeley Artificial Intelligent Research)：\n  [`链接 \u003Chttp:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F>`_]\n\n* **Sebastian Ruder 的博客 (Sebastian Ruder's blog)**:\n  [`链接 \u003Chttp:\u002F\u002Fruder.io\u002F>`_]\n\n* **inFERENCe**:\n  [`链接 \u003Chttps:\u002F\u002Fwww.inference.vc\u002Fpage\u002F2\u002F>`_]\n\n* **i am trask** 一个机器学习工艺博客 (A Machine Learning Craftsmanship Blog)：\n  [`链接 \u003Chttp:\u002F\u002Fiamtrask.github.io>`_]\n\n\n************\n教程\n************\n\n.. image:: _img\u002Fmainpage\u002Ftutorial.png\n\n* **深度学习教程 (Deep Learning Tutorials)**:\n  [`链接 \u003Chttp:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002F>`_]\n\n* **使用 Pytorch 进行自然语言处理深度学习 (Deep Learning for NLP with Pytorch)** 由 Pytorch 提供：\n  [`链接 \u003Chttps:\u002F\u002Fpytorch.org\u002Ftutorials\u002Fbeginner\u002Fdeep_learning_nlp_tutorial.html>`_]\n\n* **自然语言处理深度学习：使用 Jupyter Notebooks 的教程 (Deep Learning for Natural Language Processing: Tutorials with Jupyter Notebooks)** 作者 Jon Krohn：\n  [`链接 \u003Chttps:\u002F\u002Finsights.untapt.com\u002Fdeep-learning-for-natural-language-processing-tutorials-with-jupyter-notebooks-ad67f336ce3f>`_]\n\n\n************\n框架 (Frameworks)\n************\n\n* **Tensorflow**:\n  [`链接 \u003Chttps:\u002F\u002Fwww.tensorflow.org\u002F>`_]\n\n* **Pytorch**:\n  [`链接 \u003Chttps:\u002F\u002Fpytorch.org\u002F>`_]\n\n* **CNTK**:\n  [`链接 \u003Chttps:\u002F\u002Fdocs.microsoft.com\u002Fen-us\u002Fcognitive-toolkit\u002F>`_]\n\n* **MatConvNet**:\n  [`链接 \u003Chttp:\u002F\u002Fwww.vlfeat.org\u002Fmatconvnet\u002F>`_]\n\n* **Keras**:\n  [`链接 \u003Chttps:\u002F\u002Fkeras.io\u002F>`_]\n\n* **Caffe**:\n  [`链接 \u003Chttp:\u002F\u002Fcaffe.berkeleyvision.org\u002F>`_]\n\n* **Theano**:\n  [`链接 \u003Chttp:\u002F\u002Fwww.deeplearning.net\u002Fsoftware\u002Ftheano\u002F>`_]\n\n* **CuDNN**:\n  [`链接 \u003Chttps:\u002F\u002Fdeveloper.nvidia.com\u002Fcudnn>`_]\n\n* **Torch**:\n  [`链接 \u003Chttps:\u002F\u002Fgithub.com\u002Ftorch\u002Ftorch7>`_]\n\n* **Deeplearning4j**:\n  [`链接 \u003Chttps:\u002F\u002Fdeeplearning4j.org\u002F>`]\n\n\n************\n贡献\n************\n\n\n*对于拼写错误，除非涉及重大更改，否则请不要创建 Pull Request（拉取请求）。相反，请在 Issues（问题）中声明或通过电子邮件联系 Repository（仓库）所有者*。请注意，我们有一份 Code of Conduct（行为准则），请在与项目的所有互动中遵守它。\n\n========================\nPull Request（拉取请求）流程\n========================\n\n请考虑以下标准，以便更好地帮助我们：\n\n1. Pull Request 主要期望是链接建议。\n2. 请确保您建议的资源没有过时或失效。\n3. 在进行构建和创建 Pull Request 时，确保在该层结束前移除任何安装或构建依赖项。\n4. 添加注释详细说明接口的更改，这包括新的 Environment Variables（环境变量）、暴露的 Ports（端口）、有用的文件位置和 Container Parameters（容器参数）。\n5. 一旦获得至少一名其他开发人员的签字确认，您可以合并 Pull Request；如果您没有权限这样做，如果您认为所有检查都已通过，可以请求所有者为您合并。\n\n========================\n最后说明\n========================\n\n我们期待您的宝贵反馈。请帮助我们改进这个开源项目，让我们的工作变得更好。\n对于贡献，请创建 Pull Request，我们将迅速进行调查。再次感谢您的宝贵反馈和支持。","# Deep-Learning-Roadmap 快速上手指南\n\n## 环境准备\n\n本项目是一个深度学习资源汇总仓库，主要用于查找论文、代码及相关学习资料。\n\n- **操作系统**：Windows \u002F Linux \u002F macOS 均可\n- **前置依赖**：\n  - **Web 浏览器**：用于在线浏览仓库内容\n  - **Git**：用于克隆仓库到本地（可选）\n  - **Python**：如需运行仓库中链接的具体代码示例，建议安装 Python 环境\n\n## 安装步骤\n\n本项目主要为资源索引，无需复杂安装。您可以通过以下方式获取内容：\n\n1. **在线浏览**\n   直接访问 GitHub 仓库页面查看最新资源列表。\n\n2. **本地克隆**\n   使用 Git 命令将仓库克隆到本地，方便离线查阅或搜索：\n\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-World.git\n   ```\n\n3. **依赖安装（可选）**\n   若需运行仓库中链接的具体代码项目，请进入对应子项目目录，根据其独立的 `requirements.txt` 安装依赖。\n\n## 基本使用\n\n克隆完成后，打开根目录下的 README 文件或直接在线查看，通过目录导航查找所需资源。\n\n1. **浏览分类**\n   资源已按领域分类，主要包含：\n   - **Models**：卷积网络 (Convolutional Networks)、循环网络 (Recurrent Networks)、自编码器 (Autoencoders)、生成模型 (Generative Models) 等。\n   - **Core**：优化 (Optimization)、表示学习 (Representation Learning)、迁移学习 (Transfer Learning) 等。\n\n2. **查找资源**\n   在对应分类下查找感兴趣的论文或项目。每个条目通常包含：\n   - `Paper`：论文链接\n   - `Code`：代码实现链接（如有）\n\n3. **使用示例**\n   例如，若要查找卷积神经网络相关资源：\n   - 定位到 **Models** -> **Convolutional Networks** 章节。\n   - 点击 **Imagenet classification with deep convolutional neural networks** 条目下的 `Paper` 或 `Code` 链接即可访问详细内容。\n\n   ```markdown\n   * **Imagenet classification with deep convolutional neural networks** :\n     [`Paper \u003Chttp:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4824-imagenet-classification-with-deep-convolutional-neural-networks>`_][`Code \u003Chttps:\u002F\u002Fgithub.com\u002Fdontfollowmeimcrazy\u002Fimagenet>`_]\n   ```","某科技公司算法工程师小张，接到任务需在两周内为新的医疗图像识别项目搭建基准模型，急需查找经典的卷积神经网络论文及可靠代码实现，但面对海量信息无从下手。\n\n### 没有 Deep-Learning-Roadmap 时\n- 在 arXiv 和 Google 大海捞针，筛选海量无关论文耗时极长，项目进度严重滞后。\n- 找到论文后难以匹配对应的开源代码，复现成本高昂且容易因版本问题出错。\n- 缺乏系统分类，不清楚不同模型的具体适用场景，技术选型困难重重。\n- 学习路径混乱，容易陷入细节而偏离项目核心需求，大幅增加试错成本。\n\n### 使用 Deep-Learning-Roadmap 后\n- 通过 Deep-Learning-Roadmap 的分类目录，直接定位到卷积网络板块，节省大量检索时间。\n- 每个模型条目均提供论文与代码的双向链接，一键获取资源，复现效率倍增。\n- 资源经过针对性整理，快速匹配到适合图像分类的经典模型，技术选型更精准。\n- 结构化的知识体系帮助小张迅速建立技术选型思路，专注核心开发而非资源搜集。\n\nDeep-Learning-Roadmap 将分散的深度学习资源系统化，显著降低了研究人员的信息检索与筛选成本，让开发者能更专注于算法创新与业务落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fastorfi_Deep-Learning-Roadmap_ce44edf7.png","astorfi","Sina Torfi","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fastorfi_af2bf404.png","PhD & Developer\r\nworking on Deep Learning, Computer Vision & NLP\r\n","Meta","San Jose","amirsina.torfi@gmail.com",null,"https:\u002F\u002Fastorfi.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fastorfi",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,3189,313,"2026-03-28T04:49:39","MIT",1,"未说明",{"notes":97,"python":95,"dependencies":98},"该项目为深度学习资源导航与论文代码汇总清单，并非单一可执行工具，因此无特定运行环境要求。徽章显示项目与 Python 相关，具体依赖需参考链接到的外部仓库（如 TensorFlow、PyTorch 项目）。",[],[13],[101,102],"deep-learning","reinforcement-learning","2026-03-27T02:49:30.150509","2026-04-06T09:44:33.396432",[106,111,116],{"id":107,"question_zh":108,"answer_zh":109,"source_url":110},4639,"路线图中的进度条代表什么含义？","进度条用于表示参考资料的重要性程度。","https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-Roadmap\u002Fissues\u002F4",{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},4640,"如何向项目贡献课程或资源信息？","请将相关内容通过提交 Pull Request 的方式进行贡献。","https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-Roadmap\u002Fissues\u002F10",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},4641,"是否有针对不同理解水平和计算能力的数据集推荐？","该功能目前正在开发中，敬请期待后续更新。","https:\u002F\u002Fgithub.com\u002Fastorfi\u002FDeep-Learning-Roadmap\u002Fissues\u002F1",[]]