[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ujjwalkarn--Machine-Learning-Tutorials":3,"tool-ujjwalkarn--Machine-Learning-Tutorials":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
- **ragflow** (infiniflow/ragflow, ★77,062) — A leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It pairs state-of-the-art RAG techniques with agent capabilities: knowledge is extracted efficiently from all kinds of documents, and models can reason and execute tasks on top of it. Hallucination and stale knowledge are chronic pain points of LLM applications; by deeply parsing complex document structure (tables, charts, mixed layouts), RAGFlow markedly improves retrieval accuracy, curbing fabricated answers and keeping responses both grounded and current. Its built-in agent mechanism goes further, letting the system not just answer questions but autonomously plan the steps to solve complex ones. It suits developers, enterprise technical teams, and AI researchers — from builders of private knowledge-base Q&A systems to innovators bringing LLMs into vertical domains. A visual workflow editor and flexible APIs lower the barrier for non-algorithm users while supporting deep customization for professionals. Released under Apache 2.0, it is becoming an important bridge between general-purpose LLMs and domain-specific knowledge.

## ujjwalkarn/Machine-Learning-Tutorials

> Machine learning and deep learning tutorials, articles and other resources.

Machine-Learning-Tutorials is a carefully curated library of machine learning and deep learning resources: tutorials, articles, and practical material organized by topic. In a field where knowledge moves fast and resources are scattered and uneven in quality, it systematizes the sprawl, covering everything from basic statistics and classic algorithms (linear regression, support vector machines) to frontier topics (convolutional neural networks, natural language processing, reinforcement learning). Students just starting out, developers leveling up, and researchers alike can find a path that fits their level. The collection links classic courses such as Andrew Ng's, alongside interview guides, cheat sheets, code frameworks, and Kaggle practice resources, plus dedicated indexes of R and Python tutorials. Its strengths are a high degree of structure and active community maintenance, helping users locate specific topics quickly rather than drowning in information. Whether you want to build a systematic machine-learning knowledge base or need a reliable desk reference, Machine-Learning-Tutorials is an excellent starting point.

# Machine Learning & Deep Learning Tutorials [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)

- This repository contains a topic-wise curated list of Machine Learning and Deep Learning tutorials, articles and other resources. Other awesome lists can be found in this [list](https://github.com/sindresorhus/awesome).
- If you want to contribute to this list, please read the [Contributing Guidelines](https://github.com/ujjwalkarn/Machine-Learning-Tutorials/blob/master/contributing.md).

- [Curated list of R tutorials for Data Science, NLP and Machine Learning](https://github.com/ujjwalkarn/DataScienceR).

- [Curated list of Python tutorials for Data Science, NLP and Machine Learning](https://github.com/ujjwalkarn/DataSciencePython).


## Contents
- [Introduction](#general)
- [Interview Resources](#interview)
- [Artificial Intelligence](#ai)
- [Genetic Algorithms](#ga)
- [Statistics](#stat)
- [Useful Blogs](#blogs)
- [Resources on Quora](#quora)
- [Resources on Kaggle](#kaggle)
- [Cheat Sheets](#cs)
- [Classification](#classification)
- [Linear Regression](#linear)
- [Logistic Regression](#logistic)
- [Model Validation using Resampling](#validation)
    - [Cross Validation](#cross)
    - [Bootstrapping](#boot)
- [Deep Learning](#deep)
    - [Frameworks](#frame)
    - [Feed Forward Networks](#feed)
    - [Recurrent Neural Nets, LSTM, GRU](#rnn)
    - [Restricted Boltzmann Machine, DBNs](#rbm)
    - [Autoencoders](#auto)
    - [Convolutional Neural Nets](#cnn)
    - [Graph Representation Learning](#nrl)
- [Natural Language Processing](#nlp)
    - [Topic Modeling, LDA](#topic)
    - [Word2Vec](#word2vec)
- [Computer Vision](#vision)
- [Support Vector Machine](#svm)
- [Reinforcement Learning](#rl)
- [Decision Trees](#dt)
- [Random Forest / Bagging](#rf)
- [Boosting](#gbm)
- [Ensembles](#ensem)
- [Stacking Models](#stack)
- [VC Dimension](#vc)
- [Bayesian Machine Learning](#bayes)
- [Semi Supervised Learning](#semi)
- [Optimizations](#opt)
- [Other Useful Tutorials](#other)

<a name="general" />

## Introduction

- [Machine Learning Course by Andrew Ng (Stanford University)](https://www.coursera.org/learn/machine-learning)

- [AI/ML YouTube Courses](https://github.com/dair-ai/ML-YouTube-Courses)

- [Curated List of Machine Learning Resources](https://hackr.io/tutorials/learn-machine-learning-ml)

- [In-depth introduction to machine learning in 15 hours of expert videos](http://www.dataschool.io/15-hours-of-expert-machine-learning-videos/)

- [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)

- [List of Machine Learning University Courses](https://github.com/prakhar1989/awesome-courses#machine-learning)

- [Machine Learning for Software Engineers](https://github.com/ZuzooVn/machine-learning-for-software-engineers)

- [Dive into Machine Learning](https://github.com/hangtwenty/dive-into-machine-learning)

- [A curated list of awesome Machine Learning frameworks, libraries and software](https://github.com/josephmisiti/awesome-machine-learning)

- [A curated list of awesome data visualization libraries and resources](https://github.com/fasouto/awesome-dataviz)

- [An awesome Data Science repository to learn and apply to real world problems](https://github.com/okulbilisim/awesome-datascience)

- [The Open Source Data Science Masters](http://datasciencemasters.org/)
- [Machine Learning FAQs on Cross Validated](http://stats.stackexchange.com/questions/tagged/machine-learning)

- [Machine Learning algorithms that you should always have a strong understanding of](https://www.quora.com/What-are-some-Machine-Learning-algorithms-that-you-should-always-have-a-strong-understanding-of-and-why)

- [Difference between Linearly Independent, Orthogonal, and Uncorrelated Variables](http://terpconnect.umd.edu/~bmomen/BIOM621/LineardepCorrOrthogonal.pdf)

- [List of Machine Learning Concepts](https://en.wikipedia.org/wiki/List_of_machine_learning_concepts)

- [Slides on Several Machine Learning Topics](http://www.slideshare.net/pierluca.lanzi/presentations)

- [MIT Machine Learning Lecture Slides](http://www.ai.mit.edu/courses/6.867-f04/lectures.html)

- [Comparison of Supervised Learning Algorithms](http://www.dataschool.io/comparing-supervised-learning-algorithms/)

- [Learning Data Science Fundamentals](http://www.dataschool.io/learning-data-science-fundamentals/)

- [Machine Learning mistakes to avoid](https://medium.com/@nomadic_mind/new-to-machine-learning-avoid-these-three-mistakes-73258b3848a4#.lih061l3l)

- [Statistical Machine Learning Course](http://www.stat.cmu.edu/~larry/=sml/)

- [TheAnalyticsEdge edX Notes and Codes](https://github.com/pedrosan/TheAnalyticsEdge)

- [Have Fun With Machine Learning](https://github.com/humphd/have-fun-with-machine-learning)

- [Twitter's Most Shared #machineLearning Content From The Past 7 Days](http://theherdlocker.com/tweet/popularity/machinelearning)

- [Grokking Machine Learning](https://www.manning.com/books/grokking-machine-learning)

<a name="interview" />

## Interview Resources

- [41 Essential Machine Learning Interview Questions (with answers)](https://www.springboard.com/blog/machine-learning-interview-questions/)

- [How can a computer science graduate student prepare himself for data scientist interviews?](https://www.quora.com/How-can-a-computer-science-graduate-student-prepare-himself-for-data-scientist-machine-learning-intern-interviews)

- [How do I learn Machine Learning?](https://www.quora.com/How-do-I-learn-machine-learning-1)

- [FAQs about Data Science Interviews](https://www.quora.com/topic/Data-Science-Interviews/faq)

- [What are the key skills of a data scientist?](https://www.quora.com/What-are-the-key-skills-of-a-data-scientist)

- [The Big List of DS/ML Interview Resources](https://towardsdatascience.com/the-big-list-of-ds-ml-interview-resources-2db4f651bd63)

<a name="ai" />

## Artificial Intelligence

- [Awesome Artificial Intelligence (GitHub Repo)](https://github.com/owainlewis/awesome-artificial-intelligence)

- [UC Berkeley CS188 Intro to AI](http://ai.berkeley.edu/home.html), [Lecture Videos](http://ai.berkeley.edu/lecture_videos.html), [2](https://www.youtube.com/watch?v=W1S-HSakPTM)

- [Programming Community Curated Resources for learning Artificial Intelligence](https://hackr.io/tutorials/learn-artificial-intelligence-ai)

- [MIT 6.034 Artificial Intelligence Lecture Videos](https://www.youtube.com/playlist?list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi), [Complete Course](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/)

- [edX course | Klein & Abbeel](https://courses.edx.org/courses/BerkeleyX/CS188x_1/1T2013/info)

- [Udacity Course | Norvig & Thrun](https://www.udacity.com/course/intro-to-artificial-intelligence--cs271)

- [TED talks on AI](http://www.ted.com/playlists/310/talks_on_artificial_intelligen)

<a name="ga" />

## Genetic Algorithms

- [Genetic Algorithms Wikipedia Page](https://en.wikipedia.org/wiki/Genetic_algorithm)

- [Simple Implementation of Genetic Algorithms in Python (Part 1)](http://outlace.com/miniga.html), [Part 2](http://outlace.com/miniga_addendum.html)

- [Genetic Algorithms vs Artificial Neural Networks](http://stackoverflow.com/questions/1402370/when-to-use-genetic-algorithms-vs-when-to-use-neural-networks)

- [Genetic Algorithms Explained in Plain English](http://www.ai-junkie.com/ga/intro/gat1.html)

- [Genetic Programming](https://en.wikipedia.org/wiki/Genetic_programming)

    - [Genetic Programming in Python (GitHub)](https://github.com/trevorstephens/gplearn)

    - [Genetic Algorithms vs Genetic Programming (Quora)](https://www.quora.com/Whats-the-difference-between-Genetic-Algorithms-and-Genetic-Programming), [StackOverflow](http://stackoverflow.com/questions/3819977/what-are-the-differences-between-genetic-algorithms-and-genetic-programming)
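To make the mechanics concrete alongside the links above, here is a minimal genetic algorithm in Python for the classic OneMax problem (evolve a bitstring toward all ones). It is an illustrative sketch, not taken from any of the listed resources; the population size, tournament selection, and mutation rate are arbitrary choices.

```python
import random

GENOME_LEN = 32      # bits per individual
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    # OneMax: fitness is simply the number of 1-bits.
    return sum(genome)

def select(population):
    # Tournament selection: the fitter of two random individuals.
    return max(random.sample(population, 2), key=fitness)

def crossover(a, b):
    # Single-point crossover.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    if fitness(best) == GENOME_LEN:
        break

print(f"generation {generation}: best fitness {fitness(best)}/{GENOME_LEN}")
```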
<a name="stat" />

## Statistics

- [Stat Trek Website](http://stattrek.com/) - A dedicated website to teach yourself Statistics

- [Learn Statistics Using Python](https://github.com/rouseguy/intro2stats) - Learn Statistics using an application-centric programming approach

- [Statistics for Hackers | Slides | @jakevdp](https://speakerdeck.com/jakevdp/statistics-for-hackers) - Slides by Jake VanderPlas

- [Online Statistics Book](http://onlinestatbook.com/2/index.html) - An Interactive Multimedia Course for Studying Statistics

- [What is a Sampling Distribution?](http://stattrek.com/sampling/sampling-distribution.aspx)

- Tutorials

    - [AP Statistics Tutorial](http://stattrek.com/tutorials/ap-statistics-tutorial.aspx)

    - [Statistics and Probability Tutorial](http://stattrek.com/tutorials/statistics-tutorial.aspx)

    - [Matrix Algebra Tutorial](http://stattrek.com/tutorials/matrix-algebra-tutorial.aspx)

- [What is an Unbiased Estimator?](https://www.physicsforums.com/threads/what-is-an-unbiased-estimator.547728/)

- [Goodness of Fit Explained](https://en.wikipedia.org/wiki/Goodness_of_fit)

- [What are QQ Plots?](http://onlinestatbook.com/2/advanced_graphs/q-q_plots.html)

- [OpenIntro Statistics](https://www.openintro.org/stat/textbook.php?stat_book=os) - Free PDF textbook
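As a companion to the "What is a Sampling Distribution?" link above, this short NumPy sketch (an illustration of the concept, not from the listed resources) draws repeated samples from a skewed population and shows the spread of the sample mean shrinking like sigma/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(42)
# A skewed population with mean 2 and standard deviation 2.
population = rng.exponential(scale=2.0, size=100_000)

for n in (5, 50, 500):
    # Empirical sampling distribution: the mean of many samples of size n.
    means = np.array([rng.choice(population, size=n).mean()
                      for _ in range(2_000)])
    print(f"n={n:3d}  sd of sample means={means.std(ddof=1):.3f}  "
          f"theory sigma/sqrt(n)={2.0 / np.sqrt(n):.3f}")
```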
<a name="blogs" />

## Useful Blogs

- [Edwin Chen's Blog](http://blog.echen.me/) - A blog about math, stats, ML, crowdsourcing, and data science

- [The Data School Blog](http://www.dataschool.io/) - Data science for beginners!

- [ML Wave](http://mlwave.com/) - A blog for learning Machine Learning

- [Andrej Karpathy](http://karpathy.github.io/) - A blog about Deep Learning and Data Science in general

- [Colah's Blog](http://colah.github.io/) - Awesome Neural Networks blog

- [Alex Minnaar's Blog](http://alexminnaar.com/) - A blog about Machine Learning and Software Engineering

- [Statistically Significant](http://andland.github.io/) - Andrew Landgraf's Data Science blog

- [Simply Statistics](http://simplystatistics.org/) - A blog by three biostatistics professors

- [Yanir Seroussi's Blog](https://yanirseroussi.com/) - A blog about Data Science and beyond

- [fastML](http://fastml.com/) - Machine learning made easy

- [Trevor Stephens Blog](http://trevorstephens.com/) - Trevor Stephens' personal page

- [no free hunch | kaggle](http://blog.kaggle.com/) - The Kaggle blog about all things Data Science

- [A Quantitative Journey | outlace](http://outlace.com/) - Learning quantitative applications

- [r4stats](http://r4stats.com/) - Analyzing the world of data science and helping people learn to use R

- [Variance Explained](http://varianceexplained.org/) - David Robinson's blog

- [AI Junkie](http://www.ai-junkie.com/) - A blog about Artificial Intelligence

- [Deep Learning Blog by Tim Dettmers](http://timdettmers.com/) - Making deep learning accessible

- [J Alammar's Blog](http://jalammar.github.io/) - Blog posts about Machine Learning and Neural Nets

- [Adam Geitgey](https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471#.f7vwrtfne) - Easiest introduction to machine learning

- [Ethen's Notebook Collection](https://github.com/ethen8181/machine-learning) - Continuously updated machine learning documentation (mainly in Python 3). Contents include educational implementations of machine learning algorithms from scratch and open-source library usage
<a name="quora" />

## Resources on Quora

- [Most Viewed Machine Learning writers](https://www.quora.com/topic/Machine-Learning/writers)

- [Data Science Topic on Quora](https://www.quora.com/Data-Science)

- [William Chen's Answers](https://www.quora.com/William-Chen-6/answers)

- [Michael Hochster's Answers](https://www.quora.com/Michael-Hochster/answers)

- [Ricardo Vladimiro's Answers](https://www.quora.com/Ricardo-Vladimiro-1/answers)

- [Storytelling with Statistics](https://datastories.quora.com/)

- [Data Science FAQs on Quora](https://www.quora.com/topic/Data-Science/faq)

- [Machine Learning FAQs on Quora](https://www.quora.com/topic/Machine-Learning/faq)

<a name="kaggle" />

## Kaggle Competitions WriteUp

- [How to almost win Kaggle Competitions](https://yanirseroussi.com/2014/08/24/how-to-almost-win-kaggle-competitions/)

- [Convolutional Neural Networks for EEG detection](http://blog.kaggle.com/2015/10/05/grasp-and-lift-eeg-detection-winners-interview-3rd-place-team-hedj/)

- [Facebook Recruiting III Explained](http://alexminnaar.com/tag/kaggle-competitions.html)

- [Predicting CTR with Online ML](http://mlwave.com/predicting-click-through-rates-with-online-machine-learning/)

- [How to Rank 10% in Your First Kaggle Competition](https://dnc1994.com/2016/05/rank-10-percent-in-first-kaggle-competition-en/)

<a name="cs" />

## Cheat Sheets

- [Probability Cheat Sheet](http://static1.squarespace.com/static/54bf3241e4b0f0d81bf7ff36/t/55e9494fe4b011aed10e48e5/1441352015658/probability_cheatsheet.pdf), [Source](http://www.wzchen.com/probability-cheatsheet/)

- [Machine Learning Cheat Sheet](https://github.com/soulmachine/machine-learning-cheat-sheet)

- [ML Compiled](https://ml-compiled.readthedocs.io/en/latest/)

<a name="classification" />

## Classification

- [Does Balancing Classes Improve Classifier Performance?](http://www.win-vector.com/blog/2015/02/does-balancing-classes-improve-classifier-performance/)

- [What is Deviance?](http://stats.stackexchange.com/questions/6581/what-is-deviance-specifically-in-cart-rpart)

- [When to choose which machine learning classifier?](http://stackoverflow.com/questions/2595176/when-to-choose-which-machine-learning-classifier)

- [What are the advantages of different classification algorithms?](https://www.quora.com/What-are-the-advantages-of-different-classification-algorithms)

- [ROC and AUC Explained](http://www.dataschool.io/roc-curves-and-auc-explained/) ([related video](https://youtu.be/OAl6eAyP-yo))

- [An introduction to ROC analysis](https://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf)

- [Simple guide to confusion matrix terminology](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/)
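The confusion-matrix and ROC/AUC articles above reduce to a few lines of scikit-learn; here is a minimal sketch on synthetic, imbalanced data (the dataset and threshold are illustrative, not drawn from any listed resource):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced binary problem: about 80% negatives, 20% positives.
X, y = make_classification(n_samples=1_000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]        # scores for the positive class

# The confusion matrix depends on the chosen threshold; AUC does not.
print(confusion_matrix(y_test, proba >= 0.5))  # rows = true, cols = predicted
print("AUC:", roc_auc_score(y_test, proba))
```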
<a name="linear" />

## Linear Regression

- [General](#general-)

    - [Assumptions of Linear Regression](http://pareonline.net/getvn.asp?n=2&v=8), [Stack Exchange](http://stats.stackexchange.com/questions/16381/what-is-a-complete-list-of-the-usual-assumptions-for-linear-regression)

    - [Linear Regression Comprehensive Resource](http://people.duke.edu/~rnau/regintro.htm)

    - [Applying and Interpreting Linear Regression](http://www.dataschool.io/applying-and-interpreting-linear-regression/)

    - [What does having constant variance in a linear regression model mean?](http://stats.stackexchange.com/questions/52089/what-does-having-constant-variance-in-a-linear-regression-model-mean/52107?stw=2#52107)

    - [Difference between linear regression on y with x and x with y](http://stats.stackexchange.com/questions/22718/what-is-the-difference-between-linear-regression-on-y-with-x-and-x-with-y?lq=1)

    - [Is linear regression valid when the dependent variable is not normally distributed?](https://www.researchgate.net/post/Is_linear_regression_valid_when_the_outcome_dependant_variable_not_normally_distributed)

- Multicollinearity and VIF

    - [Dummy Variable Trap | Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)

    - [Dealing with multicollinearity using VIFs](https://jonlefcheck.net/2012/12/28/dealing-with-multicollinearity-using-variance-inflation-factors/)

- [Residual Analysis](#residuals-)

    - [Interpreting plot.lm() in R](http://stats.stackexchange.com/questions/58141/interpreting-plot-lm)

    - [How to interpret a QQ plot?](http://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot?lq=1)

    - [Interpreting Residuals vs Fitted Plot](http://stats.stackexchange.com/questions/76226/interpreting-the-residuals-vs-fitted-values-plot-for-verifying-the-assumptions)

- [Outliers](#outliers-)

    - [How should outliers be dealt with?](http://stats.stackexchange.com/questions/175/how-should-outliers-be-dealt-with-in-linear-regression-analysis)

- [Elastic Net](https://en.wikipedia.org/wiki/Elastic_net_regularization)

    - [Regularization and Variable Selection via the Elastic Net](https://web.stanford.edu/~hastie/Papers/elasticnet.pdf)
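To pair with the assumptions and multicollinearity links above, here is a hedged sketch (assuming statsmodels and pandas are installed; data and coefficients are invented) that fits OLS on deliberately collinear predictors and reads off variance inflation factors — VIFs above roughly 5-10 are the usual warning sign:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # nearly collinear with x1
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
model = sm.OLS(y, X).fit()
print(model.params)   # coefficient estimates are unstable under collinearity

for i, col in enumerate(X.columns):
    # VIF of column i regressed on all other columns of the design matrix.
    print(col, round(variance_inflation_factor(X.values, i), 1))
```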
<a name="logistic" />

## Logistic Regression

- [Logistic Regression Wiki](https://en.wikipedia.org/wiki/Logistic_regression)

- [Geometric Intuition of Logistic Regression](http://florianhartl.com/logistic-regression-geometric-intuition.html)

- [Obtaining predicted categories (choosing threshold)](http://stats.stackexchange.com/questions/25389/obtaining-predicted-values-y-1-or-0-from-a-logistic-regression-model-fit)

- [Residuals in logistic regression](http://stats.stackexchange.com/questions/1432/what-do-the-residuals-in-a-logistic-regression-mean)

- [Difference between logit and probit models](http://stats.stackexchange.com/questions/20523/difference-between-logit-and-probit-models#30909), [Logistic Regression Wiki](https://en.wikipedia.org/wiki/Logistic_regression), [Probit Model Wiki](https://en.wikipedia.org/wiki/Probit_model)

- [Pseudo R2 for Logistic Regression](http://stats.stackexchange.com/questions/3559/which-pseudo-r2-measure-is-the-one-to-report-for-logistic-regression-cox-s), [How to calculate](http://stats.stackexchange.com/questions/8511/how-to-calculate-pseudo-r2-from-rs-logistic-regression), [Other Details](http://www.ats.ucla.edu/stat/mult_pkg/faq/general/Psuedo_RSquareds.htm)

- [Guide to an in-depth understanding of logistic regression](http://www.dataschool.io/guide-to-logistic-regression/)

<a name="validation" />

## Model Validation using Resampling

- [Resampling Explained](https://en.wikipedia.org/wiki/Resampling_(statistics))

- [Partitioning data set in R](http://stackoverflow.com/questions/13536537/partitioning-data-set-in-r-based-on-multiple-classes-of-observations)

- [Implementing hold-out Validation in R](http://stackoverflow.com/questions/22972854/how-to-implement-a-hold-out-validation-in-r), [2](http://www.gettinggeneticsdone.com/2011/02/split-data-frame-into-testing-and.html)

<a name="cross" />

- [Cross Validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))

    - [How to use cross-validation in predictive modeling](http://stuartlacy.co.uk/2016/02/04/how-to-correctly-use-cross-validation-in-predictive-modelling/)

    - [Training with Full dataset after CV?](http://stats.stackexchange.com/questions/11602/training-with-the-full-dataset-after-cross-validation)

    - [Which CV method is best?](http://stats.stackexchange.com/questions/103459/how-do-i-know-which-method-of-cross-validation-is-best)

    - [Variance Estimates in k-fold CV](http://stats.stackexchange.com/questions/31190/variance-estimates-in-k-fold-cross-validation)

    - [Is CV a substitute for Validation Set?](http://stats.stackexchange.com/questions/18856/is-cross-validation-a-proper-substitute-for-validation-set)

    - [Choice of k in k-fold CV](http://stats.stackexchange.com/questions/27730/choice-of-k-in-k-fold-cross-validation)

    - [CV for ensemble learning](http://stats.stackexchange.com/questions/102631/k-fold-cross-validation-of-ensemble-learning)

    - [k-fold CV in R](http://stackoverflow.com/questions/22909197/creating-folds-for-k-fold-cv-in-r-using-caret)

    - [Good Resources](http://www.chioka.in/tag/cross-validation/)

    - Overfitting and Cross Validation

        - [Preventing Overfitting the Cross Validation Data | Andrew Ng](http://ai.stanford.edu/~ang/papers/cv-final.pdf)

        - [Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation](http://www.jmlr.org/papers/volume11/cawley10a/cawley10a.pdf)

        - [CV for detecting and preventing Overfitting](http://www.autonlab.org/tutorials/overfit10.pdf)

        - [How does CV overcome the Overfitting Problem](http://stats.stackexchange.com/questions/9053/how-does-cross-validation-overcome-the-overfitting-problem)
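Most of the questions above become easier to reason about once k-fold CV is seen in code; a minimal scikit-learn sketch (the dataset and k are arbitrary choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5_000)

# Each of the 10 folds is held out exactly once, giving 10 independent
# estimates of out-of-sample accuracy instead of one optimistic score.
scores = cross_val_score(model, X, y, cv=10)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```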
<a name="boot" />

- [Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics))

    - [Why Bootstrapping Works?](http://stats.stackexchange.com/questions/26088/explaining-to-laypeople-why-bootstrapping-works)

    - [Good Animation](https://www.stat.auckland.ac.nz/~wild/BootAnim/)

    - [Example of Bootstrapping](http://statistics.about.com/od/Applications/a/Example-Of-Bootstrapping.htm)

    - [Understanding Bootstrapping for Validation and Model Selection](http://stats.stackexchange.com/questions/14516/understanding-bootstrapping-for-validation-and-model-selection?rq=1)

    - [Cross Validation vs Bootstrap to estimate prediction error](http://stats.stackexchange.com/questions/18348/differences-between-cross-validation-and-bootstrapping-to-estimate-the-predictio), [Cross-validation vs .632 bootstrapping to evaluate classification performance](http://stats.stackexchange.com/questions/71184/cross-validation-or-bootstrapping-to-evaluate-classification-performance)
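The idea behind the links above fits in a dozen lines of NumPy: resample the observed data with replacement many times, recompute the statistic each time, and read a confidence interval off its empirical distribution (a percentile-bootstrap sketch on arbitrary toy data):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)   # the one observed sample

# Recompute the statistic on many resamples drawn with replacement.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```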
<a name="deep" />

## Deep Learning

- [fast.ai - Practical Deep Learning For Coders](http://course.fast.ai/)

- [fast.ai - Cutting Edge Deep Learning For Coders](http://course.fast.ai/part2.html)

- [A curated list of awesome Deep Learning tutorials, projects and communities](https://github.com/ChristosChristofidis/awesome-deep-learning)

- **[Deep Learning Papers Reading Roadmap](https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap/blob/master/README.md)**

- [Lots of Deep Learning Resources](http://deeplearning4j.org/documentation.html)

- [Interesting Deep Learning and NLP Projects (Stanford)](http://cs224d.stanford.edu/reports.html), [Website](http://cs224d.stanford.edu/)

- [Core Concepts of Deep Learning](https://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-core-concepts/)

- [Understanding Natural Language with Deep Neural Networks Using Torch](https://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)

- [Stanford Deep Learning Tutorial](http://ufldl.stanford.edu/tutorial/)

- [Deep Learning FAQs on Quora](https://www.quora.com/topic/Deep-Learning/faq)

- [Google+ Deep Learning Page](https://plus.google.com/communities/112866381580457264725)

- [Recent Reddit AMAs related to Deep Learning](http://deeplearning.net/2014/11/22/recent-reddit-amas-about-deep-learning/), [Another AMA](https://www.reddit.com/r/IAmA/comments/3mdk9v/we_are_google_researchers_working_on_deep/)

- [Where to Learn Deep Learning?](http://www.kdnuggets.com/2014/05/learn-deep-learning-courses-tutorials-overviews.html)

- [NVIDIA Deep Learning Core Concepts](http://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-core-concepts/)

- [Introduction to Deep Learning Using Python (GitHub)](https://github.com/rouseguy/intro2deeplearning), [Good Introduction Slides](https://speakerdeck.com/bargava/introduction-to-deep-learning)

- [Video Lectures Oxford 2015](https://www.youtube.com/playlist?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu), [Video Lectures Summer School Montreal](http://videolectures.net/deeplearning2015_montreal/)

- [Deep Learning Software List](http://deeplearning.net/software_links/)

- [Hacker's guide to Neural Nets](http://karpathy.github.io/neuralnets/)

- [Top arxiv Deep Learning Papers explained](http://www.kdnuggets.com/2015/10/top-arxiv-deep-learning-papers-explained.html)

- [Geoff Hinton YouTube Videos on Deep Learning](https://www.youtube.com/watch?v=IcOMKXAw5VA)

- [Awesome Deep Learning Reading List](http://deeplearning.net/reading-list/)

- [Deep Learning Comprehensive Website](http://deeplearning.net/), [Software](http://deeplearning.net/software_links/)

- [deeplearning Tutorials](http://deeplearning4j.org/)

- [AWESOME! Deep Learning Tutorial](https://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks)

- [Deep Learning Basics](http://alexminnaar.com/deep-learning-basics-neural-networks-backpropagation-and-stochastic-gradient-descent.html)

- [Intuition Behind Backpropagation](https://medium.com/spidernitt/breaking-down-neural-networks-an-intuitive-approach-to-backpropagation-3b2ff958794c)

- [Stanford Tutorials](http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/)

- [Train, Validation & Test in Artificial Neural Networks](http://stackoverflow.com/questions/2976452/whats-is-the-difference-between-train-validation-and-test-set-in-neural-networ)

- [Artificial Neural Networks Tutorials](http://stackoverflow.com/questions/478947/what-are-some-good-resources-for-learning-about-artificial-neural-networks)

- [Neural Networks FAQs on Stack Overflow](http://stackoverflow.com/questions/tagged/neural-network?sort=votes&pageSize=50)

- [Deep Learning Tutorials on deeplearning.net](http://deeplearning.net/tutorial/index.html)

- [Neural Networks and Deep Learning Online Book](http://neuralnetworksanddeeplearning.com/)

- Neural Machine Translation

    - **[Machine Translation Reading List](https://github.com/THUNLP-MT/MT-Reading-List#machine-translation-reading-list)**

    - [Introduction to Neural Machine Translation with GPUs (part 1)](https://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-with-gpus/), [Part 2](https://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-2/), [Part 3](https://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-3/)

    - [Deep Speech: Accurate Speech Recognition with GPU-Accelerated Deep Learning](https://devblogs.nvidia.com/parallelforall/deep-speech-accurate-speech-recognition-gpu-accelerated-deep-learning/)
<a name="frame" />

- Deep Learning Frameworks

    - [Torch vs. Theano](http://fastml.com/torch-vs-theano/)

    - [dl4j vs. torch7 vs. theano](http://deeplearning4j.org/compare-dl4j-torch7-pylearn.html)

    - [Deep Learning Libraries by Language](http://www.teglor.com/b/deep-learning-libraries-language-cm569/)

    - [Theano](https://en.wikipedia.org/wiki/Theano_(software))

        - [Website](http://deeplearning.net/software/theano/)

        - [Theano Introduction](http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/)

        - [Theano Tutorial](http://outlace.com/Beginner-Tutorial-Theano/)

        - [Good Theano Tutorial](http://deeplearning.net/software/theano/tutorial/)

        - [Logistic Regression using Theano for classifying digits](http://deeplearning.net/tutorial/logreg.html#logreg)

        - [MLP using Theano](http://deeplearning.net/tutorial/mlp.html#mlp)

        - [CNN using Theano](http://deeplearning.net/tutorial/lenet.html#lenet)

        - [RNNs using Theano](http://deeplearning.net/tutorial/rnnslu.html#rnnslu)

        - [LSTM for Sentiment Analysis in Theano](http://deeplearning.net/tutorial/lstm.html#lstm)

        - [RBM using Theano](http://deeplearning.net/tutorial/rbm.html#rbm)

        - [DBNs using Theano](http://deeplearning.net/tutorial/DBN.html#dbn)

        - [All Codes](https://github.com/lisa-lab/DeepLearningTutorials)

        - [Deep Learning Implementation Tutorials - Keras and Lasagne](https://github.com/vict0rsch/deep_learning/)

    - [Torch](http://torch.ch/)

        - [Torch ML Tutorial](http://code.madbits.com/wiki/doku.php), [Code](https://github.com/torch/tutorials)

        - [Intro to Torch](http://ml.informatik.uni-freiburg.de/_media/teaching/ws1415/presentation_dl_lect3.pdf)

        - [Learning Torch GitHub Repo](https://github.com/chetannaik/learning_torch)

        - [Awesome-Torch (Repository on GitHub)](https://github.com/carpedm20/awesome-torch)

        - [Machine Learning using Torch Oxford Univ](https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/), [Code](https://github.com/oxford-cs-ml-2015)

        - [Torch Internals Overview](https://apaszke.github.io/torch-internals.html)

        - [Torch Cheatsheet](https://github.com/torch/torch7/wiki/Cheatsheet)

        - [Understanding Natural Language with Deep Neural Networks Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)

    - Caffe

        - [Deep Learning for Computer Vision with Caffe and cuDNN](https://devblogs.nvidia.com/parallelforall/deep-learning-computer-vision-caffe-cudnn/)
    - TensorFlow

        - [Website](http://tensorflow.org/)

        - [TensorFlow Examples for Beginners](https://github.com/aymericdamien/TensorFlow-Examples)

        - [Stanford Tensorflow for Deep Learning Research Course](https://web.stanford.edu/class/cs20si/syllabus.html)

            - [GitHub Repo](https://github.com/chiphuyen/tf-stanford-tutorials)

        - [Simplified Scikit-learn Style Interface to TensorFlow](https://github.com/tensorflow/skflow)

        - [Learning TensorFlow GitHub Repo](https://github.com/chetannaik/learning_tensorflow)

        - [Benchmark TensorFlow GitHub](https://github.com/soumith/convnet-benchmarks/issues/66)

        - [Awesome TensorFlow List](https://github.com/jtoy/awesome-tensorflow)

        - [TensorFlow Book](https://github.com/BinRoot/TensorFlow-Book)

        - [Android TensorFlow Machine Learning Example](https://blog.mindorks.com/android-tensorflow-machine-learning-example-ff0e9b2654cc)

            - [GitHub Repo](https://github.com/MindorksOpenSource/AndroidTensorFlowMachineLearningExample)

        - [Creating Custom Model For Android Using TensorFlow](https://blog.mindorks.com/creating-custom-model-for-android-using-tensorflow-3f963d270bfb)

            - [GitHub Repo](https://github.com/MindorksOpenSource/AndroidTensorFlowMNISTExample)

<a name="feed" />

- Feed Forward Networks

    - [A Quick Introduction to Neural Networks](https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/)

    - [Implementing a Neural Network from scratch](http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/), [Code](https://github.com/dennybritz/nn-from-scratch)

    - [Speeding up your Neural Network with Theano and the GPU](http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/), [Code](https://github.com/dennybritz/nn-theano)

    - [Basic ANN Theory](https://takinginitiative.wordpress.com/2008/04/03/basic-neural-network-tutorial-theory/)

    - [Role of Bias in Neural Networks](http://stackoverflow.com/questions/2480650/role-of-bias-in-neural-networks)

    - [Choosing number of hidden layers and nodes](http://stackoverflow.com/questions/3345079/estimating-the-number-of-neurons-and-number-of-layers-of-an-artificial-neural-ne), [2](http://stackoverflow.com/questions/10565868/multi-layer-perceptron-mlp-architecture-criteria-for-choosing-number-of-hidde?lq=1), [3](http://stackoverflow.com/questions/9436209/how-to-choose-number-of-hidden-layers-and-nodes-in-neural-network/2#)

    - [Backpropagation in Matrix Form](http://sudeepraja.github.io/Neural/)

    - [ANN implemented in C++ | AI Junkie](http://www.ai-junkie.com/ann/evolved/nnt6.html)

    - [Simple Implementation](http://stackoverflow.com/questions/15395835/simple-multi-layer-neural-network-implementation)

    - [NN for Beginners](http://www.codeproject.com/Articles/16419/AI-Neural-Network-for-beginners-Part-of)

    - [Regression and Classification with NNs (Slides)](http://www.autonlab.org/tutorials/neural13.pdf)

    - [Another Intro](http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html)
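In the spirit of the "Implementing a Neural Network from scratch" and "Backpropagation in Matrix Form" links above, here is a self-contained NumPy sketch: one hidden layer trained by plain backpropagation to learn XOR (the initialization, learning rate, and step count are arbitrary choices, not taken from those tutorials):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input  -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule on squared error; sigmoid' = s * (1 - s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]
```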
<a name="rnn" />

- Recurrent and LSTM Networks

    - [awesome-rnn: list of resources (GitHub Repo)](https://github.com/kjw0612/awesome-rnn)

    - [Recurrent Neural Net Tutorial Part 1](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/), [Part 2](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-2-implementing-a-language-model-rnn-with-python-numpy-and-theano/), [Part 3](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/), [Code](https://github.com/dennybritz/rnn-tutorial-rnnlm/)

    - [NLP RNN Representations](http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/)

    - [The Unreasonable Effectiveness of RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/), [Torch Code](https://github.com/karpathy/char-rnn), [Python Code](https://gist.github.com/karpathy/d4dee566867f8291f086)

    - [Intro to RNN](http://deeplearning4j.org/recurrentnetwork.html), [LSTM](http://deeplearning4j.org/lstm.html)

    - [An application of RNN](http://hackaday.com/2015/10/15/73-computer-scientists-created-a-neural-net-and-you-wont-believe-what-happened-next/)

    - [Optimizing RNN Performance](http://svail.github.io/)

    - [Simple RNN](http://outlace.com/Simple-Recurrent-Neural-Network/)

    - [Auto-Generating Clickbait with RNN](https://larseidnes.com/2015/10/13/auto-generating-clickbait-with-recurrent-neural-networks/)

    - [Sequence Learning using RNN (Slides)](http://www.slideshare.net/indicods/general-sequence-learning-with-recurrent-neural-networks-for-next-ml)

    - [Machine Translation using RNN (Paper)](http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf)

    - [Music generation using RNNs (Keras)](https://github.com/MattVitelli/GRUV)

    - [Using RNN to create on-the-fly dialogue (Keras)](http://neuralniche.com/post/tutorial/)

    - Long Short Term Memory (LSTM)

        - [Understanding LSTM Networks](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)

        - [LSTM explained](https://apaszke.github.io/lstm-explained.html)

        - [Beginner's Guide to LSTM](http://deeplearning4j.org/lstm.html)

        - [Implementing LSTM from scratch](http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/), [Python/Theano code](https://github.com/dennybritz/rnn-tutorial-gru-lstm)

        - [Torch Code for character-level language models using LSTM](https://github.com/karpathy/char-rnn)

        - [LSTM for Kaggle EEG Detection competition (Torch Code)](https://github.com/apaszke/kaggle-grasp-and-lift)

        - [LSTM for Sentiment Analysis in Theano](http://deeplearning.net/tutorial/lstm.html#lstm)

        - [Deep Learning for Visual Q&A | LSTM | CNN](http://avisingh599.github.io/deeplearning/visual-qa/), [Code](https://github.com/avisingh599/visual-qa)

        - [Computer Responds to email using LSTM | Google](http://googleresearch.blogspot.in/2015/11/computer-respond-to-this-email.html)

        - [LSTM dramatically improves Google Voice Search](http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html), [Another Article](http://deeplearning.net/2015/09/30/long-short-term-memory-dramatically-improves-google-voice-etc-now-available-to-a-billion-users/)

        - [Understanding Natural Language with LSTM Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)

        - [Torch code for Visual Question Answering using a CNN+LSTM model](https://github.com/abhshkdz/neural-vqa)

        - [LSTM for Human Activity Recognition](https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition/)

    - Gated Recurrent Units (GRU)

        - [LSTM vs GRU](http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/)

    - [Time series forecasting with Sequence-to-Sequence (seq2seq) RNN models](https://github.com/guillaume-chevalier/seq2seq-signal-prediction)
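Most links above use Theano or Torch; as a modern minimal counterpart, here is a hedged Keras sketch (assuming TensorFlow 2.x is installed; the toy task — classify a sequence by the sign of its sum — is invented purely to show the (batch, timesteps, features) input shape an LSTM expects):

```python
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 20, 1)).astype("float32")   # 20 timesteps, 1 feature
y = (X.sum(axis=(1, 2)) > 0).astype("float32")          # label: sign of the sum

model = models.Sequential([
    layers.Input(shape=(20, 1)),
    layers.LSTM(16),                       # returns the final hidden state
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))     # [loss, accuracy]
```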
<a name="rnn2" />

- [Recursive Neural Network (not Recurrent)](https://en.wikipedia.org/wiki/Recursive_neural_network)

    - [Recursive Neural Tensor Network (RNTN)](http://deeplearning4j.org/recursiveneuraltensornetwork.html)

    - [word2vec, DBN, RNTN for Sentiment Analysis](http://deeplearning4j.org/zh-sentiment_analysis_word2vec.html)

<a name="rbm" />

- Restricted Boltzmann Machine

    - [Beginner's Guide about RBMs](http://deeplearning4j.org/restrictedboltzmannmachine.html)

    - [Another Good Tutorial](http://deeplearning.net/tutorial/rbm.html)

    - [Introduction to RBMs](http://blog.echen.me/2011/07/18/introduction-to-restricted-boltzmann-machines/)

    - [Hinton's Guide to Training RBMs](https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf)

    - [RBMs in R](https://github.com/zachmayer/rbm)

    - [Deep Belief Networks Tutorial](http://deeplearning4j.org/deepbeliefnetwork.html)

    - [word2vec, DBN, RNTN for Sentiment Analysis](http://deeplearning4j.org/zh-sentiment_analysis_word2vec.html)
<a name="auto" />

- Autoencoders: Unsupervised (applies BackProp after setting target = input)

    - [Andrew Ng Sparse Autoencoders pdf](https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf)

    - [Deep Autoencoders Tutorial](http://deeplearning4j.org/deepautoencoder.html)

    - [Denoising Autoencoders](http://deeplearning.net/tutorial/dA.html), [Theano Code](http://deeplearning.net/tutorial/code/dA.py)

    - [Stacked Denoising Autoencoders](http://deeplearning.net/tutorial/SdA.html#sda)
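The parenthetical above is the whole trick: train with ordinary backprop but set target = input, forcing a narrow bottleneck to learn a compressed representation. A minimal Keras sketch of that idea (assuming TensorFlow 2.x; the layer sizes and random data are arbitrary):

```python
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 64)).astype("float32")

autoencoder = models.Sequential([
    layers.Input(shape=(64,)),
    layers.Dense(8, activation="relu"),   # encoder: squeeze 64 dims into 8
    layers.Dense(64),                     # decoder: reconstruct all 64 dims
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)   # target = input
print("reconstruction MSE:", autoencoder.evaluate(X, X, verbose=0))
```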
<a name="cnn" />

- Convolutional Neural Networks

    - [An Intuitive Explanation of Convolutional Neural Networks](https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/)

    - [Awesome Deep Vision: List of Resources (GitHub)](https://github.com/kjw0612/awesome-deep-vision)

    - [Intro to CNNs](http://deeplearning4j.org/convolutionalnets.html)

    - [Understanding CNN for NLP](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/)

    - [Stanford Notes](http://vision.stanford.edu/teaching/cs231n/), [Codes](http://cs231n.github.io/), [GitHub](https://github.com/cs231n/cs231n.github.io)

    - [JavaScript Library (Browser Based) for CNNs](http://cs.stanford.edu/people/karpathy/convnetjs/)

    - [Using CNNs to detect facial keypoints](http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/)

    - [Deep learning to classify business photos at Yelp](http://engineeringblog.yelp.com/2015/10/how-we-use-deep-learning-to-classify-business-photos-at-yelp.html)

    - [Interview with Yann LeCun | Kaggle](http://blog.kaggle.com/2014/12/22/convolutional-nets-and-cifar-10-an-interview-with-yan-lecun/)

    - [Visualising and Understanding CNNs](https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf)

<a name="nrl" />

- Network Representation Learning

    - [Awesome Graph Embedding](https://github.com/benedekrozemberczki/awesome-graph-embedding)

    - [Awesome Network Embedding](https://github.com/chihming/awesome-network-embedding)

    - [Network Representation Learning Papers](https://github.com/thunlp)

    - [Knowledge Representation Learning Papers](https://github.com/thunlp/KRLPapers)

    - [Graph Based Deep Learning Literature](https://github.com/naganandy/graph-based-deep-learning-literature)

<a name="nlp" />

## Natural Language Processing

- [A curated list of speech and natural language processing resources](https://github.com/edobashira/speech-language-processing)

- [Understanding Natural Language with Deep Neural Networks Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)

- [tf-idf explained](http://michaelerasm.us/post/tf-idf-in-10-minutes/)

- [Interesting Deep Learning NLP Projects Stanford](http://cs224d.stanford.edu/reports.html), [Website](http://cs224d.stanford.edu/)

- [The Stanford NLP Group](https://nlp.stanford.edu/)

- [NLP from Scratch | Google Paper](https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/35671.pdf)

- [Graph Based Semi Supervised Learning for NLP](http://graph-ssl.wdfiles.com/local--files/blog%3A_start/graph_ssl_acl12_tutorial_slides_final.pdf)

- [Bag of Words](https://en.wikipedia.org/wiki/Bag-of-words_model)

    - [Classifying text with Bag of Words](http://fastml.com/classifying-text-with-bag-of-words-a-tutorial/)

<a name="topic" />

- Topic Modeling

    - [Topic Modeling Wikipedia](https://en.wikipedia.org/wiki/Topic_model)

    - [**Probabilistic Topic Models Princeton PDF**](http://www.cs.columbia.edu/~blei/papers/Blei2012.pdf)

    - [LDA Wikipedia](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation), [LSA Wikipedia](https://en.wikipedia.org/wiki/Latent_semantic_analysis), [Probabilistic LSA Wikipedia](https://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis)

    - [What is a good explanation of Latent Dirichlet Allocation (LDA)?](https://www.quora.com/What-is-a-good-explanation-of-Latent-Dirichlet-Allocation)

    - [**Introduction to LDA**](http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/), [Another good explanation](http://confusedlanguagetech.blogspot.in/2012/07/jordan-boyd-graber-and-philip-resnik.html)

    - [The LDA Buffet - Intuitive Explanation](http://www.matthewjockers.net/2011/09/29/the-lda-buffet-is-now-open-or-latent-dirichlet-allocation-for-english-majors/)

    - [Your Guide to Latent Dirichlet Allocation (LDA)](https://medium.com/@lettier/how-does-lda-work-ill-explain-using-emoji-108abf40fa7d)

    - [Difference between LSI and LDA](https://www.quora.com/Whats-the-difference-between-Latent-Semantic-Indexing-LSI-and-Latent-Dirichlet-Allocation-LDA)

    - [Original LDA Paper](https://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf)

    - [alpha and beta in LDA](http://datascience.stackexchange.com/questions/199/what-does-the-alpha-and-beta-hyperparameters-contribute-to-in-latent-dirichlet-a)

    - [Intuitive explanation of the Dirichlet distribution](https://www.quora.com/What-is-an-intuitive-explanation-of-the-Dirichlet-distribution)

    - [topicmodels: An R Package for Fitting Topic Models](https://cran.r-project.org/web/packages/topicmodels/vignettes/topicmodels.pdf)

    - [Topic modeling made just simple enough](https://tedunderwood.com/2012/04/07/topic-modeling-made-just-simple-enough/)

    - [Online LDA](http://alexminnaar.com/online-latent-dirichlet-allocation-the-best-option-for-topic-modeling-with-large-data-sets.html), [Online LDA with Spark](http://alexminnaar.com/distributed-online-latent-dirichlet-allocation-with-apache-spark.html)

    - [LDA in Scala](http://alexminnaar.com/latent-dirichlet-allocation-in-scala-part-i-the-theory.html), [Part 2](http://alexminnaar.com/latent-dirichlet-allocation-in-scala-part-ii-the-code.html)

    - [Segmentation of Twitter Timelines via Topic Modeling](https://alexisperrier.com/nlp/2015/09/16/segmentation_twitter_timelines_lda_vs_lsa.html)

    - [Topic Modeling of Twitter Followers](http://alexperrier.github.io/jekyll/update/2015/09/04/topic-modeling-of-twitter-followers.html)

    - [Multilingual Latent Dirichlet Allocation (LDA)](https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA) ([Tutorial here](https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA/blob/master/Multilingual-LDA-Pipeline-Tutorial.ipynb))

    - [Deep Belief Nets for Topic Modeling](https://github.com/larsmaaloee/deep-belief-nets-for-topic-modeling)

    - [Gaussian LDA for Topic Models with Word Embeddings](http://www.cs.cmu.edu/~rajarshd/papers/acl2015.pdf)

    - Python

        - [Series of lecture notes for probabilistic topic models written in ipython notebook](https://github.com/arongdari/topic-model-lecture-note)

        - [Implementation of various topic models in Python](https://github.com/arongdari/python-topic-model)
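After the theory above, fitting LDA is mechanically simple; a scikit-learn sketch (assuming scikit-learn >= 1.0 for `get_feature_names_out`; the four-document corpus is a toy invented for illustration):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat", "dogs and cats are pets",
    "stocks fell as markets closed", "investors sold shares today",
]
# LDA works on raw term counts, not tf-idf weights.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:4]       # highest-weight words per topic
    print(f"topic {k}:", [terms[i] for i in top])
```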
Spark](http:\u002F\u002Falexminnaar.com\u002Fdistributed-online-latent-dirichlet-allocation-with-apache-spark.html)\n    \n    - [LDA in Scala](http:\u002F\u002Falexminnaar.com\u002Flatent-dirichlet-allocation-in-scala-part-i-the-theory.html), [Part 2](http:\u002F\u002Falexminnaar.com\u002Flatent-dirichlet-allocation-in-scala-part-ii-the-code.html)\n    \n    - [Segmentation of Twitter Timelines via Topic Modeling](https:\u002F\u002Falexisperrier.com\u002Fnlp\u002F2015\u002F09\u002F16\u002Fsegmentation_twitter_timelines_lda_vs_lsa.html)\n    \n    - [Topic Modeling of Twitter Followers](http:\u002F\u002Falexperrier.github.io\u002Fjekyll\u002Fupdate\u002F2015\u002F09\u002F04\u002Ftopic-modeling-of-twitter-followers.html)\n\n    - [Multilingual Latent Dirichlet Allocation (LDA)](https:\u002F\u002Fgithub.com\u002FArtificiAI\u002FMultilingual-Latent-Dirichlet-Allocation-LDA). ([Tutorial here](https:\u002F\u002Fgithub.com\u002FArtificiAI\u002FMultilingual-Latent-Dirichlet-Allocation-LDA\u002Fblob\u002Fmaster\u002FMultilingual-LDA-Pipeline-Tutorial.ipynb))\n\n    - [Deep Belief Nets for Topic Modeling](https:\u002F\u002Fgithub.com\u002Flarsmaaloee\u002Fdeep-belief-nets-for-topic-modeling)\n    - [Gaussian LDA for Topic Models with Word Embeddings](http:\u002F\u002Fwww.cs.cmu.edu\u002F~rajarshd\u002Fpapers\u002Facl2015.pdf)\n    - Python\n        - [Series of lecture notes for probabilistic topic models written in ipython notebook](https:\u002F\u002Fgithub.com\u002Farongdari\u002Ftopic-model-lecture-note)\n        - [Implementation of various topic models in Python](https:\u002F\u002Fgithub.com\u002Farongdari\u002Fpython-topic-model)\n           \n\u003Ca name=\"word2vec\" \u002F>\n\n- word2vec\n\n    - [Google word2vec](https:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec)\n    \n    - [Bag of Words Model Wiki](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBag-of-words_model)\n    \n    - [word2vec Tutorial](https:\u002F\u002Frare-technologies.com\u002Fword2vec-tutorial\u002F)\n    \n    - [A closer look at Skip Gram Modeling](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fballison\u002Fpdf\u002Flrec_skipgrams.pdf)\n    \n    - [Skip Gram Model Tutorial](http:\u002F\u002Falexminnaar.com\u002Fword2vec-tutorial-part-i-the-skip-gram-model.html), [CBoW Model](http:\u002F\u002Falexminnaar.com\u002Fword2vec-tutorial-part-ii-the-continuous-bag-of-words-model.html)\n    \n    - [Word Vectors Kaggle Tutorial Python](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-2-word-vectors), [Part 2](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-3-more-fun-with-word-vectors)\n    \n    - [Making sense of word2vec](http:\u002F\u002Frare-technologies.com\u002Fmaking-sense-of-word2vec\u002F)\n    \n    - [word2vec explained on deeplearning4j](http:\u002F\u002Fdeeplearning4j.org\u002Fword2vec.html)\n    \n    - [Quora word2vec](https:\u002F\u002Fwww.quora.com\u002FHow-does-word2vec-work)\n    \n    - [Other Quora Resources](https:\u002F\u002Fwww.quora.com\u002FWhat-are-the-continuous-bag-of-words-and-skip-gram-architectures-in-laymans-terms), [2](https:\u002F\u002Fwww.quora.com\u002FWhat-is-the-difference-between-the-Bag-of-Words-model-and-the-Continuous-Bag-of-Words-model), [3](https:\u002F\u002Fwww.quora.com\u002FIs-skip-gram-negative-sampling-better-than-CBOW-NS-for-word2vec-If-so-why)\n    \n    - [word2vec, DBN, RNTN for Sentiment Analysis 
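](http:\u002F\u002Fdeeplearning4j.org\u002Fzh-sentiment_analysis_word2vec.html)\n\n    As a quick illustration of what the skip-gram tutorials above describe (a minimal sketch; the toy sentence and window size are made up), the model is trained on (center, context) pairs generated like this:\n\n    ```python\n    corpus = 'the quick brown fox jumps over the lazy dog'.split()\n    window = 2  # how many neighbours on each side count as context\n\n    pairs = []\n    for i, center in enumerate(corpus):\n        lo, hi = max(0, i - window), min(len(corpus), i + window + 1)\n        pairs.extend((center, corpus[j]) for j in range(lo, hi) if j != i)\n\n    print(pairs[:4])\n    # skip-gram then learns vectors that make p(context | center) high,\n    # while CBoW instead predicts the center word from the averaged context\n    ```\n\n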
- Text Clustering\n\n    - [How string clustering works](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F8196371\u002Fhow-clustering-works-especially-string-clustering)\n    \n    - [Levenshtein distance for measuring the difference between two sequences](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLevenshtein_distance)\n    \n    - [Text clustering with Levenshtein distances](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F21511801\u002Ftext-clustering-with-levenshtein-distances)\n\n- Text Classification\n\n    - [Classifying Text with Bag of Words](http:\u002F\u002Ffastml.com\u002Fclassifying-text-with-bag-of-words-a-tutorial\u002F)\n\n- Named Entity Recognition\n\n    - [Stanford Named Entity Recognizer (NER)](https:\u002F\u002Fnlp.stanford.edu\u002Fsoftware\u002FCRF-NER.shtml)\n\n    - [Named Entity Recognition: Applications and Use Cases - Towards Data Science](https:\u002F\u002Ftowardsdatascience.com\u002Fnamed-entity-recognition-applications-and-use-cases-acdbf57d595e)\n\n- [Language learning with NLP and reinforcement learning](http:\u002F\u002Fblog.dennybritz.com\u002F2015\u002F09\u002F11\u002Freimagining-language-learning-with-nlp-and-reinforcement-learning\u002F)\n\n- [Kaggle Tutorial Bag of Words and Word vectors](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-1-for-beginners-bag-of-words), [Part 2](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-2-word-vectors), [Part 3](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-3-more-fun-with-word-vectors)\n\n- [What would Shakespeare say (NLP Tutorial)](https:\u002F\u002Fgigadom.wordpress.com\u002F2015\u002F10\u002F02\u002Fnatural-language-processing-what-would-shakespeare-say\u002F)\n\n\u003Ca name=\"vision\" \u002F>\n\n## Computer Vision\n- [Awesome computer vision (github)](https:\u002F\u002Fgithub.com\u002Fjbhuang0604\u002Fawesome-computer-vision)\n\n- [Awesome deep vision (github)](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-deep-vision)\n\n\n\u003Ca name=\"svm\" \u002F>\n\n## Support Vector Machine\n\n- [Highest Voted Questions about SVMs on Cross Validated](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Fsvm)\n\n- [Help me Understand SVMs!](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F3947\u002Fhelp-me-understand-support-vector-machines)\n\n- [SVM in Layman's terms](https:\u002F\u002Fwww.quora.com\u002FWhat-does-support-vector-machine-SVM-mean-in-laymans-terms)\n\n- [How does SVM Work | Comparisons](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F23391\u002Fhow-does-a-support-vector-machine-svm-work)\n\n- [A tutorial on SVMs](http:\u002F\u002Falex.smola.org\u002Fpapers\u002F2003\u002FSmoSch03b.pdf)\n\n- [Practical Guide to SVC](http:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~cjlin\u002Fpapers\u002Fguide\u002Fguide.pdf), [Slides](http:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~cjlin\u002Ftalks\u002Ffreiburg.pdf)\n\n- [Introductory Overview of SVMs](http:\u002F\u002Fwww.statsoft.com\u002FTextbook\u002FSupport-Vector-Machines)\n\n- Comparisons\n\n    - [SVMs > 
ANNs](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F6699222\u002Fsupport-vector-machines-better-than-artificial-neural-networks-in-which-learn?rq=1), [ANNs > SVMs](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F11632516\u002Fwhat-are-advantages-of-artificial-neural-networks-over-support-vector-machines), [Another Comparison](http:\u002F\u002Fwww.svms.org\u002Fanns.html)\n    \n    - [Trees > SVMs](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F57438\u002Fwhy-is-svm-not-so-good-as-decision-tree-on-the-same-data)\n    \n    - [Kernel Logistic Regression vs SVM](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F43996\u002Fkernel-logistic-regression-vs-svm)\n    \n    - [Logistic Regression vs SVM](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F58684\u002Fregularized-logistic-regression-and-support-vector-machine), [2](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F95340\u002Fsvm-v-s-logistic-regression), [3](https:\u002F\u002Fwww.quora.com\u002FSupport-Vector-Machines\u002FWhat-is-the-difference-between-Linear-SVMs-and-Logistic-Regression)\n    \n- [Optimization Algorithms in Support Vector Machines](http:\u002F\u002Fpages.cs.wisc.edu\u002F~swright\u002Ftalks\u002Fsjw-complearning.pdf)\n\n- [Variable Importance from SVM](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F2179\u002Fvariable-importance-from-svm)\n\n- Software\n\n    - [LIBSVM](https:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~cjlin\u002Flibsvm\u002F)\n    \n    - [Intro to SVM in R](http:\u002F\u002Fcbio.ensmp.fr\u002F~jvert\u002Fsvn\u002Ftutorials\u002Fpractical\u002Fsvmbasic\u002Fsvmbasic_notes.pdf)\n    \n- Kernels\n    - [What are Kernels in ML and SVM?](https:\u002F\u002Fwww.quora.com\u002FWhat-are-Kernels-in-Machine-Learning-and-SVM)\n    \n    - [Intuition Behind Gaussian Kernel in SVMs?](https:\u002F\u002Fwww.quora.com\u002FSupport-Vector-Machines\u002FWhat-is-the-intuition-behind-Gaussian-kernel-in-SVM)\n    \n- Probabilities post SVM\n\n    - [Platt's Probabilistic Outputs for SVM](http:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~htlin\u002Fpaper\u002Fdoc\u002Fplattprob.pdf)\n    \n    - [Platt Calibration Wiki](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FPlatt_scaling)\n    \n    - [Why use Platt's Scaling](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F5196\u002Fwhy-use-platts-scaling)\n    \n    - [Classifier Calibration with Platt's Scaling](http:\u002F\u002Ffastml.com\u002Fclassifier-calibration-with-platts-scaling-and-isotonic-regression\u002F)\n\n\n\u003Ca name=\"rl\" \u002F>\n\n## Reinforcement Learning\n\n- [Awesome Reinforcement Learning (GitHub)](https:\u002F\u002Fgithub.com\u002Faikorea\u002Fawesome-rl)\n\n- [RL Tutorial Part 1](http:\u002F\u002Foutlace.com\u002FReinforcement-Learning-Part-1\u002F), [Part 2](http:\u002F\u002Foutlace.com\u002FReinforcement-Learning-Part-2\u002F)\n\n\u003Ca name=\"dt\" \u002F>\n\n## Decision Trees\n\n- [Wikipedia Page - Lots of Good Info](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FDecision_tree_learning)\n\n- [FAQs about Decision Trees](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Fcart)\n\n- [Brief Tour of Trees and Forests](https:\u002F\u002Fstatistical-research.com\u002Findex.php\u002F2013\u002F04\u002F29\u002Fa-brief-tour-of-the-trees-and-forests\u002F)\n\n- [Tree Based Models in R](http:\u002F\u002Fwww.statmethods.net\u002Fadvstats\u002Fcart.html)\n\n- [How do Decision Trees 
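work?](http:\u002F\u002Fwww.aihorizon.com\u002Fessays\u002Fgeneralai\u002Fdecision_trees.htm)\n\n    As a pointer to what the entropy and information-gain links below discuss: a split is chosen to maximize the reduction in label entropy. A minimal sketch (the toy labels are made up):\n\n    ```python\n    import math\n    from collections import Counter\n\n    def entropy(labels):\n        # H = -sum(p * log2(p)) over the class proportions\n        n = len(labels)\n        return -sum(c \u002F n * math.log2(c \u002F n) for c in Counter(labels).values())\n\n    def information_gain(parent, splits):\n        # entropy reduction from partitioning the parent node into splits\n        n = len(parent)\n        return entropy(parent) - sum(len(s) \u002F n * entropy(s) for s in splits)\n\n    print(information_gain(list('yyyynnnn'), [list('yyyn'), list('ynnn')]))\n    # about 0.19 bits gained by this split\n    ```\n\n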
- [Weak side of Decision Trees](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F1292\u002Fwhat-is-the-weak-side-of-decision-trees)\n\n- [Thorough Explanation and different algorithms](http:\u002F\u002Fwww.ise.bgu.ac.il\u002Ffaculty\u002Fliorr\u002Fhbchap9.pdf)\n\n- [What is entropy and information gain in the context of building decision trees?](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F1859554\u002Fwhat-is-entropy-and-information-gain)\n\n- [Slides Related to Decision Trees](http:\u002F\u002Fwww.slideshare.net\u002Fpierluca.lanzi\u002Fmachine-learning-and-data-mining-11-decision-trees)\n\n- [How do decision tree learning algorithms deal with missing values?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F96025\u002Fhow-do-decision-tree-learning-algorithms-deal-with-missing-values-under-the-hoo)\n\n- [Using Surrogates to Improve Datasets with Missing Values](https:\u002F\u002Fwww.salford-systems.com\u002Fvideos\u002Ftutorials\u002Ftips-and-tricks\u002Fusing-surrogates-to-improve-datasets-with-missing-values)\n\n- [Good Article on Decision Trees (MindTools)](https:\u002F\u002Fwww.mindtools.com\u002Fdectree.html)\n\n- [Are decision trees almost always binary trees?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F12187\u002Fare-decision-trees-almost-always-binary-trees)\n\n- [Pruning Decision Trees](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FPruning_(decision_trees)), [Grafting of Decision Trees](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FGrafting_(decision_trees))\n\n- [What is Deviance in context of Decision Trees?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F6581\u002Fwhat-is-deviance-specifically-in-cart-rpart)\n\n- [Discover structure behind data with decision trees](http:\u002F\u002Fvooban.com\u002Fen\u002Ftips-articles-geek-stuff\u002Fdiscover-structure-behind-data-with-decision-trees\u002F) - Grow and plot a decision tree to automatically figure out hidden rules in your data\n\n- Comparison of Different Algorithms\n\n    - [CART vs CTREE](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F12140\u002Fconditional-inference-trees-vs-traditional-decision-trees)\n    \n    - [Comparison of complexity or performance](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F9979461\u002Fdifferent-decision-tree-algorithms-with-comparison-of-complexity-or-performance)\n    \n    - [CHAID vs CART](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F61230\u002Fchaid-vs-crt-or-cart), [CART vs CHAID](http:\u002F\u002Fwww.bzst.com\u002F2006\u002F10\u002Fclassification-trees-cart-vs-chaid.html)\n    \n    - [Good Article on comparison](http:\u002F\u002Fwww.ftpress.com\u002Farticles\u002Farticle.aspx?p=2248639&seqNum=11)\n    \n- CART\n\n    - [Recursive Partitioning Wikipedia](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FRecursive_partitioning)\n    \n    - [CART Explained](http:\u002F\u002Fdocuments.software.dell.com\u002FStatistics\u002FTextbook\u002FClassification-and-Regression-Trees)\n    \n    - [How to measure\u002Frank “variable importance” when using CART?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F6478\u002Fhow-to-measure-rank-variable-importance-when-using-cart-specifically-using)\n    \n    - [Pruning a Tree in R](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F15318409\u002Fhow-to-prune-a-tree-in-r)\n    \n    - [Does rpart use multivariate splits by 
default?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F4356\u002Fdoes-rpart-use-multivariate-splits-by-default)\n    \n    - [FAQs about Recursive Partitioning](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Frpart)\n    \n- CTREE\n\n    - [party package in R](https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fparty\u002Fparty.pdf)\n    \n    - [Show volume in each node using ctree in R](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F13772715\u002Fshow-volume-in-each-node-using-ctree-plot-in-r)\n    \n    - [How to extract tree structure from ctree function?](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F8675664\u002Fhow-to-extract-tree-structure-from-ctree-function)\n    \n- CHAID\n\n    - [Wikipedia Article on CHAID](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCHAID)\n    \n    - [Basic Introduction to CHAID](https:\u002F\u002Fsmartdrill.com\u002FIntroduction-to-CHAID.html)\n    \n    - [Good Tutorial on CHAID](http:\u002F\u002Fwww.statsoft.com\u002FTextbook\u002FCHAID-Analysis)\n    \n- MARS\n\n    - [Wikipedia Article on MARS](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMultivariate_adaptive_regression_splines)\n    \n- Probabilistic Decision Trees\n\n    - [Bayesian Learning in Probabilistic Decision Trees](http:\u002F\u002Fwww.stats.org.uk\u002Fbayesian\u002FJordan.pdf)\n    \n    - [Probabilistic Trees Research Paper](http:\u002F\u002Fpeople.stern.nyu.edu\u002Fadamodar\u002Fpdfiles\u002Fpapers\u002Fprobabilistic.pdf)\n\n\u003Ca name=\"rf\" \u002F>\n\n## Random Forest \u002F Bagging\n\n- [Awesome Random Forest (GitHub)](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-random-forest)\n\n- [How to tune RF parameters in practice?](https:\u002F\u002Fwww.kaggle.com\u002Fforums\u002Ff\u002F15\u002Fkaggle-forum\u002Ft\u002F4092\u002Fhow-to-tune-rf-parameters-in-practice)\n\n- [Measures of variable importance in random forests](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F12605\u002Fmeasures-of-variable-importance-in-random-forests)\n\n- [Compare R-squared from two different Random Forest models](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F13869\u002Fcompare-r-squared-from-two-different-random-forest-models)\n\n- [OOB Estimate Explained | RF vs LDA](https:\u002F\u002Fstat.ethz.ch\u002Feducation\u002Fsemesters\u002Fss2012\u002Fams\u002Fslides\u002Fv10.2.pdf)\n\n- [Evaluating Random Forests for Survival Analysis Using Prediction Error Curves](https:\u002F\u002Fwww.jstatsoft.org\u002Findex.php\u002Fjss\u002Farticle\u002Fview\u002Fv050i11)\n\n- [Why doesn't Random Forest handle missing values in predictors?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F98953\u002Fwhy-doesnt-random-forest-handle-missing-values-in-predictors)\n\n- [How to build random forests in R with missing (NA) values?](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F8370455\u002Fhow-to-build-random-forests-in-r-with-missing-na-values)\n\n- [FAQs about Random Forest](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Frandom-forest), [More FAQs](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Frandom-forest)\n\n- [Obtaining knowledge from a random forest](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F21152\u002Fobtaining-knowledge-from-a-random-forest)\n\n- [Some Questions for R implementation](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F20537186\u002Fgetting-predictions-after-rfimpute), 
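[2](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F81609\u002Fwhether-preprocessing-is-needed-before-prediction-using-finalmodel-of-randomfore), [3](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F17059432\u002Frandom-forest-package-in-r-shows-error-during-prediction-if-there-are-new-fact)\n\n- Putting the two recurring themes above (variable importance and the OOB estimate) together, a minimal scikit-learn sketch (assumes scikit-learn is installed; not taken from any of the linked posts):\n\n    ```python\n    from sklearn.datasets import load_iris\n    from sklearn.ensemble import RandomForestClassifier\n\n    X, y = load_iris(return_X_y=True)\n    model = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)\n    model.fit(X, y)\n\n    print('OOB accuracy:', model.oob_score_)           # out-of-bag error estimate\n    print('importances:', model.feature_importances_)  # impurity-based importance\n    ```\n\n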
\u003Ca name=\"gbm\" \u002F>\n\n## Boosting\n\n- [Boosting for Better Predictions](http:\u002F\u002Fwww.datasciencecentral.com\u002Fprofiles\u002Fblogs\u002Fboosting-algorithms-for-better-predictions)\n\n- [Boosting Wikipedia Page](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBoosting_(machine_learning))\n\n- [Introduction to Boosted Trees | Tianqi Chen](https:\u002F\u002Fhomes.cs.washington.edu\u002F~tqchen\u002Fpdf\u002FBoostedTree.pdf)\n\n- Gradient Boosting Machine\n\n    - [Gradient Boosting Wiki](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FGradient_boosting)\n    \n    - [Guidelines for GBM parameters in R](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F25748\u002Fwhat-are-some-useful-guidelines-for-gbm-parameters), [Strategy to set parameters](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F35984\u002Fstrategy-to-set-the-gbm-parameters)\n    \n    - [Meaning of Interaction Depth](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F16501\u002Fwhat-does-interaction-depth-mean-in-gbm)\n    \n    - [Role of n.minobsinnode parameter of GBM in R](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F30645\u002Frole-of-n-minobsinnode-parameter-of-gbm-in-r)\n    \n    - [GBM in R](http:\u002F\u002Fwww.slideshare.net\u002Fmark_landry\u002Fgbm-package-in-r)\n    \n    - [FAQs about GBM](http:\u002F\u002Fstats.stackexchange.com\u002Ftags\u002Fgbm\u002Fhot)\n    \n    - [GBM vs xgboost](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fhiggs-boson\u002Fforums\u002Ft\u002F9497\u002Fr-s-gbm-vs-python-s-xgboost)\n\n- xgboost\n\n    - [xgboost tuning kaggle](https:\u002F\u002Fwww.kaggle.com\u002Fkhozzy\u002Frossmann-store-sales\u002Fxgboost-parameter-tuning-template\u002Flog)\n    \n    - [xgboost vs gbm](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fotto-group-product-classification-challenge\u002Fforums\u002Ft\u002F13012\u002Fquestion-to-experienced-kagglers-and-anyone-who-wants-to-take-a-shot\u002F68296#post68296)\n    \n    - [xgboost survey](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fhiggs-boson\u002Fforums\u002Ft\u002F10335\u002Fxgboost-post-competition-survey)\n    \n    - [Practical XGBoost in Python online course (free)](http:\u002F\u002Feducation.parrotprediction.teachable.com\u002Fcourses\u002Fpractical-xgboost-in-python)\n    \n- AdaBoost\n\n    - [AdaBoost Wiki](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FAdaBoost), [Python Code](https:\u002F\u002Fgist.github.com\u002Ftristanwietsma\u002F5486024)\n    \n    - [AdaBoost Sparse Input Support](http:\u002F\u002Fhamzehal.blogspot.com\u002F2014\u002F06\u002Fadaboost-sparse-input-support.html)\n    \n    - [adaBag R package](https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fadabag\u002Fadabag.pdf)\n    \n    - [Tutorial](http:\u002F\u002Fmath.mit.edu\u002F~rothvoss\u002F18.304.3PM\u002FPresentations\u002F1-Eric-Boosting304FinalRpdf.pdf)\n\n- CatBoost\n\n    - [CatBoost Documentation](https:\u002F\u002Fcatboost.ai\u002Fdocs\u002F)\n\n    - [Benchmarks](https:\u002F\u002Fcatboost.ai\u002F#benchmark)\n\n    - 
[Tutorial](https:\u002F\u002Fgithub.com\u002Fcatboost\u002Ftutorials)\n\n    - [GitHub Project](https:\u002F\u002Fgithub.com\u002Fcatboost)\n\n    - [CatBoost vs. Light GBM vs. XGBoost](https:\u002F\u002Ftowardsdatascience.com\u002Fcatboost-vs-light-gbm-vs-xgboost-5f93620723db)\n\n\u003Ca name=\"ensem\" \u002F>\n\n## Ensembles\n\n- [Wikipedia Article on Ensemble Learning](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEnsemble_learning)\n\n- [Kaggle Ensembling Guide](http:\u002F\u002Fmlwave.com\u002Fkaggle-ensembling-guide\u002F)\n\n- [The Power of Simple Ensembles](http:\u002F\u002Fwww.overkillanalytics.net\u002Fmore-is-always-better-the-power-of-simple-ensembles\u002F)\n\n- [Ensemble Learning Intro](http:\u002F\u002Fmachine-learning.martinsewell.com\u002Fensembles\u002F)\n\n- [Ensemble Learning Paper](http:\u002F\u002Fcs.nju.edu.cn\u002Fzhouzh\u002Fzhouzh.files\u002Fpublication\u002FspringerEBR09.pdf)\n\n- [Ensembling models with R](http:\u002F\u002Famunategui.github.io\u002Fblending-models\u002F), [Ensembling Regression Models in R](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F26790\u002Fensembling-regression-models), [Intro to Ensembles in R](http:\u002F\u002Fwww.vikparuchuri.com\u002Fblog\u002Fintro-to-ensemble-learning-in-r\u002F)\n\n- [Ensembling Models with caret](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F27361\u002Fstacking-ensembling-models-with-caret)\n\n- [Bagging vs Boosting vs Stacking](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F18891\u002Fbagging-boosting-and-stacking-in-machine-learning)\n\n- [Good Resources | Kaggle Africa Soil Property Prediction](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fafsis-soil-properties\u002Fforums\u002Ft\u002F10391\u002Fbest-ensemble-references)\n\n- [Boosting vs Bagging](http:\u002F\u002Fwww.chioka.in\u002Fwhich-is-better-boosting-or-bagging\u002F)\n\n- [Resources for learning how to implement ensemble methods](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F32703\u002Fresources-for-learning-how-to-implement-ensemble-methods)\n\n- [How are classifications merged in an ensemble classifier?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F21502\u002Fhow-are-classifications-merged-in-an-ensemble-classifier)\n\n\u003Ca name=\"stack\" \u002F>\n\n## Stacking Models\n\n- [Stacking, Blending and Stacked Generalization](http:\u002F\u002Fwww.chioka.in\u002Fstacking-blending-and-stacked-generalization\u002F)\n\n- [Stacked Generalization (Stacking)](http:\u002F\u002Fmachine-learning.martinsewell.com\u002Fensembles\u002Fstacking\u002F)\n\n- [Stacked Generalization: when does it work?](http:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F97-2\u002F011.pdf)\n\n- [Stacked Generalization Paper](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.56.1533&rep=rep1&type=pdf)\n\n\u003Ca name=\"vc\" \u002F>\n\n## Vapnik–Chervonenkis Dimension\n\n- [Wikipedia article on VC Dimension](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FVC_dimension)\n\n- [Intuitive Explanation of VC Dimension](https:\u002F\u002Fwww.quora.com\u002FExplain-VC-dimension-and-shattering-in-lucid-Way)\n\n- [Video explaining VC Dimension](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=puDzy2XmR5c)\n\n- [Introduction to VC Dimension](http:\u002F\u002Fwww.svms.org\u002Fvc-dimension\u002F)\n\n- [FAQs about VC Dimension](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Fvc-dimension)\n\n- [Do ensemble techniques increase 
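VC-dimension?](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F78076\u002Fdo-ensemble-techniques-increase-vc-dimension)\n\n- A quick worked instance of the definition behind the links above (a standard textbook example, stated here for reference):\n\n    ```latex\n    \\text{For intervals } \\mathcal{H} = \\{\\, h_{[a,b]} : h_{[a,b]}(x) = \\mathbf{1}[a \\le x \\le b] \\,\\} \\text{ on } \\mathbb{R}:\n    \\text{any two points can be shattered, but for ordered points } x_1, x_2, x_3\n    \\text{ the labeling } (+,-,+) \\text{ is unachievable, so } \\mathrm{VCdim}(\\mathcal{H}) = 2.\n    ```\n\n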
\u003Ca name=\"bayes\" \u002F>\n\n## Bayesian Machine Learning\n\n- [Bayesian Methods for Hackers (using pyMC)](https:\u002F\u002Fgithub.com\u002FCamDavidsonPilon\u002FProbabilistic-Programming-and-Bayesian-Methods-for-Hackers)\n\n- [Should all Machine Learning be Bayesian?](http:\u002F\u002Fvideolectures.net\u002Fbark08_ghahramani_samlbb\u002F)\n\n- [Tutorial on Bayesian Optimisation for Machine Learning](http:\u002F\u002Fwww.iro.umontreal.ca\u002F~bengioy\u002Fcifar\u002FNCAP2014-summerschool\u002Fslides\u002FRyan_adams_140814_bayesopt_ncap.pdf)\n\n- [Bayesian Reasoning and Deep Learning](http:\u002F\u002Fblog.shakirm.com\u002F2015\u002F10\u002Fbayesian-reasoning-and-deep-learning\u002F), [Slides](http:\u002F\u002Fblog.shakirm.com\u002Fwp-content\u002Fuploads\u002F2015\u002F10\u002FBayes_Deep.pdf)\n\n- [Bayesian Statistics Made Simple](http:\u002F\u002Fgreenteapress.com\u002Fwp\u002Fthink-bayes\u002F)\n\n- [Kalman & Bayesian Filters in Python](https:\u002F\u002Fgithub.com\u002Frlabbe\u002FKalman-and-Bayesian-Filters-in-Python)\n\n- [Markov Chain Wikipedia Page](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMarkov_chain)\n\n\n\u003Ca name=\"semi\" \u002F>\n\n## Semi Supervised Learning\n\n- [Wikipedia article on Semi Supervised Learning](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FSemi-supervised_learning)\n\n- [Tutorial on Semi Supervised Learning](http:\u002F\u002Fpages.cs.wisc.edu\u002F~jerryzhu\u002Fpub\u002Fsslicml07.pdf)\n\n- [Graph Based Semi Supervised Learning for NLP](http:\u002F\u002Fgraph-ssl.wdfiles.com\u002Flocal--files\u002Fblog%3A_start\u002Fgraph_ssl_acl12_tutorial_slides_final.pdf)\n\n- [Taxonomy](http:\u002F\u002Fis.tuebingen.mpg.de\u002Ffileadmin\u002Fuser_upload\u002Ffiles\u002Fpublications\u002Ftaxo_[0].pdf)\n\n- [Video Tutorial Weka](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=sWxcIjZFGNM)\n\n- [Unsupervised, Supervised and Semi Supervised learning](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F517\u002Funsupervised-supervised-and-semi-supervised-learning)\n\n- [Research Papers 1](http:\u002F\u002Fmlg.eng.cam.ac.uk\u002Fzoubin\u002Fpapers\u002Fzglactive.pdf), [2](http:\u002F\u002Fmlg.eng.cam.ac.uk\u002Fzoubin\u002Fpapers\u002Fzgl.pdf), [3](http:\u002F\u002Ficml.cc\u002F2012\u002Fpapers\u002F616.pdf)\n\n\n\u003Ca name=\"opt\" \u002F>\n\n## Optimization\n\n- [Mean Variance Portfolio Optimization with R and Quadratic Programming](http:\u002F\u002Fwww.wdiam.com\u002F2012\u002F06\u002F10\u002Fmean-variance-portfolio-optimization-with-r-and-quadratic-programming\u002F)\n\n- [Algorithms for Sparse Optimization and Machine Learning](http:\u002F\u002Fwww.ima.umn.edu\u002F2011-2012\u002FW3.26-30.12\u002Factivities\u002FWright-Steve\u002Fsjw-ima12)\n\n- [Optimization Algorithms in Machine Learning](http:\u002F\u002Fpages.cs.wisc.edu\u002F~swright\u002Fnips2010\u002Fsjw-nips10.pdf), [Video Lecture](http:\u002F\u002Fvideolectures.net\u002Fnips2010_wright_oaml\u002F)\n\n- [Optimization Algorithms for Data Analysis](http:\u002F\u002Fwww.birs.ca\u002Fworkshops\u002F2011\u002F11w2035\u002Ffiles\u002FWright.pdf)\n\n- [Video Lectures on Optimization](http:\u002F\u002Fvideolectures.net\u002Fstephen_j_wright\u002F)\n\n- [Optimization Algorithms in Support Vector 
Machines](http:\u002F\u002Fpages.cs.wisc.edu\u002F~swright\u002Ftalks\u002Fsjw-complearning.pdf)\n\n- [The Interplay of Optimization and Machine Learning Research](http:\u002F\u002Fjmlr.org\u002Fpapers\u002Fvolume7\u002FMLOPT-intro06a\u002FMLOPT-intro06a.pdf)\n\n- [Hyperopt tutorial for Optimizing Neural Networks’ Hyperparameters](http:\u002F\u002Fvooban.com\u002Fen\u002Ftips-articles-geek-stuff\u002Fhyperopt-tutorial-for-optimizing-neural-networks-hyperparameters\u002F)\n\n\n\u003Ca name=\"other\" \u002F>\n\n## Other Tutorials\n\n- For a collection of Data Science Tutorials using R, please refer to [this list](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FDataScienceR).\n\n- For a collection of Data Science Tutorials using Python, please refer to [this list](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FDataSciencePython).\n","# 机器学习与深度学习教程 [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\n- 本仓库包含按主题整理的机器学习和深度学习教程、文章及其他资源列表。更多优秀的列表可在该[列表](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)中找到。\n\n- 如您希望为本列表贡献内容，请阅读[贡献指南](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FMachine-Learning-Tutorials\u002Fblob\u002Fmaster\u002Fcontributing.md)。\n\n- [数据科学、自然语言处理和机器学习领域的R语言教程精选列表](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FDataScienceR)。\n\n- [数据科学、自然语言处理和机器学习领域的Python教程精选列表](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FDataSciencePython)。\n\n\n## 目录\n- [简介](#general)\n- [面试资源](#interview)\n- [人工智能](#ai)\n- [遗传算法](#ga)\n- [统计学](#stat)\n- [实用博客](#blogs)\n- [Quora上的资源](#quora)\n- [Kaggle上的资源](#kaggle)\n- [速查表](#cs)\n- [分类](#classification)\n- [线性回归](#linear)\n- [逻辑回归](#logistic)\n- [通过重采样进行模型验证](#validation)\n    - [交叉验证](#cross)\n    - [自助法](#boot)\n- [深度学习](#deep)\n    - [框架](#frame)\n    - [前馈神经网络](#feed)\n    - [循环神经网络、LSTM、GRU](#rnn)\n    - [受限玻尔兹曼机、深度信念网络](#rbm)\n    - [自编码器](#auto)\n    - [卷积神经网络](#cnn)\n    - [图表示学习](#nrl)\n- [自然语言处理](#nlp)\n    - [主题建模、LDA](#topic)\n    - [Word2Vec](#word2vec)\n- [计算机视觉](#vision)\n- [支持向量机](#svm)\n- [强化学习](#rl)\n- [决策树](#dt)\n- [随机森林\u002F装袋](#rf)\n- [提升](#gbm)\n- [集成方法](#ensem)\n- [模型堆叠](#stack)\n- [VC维](#vc)\n- [贝叶斯机器学习](#bayes)\n- [半监督学习](#semi)\n- [优化技术](#opt)\n- [其他实用教程](#other)\n\n\u003Ca name=\"general\" \u002F>\n\n## 简介\n\n- [吴恩达（斯坦福大学）的机器学习课程](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning)\n\n- [AI\u002FML YouTube课程](https:\u002F\u002Fgithub.com\u002Fdair-ai\u002FML-YouTube-Courses)\n\n- [机器学习资源精选列表](https:\u002F\u002Fhackr.io\u002Ftutorials\u002Flearn-machine-learning-ml)\n\n- [15小时专家视频深入介绍机器学习](http:\u002F\u002Fwww.dataschool.io\u002F15-hours-of-expert-machine-learning-videos\u002F)\n\n- [统计学习导论](http:\u002F\u002Fwww-bcf.usc.edu\u002F~gareth\u002FISL\u002F)\n\n- [机器学习高校课程列表](https:\u002F\u002Fgithub.com\u002Fprakhar1989\u002Fawesome-courses#machine-learning)\n\n- [面向软件工程师的机器学习](https:\u002F\u002Fgithub.com\u002FZuzooVn\u002Fmachine-learning-for-software-engineers)\n\n- [深入机器学习](https:\u002F\u002Fgithub.com\u002Fhangtwenty\u002Fdive-into-machine-learning)\n\n- [精选的优秀机器学习框架、库和软件列表](https:\u002F\u002Fgithub.com\u002Fjosephmisiti\u002Fawesome-machine-learning)\n\n- [精选的数据可视化库和资源列表](https:\u002F\u002Fgithub.com\u002Ffasouto\u002Fawesome-dataviz)\n\n- [一个用于学习并应用于实际问题的优秀数据科学仓库](https:\u002F\u002Fgithub.com\u002Fokulbilisim\u002Fawesome-datascience)\n\n- [开源数据科学硕士项目](http:\u002F\u002Fdatasciencemasters.org\u002F)\n\n- [Cross 
Validated上的机器学习常见问题解答](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Fmachine-learning)\n\n- [你应该始终深入理解的一些机器学习算法](https:\u002F\u002Fwww.quora.com\u002FWhat-are-some-Machine-Learning-algorithms-that-you-should-always-have-a-strong-understanding-of-and-why)\n\n- [线性无关、正交与不相关变量的区别](http:\u002F\u002Fterpconnect.umd.edu\u002F~bmomen\u002FBIOM621\u002FLineardepCorrOrthogonal.pdf)\n\n- [机器学习概念列表](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FList_of_machine_learning_concepts)\n\n- [多个机器学习主题的幻灯片](http:\u002F\u002Fwww.slideshare.net\u002Fpierluca.lanzi\u002Fpresentations)\n\n- [MIT机器学习讲座幻灯片](http:\u002F\u002Fwww.ai.mit.edu\u002Fcourses\u002F6.867-f04\u002Flectures.html)\n\n- [监督学习算法比较](http:\u002F\u002Fwww.dataschool.io\u002Fcomparing-supervised-learning-algorithms\u002F)\n\n- [学习数据科学基础](http:\u002F\u002Fwww.dataschool.io\u002Flearning-data-science-fundamentals\u002F)\n\n- [机器学习中应避免的错误](https:\u002F\u002Fmedium.com\u002F@nomadic_mind\u002Fnew-to-machine-learning-avoid-these-three-mistakes-73258b3848a4#.lih061l3l)\n\n- [统计机器学习课程](http:\u002F\u002Fwww.stat.cmu.edu\u002F~larry\u002F=sml\u002F)\n\n- [TheAnalyticsEdge edX笔记与代码](https:\u002F\u002Fgithub.com\u002Fpedrosan\u002FTheAnalyticsEdge)\n\n- [玩转机器学习](https:\u002F\u002Fgithub.com\u002Fhumphd\u002Fhave-fun-with-machine-learning)\n\n- [过去7天Twitter上分享最多的#机器学习内容](http:\u002F\u002Ftheherdlocker.com\u002Ftweet\u002Fpopularity\u002Fmachinelearning)\n\n- [掌握机器学习](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fgrokking-machine-learning)\n\n\u003Ca name=\"interview\" \u002F>\n\n## 面试资源\n\n- [41道必备的机器学习面试题（附答案）](https:\u002F\u002Fwww.springboard.com\u002Fblog\u002Fmachine-learning-interview-questions\u002F)\n\n- [计算机科学研究生如何准备数据科学家面试？](https:\u002F\u002Fwww.quora.com\u002FHow-can-a-computer-science-graduate-student-prepare-himself-for-data-scientist-machine-learning-intern-interviews)\n\n- [我该如何学习机器学习？](https:\u002F\u002Fwww.quora.com\u002FHow-do-I-learn-machine-learning-1)\n\n- [数据科学面试常见问题解答](https:\u002F\u002Fwww.quora.com\u002Ftopic\u002FData-Science-Interviews\u002Ffaq)\n\n- [数据科学家的核心技能有哪些？](https:\u002F\u002Fwww.quora.com\u002FWhat-are-the-key-skills-of-a-data-scientist)\n\n- [DS\u002FML面试资源大全](https:\u002F\u002Ftowardsdatascience.com\u002Fthe-big-list-of-ds-ml-interview-resources-2db4f651bd63)\n\n\u003Ca name=\"ai\" \u002F>\n\n## 人工智能\n\n- [优秀的人工智能（GitHub仓库）](https:\u002F\u002Fgithub.com\u002Fowainlewis\u002Fawesome-artificial-intelligence)\n\n- [UC Berkeley CS188人工智能导论](http:\u002F\u002Fai.berkeley.edu\u002Fhome.html)，[讲座视频](http:\u002F\u002Fai.berkeley.edu\u002Flecture_videos.html)，[2](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=W1S-HSakPTM)\n\n- [编程社区整理的人工智能学习资源](https:\u002F\u002Fhackr.io\u002Ftutorials\u002Flearn-artificial-intelligence-ai)\n\n- [MIT 6.034人工智能讲座视频](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi)，[完整课程](https:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-034-artificial-intelligence-fall-2010\u002F)\n\n- [edX课程 | Klein & Abbeel](https:\u002F\u002Fcourses.edx.org\u002Fcourses\u002FBerkeleyX\u002FCS188x_1\u002F1T2013\u002Finfo)\n\n- [Udacity课程 | Norvig & Thrun](https:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fintro-to-artificial-intelligence--cs271)\n\n- [TED关于人工智能的演讲](http:\u002F\u002Fwww.ted.com\u002Fplaylists\u002F310\u002Ftalks_on_artificial_intelligen)\n\n\u003Ca name=\"ga\" \u002F>\n\n## 遗传算法\n\n- [遗传算法维基百科页面](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FGenetic_algorithm)\n\n- 
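[Python中遗传算法的简单实现（第1部分）](http:\u002F\u002Foutlace.com\u002Fminiga.html)，[第2部分](http:\u002F\u002Foutlace.com\u002Fminiga_addendum.html)\n\n- 作为上面教程思路的速览，这里给出一个极简的遗传算法草稿（仅为示意，与上述教程无关；目标函数、种群规模等均为假设）：\n\n    ```python\n    import random\n\n    random.seed(0)\n    POP, BITS, GENS = 20, 16, 30\n\n    def fitness(ind):\n        return sum(ind)  # 适应度：二进制串中 1 的个数，越多越好\n\n    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]\n    for _ in range(GENS):\n        # 锦标赛选择：每次随机抽两个个体，保留适应度较高者\n        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(POP)]\n        nxt = []\n        for a, b in zip(parents[::2], parents[1::2]):\n            cut = random.randrange(1, BITS)            # 单点交叉\n            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):\n                i = random.randrange(BITS)             # 单点变异\n                child[i] = 1 - child[i]\n                nxt.append(child)\n        pop = nxt\n\n    print(max(fitness(p) for p in pop))  # 多数情况下应接近 BITS\n    ```\n\n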
- [遗传算法与人工神经网络](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F1402370\u002Fwhen-to-use-genetic-algorithms-vs-when-to-use-neural-networks)\n\n- [用通俗语言解释的遗传算法](http:\u002F\u002Fwww.ai-junkie.com\u002Fga\u002Fintro\u002Fgat1.html)\n\n- [遗传编程](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FGenetic_programming)\n\n    - [Python中的遗传编程（GitHub）](https:\u002F\u002Fgithub.com\u002Ftrevorstephens\u002Fgplearn)\n    \n    - [遗传算法与遗传编程的区别（Quora）](https:\u002F\u002Fwww.quora.com\u002FWhats-the-difference-between-Genetic-Algorithms-and-Genetic-Programming)，[StackOverflow](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F3819977\u002Fwhat-are-the-differences-between-genetic-algorithms-and-genetic-programming)\n\n\u003Ca name=\"stat\" \u002F>\n\n## 统计学\n\n- [Stat Trek网站](http:\u002F\u002Fstattrek.com\u002F) - 一个专门用于自学统计学的网站\n\n- [使用Python学习统计学](https:\u002F\u002Fgithub.com\u002Frouseguy\u002Fintro2stats) - 通过以应用为中心的编程方式学习统计学\n\n- [黑客的统计学 | 演示文稿 | @jakevdp](https:\u002F\u002Fspeakerdeck.com\u002Fjakevdp\u002Fstatistics-for-hackers) - 杰克·范德普拉斯的演示文稿\n\n- [在线统计学教材](http:\u002F\u002Fonlinestatbook.com\u002F2\u002Findex.html) - 一门互动式多媒体统计学课程\n\n- [什么是抽样分布？](http:\u002F\u002Fstattrek.com\u002Fsampling\u002Fsampling-distribution.aspx)\n\n- 教程\n\n    - [AP统计学教程](http:\u002F\u002Fstattrek.com\u002Ftutorials\u002Fap-statistics-tutorial.aspx)\n    \n    - [统计与概率教程](http:\u002F\u002Fstattrek.com\u002Ftutorials\u002Fstatistics-tutorial.aspx)\n    \n    - [矩阵代数教程](http:\u002F\u002Fstattrek.com\u002Ftutorials\u002Fmatrix-algebra-tutorial.aspx)\n    \n- [什么是无偏估计量？](https:\u002F\u002Fwww.physicsforums.com\u002Fthreads\u002Fwhat-is-an-unbiased-estimator.547728\u002F)\n\n- [拟合优度解释](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FGoodness_of_fit)\n\n- [什么是QQ图？](http:\u002F\u002Fonlinestatbook.com\u002F2\u002Fadvanced_graphs\u002Fq-q_plots.html)\n\n- [OpenIntro统计学](https:\u002F\u002Fwww.openintro.org\u002Fstat\u002Ftextbook.php?stat_book=os) - 免费PDF教材\n\n\u003Ca name=\"blogs\" \u002F>\n\n## 有用的博客\n\n- [Edwin Chen的博客](http:\u002F\u002Fblog.echen.me\u002F) - 一个关于数学、统计学、机器学习、众包和数据科学的博客\n\n- [数据科学学校博客](http:\u002F\u002Fwww.dataschool.io\u002F) - 面向初学者的数据科学！\n\n- [ML Wave](http:\u002F\u002Fmlwave.com\u002F) - 一个学习机器学习的博客\n\n- [Andrej Karpathy](http:\u002F\u002Fkarpathy.github.io\u002F) - 一个关于深度学习和数据科学的博客\n\n- [Colah的博客](http:\u002F\u002Fcolah.github.io\u002F) - 令人惊叹的神经网络博客\n\n- [Alex Minnaar的博客](http:\u002F\u002Falexminnaar.com\u002F) - 一个关于机器学习和软件工程的博客\n\n- [Statistically Significant](http:\u002F\u002Fandland.github.io\u002F) - 安德鲁·兰德格拉夫的数据科学博客\n\n- [Simply Statistics](http:\u002F\u002Fsimplystatistics.org\u002F) - 由三位生物统计学教授运营的博客\n\n- [Yanir Seroussi的博客](https:\u002F\u002Fyanirseroussi.com\u002F) - 一个关于数据科学及其他领域的博客\n\n- [fastML](http:\u002F\u002Ffastml.com\u002F) - 让机器学习变得简单\n\n- [Trevor Stephens博客](http:\u002F\u002Ftrevorstephens.com\u002F) - 特雷弗·斯蒂芬斯的个人主页\n\n- [no free hunch | kaggle](http:\u002F\u002Fblog.kaggle.com\u002F) - Kaggle关于数据科学所有方面的博客\n\n- [量化之旅 | outlace](http:\u002F\u002Foutlace.com\u002F) - 学习量化应用\n\n- [r4stats](http:\u002F\u002Fr4stats.com\u002F) - 分析数据科学领域，并帮助人们学习使用R语言\n\n- [Variance Explained](http:\u002F\u002Fvarianceexplained.org\u002F) - 大卫·罗宾逊的博客\n\n- [AI Junkie](http:\u002F\u002Fwww.ai-junkie.com\u002F) - 一个关于人工智能的博客\n\n- [Tim Dettmers的深度学习博客](http:\u002F\u002Ftimdettmers.com\u002F) - 让深度学习更易获取\n\n- [J 
Alammar的博客](http:\u002F\u002Fjalammar.github.io\u002F) - 关于机器学习和神经网络的文章\n\n- [Adam Geitgey](https:\u002F\u002Fmedium.com\u002F@ageitgey\u002Fmachine-learning-is-fun-80ea3ec3c471#.f7vwrtfne) - 最简单的机器学习入门\n\n- [Ethen的笔记集](https:\u002F\u002Fgithub.com\u002Fethen8181\u002Fmachine-learning) - 不断更新的机器学习文档（主要使用Python3）。内容包括从头实现机器学习算法的教学以及开源库的使用\n\n\u003Ca name=\"quora\" \u002F>\n\n## Quora上的资源\n\n- [浏览量最高的机器学习作者](https:\u002F\u002Fwww.quora.com\u002Ftopic\u002FMachine-Learning\u002Fwriters)\n\n- [Quora上的数据科学话题](https:\u002F\u002Fwww.quora.com\u002FData-Science)\n\n- [William Chen的回答](https:\u002F\u002Fwww.quora.com\u002FWilliam-Chen-6\u002Fanswers)\n\n- [Michael Hochster的回答](https:\u002F\u002Fwww.quora.com\u002FMichael-Hochster\u002Fanswers)\n\n- [Ricardo Vladimiro的回答](https:\u002F\u002Fwww.quora.com\u002FRicardo-Vladimiro-1\u002Fanswers)\n\n- [用统计讲故事](https:\u002F\u002Fdatastories.quora.com\u002F)\n\n- [Quora上的数据科学常见问题](https:\u002F\u002Fwww.quora.com\u002Ftopic\u002FData-Science\u002Ffaq)\n\n- [Quora上的机器学习常见问题](https:\u002F\u002Fwww.quora.com\u002Ftopic\u002FMachine-Learning\u002Ffaq)\n\n\u003Ca name=\"kaggle\" \u002F>\n\n## Kaggle竞赛总结\n\n- [如何几乎赢得Kaggle竞赛](https:\u002F\u002Fyanirseroussi.com\u002F2014\u002F08\u002F24\u002Fhow-to-almost-win-kaggle-competitions\u002F)\n\n- [用于EEG检测的卷积神经网络](http:\u002F\u002Fblog.kaggle.com\u002F2015\u002F10\u002F05\u002Fgrasp-and-lift-eeg-detection-winners-interview-3rd-place-team-hedj\u002F)\n\n- [Facebook招聘III详解](http:\u002F\u002Falexminnaar.com\u002Ftag\u002Fkaggle-competitions.html)\n\n- [使用在线机器学习预测点击率](http:\u002F\u002Fmlwave.com\u002Fpredicting-click-through-rates-with-online-machine-learning\u002F)\n\n- [如何在你的第一次Kaggle竞赛中排名前10%](https:\u002F\u002Fdnc1994.com\u002F2016\u002F05\u002Frank-10-percent-in-first-kaggle-competition-en\u002F)\n\n\u003Ca name=\"cs\" \u002F>\n\n## 备忘单\n\n- [概率备忘单](http:\u002F\u002Fstatic1.squarespace.com\u002Fstatic\u002F54bf3241e4b0f0d81bf7ff36\u002Ft\u002F55e9494fe4b011aed10e48e5\u002F1441352015658\u002Fprobability_cheatsheet.pdf),\n[来源](http:\u002F\u002Fwww.wzchen.com\u002Fprobability-cheatsheet\u002F)\n\n- [机器学习备忘单](https:\u002F\u002Fgithub.com\u002Fsoulmachine\u002Fmachine-learning-cheat-sheet)\n\n- [ML Compiled](https:\u002F\u002Fml-compiled.readthedocs.io\u002Fen\u002Flatest\u002F)\n\n## 分类\n\n- [平衡类别是否能提升分类器性能？](http:\u002F\u002Fwww.win-vector.com\u002Fblog\u002F2015\u002F02\u002Fdoes-balancing-classes-improve-classifier-performance\u002F)\n\n- [偏差是什么？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F6581\u002Fwhat-is-deviance-specifically-in-cart-rpart)\n\n- [何时选择哪种机器学习分类器？](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F2595176\u002Fwhen-to-choose-which-machine-learning-classifier)\n\n- [不同分类算法有哪些优势？](https:\u002F\u002Fwww.quora.com\u002FWhat-are-the-advantages-of-different-classification-algorithms)\n\n- [ROC与AUC详解](http:\u002F\u002Fwww.dataschool.io\u002Froc-curves-and-auc-explained\u002F)（[相关视频](https:\u002F\u002Fyoutu.be\u002FOAl6eAyP-yo)）\n\n- [ROC分析简介](https:\u002F\u002Fccrma.stanford.edu\u002Fworkshops\u002Fmir2009\u002Freferences\u002FROCintro.pdf)\n\n- [混淆矩阵术语简易指南](http:\u002F\u002Fwww.dataschool.io\u002Fsimple-guide-to-confusion-matrix-terminology\u002F)\n\n\n\u003Ca name=\"linear\" \u002F>\n\n## 线性回归\n\n- [概述](#general-)\n\n    - [线性回归的假设](http:\u002F\u002Fpareonline.net\u002Fgetvn.asp?n=2&v=8)，[Stack Exchange](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F16381\u002Fwhat-is-a-complete-list-of-the-usual-assumptions-for-linear-regression)\n    \n    - 
[线性回归综合资源](http:\u002F\u002Fpeople.duke.edu\u002F~rnau\u002Fregintro.htm)\n    \n    - [线性回归的应用与解释](http:\u002F\u002Fwww.dataschool.io\u002Fapplying-and-interpreting-linear-regression\u002F)\n    \n    - [线性回归模型中误差方差恒定意味着什么？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F52089\u002Fwhat-does-having-constant-variance-in-a-linear-regression-model-mean\u002F52107?stw=2#52107)\n    \n    - [y对x与x对y的线性回归有何区别？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F22718\u002Fwhat-is-the-difference-between-linear-regression-on-y-with-x-and-x-with-y?lq=1)\n    \n    - [当因变量不服从正态分布时，线性回归是否仍然有效？](https:\u002F\u002Fwww.researchgate.net\u002Fpost\u002FIs_linear_regression_valid_when_the_outcome_dependant_variable_not_normally_distributed)\n- 多重共线性和VIF\n\n    - [虚拟变量陷阱 | 多重共线性](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMulticollinearity)\n    \n    - [使用VIF处理多重共线性](https:\u002F\u002Fjonlefcheck.net\u002F2012\u002F12\u002F28\u002Fdealing-with-multicollinearity-using-variance-inflation-factors\u002F)\n\n- [残差分析](#residuals-)\n\n    - [解读R中的plot.lm()](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F58141\u002Finterpreting-plot-lm)\n    \n    - [如何解读QQ图？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F101274\u002Fhow-to-interpret-a-qq-plot?lq=1)\n    \n    - [解读残差与拟合值图以验证假设](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F76226\u002Finterpreting-the-residuals-vs-fitted-values-plot-for-verifying-the-assumptions)\n\n- [异常值](#outliers-)\n\n    - [应如何处理异常值？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F175\u002Fhow-should-outliers-be-dealt-with-in-linear-regression-analysis)\n\n- [弹性网络](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FElastic_net_regularization)\n    - [通过弹性网络进行正则化和变量选择](https:\u002F\u002Fweb.stanford.edu\u002F~hastie\u002FPapers\u002Felasticnet.pdf)\n\n\u003Ca name=\"logistic\" \u002F>\n\n## 逻辑回归\n\n- [逻辑回归维基](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLogistic_regression)\n\n- [逻辑回归的几何直观](http:\u002F\u002Fflorianhartl.com\u002Flogistic-regression-geometric-intuition.html)\n\n- [获取预测类别（选择阈值）](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F25389\u002Fobtaining-predicted-values-y-1-or-0-from-a-logistic-regression-model-fit)\n\n- [逻辑回归中的残差](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F1432\u002Fwhat-do-the-residuals-in-a-logistic-regression-mean)\n\n- [logit模型与probit模型的区别](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F20523\u002Fdifference-between-logit-and-probit-models#30909)，[逻辑回归维基](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLogistic_regression)，[Probit模型维基](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FProbit_model)\n\n- [逻辑回归的伪R²](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F3559\u002Fwhich-pseudo-r2-measure-is-the-one-to-report-for-logistic-regression-cox-s)，[计算方法](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F8511\u002Fhow-to-calculate-pseudo-r2-from-rs-logistic-regression)，[其他细节](http:\u002F\u002Fwww.ats.ucla.edu\u002Fstat\u002Fmult_pkg\u002Ffaq\u002Fgeneral\u002FPsuedo_RSquareds.htm)\n\n- [深入理解逻辑回归指南](http:\u002F\u002Fwww.dataschool.io\u002Fguide-to-logistic-regression\u002F)\n\n\u003Ca name=\"validation\" \u002F>\n\n## 使用重采样进行模型验证\n\n- [重采样详解](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FResampling_(statistics))\n\n- [在R中划分数据集](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F13536537\u002Fpartitioning-data-set-in-r-based-on-multiple-classes-of-observations)\n\n- 
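[在R中实现留出法验证](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F22972854\u002Fhow-to-implement-a-hold-out-validation-in-r)，[2](http:\u002F\u002Fwww.gettinggeneticsdone.com\u002F2011\u002F02\u002Fsplit-data-frame-into-testing-and.html)\n\n- 作为对照，下面是留出法与k折交叉验证在Python中的极简示例（仅为示意性草稿，假设已安装scikit-learn，数据集与参数均为演示用）：\n\n    ```python\n    from sklearn.datasets import load_iris\n    from sklearn.linear_model import LogisticRegression\n    from sklearn.model_selection import KFold, cross_val_score, train_test_split\n\n    X, y = load_iris(return_X_y=True)\n\n    # 留出法：一次性划分训练集与测试集\n    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)\n    print(LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te))\n\n    # 5折交叉验证：每个样本恰好充当一次验证数据\n    cv = KFold(n_splits=5, shuffle=True, random_state=0)\n    print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv).mean())\n    ```\n\n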
\u003Ca name=\"cross\" \u002F>\n\n- [交叉验证](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCross-validation_(statistics))\n    - [如何在预测建模中使用交叉验证](http:\u002F\u002Fstuartlacy.co.uk\u002F2016\u002F02\u002F04\u002Fhow-to-correctly-use-cross-validation-in-predictive-modelling\u002F)\n    - [交叉验证后是否应使用完整数据集进行训练？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F11602\u002Ftraining-with-the-full-dataset-after-cross-validation)\n    \n    - [哪种交叉验证方法最好？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F103459\u002Fhow-do-i-know-which-method-of-cross-validation-is-best)\n    \n    - [k折交叉验证中的方差估计](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F31190\u002Fvariance-estimates-in-k-fold-cross-validation)\n    \n    - [交叉验证能否替代验证集？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F18856\u002Fis-cross-validation-a-proper-substitute-for-validation-set)\n    \n    - [k折交叉验证中k值的选择](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F27730\u002Fchoice-of-k-in-k-fold-cross-validation)\n    \n    - [集成学习中的交叉验证](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F102631\u002Fk-fold-cross-validation-of-ensemble-learning)\n    \n    - [在R中进行k折交叉验证](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F22909197\u002Fcreating-folds-for-k-fold-cv-in-r-using-caret)\n    \n    - [优质资源](http:\u002F\u002Fwww.chioka.in\u002Ftag\u002Fcross-validation\u002F)\n    \n    - 过拟合与交叉验证\n    \n        - [防止对交叉验证数据过拟合 | Andrew Ng](http:\u002F\u002Fai.stanford.edu\u002F~ang\u002Fpapers\u002Fcv-final.pdf)\n        \n        - [模型选择中的过拟合及随后的性能评估偏差](http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume11\u002Fcawley10a\u002Fcawley10a.pdf)\n\n        - [用于检测和防止过拟合的交叉验证](http:\u002F\u002Fwww.autonlab.org\u002Ftutorials\u002Foverfit10.pdf)\n        \n        - [交叉验证如何克服过拟合问题](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F9053\u002Fhow-does-cross-validation-overcome-the-overfitting-problem)\n\n\n\u003Ca name=\"boot\" \u002F>\n\n- [自助法](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBootstrapping_(statistics))\n\n    - [为什么自助法有效？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F26088\u002Fexplaining-to-laypeople-why-bootstrapping-works)\n    \n    - [优质动画](https:\u002F\u002Fwww.stat.auckland.ac.nz\u002F~wild\u002FBootAnim\u002F)\n    \n    - [自助法示例](http:\u002F\u002Fstatistics.about.com\u002Fod\u002FApplications\u002Fa\u002FExample-Of-Bootstrapping.htm)\n    \n    - [理解自助法在验证和模型选择中的应用](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F14516\u002Funderstanding-bootstrapping-for-validation-and-model-selection?rq=1)\n    \n    - [交叉验证与自助法在预测误差估计上的比较](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F18348\u002Fdifferences-between-cross-validation-and-bootstrapping-to-estimate-the-predictio)，[交叉验证与.632自助法在分类性能评估上的比较](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F71184\u002Fcross-validation-or-bootstrapping-to-evaluate-classification-performance)\n\n\n\u003Ca name=\"deep\" \u002F>\n\n## 深度学习\n\n- [fast.ai - 面向编码者的实用深度学习](http:\u002F\u002Fcourse.fast.ai\u002F)\n\n- [fast.ai - 面向编码者的前沿深度学习](http:\u002F\u002Fcourse.fast.ai\u002Fpart2.html)\n\n- [精选的优秀深度学习教程、项目和社区列表](https:\u002F\u002Fgithub.com\u002FChristosChristofidis\u002Fawesome-deep-learning)\n\n- 
**[深度学习论文阅读路线图](https:\u002F\u002Fgithub.com\u002Ffloodsung\u002FDeep-Learning-Papers-Reading-Roadmap\u002Fblob\u002Fmaster\u002FREADME.md)**\n\n- [大量深度学习资源](http:\u002F\u002Fdeeplearning4j.org\u002Fdocumentation.html)\n\n- [有趣的深度学习和NLP项目（斯坦福）](http:\u002F\u002Fcs224d.stanford.edu\u002Freports.html)，[网站](http:\u002F\u002Fcs224d.stanford.edu\u002F)\n\n- [深度学习的核心概念](https:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Fdeep-learning-nutshell-core-concepts\u002F)\n\n- [使用Torch通过深度神经网络理解自然语言](https:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Funderstanding-natural-language-deep-neural-networks-using-torch\u002F)\n\n- [斯坦福深度学习教程](http:\u002F\u002Fufldl.stanford.edu\u002Ftutorial\u002F)\n\n- [Quora上的深度学习常见问题解答](https:\u002F\u002Fwww.quora.com\u002Ftopic\u002FDeep-Learning\u002Ffaq)\n\n- [Google+深度学习页面](https:\u002F\u002Fplus.google.com\u002Fcommunities\u002F112866381580457264725)\n\n- [近期与深度学习相关的Reddit AMA](http:\u002F\u002Fdeeplearning.net\u002F2014\u002F11\u002F22\u002Frecent-reddit-amas-about-deep-learning\u002F)，[另一场AMA](https:\u002F\u002Fwww.reddit.com\u002Fr\u002FIAmA\u002Fcomments\u002F3mdk9v\u002Fwe_are_google_researchers_working_on_deep\u002F)\n\n- [在哪里学习深度学习？](http:\u002F\u002Fwww.kdnuggets.com\u002F2014\u002F05\u002Flearn-deep-learning-courses-tutorials-overviews.html)\n\n- [英伟达关于深度学习的概念](http:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Fdeep-learning-nutshell-core-concepts\u002F)\n\n- [使用Python介绍深度学习（GitHub）](https:\u002F\u002Fgithub.com\u002Frouseguy\u002Fintro2deeplearning)，[优秀的入门幻灯片](https:\u002F\u002Fspeakerdeck.com\u002Fbargava\u002Fintroduction-to-deep-learning)\n\n- [牛津大学2015年视频讲座](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu)，[蒙特利尔暑期学校视频讲座](http:\u002F\u002Fvideolectures.net\u002Fdeeplearning2015_montreal\u002F)\n\n- [深度学习软件列表](http:\u002F\u002Fdeeplearning.net\u002Fsoftware_links\u002F)\n\n- [黑客版神经网络指南](http:\u002F\u002Fkarpathy.github.io\u002Fneuralnets\u002F)\n\n- [顶级arXiv深度学习论文解读](http:\u002F\u002Fwww.kdnuggets.com\u002F2015\u002F10\u002Ftop-arxiv-deep-learning-papers-explained.html)\n\n- [Geoff Hinton关于深度学习的YouTube视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=IcOMKXAw5VA)\n\n- [超赞的深度学习阅读清单](http:\u002F\u002Fdeeplearning.net\u002Freading-list\u002F)\n\n- [深度学习综合网站](http:\u002F\u002Fdeeplearning.net\u002F)，[软件](http:\u002F\u002Fdeeplearning.net\u002Fsoftware_links\u002F)\n\n- [深度学习教程](http:\u002F\u002Fdeeplearning4j.org\u002F)\n\n- [超棒！深度学习教程](https:\u002F\u002Fwww.toptal.com\u002Fmachine-learning\u002Fan-introduction-to-deep-learning-from-perceptrons-to-deep-networks)\n\n- [深度学习基础](http:\u002F\u002Falexminnaar.com\u002Fdeep-learning-basics-neural-networks-backpropagation-and-stochastic-gradient-descent.html)\n\n- [反向传播背后的直觉](https:\u002F\u002Fmedium.com\u002Fspidernitt\u002Fbreaking-down-neural-networks-an-intuitive-approach-to-backpropagation-3b2ff958794c)\n\n- [斯坦福教程](http:\u002F\u002Fufldl.stanford.edu\u002Ftutorial\u002Fsupervised\u002FMultiLayerNeuralNetworks\u002F)\n\n- [人工神经网络中的训练、验证和测试](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F2976452\u002Fwhats-is-the-difference-between-train-validation-and-test-set-in-neural-networ)\n\n- [人工神经网络教程](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F478947\u002Fwhat-are-some-good-resources-for-learning-about-artificial-neural-networks)\n\n- [Stack Overflow 上的神经网络常见问题解答](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Fneural-network?sort=votes&pageSize=50)\n\n- [deeplearning.net 
上的深度学习教程](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Findex.html)\n\n- [神经网络与深度学习在线书籍](http:\u002F\u002Fneuralnetworksanddeeplearning.com\u002F)\n\n- 神经机器翻译\n\n    - **[机器翻译阅读清单](https:\u002F\u002Fgithub.com\u002FTHUNLP-MT\u002FMT-Reading-List#machine-translation-reading-list)**\n\n    - [使用 GPU 的神经机器翻译简介（第 1 部分）](https:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Fintroduction-neural-machine-translation-with-gpus\u002F), [第 2 部分](https:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Fintroduction-neural-machine-translation-gpus-part-2\u002F), [第 3 部分](https:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Fintroduction-neural-machine-translation-gpus-part-3\u002F)\n    \n    - [Deep Speech：利用 GPU 加速的深度学习实现高精度语音识别](https:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Fdeep-speech-accurate-speech-recognition-gpu-accelerated-deep-learning\u002F)\n\n\u003Ca name=\"frame\" \u002F>\n\n- 深度学习框架\n\n    - [Torch 与 Theano 对比](http:\u002F\u002Ffastml.com\u002Ftorch-vs-theano\u002F)\n    \n    - [dl4j、torch7 和 theano 对比](http:\u002F\u002Fdeeplearning4j.org\u002Fcompare-dl4j-torch7-pylearn.html)\n    \n    - [按语言划分的深度学习库](http:\u002F\u002Fwww.teglor.com\u002Fb\u002Fdeep-learning-libraries-language-cm569\u002F)\n    \n\n    - [Theano](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTheano_(software))\n    \n        - [官网](http:\u002F\u002Fdeeplearning.net\u002Fsoftware\u002Ftheano\u002F)\n        \n        - [Theano 入门介绍](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F09\u002Fspeeding-up-your-neural-network-with-theano-and-the-gpu\u002F)\n        \n        - [Theano 教程](http:\u002F\u002Foutlace.com\u002FBeginner-Tutorial-Theano\u002F)\n        \n        - [优秀的 Theano 教程](http:\u002F\u002Fdeeplearning.net\u002Fsoftware\u002Ftheano\u002Ftutorial\u002F)\n        \n        - [使用 Theano 进行数字分类的逻辑回归](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Flogreg.html#logreg)\n        \n        - [使用 Theano 的多层感知器](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Fmlp.html#mlp)\n        \n        - [使用 Theano 的卷积神经网络](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Flenet.html#lenet)\n        \n        - [使用 Theano 的循环神经网络](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Frnnslu.html#rnnslu)\n        \n        - [在 Theano 中用于情感分析的 LSTM](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Flstm.html#lstm)\n        \n        - [使用 Theano 的受限玻尔兹曼机](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Frbm.html#rbm)\n        \n        - [使用 Theano 的深度信念网络](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002FDBN.html#dbn)\n        \n        - [所有代码](https:\u002F\u002Fgithub.com\u002Flisa-lab\u002FDeepLearningTutorials)\n        \n        - [深度学习实现教程——Keras 和 Lasagne](https:\u002F\u002Fgithub.com\u002Fvict0rsch\u002Fdeep_learning\u002F)\n\n    - [Torch](http:\u002F\u002Ftorch.ch\u002F)\n    \n        - [Torch 机器学习教程](http:\u002F\u002Fcode.madbits.com\u002Fwiki\u002Fdoku.php), [代码](https:\u002F\u002Fgithub.com\u002Ftorch\u002Ftutorials)\n        \n        - [Torch 入门介绍](http:\u002F\u002Fml.informatik.uni-freiburg.de\u002F_media\u002Fteaching\u002Fws1415\u002Fpresentation_dl_lect3.pdf)\n        \n        - [学习 Torch 的 GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fchetannaik\u002Flearning_torch)\n        \n        - [GitHub 上的 Awesome-Torch 仓库](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002Fawesome-torch)\n        \n        - [牛津大学使用 Torch 进行机器学习](https:\u002F\u002Fwww.cs.ox.ac.uk\u002Fpeople\u002Fnando.defreitas\u002Fmachinelearning\u002F), 
[代码](https:\u002F\u002Fgithub.com\u002Foxford-cs-ml-2015)\n        \n        - [Torch 内部结构概览](https:\u002F\u002Fapaszke.github.io\u002Ftorch-internals.html)\n        \n        - [Torch 备忘录](https:\u002F\u002Fgithub.com\u002Ftorch\u002Ftorch7\u002Fwiki\u002FCheatsheet)\n        \n        - [使用 Torch 的深度神经网络理解自然语言](http:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Funderstanding-natural-language-deep-neural-networks-using-torch\u002F)\n\n    - Caffe\n        - [使用 Caffe 和 cuDNN 进行计算机视觉领域的深度学习](https:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Fdeep-learning-computer-vision-caffe-cudnn\u002F)\n\n    - TensorFlow\n        - [官网](http:\u002F\u002Ftensorflow.org\u002F)\n        \n        - [面向初学者的 TensorFlow 示例](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples)\n        \n        - [斯坦福大学深度学习研究课程中的 TensorFlow](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs20si\u002Fsyllabus.html)\n            \n            - [GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fchiphuyen\u002Ftf-stanford-tutorials)\n            \n        - [简化的 Scikit-learn 风格 TensorFlow 接口](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fskflow)\n        \n        - [学习 TensorFlow 的 GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fchetannaik\u002Flearning_tensorflow)\n        \n        - [TensorFlow 基准测试 GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fsoumith\u002Fconvnet-benchmarks\u002Fissues\u002F66)\n        \n        - [Awesome TensorFlow 列表](https:\u002F\u002Fgithub.com\u002Fjtoy\u002Fawesome-tensorflow)\n        \n        - [TensorFlow 书籍](https:\u002F\u002Fgithub.com\u002FBinRoot\u002FTensorFlow-Book)\n        \n        - [Android 上的 TensorFlow 机器学习示例](https:\u002F\u002Fblog.mindorks.com\u002Fandroid-tensorflow-machine-learning-example-ff0e9b2654cc)\n            \n            - [GitHub 仓库](https:\u002F\u002Fgithub.com\u002FMindorksOpenSource\u002FAndroidTensorFlowMachineLearningExample)\n\n        - [使用 TensorFlow 在 Android 上创建自定义模型](https:\u002F\u002Fblog.mindorks.com\u002Fcreating-custom-model-for-android-using-tensorflow-3f963d270bfb)\n            - [GitHub 仓库](https:\u002F\u002Fgithub.com\u002FMindorksOpenSource\u002FAndroidTensorFlowMNISTExample)\n\n\u003Ca name=\"feed\" \u002F>\n\n- 前馈神经网络\n\n    - [神经网络快速入门](https:\u002F\u002Fujjwalkarn.me\u002F2016\u002F08\u002F09\u002Fquick-intro-neural-networks\u002F)\n    \n    - [从零开始实现神经网络](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F09\u002Fimplementing-a-neural-network-from-scratch\u002F)，[代码](https:\u002F\u002Fgithub.com\u002Fdennybritz\u002Fnn-from-scratch)\n    \n    - [使用Theano和GPU加速神经网络](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F09\u002Fspeeding-up-your-neural-network-with-theano-and-the-gpu\u002F)，[代码](https:\u002F\u002Fgithub.com\u002Fdennybritz\u002Fnn-theano)\n    \n    - [基础人工神经网络理论](https:\u002F\u002Ftakinginitiative.wordpress.com\u002F2008\u002F04\u002F03\u002Fbasic-neural-network-tutorial-theory\u002F)\n    \n    - [神经网络中偏置的作用](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F2480650\u002Frole-of-bias-in-neural-networks)\n    \n    - 
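[隐藏层和节点数量的选择](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F3345079\u002Festimating-the-number-of-neurons-and-number-of-layers-of-an-artificial-neural-ne)，[2](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F10565868\u002Fmulti-layer-perceptron-mlp-architecture-criteria-for-choosing-number-of-hidde?lq=1)，[3](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F9436209\u002Fhow-to-choose-number-of-hidden-layers-and-nodes-in-neural-network\u002F2#)\n    \n    - 配合上面“从零开始实现神经网络”与下面“矩阵形式的反向传播”两条链接，这里是一个两层网络前向与反向传播的极简numpy草稿（仅为示意；XOR数据、学习率与隐藏层规模均为假设）：\n    \n        ```python\n        import numpy as np\n\n        rng = np.random.default_rng(0)\n        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)\n        y = np.array([[0], [1], [1], [0]], dtype=float)\n        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)\n        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)\n\n        def sigmoid(z):\n            return 1.0 \u002F (1.0 + np.exp(-z))\n\n        for step in range(5000):\n            h = sigmoid(X @ W1 + b1)               # 前向：隐藏层\n            out = sigmoid(h @ W2 + b2)             # 前向：输出层\n            d_out = (out - y) * out * (1 - out)    # 反向：输出层梯度（平方误差）\n            d_h = (d_out @ W2.T) * h * (1 - h)     # 反向：链式法则传到隐藏层\n            W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0)\n            W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0)\n\n        print(((out - y) ** 2).mean())  # 均方误差应随训练明显下降\n        ```\n    \n    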
[隐藏层和节点数量的选择](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F3345079\u002Festimating-the-number-of-neurons-and-number-of-layers-of-an-artificial-neural-ne)，[2](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F10565868\u002Fmulti-layer-perceptron-mlp-architecture-criteria-for-choosing-number-of-hidde?lq=1)，[3](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F9436209\u002Fhow-to-choose-number-of-hidden-layers-and-nodes-in-neural-network\u002F2#)\n    \n    - [矩阵形式的反向传播](http:\u002F\u002Fsudeepraja.github.io\u002FNeural\u002F)\n    \n    - [用C++实现的人工神经网络 | AI Junkie](http:\u002F\u002Fwww.ai-junkie.com\u002Fann\u002Fevolved\u002Fnnt6.html)\n    \n    - [简单实现](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F15395835\u002Fsimple-multi-layer-neural-network-implementation)\n    \n    - [面向初学者的神经网络](http:\u002F\u002Fwww.codeproject.com\u002FArticles\u002F16419\u002FAI-Neural-Network-for-beginners-Part-of)\n    \n    - [使用神经网络进行回归与分类（幻灯片）](http:\u002F\u002Fwww.autonlab.org\u002Ftutorials\u002Fneural13.pdf)\n    \n    - [另一篇介绍](http:\u002F\u002Fwww.doc.ic.ac.uk\u002F~nd\u002Fsurprise_96\u002Fjournal\u002Fvol4\u002Fcs11\u002Freport.html)\n\n\u003Ca name=\"rnn\" \u002F>\n\n- 循环神经网络与LSTM\n    - [awesome-rnn：资源列表（GitHub仓库）](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-rnn)\n    \n    - [循环神经网络教程 第1部分](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F09\u002Frecurrent-neural-networks-tutorial-part-1-introduction-to-rnns\u002F)，[第2部分](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F09\u002Frecurrent-neural-networks-tutorial-part-2-implementing-a-language-model-rnn-with-python-numpy-and-theano\u002F)，[第3部分](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F10\u002Frecurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients\u002F)，[代码](https:\u002F\u002Fgithub.com\u002Fdennybritz\u002Frnn-tutorial-rnnlm\u002F)\n    \n    - [NLP中的RNN表示](http:\u002F\u002Fcolah.github.io\u002Fposts\u002F2014-07-NLP-RNNs-Representations\u002F)\n    \n    - [RNN的不可思议效果](http:\u002F\u002Fkarpathy.github.io\u002F2015\u002F05\u002F21\u002Frnn-effectiveness\u002F)，[Torch代码](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fchar-rnn)，[Python代码](https:\u002F\u002Fgist.github.com\u002Fkarpathy\u002Fd4dee566867f8291f086)\n    \n    - [RNN简介](http:\u002F\u002Fdeeplearning4j.org\u002Frecurrentnetwork.html)，[LSTM](http:\u002F\u002Fdeeplearning4j.org\u002Flstm.html)\n    \n    - [RNN的一个应用](http:\u002F\u002Fhackaday.com\u002F2015\u002F10\u002F15\u002F73-computer-scientists-created-a-neural-net-and-you-wont-believe-what-happened-next\u002F)\n    \n    - [优化RNN性能](http:\u002F\u002Fsvail.github.io\u002F)\n    \n    - [简单RNN](http:\u002F\u002Foutlace.com\u002FSimple-Recurrent-Neural-Network\u002F)\n    \n    - [利用RNN自动生成标题党内容](https:\u002F\u002Flarseidnes.com\u002F2015\u002F10\u002F13\u002Fauto-generating-clickbait-with-recurrent-neural-networks\u002F)\n    \n    - [使用RNN进行序列学习（幻灯片）](http:\u002F\u002Fwww.slideshare.net\u002Findicods\u002Fgeneral-sequence-learning-with-recurrent-neural-networks-for-next-ml)\n    \n    - [使用RNN进行机器翻译（论文）](http:\u002F\u002Femnlp2014.org\u002Fpapers\u002Fpdf\u002FEMNLP2014179.pdf)\n    \n    - [使用RNN生成音乐（Keras）](https:\u002F\u002Fgithub.com\u002FMattVitelli\u002FGRUV)\n    \n    - [使用RNN实时生成对话（Keras）](http:\u002F\u002Fneuralniche.com\u002Fpost\u002Ftutorial\u002F)\n    \n    - 长短期记忆网络（LSTM）\n    \n        - [理解LSTM网络](http:\u002F\u002Fcolah.github.io\u002Fposts\u002F2015-08-Understanding-LSTMs\u002F)\n        \n        - 
[LSTM详解](https:\u002F\u002Fapaszke.github.io\u002Flstm-explained.html)\n        \n        - [LSTM入门指南](http:\u002F\u002Fdeeplearning4j.org\u002Flstm.html)\n        \n        - [从零开始实现LSTM](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F10\u002Frecurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano\u002F)，[Python\u002FTheano代码](https:\u002F\u002Fgithub.com\u002Fdennybritz\u002Frnn-tutorial-gru-lstm)\n        \n        - [使用LSTM构建字符级语言模型的Torch代码](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fchar-rnn)\n        \n        - [用于Kaggle EEG检测竞赛的LSTM（Torch代码）](https:\u002F\u002Fgithub.com\u002Fapaszke\u002Fkaggle-grasp-and-lift)\n        \n        - [在Theano中用于情感分析的LSTM](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Flstm.html#lstm)\n        \n        - [深度学习用于视觉问答 | LSTM | CNN](http:\u002F\u002Favisingh599.github.io\u002Fdeeplearning\u002Fvisual-qa\u002F)，[代码](https:\u002F\u002Fgithub.com\u002Favisingh599\u002Fvisual-qa)\n        \n        - [计算机使用LSTM回复邮件 | Google](http:\u002F\u002Fgoogleresearch.blogspot.in\u002F2015\u002F11\u002Fcomputer-respond-to-this-email.html)\n        \n        - [LSTM显著提升Google语音搜索](http:\u002F\u002Fgoogleresearch.blogspot.ch\u002F2015\u002F09\u002Fgoogle-voice-search-faster-and-more.html)，[另一篇文章](http:\u002F\u002Fdeeplearning.net\u002F2015\u002F09\u002F30\u002Flong-short-term-memory-dramatically-improves-google-voice-etc-now-available-to-a-billion-users\u002F)\n        \n        - [使用Torch通过LSTM理解自然语言](http:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Funderstanding-natural-language-deep-neural-networks-using-torch\u002F)\n        \n        - [用于视觉问答的CNN+LSTM模型的Torch代码](https:\u002F\u002Fgithub.com\u002Fabhshkdz\u002Fneural-vqa)\n        \n        - [用于人类活动识别的LSTM](https:\u002F\u002Fgithub.com\u002Fguillaume-chevalier\u002FLSTM-Human-Activity-Recognition\u002F)\n    \n    - 门控循环单元（GRU）\n    \n        - [LSTM与GRU的比较](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F10\u002Frecurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano\u002F)\n    \n    - [使用序列到序列（seq2seq）RNN模型进行时间序列预测](https:\u002F\u002Fgithub.com\u002Fguillaume-chevalier\u002Fseq2seq-signal-prediction)\n\n\n\u003Ca name=\"rnn2\" \u002F>\n\n- [递归神经网络（非循环）](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FRecursive_neural_network)\n\n    - [递归神经张量网络（RNTN）](http:\u002F\u002Fdeeplearning4j.org\u002Frecursiveneuraltensornetwork.html)\n    \n    - [word2vec、DBN、RNTN用于情感分析](http:\u002F\u002Fdeeplearning4j.org\u002Fzh-sentiment_analysis_word2vec.html)\n\n\u003Ca name=\"rbm\" \u002F>\n\n- 受限玻尔兹曼机\n\n    - [RBM 初学者指南](http:\u002F\u002Fdeeplearning4j.org\u002Frestrictedboltzmannmachine.html)\n    \n    - [另一篇优秀的教程](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Frbm.html)\n    \n    - [RBM 简介](http:\u002F\u002Fblog.echen.me\u002F2011\u002F07\u002F18\u002Fintroduction-to-restricted-boltzmann-machines\u002F)\n    \n    - [Hinton 的 RBM 训练指南](https:\u002F\u002Fwww.cs.toronto.edu\u002F~hinton\u002Fabsps\u002FguideTR.pdf)\n    \n    - [R 语言中的 RBM](https:\u002F\u002Fgithub.com\u002Fzachmayer\u002Frbm)\n    \n    - [深度信念网络教程](http:\u002F\u002Fdeeplearning4j.org\u002Fdeepbeliefnetwork.html)\n    \n    - [用于情感分析的 word2vec、DBN 和 RNTN](http:\u002F\u002Fdeeplearning4j.org\u002Fzh-sentiment_analysis_word2vec.html)\n\n\u003Ca name=\"auto\" \u002F>\n\n- 自编码器：无监督学习（将目标设置为输入后应用反向传播）\n\n    - [吴恩达稀疏自编码器 PDF](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs294a\u002FsparseAutoencoder.pdf)\n    \n    - 
[深度自编码器教程](http:\u002F\u002Fdeeplearning4j.org\u002Fdeepautoencoder.html)\n    \n    - [去噪自编码器](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002FdA.html)，[Theano 代码](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Fcode\u002FdA.py)\n    \n    - [堆叠式去噪自编码器](http:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002FSdA.html#sda)\n\n\n\u003Ca name=\"cnn\" \u002F>\n\n- 卷积神经网络\n\n    - [卷积神经网络的直观解释](https:\u002F\u002Fujjwalkarn.me\u002F2016\u002F08\u002F11\u002Fintuitive-explanation-convnets\u002F)\n    \n    - [超赞的深度视觉资源列表 (GitHub)](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-deep-vision)\n    \n    - [CNN 入门](http:\u002F\u002Fdeeplearning4j.org\u002Fconvolutionalnets.html)\n    \n    - [理解用于 NLP 的 CNN](http:\u002F\u002Fwww.wildml.com\u002F2015\u002F11\u002Funderstanding-convolutional-neural-networks-for-nlp\u002F)\n    \n    - [斯坦福大学课程笔记](http:\u002F\u002Fvision.stanford.edu\u002Fteaching\u002Fcs231n\u002F)，[代码](http:\u002F\u002Fcs231n.github.io\u002F)，[GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fcs231n\u002Fcs231n.github.io)\n    \n    - [基于浏览器的 JavaScript 库，用于 CNN](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002F)\n    \n    - [使用 CNN 检测面部关键点](http:\u002F\u002Fdanielnouri.org\u002Fnotes\u002F2014\u002F12\u002F17\u002Fusing-convolutional-neural-nets-to-detect-facial-keypoints-tutorial\u002F)\n    \n    - [Yelp 使用深度学习对商家照片进行分类](http:\u002F\u002Fengineeringblog.yelp.com\u002F2015\u002F10\u002Fhow-we-use-deep-learning-to-classify-business-photos-at-yelp.html)\n    \n    - [与 Yann LeCun 的访谈 | Kaggle](http:\u002F\u002Fblog.kaggle.com\u002F2014\u002F12\u002F22\u002Fconvolutional-nets-and-cifar-10-an-interview-with-yan-lecun\u002F)\n    \n    - [可视化与理解 CNN](https:\u002F\u002Fwww.cs.nyu.edu\u002F~fergus\u002Fpapers\u002FzeilerECCV2014.pdf)\n\n\u003Ca name=\"nrl\" \u002F>\n\n- 网络表示学习\n\n    - [超赞的图嵌入资源](https:\u002F\u002Fgithub.com\u002Fbenedekrozemberczki\u002Fawesome-graph-embedding)\n    \n    - [超赞的网络嵌入资源](https:\u002F\u002Fgithub.com\u002Fchihming\u002Fawesome-network-embedding)\n    \n    - [网络表示学习论文](https:\u002F\u002Fgithub.com\u002Fthunlp)\n    \n    - [知识表示学习论文](https:\u002F\u002Fgithub.com\u002Fthunlp\u002FKRLPapers)\n    \n    - [基于图的深度学习文献](https:\u002F\u002Fgithub.com\u002Fnaganandy\u002Fgraph-based-deep-learning-literature)\n\n\u003Ca name=\"nlp\" \u002F>\n\n\n\n## 自然语言处理\n\n- [精选的语音与自然语言处理资源列表](https:\u002F\u002Fgithub.com\u002Fedobashira\u002Fspeech-language-processing)\n\n- [使用 Torch 的深度神经网络理解自然语言](http:\u002F\u002Fdevblogs.nvidia.com\u002Fparallelforall\u002Funderstanding-natural-language-deep-neural-networks-using-torch\u002F)\n\n- [tf-idf 解释](http:\u002F\u002Fmichaelerasm.us\u002Fpost\u002Ftf-idf-in-10-minutes\u002F)\n\n- [斯坦福有趣的深度学习 NLP 项目](http:\u002F\u002Fcs224d.stanford.edu\u002Freports.html)，[官网](http:\u002F\u002Fcs224d.stanford.edu\u002F)\n\n- [斯坦福 NLP 小组](https:\u002F\u002Fnlp.stanford.edu\u002F)\n\n- [从零开始的 NLP | Google 论文](https:\u002F\u002Fstatic.googleusercontent.com\u002Fmedia\u002Fresearch.google.com\u002Fen\u002Fus\u002Fpubs\u002Farchive\u002F35671.pdf)\n\n- [基于图的半监督学习应用于 NLP](http:\u002F\u002Fgraph-ssl.wdfiles.com\u002Flocal--files\u002Fblog%3A_start\u002Fgraph_ssl_acl12_tutorial_slides_final.pdf)\n\n- [词袋模型](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBag-of-words_model)\n\n    - [使用词袋模型进行文本分类](http:\u002F\u002Ffastml.com\u002Fclassifying-text-with-bag-of-words-a-tutorial\u002F)\n    \n\u003Ca name=\"topic\" \u002F>\n\n- 主题建模\n    - [主题建模维基百科](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTopic_model) \n    - 
[**普林斯顿概率主题模型 PDF**](http:\u002F\u002Fwww.cs.columbia.edu\u002F~blei\u002Fpapers\u002FBlei2012.pdf)\n\n    - [LDA 维基百科](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLatent_Dirichlet_allocation)，[LSA 维基百科](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLatent_semantic_analysis)，[概率 LSA 维基百科](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FProbabilistic_latent_semantic_analysis)\n    \n    - [什么是关于潜在狄利克雷分配 (LDA) 的好解释？](https:\u002F\u002Fwww.quora.com\u002FWhat-is-a-good-explanation-of-Latent-Dirichlet-Allocation)\n    \n    - [**LDA 简介**](http:\u002F\u002Fblog.echen.me\u002F2011\u002F08\u002F22\u002Fintroduction-to-latent-dirichlet-allocation\u002F)，[另一个好解释](http:\u002F\u002Fconfusedlanguagetech.blogspot.in\u002F2012\u002F07\u002Fjordan-boyd-graber-and-philip-resnik.html)\n    \n    - [LDA 自助餐——直观解释](http:\u002F\u002Fwww.matthewjockers.net\u002F2011\u002F09\u002F29\u002Fthe-lda-buffet-is-now-open-or-latent-dirichlet-allocation-for-english-majors\u002F)\n    \n    - [你的潜在狄利克雷分配 (LDA) 指南](https:\u002F\u002Fmedium.com\u002F@lettier\u002Fhow-does-lda-work-ill-explain-using-emoji-108abf40fa7d)\n    \n    - [LSI 和 LDA 的区别](https:\u002F\u002Fwww.quora.com\u002FWhats-the-difference-between-Latent-Semantic-Indexing-LSI-and-Latent-Dirichlet-Allocation-LDA)\n    \n    - [原始 LDA 论文](https:\u002F\u002Fwww.cs.princeton.edu\u002F~blei\u002Fpapers\u002FBleiNgJordan2003.pdf)\n    \n    - [LDA 中的 alpha 和 beta 参数](http:\u002F\u002Fdatascience.stackexchange.com\u002Fquestions\u002F199\u002Fwhat-does-the-alpha-and-beta-hyperparameters-contribute-to-in-latent-dirichlet-a)\n    \n    - [狄利克雷分布的直观解释](https:\u002F\u002Fwww.quora.com\u002FWhat-is-an-intuitive-explanation-of-the-Dirichlet-distribution)\n    \n    - [topicmodels: 一个用于拟合主题模型的 R 包](https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Ftopicmodels\u002Fvignettes\u002Ftopicmodels.pdf)\n\n    - [让主题建模变得足够简单](https:\u002F\u002Ftedunderwood.com\u002F2012\u002F04\u002F07\u002Ftopic-modeling-made-just-simple-enough\u002F)\n    \n    - [在线 LDA](http:\u002F\u002Falexminnaar.com\u002Fonline-latent-dirichlet-allocation-the-best-option-for-topic-modeling-with-large-data-sets.html)，[使用 Spark 的在线 LDA](http:\u002F\u002Falexminnaar.com\u002Fdistributed-online-latent-dirichlet-allocation-with-apache-spark.html)\n    \n    - [Scala 中的 LDA](http:\u002F\u002Falexminnaar.com\u002Flatent-dirichlet-allocation-in-scala-part-i-the-theory.html)，[第二部分](http:\u002F\u002Falexminnaar.com\u002Flatent-dirichlet-allocation-in-scala-part-ii-the-code.html)\n    \n    - [通过主题建模对 Twitter 时间线进行分割](https:\u002F\u002Falexisperrier.com\u002Fnlp\u002F2015\u002F09\u002F16\u002Fsegmentation_twitter_timelines_lda_vs_lsa.html)\n    \n    - [Twitter 关注者主题建模](http:\u002F\u002Falexperrier.github.io\u002Fjekyll\u002Fupdate\u002F2015\u002F09\u002F04\u002Ftopic-modeling-of-twitter-followers.html)\n\n    - [多语言潜在狄利克雷分配（LDA）](https:\u002F\u002Fgithub.com\u002FArtificiAI\u002FMultilingual-Latent-Dirichlet-Allocation-LDA)（[教程在此](https:\u002F\u002Fgithub.com\u002FArtificiAI\u002FMultilingual-Latent-Dirichlet-Allocation-LDA\u002Fblob\u002Fmaster\u002FMultilingual-LDA-Pipeline-Tutorial.ipynb)）\n\n    - [用于主题建模的深度信念网络](https:\u002F\u002Fgithub.com\u002Flarsmaaloee\u002Fdeep-belief-nets-for-topic-modeling)\n    - [高斯LDA：结合词嵌入的主题模型](http:\u002F\u002Fwww.cs.cmu.edu\u002F~rajarshd\u002Fpapers\u002Facl2015.pdf)\n    - Python\n        - [用IPython Notebook编写的概率主题模型系列讲义](https:\u002F\u002Fgithub.com\u002Farongdari\u002Ftopic-model-lecture-note)\n        - 
[Python中各种主题模型的实现](https:\u002F\u002Fgithub.com\u002Farongdari\u002Fpython-topic-model)\n\n\u003Ca name=\"word2vec\" \u002F>\n\n- word2vec\n\n    - [Google word2vec](https:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec)\n    \n    - [词袋模型维基](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBag-of-words_model)\n    \n    - [word2vec教程](https:\u002F\u002Frare-technologies.com\u002Fword2vec-tutorial\u002F)\n    \n    - [深入探讨Skip Gram模型](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fballison\u002Fpdf\u002Flrec_skipgrams.pdf)\n    \n    - [Skip Gram模型教程](http:\u002F\u002Falexminnaar.com\u002Fword2vec-tutorial-part-i-the-skip-gram-model.html)，[CBoW模型](http:\u002F\u002Falexminnaar.com\u002Fword2vec-tutorial-part-ii-the-continuous-bag-of-words-model.html)\n    \n    - [Kaggle Word Vectors教程（Python）](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-2-word-vectors)，[第二部分](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-3-more-fun-with-word-vectors)\n    \n    - [理解word2vec](http:\u002F\u002Frare-technologies.com\u002Fmaking-sense-of-word2vec\u002F)\n    \n    - [deeplearning4j上对word2vec的解释](http:\u002F\u002Fdeeplearning4j.org\u002Fword2vec.html)\n    \n    - [Quora上的word2vec](https:\u002F\u002Fwww.quora.com\u002FHow-does-word2vec-work)\n    \n    - [其他Quora资源](https:\u002F\u002Fwww.quora.com\u002FWhat-are-the-continuous-bag-of-words-and-skip-gram-architectures-in-laymans-terms)，[2](https:\u002F\u002Fwww.quora.com\u002FWhat-is-the-difference-between-the-Bag-of-Words-model-and-the-Continuous-Bag-of-Words-model)，[3](https:\u002F\u002Fwww.quora.com\u002FIs-skip-gram-negative-sampling-better-than-CBOW-NS-for-word2vec-If-so-why)\n    \n    - [用于情感分析的word2vec、DBN、RNTN](http:\u002F\u002Fdeeplearning4j.org\u002Fzh-sentiment_analysis_word2vec.html)\n\n- 文本聚类\n\n    - [字符串聚类的工作原理](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F8196371\u002Fhow-clustering-works-especially-string-clustering)\n    \n    - [莱文斯坦距离：衡量两个序列之间的差异](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLevenshtein_distance)\n    \n    - [基于莱文斯坦距离的文本聚类](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F21511801\u002Ftext-clustering-with-levenshtein-distances)\n\n- 文本分类\n\n    - [使用词袋模型进行文本分类](http:\u002F\u002Ffastml.com\u002Fclassifying-text-with-bag-of-words-a-tutorial\u002F)\n\n- 命名实体识别\n    \n    - [斯坦福命名实体识别器（NER）](https:\u002F\u002Fnlp.stanford.edu\u002Fsoftware\u002FCRF-NER.shtml)\n    \n    - [命名实体识别：应用与用例——Towards Data Science](https:\u002F\u002Ftowardsdatascience.com\u002Fnamed-entity-recognition-applications-and-use-cases-acdbf57d595e)\n\n- [利用NLP和强化学习进行语言学习](http:\u002F\u002Fblog.dennybritz.com\u002F2015\u002F09\u002F11\u002Freimagining-language-learning-with-nlp-and-reinforcement-learning\u002F)\n\n- [Kaggle教程：词袋模型与词向量](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-1-for-beginners-bag-of-words)，[第二部分](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-2-word-vectors)，[第三部分](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fword2vec-nlp-tutorial\u002Fdetails\u002Fpart-3-more-fun-with-word-vectors)\n\n- [如果莎士比亚会说话（NLP教程）](https:\u002F\u002Fgigadom.wordpress.com\u002F2015\u002F10\u002F02\u002Fnatural-language-processing-what-would-shakespeare-say\u002F)\n\n\u003Ca name=\"vision\" \u002F>\n\n\n\n## 计算机视觉\n\n- 
[超赞的计算机视觉项目（GitHub）](https:\u002F\u002Fgithub.com\u002Fjbhuang0604\u002Fawesome-computer-vision)\n\n- [超赞的深度视觉项目（GitHub）](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-deep-vision)\n\n\n\u003Ca name=\"svm\" \u002F>\n\n## 支持向量机\n\n- [Cross Validated 上关于 SVM 的最高票问题](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Fsvm)\n\n- [帮助我理解支持向量机！](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F3947\u002Fhelp-me-understand-support-vector-machines)\n\n- [用通俗语言解释 SVM](https:\u002F\u002Fwww.quora.com\u002FWhat-does-support-vector-machine-SVM-mean-in-laymans-terms)\n\n- [SVM 是如何工作的 | 对比](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F23391\u002Fhow-does-a-support-vector-machine-svm-work)\n\n- [SVM 教程](http:\u002F\u002Falex.smola.org\u002Fpapers\u002F2003\u002FSmoSch03b.pdf)\n\n- [SVC 实用指南](http:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~cjlin\u002Fpapers\u002Fguide\u002Fguide.pdf)，[幻灯片](http:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~cjlin\u002Ftalks\u002Ffreiburg.pdf)\n\n- [SVM 入门概述](http:\u002F\u002Fwww.statsoft.com\u002FTextbook\u002FSupport-Vector-Machines)\n\n- 对比\n\n    - [SVM > 神经网络](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F6699222\u002Fsupport-vector-machines-better-than-artificial-neural-networks-in-which-learn?rq=1)，[神经网络 > SVM](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F11632516\u002Fwhat-are-advantages-of-artificial-neural-networks-over-support-vector-machines)，[另一对比](http:\u002F\u002Fwww.svms.org\u002Fanns.html)\n    \n    - [树模型 > SVM](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F57438\u002Fwhy-is-svm-not-so-good-as-decision-tree-on-the-same-data)\n    \n    - [核逻辑回归 vs SVM](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F43996\u002Fkernel-logistic-regression-vs-svm)\n    \n    - [逻辑回归 vs SVM](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F58684\u002Fregularized-logistic-regression-and-support-vector-machine)，[2](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F95340\u002Fsvm-v-s-logistic-regression)，[3](https:\u002F\u002Fwww.quora.com\u002FSupport-Vector-Machines\u002FWhat-is-the-difference-between-Linear-SVMs-and-Logistic-Regression)\n    \n- [支持向量机中的优化算法](http:\u002F\u002Fpages.cs.wisc.edu\u002F~swright\u002Ftalks\u002Fsjw-complearning.pdf)\n\n- [从 SVM 中提取变量重要性](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F2179\u002Fvariable-importance-from-svm)\n\n- 软件\n\n    - [LIBSVM](https:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~cjlin\u002Flibsvm\u002F)\n    \n    - [R 语言中 SVM 入门](http:\u002F\u002Fcbio.ensmp.fr\u002F~jvert\u002Fsvn\u002Ftutorials\u002Fpractical\u002Fsvmbasic\u002Fsvmbasic_notes.pdf)\n    \n- 核函数\n\n    - [机器学习和 SVM 中的核函数是什么？](https:\u002F\u002Fwww.quora.com\u002FWhat-are-Kernels-in-Machine-Learning-and-SVM)\n    \n    - [SVM 中高斯核的直观理解？](https:\u002F\u002Fwww.quora.com\u002FSupport-Vector-Machines\u002FWhat-is-the-intuition-behind-Gaussian-kernel-in-SVM)\n    \n- SVM 后的概率估计\n\n    - [Platt 的 SVM 概率输出](http:\u002F\u002Fwww.csie.ntu.edu.tw\u002F~htlin\u002Fpaper\u002Fdoc\u002Fplattprob.pdf)\n    \n    - [Platt 标定维基百科](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FPlatt_scaling)\n    \n    - [为什么使用 Platt 标定？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F5196\u002Fwhy-use-platts-scaling)\n    \n    - [使用 Platt 标定和保序回归进行分类器校准](http:\u002F\u002Ffastml.com\u002Fclassifier-calibration-with-platts-scaling-and-isotonic-regression\u002F)\n\n\n\u003Ca name=\"rl\" \u002F>\n\n## 强化学习\n\n- 
[超棒的强化学习资源（GitHub）](https:\u002F\u002Fgithub.com\u002Faikorea\u002Fawesome-rl)\n\n- [强化学习教程 第一部分](http:\u002F\u002Foutlace.com\u002FReinforcement-Learning-Part-1\u002F)，[第二部分](http:\u002F\u002Foutlace.com\u002FReinforcement-Learning-Part-2\u002F)\n\n\u003Ca name=\"dt\" \u002F>\n\n## 决策树\n\n- [维基百科页面 - 丰富的信息](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FDecision_tree_learning)\n\n- [关于决策树的常见问题](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Fcart)\n\n- [树与森林简要介绍](https:\u002F\u002Fstatistical-research.com\u002Findex.php\u002F2013\u002F04\u002F29\u002Fa-brief-tour-of-the-trees-and-forests\u002F)\n\n- [R语言中的树模型](http:\u002F\u002Fwww.statmethods.net\u002Fadvstats\u002Fcart.html)\n\n- [决策树是如何工作的？](http:\u002F\u002Fwww.aihorizon.com\u002Fessays\u002Fgeneralai\u002Fdecision_trees.htm)\n\n- [决策树的弱点](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F1292\u002Fwhat-is-the-weak-side-of-decision-trees)\n\n- [详尽的解释及不同算法](http:\u002F\u002Fwww.ise.bgu.ac.il\u002Ffaculty\u002Fliorr\u002Fhbchap9.pdf)\n\n- [在构建决策树的背景下，熵和信息增益是什么？](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F1859554\u002Fwhat-is-entropy-and-information-gain)\n\n- [与决策树相关的幻灯片](http:\u002F\u002Fwww.slideshare.net\u002Fpierluca.lanzi\u002Fmachine-learning-and-data-mining-11-decision-trees)\n\n- [决策树学习算法如何处理缺失值？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F96025\u002Fhow-do-decision-tree-learning-algorithms-deal-with-missing-values-under-the-hoo)\n\n- [使用替代变量改进含有缺失值的数据集](https:\u002F\u002Fwww.salford-systems.com\u002Fvideos\u002Ftutorials\u002Ftips-and-tricks\u002Fusing-surrogates-to-improve-datasets-with-missing-values)\n\n- [好文章](https:\u002F\u002Fwww.mindtools.com\u002Fdectree.html)\n\n- [决策树几乎总是二叉树吗？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F12187\u002Fare-decision-trees-almost-always-binary-trees)\n\n- [决策树剪枝](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FPruning_(decision_trees))，[决策树嫁接](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FGrafting_(decision_trees))\n\n- [在决策树的上下文中，偏差是什么？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F6581\u002Fwhat-is-deviance-specifically-in-cart-rpart)\n\n- [用决策树发现数据背后的结构](http:\u002F\u002Fvooban.com\u002Fen\u002Ftips-articles-geek-stuff\u002Fdiscover-structure-behind-data-with-decision-trees\u002F) - 构建并绘制决策树，自动找出数据中的隐藏规则。\n\n- 不同算法的比较\n\n    - [CART与CTREE](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F12140\u002Fconditional-inference-trees-vs-traditional-decision-trees)\n    \n    - [复杂度或性能的比较](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F9979461\u002Fdifferent-decision-tree-algorithms-with-comparison-of-complexity-or-performance)\n    \n    - [CHAID与CART](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F61230\u002Fchaid-vs-crt-or-cart) ，[CART与CHAID](http:\u002F\u002Fwww.bzst.com\u002F2006\u002F10\u002Fclassification-trees-cart-vs-chaid.html)\n    \n    - [一篇关于比较的好文章](http:\u002F\u002Fwww.ftpress.com\u002Farticles\u002Farticle.aspx?p=2248639&seqNum=11)\n    \n- CART\n\n    - [递归划分维基百科](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FRecursive_partitioning)\n    \n    - [CART详解](http:\u002F\u002Fdocuments.software.dell.com\u002FStatistics\u002FTextbook\u002FClassification-and-Regression-Trees)\n    \n    - [在使用CART时，如何衡量\u002F排名“变量重要性”？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F6478\u002Fhow-to-measure-rank-variable-importance-when-using-cart-specifically-using)\n    \n    - 
[在R中修剪一棵树](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F15318409\u002Fhow-to-prune-a-tree-in-r)\n    \n    - [rpart默认是否使用多变量分割？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F4356\u002Fdoes-rpart-use-multivariate-splits-by-default)\n    \n    - [关于递归划分的常见问题](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Frpart)\n    \n- CTREE\n\n    - [R中的party包](https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fparty\u002Fparty.pdf)\n    \n    - [在R中使用ctree显示每个节点的数量](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F13772715\u002Fshow-volume-in-each-node-using-ctree-plot-in-r)\n    \n    - [如何从ctree函数中提取树结构？](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F8675664\u002Fhow-to-extract-tree-structure-from-ctree-function)\n    \n- CHAID\n\n    - [关于CHAID的维基百科文章](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCHAID)\n    \n    - [CHAID的基本介绍](https:\u002F\u002Fsmartdrill.com\u002FIntroduction-to-CHAID.html)\n    \n    - [关于CHAID的好教程](http:\u002F\u002Fwww.statsoft.com\u002FTextbook\u002FCHAID-Analysis)\n    \n- MARS\n\n    - [关于MARS的维基百科文章](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMultivariate_adaptive_regression_splines)\n    \n- 概率决策树\n\n    - [概率决策树中的贝叶斯学习](http:\u002F\u002Fwww.stats.org.uk\u002Fbayesian\u002FJordan.pdf)\n    \n    - [概率树研究论文](http:\u002F\u002Fpeople.stern.nyu.edu\u002Fadamodar\u002Fpdfiles\u002Fpapers\u002Fprobabilistic.pdf)\n\n\u003Ca name=\"rf\" \u002F>\n\n## 随机森林 \u002F 装袋法\n\n- [超棒的随机森林（GitHub）](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-random-forest)\n\n- [实践中如何调整随机森林的参数？](https:\u002F\u002Fwww.kaggle.com\u002Fforums\u002Ff\u002F15\u002Fkaggle-forum\u002Ft\u002F4092\u002Fhow-to-tune-rf-parameters-in-practice)\n\n- [随机森林中变量重要性的度量方法](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F12605\u002Fmeasures-of-variable-importance-in-random-forests)\n\n- [比较两个不同随机森林模型的R²](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F13869\u002Fcompare-r-squared-from-two-different-random-forest-models)\n\n- [OOB估计详解 | 随机森林 vs LDA](https:\u002F\u002Fstat.ethz.ch\u002Feducation\u002Fsemesters\u002Fss2012\u002Fams\u002Fslides\u002Fv10.2.pdf)\n\n- [利用预测误差曲线评估随机森林在生存分析中的表现](https:\u002F\u002Fwww.jstatsoft.org\u002Findex.php\u002Fjss\u002Farticle\u002Fview\u002Fv050i11)\n\n- [为什么随机森林不能处理预测变量中的缺失值？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F98953\u002Fwhy-doesnt-random-forest-handle-missing-values-in-predictors)\n\n- [如何在R中构建包含缺失值（NA）的随机森林？](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F8370455\u002Fhow-to-build-random-forests-in-r-with-missing-na-values)\n\n- [关于随机森林的常见问题](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Frandom-forest)，[更多常见问题](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Frandom-forest)\n\n- [从随机森林中获取知识](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F21152\u002Fobtaining-knowledge-from-a-random-forest)\n\n- [关于R实现的一些问题](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F20537186\u002Fgetting-predictions-after-rfimpute)，[2](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F81609\u002Fwhether-preprocessing-is-needed-before-prediction-using-finalmodel-of-randomfore)，[3](http:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F17059432\u002Frandom-forest-package-in-r-shows-error-during-prediction-if-there-are-new-fact)\n\n\u003Ca name=\"gbm\" \u002F>\n\n## 提升算法\n\n- 
[提升算法以获得更好的预测](http:\u002F\u002Fwww.datasciencecentral.com\u002Fprofiles\u002Fblogs\u002Fboosting-algorithms-for-better-predictions)\n\n- [提升算法维基页面](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBoosting_(machine_learning))\n\n- [提升树简介 | 陈天奇](https:\u002F\u002Fhomes.cs.washington.edu\u002F~tqchen\u002Fpdf\u002FBoostedTree.pdf)\n\n- 梯度提升机\n\n    - [梯度提升维基](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FGradient_boosting)\n    \n    - [R语言中GBM参数设置指南](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F25748\u002Fwhat-are-some-useful-guidelines-for-gbm-parameters), [参数设置策略](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F35984\u002Fstrategy-to-set-the-gbm-parameters)\n    \n    - [交互深度的含义](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F16501\u002Fwhat-does-interaction-depth-mean-in-gbm)\n    \n    - [R语言中GBM的n.minobsinnode参数的作用](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F30645\u002Frole-of-n-minobsinnode-parameter-of-gbm-in-r)\n    \n    - [R语言中的GBM](http:\u002F\u002Fwww.slideshare.net\u002Fmark_landry\u002Fgbm-package-in-r)\n    \n    - [关于GBM的常见问题](http:\u002F\u002Fstats.stackexchange.com\u002Ftags\u002Fgbm\u002Fhot)\n    \n    - [GBM与XGBoost](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fhiggs-boson\u002Fforums\u002Ft\u002F9497\u002Fr-s-gbm-vs-python-s-xgboost)\n\n- XGBoost\n\n    - [Kaggle上的XGBoost调参](https:\u002F\u002Fwww.kaggle.com\u002Fkhozzy\u002Frossmann-store-sales\u002Fxgboost-parameter-tuning-template\u002Flog)\n    \n    - [XGBoost与GBM的比较](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fotto-group-product-classification-challenge\u002Fforums\u002Ft\u002F13012\u002Fquestion-to-experienced-kagglers-and-anyone-who-wants-to-take-a-shot\u002F68296#post68296)\n    \n    - [XGBoost调查问卷](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fhiggs-boson\u002Fforums\u002Ft\u002F10335\u002Fxgboost-post-competition-survey)\n    \n    - [Python实用XGBoost在线课程（免费）](http:\u002F\u002Feducation.parrotprediction.teachable.com\u002Fcourses\u002Fpractical-xgboost-in-python)\n    \n- AdaBoost\n\n    - [AdaBoost维基](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FAdaBoost), [Python代码](https:\u002F\u002Fgist.github.com\u002Ftristanwietsma\u002F5486024)\n    \n    - [AdaBoost稀疏输入支持](http:\u002F\u002Fhamzehal.blogspot.com\u002F2014\u002F06\u002Fadaboost-sparse-input-support.html)\n    \n    - [adaBag R包](https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fadabag\u002Fadabag.pdf)\n    \n    - [教程](http:\u002F\u002Fmath.mit.edu\u002F~rothvoss\u002F18.304.3PM\u002FPresentations\u002F1-Eric-Boosting304FinalRpdf.pdf)\n\n- CatBoost\n\n    - [CatBoost文档](https:\u002F\u002Fcatboost.ai\u002Fdocs\u002F)\n\n    - [基准测试](https:\u002F\u002Fcatboost.ai\u002F#benchmark)\n\n    - [教程](https:\u002F\u002Fgithub.com\u002Fcatboost\u002Ftutorials)\n\n    - [GitHub项目](https:\u002F\u002Fgithub.com\u002Fcatboost)\n\n    - [CatBoost vs Light GBM vs XGBoost](https:\u002F\u002Ftowardsdatascience.com\u002Fcatboost-vs-light-gbm-vs-xgboost-5f93620723db)\n\n\u003Ca name=\"ensem\" \u002F>\n\n## 集成学习\n\n- [集成学习维基文章](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEnsemble_learning)\n\n- [Kaggle集成学习指南](http:\u002F\u002Fmlwave.com\u002Fkaggle-ensembling-guide\u002F)\n\n- [简单集成的力量](http:\u002F\u002Fwww.overkillanalytics.net\u002Fmore-is-always-better-the-power-of-simple-ensembles\u002F)\n\n- 
[集成学习简介](http:\u002F\u002Fmachine-learning.martinsewell.com\u002Fensembles\u002F)\n\n- [集成学习论文](http:\u002F\u002Fcs.nju.edu.cn\u002Fzhouzh\u002Fzhouzh.files\u002Fpublication\u002FspringerEBR09.pdf)\n\n- [用R语言集成模型](http:\u002F\u002Famunategui.github.io\u002Fblending-models\u002F), [在R中集成回归模型](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F26790\u002Fensembling-regression-models), [R中集成学习简介](http:\u002F\u002Fwww.vikparuchuri.com\u002Fblog\u002Fintro-to-ensemble-learning-in-r\u002F)\n\n- [用caret集成模型](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F27361\u002Fstacking-ensembling-models-with-caret)\n\n- [自助法、提升法和堆叠法](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F18891\u002Fbagging-boosting-and-stacking-in-machine-learning)\n\n- [优质资源 | Kaggle非洲土壤属性预测](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fafsis-soil-properties\u002Fforums\u002Ft\u002F10391\u002Fbest-ensemble-references)\n\n- [提升法与自助法](http:\u002F\u002Fwww.chioka.in\u002Fwhich-is-better-boosting-or-bagging\u002F)\n\n- [学习如何实现集成方法的资源](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F32703\u002Fresources-for-learning-how-to-implement-ensemble-methods)\n\n- [分类器在集成分类器中是如何合并的？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F21502\u002Fhow-are-classifications-merged-in-an-ensemble-classifier)\n\n\u003Ca name=\"stack\" \u002F>\n\n## 堆叠模型\n\n- [堆叠、混合与堆叠泛化](http:\u002F\u002Fwww.chioka.in\u002Fstacking-blending-and-stacked-generalization\u002F)\n\n- [堆叠泛化（堆叠）](http:\u002F\u002Fmachine-learning.martinsewell.com\u002Fensembles\u002Fstacking\u002F)\n\n- [堆叠泛化：何时有效？](http:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F97-2\u002F011.pdf)\n\n- [堆叠泛化论文](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.56.1533&rep=rep1&type=pdf)\n\n\u003Ca name=\"vc\" \u002F>\n\n## Vapnik–Chervonenkis维度\n\n- [VC维度维基文章](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FVC_dimension)\n\n- [VC维度的直观解释](https:\u002F\u002Fwww.quora.com\u002FExplain-VC-dimension-and-shattering-in-lucid-Way)\n\n- [解释VC维度的视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=puDzy2XmR5c)\n\n- [VC维度简介](http:\u002F\u002Fwww.svms.org\u002Fvc-dimension\u002F)\n\n- [关于VC维度的常见问题](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002Ftagged\u002Fvc-dimension)\n\n- [集成技术会增加VC维度吗？](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F78076\u002Fdo-ensemble-techniques-increase-vc-dimension)\n\n\n\u003Ca name=\"bayes\" \u002F>\n\n## 贝叶斯机器学习\n\n- [黑客的贝叶斯方法（使用pyMC）](https:\u002F\u002Fgithub.com\u002FCamDavidsonPilon\u002FProbabilistic-Programming-and-Bayesian-Methods-for-Hackers)\n\n- [所有的机器学习都应该采用贝叶斯方法吗？](http:\u002F\u002Fvideolectures.net\u002Fbark08_ghahramani_samlbb\u002F)\n\n- [机器学习中的贝叶斯优化教程](http:\u002F\u002Fwww.iro.umontreal.ca\u002F~bengioy\u002Fcifar\u002FNCAP2014-summerschool\u002Fslides\u002FRyan_adams_140814_bayesopt_ncap.pdf)\n\n- [贝叶斯推理与深度学习](http:\u002F\u002Fblog.shakirm.com\u002F2015\u002F10\u002Fbayesian-reasoning-and-deep-learning\u002F), [幻灯片](http:\u002F\u002Fblog.shakirm.com\u002Fwp-content\u002Fuploads\u002F2015\u002F10\u002FBayes_Deep.pdf)\n\n- [轻松理解贝叶斯统计](http:\u002F\u002Fgreenteapress.com\u002Fwp\u002Fthink-bayes\u002F)\n\n- [Python中的卡尔曼滤波器和贝叶斯滤波器](https:\u002F\u002Fgithub.com\u002Frlabbe\u002FKalman-and-Bayesian-Filters-in-Python)\n\n- [马尔可夫链维基页面](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMarkov_chain)\n\n\n\u003Ca name=\"semi\" \u002F>\n\n## 半监督学习\n\n- [维基百科关于半监督学习的条目](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FSemi-supervised_learning)\n\n- 
[半监督学习教程](http:\u002F\u002Fpages.cs.wisc.edu\u002F~jerryzhu\u002Fpub\u002Fsslicml07.pdf)\n\n- [面向自然语言处理的基于图的半监督学习](http:\u002F\u002Fgraph-ssl.wdfiles.com\u002Flocal--files\u002Fblog%3A_start\u002Fgraph_ssl_acl12_tutorial_slides_final.pdf)\n\n- [分类学](http:\u002F\u002Fis.tuebingen.mpg.de\u002Ffileadmin\u002Fuser_upload\u002Ffiles\u002Fpublications\u002Ftaxo_[0].pdf)\n\n- [Weka视频教程](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=sWxcIjZFGNM)\n\n- [无监督、有监督与半监督学习](http:\u002F\u002Fstats.stackexchange.com\u002Fquestions\u002F517\u002Funsupervised-supervised-and-semi-supervised-learning)\n\n- [研究论文 1](http:\u002F\u002Fmlg.eng.cam.ac.uk\u002Fzoubin\u002Fpapers\u002Fzglactive.pdf)、[2](http:\u002F\u002Fmlg.eng.cam.ac.uk\u002Fzoubin\u002Fpapers\u002Fzgl.pdf)、[3](http:\u002F\u002Ficml.cc\u002F2012\u002Fpapers\u002F616.pdf)\n\n\n\u003Ca name=\"opt\" \u002F>\n\n## 优化\n\n- [使用R和二次规划进行均值-方差投资组合优化](http:\u002F\u002Fwww.wdiam.com\u002F2012\u002F06\u002F10\u002Fmean-variance-portfolio-optimization-with-R-and-quadratic-programming\u002F?utm_content=buffer04c12&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer)\n\n- [稀疏优化与机器学习算法](http:\u002F\u002Fwww.ima.umn.edu\u002F2011-2012\u002FW3.26-30.12\u002Factivities\u002FWright-Steve\u002Fsjw-ima12)\n\n- [机器学习中的优化算法](http:\u002F\u002Fpages.cs.wisc.edu\u002F~swright\u002Fnips2010\u002Fsjw-nips10.pdf)、[视频讲座](http:\u002F\u002Fvideolectures.net\u002Fnips2010_wright_oaml\u002F)\n\n- [数据分析中的优化算法](http:\u002F\u002Fwww.birs.ca\u002Fworkshops\u002F2011\u002F11w2035\u002Ffiles\u002FWright.pdf)\n\n- [优化视频讲座](http:\u002F\u002Fvideolectures.net\u002Fstephen_j_wright\u002F)\n\n- [支持向量机中的优化算法](http:\u002F\u002Fpages.cs.wisc.edu\u002F~swright\u002Ftalks\u002Fsjw-complearning.pdf)\n\n- [优化与机器学习研究的相互作用](http:\u002F\u002Fjmlr.org\u002Fpapers\u002Fvolume7\u002FMLOPT-intro06a\u002FMLOPT-intro06a.pdf)\n\n- [Hyperopt教程：用于优化神经网络超参数](http:\u002F\u002Fvooban.com\u002Fen\u002Ftips-articles-geek-stuff\u002Fhyperopt-tutorial-for-optimizing-neural-networks-hyperparameters\u002F)\n\n\n\u003Ca name=\"other\" \u002F>\n\n## 其他教程\n\n- 如需使用R的数据科学教程合集，请参阅[此列表](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FDataScienceR)。\n\n- 如需使用Python的数据科学教程合集，请参阅[此列表](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FDataSciencePython)。","# Machine-Learning-Tutorials 快速上手指南\n\n**注意**：`Machine-Learning-Tutorials` 并非一个需要安装运行的软件库或框架，而是一个**精选的学习资源清单（Awesome List）**。它汇集了机器学习与深度学习领域的教程、文章、课程视频、博客及面试资料。因此，本指南侧重于如何高效利用该仓库进行学习，而非软件安装。\n\n## 环境准备\n\n由于本仓库主要提供链接和文档索引，无需特定的系统环境或复杂的依赖安装。你只需要具备以下基础条件即可开始学习：\n\n*   **操作系统**：Windows, macOS 或 Linux 均可。\n*   **浏览器**：现代浏览器（推荐 Chrome, Edge 或 Firefox）用于访问资源链接。\n*   **前置知识**：\n    *   基础编程能力（推荐 **Python**，部分资源包含 **R**）。\n    *   基础数学知识（线性代数、概率论、微积分）。\n*   **可选工具**（用于实践仓库中推荐的代码教程）：\n    *   Python 3.8+\n    *   Jupyter Notebook \u002F JupyterLab\n    *   核心数据科学库：`numpy`, `pandas`, `matplotlib`, `scikit-learn`, `tensorflow` 或 `pytorch`。\n\n## 获取与浏览步骤\n\n你无需通过包管理器安装此项目，直接通过 Git 克隆或在线浏览即可。\n\n### 方法一：在线浏览（推荐）\n直接访问 GitHub 仓库页面查看目录结构：\n[https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FMachine-Learning-Tutorials](https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FMachine-Learning-Tutorials)\n\n### 方法二：本地克隆\n如果你希望离线查看或搜索内容，可以使用以下命令将仓库克隆到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fujjwalkarn\u002FMachine-Learning-Tutorials.git\ncd Machine-Learning-Tutorials\n```\n\n*国内加速提示*：如果访问 GitHub 速度较慢，可使用国内镜像源克隆（需确保镜像源同步正常）：\n```bash\ngit clone https:\u002F\u002Fgitee.com\u002Fmirrors\u002FMachine-Learning-Tutorials.git\n```\n*(注：若上述镜像不存在，建议使用科学上网工具或直接在线浏览)*\n\n## 
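本地快速检索（可选）\n\n克隆完成后，整个资源清单就是一份纯文本的 README.md，可以直接用命令行工具按关键词定位条目。下面是编者补充的一个最小示例（非原仓库内容；假设你已按“方法二”克隆仓库，且系统中可用 grep，macOS\u002FLinux 一般自带，Windows 可在 Git Bash 等环境中使用）：\n\n```bash\n# 在仓库根目录下按关键词检索条目（-i 忽略大小写，-n 显示行号；关键词 word2vec 仅作演示）\ngrep -in word2vec README.md\n\n# 列出所有二级标题，快速浏览章节结构\ngrep -n \"^## \" README.md\n```\n\n命中条目较多时，可将输出交给 less 分页查看，例如 `grep -in lstm README.md | less`。\n\n## 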
基本使用指南\n\n本仓库的核心价值在于其**分类清晰的目录结构**。请按照以下步骤高效利用资源：\n\n### 1. 确定学习路径\n打开根目录下的 `README.md` 文件，查看 **Contents** 部分。根据你当前的需求选择对应的章节：\n\n*   **零基础入门**：跳转至 `[Introduction](#general)`，推荐从吴恩达（Andrew Ng）的斯坦福课程或《An Introduction to Statistical Learning》开始。\n*   **备战面试**：跳转至 `[Interview Resources](#interview)`，查看常见的机器学习面试题及解答。\n*   **特定算法学习**：例如想学习卷积神经网络，直接跳转到 `[Convolutional Neural Nets](#cnn)`；想学习自然语言处理，查看 `[Natural Language Processing](#nlp)`。\n*   **速查手册**：需要公式或概念速查时，参考 `[Cheat Sheets](#cs)`。\n\n### 2. 实践代码示例\n仓库中许多链接指向具体的代码实现（如 GitHub 上的 Python\u002FR 教程）。以下是一个典型的基于仓库推荐资源进行实践的通用流程（以 Python 为例）：\n\n假设你在 `[Deep Learning](#deep)` 章节找到了一个关于神经网络的教程链接，并决定在本地运行相关代码：\n\n**步骤 A: 创建虚拟环境**\n```bash\npython -m venv ml_env\nsource ml_env\u002Fbin\u002Factivate  # Windows 用户请使用: ml_env\\Scripts\\activate\n```\n\n**步骤 B: 安装通用数据科学依赖**\n大多数教程需要以下基础库：\n```bash\npip install numpy pandas matplotlib scikit-learn jupyter\n# 若涉及深度学习，根据教程要求安装：\n# pip install tensorflow  或  pip install torch torchvision\n```\n\n**步骤 C: 运行教程代码**\n下载教程提供的 `.ipynb` (Jupyter Notebook) 或 `.py` 文件，并在本地启动：\n```bash\njupyter notebook tutorial_name.ipynb\n```\n\n### 3. 利用博客与社区资源\n仓库中的 `[Useful Blogs](#blogs)` 和 `[Resources on Quora](#quora)` 部分提供了大量深度文章。建议将这些高质量博客（如 Andrej Karpathy's Blog, Colah's Blog）加入书签，作为深入理解算法原理的补充阅读材料。\n\n---\n**提示**：该仓库会持续更新，建议定期 `git pull` 同步最新资源，或关注其关联的 Python (`DataSciencePython`) 和 R (`DataScienceR`) 专用教程列表以获取更针对性的代码示例。","某初创公司的算法工程师小李需要在两周内为电商项目构建一个商品推荐原型，但他对从基础统计到深度学习的全栈知识体系尚不熟练。\n\n### 没有 Machine-Learning-Tutorials 时\n- **资源检索低效**：在谷歌、知乎和各类博客间反复跳转搜索“逻辑回归”或“LSTM”教程，大量时间浪费在筛选低质量内容上。\n- **知识体系碎片化**：学到的概念零散不成系统，难以理清从传统机器学习（如随机森林）到现代深度学习（如图神经网络）的技术演进脉络。\n- **实战落地困难**：缺乏针对特定框架（如 TensorFlow\u002FPyTorch）的权威指南和速查表（Cheat Sheets），代码实现时频繁报错且无处查证。\n- **面试准备盲目**：面对技术面试不知所措，找不到涵盖核心算法原理与常见考题的系统性复习清单。\n\n### 使用 Machine-Learning-Tutorials 后\n- **一站式精准获取**：直接通过分类目录定位到“Logistic Regression”或“Recurrent Neural Nets”板块，即刻获得经过社区验证的高质量教程与文章。\n- **结构化学习路径**：依托从统计学基础到集成学习（Stacking\u002FBoosting）的完整大纲，快速构建起逻辑严密的知识树，避免学习盲区。\n- **开发效率倍增**：利用提供的速查表和框架专项资源，迅速解决模型验证（如交叉验证）和代码实现难题，大幅缩短原型开发周期。\n- **备考有的放矢**：直接使用专门的“Interview Resources”板块，针对性地复习高频算法考点与经典面试题，提升求职竞争力。\n\nMachine-Learning-Tutorials 将分散的全球优质资源聚合为结构化的知识地图，帮助开发者从混乱的信息海洋中解脱，专注于算法创新与工程落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fujjwalkarn_Machine-Learning-Tutorials_453bf737.png","ujjwalkarn","Ujjwal Karn","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fujjwalkarn_a8820a4b.png",null,"@facebook","San Francisco","ujjwalkarn.me","https:\u002F\u002Fgithub.com\u002Fujjwalkarn",17688,3983,"2026-04-03T12:16:22","CC0-1.0",1,"","未说明",{"notes":91,"python":89,"dependencies":92},"该仓库是一个机器学习与深度学习教程、文章及资源的精选列表，并非可执行的软件工具或代码库，因此没有特定的操作系统、GPU、内存、Python 版本或依赖库要求。用户可根据列表中链接的具体教程内容自行配置相应环境。",[],[13],[95,96,97,98,99,100,101,102,103,104,105],"deep-learning-tutorial","machine-learning","machinelearning","deeplearning","neural-network","neural-networks","deep-neural-networks","awesome-list","awesome","list","deep-learning","2026-03-27T02:49:30.150509","2026-04-06T06:54:54.629789",[],[]]