[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-memoakten--ai-resources":3,"tool-memoakten--ai-resources":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,2,"2026-04-10T11:13:16",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 
等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75370,"2026-04-11T11:15:34",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":76,"owner_website":81,"owner_url":82,"languages":80,"stars":83,"forks":84,"last_commit_at":85,"license":86,"difficulty_score":29,"env_os":87,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":92,"github_topics":80,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":22,"created_at":93,"updated_at":94,"faqs":95,"releases":96},6618,"memoakten\u002Fai-resources","ai-resources","Selection of resources to learn Artificial Intelligence \u002F Machine Learning \u002F Statistical Inference \u002F Deep Learning \u002F Reinforcement Learning","ai-resources 是一份精心整理的人工智能学习资源清单，涵盖机器学习、统计推断、深度学习及强化学习等核心领域。它主要解决了初学者在面对庞大且复杂的 AI 知识体系时，难以筛选高质量学习资料以及缺乏系统性学习路径的痛点。\n\n这份资源特别适合没有计算机科学背景但希望深入理解 AI 算法与数学原理的开发者、创意编码艺术家及自学者。与其他仅罗列链接的清单不同，ai-resources 的独特之处在于所有收录内容均经过作者亲自研习与验证，并附带了宝贵的个人心得与建议。它强调“夯实基础”的重要性，不仅提供进阶教程，还包含了线性代数、概率统计和微积分等必要的数学前置课程，帮助学习者克服陡峭的学习曲线。\n\n此外，该清单鼓励通过不同视角的重复学习来建立直觉，特别推荐了面向创意社区的资源（如 ml4a），旨在引导用户从零基础逐步成长至能够独立阅读和理解前沿学术论文的水平。无论你是想透彻掌握背后的数学逻辑，还是仅需概念层面的认知，都能从中找到适合的学习阶段指引。","*This list can be found on github and medium:  \nhttps:\u002F\u002Fgithub.com\u002Fmemo\u002Fai-resources  \nhttps:\u002F\u002Fmedium.com\u002F@memoakten\u002Fselection-of-resources-to-learn-artificial-intelligence-machine-learning-statistical-inference-23bc56ba655*\n\n***Update April 2017**: It’s been almost a year since I posted this list of resources, and over the year there’s been an explosion of articles, videos, books, tutorials etc on the subject — even an explosion of ‘lists of resources’ such as this one. It’s impossible for me to keep this up to date. However, the one resource I would like to add is https:\u002F\u002Fml4a.github.io\u002F (https:\u002F\u002Fgithub.com\u002Fml4a) led by Gene Kogan. It’s specifically aimed at artists and the creative coding community.*\n\n# Introduction\nThis is a very incomplete and subjective selection of resources to learn about the algorithms and maths of Artificial Intelligence (AI) \u002F Machine Learning (ML) \u002F Statistical Inference (SI) \u002F Deep Learning (DL) \u002F Reinforcement Learning (RL). It is aimed at beginners (those without Computer Science background and not knowing anything about these subjects) and hopes to take them to quite advanced levels (able to read and understand DL papers). 
It is not an exhaustive list and only contains some of the learning materials *that I have personally completed* so that I can include brief personal comments on them. It is also by no means the *best* path to follow (nowadays most MOOCs have full paths all the way from basic statistics and linear algebra to ML\u002FDL). But this is the path I took and in a sense it's a partial documentation of my personal journey into DL (actually I bounced around all of these back and forth like crazy). As someone who has no formal background in Computer Science (but has been programming for many years), the language, notation and concepts of ML\u002FSI\u002FDL and even CS were completely alien to me, and the learning curve was not only steep, but vertical, treacherous and slippery like ice.\n\nA lot of the resources below are actually not for DL but more comprehensive ML\u002FSI. DL is mostly just tweaks on top of older techniques, so once you have a solid foundation in ML\u002FSI it makes a lot more sense. If you go through the video lectures below (including advanced ones), you'll be able to pick up current DL developments directly from the published papers.\n\nIf you really want to understand AI\u002FML\u002FSI\u002FDL\u002FRL in depth with all the maths, you need a good understanding of linear algebra (vectors & matrices), probability and statistics (which is more complex than it sounds), and calculus (mainly multivariate differential calculus, which is often simpler than it sounds). I've included lectures for these too. You can't cut corners. Take the time and study as much of the below as you can, from the beginning. Strong foundations are crucial. I often started watching one lecture, 5 minutes in I realized I didn't understand anything so went back to watch another lecture which covered slightly more fundamental topics, 5 minutes in I realized I still didn't understand anything so went back to watch another lecture which covered even more fundamental topics, etc. until I went back 10 lectures. It has been depressing at times (like trying to climb vertical, treacherous, slippery walls of ice without the right tools).\n\nIf you just want to *use* the algorithms without necessarily understanding or delving into the maths, or just want to understand the algorithms at a high conceptual level, that's perfectly fine too. Hopefully my comments below will make it clear what's what. \n  \n\n# Tips\nThere's a lot of overlap in the lectures below. That's a **good** thing. Don't skip things because you've already read or seen them elsewhere. If you're trying to learn and *understand* something which is potentially quite complicated, having different people explain the same thing to you in different ways is very useful and often gives insight or intuition you might not otherwise find.\n\nIf there are sections which you are 100% comfortable with, then you could watch those sections at 1.25x, 1.5x, or even 2.0x speed just to see what's going on, and then switch back to 1.0x speed once you encounter new material or interesting new angles on the same material.\n\n---\n# Video Lectures & Workshops\nThese are one-off video lectures (~1 hour) or workshops (~2-3 hours) that give overviews, intuition or advanced crash courses. You won't learn much about the depths of how things work, but will at least probably understand what things are, what they mean and what you can do with them. These are usually a good intro before you dive into the heavier MOOCs. 
\n\n### Introductory Summaries\n\n**Deep Learning by Yann LeCun and Yoshua Bengio @ NIPS 2015**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259574  \nQuite up-to-date overview of what DL is, how it works (at a very high level) and latest developments, from the grandmasters Yoshua Bengio and Yann LeCun. Bit of an ad for their respective research labs. It's not very technical and doesn't require much maths. Some slides do get a little technical and in those cases if you are familiar with the usual linear algebra \u002F calculus etc you'll get more out of it. If you know nothing (or very little) about AI\u002FML\u002FDL this course will probably be useful for you and give you an idea of what DL is. If you know ML quite well but not DL then this might be more useful for you. \n\n**The Unreasonable Effectiveness of Deep Learning by Yann LeCun 2014**  \nhttp:\u002F\u002Fvideolectures.net\u002Fsahd2014_lecun_deep_learning\u002F  \nFamous lecture by Yann LeCun, godfather of deep convolutional neural networks (CNN). Brief intro to DL, why it's awesome, and then mainly focuses on CNNs. Very similar to above. Could probably be skipped if you watch the above video. \n\n**Deep Learning RNNaissance by Juergen Schmidhuber @ NYC ML Meetup 2014**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6bOMf9zr7N8  \nAn alternate history of DL from Juergen Schmidhuber, another grandmaster of DL. He goes into more detailed history of the algorithms and where they come from, and then focuses on Recurrent Neural Networks (RNN), which his lab made many innovations on (including LSTM). Bit of an ad for his own research lab(s). This video is a bit more advanced than the above ones. Arguably more interesting too. \n\n**Basics of Computational Reinforcement Learning by Michael Littman @ RLDM 2015**  \nhttp:\u002F\u002Fvideolectures.net\u002Frldm2015_littman_computational_reinforcement  \nKind of an overview intro to RL. Some experience with MDPs etc would be useful but not essential. Michael Littman is one of the old school stars of RL and a lot of fun.\n\n### Advanced Crash Courses\n\n**Deep Learning by Ruslan Salakhutdinov @ KDD 2014**  \nhttp:\u002F\u002Fvideolectures.net\u002Fkdd2014_salakhutdinov_deep_learning  \nOverview of DL including DBN, RBM, PGM etc which are not as popular these days. Very theoretical, dense and mathematical. Maybe not that useful for beginners. Salakhutdinov is another major player in DL.\n\n**Introduction to Reinforcement Learning with Function Approximation by Rich Sutton @ NIPS 2015**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259577  \nAnother intro to RL but more technical and theoretical. Rich Sutton is the old school king of RL. \n\n**Deep Reinforcement Learning by David Silver @ RLDM 2015**    \nhttp:\u002F\u002Fvideolectures.net\u002Frldm2015_silver_reinforcement_learning  \nAdvanced intro to Deep RL as used by Deepmind on the Atari games and AlphaGo. Quite technical and requires decent understanding of RL, TD learning and Q-Learning etc. (see RL courses below). David Silver is the new school king of RL and superstar of Deepmind's AlphaGo (which uses Deep RL).\n\n**Monte Carlo Inference Methods by Ian Murray @ NIPS 2015**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259575  \nGood introduction and overview of sampling \u002F monte carlo based methods. Not essential for a lot of DL, but good side knowledge to have. 
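\n\nTo make \"sampling \u002F monte carlo based methods\" concrete before watching, here is a minimal toy sketch in Python (my own illustration, not taken from the lecture): estimate pi by averaging over random samples, which is the one idea underneath all Monte Carlo inference.\n\n```python\n# Toy illustration (not from the lecture): Monte Carlo estimate of pi.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 1_000_000\n# draw n points uniformly in the unit square\nx, y = rng.random(n), rng.random(n)\n# the fraction landing inside the quarter circle approximates pi over 4;\n# .mean() of the boolean hits is just a sample average\npi_estimate = 4 * ((x**2 + y**2) <= 1.0).mean()\nprint(pi_estimate)  # ~3.14, with the error shrinking like 1 over sqrt(n)\n```\nThe fancier MCMC methods in the lecture do the same thing, estimating expectations by averaging samples, just with cleverer ways of drawing them.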
\n\n**How to Grow a Mind: Statistics, Structure and Abstraction by Josh Tenenbaum @ AAAI 2012**  \nhttp:\u002F\u002Fvideolectures.net\u002Faaai2012_tenenbaum_grow_mind\u002F  \nCompletely unrelated to current DL and takes a very different approach: Bayesian Hierarchical Models. Not much success in real world yet, but I'm still a fan as the questions and problems they're looking at feel a lot more applicable to real world than DL (e.g. one-shot learning and transfer learning, though Deepmind is looking at this with DL as well now). \n\n**Two architectures for one-shot learning by Josh Tenenbaum @ NIPS 2013**  \nhttp:\u002F\u002Fvideolectures.net\u002Fnipsworkshops2013_tenenbaum_learning  \nSimilar to above but slightly more recent. \n\n**Optimal and Suboptimal Control in Brain and Behavior by Nathaniel Daw @ NIPS 2015**    \nhttp:\u002F\u002Fvideolectures.net\u002Frldm2015_daw_brain_and_behavior  \nQuite unrelated to DL, looks at human learning - combined with research from psychology and neuroscience - through the computational lens of RL. Requires decent understanding of RL. \n\n\n**Lots more one-off video lectures at:**  \nhttp:\u002F\u002Fvideolectures.net\u002FTop\u002FComputer_Science\u002FArtificial_Intelligence  \nhttp:\u002F\u002Fvideolectures.net\u002FTop\u002FComputer_Science\u002FMachine_Learning\u002F  \n\n---\n\n# Massive Open Online Courses (MOOC)\nThese are concentrated long-term courses consisting of many video lectures. Ordered very roughly in the order that I recommend they are watched. \n\n\n## Foundation \u002F Maths\nIf you want to *understand the maths* of ML\u002FSI\u002FDL then these are crucial. If you don't want to understand the maths, but only want to understand the *concepts* then you could probably skip these and go straight to the *introductory* ML courses. However, this is also where some of the fundamental terminology is defined (prior, conditional, expected value, derivative, vector, matrix etc). So it helps to at least know what these things mean.\n\nInstead of going through all of these now, you could just watch some of the basic lessons first to help you understand the fundamentals. And then come back to some of the more advanced lessons if and when you encounter them. E.g. it's quite probable that you'll never encounter a Hessian matrix, or require eigenvectors or calculate the determinant of a matrix by hand. And only if and when you do, then you could come back and watch the relevant lessons. You can also skip proofs if you're short on time, but they do help you understand better. Try not to be impatient. \n\nKhan is a superhero and will make you understand things you never knew you could. \n\n\n**Khan Academy - Probability & Statistics**  \nhttps:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Fprobability  \nML is basically a subdiscipline of applied statistics mashed with computer science. So basic understanding of probability & statistics is essential. You don't need to watch all lessons, but at least the first few sections to understand the concepts. Bear in mind as you watch the more advanced stuff - which may not be necessary for ML - they actually help you understand the basic stuff better. \n\n\n**Khan Academy - Linear Algebra**  \nhttps:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Flinear-algebra  \nAgain you probably don't need to watch them all, but at least vectors, matrices, operations on them, dot & cross product, matrix multiplication etc. are essential for the most basic understanding of ML maths (see the short numpy sketch below). 
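\n\nA minimal sketch of those core operations in numpy (my own illustration, not part of the Khan Academy material):\n\n```python\n# Toy illustration of the basic linear algebra ops mentioned above.\nimport numpy as np\n\nv = np.array([1.0, 2.0, 3.0])\nw = np.array([4.0, 5.0, 6.0])\nprint(v @ w)           # dot product: 1*4 + 2*5 + 3*6 = 32\nprint(np.cross(v, w))  # cross product of two 3-vectors: [-3, 6, -3]\n\nA = np.array([[1.0, 2.0],\n              [3.0, 4.0]])\nB = np.array([[5.0, 6.0],\n              [7.0, 8.0]])\nprint(A @ B)      # matrix multiplication: rows of A dotted with columns of B\nprint(A @ v[:2])  # matrix times vector, the basic operation inside a neural network layer\n```\n\n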
Basis, eigenvalues\u002Feigenvectors are essential for deeper understanding of some areas, but you could scrape by without them, at least for now.\n\n**Khan Academy - Calculus**  \nhttps:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Fcalculus-home  \n**Precalculus** - trigonometry, vectors, matrices are essential. If you watched the linear algebra lessons above you may not need this. Complex numbers, sequences and series etc. are useful and come up in various advanced areas, but not necessary for basics. \n\n**Differential Calculus** - Essential (esp chain rule) if you want to understand maths of ML. You could skip proofs and the applications if you're impatient, though maxima \u002F minima, concavity & inflection, optimization are important.\n\n**Integral Calculus** - I don't think integral calculus is integral to DL (see what I did there? :). I've seen it in a few proofs, but it's more of a niche thing I think in ML which beginners could skip for now. Watch at least the first sections to know what it is. Function approximation, series etc. do come up in more advanced areas but safe to skip for now. \n\n**Multivariate Calculus** - Essential if you really want to understand the maths of DL, especially (partial) derivatives of multivariable functions. Hessians, Jacobians, Laplacians etc come up a lot in advanced areas, but you could get by and understand basic ML without knowing these. I.e. you could skip these for now and come back later, once you encounter them. \n\n---\n\n## Machine Learning \u002F Deep Learning\n### Short \u002F Introductory Courses\nThese are probably enough (combined with some tutorials) if you just want to be able to play with and tweak existing ML\u002FDL code and algorithms. \n\n**Machine Learning by Andrew Ng @ Coursera**  \nhttps:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning  \nFantastic introductory course and foundation for ML. Covers basics of ML from linear and logistic regression to artificial neural networks. Gives great insight into concepts and techniques with minimal maths. Requires basic knowledge of linear algebra and differential calculus. Note: doesn't cover specifics of current *deep* learning (e.g. convolutional neural networks, recurrent neural networks etc.), so is mainly a great foundation for more advanced studies. Andrew Ng was a co-founder of Google Brain and is now chief scientist at Baidu Research. He is great at giving intuition. \n\n**Deep Learning by Google @ Udacity**  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-learning--ud730  \nBrief introduction to DL for those who are familiar with ML. This is a very short course; I think I went through the whole thing in under 2 hours. It's almost a reading of the tensorflow tutorials ( https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Findex.html ). It gives a top level summary of basic DL techniques. Assumes you're comfortable with ML and related concepts. So at least Andrew Ng's Coursera course (or equivalent knowledge) is a must. Don't expect to be a DL wizard after this, but at least you might know what a CNN or RNN is. If you're going to look at any of the advanced ML courses below, watch this DL course *after* them.\n\n\n### Longer \u002F Advanced Courses\nThese will help you understand what's actually going on, perhaps even understand some DL papers (I say 'some' DL papers because others are just insanely theoretical and dense). 
\n\n**CS188 Introduction to Artificial Intelligence by Pieter Abbeel @ Berkeley**  \n(some videos have audio issues, so below are a bunch of playlists from different years; I had to pick and choose from different playlists depending on audio problems).    \nhttps:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCDZUttQj8ytfASQIcvsLYgg (Spring 2015)  \nhttps:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCB4_W1V-KfwpTLxH9jG1_iA (Spring 2014)  \nhttps:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCshmLD2MsyqAKBx8ctivb5Q (Fall 2013)  \nhttps:\u002F\u002Fwww.youtube.com\u002Fuser\u002FCS188Spring2013 (Spring 2013)  \nThis is a fantastic introduction to *AI in general*, not specifically *ML*, and introduces many different fundamental areas of AI and ML. Spreads the net very wide, so if all you're interested in is playing with convolutional neural networks to make things like Deepdream, then 90% of this course won't be relevant. The first half is more agent-based AI starting with CSPs, decision trees, MDPs etc, and in that respect it is a bit unique compared to the other courses on this list. Then goes into various different classic ML topics. It is an introduction, so requires no prior knowledge of AI or ML, but it does go into maths, so requires decent understanding of the usual probability, linear algebra, calculus etc. Doesn't cover DL but a great foundation for a lot of AI and ML, especially if you want to get more into agent-based AI such as RL and Monte Carlo Tree Search (MCTS).\n\n**CS540 Machine Learning by Nando de Freitas @ UBC 2013**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6    \nThis covers many classic ML and SI etc from start all the way to neural networks. Doesn't require prior knowledge of ML, so can be considered a comprehensive introduction. It's way more thorough and detailed than Andrew Ng's Coursera and goes heavy into maths. Bear in mind it's a post-graduate CS course so it's quite advanced. Again spreads the net quite wide, but not as wide as CS188, instead goes deeper into some areas. Only brief intro to DL but comprehensive foundation in ML and SI. Nando is ace. Also prof at Oxford and works for Deepmind. \n\n**CS340 Machine Learning by Nando de Freitas @ UBC 2012**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLE6Wd9FR--Ecf_5nCbnSQMHqORpiChfJf  \nSimilar to above, but undergraduate version. I haven't actually watched these so I don't know how they differ from CS540. Probably a bit simpler.\n\n**Deep Learning by Nando de Freitas @ Oxford 2015**    \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu  \nSimilar to CS540 but more about DL. Definitely requires more understanding of statistics and multivariate differential calculus, and prior knowledge in ML\u002FSI (Andrew Ng's Coursera may be enough, but I really recommend Nando's CS540 or Pieter's CS188). Even knowledge of information theory would be useful. Great guest lectures by Alex Graves on generative RNNs and Karol Gregor on VAEs.\n\n**CS229 Machine Learning by Andrew Ng @ Stanford 2008**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fview_play_list?p=A89DCFA6ADACE599  \nAnother very comprehensive introduction to ML\u002FSI. Nothing like his Coursera: way more theoretical, covers lots more topics, and much more thorough. Kind of like a mashup of Pieter Abbeel's CS188 AI Course and Nando de Freitas's CS540 ML Course. This course is more detailed in some areas, and less detailed in others (e.g. 
AFAIR goes deeper into MDPs and RL than Abbeel's CS188, but doesn't cover bayes nets). They all provide slightly different perspectives and insights. Also doesn't cover DL, just a really solid comprehensive foundation for ML and SI. \n\n**Neural Networks for Machine Learning by Geoffrey Hinton @ Coursera**  \nhttps:\u002F\u002Fwww.coursera.org\u002Fcourse\u002Fneuralnets  \nGoes deep into some areas of DL and is rather advanced. Hinton is one of the titans of DL and there is a lot of insight in here, but I found it a bit all over the place and I wasn't a huge fan of it. I.e. I don't think it's very useful as a *linear educational* resource and requires prior knowledge of ML, SI and DL. If you first learn these topics elsewhere (e.g. videos above) and then come back to this course then you can find great insight. Otherwise if you dive straight into this you will get lost.  \n\n\n**Computational Neuroscience by Rajesh Rao & Adrienne Fairhall @ Coursera**  \nhttps:\u002F\u002Fwww.coursera.org\u002Fcourse\u002Fcompneuro  \nNot directly related to DL but fascinating nevertheless. Starts quite fun but gets rather heavy, especially Adrienne's sections. Rajesh takes things quite slow and re-iterates everything, but I think Adrienne is used to dealing with comp-neuroscience postgrad students and flies through the slides. Expect to pause the video on every slide while you try to digest what's on the screen. Requires decent understanding of the usual suspects, linear algebra, differential calculus, probability and statistical analysis, including things like PCA etc.\n\n---\n## I haven't completed, but started or skimmed through\n**Machine Learning for Musicians and Artists by Rebecca Fiebrink @ Kadenze**  \nhttps:\u002F\u002Fwww.kadenze.com\u002Fcourses\u002Fmachine-learning-for-musicians-and-artists\u002Finfo  \nAimed at artists and musicians. I haven't watched this but knowing Rebecca and her work this is bound to be ace. \n\n**Machine Learning by Georgia Tech (Charles Isbell & Michael Littman) @ Udacity**  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fmachine-learning-supervised-learning--ud675  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fmachine-learning-unsupervised-learning--ud741  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fmachine-learning-reinforcement-learning--ud820  \nLooks like a basic introduction to the main topics. Doesn't look too heavy. Probably requires basic linear algebra etc but not too complex. Charles Isbell and Michael Littman are really good. \n\n**Reinforcement Learning by Michael Littman, Chris Pryby & Charles Isbell @ Udacity**  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Freinforcement-learning--ud600  \nDid about half of this then got distracted by other things. Similar to above but focuses on MDPs and RL and is quite thorough. I'd like to finish it but have other priorities right now. \n\n**Reinforcement Learning by David Silver @ UCL 2015**  \nhttp:\u002F\u002Fwww0.cs.ucl.ac.uk\u002Fstaff\u002Fd.silver\u002Fweb\u002FTeaching.html  \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PL5X3mDkKaJrL42i_jhE4N-p6E2Ol62Ofa  \nIntroduction to MDPs and RL. Looks lighter and briefer than above. But perhaps enough for most RL explorations. David Silver is superstar of Deepmind's AlphaGo (which uses Deep RL). \n\n**Probabilistic Graphical Models by Daphne Koller \u002F Stanford @ Coursera**  \nhttps:\u002F\u002Fwww.coursera.org\u002Fcourse\u002Fpgm  \nNot actually directly related to DL but probabilistic methods, bayes networks etc. (i.e. 
related to Josh Tenenbaum's talks at the top). I started this but stopped after a while as I got busy with other things. Starts fun but gets quite heavy. Looks like it's very thorough, perhaps too thorough as it seems to be covering a whole range of topics past and present. I'd like to finish it but have other priorities right now. \n\n---\n# Tutorials \u002F Articles \u002F Blogs\nThere are so many articles & tutorials now, especially tutorials on very specific subjects, that I can't list them all. So below are a few main ones that cover broad topics and mainly foundational material. Many of these require some prior knowledge of ML\u002FDL and linear algebra, calculus etc. \n\n\n### Blogs & Unstructured Tutorials\nThese are sites which have unstructured tutorials on various different topics. \n\nhttp:\u002F\u002Fcolah.github.io  \nChris Olah's blog. Lots of great insight on complex topics and concepts. \n\nhttp:\u002F\u002Fkarpathy.github.io  \nAndrej Karpathy's blog. Similar to above.\n\nhttp:\u002F\u002Fblog.otoro.net\u002F  \n@hardmaru's blog. Great explanations of concepts and example code too. \n\nhttp:\u002F\u002Ffastml.com\u002F  \nLots of good examples. \n\nhttp:\u002F\u002Fblog.keras.io   \nTutorials on DL as implemented in Keras, a python based DL framework that sits on top of Tensorflow and Theano. \n\n### Linear Tutorials\nThese are linear tutorials that run from beginning to end. \n\nhttps:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Findex.html  \nTutorials on DL as implemented in Tensorflow, Google's python based DL framework. Requires understanding of ML fundamentals, linear algebra, calculus etc.\n\nhttp:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Fcontents.html  \nTutorials on DL as implemented in Theano, a python based DL framework. Requires understanding of ML fundamentals, linear algebra, calculus etc. \n\n---\n\n# Books\n**Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville**  \nhttp:\u002F\u002Fwww.deeplearningbook.org  \nFree online book. Very recent. Briefly covers required maths too.\n\n**Information Theory, Inference, and Learning Algorithms by David Mackay**  \nhttp:\u002F\u002Fwww.inference.phy.cam.ac.uk\u002Fitprnn\u002Fbook.html  \nFree online book. Relatively old (1st 1997, current 2005) but classic textbook. Very statistical and theoretical. Heavy. Requires good understanding of multivariate calculus, linear algebra etc. \n\n**Pattern Recognition and Machine Learning by Chris Bishop**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fen-us\u002Fum\u002Fpeople\u002Fcmbishop\u002Fprml  \nSimilar to above (not online or free though). Classic textbook. Very theoretical. \n\n**Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto**  \nhttps:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fbook\u002Fthe-book.html  \nOnline book on RL.\n\n**The Mathematical Theory of Communication by Claude E. Shannon (1948\u002F1949)**  \nhttp:\u002F\u002Fieeexplore.ieee.org\u002Fxpl\u002FarticleDetails.jsp?reload=true&arnumber=6773024    \nhttp:\u002F\u002Fwww.amazon.com\u002FMathematical-Theory-Communication-Claude-Shannon\u002Fdp\u002F0252725484   \nClassic article\u002Fbook which gave birth to modern Information Theory. I realize I'm on a slippery slope by recommending this as it's originally a paper, and if I suggest this I'd have to suggest a dozen others. 
But I find this book so useful as a foundation and alternative (supplementary) angle to help understand ML\u002FSI concepts that I couldn't resist it. I actually recommend the book version (as opposed to paper) because it has an additional article by Warren Weaver which explains the concepts in plain English, before Shannon explains them with maths. The maths isn't actually that hairy and mainly requires good understanding of basic probability and Bayes law. \n\n---\n\n# Other Recommendations\nThese are resources which I have not read or watched, but come highly recommended by others.\n\n**Linear Algebra (Video lectures) by Gilbert Strang & MIT**  \nhttp:\u002F\u002Focw.mit.edu\u002Fcourses\u002Fmathematics\u002F18-06-linear-algebra-spring-2010\u002Fvideo-lectures\u002F  \n\n\n**Machine Learning: a Probabilistic Perspective (Book) by Kevin Patrick Murphy**  \nhttps:\u002F\u002Fwww.cs.ubc.ca\u002F~murphyk\u002FMLbook  \n\n**Most Cited Deep Learning Papers by Terry Taewoong Um**  \nhttps:\u002F\u002Fgithub.com\u002Fterryum\u002Fawesome-deep-learning-papers  \n\"A curated list of the most cited deep learning papers (since 2010). I believe that there exist classic deep learning papers which are worth reading regardless of their applications. Rather than providing overwhelming amount of papers, I would like to provide a curated list of the classic deep learning papers which can be considered as must-reads in some area.\"\n\n**Critical Algorithm Studies: a Reading List by Tarleton Gillespie and Nick Seaver**  \nhttps:\u002F\u002Fsocialmediacollective.org\u002Freading-lists\u002Fcritical-algorithm-studies  \nNot related to the maths\u002Falgorithms directly but important nevertheless.  \n\"This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others.\"\n\n**Programming Community Curated Resources for Learning AI**\nhttps:\u002F\u002Fhackr.io\u002Ftutorials\u002Flearn-artificial-intelligence-ai\n\"Learn Artificial Intelligence (AI) from the best online Artificial Intelligence courses\u002Ftutorials submitted and voted by the programming community.\"\n\n\n\n\n---\n# Notes\nThis is a list of resources that I have personally watched or read, accompanied by my brief thoughts on them, ~~it doesn't make sense for me to accept pull requests (unless there are typos). But~~. Please do send me suggestions (probably via issues) for others. I have added an 'Other Recommendations' section where I am including other resources which come highly recommended by others. 
\n\n","*此列表可在 GitHub 和 Medium 上找到：  \nhttps:\u002F\u002Fgithub.com\u002Fmemo\u002Fai-resources  \nhttps:\u002F\u002Fmedium.com\u002F@memoakten\u002Fselection-of-resources-to-learn-artificial-intelligence-machine-learning-statistical-inference-23bc56ba655*\n\n***更新于 2017 年 4 月**：自从我发布这份资源清单以来，已经近一年了。这一年里，关于人工智能、机器学习、统计推断、深度学习和强化学习等主题的文章、视频、书籍、教程等内容呈井喷式增长，甚至出现了许多类似的“资源列表”。因此，要保持这份清单的实时更新几乎是不可能的。不过，我想补充的一份资源是由 Gene Kogan 主导的 https:\u002F\u002Fml4a.github.io\u002F（https:\u002F\u002Fgithub.com\u002Fml4a）。该资源特别面向艺术家和创意编程社区。*\n\n# 引言\n这是一份非常不完整且带有主观色彩的资源精选，旨在帮助大家学习人工智能（AI）、机器学习（ML）、统计推断（SI）、深度学习（DL）和强化学习（RL）相关的算法与数学知识。本资源主要面向初学者（即没有计算机科学背景、对这些领域一无所知的人），目标是引导他们达到相当高级的水平（能够阅读并理解深度学习领域的论文）。这份清单并非详尽无遗，仅包含了我个人已完成的部分学习材料，并附上了简短的个人评注。同时，它也绝非所谓的“最佳”学习路径（如今大多数 MOOC 平台都提供了从基础统计学、线性代数到机器学习和深度学习的完整课程体系）。然而，这确实是我曾经走过的学习路径，某种程度上也是我对深度学习探索历程的一种记录（实际上，我在这些领域之间反复切换，可谓“疯狂”）。作为一名没有正式计算机科学背景但已多年从事编程工作的人，机器学习、统计推断、深度学习乃至计算机科学的语言、符号和概念对我来说完全陌生，学习曲线不仅陡峭，更像垂直、险峻且湿滑如冰的岩壁。\n\n以下许多资源其实并不专门针对深度学习，而是涵盖更为全面的机器学习和统计推断内容。深度学习本质上是在传统技术基础上进行的改进，因此一旦你打下了坚实的机器学习和统计推断基础，再学习深度学习就会更加得心应手。如果你系统地观看下面的视频课程，包括那些较为高级的内容，就能直接从相关学术论文中了解当前深度学习领域的最新进展。\n\n若想深入理解人工智能、机器学习、统计推断、深度学习和强化学习的核心原理，并掌握其中的数学知识，你需要扎实的线性代数（向量与矩阵）、概率论与统计学（其复杂程度远超想象）以及微积分基础（主要是多元微分学，往往比表面看起来简单）。为此，我也在清单中加入了相应的课程。切勿投机取巧。请务必花时间从头开始尽可能多地学习这些内容，因为牢固的基础至关重要。我常常会先打开一节课程，听上五分钟就发现自己完全不懂，于是又返回去观看另一节更基础的课程；再听五分钟还是不明白，便继续往前找更基础的课程……如此反复，有时甚至要倒退十节课程。这样的过程有时令人沮丧，仿佛徒手攀登垂直、险峻且湿滑如冰的岩壁，却缺乏合适的工具。\n\n当然，如果你只是想“使用”这些算法，而不必深入理解其背后的数学原理，或者仅仅希望从高层次的概念层面把握它们的工作机制，那也完全没问题。希望我下面的评注能帮助你厘清各部分内容。\n  \n\n# 小贴士\n以下列出的课程内容存在大量重叠，但这恰恰是一件**好事**。不要因为已经在其他地方看过或读过相关内容而跳过某些部分。当你试图学习并真正理解一些可能相当复杂的事物时，让不同的人以不同的方式为你讲解同一主题，往往能带来独特的见解和直觉，这是单纯依靠单一来源难以获得的。\n\n如果你对某些章节已经非常熟悉，可以尝试以 1.25 倍、1.5 倍甚至 2.0 倍的速度快速浏览，以便大致了解内容；一旦遇到新知识点或对已有内容的新视角时，再将播放速度调回正常速率。\n---\n# 视频讲座与研讨会\n这些是一次性的视频讲座（约一小时）或研讨会（约两到三小时），旨在提供概览、直观理解或进阶速成课程。虽然你可能无法深入了解其内部运作机制，但至少能够明白这些技术是什么、有何意义以及如何应用。通常，在深入学习更系统的 MOOC 课程之前，先观看这类入门级内容会很有帮助。\n\n### 入门概述\n\n**Yann LeCun 和 Yohua Bengio 在 NIPS 2015 上的深度学习演讲**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259574  \n由深度学习领域的两位泰斗 Yoshua Bengio 和 Yann LeCun 主讲，对深度学习的定义、基本原理（高度概括）以及最新进展进行了相当前沿的综述。其中也不乏对其各自研究团队的宣传成分。整体内容并不十分技术化，也不需要太多数学基础。不过，部分幻灯片会涉及一些技术细节，若你熟悉常用的线性代数、微积分等知识，则能从中获得更多收获。对于完全不了解 AI、ML 或 DL 的人来说，这门课程将非常有帮助，能够让你初步认识深度学习究竟是什么；而对于已经掌握机器学习但尚未接触深度学习的人来说，同样具有较高的参考价值。\n\n**Yann LeCun 2014 年的《深度学习的不合理有效性》演讲**  \nhttp:\u002F\u002Fvideolectures.net\u002Fsahd2014_lecun_deep_learning\u002F  \n来自深度卷积神经网络之父 Yann LeCun 的著名演讲。简要介绍了深度学习及其强大之处，随后重点讨论了卷积神经网络。内容与上述演讲颇为相似，若已观看前者，可考虑略过此篇。\n\n**Juergen Schmidhuber 在 NYC ML Meetup 2014 上的“深度学习文艺复兴”演讲**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6bOMf9zr7N8  \n由另一位深度学习领域的泰斗 Juergen Schmidhuber 带来的另类历史回顾。他详细梳理了深度学习算法的发展脉络及其起源，随后聚焦于循环神经网络（RNN），而他的实验室正是 RNN 领域诸多创新的发源地（包括 LSTM）。演讲中也包含对自己研究团队的宣传成分。相较于前两篇，本视频内容更具深度，或许也更为有趣。\n\n**Michael Littman 在 RLDM 2015 上的“计算强化学习基础”演讲**  \nhttp:\u002F\u002Fvideolectures.net\u002Frldm2015_littman_computational_reinforcement  \n这是一场关于强化学习的概览性入门介绍。具备马尔可夫决策过程等相关经验会更有帮助，但并非必需。Michael Littman 是强化学习领域的老派大师之一，演讲风格风趣幽默。\n\n### 高级速成课程\n\n**Ruslan Salakhutdinov 在 KDD 2014 上讲授的深度学习**  \nhttp:\u002F\u002Fvideolectures.net\u002Fkdd2014_salakhutdinov_deep_learning  \n深度学习概述，包括 DBN、RBM、PGM 等，这些方法如今已不如以前流行。内容非常理论化、密集且数学性强。对于初学者来说可能用处不大。Salakhutdinov 是深度学习领域的另一位重要人物。\n\n**Rich Sutton 在 NIPS 2015 上讲授的带有函数逼近的强化学习导论**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259577  \n另一门强化学习入门课程，但更加技术性和理论性。Rich Sutton 是强化学习领域的老派宗师。\n\n**David Silver 在 RLDM 2015 上讲授的深度强化学习**  
\nhttp:\u002F\u002Fvideolectures.net\u002Frldm2015_silver_reinforcement_learning  \n关于 DeepMind 在 Atari 游戏和 AlphaGo 中使用的深度强化学习的高级入门课程。内容相当技术性，需要对强化学习、TD 学习和 Q-Learning 等有较好的理解（参见下面的强化学习课程）。David Silver 是新一代强化学习的领军人物，也是 DeepMind 的 AlphaGo（使用深度强化学习）中的超级明星。\n\n**Ian Murray 在 NIPS 2015 上讲授的蒙特卡洛推理方法**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259575  \n对采样\u002F基于蒙特卡洛的方法的良好介绍和概述。虽然对许多深度学习任务并非必需，但作为辅助知识还是很有帮助的。\n\n**Josh Tenenbaum 在 AAAI 2012 上讲授的“如何发展心智：统计、结构与抽象”**  \nhttp:\u002F\u002Fvideolectures.net\u002Faaai2012_tenenbaum_grow_mind\u002F  \n与当前的深度学习完全无关，采用了非常不同的方法：贝叶斯层次模型。目前在实际应用中尚未取得太大成功，但我仍然很欣赏这种方法，因为它所关注的问题和研究方向比深度学习更贴近现实世界（例如一次学习和迁移学习，尽管 DeepMind 现在也在用深度学习研究这些问题）。\n\n**Josh Tenenbaum 在 NIPS 2013 上讲授的“用于一次学习的两种架构”**  \nhttp:\u002F\u002Fvideolectures.net\u002Fnipsworkshops2013_tenenbaum_learning  \n与上述内容类似，但稍为新近一些。\n\n**Nathaniel Daw 在 NIPS 2015 上讲授的“大脑与行为中的最优与次优控制”**  \nhttp:\u002F\u002Fvideolectures.net\u002Frldm2015_daw_brain_and_behavior  \n与深度学习关系不大，从强化学习的计算视角出发，结合心理学和神经科学的研究，探讨人类的学习过程。需要对强化学习有较好的理解。\n\n\n**更多单次视频讲座请访问：**  \nhttp:\u002F\u002Fvideolectures.net\u002FTop\u002FComputer_Science\u002FArtificial_Intelligence  \nhttp:\u002F\u002Fvideolectures.net\u002FTop\u002FComputer_Science\u002FMachine_Learning\u002F  \n\n---\n\n# 大规模在线开放课程（MOOC）\n这些是包含大量视频讲座的长期集中课程。我大致按照推荐观看的顺序排列。\n\n\n## 基础 \u002F 数学\n如果你想*理解机器学习\u002F统计学\u002F深度学习的数学原理*，那么这些课程至关重要。如果你只想了解*概念*而不想深入数学细节，那么可以跳过这些基础课程直接进入*机器学习入门*课程。不过，这里也定义了一些基本术语（先验、条件概率、期望值、导数、向量、矩阵等），因此至少要明白这些术语的含义。\n\n与其现在把所有课程都看完，不如先看一些基础课程来掌握基本概念，等到遇到相关问题时再回过头来看更高级的内容。比如，你很可能永远都不会接触到海森矩阵，也不需要计算特征向量或手算矩阵的行列式。只有在真正需要的时候，再回来观看相关的课程即可。如果时间紧张，也可以跳过证明部分，但它们确实有助于更好地理解内容。不要急于求成。\n\nKhan 是一位超级英雄，他能让你理解那些你原本以为自己不可能懂的东西。\n\n\n**Khan Academy - 概率与统计**  \nhttps:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Fprobability  \n机器学习基本上是应用统计学与计算机科学相结合的一个分支学科。因此，对概率与统计的基本理解至关重要。不必观看所有课程，但至少要先看前几节以理解基本概念。需要注意的是，即使是一些对机器学习来说可能并不必要的高级内容，也能帮助你更好地理解基础知识。\n\n\n**Khan Academy - 线性代数**  \nhttps:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Flinear-algebra  \n同样，你可能不需要全部看完，但至少要掌握向量、矩阵及其运算、点积与叉积、矩阵乘法等内容，这是理解机器学习数学基础所必需的。基底、特征值\u002F特征向量对于深入理解某些领域很重要，但至少目前可以暂时不学。\n\n**Khan Academy - 微积分**  \nhttps:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Fcalculus-home  \n**预科微积分**——三角函数、向量、矩阵等内容是必不可少的。如果你已经看过线性代数课程，这部分可能就不必再学了。复数、数列与级数等知识也很有用，在一些高级领域会用到，但对于基础学习来说并非必需。\n\n**微分学**——如果你想理解机器学习的数学原理，微分学是必不可少的（尤其是链式法则）。如果你比较着急，可以跳过证明和应用部分，但极值、凹凸性、拐点以及优化等内容还是很重要的。\n\n**积分学**——我认为积分学对深度学习并不是至关重要的（你看，我在这里玩了个文字游戏 :）。我在一些证明中见过积分，但它更像是机器学习中的一个细分领域，初学者可以暂时跳过。至少先看看前几节，了解一下积分是什么。函数逼近、级数等内容会在更高级的领域出现，但目前可以先跳过。\n\n**多元微积分**——如果你真的想理解深度学习的数学原理，那么多元微积分是必不可少的，尤其是多元函数的（偏）导数。海森矩阵、雅可比矩阵、拉普拉斯算子等在高级领域中经常出现，但至少目前你可以先不学，等到遇到相关问题时再回过头来学习。也就是说，你可以暂时跳过这些内容，等以后需要用到时再继续学习。 \n\n---\n\n## 机器学习 \u002F 深度学习\n\n### 短期\u002F入门课程\n如果你只是想玩一玩、调一调现有的机器学习\u002F深度学习代码和算法，这些课程（再加上一些教程）可能就足够了。\n\n**吴恩达的机器学习课程 @ Coursera**  \nhttps:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning  \n这是一门非常棒的机器学习入门课程，为后续学习打下坚实基础。课程内容从线性回归、逻辑回归到人工神经网络，涵盖了机器学习的基础知识。它以较少的数学推导深入浅出地讲解各种概念和技术。要求具备线性代数和微积分的基础知识。需要注意的是，该课程并未涉及当前深度学习的具体内容（如卷积神经网络、循环神经网络等），因此主要作为更高级学习的基础。吴恩达曾是Google Brain的联合创始人，现任百度研究院首席科学家，他擅长用直观的方式解释复杂的概念。\n\n**谷歌的深度学习课程 @ Udacity**  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fdeep-learning--ud730  
\n这是一门针对已有机器学习基础的学习者的简短深度学习入门课程。我个人觉得整个课程不到2小时就能看完。它几乎就是对TensorFlow官方教程（https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Findex.html）的简单梳理，提供了一些基本深度学习技术的高层次概述。课程假设你已经熟悉机器学习及相关概念，因此至少需要掌握吴恩达Coursera课程的内容或同等水平的知识。不要指望学完这门课就能成为深度学习高手，但至少你会知道什么是CNN或RNN。如果你打算学习下面提到的更高级的机器学习课程，建议在学完那些课程*之后*再来看这门深度学习课程。\n\n### 长期\u002F进阶课程\n这些课程能帮助你深入理解背后的原理，甚至读懂一些深度学习论文（当然，只能说“一些”，因为很多论文确实过于理论化且晦涩难懂）。\n\n**伯克利大学Pieter Abbeel的CS188 人工智能导论**  \n（部分视频存在音频问题，因此我整理了不同年份的多个播放列表，根据音频情况从中挑选观看）。  \nhttps:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCDZUttQj8ytfASQIcvsLYgg（2015年春季）  \nhttps:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCB4_W1V-KfwpTLxH9jG1_iA（2014年春季）  \nhttps:\u002F\u002Fwww.youtube.com\u002Fchannel\u002FUCshmLD2MsyqAKBx8ctivb5Q（2013年秋季）  \nhttps:\u002F\u002Fwww.youtube.com\u002Fuser\u002FCS188Spring2013（2013年春季）  \n这门课程是对*人工智能整体*的绝佳入门介绍，并非专门针对机器学习。它涵盖了人工智能和机器学习中的多个基础领域，范围非常广泛。如果你只对运行卷积神经网络来实现DeepDream之类的效果感兴趣，那么这门课程中约90%的内容对你来说可能并不相关。课程前半部分主要围绕基于智能体的人工智能展开，从约束满足问题、决策树、马尔可夫决策过程等开始，在这一点上与其他课程相比显得较为独特。随后课程会进入各种经典的机器学习主题。由于是入门课程，不需要任何人工智能或机器学习的基础知识，但会涉及一定的数学内容，因此需要对概率论、线性代数、微积分等有较好的理解。该课程不涉及深度学习，但对于想要深入研究基于智能体的人工智能（如强化学习和蒙特卡洛树搜索）的人来说，是一个非常好的基础。\n\n**UBC大学Nando de Freitas的CS540 机器学习课程 2013年**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6  \n这门课程从基础讲起，一直涵盖到神经网络，内容全面，涉及经典机器学习和统计推断等多个领域。无需机器学习的基础知识，可以视为一门综合性的入门课程。与吴恩达的Coursera课程相比，这门课程更加深入细致，数学推导也更为繁重。需要注意的是，这是一门研究生级别的计算机科学课程，因此难度较高。虽然课程覆盖面也很广，但不如CS188那样宽泛，而是更深入地探讨某些特定领域。课程对深度学习仅有简要介绍，但在机器学习和统计推断方面提供了非常扎实的基础。Nando本人非常出色，同时也是牛津大学的教授，并在DeepMind工作。\n\n**UBC大学Nando de Freitas的CS340 机器学习课程 2012年**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLE6Wd9FR--Ecf_5nCbnSQMHqORpiChfJf  \n与上述CS540类似，但这是本科版本。我本人尚未观看过该课程，因此不清楚它与CS540有何区别。不过可以推测，其难度可能会稍低一些。\n\n**牛津大学Nando de Freitas的深度学习课程 2015年**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu  \n这门课程与CS540相似，但更专注于深度学习。它对统计学和多元微积分的要求更高，同时也需要具备一定的机器学习和统计推断的基础知识（吴恩达的Coursera课程或许足够，但我更推荐Nando的CS540或Pieter的CS188）。此外，信息论方面的知识也会有所帮助。课程还邀请了多位重量级嘉宾，例如Alex Graves关于生成式循环神经网络的讲座，以及Karol Gregor关于变分自编码器的讲解。\n\n**斯坦福大学Andrew Ng的CS229 机器学习课程 2008年**  \nhttps:\u002F\u002Fwww.youtube.com\u002Fview_play_list?p=A89DCFA6ADACE599  \n这又是一门非常全面的机器学习和统计推断入门课程。与他的Coursera课程完全不同，本课程更具理论性，涵盖的主题更多，内容也更加深入。它有点像Pieter Abbeel的CS188人工智能课程和Nando de Freitas的CS540机器学习课程的结合版。在这门课程中，某些部分讲解得更详细，而另一些则相对简略（例如，据我所知，它比Abbeel的CS188更深入地探讨了马尔可夫决策过程和强化学习，但没有涉及贝叶斯网络）。每门课程都提供了略有不同的视角和见解。同样，本课程也不涉及深度学习，但它为机器学习和统计推断提供了一个非常扎实的综合基础。\n\n**Coursera上的Geoffrey Hinton的机器学习神经网络课程**  \nhttps:\u002F\u002Fwww.coursera.org\u002Fcourse\u002Fneuralnets  \n这门课程深入探讨了深度学习的某些领域，难度较高。Hinton是深度学习领域的泰斗之一，课程中包含许多有价值的洞见，但我个人感觉内容有些零散，不太喜欢。也就是说，它并不适合作为一条清晰的线性学习路径，而是需要学习者具备一定的机器学习、统计推断和深度学习的基础知识。如果你先通过其他课程（如上面提到的视频）掌握了这些基础知识，再回过头来学习这门课程，就能获得很多启发。否则直接开始学习，很可能会感到迷茫。\n\n\n**Coursera上的计算神经科学课程，由Rajesh Rao和Adrienne Fairhall主讲**  \nhttps:\u002F\u002Fwww.coursera.org\u002Fcourse\u002Fcompneuro  \n这门课程与深度学习并无直接关联，但依然非常引人入胜。课程一开始比较轻松有趣，但后半部分逐渐变得非常艰深，尤其是Adrienne负责的部分。Rajesh讲解节奏较慢，会反复强调重点，而Adrienne似乎更习惯于面对计算神经科学方向的研究生，讲课速度很快。你可能会发现自己每看到一张幻灯片都要暂停一下，努力消化其中的内容。学习这门课程需要对常规的数学工具（如线性代数、微积分、概率论和统计分析，包括主成分分析等）有较好的理解。\n\n---\n## 我尚未完成，但已开始或略读过\n**Rebecca Fiebrink 在 Kadenze 上开设的“面向音乐家与艺术家的机器学习”课程**  \nhttps:\u002F\u002Fwww.kadenze.com\u002Fcourses\u002Fmachine-learning-for-musicians-and-artists\u002Finfo  \n面向艺术家和音乐家。我还没看过这门课，但了解 Rebecca 及其工作，想必会非常出色。\n\n**乔治亚理工学院（Charles Isbell 和 Michael Littman）在 Udacity 上开设的机器学习课程**  
\nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fmachine-learning-supervised-learning--ud675  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fmachine-learning-unsupervised-learning--ud741  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fmachine-learning-reinforcement-learning--ud820  \n看起来是对主要主题的基础性介绍。内容不算太难。可能需要一些基础的线性代数等知识，但不会过于复杂。Charles Isbell 和 Michael Littman 都是非常优秀的讲师。\n\n**Michael Littman、Chris Pryby 和 Charles Isbell 在 Udacity 上开设的强化学习课程**  \nhttps:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Freinforcement-learning--ud600  \n我大概学了一半，后来就被其他事情分心了。与上述课程类似，但更专注于 MDP 和强化学习，内容相当深入。我很想把它学完，不过目前还有其他更重要的事情。\n\n**David Silver 于 2015 年在 UCL 开设的强化学习课程**  \nhttp:\u002F\u002Fwww0.cs.ucl.ac.uk\u002Fstaff\u002Fd.silver\u002Fweb\u002FTeaching.html  \nhttps:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PL5X3mDkKaJrL42i_jhE4N-p6E2Ol62Ofa  \n介绍 MDP 和强化学习。相比前面的课程，内容显得更轻松简短。但对于大多数强化学习的入门探索来说，可能已经足够了。David Silver 是 DeepMind 的 AlphaGo 团队中的超级明星，而 AlphaGo 正是基于深度强化学习开发的。\n\n**斯坦福大学 Daphne Koller 在 Coursera 上开设的概率图模型课程**  \nhttps:\u002F\u002Fwww.coursera.org\u002Fcourse\u002Fpgm  \n虽然与深度学习没有直接关系，但涉及概率方法、贝叶斯网络等内容（即与 Josh Tenenbaum 的演讲主题相关）。我曾经开始学习这门课，但后来因为忙于其他事情而中断了。课程开头很有趣，但越往后就越艰深。整体内容非常全面，甚至有些过于庞杂，似乎涵盖了从过去到现在的各种相关主题。我也很想把它学完，只是现在有更优先的事情要处理。\n\n---\n# 教程 \u002F 文章 \u002F 博客\n如今关于机器学习和深度学习的文章与教程多不胜数，尤其是针对特定主题的教程更是层出不穷，无法一一列举。因此，下面列出几篇涵盖广泛主题、以基础知识为主的优质资源。这些资源大多需要一定的机器学习、深度学习以及线性代数、微积分等方面的基础知识。\n\n\n### 博客与非结构化教程\n这些网站提供了关于各种不同主题的非结构化教程。\n\nhttp:\u002F\u002Fcolah.github.io  \nChris Olah 的博客。其中包含大量关于复杂主题和概念的深刻见解。\n\nhttp:\u002F\u002Fkarpathy.github.io  \nAndrej Karpathy 的博客。与前者类似。\n\nhttp:\u002F\u002Fblog.otoro.net\u002F  \n@hardmaru 的博客。对概念有很好的解释，并且还提供了示例代码。\n\nhttp:\u002F\u002Ffastml.com\u002F  \n有许多实用的示例。\n\nhttp:\u002F\u002Fblog.keras.io  \n关于使用 Keras 实现的深度学习教程。Keras 是一个基于 Python 的深度学习框架，构建在 TensorFlow 和 Theano 之上。\n\n### 线性教程\n这些是可以从头到尾循序学习的线性教程。\n\nhttps:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Findex.html  \n关于使用 Google 的 Python 深度学习框架 TensorFlow 实现的深度学习教程。需要理解机器学习基础、线性代数、微积分等知识。\n\nhttp:\u002F\u002Fdeeplearning.net\u002Ftutorial\u002Fcontents.html  \n关于使用 Python 深度学习框架 Theano 实现的深度学习教程。同样需要具备机器学习基础、线性代数、微积分等知识。\n\n---\n\n# 书籍\n**Ian Goodfellow、Yoshua Bengio 和 Aaron Courville 合著的《深度学习》**  \nhttp:\u002F\u002Fwww.deeplearningbook.org  \n免费在线书籍。内容非常新，也简要介绍了所需的数学知识。\n\n**David Mackay 著的《信息论、推断与学习算法》**  \nhttp:\u002F\u002Fwww.inference.phy.cam.ac.uk\u002Fitprnn\u002Fbook.html  \n免费在线书籍。出版时间较早（初版 1997 年，现行版 2005 年），但属于经典教材。内容偏重统计与理论，较为艰深，需要较好的多元微积分和线性代数基础。\n\n**Chris Bishop 著的《模式识别与机器学习》**  \nhttp:\u002F\u002Fresearch.microsoft.com\u002Fen-us\u002Fum\u002Fpeople\u002Fcmbishop\u002Fprml  \n与上一本类似（不过并非在线或免费）。这是一本经典的理论教材，内容非常理论化。\n\n**Richard S. Sutton 和 Andrew G. Barto 合著的《强化学习：导论》**  \nhttps:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fbook\u002Fthe-book.html  \n一本关于强化学习的在线书籍。\n\n**Claude E. 
Shannon 于 1948\u002F1949 年撰写的《通信的数学理论》**  \nhttp:\u002F\u002Fieeexplore.ieee.org\u002Fxpl\u002FarticleDetails.jsp?reload=true&arnumber=6773024    \nhttp:\u002F\u002Fwww.amazon.com\u002FMathematical-Theory-Communication-Claude-Shannon\u002Fdp\u002F0252725484   \n这是一部开创现代信息论的经典著作。我意识到推荐这本书可能会引发一系列连锁反应——既然提到了这篇论文，那接下来是不是还要推荐其他相关的文献呢？然而，我发现这本书作为理解机器学习和统计推断概念的基础和补充视角，具有极高的价值，因此还是忍不住要推荐它。我特别推荐它的书籍版本，因为它附带了 Warren Weaver 的一篇额外文章，用通俗易懂的语言解释了相关概念，然后再由 Shannon 用数学语言进行阐述。书中的数学并不算特别复杂，主要需要掌握基本的概率论和贝叶斯法则。\n\n---\n\n# 其他推荐\n以下是一些我尚未阅读或观看，但被他人高度推荐的资源。\n\n**Gilbert Strang 和 MIT 提供的线性代数视频讲座**  \nhttp:\u002F\u002Focw.mit.edu\u002Fcourses\u002Fmathematics\u002F18-06-linear-algebra-spring-2010\u002Fvideo-lectures\u002F  \n\n\n**Kevin Patrick Murphy 著的《机器学习：概率视角》**  \nhttps:\u002F\u002Fwww.cs.ubc.ca\u002F~murphyk\u002FMLbook  \n\n**Terry Taewoong Um 整理的“被引用最多的深度学习论文”列表**  \nhttps:\u002F\u002Fgithub.com\u002Fterryum\u002Fawesome-deep-learning-papers  \n“这是一份精心挑选的、自 2010 年以来被引用次数最多的深度学习论文清单。我认为，有一些经典的深度学习论文无论其具体应用如何，都值得阅读。与其提供海量的论文，不如精选出那些在某些领域堪称必读的经典深度学习论文。”\n\n**Tarleton Gillespie 和 Nick Seaver 编写的“批判性算法研究阅读清单”**  \nhttps:\u002F\u002Fsocialmediacollective.org\u002Freading-lists\u002Fcritical-algorithm-studies  \n虽然与数学或算法本身没有直接关系，但仍然非常重要。  \n“这份清单旨在收集并分类日益增长的关于算法的社会性议题的批判性文献。所收录的作品涵盖了社会学、人类学、科学技术研究、地理学、传播学、媒体研究以及法律研究等多个领域。”\n\n**编程社区精选的人工智能学习资源**  \nhttps:\u002F\u002Fhackr.io\u002Ftutorials\u002Flearn-artificial-intelligence-ai  \n“通过编程社区提交并投票选出的最佳在线人工智能课程和教程，学习人工智能。”\n\n---\n# 注释\n这是一份我个人观看或阅读过的资源列表，并附上了我的简短感想。~~不过，对我来说，接受拉取请求并不太合理（除非是拼写错误）。但是~~请大家通过议题等方式向我推荐其他资源。我还添加了一个“其他推荐”章节，里面收录了由他人高度推荐的资源。","# ai-resources 快速上手指南\n\n**注意**：`ai-resources` 并非一个可安装的软件库或框架，而是一个由 Memo Akten 维护的**人工智能学习资源精选列表**。它包含了视频讲座、在线课程（MOOC）、书籍和教程的链接，旨在帮助零基础开发者系统性地掌握 AI、机器学习、统计推断及深度学习的数学原理与算法。\n\n本指南将指导你如何访问并利用这份资源开始学习。\n\n## 环境准备\n\n由于本项目是资源索引而非代码库，无需特定的操作系统或复杂的依赖环境。你只需要：\n\n*   **硬件设备**：一台可以连接互联网的电脑、平板或手机。\n*   **网络环境**：\n    *   能够访问 **GitHub** (https:\u002F\u002Fgithub.com) 查看完整列表。\n    *   能够访问 **YouTube**, **VideoLectures.net**, **Khan Academy** 等视频托管平台。\n    *   *国内用户提示*：部分视频源（如 YouTube）在国内可能无法直接访问。建议配置科学上网环境，或寻找 Bilibili 等国内平台上的对应搬运内容（搜索讲师姓名 + 课程名称，如 \"Yann LeCun Deep Learning\"）。\n*   **前置知识**：\n    *   无需计算机科班背景。\n    *   建议具备基础的编程经验（任何语言）。\n    *   心态准备：作者强调学习曲线陡峭，需要耐心从头补全线性代数、概率统计和微积分基础。\n\n## 安装步骤\n\n本项目无需执行安装命令。请通过以下方式获取资源列表：\n\n1.  **访问 GitHub 仓库（推荐）**\n    在浏览器中打开以下地址，查看最新整理的资源清单：\n    ```text\n    https:\u002F\u002Fgithub.com\u002Fmemo\u002Fai-resources\n    ```\n\n2.  **访问 Medium 文章**\n    查看作者撰写的详细导读文章：\n    ```text\n    https:\u002F\u002Fmedium.com\u002F@memoakten\u002Fselection-of-resources-to-learn-artificial-intelligence-machine-learning-statistical-inference-23bc56ba655\n    ```\n\n3.  **补充资源（针对创意编码\u002F艺术家）**\n    作者特别推荐由 Gene Kogan 领导的面向艺术家和创意编码社区的资源：\n    ```text\n    https:\u002F\u002Fml4a.github.io\u002F\n    ```\n\n## 基本使用\n\n本资源的“使用”即按照作者推荐的路径进行学习。以下是基于 README 内容提炼的最简学习路径示例：\n\n### 1. 建立直观认知（入门视频）\n在深入数学之前，先观看简短的概述视频，了解 AI\u002F深度学习是什么。\n*   **推荐内容**: *Deep Learning by Yann LeCun and Yoshua Bengio @ NIPS 2015*\n*   **操作**: 访问视频链接，以 1.0x - 1.5x 倍速观看，无需纠结细节，重点理解概念。\n    ```text\n    http:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259574\n    ```\n\n### 2. 夯实数学基础（核心环节）\n若要真正理解算法背后的原理，必须补习数学。不要跳过此步骤。\n*   **推荐平台**: Khan Academy (可汗学院)\n*   **学习顺序**:\n    1.  **概率与统计 (Probability & Statistics)**: 重点理解先验、条件概率、期望值。\n        ```text\n        https:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Fprobability\n        ```\n    2.  
**线性代数 (Linear Algebra)**: 重点掌握向量、矩阵、点积、矩阵乘法。特征值\u002F特征向量可视情况后续补充。\n        ```text\n        https:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Flinear-algebra\n        ```\n    3.  **微积分 (Calculus)**: 重点掌握微分（特别是链式法则）、优化（最大值\u002F最小值）。积分对深度学习非必须，可暂缓。\n        ```text\n        https:\u002F\u002Fwww.khanacademy.org\u002Fmath\u002Fcalculus-home\n        ```\n\n### 3. 系统课程学习（MOOC）\n完成基础数学后，进入系统的在线课程学习。\n*   **策略**: 作者建议按顺序观看多个不同讲师的同类课程，因为不同人的讲解角度能提供不同的直觉。\n*   **进阶视频示例**: 当准备好学习强化学习时，可观看 Rich Sutton 或 David Silver 的讲座。\n    ```text\n    # 强化学习入门 (Rich Sutton)\n    http:\u002F\u002Fresearch.microsoft.com\u002Fapps\u002Fvideo\u002F?id=259577\n    \n    # 深度强化学习 (David Silver, AlphaGo 背后技术)\n    http:\u002F\u002Fvideolectures.net\u002Frldm2015_silver_reinforcement_learning\n    ```\n\n### 4. 学习技巧提示\n*   **重复学习**: 如果某个知识点不懂，不要硬撑，返回去观看更基础的讲座，直到打通任督二脉。\n*   **倍速播放**: 对于已熟悉的内容，使用 1.25x 或 1.5x 倍速快速过一遍；遇到新难点切换回 1.0x。\n*   **目标导向**: 如果只想调用算法而不深究数学，可跳过第 2 步的深层证明部分，直接关注概念性课程。","一位非计算机科班出身的创意开发者，试图从零开始掌握深度学习算法以完成艺术生成项目，却因数学基础薄弱而陷入学习困境。\n\n### 没有 ai-resources 时\n- **资源过载且良莠不齐**：面对网络上爆炸式增长的教程、视频和文章，无法分辨哪些适合零基础入门，浪费大量时间试错。\n- **知识断层严重**：直接啃读深度学习论文时，因缺乏线性代数、概率统计等前置数学知识，导致理解过程如“攀爬垂直冰壁”般艰难且令人沮丧。\n- **学习路径混乱**：没有经过验证的进阶路线，只能在各个碎片化知识点间盲目跳跃，难以构建从基础统计到高级深度学习的完整知识体系。\n- **单一视角局限**：仅依赖单一来源的解释，遇到晦涩概念时缺乏不同角度的解读，难以形成直观的直觉理解。\n\n### 使用 ai-resources 后\n- **精选路径指引**：依托作者亲身完成并评论过的资源列表，直接锁定适合非科班背景的高质量材料，避免了在低质内容中大海捞针。\n- **夯实数学地基**：按照推荐顺序先系统补习线性代数与微积分等核心数学课，为后续理解复杂的机器学习算法扫清了根本障碍。\n- **循序渐进进阶**：遵循从传统机器学习\u002F统计推断到深度学习的自然演进路径，稳固基础后再接触前沿论文，学习曲线变得平滑可控。\n- **多维视角解惑**：利用列表中重叠的课程资源，通过不同讲师对同一概念的多样化阐释，迅速突破理解瓶颈，获得深刻的直觉洞察。\n\nai-resources 通过提供一条经实战验证的、重视数学根基的个性化学习路径，帮助非科班开发者将原本陡峭危险的学习过程转化为可执行的稳步进阶之旅。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmemoakten_ai-resources_1a3dba92.png","memoakten","Memo Akten","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmemoakten_d3d7f8fa.png","Sometimes I like to pretend to be an Olenellus fremonti evolving into an Olenellus mohavensis, just for fun – although I usually can't walk for days afterwards.",null,"http:\u002F\u002Fwww.memo.tv","https:\u002F\u002Fgithub.com\u002Fmemoakten",627,98,"2026-03-20T11:34:31","MIT","","未说明",{"notes":90,"python":88,"dependencies":91},"该仓库（ai-resources）并非一个可运行的软件工具或代码库，而是一份由作者个人整理的人工智能、机器学习、统计推断及深度学习的学习资源清单（包含视频讲座、MOOC 课程链接等）。因此，它没有特定的操作系统、GPU、内存、Python 版本或依赖库要求。用户只需具备网络连接和浏览器即可访问其中列出的外部学习资源。",[],[18],"2026-03-27T02:49:30.150509","2026-04-11T22:00:02.734818",[],[]]