[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ahmedbahaaeldin--From-0-to-Research-Scientist-resources-guide":3,"tool-ahmedbahaaeldin--From-0-to-Research-Scientist-resources-guide":61},[4,18,26,36,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 
等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",141543,2,"2026-04-06T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 
API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":10,"last_commit_at":58,"category_tags":59,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,60],"视频",{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":73,"owner_email":73,"owner_twitter":73,"owner_website":73,"owner_url":77,"languages":73,"stars":78,"forks":79,"last_commit_at":80,"license":73,"difficulty_score":81,"env_os":82,"env_gpu":83,"env_ram":83,"env_deps":84,"category_tags":87,"github_topics":88,"view_count":32,"oss_zip_url":73,"oss_zip_packed_at":73,"status":17,"created_at":96,"updated_at":97,"faqs":98,"releases":99},4543,"ahmedbahaaeldin\u002FFrom-0-to-Research-Scientist-resources-guide","From-0-to-Research-Scientist-resources-guide","Detailed and tailored guide for undergraduate students or anybody want to dig deep into the field of AI with solid foundation.","From-0-to-Research-Scientist-resources-guide 是一份专为零基础学习者打造的 AI 
科研进阶指南，旨在帮助本科生或任何希望深入人工智能领域的人士建立坚实的理论基础。它系统性地解决了初学者在面对海量学习资料时容易迷失方向、缺乏清晰学习路径的痛点，特别是针对深度学习与自然语言处理（NLP）方向提供了详尽的资源导航。\n\n这份指南非常适合具备基础编程知识或计算机科学背景，并立志成为 AI 研究员的开发者与学生使用。其独特亮点在于提供了“自底向上”与“自顶向下”两种灵活的学习策略：用户既可以选择先攻克线性代数、概率论、微积分及优化理论等数学基石，再进入算法应用；也可以优先通过实战项目上手，随后反向补充理论知识。内容涵盖机器学习、深度学习、强化学习及 NLP 等核心板块，并精心筛选了包括 MIT 公开课、经典教材及可视化教程在内的优质资源，同时标注了难度等级与对各领域的关联度，帮助用户高效规划从入门到科研的成长之路。","\u003Cdiv align=\"center\">\n\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fahmedbahaaeldin_From-0-to-Research-Scientist-resources-guide_readme_311e4c739223.png\" width=\"25%\"> \n    \n\n\n\n  \n  **\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fahmedbahaaeldin_From-0-to-Research-Scientist-resources-guide_readme_026c64708ff4.gif\" width=\"29px\">From Zero to Research Scientist full resources guide. \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fahmedbahaaeldin_From-0-to-Research-Scientist-resources-guide_readme_26df447c41af.gif\" width=\"29px\">**\n  \n  \n  ![Full Guide](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFullAI-Guide-brightgreen.svg)\n  ![Version 0.0.1](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVersion-0.0.1-blue.svg)\n\u003C\u002Fdiv>\n\n## Guide description\nThis guide is designed for anybody with basic programming knowledge or a computer science background interested in becoming a Research Scientist with a :dart: focus on Deep Learning and NLP.\n\nYou can go Bottom-Up or Top-Down; both work well, and it is actually crucial to know which approach suits you best. If you are okay with studying lots of mathematical concepts without application then use Bottom-Up. 
If you want to go hands-on first, then use the Top-Down approach.\n\n## Contents:\n- [Mathematical Foundation](#Mathematical-Foundations)\n   - [Linear Algebra](#Linear-Algebra) \n   - [Probability](#Probability) \n   - [Calculus](#Calculus)\n   - [Optimization Theory](#Optimization-Theory)\n- [Machine Learning](#Machine-Learning)\n- [Deep Learning](#Deep-Learning)\n- [Reinforcement Learning](#Reinforcement-Learning)\n- [Natural Language Processing](#Natural-Language-Processing)\n\n## Mathematical Foundations:\nThe Mathematical Foundation part is for all Artificial Intelligence branches such as Machine Learning, Reinforcement Learning, Computer Vision and so on. AI is heavily based on mathematical theory, so a solid foundation is essential.\n\n### Linear Algebra \n\n\u003Cdetails>\n  \u003Csummary>:infinity:\u003C\u002Fsummary>\n  \n\u003C!--START_SECTION:activity-->  \n\n  This branch of Math is crucial for understanding the mechanics of Neural Networks, which are the norm in today's state-of-the-art NLP methodologies.\n\nResource                    | Difficulty     | Relevance \n------------------------- | --------------- | -------------------------------\n[MIT Gilbert Strang 2005 Linear Algebra 🎥][gilbertStrang] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Computer+Vision&color=ff0101)\n[Linear Algebra 4th Edition by Friedberg 📘][Friedberg] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| 
![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[Mathematics for Machine Learning Book: Chapter 2 📘][mmlbook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Deep+Learning) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Machine+Learning+Algorithms&color=000000)\n[James Hamblin Awesome Lecture Series 🎥][James_Hamblin] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[3Blue1Brown Essence of Linear Algebra 🎥][3blue] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Machine+Learning+Algorithms&color=000000) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[Mathematics For Machine Learning Specialization: Linear Algebra 🎥][MMLLA] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[Matrix Methods for Linear Algebra by Gilbert Strang UPDATED! 
🎥][matrixmethods] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>|  ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n  \u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n### Probability\n\n\u003Cdetails>\n  \u003Csummary>:atom: \u003C\u002Fsummary>\n  \n\u003C!--START_SECTION:activity-->  \n\nMost Natural Language Processing and Machine Learning algorithms are based on Probability theory, so this branch is extremely important for grasping how older methods work.\n\nResource                    | Difficulty     | Relevance \n------------------------- | --------------- | -------------------------------\n[Joe Blitzstein Harvard Probability and Statistics Course 🎥][harvard] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Natural+Language+Processing&color=ff69b4) \n[MIT Probability Course 2011 Lecture videos 🎥][mitprob11] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Natural+Language+Processing&color=ff69b4) \n[MIT Probability Course 2018 short videos UPDATED! 
🎥][mitprob18] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![25%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Natural+Language+Processing&color=ff69b4) \n[Mathematics for Machine Learning Book: Chapter 6 📘][mmlbook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Natural+Language+Processing&color=ff69b4) \n [Probabilistic Graphical Models CMU Advanced 🎥][cmuprob] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Natural+Language+Processing&color=ff69b4) \n[Probabilistic Graphical Models Stanford Daphne Advanced 🎥][stanfordprobgraph] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) 
![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Natural+Language+Processing&color=ff69b4) \n [A First Course In Probability Book by Ross 📘][probBook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) \n [Joe Blitzstein Harvard Professor Probability Awesome Book 📘][harvBook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) \n  \u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n[harvBook]: https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1VmkAAGOYCTORq1wxSQqy255qLJjTNvBI\u002Fview\n\n### Calculus\n\n\u003Cdetails>\n  \u003Csummary>:triangular_ruler:\u003C\u002Fsummary>\n  \n\n\n\u003C!--START_SECTION:activity--> \nResource                    | Difficulty     | Relevance \n------------------------- | --------------- | --------------------------\n[Essence of Calculus by 3Blue1Brown🎥][bluecal]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>|![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Deep+Learning)\n[Single Variable Calculus MIT 2007🎥][single07]| \u003Cdiv 
class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>|![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Deep+Learning)\n[Strang's Overview of Calculus🎥][strangcalc]|\u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[MultiVariable Calculus MIT 2007🎥][multi07]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[Princeton University Multivariable Calculus 2013🎥][princeton]|\u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[Calculus Book by Stewart 📘][calcbok]|\u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) \n[Mathematics for Machine Learning Book: Chapter 5 📘][mmlbook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| 
![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Deep+Learning) ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) \n\n\n \u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n ### Optimization Theory\n \n\u003Cdetails>\n  \u003Csummary> 📉 \u003C\u002Fsummary>\n  \n\n\u003C!--START_SECTION:activity--> \nResource                    | Difficulty     | Relevance \n------------------------- | --------------- | --------------------------\n[CMU optimization course 2018🎥][cmuopti]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Machine-Learning-Algorithms&color=000000) \n[CMU Advanced optimization course🎥][cmuadvopti]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) \n[Stanford Famous optimization course 🎥][stanfordopti]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) \n[Boyd Convex Optimization Book 📕][boyd] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) \n 
\u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n-------------------------------------------------------------------------------- \n\n## Machine Learning\n\nOften considered a fancy name for statistical models whose main goal is to learn from data for a variety of uses. It is highly recommended to master these statistical techniques before starting research, as much of current research is inspired by these algorithms.\n\nResource                    | Difficulty Level \n------------------------- | ---------------\n[Mathematics for Machine Learning Part 2 📚][fullmmlbook] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Pattern Recognition and Machine Learning📚][patternML]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Elements of Statistical Learning 📚][eesl]|![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Introduction to Statistical Learning  📚][introSL]|![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[Machine Learning: A Probabilistic Perspective 📚][murphyml]|![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Berkeley CS188 Introduction to AI course 🎥][cs188]|![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[MIT Classic AI course taught by Prof. Patrick H. 
Winston 🎥][mitai]|![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[Stanford AI course 2018 🎥][stai18]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[California Institute of Technology Learning from Data course 🎥][caltldc]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[CMU Machine Learning 2015 10-601 🎥][cmuml2015]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[CMU Statistical Machine Learning 10-702 🎥][cmu702]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Information Theory, Pattern Recognition ML course 2012 🎥][PR2012]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Large Scale Machine Learning Toronto University 2015 🎥][toronto2015]|![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Algorithmic Aspects of Machine Learning MIT 🎥][Mitaspects]|![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[MIT Course 9.520 - Statistical Learning Theory and Applications, Fall 2015 🎥][mitfallslt]|![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Undergraduate Machine Learning Course University of British Columbia 2013 🎥][ubc2013]|![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n\n\n-------------------------------------------------------------------------------- \n\n[murphyml]: http:\u002F\u002Fnoiselab.ucsd.edu\u002FECE228\u002FMurphy_Machine_Learning.pdf\n[introSL]: 
https:\u002F\u002Fwww.ime.unicamp.br\u002F~dias\u002FIntoduction%20to%20Statistical%20Learning.pdf\n[patternML]:http:\u002F\u002Fusers.isr.ist.utl.pt\u002F~wurmd\u002FLivros\u002Fschool\u002FBishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf\n[eesl]: https:\u002F\u002Fweb.stanford.edu\u002F~hastie\u002FPapers\u002FESLII.pdf\n[fullmmlbook]: https:\u002F\u002Fmml-book.com\u002F\n[ubc2013]:https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=w2OtwL5T1ow&list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6\n[mitfallslt]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLyGKBDfnk-iDj3FBd0Avr_dLbrU8VG73O\n[Mitaspects]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLB3sDpSRdrOvI1hYXNsa6Lety7K8FhPpx\n[toronto2015]:https:\u002F\u002Fvideo-archive.fields.utoronto.ca\u002Fview\u002F2800\n[PR2012]: http:\u002F\u002Fvideolectures.net\u002Fcourse_information_theory_pattern_recognition\u002F\n[cmu702]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLjbUi5mgii6BWEUZf7He6nowWvGne_Y8r\n[cmuml2015]: http:\u002F\u002Fwww.cs.cmu.edu\u002F~ninamf\u002Fcourses\u002F601sp15\u002Flectures.shtml\n[caltldc]: https:\u002F\u002Fwork.caltech.edu\u002Flectures.html\n[cs188]: https:\u002F\u002Finst.eecs.berkeley.edu\u002F~cs188\u002Ffa18\u002F\n[mitai]: https:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-034-artificial-intelligence-fall-2010\u002Flecture-videos\u002Flecture-1-introduction-and-scope\u002F\n[stai18]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rO1NB9TD4iUZ3qghGEGtqNX\n\n## Deep Learning \n\nOne of the major breakthroughs at the intersection of Artificial Intelligence and Computer Science. 
It led to countless advances in technology and is considered the standard way to do Artificial Intelligence.\n\nResource                    | Difficulty Level \n------------------------- | ---------------\n[Deep Learning Book by Ian Goodfellow 📚][Ian] |![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[UCL DeepMind Deep Learning 🎥][ucl2020] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Advanced Talks by Deep Learning Pioneers 🎥][talkie] | ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Stanford Autumn 2018 Deep Learning Lectures 🎥][18standeep] | ![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[FAU Deep Learning 2020 Series 🎥][fau] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[CMU Deep Learning course 2020 🎥][cmudeep] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[Stanford Convolutional Neural Network 2017 🎥][stanfcnn] | ![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Oxford Deep Learning Awesome Lectures 2015 🎥][oxforddeep] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Stanford NLP with Deep Learning 2019 🎥][stanfordnlp2019] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Deep Learning from Probability and Statistics POV 🎥][alideep] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Advanced Deep Learning UCL 2017 course + Reinforcement Learning 🎥][ucladvrein] | ![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Deep Learning UC Berkeley 2020 Course 🎥][berkley2020] | 
![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[NYU Deep Learning with PyTorch hands-on 🎥][DeepPy] | ![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Classic Geoffrey Hinton Old course OUTDATED 🎥][jeoff] | ![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Pieter Abbeel Deep Unsupervised Learning 🎥][abdeeladv] | ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Hugo Larochelle Deep Learning series 🎥][hugodeep] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Deep Learning Book Explanation Series 🎥][deepbookexp] | ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Deep Learning Introduction by Durham University 🎥][Durham] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Fast.ai Practical Deep Learning 🎥][fast1] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Fast.ai Deep Learning From Foundations 🎥][fast2] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Deep Learning with Python (Keras Author) 📚][keras] | ![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n--------------------------------------------------------------------------------  \n\n## Reinforcement Learning \n\nIt is a sub-field of AI which focuses on learning by observation\u002Frewards. 
\n\nResource                    | Difficulty Level \n------------------------- | ---------------\n[Introduction to Reinforcement Learning 📚][rlbook] | ![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[David Silver Deep Mind Introductory Lectures 🎥][dsIntrodu] | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Stanford 2018 cs234 Reinforcement Learning🎥 ][cs234] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Stanford 2019 cs330 Meta Learning advanced course 🎥][cs330] | ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Sergey Levine 2018 UC Berkeley Lecture Videos 🎥][ucb2018rl] | ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Waterloo cs885 Reinforcement Learning 🎥][cs885] | ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Sergey Levine 2020 Deep Reinforcement Learning 🎥][sergie2020rl] | ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Reinforcement Learning Specialization Coursera GOLDEN courses🎥 (not free, but you can apply for financial aid)][courseraRL] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n\n--------------------------------------------------------------------------------  \n\n## Natural Language Processing\n\nIt is a sub-field of AI which focuses on the interpretation of Human Language. 
\n\nResource                    | Difficulty Level \n------------------------- | ---------------\n[Jurafsky Speech and Language Processing 📚][jurafskybook]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Christopher Manning Foundations of Statistical NLP📚][fsnlp]| ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Christopher Manning Introduction to Information Retrieval📚][manninginformationr]| ![Advanced](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[cs224n Natural Language Processing with Deep Learning GOLDEN 2019🎥][stanfordnlp2019] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Oxford Natural Language Processing with Deep Learning 2017🎥][oxfordnlp] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Michigan Introduction to NLP🎥][michigannlp]  | ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[cs224u Natural Language Understanding 2019 🎥][stanfordnlu] |![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[CMU Neural Nets for NLP 2021🎥][cmunlp2021]|![Intermediate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[Jurafsky and Manning Introduction to Natural Language Processing🎥][jurafskynlp]| ![Introductory](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n\n### Must Read NLP Papers:\nIn this section, I list the most influential papers to help those who want to dig deeper into the research world of NLP catch up.\n\nPaper                    | Comment\n------------------------- | ---------------\n# TODO\n\n\n\n\n\n\n\n\n\n[manninginformationr]: https:\u002F\u002Fnlp.stanford.edu\u002FIR-book\u002Fpdf\u002Firbookprint.pdf\n[fsnlp]: 
https:\u002F\u002Fgithub.com\u002Fshivamms\u002Fbooks\u002Fblob\u002Fmaster\u002Fnlp\u002FFoundations%20of%20Statistical%20Natural%20Language%20Processing%20-%20Christopher%20D.%20Manning.pdf\n[jurafskybook]: https:\u002F\u002Fweb.stanford.edu\u002F~jurafsky\u002Fslp3\u002F\n[jurafskynlp]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zQ6gzQ5YZ8o&list=PLoROMvodv4rOFZnDyrlW3-nI7tMLtmiJZ\n[cmunlp2021]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vnx6M7N-ggs&list=PL8PYTP1V4I8AkaHEJ7lOOrlex-pcxS-XV\n[stanfordnlu]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=tZ_Jrc_nRJY&list=PLoROMvodv4rObpMCir6rNNUlFAn56Js20\n[michigannlp]:https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=n25JjoixM3I&list=PLLssT5z_DsK8BdawOVCCaTCO99Ya58ryR \n[oxfordnlp]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RP3tZFcC2e8&list=PL613dYIGMXoZBtZhbyiBqb0QtgK6oJbpm\n[courseraRL]: https:\u002F\u002Fwww.coursera.org\u002Fspecializations\u002Freinforcement-learning\n[sergie2020rl]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=JHrlF10v2Og&list=PL_iWQOsE6TfURIIhCrlt-wj9ByIVpbfGc\n[cs885]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLdAoL1zKcqTXFJniO3Tqqn6xMBBL07EDc\n[ucb2018rl]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ue9aS17d5iI&list=PLkFD6_40KJIxJMR-j5A1mkxK26gh_qg37&index=2\n[cs330]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=0rZtSwNOTQo&list=PLoROMvodv4rMC6zfYmnD7UG3LVvwaITY5\n[cs234]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u\n[dsIntrodu]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=2pWv7GOvuf0&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ\n[rlbook]: http:\u002F\u002Fincompleteideas.net\u002Fbook\u002FRLbook2020.pdf\n[Ian]: https:\u002F\u002Fgithub.com\u002Fjanishar\u002Fmit-deep-learning-book-pdf\u002Fblob\u002Fmaster\u002Fcomplete-book-pdf\u002FIan%20Goodfellow%2C%20Yoshua%20Bengio%2C%20Aaron%20Courville%20-%20Deep%20Learning%20(2017%2C%20MIT).pdf\n[fast2]: https:\u002F\u002Fcourse19.fast.ai\u002Fpart2\n[fast1]: 
https:\u002F\u002Fcourse.fast.ai\u002F\n[abdeeladv]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=V9Roouqfu-M&list=PLwRJQ4m4UJjPiJP3691u-qWwPGVKzSlNP\n[durham]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=s2uXPz3wyCk&list=PLMsTLcO6etti_SObSLvk9ZNvoS_0yia57\n[deepbookexp]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vi7lACKOUao&list=PLsXu9MHQGs8df5A4PzQGw-kfviylC-R9b\n[hugodeep]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=SGZ6BttHMPw&list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH\n[jeoff]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=cbeTc-Urqak&list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9\n[DeepPy]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=0bMe_vCZo30&list=PLLHTzKZzVU9eaEyErdV26ikyolxOsz6mq\n[berkley2020]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Va8WWRfw7Og&list=PLZSO_6-bSqHQHBCoGaObUljoXAyyqhpFW\n[ucladvrein]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=iOh7QUZGyiU&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs\n[alideep]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fyAZszlPphs&list=PLehuLRPyt1Hyi78UOkMPWCGRxGcA9NVOE\n[stanfordnlp2019]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8rXD5-xhemo&list=PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z\n[oxforddeep]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=PlhFWT7vAEw&list=RDQMa66mIb9tImc&start_radio=1\n[stanfcnn]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vT1JzLTH4G4&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv\n[cmudeep]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=0Oqpax2Q2hc&list=PLp-0K3kfddPzCnS4CqKphh-zT3aDwybDe\n[fau]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=p-_Stl0t3kU&list=PLpOGQvPCDQzvgpD3S0vTy7bJe2pf_yJFj\n[18standeep]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=PySo_6S4ZAg&list=PLoROMvodv4rOABXSygHTsbvUz4G_YQhOb\n[talkie]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vFYkyk_GmWM&list=PLhb1t0L7sKy2q7on_7dpgOACs3qpNbfkR&index=2\n[ucl2020]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=7R52wiUgxZI&list=PLqYmG7hTraZCDxZ44o4p3N5Anz3lLRVZF\n[boyd]: 
https:\u002F\u002Fweb.stanford.edu\u002F~boyd\u002Fcvxbook\u002Fbv_cvxbook.pdf\n[cmuopti]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Di9f47LAzHQ&list=PLRPU00LaonXQ27RBcq6jFJnyIbGw5azOI\n[cmuadvopti]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=yBO4E1FARaA&list=PLjTcdlvIS6cjdA8WVXNIk56X_SjICxt0d\n[stanfordopti]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=McLq1hEq3UY&list=PL3940DD956CDF0622\n[calcbok]: http:\u002F\u002Findex-of.co.uk\u002FMathematics\u002FCalculus%20-%20J.%20Stewart.pdf\n[princeton]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=uDByROsGzuk&list=PLGqzsq0erqU7h6_bpE-CgJp4iX5aRju28\n[multi07]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=PxCxlsl_YwY&list=PL4C4C8A7D06566F38\n[strangcalc]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=X9t-u87df3o&list=PLBE9407EA64E2C318\n[single07]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=7K1sB05pE0A&list=PL590CCC2BC5AF3BC1\n[matrixmethods]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Cx5Z-OslNWE&list=PLUl4u3cNGP63oMNUHXqIUcrkS2PivhN3k\n[bluecal]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WUvTyaaNkzM&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x\n[probBook]: http:\u002F\u002Fwww.seyedkalali.com\u002Fwp-content\u002Fuploads\u002F2016\u002F11\u002FA-First-Course-in-Probability-8th-ed.-Sheldon-Ross.pdf\n[stanfordprobgraph]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GqMzbbaN6T4&list=PLzERW_Obpmv-_TkPEmCyzaJUGHtl7S01i\n[cmuprob]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=oqvdH_8lmCA&list=PLoZgVqqHOumTqxIhcdcpOAJOOimrRCGZn\n[mitprob18]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=1uW3qMFA9Ho&list=PLUl4u3cNGP60hI9ATjSFgLZpbNJ7myAg6\n[mitprob11]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=j9WZyLZCBzs&list=PLUl4u3cNGP61MdtwGTqZA0MreSaDybji8\n[harvard]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=KbB0FjPg0mw&list=PL2SOU6wwxB0uwwH80KTQ6ht66KWxbzTIo\n[MMLLA]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=T73ldK46JqE&list=PLiiljHvN6z1_o1ztXTKWPrShrMrBLo5P3\n[3blue]: 
https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab\n[gilbertStrang]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QVKj3LADCnA&list=PL49CF3715CB9EF31D\n[Friedberg]: https:\u002F\u002Fwww.academia.edu\u002F43200796\u002FLinear_Algebra\n[mmlbook]: https:\u002F\u002Fmml-book.github.io\u002Fbook\u002Fmml-book.pdf\n[James_Hamblin]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=HAoL5fPmgrw&list=PLNr8B4XHL5kGDHOrU4IeI6QNuZHur4F86\n[keras]: https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-python\n","\u003Cdiv align=\"center\">\n\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fahmedbahaaeldin_From-0-to-Research-Scientist-resources-guide_readme_311e4c739223.png\" width=\"25%\"> \n    \n\n\n\n  \n  **\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fahmedbahaaeldin_From-0-to-Research-Scientist-resources-guide_readme_026c64708ff4.gif\" width=\"29px\">从零开始成为研究科学家的完整资源指南。\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fahmedbahaaeldin_From-0-to-Research-Scientist-resources-guide_readme_26df447c41af.gif\" width=\"29px\">**\n  \n  \n  ![完整指南](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFullAI-Guide-brightgreen.svg)\n  ![版本 0.0.1](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVersion-0.0.1-blue.svg)\n\u003C\u002Fdiv>\n\n## 指南说明\n本指南专为具备基础编程知识或计算机科学背景、对深度学习和自然语言处理领域感兴趣并希望成为研究科学家的人士设计。\n\n你可以选择自底向上或自顶向下两种学习方式，这两种方法都行之有效，关键在于找到最适合自己的路径。如果你不介意先深入学习大量数学概念而不急于应用，那么就采用自底向上的方式；如果你想先动手实践，再逐步深入理论，则可以选择自顶向下。\n\n## 目录：\n- [数学基础](#数学基础)\n   - [线性代数](#线性代数) \n   - [概率论](#概率) \n   - [微积分](#微积分)\n   - [最优化理论](#优化理论)\n- [机器学习](#机器学习)\n- [深度学习](#深度学习)\n- [强化学习](#强化学习)\n- [自然语言处理](#自然语言处理)\n\n## 数学基础：\n数学基础部分适用于人工智能的所有分支，如机器学习、强化学习、计算机视觉等。人工智能高度依赖数学理论，因此扎实的数学基础至关重要。\n\n### 线性代数 \n\n\u003Cdetails>\n  
\u003Csummary>:infinity:\u003C\u002Fsummary>\n  \n\u003C!--START_SECTION:activity-->  \n\n  这一数学分支对于理解神经网络的工作机制至关重要，而神经网络正是当今前沿自然语言处理方法中的主流技术。\n\n资源                    | 难度     | 相关性 \n------------------------- | --------------- | -------------------------------\n[MIT 吉尔伯特·斯特兰格 2005 年线性代数 🎥][gilbertStrang] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Computer+Vision&color=ff0101)\n[弗里德伯格《线性代数》第四版 📘][Friedberg] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[《机器学习数学》第二章 📘][mmlbook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Deep+Learning) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Machine+Learning+Algorithms&color=000000)\n[詹姆斯·汉布林精彩讲座系列 🎥][James_Hamblin] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[3Blue1Brown《线性代数的本质》 🎥][3blue] | \u003Cdiv 
class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Machine+Learning+Algorithms&color=000000) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[《机器学习数学》专项课程：线性代数 🎥][MMLLA] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[吉尔伯特·斯特兰格线性代数矩阵方法更新版 🎥][matrixmethods] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>|  ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n  \u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n### 概率\n\n\u003Cdetails>\n  \u003Csummary>:atom: \u003C\u002Fsummary>\n  \n\u003C!--START_SECTION:activity-->  \n\n大多数自然语言处理和机器学习算法都基于概率论。因此，这一分支对于理解传统方法的工作原理至关重要。  \n资源                    | 难度     | 相关性 \n------------------------- | --------------- | -------------------------------\n[哈佛大学乔·布利茨斯坦的概率与统计课程 🎥][harvard] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Natural+Language+Processing&color=ff69b4) \n[MIT 
2011年概率课程讲座视频 🎥][mitprob11] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Natural+Language+Processing&color=ff69b4) \n[MIT 2018年概率课程短视频 更新版！ 🎥][mitprob18] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Natural+Language+Processing&color=ff69b4) \n[《机器学习中的数学》第6章 📘][mmlbook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Natural+Language+Processing&color=ff69b4) \n [卡内基梅隆大学高级概率图模型课程 🎥][cmuprob] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Natural+Language+Processing&color=ff69b4) 
\n[斯坦福大学达芙妮教授的高级概率图模型课程 🎥][stanfordprobgraph] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine+Learning+Algorithms&color=000000) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Deep+Learning) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Natural+Language+Processing&color=ff69b4) \n [罗斯的《概率论初步》书籍 📘][probBook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) \n [哈佛大学乔·布利茨斯坦教授的优秀概率书籍 📘][harvBook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) \n  \u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n[harvBook]: https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1VmkAAGOYCTORq1wxSQqy255qLJjTNvBI\u002Fview\n\n### 微积分\n\n\u003Cdetails>\n  \u003Csummary>:triangular_ruler:\u003C\u002Fsummary>\n  \n\n\n\u003C!--START_SECTION:activity--> \n资源                    | 难度     | 相关性 \n------------------------- | --------------- | --------------------------\n[3Blue1Brown的《微积分的本质》🎥][bluecal]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>|![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Deep+Learning)\n[MIT 2007年单变量微积分🎥][single07]| \u003Cdiv 
class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>|![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Deep+Learning)\n[斯特兰格的微积分概览🎥][strangcalc]|\u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[MIT 2007年多变量微积分🎥][multi07]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[普林斯顿大学2013年多变量微积分🎥][princeton]|\u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning)\n[斯图尔特的微积分书籍 📘][calcbok]|\u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Machine-Learning-Algorithms&color=000000) \n[《机器学习中的数学》第5章 📘][mmlbook] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003Cspan>☆\u003C\u002Fspan>\u003C\u002Fdiv>| ![75%](https:\u002F\u002Fprogress-bar.dev\u002F75\u002F?title=Deep+Learning) 
![50%](https:\u002F\u002Fprogress-bar.dev\u002F50\u002F?title=Machine-Learning-Algorithms&color=000000) \n\n\n \u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n### 优化理论\n \n\u003Cdetails>\n  \u003Csummary> 📉 \u003C\u002Fsummary>\n  \n\n\u003C!--START_SECTION:activity--> \n资源                    | 难度     | 相关性 \n------------------------- | --------------- | --------------------------\n[CMU 2018年优化课程🎥][cmuopti]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) ![25%](https:\u002F\u002Fprogress-bar.dev\u002F25\u002F?title=Machine-Learning-Algorithms&color=000000) \n[CMU高级优化课程🎥][cmuadvopti]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) \n[斯坦福著名优化课程 🎥][stanfordopti]| \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) \n[博伊德的凸优化书籍 📕][boyd] | \u003Cdiv class=\"star-ratings-top\">\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003Cspan>★\u003C\u002Fspan>\u003C\u002Fdiv>| ![100%](https:\u002F\u002Fprogress-bar.dev\u002F100\u002F?title=Deep+Learning) \n \u003C!--END_SECTION:activity-->\n\n\u003C\u002Fdetails>\n\n--------------------------------------------------------------------------------\n\n## 机器学习\n\n机器学习可以被视为统计模型的一种高端称呼，其主要目标是从数据中学习，并应用于多种场景。强烈建议在开展研究之前先掌握这些统计技术，因为大多数研究都源于对现有算法的启发。\n\n资源               
     | 难度级别 \n------------------------- | ---------------\n[机器学习数学 第2部分 📚][fullmmlbook] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[模式识别与机器学习📚][patternML]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[统计学习要素 📚][eesl]|![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[统计学习导论  📚][introSL]|![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[机器学习：概率视角 📚][murphyml]|![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Berkeley CS188 人工智能导论课程 🎥][cs188]|![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[MIT 经典人工智能课程，由 Patrick H. Winston 教授讲授 🎥][mitai]|![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[斯坦福大学 2018 年人工智能课程 🎥][stai18]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[加州理工学院 数据学习课程 🎥][caltldc]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[CMU 机器学习 2015 10-601 🎥][cmuml2015]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[CMU 统计机器学习 10-702 🎥][cmu702]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[信息论、模式识别 ML 课程 2012 🎥][PR2012]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[多伦多大学 大规模机器学习 2015 🎥][toronto2015]|![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[MIT 机器学习的算法方面 🎥][Mitaspects]|![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[MIT 课程 9.520 - 统计学习理论及其应用，2015 年秋季 🎥][mitfallslt]|![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[英属哥伦比亚大学 2013 年本科机器学习课程 🎥][ubc2013]|![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) 
\n\n\n-------------------------------------------------------------------------------- \n\n[murphyml]: http:\u002F\u002Fnoiselab.ucsd.edu\u002FECE228\u002FMurphy_Machine_Learning.pdf\n[introSL]: https:\u002F\u002Fwww.ime.unicamp.br\u002F~dias\u002FIntoduction%20to%20Statistical%20Learning.pdf\n[patternML]:http:\u002F\u002Fusers.isr.ist.utl.pt\u002F~wurmd\u002FLivros\u002Fschool\u002FBishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf\n[eesl]: https:\u002F\u002Fweb.stanford.edu\u002F~hastie\u002FPapers\u002FESLII.pdf\n[fullmmlbook]: https:\u002F\u002Fmml-book.com\u002F\n[ubc2013]:https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=w2OtwL5T1ow&list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6\n[mitfallslt]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLyGKBDfnk-iDj3FBd0Avr_dLbrU8VG73O\n[Mitaspects]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLB3sDpSRdrOvI1hYXNsa6Lety7K8FhPpx\n[toronto2015]:https:\u002F\u002Fvideo-archive.fields.utoronto.ca\u002Fview\u002F2800\n[PR2012]: http:\u002F\u002Fvideolectures.net\u002Fcourse_information_theory_pattern_recognition\u002F\n[cmu702]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLjbUi5mgii6BWEUZf7He6nowWvGne_Y8r\n[cmuml2015]: http:\u002F\u002Fwww.cs.cmu.edu\u002F~ninamf\u002Fcourses\u002F601sp15\u002Flectures.shtml\n[caltldc]: https:\u002F\u002Fwork.caltech.edu\u002Flectures.html\n[cs188]: https:\u002F\u002Finst.eecs.berkeley.edu\u002F~cs188\u002Ffa18\u002F\n[mitai]: https:\u002F\u002Focw.mit.edu\u002Fcourses\u002Felectrical-engineering-and-computer-science\u002F6-034-artificial-intelligence-fall-2010\u002Flecture-videos\u002Flecture-1-introduction-and-scope\u002F\n[stai18]: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rO1NB9TD4iUZ3qghGEGtqNX\n\n## 深度学习\n\n人工智能与计算机科学交叉领域中的重大突破之一。它带来了无数技术进步，被认为是实现人工智能的标准方法。\n\n资源                    | 难度级别 \n------------------------- | ---------------\n[伊恩·古德费洛的深度学习书籍 📚][Ian] 
|![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[UCL DeepMind深度学习课程 🎥][ucl2020] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[深度学习先驱的高级讲座 🎥][talkie] | ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[斯坦福大学2018年秋季深度学习课程 🎥][18standeep] | ![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[FAU 2020年深度学习系列课程 🎥][fau] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[CMU 2020年深度学习课程 🎥][cmudeep] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg) \n[斯坦福大学2017年卷积神经网络课程 🎥][stanfcnn] | ![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[牛津大学2015年深度学习精彩讲座 🎥][oxforddeep] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[斯坦福大学2019年深度学习自然语言处理课程 🎥][stanfordnlp2019] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[从概率与统计角度讲解深度学习 🎥][alideep] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[UCL 2017年高级深度学习课程 + 强化学习 🎥][ucladvrein] | ![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[UC伯克利2020年深度学习课程 🎥][berkley2020] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[NYU Pytorch动手实践深度学习课程 🎥][DeepPy] | ![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[杰弗里·辛顿的经典旧课程（已过时）🎥][jeoff] | ![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[彼得·阿贝尔的深度无监督学习课程 🎥][abdeeladv] | ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[雨果·拉罗谢尔深度学习系列课程 🎥][hugodeep] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[深度学习书籍讲解系列 🎥][deepbookexp] | 
![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[杜伦大学深度学习入门课程 🎥][Durham] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Fast.ai 实用深度学习课程 🎥][fast1] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Fast.ai 从基础开始的深度学习课程 🎥][fast2] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[Python深度学习（Keras作者）📚][keras] | ![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n\n--------------------------------------------------------------------------------  \n\n## 强化学习 \n\n它是人工智能的一个子领域，专注于通过观察和奖励来学习。\n\n资源                    | 难度级别 \n------------------------- | ---------------\n[强化学习入门书籍 📚][rlbook] | ![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[戴维·西尔弗DeepMind入门讲座 🎥][dsIntrodu] | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[斯坦福大学2018年CS234强化学习课程 🎥][cs234] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[斯坦福大学2019年CS330元学习高级课程 🎥][cs330] | ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[谢尔盖·莱文2018年UC伯克利讲座视频 🎥][ucb2018rl] | ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[滑铁卢大学CS885强化学习课程 🎥][cs885] | ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[谢尔盖·莱文2020年深度强化学习课程 🎥][sergie2020rl] | ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[Coursera黄金强化学习专项课程 🎥（虽然不是免费的，但可以申请经济援助）][courseraRL] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n\n--------------------------------------------------------------------------------  \n\n## 自然语言处理\n\n它是人工智能的一个子领域，专注于人类语言的理解与处理。\n\n资源                    | 难度级别 \n------------------------- | 
---------------\n[朱拉夫斯基《语音与语言处理》📚][jurafskybook]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[克里斯托弗·曼宁《统计自然语言处理基础》📚][fsnlp]| ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[克里斯托弗·曼宁《信息检索导论》📚][manninginformationr]| ![高级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Advanced-red.svg)\n[斯坦福大学2019年CS224n深度学习自然语言处理黄金课程🎥][stanfordnlp2019] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[牛津大学2017年深度学习自然语言处理课程🎥][oxfordnlp] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[密歇根大学自然语言处理入门课程🎥][michigannlp]  | ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n[斯坦福大学2019年CS224u自然语言理解课程 🎥][stanfordnlu] |![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[卡内基梅隆大学2021年NLP神经网络课程 🎥][cmunlp2021]|![中级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Intermediate-yellow.svg)\n[朱拉夫斯基和曼宁自然语言处理入门课程🎥][jurafskynlp]| ![入门级](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLevel-Introductory-brightgreen.svg)\n\n### 必读自然语言处理论文：\n在这一部分，我将列出那些对希望深入自然语言处理研究领域的人们最有影响力的论文，帮助他们快速了解该领域的前沿进展。\n论文                    | 评论\n------------------------- | ---------------\n\n# 待办事项\n\n\n\n\n\n\n\n\n\n[manninginformationr]: https:\u002F\u002Fnlp.stanford.edu\u002FIR-book\u002Fpdf\u002Firbookprint.pdf\n[fsnlp]: https:\u002F\u002Fgithub.com\u002Fshivamms\u002Fbooks\u002Fblob\u002Fmaster\u002Fnlp\u002FFoundations%20of%20Statistical%20Natural%20Language%20Processing%20-%20Christopher%20D.%20Manning.pdf\n[jurafskybook]: https:\u002F\u002Fweb.stanford.edu\u002F~jurafsky\u002Fslp3\u002F\n[jurafskynlp]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zQ6gzQ5YZ8o&list=PLoROMvodv4rOFZnDyrlW3-nI7tMLtmiJZ\n[cmunlp2021]: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vnx6M7N-ggs&list=PL8PYTP1V4I8AkaHEJ7lOOrlex-pcxS-XV\n[stanfordnlu]: 
https://www.youtube.com/watch?v=tZ_Jrc_nRJY&list=PLoROMvodv4rObpMCir6rNNUlFAn56Js20
[michigannlp]: https://www.youtube.com/watch?v=n25JjoixM3I&list=PLLssT5z_DsK8BdawOVCCaTCO99Ya58ryR
[oxfordnlp]: https://www.youtube.com/watch?v=RP3tZFcC2e8&list=PL613dYIGMXoZBtZhbyiBqb0QtgK6oJbpm
[courseraRL]: https://www.coursera.org/specializations/reinforcement-learning
[sergie2020rl]: https://www.youtube.com/watch?v=JHrlF10v2Og&list=PL_iWQOsE6TfURIIhCrlt-wj9ByIVpbfGc
[cs885]: https://www.youtube.com/playlist?list=PLdAoL1zKcqTXFJniO3Tqqn6xMBBL07EDc
[ucb2018rl]: https://www.youtube.com/watch?v=ue9aS17d5iI&list=PLkFD6_40KJIxJMR-j5A1mkxK26gh_qg37&index=2
[cs330]: https://www.youtube.com/watch?v=0rZtSwNOTQo&list=PLoROMvodv4rMC6zfYmnD7UG3LVvwaITY5
[cs234]: https://www.youtube.com/playlist?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u
[dsIntrodu]: https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ
[rlbook]: http://incompleteideas.net/book/RLbook2020.pdf
[Ian]: https://github.com/janishar/mit-deep-learning-book-pdf/blob/master/complete-book-pdf/Ian%20Goodfellow%2C%20Yoshua%20Bengio%2C%20Aaron%20Courville%20-%20Deep%20Learning%20(2017%2C%20MIT).pdf
[fast2]: https://course19.fast.ai/part2
[fast1]: https://course.fast.ai/
[abdeeladv]: https://www.youtube.com/watch?v=V9Roouqfu-M&list=PLwRJQ4m4UJjPiJP3691u-qWwPGVKzSlNP
[durham]: https://www.youtube.com/watch?v=s2uXPz3wyCk&list=PLMsTLcO6etti_SObSLvk9ZNvoS_0yia57
[deepbookexp]: https://www.youtube.com/watch?v=vi7lACKOUao&list=PLsXu9MHQGs8df5A4PzQGw-kfviylC-R9b
[hugodeep]: https://www.youtube.com/watch?v=SGZ6BttHMPw&list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH
[jeoff]: https://www.youtube.com/watch?v=cbeTc-Urqak&list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9
[DeepPy]: https://www.youtube.com/watch?v=0bMe_vCZo30&list=PLLHTzKZzVU9eaEyErdV26ikyolxOsz6mq
[berkley2020]: https://www.youtube.com/watch?v=Va8WWRfw7Og&list=PLZSO_6-bSqHQHBCoGaObUljoXAyyqhpFW
[ucladvrein]: https://www.youtube.com/watch?v=iOh7QUZGyiU&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs
[alideep]: https://www.youtube.com/watch?v=fyAZszlPphs&list=PLehuLRPyt1Hyi78UOkMPWCGRxGcA9NVOE
[stanfordnlp2019]: https://www.youtube.com/watch?v=8rXD5-xhemo&list=PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z
[oxforddeep]: https://www.youtube.com/watch?v=PlhFWT7vAEw&list=RDQMa66mIb9tImc&start_radio=1
[stanfcnn]: https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv
[cmudeep]: https://www.youtube.com/watch?v=0Oqpax2Q2hc&list=PLp-0K3kfddPzCnS4CqKphh-zT3aDwybDe
[fau]: https://www.youtube.com/watch?v=p-_Stl0t3kU&list=PLpOGQvPCDQzvgpD3S0vTy7bJe2pf_yJFj
[18standeep]: https://www.youtube.com/watch?v=PySo_6S4ZAg&list=PLoROMvodv4rOABXSygHTsbvUz4G_YQhOb
[talkie]: https://www.youtube.com/watch?v=vFYkyk_GmWM&list=PLhb1t0L7sKy2q7on_7dpgOACs3qpNbfkR&index=2
[ucl2020]: https://www.youtube.com/watch?v=7R52wiUgxZI&list=PLqYmG7hTraZCDxZ44o4p3N5Anz3lLRVZF
[boyd]: https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf
[cmuopti]: https://www.youtube.com/watch?v=Di9f47LAzHQ&list=PLRPU00LaonXQ27RBcq6jFJnyIbGw5azOI
[cmuadvopti]: https://www.youtube.com/watch?v=yBO4E1FARaA&list=PLjTcdlvIS6cjdA8WVXNIk56X_SjICxt0d
[stanfordopti]: https://www.youtube.com/watch?v=McLq1hEq3UY&list=PL3940DD956CDF0622
[calcbok]: http://index-of.co.uk/Mathematics/Calculus%20-%20J.%20Stewart.pdf
[princeton]: https://www.youtube.com/watch?v=uDByROsGzuk&list=PLGqzsq0erqU7h6_bpE-CgJp4iX5aRju28
[multi07]: https://www.youtube.com/watch?v=PxCxlsl_YwY&list=PL4C4C8A7D06566F38
[strangcalc]: https://www.youtube.com/watch?v=X9t-u87df3o&list=PLBE9407EA64E2C318
[single07]: https://www.youtube.com/watch?v=7K1sB05pE0A&list=PL590CCC2BC5AF3BC1
[matrixmethods]: https://www.youtube.com/watch?v=Cx5Z-OslNWE&list=PLUl4u3cNGP63oMNUHXqIUcrkS2PivhN3k
[bluecal]: https://www.youtube.com/watch?v=WUvTyaaNkzM&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x
[probBook]: http://www.seyedkalali.com/wp-content/uploads/2016/11/A-First-Course-in-Probability-8th-ed.-Sheldon-Ross.pdf
[stanfordprobgraph]: https://www.youtube.com/watch?v=GqMzbbaN6T4&list=PLzERW_Obpmv-_TkPEmCyzaJUGHtl7S01i
[cmuprob]: https://www.youtube.com/watch?v=oqvdH_8lmCA&list=PLoZgVqqHOumTqxIhcdcpOAJOOimrRCGZn
[mitprob18]: https://www.youtube.com/watch?v=1uW3qMFA9Ho&list=PLUl4u3cNGP60hI9ATjSFgLZpbNJ7myAg6
[mitprob11]: https://www.youtube.com/watch?v=j9WZyLZCBzs&list=PLUl4u3cNGP61MdtwGTqZA0MreSaDybji8
[harvard]: https://www.youtube.com/watch?v=KbB0FjPg0mw&list=PL2SOU6wwxB0uwwH80KTQ6ht66KWxbzTIo
[MMLLA]: https://www.youtube.com/watch?v=T73ldK46JqE&list=PLiiljHvN6z1_o1ztXTKWPrShrMrBLo5P3
[3blue]: https://www.youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab
[gilbertStrang]: https://www.youtube.com/watch?v=QVKj3LADCnA&list=PL49CF3715CB9EF31D
[Friedberg]: https://www.academia.edu/43200796/Linear_Algebra
[mmlbook]: https://mml-book.github.io/book/mml-book.pdf
[James_Hamblin]: https://www.youtube.com/watch?v=HAoL5fPmgrw&list=PLNr8B4XHL5kGDHOrU4IeI6QNuZHur4F86
[keras]: https://www.manning.com/books/deep-learning-with-python
# From-0-to-Research-Scientist Quick-Start Guide

This guide helps developers with basic programming knowledge or a computer-science background grow, step by step, into research scientists focused on **deep learning** and **natural language processing (NLP)**.

> **Note**: This project is a **learning-resource roadmap**, not an installable software library, so there is nothing to `pip install`. Depending on your background, choose either the "bottom-up" path (mathematics first) or the "top-down" path (hands-on practice first).

## Environment Setup

Before you start, make sure your development environment meets the following requirements so you can run the code examples and experiments recommended by the resources.

### System Requirements
- **Operating system**: Linux (Ubuntu 20.04+ recommended), macOS, or Windows (WSL2 recommended)
- **Hardware**:
  - Memory: 16 GB or more recommended
  - GPU: an NVIDIA GPU (8 GB+ VRAM) is recommended to accelerate deep-learning training
- **Programming language**: Python 3.8+

### Prerequisites
Create a dedicated virtual environment to manage the libraries you will use along the way.

```bash
# Create a virtual environment
python3 -m venv ai-research-env

# Activate it
# Linux/macOS:
source ai-research-env/bin/activate
# Windows:
ai-research-env\Scripts\activate

# Upgrade the package manager
pip install --upgrade pip
```

### Core Libraries
To reproduce most deep-learning and NLP tutorials, install the following core libraries first. Users in mainland China can use the Tsinghua mirror to speed up downloads.

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install tensorflow
pip install scikit-learn matplotlib jupyter notebook pandas numpy
pip install transformers datasets accelerate -i https://pypi.tuna.tsinghua.edu.cn/simple
```

## Learning Paths and How to Use the Resources

The heart of this guide is a resource list organized by field. Follow the links in the original repository to reach the course videos, e-books, and lecture notes.

### 1. Choose a Learning Strategy
- **Bottom-up**: for learners who prefer to master the mathematical concepts before applying them.
  - Order: math foundations -> machine learning -> deep learning -> NLP
- **Top-down**: for learners who prefer to write code first and backfill theory as questions arise.
  - Order: hands-on deep learning/NLP -> back to the underlying mathematics

### 2. Core Module Resource Index

#### Mathematical Foundations
The bedrock of AI; every branch (ML, RL, CV, NLP) relies on it.

| Field | Recommended resources (follow the links in the repo) | Difficulty | Where it applies |
| :--- | :--- | :--- | :--- |
| **Linear algebra** | MIT Gilbert Strang 2005 (video) | ⭐⭐ | Deep learning (100%), computer vision |
| | 3Blue1Brown "Essence" series (video) | ⭐ | Intuition for deep-learning concepts |
| | Mathematics for Machine Learning (book, Ch. 2) | ⭐⭐⭐ | Foundations of ML algorithms |
| **Probability** | Joe Blitzstein, Harvard (video/book) | ⭐⭐⭐⭐⭐ | NLP (100%), classical machine learning |
| | MIT probability course (video) | ⭐⭐⭐ | NLP and classical algorithms |
| **Calculus** | 3Blue1Brown Essence of Calculus | ⭐⭐ | Understanding gradients in deep learning |
| | MIT Single/Multivariable Calculus | ⭐⭐⭐⭐ | Core deep-learning derivations |
| **Optimization** | Stanford optimization course (video) | ⭐⭐⭐⭐⭐ | How deep-learning training works |
| | Boyd, Convex Optimization (book) | ⭐⭐⭐⭐⭐ | Advanced optimization theory |

#### Machine Learning
A family of statistical models; master it before starting research.

- **Introductory**: *Introduction to Statistical Learning* (book), *Berkeley CS188* (course)
- **Intermediate**: *Pattern Recognition and Machine Learning* (book), *Mathematics for Machine Learning*, Part 2
- **Advanced**: *Elements of Statistical Learning*, *Machine Learning: A Probabilistic Perspective*

#### Deep Learning and Natural Language Processing
Once the foundations above are in place, move into the core research areas.
- Focus on neural-network mechanics, the Transformer architecture, and current SOTA models.
- Use the `transformers` library installed above for hands-on work with Hugging Face models.

## A Minimal Usage Example

Because this project is a resource guide, "using" it means learning from the resources and writing code. Below is the simplest hands-on example from the NLP track, useful for verifying your environment and getting a feel for a modern NLP workflow.

### Example: Sentiment Analysis with a Pretrained Model

```python
from transformers import pipeline

# Initialize a sentiment-analysis pipeline (downloads a default model automatically)
classifier = pipeline("sentiment-analysis")

# Test input
text = "I love becoming a research scientist through this guide!"

# Run inference
result = classifier(text)

print(f"Text: {text}")
print(f"Prediction: {result[0]['label']} (Confidence: {result[0]['score']:.4f})")
```

**Expected output** (the exact score may vary with the model version):
```text
Text: I love becoming a research scientist through this guide!
Prediction: POSITIVE (Confidence: 0.9998)
```

### Next Steps
1. Visit the original GitHub repository and open the "Contents" section you are interested in.
2. Build a study plan around the recommended books or video courses.
3. Reproduce the mathematical derivations and algorithm code from the courses in a local Jupyter notebook.

## A Typical Usage Scenario

Li Ming, a third-year computer-science undergraduate, plans to pursue a graduate degree in artificial intelligence. He wants to grow from a basic programmer into a researcher with solid theoretical foundations, focusing on natural language processing.

### Without From-0-to-Research-Scientist-resources-guide
- **Chaotic learning path**: faced with a flood of math textbooks and online courses, he cannot tell which are essential for deep learning and which are outdated, so he wastes time on irrelevant material.
- **Theory divorced from practice**: he grinds through dense linear-algebra monographs without seeing how the matrix operations map onto neural-network mechanics, which makes study dry and hard to sustain.
- **Mismatched difficulty**: randomly chosen tutorials are either too shallow to reach a research level or throw out advanced formulas with no preparation, causing serious frustration.
- **No sense of direction**: he does not know whether he should tackle the math "bottom-up" first or start "top-down" with hands-on practice, and has no personalized guidance.

### With From-0-to-Research-Scientist-resources-guide
- **A clear, efficient path**: the guide's relevance indicators for NLP and deep learning point him straight to highly rated core courses such as MIT's Gilbert Strang lectures, cutting out the noise.
- **Theory tied to practice**: following the guide, he links each linear-algebra concept to neural-network mechanics and sees how it is used in SOTA models, which greatly boosts his motivation.
- **A sensible difficulty ladder**: using the guide's star ratings, he starts with 3Blue1Brown's intuitive videos and moves gradually to professional textbooks, building a smooth, solid learning curve.
- **A strategy that fits him**: knowing he prefers to start hands-on, he picks the guide's "top-down" mode, builds intuition quickly, and then backfills the mathematical foundations.

By providing a structured, graded, goal-oriented resource map, From-0-to-Research-Scientist-resources-guide turns an otherwise unstructured self-study process into an efficient route toward becoming a research scientist.
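As a first notebook exercise in the spirit of step 3 above, the three math pillars from the resource table can be combined in a dozen lines: linear algebra supplies the model `X @ w`, calculus supplies the gradient of the squared loss, and optimization supplies the descent loop. This is a minimal sketch on synthetic data of my own choosing, not code from any of the listed courses.

```python
import numpy as np

# Fit y = X w by gradient descent on the mean squared error.
# Gradient of (1/n)||Xw - y||^2 with respect to w is (2/n) X^T (Xw - y).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # synthetic design matrix
w_true = np.array([2.0, -1.0, 0.5])    # ground-truth weights (illustrative)
y = X @ w_true                          # noiseless targets

w = np.zeros(3)                         # start from zero
lr = 0.1                                # step size
for _ in range(500):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)
    w -= lr * grad                      # descend along the negative gradient

print(w)  # should approach w_true
```

Reproducing this by hand, and then deriving the gradient yourself, is exactly the kind of bridge between the calculus and optimization entries in the table that the guide recommends building.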
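The probability material in the table pays off directly in classical NLP before you ever reach for `transformers`. As a contrast to the pretrained-pipeline example, here is a toy Naive Bayes sentiment classifier with Laplace smoothing; the four training sentences are invented for illustration, not taken from any dataset.

```python
import math
from collections import Counter

# Tiny hand-labeled corpus (illustrative only)
train = [
    ("i love this guide", "pos"),
    ("great clear lectures", "pos"),
    ("i hate confusing notes", "neg"),
    ("terrible messy material", "neg"),
]

# Word counts per class
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())

vocab = {w for c in counts.values() for w in c}

def log_prob(text, label):
    """log P(label) + sum_w log P(w | label), with Laplace smoothing."""
    c = counts[label]
    total = sum(c.values())
    lp = math.log(0.5)  # uniform class prior: two balanced classes
    for w in text.split():
        lp += math.log((c[w] + 1) / (total + len(vocab)))
    return lp

def classify(text):
    return max(("pos", "neg"), key=lambda label: log_prob(text, label))

print(classify("i love clear lectures"))  # → pos
```

Working in log space avoids numerical underflow from multiplying many small probabilities, a detail the probability courses in the table motivate well.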
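For the Transformer-architecture focus mentioned in the deep-learning section, the core operation is small enough to reproduce in a notebook. Below is a minimal NumPy sketch of scaled dot-product attention; it omits projections, masking, and multiple heads, and the toy Q/K/V values are chosen only to make the behavior easy to check.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    # Numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                 # weighted sum of values

Q = np.eye(3)                          # 3 toy queries
K = np.eye(3)                          # keys aligned with the queries
V = np.arange(9.0).reshape(3, 3)       # 3 value vectors
out = attention(Q, K, V)
print(out.shape)  # (3, 3): one output vector per query
```

A useful sanity check: with all-zero queries every score is 0, the softmax weights become uniform, and each output row is simply the mean of the value rows.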