[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-emilwallner--How-to-learn-Deep-Learning":3,"tool-emilwallner--How-to-learn-Deep-Learning":61},[4,18,28,37,45,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":24,"last_commit_at":25,"category_tags":26,"status":17},9989,"n8n","n8n-io\u002Fn8n","n8n 是一款面向技术团队的公平代码（fair-code）工作流自动化平台，旨在让用户在享受低代码快速构建便利的同时，保留编写自定义代码的灵活性。它主要解决了传统自动化工具要么过于封闭难以扩展、要么完全依赖手写代码效率低下的痛点，帮助用户轻松连接 400 多种应用与服务，实现复杂业务流程的自动化。\n\nn8n 特别适合开发者、工程师以及具备一定技术背景的业务人员使用。其核心亮点在于“按需编码”：既可以通过直观的可视化界面拖拽节点搭建流程，也能随时插入 JavaScript 或 Python 代码、调用 npm 包来处理复杂逻辑。此外，n8n 原生集成了基于 LangChain 的 AI 能力，支持用户利用自有数据和模型构建智能体工作流。在部署方面，n8n 提供极高的自由度，支持完全自托管以保障数据隐私和控制权，也提供云端服务选项。凭借活跃的社区生态和数百个现成模板，n8n 让构建强大且可控的自动化系统变得简单高效。",184740,2,"2026-04-19T23:22:26",[16,14,13,15,27],"插件",{"id":29,"name":30,"github_repo":31,"description_zh":32,"stars":33,"difficulty_score":10,"last_commit_at":34,"category_tags":35,"status":17},10095,"AutoGPT","Significant-Gravitas\u002FAutoGPT","AutoGPT 是一个旨在让每个人都能轻松使用和构建 AI 的强大平台，核心功能是帮助用户创建、部署和管理能够自动执行复杂任务的连续型 AI 智能体。它解决了传统 AI 应用中需要频繁人工干预、难以自动化长流程工作的痛点，让用户只需设定目标，AI 即可自主规划步骤、调用工具并持续运行直至完成任务。\n\n无论是开发者、研究人员，还是希望提升工作效率的普通用户，都能从 AutoGPT 
中受益。开发者可利用其低代码界面快速定制专属智能体；研究人员能基于开源架构探索多智能体协作机制；而非技术背景用户也可直接选用预置的智能体模板，立即投入实际工作场景。\n\nAutoGPT 的技术亮点在于其模块化“积木式”工作流设计——用户通过连接功能块即可构建复杂逻辑，每个块负责单一动作，灵活且易于调试。同时，平台支持本地自托管与云端部署两种模式，兼顾数据隐私与使用便捷性。配合完善的文档和一键安装脚本，即使是初次接触的用户也能在几分钟内启动自己的第一个 AI 智能体。AutoGPT 正致力于降低 AI 应用门槛，让人人都能成为 AI 的创造者与受益者。",183572,"2026-04-20T04:47:55",[13,36,27,14,15],"语言模型",{"id":38,"name":39,"github_repo":40,"description_zh":41,"stars":42,"difficulty_score":10,"last_commit_at":43,"category_tags":44,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":46,"name":47,"github_repo":48,"description_zh":49,"stars":50,"difficulty_score":24,"last_commit_at":51,"category_tags":52,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",161692,"2026-04-20T11:33:57",[14,13,36],{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":24,"last_commit_at":59,"category_tags":60,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 
是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":83,"forks":84,"last_commit_at":85,"license":86,"difficulty_score":87,"env_os":88,"env_gpu":89,"env_ram":88,"env_deps":90,"category_tags":101,"github_topics":102,"view_count":24,"oss_zip_url":82,"oss_zip_packed_at":82,"status":17,"created_at":108,"updated_at":109,"faqs":110,"releases":111},10215,"emilwallner\u002FHow-to-learn-Deep-Learning","How-to-learn-Deep-Learning","A top-down, practical guide to learn AI, Deep learning and Machine Learning.","How-to-learn-Deep-Learning 是一份专为零基础或转行人士设计的深度学习实战指南。它摒弃了传统“先理论后实践”的枯燥路径，采用独特的“自顶向下”教学法，引导学习者直接从高层框架入手，快速建立对人工智能的直观认知。\n\n这份资源主要解决了初学者面对海量理论无从下手、难以将知识转化为实际工程能力，以及因缺乏真实项目经验而求职受阻的痛点。它不仅提供了清晰的学习路线图，涵盖从 Python 基础、数据处理到模型部署的全流程，还特别强调了如何构建具有竞争力的作品集，帮助学习者避开仅依赖教程和玩具数据的误区，真正掌握解决现实问题的能力。\n\n该指南非常适合希望进入机器学习领域的开发者、自学者以及寻求职业转型的技术人员。其核心亮点在于务实的职业导向：建议初学者优先瞄准需求量大且更看重工程落地能力的“机器学习工程师”岗位，利用 FastAI 和 PyTorch 等现代工具在云 GPU 上快速实践。通过两个月的密集训练与后续的项目积累，用户能够建立起扎实的深度学习思维，为职业生涯打下坚实基础。","## Approach\nA practical, top-down approach, starting with high-level frameworks with a focus on Deep Learning.\n\n---\n\n**UPDATED VERSION:** 👉 Check out my 60-page guide, [No ML 
Degree](https:\u002F\u002Fwww.emilwallner.com\u002Fp\u002Fno-ml-degree), on how to land a machine learning job without a degree.\n\n## Getting started [2 months]\n\nThere are three main goals to get up to speed with deep learning: \n1) Get familiar with the tools you will be working with, e.g. Python, the command line and Jupyter notebooks\n2) Get used to the workflow, everything from finding the data to deploying a trained model\n3) Build a deep learning mindset, an intuition for how deep learning models behave and how to improve them\n\n- Spend a week on codecademy.com and learn Python syntax, the command line and Git. If you don't have any previous programming experience, it's good to spend a few months learning how to program. Otherwise, it's easy to become overwhelmed. \n- Spend one to two weeks using [Pandas](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=yzIMircGU5I&list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y) and [Scikit-learn](http:\u002F\u002Fscikit-learn.org\u002Fstable\u002F) on [Kaggle problems](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions?sortBy=grouped&group=general&page=1&pageSize=20&category=gettingStarted) using [Jupyter Notebook on Colab](https:\u002F\u002Fcolab.research.google.com\u002Fnotebooks\u002Fwelcome.ipynb), e.g. [Titanic](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Ftitanic), [House prices](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fhouse-prices-advanced-regression-techniques), and [Iris](https:\u002F\u002Fwww.kaggle.com\u002Fuciml\u002Firis). This gives you an overview of the machine learning mindset and workflow. \n- Spend one month implementing models on cloud GPUs. Start with [FastAI and PyTorch](http:\u002F\u002Fcourse.fast.ai\u002F). The FastAI community is the go-to place for people wanting to apply deep learning and share state-of-the-art techniques.\n\nOnce you have done this, you will know how to add value with ML. 
\n\n\n## Portfolio [3 - 12 months]\n\nThink of your portfolio as evidence to a potential employer that you can provide value for them.\n\nWhen you are looking for your first job, there are four main roles you can apply for \n1. Machine Learning Engineering, \n1. Applied Machine Learning Researcher \u002F Residencies, \n1. Machine Learning Research Scientist, and \n1. Software Engineering. \n\nA lot of the work related to machine learning is pure software engineering roles (category 4), e.g. scaling infrastructure, but that's out of scope for this article. \n\nIt's easiest to get a foot in the door if you aim for Machine Learning Engineering roles. There are a magnitude more ML engineering roles compared to category 2 & 3 roles, they require little to no theory, and they are less competitive. Most employers prefer scaling and leveraging stable implementations, often ~1 year old, instead of allocating scarce resources to implement SOTA papers, which are often time-consuming and seldom work well in practice.  \n\nOnce you can cover your bills and have a few years of experience, you are in a better position to learn theory and advance to category 2 & 3 roles. This is especially true if you are self-taught, you often have an edge against an average university graduate. In general, graduates have weak practical skills and strong theory skills. \n\n### Context\n\nYou'll have a mix of  3 - 10 technical and non-technical people looking at your portfolio, regardless of their background, you want to spark the following reactions: \n* the applicant has experience tackling our type of problems,\n* the applicant's work is easy to understand and well organized, and\n* the work was without a doubt 100% made by the applicant.\n\nMost ML learners end up with the same portfolio as everyone else. Portfolio items include things as MOOC participation, dog\u002Fcat classifiers, and implementations on toy datasets such as the titanic and iris datasets. 
They often indicate that you actively avoid real-world problem-solving, and prefer being in your comfort zone by copy-pasting from tutorials. These portfolio items often signal negative value instead of signaling that you are a high-quality candidate.\n\nA unique portfolio item implies that you have tackled a unique problem without a solution, and thus have to engage in the type of problem-solving an employee does daily. A good starting point is to look for portfolio ideas on [active Kaggle competitions](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions), and [machine learning consulting projects](https:\u002F\u002Fwww.upwork.com\u002Ffreelance-jobs\u002Fmachine-learning\u002F), and demo versions of [common production pipelines](https:\u002F\u002Fgithub.com\u002Fchiphuyen\u002Fmachine-learning-systems-design\u002Fblob\u002Fmaster\u002Fbuild\u002Fbuild1\u002Fconsolidated.pdf). Here's a Twitter thread on [how to come up with portfolio ideas](https:\u002F\u002Ftwitter.com\u002FEmilWallner\u002Fstatus\u002F1184723538810413056).\n\nHere are rough guidelines to self-assess the strength of your portfolio:\n\n#### Machine learning engineering:\n\nEven though ML engineering roles are the most strategic entry point, they are still highly competitive. In general, there are ~50 software engineering roles for every ML role. From the self-learners I know, 2\u002F3 fail to get a foot in the door and end up taking software engineering roles instead. You are ready to look for a job when you have two high-quality projects that are well-documented, have unique datasets, and are relevant to a [specific industry](https:\u002F\u002Ftowardsdatascience.com\u002Fthe-cold-start-problem-how-to-build-your-machine-learning-portfolio-6718b4ae83e9), say banking or insurance. 
\n\nProject Type | Base score\n-------------|-----------\nCommon project | -1 p\nUnique project | 10 p\n\nMultiplier Type | Factor\n-----------------|-----------------\nStrong documentation | 5x\n5000-word article | 5x\nKaggle Medal | 10x\nEmployer relevancy | 20x\n \n* __Hireable:__ 5,250 p\n* __Competitive:__ 15,000 p\n\n\n\n#### Applied research \u002F research assistant \u002F residencies:\n\nFor most companies, the risk of pursuing cutting-edge research is often too high, thus only the biggest companies tend to need this skillset. There are smaller research organizations that hire for these positions, but these positions tend to be poorly advertised and have a bias for people in their existing community. \n\nMany of these roles don't require a Ph.D., which makes them available to most people with a Bachelor's or Master's degree, or self-learners with one year of focused study. \n\nGiven the status, scarcity, and requirements for these positions, they are the most competitive ML positions. Positions at well-known companies tend to get more than a thousand applicants per position.\n\nDaily, these roles require that you understand and can implement SOTA papers, thus that's what they will be looking for in your portfolio. \n\n\nProject type | Base score\n--------------| -----------\nCommon project | -10 p\nUnique project | 1 p\nSOTA paper implementation | 20 p\n\nMultiplier type | Factor\n----------------| --------------- \nStrong documentation | 5x\n5000-word article | 5x\nSOTA performance | 5x\nEmployer relevancy | 20x\n\n* __Hireable:__ 52,500 p\n* __Competitive:__ 150,000 p\n\n\n#### Research Scientist:\n\nResearch scientist roles require a Ph.D. or equivalent experience. While the former category requires the ability to implement SOTA papers, this category requires you to come up with research ideas. 
The mainstream research community measures the quality of research ideas by their impact; [here is a list of the venues and their impact](https:\u002F\u002Fscholar.google.es\u002Fcitations?view_op=top_venues&hl=en&vq=eng_artificialintelligence). To have a competitive portfolio, you need two published papers in the top venues in an area that's relevant to your potential employer.\n\nProject type | Base score\n-------------| ----------------\nCommon project | -100 p\nAn unpublished paper | 5 p\nICML\u002FICLR\u002FNeurIPS publication | 500 p\nAll other publications | 50 p\n\nMultiplier type | Factor\n------------------| ------------------\nFirst-author paper | 10x\nEmployer relevancy | 20x\n\n* __Hireable:__ 20,000 p\n* __Competitive roles and elite PhD positions:__ 200,000 p\n\n__Examples:__\n* My first portfolio item (after 2 months of learning): [Code](https:\u002F\u002Fgithub.com\u002Femilwallner\u002FColoring-greyscale-images) | [Write-up](https:\u002F\u002Fblog.floydhub.com\u002Fcolorizing-b-w-photos-with-neural-networks\u002F)\n* My second portfolio item (after 4 months of learning): [Code](https:\u002F\u002Fgithub.com\u002Femilwallner\u002FScreenshot-to-code) | [Write-up](https:\u002F\u002Fblog.floydhub.com\u002Fturning-design-mockups-into-code-with-deep-learning\u002F)\n* [Dylan Djian's](https:\u002F\u002Fgithub.com\u002Fdylandjian) first portfolio item: [Code](https:\u002F\u002Fgithub.com\u002Fdylandjian\u002Fretro-contest-sonic) | [Write-up](https:\u002F\u002Fdylandjian.github.io\u002Fworld-models\u002F)\n* [Dylan Djian's](https:\u002F\u002Fgithub.com\u002Fdylandjian) second portfolio item: [Code](https:\u002F\u002Fgithub.com\u002Fdylandjian\u002FSuperGo) | [Write-up](https:\u002F\u002Fdylandjian.github.io\u002Falphago-zero\u002F)\n* [Reiichiro Nakano's](https:\u002F\u002Fgithub.com\u002Freiinakano) first portfolio item: [Code](https:\u002F\u002Fgithub.com\u002Freiinakano\u002Farbitrary-image-stylization-tfjs) | 
[Write-up](https:\u002F\u002Fmagenta.tensorflow.org\u002Fblog\u002F2018\u002F12\u002F20\u002Fstyle-transfer-js\u002F)\n* [Reiichiro Nakano's](https:\u002F\u002Fgithub.com\u002Freiinakano) second portfolio item: [Write-up](https:\u002F\u002Freiinakano.com\u002F2019\u002F01\u002F27\u002Fworld-painters.html)\n\nMost recruiters will spend 10-20 seconds on each of your portfolio items. Unless they can understand the value in that time frame, the value of the project is close to zero. Thus, writing and documentation are key. Here's another thread on how to [write about portfolio items](https:\u002F\u002Ftwitter.com\u002FEmilWallner\u002Fstatus\u002F1162289417140264960). \n\nThe last key point is relevancy. It's more fun to make a wide range of projects, but if you want to optimize for breaking into the industry, you want to do all projects in one niche, thus making your skillset **super** relevant for a specific pool of employers.\n\n__Further Inspiration:__\n* [FastAI student projects](https:\u002F\u002Fforums.fast.ai\u002Ft\u002Fshare-you-work-here-highlights\u002F57140)\n* [Stanford NLP student projects](https:\u002F\u002Fnlp.stanford.edu\u002Fcourses\u002Fcs224n\u002F)\n* [Stanford CNN student projects](http:\u002F\u002Fcs231n.stanford.edu\u002Findex.html)\n\n\n## Theory 101 [4 months]\n\nLearning how to read papers is critical if you want to get into research, and a brilliant asset as an ML engineer. 
There are three key areas to feel comfortable reading papers:\n1) Understand the details of the most frequent algorithms: gradient descent, linear regression, MLPs, etc.\n2) Learn how to translate the most frequent math notations into code\n3) Learn the basics of algebra, calculus, statistics, and machine learning\n\n- Spend the first week on 3Blue1Brown's [Essence of linear algebra](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab), [the Essence of Calculus](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WUvTyaaNkzM&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr), and StatQuest's [the Basics (of statistics)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qBigTkBLU6g&list=PLblh5JKOoLUK0FLuzwntyYI10UQFUhsY9) and [Machine Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Gv9_4yMHFhI&list=PLblh5JKOoLUICTaGLRoHQDuF_7q2GfuJF). Use a spaced repetition app like Anki and memorize all the key concepts. Use images as much as possible; they are easier to memorize. \n- Spend one month [recoding the core concepts](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FML-From-Scratch) in Python with NumPy, including least squares, gradient descent, linear regression, and a vanilla neural network. This will help you reduce a lot of cognitive load down the line. Learning that notation is compact logic, and how to translate it into code, will make you feel less anxious about the theory.\n- I believe the best deep learning theory curriculum is the [Deep Learning Book](http:\u002F\u002Fwww.deeplearningbook.org\u002F) by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. I use it as a curriculum, and then use online courses and internet resources to learn the details about each concept. Spend three months on Part 1 of the Deep Learning Book. 
Use [lectures](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vi7lACKOUao&list=PLsXu9MHQGs8df5A4PzQGw-kfviylC-R9b&ab_channel=AlenaKruchkova) and videos to understand the concepts, Khan Academy-type exercises to master each concept, and Anki flashcards to remember them long-term. \n\n**Key Books:**\n- [Deep Learning Book](http:\u002F\u002Fwww.deeplearningbook.org\u002F) by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.\n- [Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD](https:\u002F\u002Fwww.amazon.com\u002Fgp\u002Fproduct\u002F1492045527\u002Fref=ppx_od_dt_b_asin_title_s00?ie=UTF8&psc=1) by Jeremy Howard and Sylvain Gugger.\n- [Deep Learning with Python](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-learning-with-python) by François Chollet.\n- [Neural Networks and Deep Learning](http:\u002F\u002Fneuralnetworksanddeeplearning.com\u002F) by Michael Nielsen.\n- [Grokking Deep Learning](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fgrokking-deep-learning) by Andrew W. Trask.\n\n\n## Forums\n- [FastAI](http:\u002F\u002Fforums.fast.ai\u002F)\n- [Keras Slack](https:\u002F\u002Fkeras-slack-autojoin.herokuapp.com\u002F)\n- [Distill Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fdistillpub\u002Fshared_invite\u002FenQtMzg1NzU3MzEzMTg3LWJkNmQ4Y2JlNjJkNDlhYTU2ZmQxMGFkM2NiMTI2NGVjNzJkOTdjNTFiOGZmNDBjNTEzZGUwM2U0Mzg4NDAyN2E)\n- [PyTorch](https:\u002F\u002Fdiscuss.pytorch.org\u002F)\n- Twitter\n\n## Other good learning strategies:\n- [Emil Wallner](https:\u002F\u002Fblog.floydhub.com\u002Femils-story-as-a-self-taught-ai-researcher\u002F)\n- [S. 
Zayd Enam](http:\u002F\u002Fai.stanford.edu\u002F~zayd\u002Fwhy-is-machine-learning-hard.html)\n- [Catherine Olsson](https:\u002F\u002F80000hours.org\u002Farticles\u002Fml-engineering-career-transition-guide\u002F)\n- [Greg Brockman V2](https:\u002F\u002Fblog.gregbrockman.com\u002Fhow-i-became-a-machine-learning-practitioner)\n- [Greg Brockman V1](https:\u002F\u002Fwww.quora.com\u002FWhat-are-the-best-ways-to-pick-up-Deep-Learning-skills-as-an-engineer)\n- [Andrew Ng](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=F1ka6a13S9I)\n- [Amid Fish](http:\u002F\u002Famid.fish\u002Freproducing-deep-rl)\n- [Spinning Up by OpenAI](https:\u002F\u002Fspinningup.openai.com\u002Fen\u002Flatest\u002Fspinningup\u002Fspinningup.html)\n- [Confession as an AI researcher](https:\u002F\u002Fwww.reddit.com\u002Fr\u002FMachineLearning\u002Fcomments\u002F73n9pm\u002Fd_confession_as_an_ai_researcher_seeking_advice\u002F)\n- YC Threads: [One](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=20765553) and [Two](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=18421422)\n\nIf you have suggestions\u002Fquestions create an issue or [ping me on Twitter.](https:\u002F\u002Ftwitter.com\u002FEmilWallner)\n\n**UPDATED VERSION:** 👉 Check out my 60-page guide, [No ML Degree](https:\u002F\u002Ftwitter.com\u002FEmilWallner\u002Fstatus\u002F1528961488206979072), on how to land a machine learning job without a degree.\n\nLanguage versions: [Korean](https:\u002F\u002Fgithub.com\u002Femilwallner\u002FHow-to-learn-Deep-Learning\u002Fblob\u002Fmaster\u002FREADME_kr.md) | [English](https:\u002F\u002Fgithub.com\u002Femilwallner\u002FHow-to-learn-Deep-Learning\u002Fblob\u002Fmaster\u002FREADME.md)\n","## 方法\n采用实用的自顶向下方法，从高层次框架入手，重点关注深度学习。\n\n---\n\n**更新版：** 👉 请查看我的60页指南《无需机器学习学位》（[No ML Degree](https:\u002F\u002Fwww.emilwallner.com\u002Fp\u002Fno-ml-degree)），了解如何在没有学位的情况下找到一份机器学习相关工作。\n\n## 入门阶段 [2个月]\n\n要快速掌握深度学习，主要有三个目标：\n1) 熟悉将要使用的工具，例如Python、命令行和Jupyter Notebook；\n2) 适应整个工作流程，从数据获取到训练好的模型部署；\n3) 
培养深度学习思维模式，对深度学习模型的行为及其优化方式形成直觉。\n\n- 在Codecademy.com上花一周时间学习Python语法、命令行操作和Git。如果你之前没有任何编程经验，建议花几个月时间系统地学习编程基础，否则很容易感到不知所措。\n- 花一到两周时间使用[Pandas](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=yzIMircGU5I&list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y)和[Scikit-learn](http:\u002F\u002Fscikit-learn.org\u002Fstable\u002F)，结合[Kaggle上的入门题目](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions?sortBy=grouped&group=general&page=1&pageSize=20&category=gettingStarted)，通过[Google Colab上的Jupyter Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fnotebooks\u002Fwelcome.ipynb)完成练习，比如泰坦尼克号乘客生存预测、房屋价格预测以及鸢尾花分类等任务。这将帮助你全面了解机器学习的思维方式和工作流程。\n- 花一个月时间在云端GPU上实现各种模型。可以从[FastAI和PyTorch](http:\u002F\u002Fcourse.fast.ai\u002F)开始。FastAI社区是希望应用深度学习并分享最新技术的人们的首选之地。\n\n完成以上步骤后，你就能知道如何利用机器学习创造价值了。\n\n## 作品集 [3–12个月]\n\n把你的作品集看作是向潜在雇主展示你能够为其带来价值的证据。\n\n在寻找第一份工作时，你可以申请以下四种主要职位：\n1. 机器学习工程师，\n2. 应用机器学习研究员\u002F实习岗位，\n3. 机器学习研究科学家，以及\n4. 软件工程师。\n\n与机器学习相关的许多工作实际上属于纯软件工程岗位（第4类），例如基础设施的扩展与优化，但这不在本文讨论范围内。如果你想顺利进入这个行业，最简单的方式就是瞄准机器学习工程师岗位。相比第2类和第3类岗位，ML工程师的职位数量要多得多，且对理论知识的要求较低，竞争也相对较小。大多数企业更倾向于采用稳定、成熟的实现方案（通常已有约一年历史），而不是投入稀缺资源去实现那些耗时较长、实际效果往往不佳的最先进论文中的方法。\n\n一旦你能靠这份工作维持生计，并积累几年的工作经验，你就会更有条件去学习理论知识，进而晋升到第2类或第3类岗位。这一点对于自学成才的人来说尤其重要——你往往比普通的大学毕业生更具优势。一般来说，大学毕业生虽然理论功底扎实，但实践能力相对较弱。\n\n### 背景\n\n无论你的背景如何，评审你的作品集的人通常会包括3到10位技术与非技术人员。你希望引发以下几方面的反应：\n\n* 申请人具备解决我们这类问题的经验；\n* 申请人的作品易于理解且组织清晰；\n* 
这些作品毫无疑问是申请人独立完成的。\n\n大多数机器学习学习者的作品集都大同小异，常见内容包括参加在线课程、训练猫狗分类器，以及在泰坦尼克号或鸢尾花等玩具数据集上的实现。这些项目往往表明你倾向于回避真实世界的问题解决，更愿意待在舒适区里照搬教程代码。这样的作品集不仅无法体现你的价值，反而可能给人留下负面印象，让你看起来并不像一位高质量的候选人。\n\n相比之下，一个独特的作品项目意味着你曾面对过没有现成解决方案的独特问题，从而锻炼了与员工日常工作中类似的解决问题能力。一个好的起点是关注[活跃的Kaggle竞赛](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions)、[机器学习咨询项目](https:\u002F\u002Fwww.upwork.com\u002Ffreelance-jobs\u002Fmachine-learning\u002F)，以及[常见生产流水线的演示版本](https:\u002F\u002Fgithub.com\u002Fchiphuyen\u002Fmachine-learning-systems-design\u002Fblob\u002Fmaster\u002Fbuild\u002Fbuild1\u002Fconsolidated.pdf)。这里有一条关于[如何构思作品集创意](https:\u002F\u002Ftwitter.com\u002FEmilWallner\u002Fstatus\u002F1184723538810413056)的推文串。\n\n以下是评估作品集质量的大致指南：\n\n#### 机器学习工程方向：\n\n尽管机器学习工程岗位是最具战略意义的入门途径，但竞争依然非常激烈。一般来说，每有一个机器学习岗位，就有大约50个软件工程岗位。据我所了解的自学者中，有三分之二未能成功进入该领域，最终转而从事软件工程工作。当你拥有两个高质量、文档完善、使用独特数据集且与某一[特定行业](https:\u002F\u002Ftowardsdatascience.com\u002Fthe-cold-start-problem-how-to-build-your-machine-learning-portfolio-6718b4ae83e9)（如银行或保险）相关的项目时，就可以开始求职了。\n\n项目类型 | 基础分 |\n-------------| -----------|\n普通项目 | -1 分 ||\n独特项目 | 10 分 |\n\n倍数类型 | 因子\n-----------------|-----------------\n强文档支持 | 5倍\n5000字文章 | 5倍\nKaggle奖牌 | 10倍\n与雇主相关性 | 20倍\n \n* __可聘用：__ 5,250 分\n* __有竞争力：__ 15,000 分\n\n\n\n#### 应用研究\u002F研究助理\u002F研究员：\n\n对于大多数公司而言，追求前沿研究的风险往往过高，因此只有大型企业才真正需要这类人才。虽然也有一些小型研究机构会招聘此类职位，但这些岗位通常宣传不足，且更倾向于从其现有圈子内选拔人才。\n\n许多这类岗位并不要求博士学位，因此持有学士或硕士学位，或者经过一年专注学习的自学者也有机会申请。\n\n鉴于这些岗位的地位、稀缺性及要求，它们无疑是机器学习领域中最难竞争的职位之一。知名公司的相关岗位往往每空缺一席就会收到上千份申请。\n\n日常工作中，这些岗位要求你能够理解并实现最先进论文中的方法，因此他们在评估你的作品集时也会着重考察这一点。\n\n\n项目类型 | 基础分\n--------------| -----------\n普通项目 | -10 分\n独特项目 | 1 分\nSOTA论文实现 | 20 分\n\n倍数类型 | 因子\n----------------| --------------- \n强文档支持 | 5倍\n5000字文章 | 5倍\n达到SOTA性能 | 5倍\n与雇主相关性 | 20倍\n\n* __可聘用：__  52,500 分\n* __有竞争力：__ 150,000 分\n\n\n#### 
研究科学家：\n\n研究科学家岗位通常要求博士学位或同等经验。前者侧重于实现最先进论文中的方法，而后者则要求提出创新性的研究思路。主流学术界通常以研究成果的影响力来衡量研究创意的质量，[此处列出了一些重要期刊及其影响力](https:\u002F\u002Fscholar.google.es\u002Fcitations?view_op=top_venues&hl=en&vq=eng_artificialintelligence)。要使自己的作品集具有竞争力，你需要在与潜在雇主相关领域内的顶级期刊上发表两篇论文。\n\n项目类型 | 基础分\n-------------| ----------------\n普通项目 | -100 分\n未发表论文 | 5 分\nICML\u002FICLR\u002FNeurIPS发表 | 500分\n其他发表 | 50分\n\n倍数类型 | 因子\n------------------| ------------------\n第一作者论文 | 10倍\n与雇主相关性 | 20倍\n\n* __可聘用：__ 20,000 分\n* __竞争激烈的岗位及顶尖博士职位：__ 200,000 分\n\n__案例：__\n* 我的第一个作品项目（学习两个月后）：[代码](https:\u002F\u002Fgithub.com\u002Femilwallner\u002FColoring-greyscale-images) | [文章](https:\u002F\u002Fblog.floydhub.com\u002Fcolorizing-b-w-photos-with-neural-networks\u002F)\n* 我的第二个作品项目（学习四个月后）：[代码](https:\u002F\u002Fgithub.com\u002Femilwallner\u002FScreenshot-to-code) | [文章](https:\u002F\u002Fblog.floydhub.com\u002Fturning-design-mockups-into-code-with-deep-learning\u002F)\n* [Dylan Djian](https:\u002F\u002Fgithub.com\u002Fdylandjian)的第一个作品项目：[代码](https:\u002F\u002Fgithub.com\u002Fdylandjian\u002Fretro-contest-sonic) | [文章](https:\u002F\u002Fdylandjian.github.io\u002Fworld-models\u002F)\n* [Dylan Djian](https:\u002F\u002Fgithub.com\u002Fdylandjian)的第二个作品项目：[代码](https:\u002F\u002Fgithub.com\u002Fdylandjian\u002FSuperGo) | [文章](https:\u002F\u002Fdylandjian.github.io\u002Falphago-zero\u002F)\n* [Reiichiro Nakano](https:\u002F\u002Fgithub.com\u002Freiinakano)的第一个作品项目：[代码](https:\u002F\u002Fgithub.com\u002Freiinakano\u002Farbitrary-image-stylization-tfjs) | [文章](https:\u002F\u002Fmagenta.tensorflow.org\u002Fblog\u002F2018\u002F12\u002F20\u002Fstyle-transfer-js\u002F)\n* [Reiichiro 
Nakano](https:\u002F\u002Fgithub.com\u002Freiinakano)的第二个作品项目：[文章](https:\u002F\u002Freiinakano.com\u002F2019\u002F01\u002F27\u002Fworld-painters.html)\n\n大多数招聘人员只会花10到20秒钟浏览你的每个作品项目。如果在这段时间内他们无法理解该项目的价值，那么这个项目的实际价值几乎为零。因此，撰写清晰的说明和完善的文档至关重要。这里还有一条关于如何[撰写作品项目说明](https:\u002F\u002Ftwitter.com\u002FEmilWallner\u002Fstatus\u002F1162289417140264960)的推文。\n\n最后一个关键点是相关性。制作多样化的项目固然有趣，但如果你想最大化进入行业的可能性，就应该将所有项目聚焦于同一个细分领域，从而使你的技能对特定雇主群体来说具有**极高**的相关性。\n\n__更多灵感：__\n* [FastAI学生项目](https:\u002F\u002Fforums.fast.ai\u002Ft\u002Fshare-you-work-here-highlights\u002F57140)\n* [斯坦福NLP学生项目](https:\u002F\u002Fnlp.stanford.edu\u002Fcourses\u002Fcs224n\u002F)\n* [斯坦福CNN学生项目](http:\u002F\u002Fcs231n.stanford.edu\u002Findex.html)\n\n## 理论基础101 [4个月]\n\n如果你想进入研究领域，或者成为一名优秀的机器学习工程师，学会阅读论文至关重要。要熟练阅读论文，需要掌握以下三个关键方面：\n\n1) 理解最常见算法的细节，例如梯度下降、线性回归和多层感知机等。\n2) 学会将最常见的数学符号转化为代码。\n3) 掌握代数、微积分、统计学和机器学习的基础知识。\n\n- 第一周：观看3Blue1Brown的《线性代数的本质》（https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab）、《微积分的本质》（https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WUvTyaaNkzM&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr），以及StatQuest的《统计学基础》（https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qBigTkBLU6g&list=PLblh5JKOoLUICTaGLRoHQDuF_7q2GfuJF）和《机器学习》（https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Gv9_4yMHFhI&list=PLblh5JKOoLUICTaGLRoHQDuF_7q2GfuJF）。使用间隔重复应用（如Anki）来记忆所有关键概念。尽量多使用图像，因为它们更容易记住。\n- 花一个月时间用Python和NumPy重新实现核心概念（https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FML-From-Scratch），包括最小二乘法、梯度下降、线性回归和一个简单的神经网络。这将帮助你在后续工作中减轻认知负担。通过学习这些符号实际上是一种简洁的逻辑表达方式，并将其转化为代码，你会对理论部分感到更加自信，焦虑感也会降低。\n- 我认为最好的深度学习理论课程是Ian Goodfellow、Yoshua Bengio和Aaron Courville合著的《深度学习》（http:\u002F\u002Fwww.deeplearningbook.org\u002F）。我以这本书为课程大纲，同时结合在线课程和网络资源来深入理解每个概念。用三个月的时间学习《深度学习》的第一部分。利用讲座视频（如https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vi7lACKOUao&list=PLsXu9MHQGs8df5A4PzQGw-kfviylC-R9b&ab_channel=AlenaKruchkova）来理解概念，通过类似可汗学院的练习来掌握每个概念，并使用Anki闪卡进行长期记忆。\n\n**重点书籍：**\n- 
*Deep Learning* (http://www.deeplearningbook.org/), by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
- *Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD* (https://www.amazon.com/gp/product/1492045527/ref=ppx_od_dt_b_asin_title_s00?ie=UTF8&psc=1), by Jeremy Howard and Sylvain Gugger.
- *Deep Learning with Python* (https://www.manning.com/books/deep-learning-with-python), by François Chollet.
- *Neural Networks and Deep Learning* (http://neuralnetworksanddeeplearning.com/), by Michael Nielsen.
- *Grokking Deep Learning* (https://www.manning.com/books/grokking-deep-learning), by Andrew W. Trask.

## Forums
- FastAI forums (http://forums.fast.ai/)
- Keras Slack channel (https://keras-slack-autojoin.herokuapp.com/)
- Distill Slack channel (https://join.slack.com/t/distillpub/shared_invite/enQtMzg1NzU3MzEzMTg3LWJkNmQ4Y2JlNjJkNDlhYTU2ZmQxMGFkM2NiMTI2NGVjNzJkOTdjNTFiOGZmNDBjNTEzZGUwM2U0Mzg4NDAyN2E)
- PyTorch discussion board (https://discuss.pytorch.org/)
- Twitter

## Other great learning strategies:
- Emil Wallner's article (https://blog.floydhub.com/emils-story-as-a-self-taught-ai-researcher/)
- S. Zayd Enam's article (http://ai.stanford.edu/~zayd/why-is-machine-learning-hard.html)
- Catherine Olsson's article (https://80000hours.org/articles/ml-engineering-career-transition-guide/)
- Greg Brockman's second blog post (https://blog.gregbrockman.com/how-i-became-a-machine-learning-practitioner)
- Greg Brockman's first Q&A post (https://www.quora.com/What-are-the-best-ways-to-pick-up-Deep-Learning-skills-as-an-engineer)
- Andrew Ng's video (https://www.youtube.com/watch?v=F1ka6a13S9I)
- Amid Fish's article (http://amid.fish/reproducing-deep-rl)
- OpenAI's Spinning Up project (https://spinningup.openai.com/en/latest/spinningup/spinningup.html)
- A Reddit confession thread from an AI researcher (https://www.reddit.com/r/MachineLearning/comments/73n9pm/d_confession_as_an_ai_researcher_seeking_advice/)
- Related Hacker News threads: [one](https://news.ycombinator.com/item?id=20765553) and [two](https://news.ycombinator.com/item?id=18421422)

If you have suggestions or questions, open an issue on GitHub or reach me on Twitter (https://twitter.com/EmilWallner).

**Updated version:** 👉 See my 60-page guide *No ML Degree* (https://twitter.com/EmilWallner/status/1528961488206979072) on how to land a machine learning job without a degree.

Languages: [Korean](https://github.com/emilwallner/How-to-learn-Deep-Learning/blob/master/README_kr.md) | [English](https://github.com/emilwallner/How-to-learn-Deep-Learning/blob/master/README.md)

# How-to-learn-Deep-Learning Quick Start Guide

This guide distills Emil Wallner's deep learning study path and helps developers pick up deep learning through a hands-on, "top-down" approach.

## Prerequisites

Before starting, make sure your environment meets these requirements:

*   **Operating system**: Windows, macOS, or Linux (Linux recommended for best compatibility).
*   **Programming language**: Python 3.8+.
*   **Core tools**:
    *   A command-line terminal
    *   Git version control
    *   Jupyter Notebook (Google Colab is recommended; no local GPU setup needed)
*   **Background**: basic programming concepts. If you have never programmed, spend a week on a platform such as Codecademy learning Python syntax, the command line, and Git basics.
*   **Hardware**:
    *   Local: a CUDA-capable NVIDIA GPU (optional; beginners should use a cloud GPU).
    *   Cloud: just a browser and an internet connection.

## Installation

This study path relies on the open-source Python ecosystem. You can install locally or work in the cloud.

### Option A: Google Colab (recommended for beginners)
Runs in the browser with a free GPU; nothing to install.
1. Open [Google Colab](https://colab.research.google.com/).
2. Click `File` -> `New notebook`.
3. Click `Runtime` -> `Change runtime type` and set `Hardware accelerator` to `GPU`.

### Option B: Local install
To set up a local environment, run the following:

1. **Install the Python packages**
   Use pip to install the core data-processing and deep learning frameworks:
   ```bash
   pip install pandas scikit-learn jupyterlab torch torchvision torchaudio fastai
   ```
   *(Note: if downloads are slow from mainland China, use the Tsinghua mirror)*
   ```bash
   pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pandas scikit-learn jupyterlab torch torchvision torchaudio fastai
   ```

2. **Verify the installation**
   Start Jupyter Lab and create a new notebook to test it:
   ```bash
   jupyter lab
   ```

## Basic usage

The core idea is **learning by doing**. The minimal practice steps, by stage:

### Stage 1: Learn the workflow (1-2 weeks)
Goal: cover the full flow from data to model and build machine learning intuition.

1. **Pick a dataset**: browse the [Kaggle Getting Started](https://www.kaggle.com/competitions?sortBy=grouped&group=general&page=1&pageSize=20&category=gettingStarted) competitions.
2. **Practice**: load and process a classic dataset (Titanic, House Prices, Iris) in a Jupyter Notebook.
3. **Code example** (reading Titanic data with Pandas; the raw data contains text columns and missing values, so we keep numeric features and impute before fitting):
   ```python
   import pandas as pd
   from sklearn.model_selection import train_test_split
   from sklearn.ensemble import RandomForestClassifier

   # Load the data
   df = pd.read_csv('train.csv')

   # Keep numeric features and fill missing values;
   # RandomForestClassifier cannot handle strings or NaNs directly
   features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']
   X = df[features].fillna(df[features].median())
   y = df['Survived']

   # Hold out a validation split to estimate generalization
   X_train, X_val, y_train, y_val = train_test_split(
       X, y, test_size=0.2, random_state=42
   )

   model = RandomForestClassifier(random_state=42)
   model.fit(X_train, y_train)
   print(f'Validation accuracy: {model.score(X_val, y_val):.3f}')
   ```

### Stage 2: Hands-on deep learning (1 month)
Goal: implement deep neural networks on a cloud GPU with FastAI and PyTorch.

1. **Course**: follow the [FastAI course](http://course.fast.ai/).
2. **Core task**: use the FastAI library to quickly build and train an image classification or text analysis model.
3. **Code example** (training an image classifier with FastAI):
   ```python
   from fastai.vision.all import *

   # Download and extract a sample dataset (Oxford-IIIT Pets, cats vs. dogs)
   path = untar_data(URLs.PETS)
   dls = ImageDataLoaders.from_name_func(
       path, get_image_files(path/"images"), valid_pct=0.2, seed=42,
       # Cat images in this dataset have filenames starting with an uppercase letter
       label_func=lambda x: x[0].isupper()
   )

   # Create and fine-tune a pretrained model
   learn = vision_learner(dls, resnet34, metrics=error_rate)
   learn.fine_tune(4)
   ```

### Stage 3: Build a portfolio (3-12 months)
Goal: prove your ability by solving distinctive real-world problems rather than repeating tutorial projects.

*   **Avoid**: only finishing MOOC assignments or reproducing common toy datasets (e.g. iris classification).
*   **Instead**:
    *   Enter [active Kaggle competitions](https://www.kaggle.com/competitions).
    *   Tackle real problems drawn from [machine learning consulting jobs on Upwork](https://www.upwork.com/freelance-jobs/machine-learning/).
    *   Build projects on unique datasets for a specific industry (e.g. finance, insurance).
*   **Key deliverables**: each project should come with a clean code repository (GitHub) and a detailed write-up (problem background, solution, and value) so a recruiter can grasp its value in 10-20 seconds.

### Supplement: theory (in parallel)
For deeper theory or paper reading, combine these resources:
*   **Math**: watch 3Blue1Brown's Essence of Linear Algebra and Essence of Calculus series.
*   **From scratch**: implement core algorithms by hand in NumPy, following [ML-From-Scratch](https://github.com/eriklindernoren/ML-From-Scratch).
*   **Classic textbook**: study Part I of the *Deep Learning Book* (Ian Goodfellow et al.).

## Use case

A career changer with no background wants to self-teach deep learning, without pursuing a degree, and land a machine learning engineer job.

### Without How-to-learn-Deep-Learning
- **A chaotic learning path**: overwhelmed by theory-heavy textbooks, they fall into the "start with the math" trap and go months without writing a line of working code.
- **No real-world experience**: a portfolio full of tutorial "toy dataset" projects like Titanic prediction, which interviewers read as an inability to solve real problems.
- **A stale tech stack**: time sunk into outdated algorithms or pure theory, at the expense of the engineering skills employers actually want: PyTorch, FastAI, and cloud deployment.
- **An unfocused job search**: competing for high-bar research scientist roles and getting rejected for lack of publications, with no idea how to target the larger pool of engineering positions.

### With How-to-learn-Deep-Learning
- **An efficient top-down start**: following a "frameworks first, theory later" hands-on route, they master the Python, Jupyter, and Git workflow within two months and build an intuitive feel for deep learning.
- **A differentiated portfolio**: skipping cookie-cutter tutorial projects, they use cloud GPUs to reproduce state-of-the-art models and solve concrete business problems, demonstrating they can deliver value independently.
- **Alignment with industry**: focusing on mainstream tools like FastAI and PyTorch, and prioritizing model deployment and scaling skills, they match what companies actually hire machine learning engineers to do.
- **A clear entry point**: they strategically target less competitive, engineering-heavy roles first, get hired, and keep learning on the job, using hands-on strengths to offset the missing degree.

How-to-learn-Deep-Learning offers a proven "top-down", hands-on path that lets learners skip the academic detour and open industry doors with deployable engineering skills.

## Project info

- Author: Emil Wallner (https://github.com/emilwallner) — "internet-educated. into machine learning and colorization. maker of palette.fm. former resident at the google arts & culture lab."
- License: MIT
- Topics: deep-learning, machine-learning, tflearn, cnn, rnn
- Recommended hardware: a cloud GPU; the text does not specify a model, VRAM size, or CUDA version.
- Core dependencies: Python, Jupyter Notebook, Pandas, Scikit-learn, FastAI, PyTorch, Git.
- Note: this project is primarily a learning-path guide rather than a single software tool. Practice in Jupyter Notebook on Google Colab to use cloud GPUs directly; spend the first week on Python, the command line, and Git, then work through Kaggle datasets with Pandas and Scikit-learn before moving on to FastAI and PyTorch.
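To complement the `jupyter lab` step in the local-install option, a small import check can confirm that each core package installed correctly. This is an illustrative sketch, not part of the original guide; the package list mirrors the earlier pip command.

```python
import importlib

# pip package name -> Python import name, matching the local pip install
PACKAGES = {
    "pandas": "pandas",
    "scikit-learn": "sklearn",
    "torch": "torch",
    "fastai": "fastai",
}

missing = []
for pip_name, module_name in PACKAGES.items():
    try:
        importlib.import_module(module_name)
    except ImportError:
        missing.append(pip_name)

if missing:
    print("Missing packages, reinstall with pip:", ", ".join(missing))
else:
    print("All core packages import correctly.")
```

Run it inside the same environment (or Colab runtime) you intend to use for the course notebooks, so you catch a broken install before the first training run.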
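The "from scratch" suggestion in the theory section can be made concrete with a short NumPy sketch: fitting a one-variable linear model by gradient descent on mean squared error. The synthetic data, learning rate, and iteration count here are illustrative choices of mine, not values from the guide.

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 2.0 + rng.normal(scale=0.05, size=100)

# Fit weight w and bias b by minimizing mean squared error
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    error = w * X + b - y
    # Gradients of MSE = mean((w*X + b - y)^2) w.r.t. w and b
    grad_w = 2.0 * np.mean(error * X)
    grad_b = 2.0 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")  # should converge close to w=3, b=2
```

Writing the gradient updates by hand like this is exactly the kind of exercise ML-From-Scratch encourages, and it builds the intuition that frameworks like PyTorch later automate with autograd.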