[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-vkosuri--CourseraMachineLearning":3,"tool-vkosuri--CourseraMachineLearning":61},[4,18,28,37,45,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":24,"last_commit_at":25,"category_tags":26,"status":17},9989,"n8n","n8n-io\u002Fn8n","n8n 是一款面向技术团队的公平代码（fair-code）工作流自动化平台，旨在让用户在享受低代码快速构建便利的同时，保留编写自定义代码的灵活性。它主要解决了传统自动化工具要么过于封闭难以扩展、要么完全依赖手写代码效率低下的痛点，帮助用户轻松连接 400 多种应用与服务，实现复杂业务流程的自动化。\n\nn8n 特别适合开发者、工程师以及具备一定技术背景的业务人员使用。其核心亮点在于“按需编码”：既可以通过直观的可视化界面拖拽节点搭建流程，也能随时插入 JavaScript 或 Python 代码、调用 npm 包来处理复杂逻辑。此外，n8n 原生集成了基于 LangChain 的 AI 能力，支持用户利用自有数据和模型构建智能体工作流。在部署方面，n8n 提供极高的自由度，支持完全自托管以保障数据隐私和控制权，也提供云端服务选项。凭借活跃的社区生态和数百个现成模板，n8n 让构建强大且可控的自动化系统变得简单高效。",184740,2,"2026-04-19T23:22:26",[16,14,13,15,27],"插件",{"id":29,"name":30,"github_repo":31,"description_zh":32,"stars":33,"difficulty_score":10,"last_commit_at":34,"category_tags":35,"status":17},10095,"AutoGPT","Significant-Gravitas\u002FAutoGPT","AutoGPT 是一个旨在让每个人都能轻松使用和构建 AI 的强大平台，核心功能是帮助用户创建、部署和管理能够自动执行复杂任务的连续型 AI 智能体。它解决了传统 AI 应用中需要频繁人工干预、难以自动化长流程工作的痛点，让用户只需设定目标，AI 即可自主规划步骤、调用工具并持续运行直至完成任务。\n\n无论是开发者、研究人员，还是希望提升工作效率的普通用户，都能从 AutoGPT 中受益。开发者可利用其低代码界面快速定制专属智能体；研究人员能基于开源架构探索多智能体协作机制；而非技术背景用户也可直接选用预置的智能体模板，立即投入实际工作场景。\n\nAutoGPT 的技术亮点在于其模块化“积木式”工作流设计——用户通过连接功能块即可构建复杂逻辑，每个块负责单一动作，灵活且易于调试。同时，平台支持本地自托管与云端部署两种模式，兼顾数据隐私与使用便捷性。配合完善的文档和一键安装脚本，即使是初次接触的用户也能在几分钟内启动自己的第一个 AI 智能体。AutoGPT 正致力于降低 AI 应用门槛，让人人都能成为 AI 的创造者与受益者。",183572,"2026-04-20T04:47:55",[13,36,27,14,15],"语言模型",{"id":38,"name":39,"github_repo":40,"description_zh":41,"stars":42,"difficulty_score":10,"last_commit_at":43,"category_tags":44,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":46,"name":47,"github_repo":48,"description_zh":49,"stars":50,"difficulty_score":24,"last_commit_at":51,"category_tags":52,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 
研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",161692,"2026-04-20T11:33:57",[14,13,36],{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":24,"last_commit_at":59,"category_tags":60,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":91,"env_os":92,"env_gpu":93,"env_ram":94,"env_deps":95,"category_tags":100,"github_topics":101,"view_count":24,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":105,"updated_at":106,"faqs":107,"releases":108},10169,"vkosuri\u002FCourseraMachineLearning","CourseraMachineLearning","Coursera Machine Learning By Prof. Andrew Ng","CourseraMachineLearning 是由吴恩达教授经典机器学习课程衍生出的开源学习资源库，旨在为初学者提供一套系统化的入门指南。它主要解决了机器学习理论抽象难懂、代码实现无从下手的问题，通过整合视频讲座索引、编程练习教程及测试用例，帮助用户将复杂的数学原理转化为实际的代码能力。\n\n这套资源非常适合希望从零开始掌握机器学习的开发者、学生及研究人员使用。其核心内容围绕“假设函数”与“代价函数”展开，深入浅出地讲解了如何通过梯度下降算法最小化误差，从而构建精准的预测模型。此外，它还详细剖析了偏差与方差的权衡关系，帮助使用者诊断并避免模型的过拟合或欠拟合问题。\n\nCourseraMachineLearning 的独特亮点在于提供了清晰的代码对比示例，直观展示了代价函数计算与梯度下降更新之间的逻辑差异，并辅以线性回归等经典算法的公式图表。无论是想夯实理论基础，还是寻求具体的编程实战参考，这里都是一份不可多得的优质学习资料。","# Machine Learning By Prof. Andrew Ng :star2::star2::star2::star2::star:\n\nThis page contains all my coursera machine learning courses and resources :book: by [Prof. Andrew Ng](http:\u002F\u002Fwww.andrewng.org\u002F) :man:\n\n# Table of Contents\n1. [Brief Intro](#brief-intro)\n2. [Video lectures Index](#video-lectures-index)\n3. [Programming Exercise Tutorials](#programming-exercise-tutorials)\n4. [Programming Exercise Test Cases](#programming-exercise-test-cases)\n5. [Useful Resources](#useful-resources)\n6. [Schedule](#schedule)\n7. [Extra Information](#extra-information)\n8. [Online E-Books](#online-e-books)\n9. [Additional Information](#additional-information)\n\n## Brief Intro\n\nMost of the course is about the **hypothesis function** and minimising **cost functions**.\n\n### Hypothesis\nA hypothesis is a certain function that we believe (or hope) is similar to the true function, the target function that we want to model. In the context of email spam classification, it would be the rule we came up with that allows us to separate spam from non-spam emails.\n\n### Cost Function\nThe cost function or **Sum of Squared Errors (SSE)** is a measure of how far away our hypothesis is from the optimal hypothesis. The closer our hypothesis matches the training examples, the smaller the value of the cost function. Theoretically, we would like J(θ)=0.
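\n\nAs a quick sanity check, the cost of the all-zero hypothesis on a toy dataset can be worked out by hand and confirmed in Octave once `computeCostMulti` (defined below) is on the path. This toy example is ours for illustration, not part of the course exercises:\n\n```matlab\n% Toy dataset: m = 2 examples, one feature plus the intercept column\nX = [1 1; 1 2];   % each row is [1, x]\ny = [1; 2];\ntheta = [0; 0];   % all-zero hypothesis: every prediction is 0\n% By hand: J = 1\u002F(2*2) * ((0-1)^2 + (0-2)^2) = 5\u002F4 = 1.25\nJ = computeCostMulti(X, y, theta)\n```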
\n\n### Gradient Descent\nGradient descent is an iterative minimization method. The gradient of the error function always points in the direction of the steepest ascent of the error function. Thus, we can start with a random weight vector and subsequently follow the negative gradient (using a learning rate alpha).\n\n#### Difference between the cost function and gradient descent functions\n\u003Ctable>\n    \u003Ccolgroup>\n        \u003Ccol width=\"50%\" \u002F>\n        \u003Ccol width=\"50%\" \u002F>\n    \u003C\u002Fcolgroup>\n    \u003Cthead>\n        \u003Ctr class=\"header\">\n            \u003Cth> Cost Function \u003C\u002Fth>\n            \u003Cth> Gradient Descent \u003C\u002Fth>\n        \u003C\u002Ftr>\n    \u003C\u002Fthead>\n    \u003Ctbody>\n        \u003Ctr valign=\"top\">\n            \u003Ctd markdown=\"span\">\n            \u003Cpre>\u003Ccode>\n            function J = computeCostMulti(X, y, theta)\n                m = length(y); % number of training examples\n                J = 0;\n                predictions =  X*theta;\n                sqerrors = (predictions - y).^2;\n                J = 1\u002F(2*m)* sum(sqerrors);\n            end\n            \u003C\u002Fcode>\u003C\u002Fpre>\n            \u003C\u002Ftd>\n            \u003Ctd markdown=\"span\">\n            \u003Cpre>\u003Ccode>\n            function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)\n                m = length(y); % number of training examples\n                J_history = zeros(num_iters, 1);\n                for iter = 1:num_iters\n                    predictions =  X * theta;\n                    updates = X' * (predictions - y);\n                    theta = theta - alpha * (1\u002Fm) * updates;\n                    J_history(iter) = computeCostMulti(X, y, theta);\n                end\n            end\n            \u003C\u002Fcode>\u003C\u002Fpre>\n            \u003C\u002Ftd>\n        \u003C\u002Ftr>\n    \u003C\u002Ftbody>\n\u003C\u002Ftable>
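\n\nIn practice the two functions are used together: `gradientDescentMulti` performs the parameter updates and evaluates `computeCostMulti` once per iteration, so `J_history` can be inspected to confirm the cost really is decreasing. A minimal usage sketch, reusing the toy `X` and `y` from the example above (the values of alpha and num_iters are illustrative, not prescribed by the course):\n\n```matlab\nalpha = 0.01;                  % learning rate\nnum_iters = 400;               % number of gradient steps\ntheta = zeros(size(X, 2), 1);  % start from the zero vector\n[theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters);\n% J_history should be decreasing; if it grows, reduce alpha\nplot(1:num_iters, J_history);\n```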
\n\n### Bias and Variance\nWhen we discuss prediction models, prediction errors can be decomposed into two main subcomponents we care about: error due to \"bias\" and error due to \"variance\". There is a tradeoff between a model's ability to minimize bias and variance. Understanding these two types of error can help us diagnose model results and avoid the mistake of over- or under-fitting.\n\nSource: http:\u002F\u002Fscott.fortmann-roe.com\u002Fdocs\u002FBiasVariance.html\n\n### Hypothesis and Cost Function Table\n\n| Algorithm \t| Hypothesis Function \t| Cost Function \t| Gradient Descent \t|\n|--------------------------------------------\t|-----------------------------------------------------------------------\t|-------------------------------------------------------------------------------\t|---------------------------------------------------------------------------------------\t|\n| Linear Regression \t| ![linear_regression_hypothesis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_a807b773150b.gif) \t| ![linear_regression_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_da8bde4b24a9.gif) \t|  \t|\n| Linear Regression with Multiple Variables \t| ![linear_regression_hypothesis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_a807b773150b.gif) \t| ![linear_regression_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_da8bde4b24a9.gif) \t| ![linear_regression_multi_var_gradient](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_3dfad1f5560d.gif) \t|\n| Logistic Regression \t| ![logistic_regression_hypothesis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_9d9082840762.gif) \t| ![logistic_regression_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_cb738ffc440b.gif) \t| ![logistic_regression_gradient](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_5d994361fb36.gif) \t|\n| Logistic Regression with Multiple Variables \t|  \t| ![logistic_regression_multi_var_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_8d4f989e2e46.gif) \t| ![logistic_regression_multi_var_gradient](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_96a84c89cd83.gif) \t|\n| Neural Networks \t|  \t| ![neural_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_2fc86078af32.gif) \t|  \t|\n\n### Regression with Pictures\n- [Linear Regression](http:\u002F\u002Fadit.io\u002Fposts\u002F2016-02-20-Linear-Regression-in-Pictures.html)\n- [Logistic Regression](http:\u002F\u002Fadit.io\u002Fposts\u002F2016-03-13-Logistic-Regression.html#non-linear-classification)\n\n## Video lectures Index\n[https:\u002F\u002Fclass.coursera.org\u002Fml\u002Flecture\u002Fpreview](https:\u002F\u002Fclass.coursera.org\u002Fml\u002Flecture\u002Fpreview)\n\n## Programming Exercise Tutorials\n[https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002Fm0ZdvjSrEeWddiIAC9pDDA](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002Fm0ZdvjSrEeWddiIAC9pDDA)\n\n## Programming Exercise Test 
Cases\n[https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002F0SxufTSrEeWPACIACw4G5w](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002F0SxufTSrEeWPACIACw4G5w)\n\n## Useful Resources\n[https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fresources\u002FNrY2G](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fresources\u002FNrY2G)\n\n## Schedule:\n### Week 1 - Due 07\u002F16\u002F17:\n- Welcome - [pdf](\u002Fhome\u002Fweek-1\u002Flectures\u002Fpdf\u002FLecture1.pdf) - [ppt](\u002Fhome\u002Fweek-1\u002Flectures\u002Fppt\u002FLecture1.pptx)\n- Linear regression with one variable - [pdf](\u002Fhome\u002Fweek-1\u002Flectures\u002Fpdf\u002FLecture2.pdf) - [ppt](\u002Fhome\u002Fweek-1\u002Flectures\u002Fppt\u002FLecture2.pptx)\n- Linear Algebra review (Optional) - [pdf](\u002Fhome\u002Fweek-1\u002Flectures\u002Fpdf\u002FLecture3.pdf) - [ppt](\u002Fhome\u002Fweek-1\u002Flectures\u002Fppt\u002FLecture3.pptx)\n- [Lecture Notes](\u002Fhome\u002Fweek-1\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-1\u002Ferrata.pdf)\n\n### Week 2 - Due 07\u002F23\u002F17:\n- Linear regression with multiple variables - [pdf](\u002Fhome\u002Fweek-2\u002Flectures\u002Fpdf\u002FLecture4.pdf) - [ppt](\u002Fhome\u002Fweek-2\u002Flectures\u002Fppt\u002FLecture4.pptx)\n- Octave tutorial - [pdf](\u002Fhome\u002Fweek-2\u002Flectures\u002Fpdf\u002FLecture5.pdf)\n- Programming Exercise 1: Linear Regression - [pdf](\u002Fhome\u002Fweek-2\u002Fexercises\u002Fmachine-learning-ex1\u002Fex1.pdf) - [Problem](\u002Fhome\u002Fweek-2\u002Fexercises\u002Fmachine-learning-ex1.zip) - [Solution](\u002Fhome\u002Fweek-2\u002Fexercises\u002Fmachine-learning-ex1\u002Fex1\u002F)\n- [Lecture Notes](\u002Fhome\u002Fweek-2\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-2\u002Ferrata.pdf)\n- [Program Exercise Notes](\u002Fhome\u002Fweek-2\u002Fexercises\u002FProgramming%20Ex.1.pdf)\n\n### Week 3 - Due 07\u002F30\u002F17:\n- Logistic regression - [pdf](\u002Fhome\u002Fweek-3\u002Flectures\u002Fpdf\u002FLecture6.pdf) - [ppt](\u002Fhome\u002Fweek-3\u002Flectures\u002Fppt\u002FLecture6.pptx)\n- Regularization - [pdf](\u002Fhome\u002Fweek-3\u002Flectures\u002Fpdf\u002FLecture7.pdf) - [ppt](\u002Fhome\u002Fweek-3\u002Flectures\u002Fppt\u002FLecture7.pptx)\n- Programming Exercise 2: Logistic Regression - [pdf](\u002Fhome\u002Fweek-3\u002Fexercises\u002Fmachine-learning-ex2\u002Fex2.pdf) - [Problem](\u002Fhome\u002Fweek-3\u002Fexercises\u002Fmachine-learning-ex2.zip) - [Solution](\u002Fhome\u002Fweek-3\u002Fexercises\u002Fmachine-learning-ex2\u002Fex2)\n- [Lecture Notes](\u002Fhome\u002Fweek-3\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-3\u002Ferrata.pdf)\n- [Program Exercise Notes](\u002Fhome\u002Fweek-3\u002Fexercises\u002FProgramming%20Ex.2.pdf)\n\n### Week 4 - Due 08\u002F06\u002F17:\n- Neural Networks: Representation - [pdf](\u002Fhome\u002Fweek-4\u002Flectures\u002Fpdf\u002FLecture8.pdf) - [ppt](\u002Fhome\u002Fweek-4\u002Flectures\u002Fppt\u002FLecture8.pptx)\n- Programming Exercise 3: Multi-class Classification and Neural Networks - [pdf](\u002Fhome\u002Fweek-4\u002Fexercises\u002Fmachine-learning-ex3\u002Fex3.pdf) - [Problem](\u002Fhome\u002Fweek-4\u002Fexercises\u002Fmachine-learning-ex3.zip) - [Solution](\u002Fhome\u002Fweek-4\u002Fexercises\u002Fmachine-learning-ex3\u002Fex3)\n- [Lecture 
Notes](\u002Fhome\u002Fweek-4\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-4\u002Ferrata.pdf)\n- [Program Exercise Notes](\u002Fhome\u002Fweek-4\u002Fexercises\u002FProgramming%20Ex.3.pdf)\n\n### Week 5 - Due 08\u002F13\u002F17:\n- Neural Networks: Learning - [pdf](\u002Fhome\u002Fweek-5\u002Flectures\u002Fpdf\u002FLecture9.pdf) - [ppt](\u002Fhome\u002Fweek-5\u002Flectures\u002Fppt\u002FLecture9.pptx)\n- Programming Exercise 4: Neural Networks Learning - [pdf](\u002Fhome\u002Fweek-5\u002Fexercises\u002Fmachine-learning-ex4\u002Fex4.pdf) - [Problem](\u002Fhome\u002Fweek-5\u002Fexercises\u002Fmachine-learning-ex4.zip) - [Solution](\u002Fhome\u002Fweek-5\u002Fexercises\u002Fmachine-learning-ex4\u002Fex4)\n- [Lecture Notes](\u002Fhome\u002Fweek-5\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-5\u002Ferrata.pdf)\n- [Program Exercise Notes](\u002Fhome\u002Fweek-4\u002Fexercises\u002FProgramming%20Ex.4.pdf)\n\n### Week 6 - Due 08\u002F20\u002F17:\n- Advice for applying machine learning - [pdf](\u002Fhome\u002Fweek-6\u002Flectures\u002Fpdf\u002FLecture10.pdf) - [ppt](\u002Fhome\u002Fweek-6\u002Flectures\u002Fppt\u002FLecture10.pptx)\n- Machine learning system design - [pdf](\u002Fhome\u002Fweek-6\u002Flectures\u002Fpdf\u002FLecture11.pdf) - [ppt](\u002Fhome\u002Fweek-6\u002Flectures\u002Fppt\u002FLecture11.pptx)\n- Programming Exercise 5: Regularized Linear Regression and Bias v.s. Variance - [pdf](\u002Fhome\u002Fweek-6\u002Fexercises\u002Fmachine-learning-ex5\u002Fex5.pdf) - [Problem](\u002Fhome\u002Fweek-6\u002Fexercises\u002Fmachine-learning-ex5.zip) - [Solution](\u002Fhome\u002Fweek-6\u002Fexercises\u002Fmachine-learning-ex5\u002Fex5)\n- [Lecture Notes](\u002Fhome\u002Fweek-6\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-6\u002Ferrata.pdf)\n- [Program Exercise Notes](\u002Fhome\u002Fweek-6\u002Fexercises\u002FProgramming%20Ex.5.pdf)\n\n### Week 7 - Due 08\u002F27\u002F17:\n- Support vector machines - [pdf](\u002Fhome\u002Fweek-7\u002Flectures\u002Fpdf\u002FLecture12.pdf) - [ppt](\u002Fhome\u002Fweek-7\u002Flectures\u002Fppt\u002FLecture12.pptx)\n- Programming Exercise 6: Support Vector Machines - [pdf](\u002Fhome\u002Fweek-7\u002Fexercises\u002Fmachine-learning-ex6\u002Fex6.pdf) - [Problem](\u002Fhome\u002Fweek-7\u002Fexercises\u002Fmachine-learning-ex6.zip) - [Solution](\u002Fhome\u002Fweek-7\u002Fexercises\u002Fmachine-learning-ex6\u002Fex6)\n- [Lecture Notes](\u002Fhome\u002Fweek-7\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-7\u002Ferrata.pdf)\n- [Program Exercise Notes](\u002Fhome\u002Fweek-7\u002Fexercises\u002FProgramming%20Ex.6.pdf)\n\n### Week 8 - Due 09\u002F03\u002F17:\n- Clustering - [pdf](\u002Fhome\u002Fweek-8\u002Flectures\u002Fpdf\u002FLecture13.pdf) - [ppt](\u002Fhome\u002Fweek-8\u002Flectures\u002Fppt\u002FLecture13.ppt)\n- Dimensionality reduction - [pdf](\u002Fhome\u002Fweek-8\u002Flectures\u002Fpdf\u002FLecture14.pdf) - [ppt](\u002Fhome\u002Fweek-8\u002Flectures\u002Fppt\u002FLecture14.ppt)\n- Programming Exercise 7: K-means Clustering and Principal Component Analysis - [pdf](\u002Fhome\u002Fweek-8\u002Fexercises\u002Fmachine-learning-ex7\u002Fex7.pdf) - [Problems](\u002Fhome\u002Fweek-8\u002Fexercises\u002Fmachine-learning-ex7.zip) - [Solution](\u002Fhome\u002Fweek-8\u002Fexercises\u002Fmachine-learning-ex7\u002Fex7)\n- [Lecture Notes](\u002Fhome\u002Fweek-8\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-8\u002Ferrata.pdf)\n- [Program Exercise 
Notes](\u002Fhome\u002Fweek-8\u002Fexercises\u002FProgramming%20Ex.7.pdf)\n\n### Week 9 - Due 09\u002F10\u002F17:\n- Anomaly Detection - [pdf](\u002Fhome\u002Fweek-9\u002Flectures\u002Fpdf\u002FLecture15.pdf) - [ppt](\u002Fhome\u002Fweek-9\u002Flectures\u002Fppt\u002FLecture15.ppt)\n- Recommender Systems - [pdf](\u002Fhome\u002Fweek-9\u002Flectures\u002Fpdf\u002FLecture16.pdf) - [ppt](\u002Fhome\u002Fweek-9\u002Flectures\u002Fppt\u002FLecture16.ppt)\n- Programming Exercise 8: Anomaly Detection and Recommender Systems - [pdf](\u002Fhome\u002Fweek-9\u002Fexercises\u002Fmachine-learning-ex8\u002Fex8.pdf) - [Problems](\u002Fhome\u002Fweek-9\u002Fexercises\u002Fmachine-learning-ex8.zip) - [Solution](\u002Fhome\u002Fweek-9\u002Fexercises\u002Fmachine-learning-ex8\u002Fex8)\n- [Lecture Notes](\u002Fhome\u002Fweek-9\u002Flectures\u002Fnotes.pdf)\n- [Errata](\u002Fhome\u002Fweek-9\u002Ferrata.pdf)\n- [Program Exercise Notes](\u002Fhome\u002Fweek-9\u002Fexercises\u002FProgramming%20Ex.8.pdf)\n\n### Week 10 - Due 09\u002F17\u002F17:\n- Large scale machine learning - [pdf](\u002Fhome\u002Fweek-10\u002Flectures\u002Fpdf\u002FLecture17.pdf) - [ppt](\u002Fhome\u002Fweek-10\u002Flectures\u002Fppt\u002FLecture17.ppt)\n- [Lecture Notes](\u002Fhome\u002Fweek-10\u002Flectures\u002Fnotes.pdf)\n\n### Week 11 - Due 09\u002F24\u002F17:\n- Application example: Photo OCR - [pdf](\u002Fhome\u002Fweek-11\u002Flectures\u002Fpdf\u002FLecture18.pdf) - [ppt](\u002Fhome\u002Fweek-11\u002Flectures\u002Fppt\u002FLecture18.ppt)\n\n## Extra Information\n\n- [Linear Algebra Review and Reference by Zico Kolter](\u002Fextra\u002Fcs229-linalg.pdf)\n- [CS229 Lecture notes](\u002Fextra\u002Fcs229-notes1.pdf)\n- [CS229 Problems](\u002Fextra\u002Fcs229-prob.pdf)\n- [Financial time series forecasting with machine learning techniques](\u002Fextra\u002Fmachine%20learning%20stocks.pdf)\n- [Octave Examples](\u002Fextra\u002Foctave_session.m)\n\n## Online E-Books\n\n- [Introduction to Machine Learning by Nils J. Nilsson](http:\u002F\u002Frobotics.stanford.edu\u002F~nilsson\u002FMLBOOK.pdf)\n- [Introduction to Machine Learning by Alex Smola and S.V.N. Vishwanathan](http:\u002F\u002Falex.smola.org\u002Fdrafts\u002Fthebook.pdf)\n- [Introduction to Data Science by Jeffrey Stanton](http:\u002F\u002Fsurface.syr.edu\u002Fcgi\u002Fviewcontent.cgi?article=1165&context=istpub)\n- [Bayesian Reasoning and Machine Learning by David Barber](http:\u002F\u002Fweb4.cs.ucl.ac.uk\u002Fstaff\u002FD.Barber\u002Fpmwiki\u002Fpmwiki.php?n=Brml.Online)\n- [Understanding Machine Learning, © 2014 by Shai Shalev-Shwartz and Shai Ben-David](http:\u002F\u002Fwww.cs.huji.ac.il\u002F~shais\u002FUnderstandingMachineLearning\u002Fcopy.html)\n- [Elements of Statistical Learning, by Hastie, Tibshirani, and Friedman](http:\u002F\u002Fstatweb.stanford.edu\u002F~tibs\u002FElemStatLearn\u002F)\n- [Pattern Recognition and Machine Learning, by Christopher M. 
Bishop](http:\u002F\u002Fusers.isr.ist.utl.pt\u002F~wurmd\u002FLivros\u002Fschool\u002FBishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf)\n\n## Additional Information\n\n## :boom: Course Status :point_down:\n![coursera_course_completion](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_c1c1c5b763fd.png)\n\n### Links\n- [What are the top 10 problems in deep learning for 2017?](https:\u002F\u002Fwww.quora.com\u002FWhat-are-the-top-10-problems-in-deep-learning-for-2017)\n- [When will the deep learning bubble burst?](https:\u002F\u002Fwww.quora.com\u002FWhen-will-the-deep-learning-bubble-burst)\n\n### Statistical Models\n\n- HMM - [Hidden Markov Model](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FHidden_Markov_model)\n- CRFs - [Conditional Random Fields](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FConditional_random_field)\n- LSI - [Latent Semantic Indexing](https:\u002F\u002Fwww.searchenginejournal.com\u002Fwhat-is-latent-semantic-indexing-seo-defined\u002F21642\u002F)\n- MRF - [Markov Random Fields](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMarkov_random_field)\n\n### NLP Forums\n\n- SIGIR - [Special Interest Group on Information Retrieval](http:\u002F\u002Fsigir.org\u002F)\n- ACL - [Association for Computational Linguistics](https:\u002F\u002Fwww.aclweb.org\u002Fportal\u002F)\n- NAACL - [The North American Chapter of the Association for Computational Linguistics](http:\u002F\u002Fnaacl.org\u002F)\n- EMNLP - [Empirical Methods in Natural Language Processing](http:\u002F\u002Femnlp2017.net\u002F)\n- NIPS - [Neural Information Processing Systems](https:\u002F\u002Fnips.cc\u002F)\n","# 机器学习——吴恩达教授 :star2::star2::star2::star2::star:\n\n本页面包含我所有的Coursera机器学习课程及资源 :book:，由[吴恩达教授](http:\u002F\u002Fwww.andrewng.org\u002F) :man: 提供。\n\n# 目录\n1. [简要介绍](#breif-intro)\n2. [视频讲座索引](#video-lectures-index)\n3. [编程练习教程](#programming-exercise-tutorials)\n4. [编程练习测试用例](#programming-exercise-test-cases)\n5. [实用资源](#useful-resources)\n6. [课程安排](#schedule)\n7. [附加信息](#extra-information)\n8. [在线电子书](#online-e-books)\n9. 
[其他信息](#aditional-information)\n\n## 简要介绍\n\n本课程主要讨论**假设函数**以及如何最小化**代价函数**。\n\n### 假设函数\n假设函数是我们认为（或希望）与真实函数相似的某种函数，即我们想要建模的目标函数。以电子邮件垃圾邮件分类为例，它就是我们用来区分垃圾邮件和非垃圾邮件的规则。\n\n### 代价函数\n代价函数，也称为**平方误差之和(SSE)**，用于衡量我们的假设函数与最优假设函数之间的差距。假设函数越接近训练样本，代价函数的值就越小。理论上，我们希望J(θ)=0。\n\n### 梯度下降法\n梯度下降法是一种迭代优化算法。误差函数的梯度始终指向误差函数上升最快的方向。因此，我们可以从一个随机的权重向量开始，然后沿着负梯度方向逐步更新参数（使用学习率α）。\n\n#### 代价函数与梯度下降法的区别\n\u003Ctable>\n    \u003Ccolgroup>\n        \u003Ccol width=\"50%\" \u002F>\n        \u003Ccol width=\"50%\" \u002F>\n    \u003C\u002Fcolgroup>\n    \u003Cthead>\n        \u003Ctr class=\"header\">\n            \u003Cth> 代价函数 \u003C\u002Fth>\n            \u003Cth> 梯度下降法 \u003C\u002Fth>\n        \u003C\u002Ftr>\n    \u003C\u002Fthead>\n    \u003Ctbody>\n        \u003Ctr valign=\"top\">\n            \u003Ctd markdown=\"span\">\n            \u003Cpre>\u003Ccode>\n            function J = computeCostMulti(X, y, theta)\n                m = length(y); % 训练样本数量\n                J = 0;\n                predictions =  X*theta;\n                sqerrors = (predictions - y).^2;\n                J = 1\u002F(2*m)* sum(sqerrors);\n            end\n            \u003C\u002Fcode>\u003C\u002Fpre>\n            \u003C\u002Ftd>\n            \u003Ctd markdown=\"span\">\n            \u003Cpre>\u003Ccode>\n            function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)    \n                m = length(y); % 训练样本数量\n                J_history = zeros(num_iters, 1);\n                for iter = 1:num_iters\n                    predictions =  X * theta;\n                    updates = X' * (predictions - y);\n                    theta = theta - alpha * (1\u002Fm) * updates;\n                    J_history(iter) = computeCostMulti(X, y, theta);\n                end\n            end\n            \u003C\u002Fcode>\u003C\u002Fpre>\n            \u003C\u002Ftd>\n        \u003C\u002Ftr>\n    \u003C\u002Ftbody>\n\u003C\u002Ftable>\n\n### 偏差与方差\n在讨论预测模型时，预测误差可以分解为两个主要部分：偏差误差和方差误差。模型在减少偏差和方差之间存在权衡。理解这两种误差有助于我们诊断模型结果，并避免过拟合或欠拟合的问题。\n\n来源：http:\u002F\u002Fscott.fortmann-roe.com\u002Fdocs\u002FBiasVariance.html\n\n### 假设函数与代价函数表\n\n| 算法 \t| 假设函数 \t| 代价函数 \t| 梯度下降法 \t|\n|--------------------------------------------\t|-----------------------------------------------------------------------\t|-------------------------------------------------------------------------------\t|---------------------------------------------------------------------------------------\t|\n| 线性回归 \t| ![linear_regression_hypothesis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_a807b773150b.gif) \t| ![linear_regression_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_da8bde4b24a9.gif) \t|  \t|\n| 多变量线性回归 \t| ![linear_regression_hypothesis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_a807b773150b.gif) \t| ![linear_regression_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_da8bde4b24a9.gif) \t| ![linear_regression_multi_var_gradient](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_3dfad1f5560d.gif) \t|\n| 逻辑回归 \t| ![logistic_regression_hypothesis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_9d9082840762.gif) \t| ![logistic_regression_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_cb738ffc440b.gif) \t| 
![logistic_regression_gradient](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_5d994361fb36.gif) \t|\n| 多变量逻辑回归 \t|  \t| ![logistic_regression_multi_var_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_8d4f989e2e46.gif) \t| ![logistic_regression_multi_var_gradient](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_96a84c89cd83.gif) \t|\n| 神经网络 \t|  \t| ![neural_cost](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_2fc86078af32.gif) \t|  \t|\n\n### 图示回归\n- [线性回归](http:\u002F\u002Fadit.io\u002Fposts\u002F2016-02-20-Linear-Regression-in-Pictures.html)\n- [逻辑回归](http:\u002F\u002Fadit.io\u002Fposts\u002F2016-03-13-Logistic-Regression.html#non-linear-classification)\n\n## 视频讲座索引\n[https:\u002F\u002Fclass.coursera.org\u002Fml\u002Flecture\u002Fpreview](https:\u002F\u002Fclass.coursera.org\u002Fml\u002Flecture\u002Fpreview)\n\n## 编程练习教程\n[https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002Fm0ZdvjSrEeWddiIAC9pDDA](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002Fm0ZdvjSrEeWddiIAC9pDDA)\n\n## 编程练习测试用例\n[https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002F0SxufTSrEeWPACIACw4G5w](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fdiscussions\u002Fall\u002Fthreads\u002F0SxufTSrEeWPACIACw4G5w)\n\n## 实用资源\n[https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fresources\u002FNrY2G](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fmachine-learning\u002Fresources\u002FNrY2G)\n\n## 课程安排：\n### 第1周 - 截止日期：2017年7月16日：\n- 欢迎 - [pdf](\u002Fhome\u002Fweek-1\u002Flectures\u002Fpdf\u002FLecture1.pdf) - [ppt](\u002Fhome\u002Fweek-1\u002Flectures\u002Fppt\u002FLecture1.pptx)\n- 单变量线性回归 - [pdf](\u002Fhome\u002Fweek-1\u002Flectures\u002Fpdf\u002FLecture2.pdf) - [ppt](\u002Fhome\u002Fweek-1\u002Flectures\u002Fppt\u002FLecture2.pptx)\n- 线性代数复习（可选） - [pdf](\u002Fhome\u002Fweek-1\u002Flectures\u002Fpdf\u002FLecture3.pdf) - [ppt](\u002Fhome\u002Fweek-1\u002Flectures\u002Fppt\u002FLecture3.pptx)\n- [讲义](\u002Fhome\u002Fweek-1\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-1\u002Ferrata.pdf)\n\n### 第2周 - 截止日期：2017年7月23日：\n- 多变量线性回归 - [pdf](\u002Fhome\u002Fweek-2\u002Flectures\u002Fpdf\u002FLecture4.pdf) - [ppt](\u002Fhome\u002Fweek-2\u002Flectures\u002Fppt\u002FLecture4.pptx)\n- Octave教程 - [pdf](\u002Fhome\u002Fweek-2\u002Flectures\u002Fpdf\u002FLecture5.pdf)\n- 编程练习1：线性回归 - [pdf](\u002Fhome\u002Fweek-2\u002Fexercises\u002Fmachine-learning-ex1\u002Fex1.pdf) - [题目](\u002Fhome\u002Fweek-2\u002Fexercises\u002Fmachine-learning-ex1.zip) - [解答](\u002Fhome\u002Fweek-2\u002Fexercises\u002Fmachine-learning-ex1\u002Fex1\u002F)\n- [讲义](\u002Fhome\u002Fweek-2\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-2\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-2\u002Fexercises\u002FProgramming%20Ex.1.pdf)\n\n### 第3周 - 截止日期：2017年7月30日：\n- 逻辑回归 - [pdf](\u002Fhome\u002Fweek-3\u002Flectures\u002Fpdf\u002FLecture6.pdf) - [ppt](\u002Fhome\u002Fweek-3\u002Flectures\u002Fppt\u002FLecture6.pptx)\n- 正则化 - [pdf](\u002Fhome\u002Fweek-3\u002Flectures\u002Fpdf\u002FLecture7.pdf) - 
[ppt](\u002Fhome\u002Fweek-3\u002Flectures\u002Fppt\u002FLecture7.pptx)\n- 编程练习2：逻辑回归 - [pdf](\u002Fhome\u002Fweek-3\u002Fexercises\u002Fmachine-learning-ex2\u002Fex2.pdf) - [题目](\u002Fhome\u002Fweek-3\u002Fexercises\u002Fmachine-learning-ex2.zip) - [解答](\u002Fhome\u002Fweek-3\u002Fexercises\u002Fmachine-learning-ex2\u002Fex2)\n- [讲义](\u002Fhome\u002Fweek-3\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-3\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-3\u002Fexercises\u002FProgramming%20Ex.2.pdf)\n\n### 第4周 - 截止日期：2017年8月6日：\n- 神经网络：表示 - [pdf](\u002Fhome\u002Fweek-4\u002Flectures\u002Fpdf\u002FLecture8.pdf) - [ppt](\u002Fhome\u002Fweek-4\u002Flectures\u002Fppt\u002FLecture8.pptx)\n- 编程练习3：多分类与神经网络 - [pdf](\u002Fhome\u002Fweek-4\u002Fexercises\u002Fmachine-learning-ex3\u002Fex3.pdf) - [题目](\u002Fhome\u002Fweek-4\u002Fexercises\u002Fmachine-learning-ex3.zip) - [解答](\u002Fhome\u002Fweek-4\u002Fexercises\u002Fmachine-learning-ex3\u002Fex3)\n- [讲义](\u002Fhome\u002Fweek-4\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-4\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-4\u002Fexercises\u002FProgramming%20Ex.3.pdf)\n\n### 第5周 - 截止日期：2017年8月13日：\n- 神经网络：学习 - [pdf](\u002Fhome\u002Fweek-5\u002Flectures\u002Fpdf\u002FLecture9.pdf) - [ppt](\u002Fhome\u002Fweek-5\u002Flectures\u002Fppt\u002FLecture9.pptx)\n- 编程练习4：神经网络学习 - [pdf](\u002Fhome\u002Fweek-5\u002Fexercises\u002Fmachine-learning-ex4\u002Fex4.pdf) - [题目](\u002Fhome\u002Fweek-5\u002Fexercises\u002Fmachine-learning-ex4.zip) - [解答](\u002Fhome\u002Fweek-5\u002Fexercises\u002Fmachine-learning-ex4\u002Fex4)\n- [讲义](\u002Fhome\u002Fweek-5\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-5\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-4\u002Fexercises\u002FProgramming%20Ex.4.pdf)\n\n### 第6周 - 截止日期：2017年8月20日：\n- 机器学习应用建议 - [pdf](\u002Fhome\u002Fweek-6\u002Flectures\u002Fpdf\u002FLecture10.pdf) - [ppt](\u002Fhome\u002Fweek-6\u002Flectures\u002Fppt\u002FLecture10.pptx)\n- 机器学习系统设计 - [pdf](\u002Fhome\u002Fweek-6\u002Flectures\u002Fpdf\u002FLecture11.pdf) - [ppt](\u002Fhome\u002Fweek-6\u002Flectures\u002Fppt\u002FLecture11.pptx)\n- 编程练习5：正则化线性回归及偏差与方差 - [pdf](\u002Fhome\u002Fweek-6\u002Fexercises\u002Fmachine-learning-ex5\u002Fex5.pdf) - [题目](\u002Fhome\u002Fweek-6\u002Fexercises\u002Fmachine-learning-ex5.zip) - [解答](\u002Fhome\u002Fweek-6\u002Fexercises\u002Fmachine-learning-ex5\u002Fex5)\n- [讲义](\u002Fhome\u002Fweek-6\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-6\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-6\u002Fexercises\u002FProgramming%20Ex.5.pdf)\n\n### 第7周 - 截止日期：2017年8月27日：\n- 支持向量机 - [pdf](\u002Fhome\u002Fweek-7\u002Flectures\u002Fpdf\u002FLecture12.pdf) - [ppt](\u002Fhome\u002Fweek-7\u002Flectures\u002Fppt\u002FLecture12.pptx)\n- 编程练习6：支持向量机 - [pdf](\u002Fhome\u002Fweek-7\u002Fexercises\u002Fmachine-learning-ex6\u002Fex6.pdf) - [题目](\u002Fhome\u002Fweek-7\u002Fexercises\u002Fmachine-learning-ex6.zip) - [解答](\u002Fhome\u002Fweek-7\u002Fexercises\u002Fmachine-learning-ex6\u002Fex6)\n- [讲义](\u002Fhome\u002Fweek-7\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-7\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-7\u002Fexercises\u002FProgramming%20Ex.6.pdf)\n\n### 第8周 - 截止日期：2017年9月3日：\n- 聚类 - [pdf](\u002Fhome\u002Fweek-8\u002Flectures\u002Fpdf\u002FLecture13.pdf) - [ppt](\u002Fhome\u002Fweek-8\u002Flectures\u002Fppt\u002FLecture13.ppt)\n- 降维 - [pdf](\u002Fhome\u002Fweek-8\u002Flectures\u002Fpdf\u002FLecture14.pdf) - 
[ppt](\u002Fhome\u002Fweek-8\u002Flectures\u002Fppt\u002FLecture14.ppt)\n- 编程练习7：K均值聚类与主成分分析 - [pdf](\u002Fhome\u002Fweek-8\u002Fexercises\u002Fmachine-learning-ex7\u002Fex7.pdf) - [题目](\u002Fhome\u002Fweek-8\u002Fexercises\u002Fmachine-learning-ex7.zip) - [解答](\u002Fhome\u002Fweek-8\u002Fexercises\u002Fmachine-learning-ex7\u002Fex7)\n- [讲义](\u002Fhome\u002Fweek-8\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-8\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-8\u002Fexercises\u002FProgramming%20Ex.7.pdf)\n\n### 第9周 - 截止日期：2017年9月10日：\n- 异常检测 - [pdf](\u002Fhome\u002Fweek-9\u002Flectures\u002Fpdf\u002FLecture15.pdf) - [ppt](\u002Fhome\u002Fweek-9\u002Flectures\u002Fppt\u002FLecture15.ppt)\n- 推荐系统 - [pdf](\u002Fhome\u002Fweek-9\u002Flectures\u002Fpdf\u002FLecture16.pdf) - [ppt](\u002Fhome\u002Fweek-9\u002Flectures\u002Fppt\u002FLecture16.ppt)\n- 编程练习8：异常检测与推荐系统 - [pdf](\u002Fhome\u002Fweek-9\u002Fexercises\u002Fmachine-learning-ex8\u002Fex8.pdf) - [题目](\u002Fhome\u002Fweek-9\u002Fexercises\u002Fmachine-learning-ex8.zip) - [解答](\u002Fhome\u002Fweek-9\u002Fexercises\u002Fmachine-learning-ex8\u002Fex8)\n- [讲义](\u002Fhome\u002Fweek-9\u002Flectures\u002Fnotes.pdf)\n- [勘误表](\u002Fhome\u002Fweek-9\u002Ferrata.pdf)\n- [编程练习说明](\u002Fhome\u002Fweek-9\u002Fexercises\u002FProgramming%20Ex.8.pdf)\n\n### 第10周 - 截止日期：2017年9月17日：\n- 大规模机器学习 - [pdf](\u002Fhome\u002Fweek-10\u002Flectures\u002Fpdf\u002FLecture17.pdf) - [ppt](\u002Fhome\u002Fweek-10\u002Flectures\u002Fppt\u002FLecture17.ppt)\n- [讲义](\u002Fhome\u002Fweek-10\u002Flectures\u002Fnotes.pdf)\n\n### 第11周 - 截止日期：2017年9月24日：\n- 应用实例：照片OCR - [pdf](\u002Fhome\u002Fweek-11\u002Flectures\u002Fpdf\u002FLecture18.pdf) - [ppt](\u002Fhome\u002Fweek-11\u002Flectures\u002Fppt\u002FLecture18.ppt)\n\n## 附加信息\n\n- [线性代数复习与参考 Zico Kolter](\u002Fextra\u002Fcs229-linalg.pdf)\n- [CS229讲义](\u002Fextra\u002Fcs229-notes1.pdf)\n- [CS229习题](\u002Fextra\u002Fcs229-prob.pdf)\n- [基于机器学习技术的金融时间序列预测](\u002Fextra\u002Fmachine%20learning%20stocks.pdf)\n- [Octave示例](\u002Fextra\u002Foctave_session.m)\n\n## 在线电子书\n\n- [尼尔斯·J·尼尔森《机器学习导论》](http:\u002F\u002Frobotics.stanford.edu\u002F~nilsson\u002FMLBOOK.pdf)\n- [亚历克斯·斯莫拉和S.V.N.维什瓦纳坦《机器学习导论》](http:\u002F\u002Falex.smola.org\u002Fdrafts\u002Fthebook.pdf)\n- [杰弗里·斯坦顿《数据科学导论》](http:\u002F\u002Fsurface.syr.edu\u002Fcgi\u002Fviewcontent.cgi?article=1165&context=istpub)\n- [戴维·巴伯《贝叶斯推理与机器学习》](http:\u002F\u002Fweb4.cs.ucl.ac.uk\u002Fstaff\u002FD.Barber\u002Fpmwiki\u002Fpmwiki.php?n=Brml.Online)\n- [沙伊·沙列夫-施瓦茨和沙伊·本-大卫《理解机器学习》，2014年版](http:\u002F\u002Fwww.cs.huji.ac.il\u002F~shais\u002FUnderstandingMachineLearning\u002Fcopy.html)\n- [哈斯蒂、蒂布希拉尼和弗里德曼《统计学习要素》](http:\u002F\u002Fstatweb.stanford.edu\u002F~tibs\u002FElemStatLearn\u002F)\n- [克里斯托弗·M·毕晓普《模式识别与机器学习》](http:\u002F\u002Fusers.isr.ist.utl.pt\u002F~wurmd\u002FLivros\u002Fschool\u002FBishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf)\n\n## 其他信息\n\n## :boom: 课程状态 :point_down:\n![coursera_course_completion](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_readme_c1c1c5b763fd.png)\n\n### 链接\n- [2017年深度学习领域的十大问题是什么？](https:\u002F\u002Fwww.quora.com\u002FWhat-are-the-top-10-problems-in-deep-learning-for-2017)\n- [深度学习泡沫何时会破裂？](https:\u002F\u002Fwww.quora.com\u002FWhen-will-the-deep-learning-bubble-burst)\n\n### 统计模型\n\n- HMM - [隐马尔可夫模型](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FHidden_Markov_model)\n- CRFs - [条件随机场](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FConditional_random_field)\n- LSI - 
[潜在语义索引](https:\u002F\u002Fwww.searchenginejournal.com\u002Fwhat-is-latent-semantic-indexing-seo-defined\u002F21642\u002F)\n- MRF - [马尔可夫随机场](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMarkov_random_field)\n\n### 自然语言处理论坛\n\n- SIGIR - [信息检索专题组](http:\u002F\u002Fsigir.org\u002F)\n- ACL - [计算语言学协会](https:\u002F\u002Fwww.aclweb.org\u002Fportal\u002F)\n- NAACL - [计算语言学协会北美分会](http:\u002F\u002Fnaacl.org\u002F)\n- EMNLP - [自然语言处理中的经验方法](http:\u002F\u002Femnlp2017.net\u002F)\n- NIPS - [神经信息处理系统大会](https:\u002F\u002Fnips.cc\u002F)","# Coursera 机器学习 (Andrew Ng) 快速上手指南\n\n本指南基于吴恩达 (Andrew Ng) 教授的经典机器学习课程开源资源整理，旨在帮助开发者快速搭建学习环境并开始练习。\n\n## 环境准备\n\n本课程主要使用 **Octave** 或 **MATLAB** 进行编程练习。对于大多数开发者，推荐使用免费开源的 GNU Octave。\n\n*   **操作系统**: Windows, macOS, 或 Linux\n*   **核心依赖**:\n    *   **GNU Octave** (推荐版本 4.0+)：用于运行 `.m` 脚本文件。\n    *   **Git**: 用于克隆代码仓库。\n    *   **解压工具**: 用于处理课程提供的 `.zip` 练习文件。\n*   **前置知识**: 基础线性代数知识（仓库中包含复习文档）。\n\n### 安装 Octave (国内加速建议)\n由于官方源下载较慢，建议使用国内镜像源安装：\n\n*   **Ubuntu\u002FDebian**:\n    ```bash\n    sudo apt-get update\n    sudo apt-get install octave\n    ```\n*   **macOS (使用 Homebrew)**:\n    ```bash\n    brew install octave\n    ```\n*   **Windows**:\n    访问清华大学开源软件镜像站或国内其他镜像站下载 Octave 安装包进行图形化安装。\n\n## 安装步骤\n\n1.  **克隆项目仓库**\n    获取包含课程笔记、幻灯片和解决方案的代码库：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fvkosuri\u002FCourseraMachineLearning.git\n    cd CourseraMachineLearning\n    ```\n\n2.  **获取编程练习素材**\n    课程每周的编程作业（含原始骨架代码和数据集）通常以 `.zip` 形式提供。根据下方的【学习路线】，进入对应周次的目录，解压 `Problem` 链接指向的压缩包。\n    \n    例如，准备第一个编程练习（位于 week-2 目录）：\n    ```bash\n    # 假设已下载 machine-learning-ex1.zip 到当前目录\n    unzip machine-learning-ex1.zip -d week-2\u002Fexercises\u002F\n    ```\n\n3.  **验证环境**\n    进入练习目录，启动 Octave 并尝试运行简单的代价函数计算脚本，确保无报错：\n    ```bash\n    octave\n    ```\n\n## 基本使用\n\n本课程的核心是通过实现 **假设函数 (Hypothesis)**、**代价函数 (Cost Function)** 和 **梯度下降 (Gradient Descent)** 来训练模型。\n\n### 1. 启动练习\n进入具体的练习文件夹（以第一个线性回归练习 ex1 为例），在终端输入 `octave` 启动交互环境。\n\n### 2. 核心代码实现示例\n以下是课程中核心的两个函数实现逻辑，您需要在对应的 `.m` 文件中填充代码：\n\n**计算代价函数 (computeCostMulti.m):**\n用于衡量假设函数与真实值的误差（均方误差）。\n```matlab\nfunction J = computeCostMulti(X, y, theta)\n    m = length(y); % number of training examples\n    J = 0;\n    predictions =  X*theta;\n    sqerrors = (predictions - y).^2;\n    J = 1\u002F(2*m)* sum(sqerrors);\nend\n```\n\n**执行梯度下降 (gradientDescentMulti.m):**\n通过迭代更新参数 $\\theta$ 以最小化代价函数。\n```matlab\nfunction [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)\n    m = length(y); % number of training examples\n    J_history = zeros(num_iters, 1);\n    for iter = 1:num_iters\n        predictions =  X * theta;\n        updates = X' * (predictions - y);\n        theta = theta - alpha * (1\u002Fm) * updates;\n        J_history(iter) = computeCostMulti(X, y, theta);\n    end\nend\n```\n\n### 3. 运行与测试\n在 Octave 命令行中调用上述函数进行测试：\n```matlab\n% 初始化数据 (具体变量名参考 ex1.m 主脚本)\nX = [ones(m, 1), data(:,1)]; \ny = data(:,2);\ntheta = zeros(2, 1);\n\n% 计算初始代价\nJ = computeCostMulti(X, y, theta)\n\n% 运行梯度下降\nalpha = 0.01;\nnum_iters = 1500;\n[theta, J_hist] = gradientDescentMulti(X, y, theta, alpha, num_iters);\n```
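\n\n若想确认梯度下降确实在收敛，可以把代价随迭代次数的变化曲线画出来（以下为补充示例，沿用上一步得到的 `J_hist` 变量）：\n\n```matlab\n% 代价应随迭代次数单调下降；若曲线反而上升，请减小学习率 alpha\nplot(1:num_iters, J_hist, '-b', 'LineWidth', 2);\nxlabel('迭代次数');\nylabel('代价 J');\n```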
\n\n### 4. 学习路线参考\n按照以下周次顺序逐步完成视频学习与编程作业：\n\n*   **Week 1**: 单变量线性回归 (Linear Regression)\n*   **Week 2**: 多变量线性回归 & Octave 教程\n*   **Week 3**: 逻辑回归 (Logistic Regression) & 正则化\n*   **Week 4-5**: 神经网络 (Neural Networks) 表示与学习\n*   **Week 6**: 机器学习建议 & 偏差\u002F方差分析\n*   **Week 7**: 支持向量机 (SVM)\n*   **Week 8**: 聚类 (K-Means) & 降维 (PCA)\n*   **Week 9**: 异常检测 & 推荐系统\n*   **Week 10**: 大规模机器学习\n*   **Week 11**: 应用实例：照片OCR\n\n详细课件 (PDF\u002FPPT) 和习题说明位于各周次目录下的 `lectures` 和 `exercises` 文件夹中。","一名刚入门数据科学的毕业生正试图复现吴恩达教授课程中的线性回归算法，以完成公司分配的客户流失预测原型任务。\n\n### 没有 CourseraMachineLearning 时\n- **概念理解断层**：面对“假设函数”与“代价函数”的抽象定义，难以直观理解两者在代码层面的具体映射关系，导致公式推导卡壳。\n- **调试效率低下**：在手动编写梯度下降算法时，因无法确认矩阵维度是否匹配或更新逻辑是否正确，花费数小时排查却找不到错误源头。\n- **资源分散混乱**：需要在论坛、视频网站和零散博客间反复跳转寻找对应的编程练习提示和测试用例，学习路径支离破碎。\n- **缺乏验证标准**：没有官方提供的测试用例（Test Cases）作为基准，无法判断自己实现的算法收敛结果是否准确，只能凭感觉猜测。\n\n### 使用 CourseraMachineLearning 后\n- **理论代码互通**：通过仓库中清晰的对比表格和代码片段，迅速看懂了从数学公式到 `computeCostMulti` 函数的具体实现逻辑，消除了认知障碍。\n- **快速定位错误**：利用现成的编程练习教程和测试用例，立即验证了梯度下降中的参数更新步骤，将原本半天的调试时间缩短至几分钟。\n- **一站式学习流**：直接从目录索引获取配套的视频讲座、电子书及额外资源，无需切换平台即可按部就班地完成从理论到实战的闭环。\n- **结果可量化**：借助提供的标准测试数据，能够精确比对模型输出的代价函数值，确保算法在避免过拟合的同时达到了最优收敛状态。\n\nCourseraMachineLearning 将碎片化的机器学习知识整合为结构化的实战指南，让初学者能以最低试错成本掌握核心算法精髓。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvkosuri_CourseraMachineLearning_c67dba15.png","vkosuri","Mallikarjunarao Kosuri","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fvkosuri_7e061408.jpg","Machine Learner, Developer and Tester","@ADTRAN @ChatterBot @MarketSquare @F5Networks","Hyderabad","malli.kv2@gmail.com",null,"https:\u002F\u002Fvkosuri.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fvkosuri",[83],{"name":84,"color":85,"percentage":86},"MATLAB","#e16737",100,775,310,"2026-04-19T05:32:31","MIT",1,"未说明 (代码为 Octave\u002FMATLAB，通常支持 Windows, macOS, Linux)","不需要 (基于传统机器学习算法，使用 CPU 即可)","未说明 (常规教学数据量，4GB+ 通常足够)",{"notes":96,"python":97,"dependencies":98},"该项目是吴恩达教授机器学习课程的资料库，核心编程练习使用 Octave 或 MATLAB 编写，而非 Python。无需安装 GPU 驱动或 CUDA。用户需自行安装 Octave (开源) 或 MATLAB (商业软件) 来运行代码文件 (.m)。","不需要 (主要编程语言为 Octave\u002FMATLAB)",[99],"Octave 或 MATLAB",[14],[102,103,104],"coursera","coursera-machine-learning","machine-learning","2026-03-27T02:49:30.150509","2026-04-20T20:23:03.320918",[],[]]