[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-hadrienj--deepLearningBook-Notes":3,"tool-hadrienj--deepLearningBook-Notes":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,2,"2026-04-10T11:13:16",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 
是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 
既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75832,"2026-04-17T21:58:25",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":71,"readme_en":72,"readme_zh":73,"quickstart_zh":74,"use_case_zh":75,"hero_image_url":76,"owner_login":77,"owner_name":78,"owner_avatar_url":79,"owner_bio":80,"owner_company":81,"owner_location":82,"owner_email":83,"owner_twitter":84,"owner_website":85,"owner_url":86,"languages":87,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":29,"env_os":96,"env_gpu":97,"env_ram":97,"env_deps":98,"category_tags":104,"github_topics":81,"view_count":10,"oss_zip_url":81,"oss_zip_packed_at":81,"status":22,"created_at":105,"updated_at":106,"faqs":107,"releases":137},8694,"hadrienj\u002FdeepLearningBook-Notes","deepLearningBook-Notes","Notes on the Deep Learning book from Ian Goodfellow, Yoshua Bengio and Aaron Courville (2016)","deepLearningBook-Notes 是一套针对经典著作《深度学习》（由 Ian Goodfellow 等三位权威撰写）第二章“线性代数”的深度解读笔记。该项目旨在通过直观的图解、生动的示例和可执行的 Python 代码，将书中抽象的数学理论转化为易于理解的知识，帮助学习者跨越从理论定义到实际应用的鸿沟。\n\n许多初学者在面对深度学习所需的线性代数基础时，常因概念过于抽象而感到困难。deepLearningBook-Notes 正是为了解决这一痛点而生。它不仅提供了详细的文字导读，更强调通过可视化图表展示矩阵作为空间线性变换的本质，并配套了基于 Python\u002FNumpy 的交互式笔记本。这种“理论 + 代码 + 
图形”三位一体的学习方式，让用户能够亲手实验并验证数学概念，从而更深刻地掌握特征分解、奇异值分解（SVD）及主成分分析（PCA）等核心算法背后的原理。\n\n这套资源非常适合希望夯实数学基础的机器学习初学者、数据科学从业者以及想要深入理解算法机制的开发者和研究人员。如果你渴望透过代码看清深度学习的数学基石，deepLearningBook-Notes 将是你提升技能、构建扎实知识体系的理想伴侣。","deepLearningBook-Notes 是一套针对经典著作《深度学习》（由 Ian Goodfellow 等三位权威撰写）第二章“线性代数”的深度解读笔记。该项目旨在通过直观的图解、生动的示例和可执行的 Python 代码，将书中抽象的数学理论转化为易于理解的知识，帮助学习者跨越从理论定义到实际应用的鸿沟。\n\n许多初学者在面对深度学习所需的线性代数基础时，常因概念过于抽象而感到困难。deepLearningBook-Notes 正是为了解决这一痛点而生。它不仅提供了详细的文字导读，更强调通过可视化图表展示矩阵作为空间线性变换的本质，并配套了基于 Python\u002FNumpy 的交互式笔记本。这种“理论 + 代码 + 图形”三位一体的学习方式，让用户能够亲手实验并验证数学概念，从而更深刻地掌握特征分解、奇异值分解（SVD）及主成分分析（PCA）等核心算法背后的原理。\n\n这套资源非常适合希望夯实数学基础的机器学习初学者、数据科学从业者以及想要深入理解算法机制的开发者和研究人员。如果你渴望透过代码看清深度学习的数学基石，deepLearningBook-Notes 将是你提升技能、构建扎实知识体系的理想伴侣。","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_26e366e75eca.jpg\" width=\"400\" alt=\"Cover of the deep learning book by Goodfellow, Bengio and Courville\" title=\"The Deep Learning Book - Goodfellow, I., Bengio, Y., and Courville, A. (2016)\">\n\n**The Deep Learning Book - Goodfellow, I., Bengio, Y., and Courville, A. (2016)**\n\nThis content is part of a series following chapter 2 on linear algebra from the [Deep Learning Book](http:\u002F\u002Fwww.deeplearningbook.org\u002F) by Goodfellow, I., Bengio, Y., and Courville, A. (2016). It aims to provide intuitions\u002Fdrawings\u002Fpython code on mathematical theories and is constructed as my understanding of these concepts.\n\n\n# Boost your data science skills. Learn linear algebra.\n\nI'd like to introduce a series of blog posts and their corresponding Python Notebooks gathering notes on [the Deep Learning Book](http:\u002F\u002Fwww.deeplearningbook.org\u002F) from Ian Goodfellow, Yoshua Bengio, and Aaron Courville (2016). The aim of these notebooks is to help beginners\u002Fadvanced beginners grasp linear algebra concepts underlying deep learning and machine learning. 
Acquiring these skills can boost your ability to understand and apply various data science algorithms. In my opinion, it is one of the bedrocks of machine learning, deep learning and data science.\n\nThese notes cover chapter 2 on Linear Algebra. I liked this chapter because it gives a sense of what is most used in the domain of machine learning and deep learning. It is thus a great syllabus for anyone who wants to dive into deep learning and acquire the concepts of linear algebra useful to better understand deep learning algorithms.\n\nYou can find all the articles [here](https:\u002F\u002Fhadrienj.github.io).\n\n# Getting started with linear algebra\n\nThe goal of this series is to provide content for beginners who want to understand enough linear algebra to be comfortable with machine learning and deep learning. However, I think that the chapter on linear algebra from the [Deep Learning book](http:\u002F\u002Fwww.deeplearningbook.org\u002F) is a bit tough for beginners. So I decided to produce code, examples and drawings for each part of this chapter in order to add steps that may not be obvious for beginners. I also think that you can convey as much information and knowledge through examples as through general definitions. The illustrations are a way to see the big picture of an idea. Finally, I think that coding is a great tool to experiment with these abstract mathematical notions. Along with pen and paper, it adds a layer of experimentation that can push your understanding toward new horizons.\n\nGraphical representation is also very helpful to understand linear algebra. I tried to bind the concepts with plots (and the code to produce them). The type of representation I liked most while doing this series is that you can see any matrix as a linear transformation of the space. 
In several chapters we will extend this idea and see how it can be useful to understand eigendecomposition, Singular Value Decomposition (SVD) and Principal Components Analysis (PCA).\n\n# The use of Python\u002FNumpy\n\nIn addition, I noticed that creating and reading examples is really helpful for understanding the theory. This is why I built Python notebooks. The goal is twofold:\n\n1. To provide a starting point for using Python\u002FNumpy to apply linear algebra concepts. And since the final goal is to use linear algebra concepts for data science, it seems natural to continuously go back and forth between theory and code. All you will need is a working Python installation with major mathematical libraries like Numpy\u002FScipy\u002FMatplotlib.\n\n2. To give a more concrete vision of the underlying concepts. I found it hugely useful to play and experiment with these notebooks in order to build my understanding of somewhat complicated theoretical concepts or notations. I hope that reading them will be as useful.\n\n# Syllabus\n\nThe syllabus follows exactly the [Deep Learning Book](http:\u002F\u002Fwww.deeplearningbook.org\u002F), so you can find more details there if you can't understand one specific point while you are reading. Here is a short description of the content:\n\n1. [Scalars, Vectors, Matrices and Tensors](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.1-Scalars-Vectors-Matrices-and-Tensors\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_a54afef5cb6e.png\" width=\"400\" alt=\"An example of a scalar, a vector, a matrix and a tensor\" title=\"Difference between a scalar, a vector, a matrix and a tensor\">\n\n    **Difference between a scalar, a vector, a matrix and a tensor**\n\n    A light introduction to vectors, matrices, transpose and basic operations (addition of vectors and matrices). It also introduces Numpy functions and ends with a word on broadcasting.\n\n2. 
[Multiplying Matrices and Vectors](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.2-Multiplying-Matrices-and-Vectors\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_f7aaab3c772f.png\" width=\"400\" alt=\"An example of how to calculate the dot product\" title=\"The dot product explained\">\n\n    **The dot product explained**\n\n    This chapter is mainly on the dot product (vector and\u002For matrix multiplication). We will also see some of its properties. Then, we will see how to synthesize a system of linear equations using matrix notation. This is a major process for the following chapters.\n\n3. [Identity and Inverse Matrices](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.3-Identity-and-Inverse-Matrices\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_1b19bde0512e.png\" width=\"200\" alt=\"Example of an identity matrix\" title=\"An identity matrix\">\n\n    **An identity matrix**\n\n    We will see two important matrices: the identity matrix and the inverse matrix. We will see why they are important in linear algebra and how to use them with Numpy. Finally, we will see an example on how to solve a system of linear equations with the inverse matrix.\n\n4. 
[Linear Dependence and Span](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.4-Linear-Dependence-and-Span\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_c238bf183f25.png\" width=\"700\" alt=\"Examples of systems of equations with 0, 1 and an infinite number of solutions\" title=\"System of equations with 0, 1 and an infinite number of solutions\">\n\n    **A system of equations has no solution, 1 solution or an infinite number of solutions**\n\n    In this chapter we will continue to study systems of linear equations. We will see that such systems have either no solution, exactly one solution, or an infinite number of solutions. We will see the intuition, the graphical representation and the proof behind this statement. Then we will go back to the matrix form of the system and consider what Gilbert Strang calls the *row figure* (we are looking at the rows, that is to say multiple equations) and the *column figure* (looking at the columns, that is to say the linear combination of the coefficients). We will also see what a linear combination is. Finally, we will see examples of overdetermined and underdetermined systems of equations.\n\n5. [Norms](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.5-Norms\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_00f214d5ab39.png\" width=\"400\" alt=\"Representation of the squared L2 norm in 3 dimensions\" title=\"Representation of the squared L2 norm in 3 dimensions\">\n\n    **Shape of a squared L2 norm in 3 dimensions**\n\n    The norm of a vector is a function that takes a vector as input and outputs a positive value. It can be thought of as the *length* of the vector. It is, for example, used to evaluate the distance between the prediction of a model and the actual value. 
We will see different kinds of norms ($L^0$, $L^1$, $L^2$...) with examples.\n\n6. [Special Kinds of Matrices and Vectors](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_adc583bf53bb.png\" width=\"400\" alt=\"Example of a diagonal matrix and of a symmetric matrix\" title=\"Example of a diagonal matrix and of a symmetric matrix\">\n\n    **A diagonal (left) and a symmetric matrix (right)**\n\n    We have seen in [2.3](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.3-Identity-and-Inverse-Matrices\u002F) some special matrices that are very interesting. We will see other types of vectors and matrices in this chapter. It is not a big chapter, but it is important for understanding the next ones.\n\n7. [Eigendecomposition](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.7-Eigendecomposition\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_37890db25290.png\" width=\"300\" alt=\"output_59_0\">\n\n    We will see some major concepts of linear algebra in this chapter. We will start by getting some ideas on eigenvectors and eigenvalues. We will see that a matrix can be seen as a linear transformation and that applying a matrix to its eigenvectors gives new vectors with the same direction. Then we will see how to express quadratic equations in matrix form. We will see that the eigendecomposition of the matrix corresponding to the quadratic equation can be used to find its minimum and maximum. As a bonus, we will also see how to visualize linear transformations in Python!\n\n8. 
[Singular Value Decomposition](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.8-Singular-Value-Decomposition\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_711095553548.png\" width=\"300\" alt=\"output_35_7\">\n\n    We will see another way to decompose matrices: the Singular Value Decomposition or SVD. Since the beginning of this series I have emphasized the fact that you can see matrices as linear transformations of the space. With the SVD, you decompose a matrix into three other matrices. We will see that we can look at these new matrices as *sub-transformations* of the space. Instead of doing the transformation in one movement, we decompose it into three movements. As a bonus, we will apply the SVD to image processing. We will see the effect of SVD on an example image of Lucy the goose. So keep on reading!\n\n9. [The Moore-Penrose Pseudoinverse](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.9-The-Moore-Penrose-Pseudoinverse\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_06ea8add38b0.png\" width=\"300\" alt=\"output_44_0\">\n\n    We saw that not all matrices have an inverse. It is unfortunate because the inverse is used to solve systems of equations. In some cases, a system of equations has no solution, and thus the inverse doesn’t exist. However, it can be useful to find a value that is almost a solution (in terms of minimizing the error). This can be done with the pseudoinverse! We will see, for instance, how we can find the best-fit line for a set of data points with the pseudoinverse.\n\n10. 
[The Trace Operator](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.10-The-Trace-Operator\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_496d5616562a.png\" width=\"200\" alt=\"Calculating the trace of a matrix\" title=\"Calculating the trace of a matrix\">\n\n    **The trace of a matrix**\n\n    We will see what the trace of a matrix is. It will be needed for the last chapter on Principal Components Analysis (PCA).\n\n11. [The Determinant](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.11-The-determinant\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_f3fd4c866af7.png\" width=\"400\" alt=\"Comparison of positive and negative determinant\" title=\"Comparison of the effect of positive and negative determinants\">\n\n    **Link between the determinant of a matrix and the transformation associated with it**\n\n    This chapter is about the determinant of a matrix. This special number can tell us a lot of things about our matrix!\n\n12. [Example: Principal Components Analysis](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.12-Example-Principal-Components-Analysis\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_11fa1dc53e06.png\" width=\"400\" alt=\"Mechanism of the gradient descent algorithm\" title=\"Mechanism of the gradient descent algorithm\">\n\n    **Gradient descent**\n\n    This is the last chapter of this series on linear algebra! It is about Principal Components Analysis (PCA). 
We will use some knowledge that we acquired along the preceding chapters to understand this important data analysis tool!\n\n# Requirements\n\nThis content is aimed at beginners but it would be nice to have at least some experience with mathematics.\n\n# Enjoy\n\nI hope that you will find something interesting in this series. I tried to be as accurate as I could. If you find errors\u002Fmisunderstandings\u002Ftypos… Please report it! You can send me emails or open issues and pull request in the notebooks Github.\n\n# References\n\nGoodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.\n","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_26e366e75eca.jpg\" width=\"400\" alt=\"古德费洛、本吉奥和库维尔所著深度学习书籍封面\" title=\"深度学习书籍 - 古德费洛、I.，本吉奥、Y.，以及库维尔、A.（2016）\">\n\n**深度学习书籍 - 古德费洛、I.，本吉奥、Y.，以及库维尔、A.（2016）**\n\n本内容是根据古德费洛、本吉奥和库维尔于2016年出版的《深度学习》一书第二章线性代数部分编写的系列文章之一。旨在通过直观解释、图表和Python代码来阐述相关数学理论，并基于我对这些概念的理解进行组织。\n\n# 提升你的数据科学技能：学习线性代数\n\n我想介绍一系列博客文章及其配套的Python笔记本，这些内容整理自伊恩·古德费洛、约书亚·本吉奥和阿伦·库维尔于2016年出版的《深度学习》一书。这些笔记的目标是帮助初学者或有一定基础的学习者掌握深度学习和机器学习背后的线性代数概念。掌握这些技能将显著提升你理解和应用各类数据科学算法的能力。在我看来，线性代数是机器学习、深度学习和数据科学的基石之一。\n\n本系列笔记覆盖了第二章“线性代数”。我特别喜欢这一章，因为它清晰地展示了机器学习和深度学习领域中最常用的概念和技术。因此，对于希望深入学习深度学习并掌握有助于理解深度学习算法的线性代数知识的人来说，这是一份极佳的学习大纲。\n\n所有文章都可以在这里找到：[https:\u002F\u002Fhadrienj.github.io](https:\u002F\u002Fhadrienj.github.io)。\n\n# 线性代数入门\n\n本系列的目标是为那些希望具备足够线性代数知识以顺利进入机器学习和深度学习领域的初学者提供内容。然而，我认为《深度学习》一书中关于线性代数的章节对初学者来说稍显艰深。因此，我决定针对该章节的每个部分编写代码、示例和图示，以补充一些初学者可能不易理解的细节。同时，我也相信，通过具体的例子可以传达与抽象定义同样丰富的信息和知识。插图可以帮助我们更宏观地理解一个概念。此外，编程也是探索这些抽象数学概念的绝佳工具。结合纸笔使用，它能够进一步拓展我们的思维边界，推动对知识的深入理解。\n\n图形化表示对于理解线性代数也非常有帮助。在编写本系列的过程中，我尝试将各个概念与相应的图表（以及生成这些图表的代码）相结合。其中，我最喜欢的一种表达方式是将任意矩阵视为对空间的一种线性变换。在后续几章中，我们将进一步探讨这一思想，并说明它如何用于理解特征分解、奇异值分解（SVD）以及主成分分析（PCA）等重要概念。\n\n# Python与Numpy的应用\n\n此外，我发现通过创建和阅读示例，能够极大地帮助理解理论知识。正是出于这个原因，我构建了Python笔记本。其目的主要有两个方面：\n\n1. 
为读者提供一个起点，学习如何使用Python和Numpy来应用线性代数概念。由于最终目标是将线性代数应用于数据科学，因此在理论与代码之间来回切换显得十分自然。你只需要安装好Python环境，并配备Numpy、Scipy和Matplotlib等主要数学库即可。\n\n2. 使抽象的概念更加具体化。我发现，通过动手实践这些笔记本中的内容，能够有效加深对一些较为复杂理论概念或符号的理解。希望这些笔记也能对你有所帮助。\n\n# 学习大纲\n\n本系列的学习大纲完全遵循《深度学习》一书的结构，因此如果你在阅读原书时遇到难以理解的地方，可以参考本系列的内容获取更多细节。以下是各部分内容的简要介绍：\n\n1. [标量、向量、矩阵与张量](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.1-Scalars-Vectors-Matrices-and-Tensors\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_a54afef5cb6e.png\" width=\"400\" alt=\"标量、向量、矩阵和张量的示例\" title=\"标量、向量、矩阵和张量的区别\">\n\n    **标量、向量、矩阵和张量的区别**\n\n    对向量、矩阵、转置及基本运算（向量和矩阵的加法）进行了简单介绍。同时介绍了Numpy的相关函数，并简要讨论了广播机制。\n\n2. [矩阵与向量的乘法](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.2-Multiplying-Matrices-and-Vectors\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_f7aaab3c772f.png\" width=\"400\" alt=\"点积计算示例\" title=\"点积详解\">\n\n    **点积详解**\n\n    本章主要讲解点积（向量和矩阵的乘法），并探讨其相关性质。随后，我们将学习如何用矩阵形式表示线性方程组，这是后续章节的基础内容。\n\n3. [单位矩阵与逆矩阵](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.3-Identity-and-Inverse-Matrices\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_1b19bde0512e.png\" width=\"200\" alt=\"单位矩阵示例\" title=\"单位矩阵\">\n\n    **单位矩阵**\n\n    本节将介绍两种重要的矩阵：单位矩阵和逆矩阵。我们将探讨它们在线性代数中的重要性，以及如何利用Numpy进行操作。最后，通过一个实例演示如何使用逆矩阵求解线性方程组。\n\n4. 
[线性相关与张成空间](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.4-Linear-Dependence-and-Span\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_c238bf183f25.png\" width=\"700\" alt=\"无解、唯一解和无穷多解的方程组示例\" title=\"方程组可能无解、有唯一解或无穷多解\">\n\n    **方程组可能无解、有唯一解或无穷多解**\n\n在这一章中，我们将继续研究线性方程组。我们会看到，这样的方程组要么没有解，要么有唯一解，要么有无穷多解。我们将探讨这一结论背后的直觉、图形表示以及证明。接着，我们会回到方程组的矩阵形式，并讨论吉尔伯特·斯特兰格所说的“行图”（从行的角度看，即多个方程）和“列图”（从列的角度看，即系数的线性组合）。我们还会介绍什么是线性组合。最后，我们将通过例子来理解超定方程组和欠定方程组。\n\n5. [范数](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.5-Norms\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_00f214d5ab39.png\" width=\"400\" alt=\"3维空间中平方L2范数的表示\" title=\"3维空间中平方L2范数的表示\">\n\n    **3维空间中平方L2范数的形状**\n\n    向量的范数是一种函数，它接受一个向量作为输入，并输出一个正数值。可以将其视为向量的“长度”。例如，它常用于评估模型预测值与真实值之间的距离。我们将结合实例介绍不同类型的范数（$L^0$、$L^1$、$L^2$等）。\n\n6. [特殊类型的矩阵和向量](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_adc583bf53bb.png\" width=\"400\" alt=\"对角矩阵和对称矩阵的例子\" title=\"对角矩阵和对称矩阵的例子\">\n\n    **左图为对角矩阵，右图为对称矩阵**\n\n    在[2.3](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.3-Identity-and-Inverse-Matrices\u002F)中，我们已经介绍了一些非常有趣的特殊矩阵。本章将继续探讨其他类型的向量和矩阵。虽然篇幅不大，但对理解后续内容非常重要。\n\n7. 
[特征分解](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.7-Eigendecomposition\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_37890db25290.png\" width=\"300\" alt=\"output_59_0\">\n\n    本章将介绍线性代数中的一些重要概念。首先，我们将了解特征向量和特征值的基本思想。我们会看到，矩阵可以被视作一种线性变换，而当矩阵作用于其特征向量时，会得到方向不变的新向量。随后，我们将学习如何将二次方程表示为矩阵形式，并发现对应矩阵的特征分解可用于求解该二次方程的极小值和极大值。此外，我们还将展示如何用Python可视化线性变换！\n\n8. [奇异值分解](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.8-Singular-Value-Decomposition\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_711095553548.png\" width=\"300\" alt=\"output_35_7\">\n\n    我们将学习另一种矩阵分解方法：奇异值分解（SVD）。自本系列开始以来，我一直强调矩阵可以被视为空间中的线性变换。通过SVD，我们可以把一个矩阵分解成三个新的矩阵。这些新矩阵可以被看作是空间的“子变换”。与其一次性完成整个变换，不如将其分解为三次较小的变换。作为附加内容，我们还将把SVD应用于图像处理，观察它对鹅“露西”这张图片的效果。敬请继续阅读！\n\n9. [摩尔-彭罗斯伪逆](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.9-The-Moore-Penrose-Pseudoinverse\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_06ea8add38b0.png\" width=\"300\" alt=\"output_44_0\">\n\n    我们已经知道，并非所有矩阵都存在逆矩阵。这很遗憾，因为逆矩阵常用于求解线性方程组。在某些情况下，方程组可能无解，从而导致逆矩阵不存在。然而，找到一个近似解（即误差最小化）仍然很有意义。这时就可以使用伪逆！我们将举例说明如何利用伪逆求一组数据点的最佳拟合直线。\n\n10. [迹运算符](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.10-The-Trace-Operator\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_496d5616562a.png\" width=\"200\" alt=\"计算矩阵的迹\" title=\"计算矩阵的迹\">\n\n    **矩阵的迹**\n\n    本章将介绍矩阵的迹是什么。这一概念将在最后一章关于主成分分析（PCA）的内容中用到。\n\n11. 
[行列式](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.11-The-determinant\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_f3fd4c866af7.png\" width=\"400\" alt=\"正负行列式的对比\" title=\"正负行列式效果的对比\">\n\n    **矩阵行列式与其所代表的线性变换之间的联系**\n\n    本章的主题是矩阵的行列式。这个特殊的数值能够告诉我们许多关于矩阵的信息！\n\n12. [示例：主成分分析](https:\u002F\u002Fhadrienj.github.io\u002Fposts\u002FDeep-Learning-Book-Series-2.12-Example-Principal-Components-Analysis\u002F)\n\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_readme_11fa1dc53e06.png\" width=\"400\" alt=\"梯度下降算法的工作机制\" title=\"梯度下降算法的工作机制\">\n    **梯度下降法**\n\n    这是本线性代数系列的最后一章！主题是主成分分析（PCA）。我们将运用之前各章学到的知识，来理解这一重要的数据分析工具！\n\n\n\n# 要求\n\n本内容面向初学者，但最好具备一定的数学基础。\n\n# 祝您阅读愉快\n\n希望您能在这套系列中找到感兴趣的内容。我已尽力确保内容的准确性。如果您发现任何错误、误解或错别字，请随时告知！您可以通过电子邮件与我联系，或者在GitHub上的笔记本中提交问题和拉取请求。\n\n# 参考文献\n\nGoodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.","# deepLearningBook-Notes 快速上手指南\n\n`deepLearningBook-Notes` 是一个基于《Deep Learning Book》（Goodfellow 等著）第二章“线性代数”的开源学习系列。它通过直观的图解、数学推导和 Python 代码示例，帮助开发者深入理解深度学习背后的线性代数核心概念。\n\n## 环境准备\n\n本项目主要包含一系列 Jupyter Notebooks 和博客文章，因此你需要一个支持 Python 数据科学库的运行环境。\n\n*   **操作系统**：Windows, macOS 或 Linux\n*   **Python 版本**：推荐 Python 3.8 及以上版本\n*   **核心依赖库**：\n    *   `numpy`：用于矩阵运算和线性代数操作\n    *   `matplotlib`：用于绘制几何图形和可视化线性变换\n    *   `scipy`：提供额外的科学计算支持\n    *   `jupyter` 或 `jupyterlab`：用于运行和交互查看 Notebook 文件\n\n> **国内加速建议**：在安装依赖时，推荐使用清华源或阿里源以提升下载速度。\n\n## 安装步骤\n\n### 1. 克隆项目\n首先，将仓库克隆到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhadrienj\u002FdeepLearningBook-Notes.git\ncd deepLearningBook-Notes\n```\n\n### 2. 创建虚拟环境（推荐）\n为了避免依赖冲突，建议创建独立的虚拟环境：\n\n```bash\npython -m venv dl-notes-env\n# Windows\ndl-notes-env\\Scripts\\activate\n# macOS\u002FLinux\nsource dl-notes-env\u002Fbin\u002Factivate\n```\n\n### 3. 
安装依赖\n使用 pip 安装所需库。**推荐使用国内镜像源**：\n\n```bash\npip install numpy matplotlib scipy jupyterlab -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n或者，如果项目根目录包含 `requirements.txt` 文件，可直接运行：\n\n```bash\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\n该项目按章节组织，每个章节对应一个或多个 Jupyter Notebook 文件。以下以第一章“标量、向量、矩阵和张量”为例演示如何开始学习。\n\n### 启动 Jupyter Lab\n在项目根目录下启动服务：\n\n```bash\njupyter lab\n```\n\n浏览器会自动打开 Jupyter 界面。\n\n### 运行示例代码\n1. 在文件浏览器中，进入对应章节文件夹，例如 `2.1 Scalars, Vectors, Matrices and Tensors\u002F`。\n2. 打开 `.ipynb` 文件（如 `Deep-Learning-Book-Series-2.1.ipynb`）。\n3. 逐个单元格运行代码，观察输出结果和生成的几何图形。\n\n**最简单的代码示例（来自笔记内容）：**\n\n以下代码展示了如何使用 NumPy 创建向量并计算点积，这是后续理解矩阵乘法的基础：\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# 定义两个向量\nu = np.array([2, 5])\nv = np.array([4, 1])\n\n# 计算点积 (Dot Product)\ndot_product = np.dot(u, v)\n\nprint(f\"Vector u: {u}\")\nprint(f\"Vector v: {v}\")\nprint(f\"Dot product of u and v: {dot_product}\")\n\n# 可视化向量\nplt.quiver(0, 0, u[0], u[1], angles='xy', scale_units='xy', scale=1, color='r', label='u')\nplt.quiver(0, 0, v[0], v[1], angles='xy', scale_units='xy', scale=1, color='b', label='v')\nplt.xlim(-1, 6)\nplt.ylim(-1, 6)\nplt.grid()\nplt.legend()\nplt.title(\"Visualization of Vectors u and v\")\nplt.show()\n```\n\n### 学习路径建议\n按照 `Syllabus` 顺序依次学习：\n1.  **基础概念**：标量、向量、矩阵、张量 (2.1)\n2.  **核心运算**：矩阵与向量乘法 (2.2)\n3.  **关键矩阵**：单位矩阵与逆矩阵 (2.3)\n4.  **深入理论**：线性相关、范数、特征分解 (2.4 - 2.7)\n5.  
**高级应用**：奇异值分解 (SVD)、伪逆、主成分分析 (PCA) (2.8 - 2.12)\n\n通过修改代码中的数值并重新运行，你可以直观地看到线性变换对空间的影响，从而加深对理论的理解。","一名刚入门深度学习的数据科学初学者，在尝试复现论文算法时，被《Deep Learning Book》第二章中抽象的线性代数理论卡住了脚步。\n\n### 没有 deepLearningBook-Notes 时\n- 面对书中密集的数学定义和符号推导，难以建立直观的几何理解，只能死记硬背公式。\n- 缺乏将理论转化为代码的桥梁，不知道如何用 Python\u002FNumpy 实现矩阵变换等核心概念。\n- 遇到晦涩难点时无人指引，无法通过可视化图形看清“矩阵即空间线性变换”的本质。\n- 学习曲线过于陡峭，容易因挫败感而放弃对底层数学原理的深入钻研。\n\n### 使用 deepLearningBook-Notes 后\n- 通过配套的直观绘图和通俗解读，将抽象的线性代数概念转化为可视化的几何图像，轻松理解核心思想。\n- 直接运行提供的 Python Notebooks，在动手实验中观察数据变化，无缝衔接理论与代码实现。\n- 利用逐步拆解的代码示例和动态图表，清晰看到矩阵如何变换空间，彻底搞懂特征分解与 SVD 等难点。\n- 跟随精心设计的进阶路径，从基础到应用稳步提升，建立起扎实的机器学习数学基石。\n\ndeepLearningBook-Notes 通过将枯燥的数学理论转化为可交互的代码与可视化图表，让初学者能真正“看见”并掌握深度学习的底层逻辑。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhadrienj_deepLearningBook-Notes_26e366e7.jpg","hadrienj","Hadrien Jean","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhadrienj_125f5319.jpg","Data and Machine Learning Scientist at Iaudiogram.\r\n\r\nPreviously Machine Learning Scientist at Ava.\r\n\r\nPreviously PhD Student at Ecole Normal Supérieure.",null,"Paris, France","code.datascience@gmail.com","_hadrienj","https:\u002F\u002Fhadrienj.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fhadrienj",[88],{"name":89,"color":90,"percentage":91},"Jupyter Notebook","#DA5B0B",100,1780,564,"2026-03-27T03:18:08","MIT","","未说明",{"notes":99,"python":97,"dependencies":100},"该项目是《深度学习》书籍第 2 章（线性代数）的学习笔记，主要包含 Python Notebooks、代码示例和图表。运行环境仅需标准的 Python 安装及主要数学库（Numpy, Scipy, Matplotlib），无需 GPU 或特殊硬件配置。旨在帮助初学者通过代码实验理解线性代数概念。",[101,102,103],"numpy","scipy","matplotlib",[18],"2026-03-27T02:49:30.150509","2026-04-18T09:19:24.210750",[108,113,117,122,127,132],{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},38938,"第三章第 3.6 到 3.14 
节的笔记什么时候发布？","作者已经开始撰写第三章的内容，第一篇文章已发布在博客上（关于概率质量函数和密度函数），请持续关注后续文章。","https:\u002F\u002Fgithub.com\u002Fhadrienj\u002FdeepLearningBook-Notes\u002Fissues\u002F13",{"id":114,"question_zh":115,"answer_zh":116,"source_url":112},38939,"为什么不开源书籍配套的完整代码？","作者考虑过此选项，但书中的代码包含所有文本内容。如果只截取代码而去除文本，在没有原文解释的情况下代码本身用处不大，因此决定保持现状。",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},38940,"笔记中精美的示意图是使用什么软件绘制的？","作者使用 OmniGraffle 绘制这些图表。该软件易于使用且绘图效果舒适直观，但缺点是它不是免费软件。","https:\u002F\u002Fgithub.com\u002Fhadrienj\u002FdeepLearningBook-Notes\u002Fissues\u002F8",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},38941,"是否有计划编写深度学习书籍第三章、第四章和第五章的笔记？","是的，作者已经开始了第三章的编写工作，首篇文章已发布，旨在帮助读者更好地理解 AI 背后的数学原理，敬请期待后续更新。","https:\u002F\u002Fgithub.com\u002Fhadrienj\u002FdeepLearningBook-Notes\u002Fissues\u002F9",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},38942,"在提取矩阵列向量时，有没有更简洁的代码写法？","有的。原代码使用 reshape 方法较为繁琐，可以简化为：提取列使用 `A[:, [0]]` 和 `A[:, [1]]`；提取行并重塑可以使用 `A[0].reshape(A.shape[1], 1)` 等方式，利用高级索引和形状属性使代码更简洁。","https:\u002F\u002Fgithub.com\u002Fhadrienj\u002FdeepLearningBook-Notes\u002Fissues\u002F7",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},38943,"是否会在笔记中补充点积公式 xT⋅y = ‖x‖⋅‖y‖⋅cosθ 的证明？","作者认为这是一个非常有趣且有价值的补充，已确认会将该证明添加到第 2.5 小节中。","https:\u002F\u002Fgithub.com\u002Fhadrienj\u002FdeepLearningBook-Notes\u002Fissues\u002F5",[]]