[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-melisgl--mgl":3,"tool-melisgl--mgl":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
# melisgl/mgl

Common Lisp machine learning library.

MGL is an open-source machine learning library written in Common Lisp and developed primarily by Gábor Melis. It aims to give the Lisp community a comprehensive, efficient machine learning solution, filling a gap in that ecosystem around deep learning and statistical modeling.

MGL's core strength is its support for a range of classic and modern machine learning algorithms: backpropagation neural networks (both feed-forward and recurrent), Boltzmann machines, and Gaussian processes. Beyond model building, it provides a full set of data-handling tools covering dataset sampling, resampling, cross-validation, and feature selection and encoding. It also includes gradient-based optimizers (such as gradient descent and conjugate gradient) and detailed training monitoring, helping developers track model performance through confusion matrices and assorted metrics.

Technically, MGL is built on the MGL-MAT library, which means it can exploit BLAS (Basic Linear Algebra Subprograms) for CPU acceleration and CUDA for GPU parallelism, significantly speeding up training on large datasets. This design makes MGL competitive not only in algorithmic flexibility but also in computational performance.

MGL is aimed at software developers familiar with Common Lisp, AI researchers, and anyone interested in applying functional programming to machine learning. For users who want to experiment with neural networks, do natural language processing (such as bag-of-words models), or build complex statistical models without leaving the Lisp environment, MGL is a professional-grade library worth trying.

# MGL Manual

## Table of Contents

- [1 Introduction][f7aa]
    - [1.1 Overview][9192]
    - [1.2 Links][00ee]
    - [1.3 Dependencies][e7ea]
    - [1.4 Code Organization][443c]
    - [1.5 Glossary][4a8e]
- [2 Common Stuff][e198]
- [3 Datasets][109e]
    - [3.1 Samplers][7bc3]
        - [3.1.1 Function Sampler][be8d]
- [4 Resampling][a39b]
    - [4.1 Shuffling][8611]
    - [4.2 Partitions][f790]
    - [4.3 Cross-validation][f17b]
    - [4.4 Bagging][b647]
    - [4.5 CV Bagging][3f9f]
    - [4.6 Miscellaneous Operations][59c2]
- [5 Core][f257]
    - [5.1 Persistence][29a1]
    - [5.2 Batch Processing][ff82]
    - [5.3 Executors][4476]
        - [5.3.1 Parameterized Executor Cache][ada2]
- [6 Monitoring][e668]
    - [6.1 Monitors][c701]
    - [6.2 Measurers][cd3b]
    - [6.3 Counters][be95]
        - [6.3.1 Attributes][6da5]
        - [6.3.2 Counter classes][7ee3]
- [7 Classification][60e3]
    - [7.1 Classification Monitors][c573]
    - [7.2 Classification Measurers][0ba7]
    - [7.3 Classification Counters][6598]
        - [7.3.1 Confusion Matrices][07c7]
- [8 Features][c8db]
    - [8.1 Feature Selection][1b5e]
    - [8.2 Feature Encoding][24aa]
- [9 Gradient Based Optimization][c74a]
    - [9.1 Iterative Optimizer][779d]
    - [9.2 Cost Function][e746]
    - [9.3 Gradient Descent][10e7]
        - [9.3.1 Batch Based Optimizers][2c39]
        - [9.3.2 Segmented GD Optimizer][989a]
        - [9.3.3 Per-weight Optimization][a884]
        - [9.3.4 Utilities][c40e]
    - [9.4 Conjugate Gradient][83e6]
    - [9.5 Extension API][6a6f]
        - [9.5.1 Implementing Optimizers][5748]
        - [9.5.2 Implementing Gradient Sources][c58b]
        - [9.5.3 Implementing Gradient Sinks][a210]
- [10 Differentiable Functions][2981]
- [11 Backpropagation Neural Networks][8788]
    - [11.1 Backprop Overview][56b2]
    - [11.2 Clump API][7a28]
    - [11.3 `BPN`s][d1e0]
        - [11.3.1 Training][0d82]
        - [11.3.2 Monitoring][4f0e]
        - [11.3.3 Feed-Forward Nets][1355]
        - [11.3.4 Recurrent Neural Nets][871e]
    - [11.4 Lumps][9641]
        - [11.4.1 Lump Base Class][3045]
        - [11.4.2 Inputs][207b]
        - [11.4.3 Weight Lump][6872]
        - [11.4.4 Activations][9105]
        - [11.4.5 Activation Functions][5d86]
        - [11.4.6 Losses][93a7]
        - [11.4.7 Stochasticity][aa2e]
        - [11.4.8 Arithmetic][2fe9]
        - [11.4.9 Operations for `RNN`s][51f7]
    - [11.5 Utilities][91f3]
- [12 Boltzmann Machines][332e]
- [13 Gaussian Processes][60b3]
- [14 Natural Language Processing][0d6a]
    - [14.1 Bag of Words][0784]
- [15 Logging][3f42]

###### \[in package MGL\]

- [system] **"mgl"**
    - _Version:_ 0.1.0
    - _Description:_ `MGL` is a machine learning library for backpropagation
        neural networks, boltzmann machines, gaussian processes and more.
    - _Licence:_ MIT, see COPYING.
    - _Author:_ Gábor Melis <mega@retes.hu>
    - _Mailto:_ [mega@retes.hu](mailto:mega@retes.hu)
    - _Homepage:_ [http://melisgl.github.io/mgl](http://melisgl.github.io/mgl)
    - _Bug tracker:_ [https://github.com/melisgl/mgl/issues](https://github.com/melisgl/mgl/issues)
    - _Source control:_ [GIT](https://github.com/melisgl/mgl.git)
    - *Depends on:* alexandria, array-operations, cl-reexport, closer-mop, lla, mgl-gnuplot, mgl-mat, mgl-pax, named-readtables, num-utils, pythonic-string-reader, swank

## 1 Introduction

### 1.1 Overview

MGL is a Common Lisp machine learning library by [Gábor
Melis](http://quotenil.com) with some parts originally contributed
by Ravenpack International. It mainly concentrates on various forms
of neural networks (boltzmann machines, feed-forward and recurrent
backprop nets). Most of MGL is built on top of MGL-MAT, so it has
BLAS and CUDA support.

In general, the focus is on power and performance, not on ease of
use. Perhaps one day there will be a cookie-cutter interface with
restricted functionality if a reasonable compromise is found between
power and utility.

### 1.2 Links

Here is the [official repository](https://github.com/melisgl/mgl)
and the [HTML
documentation](http://melisgl.github.io/mgl-pax-world/mgl-manual.html)
for the latest version.

### 1.3 Dependencies

MGL used to rely on [LLA](https://github.com/tpapp/lla) to
interface to BLAS and LAPACK. That's mostly history by now, but
configuration of foreign libraries is still done via LLA. See the
README in LLA on how to set things up. Note that these days OpenBLAS
is easier to set up and just as fast as ATLAS.

[CL-CUDA](https://github.com/takagi/cl-cuda) and
[MGL-MAT](https://github.com/melisgl/mgl) are the two main
dependencies and also the ones not yet in quicklisp, so just drop
them into `quicklisp/local-projects/`. If there is no suitable GPU
on the system or the CUDA SDK is not installed, MGL will simply
fall back on using BLAS and Lisp code. Wrapping code in
`MGL-MAT:WITH-CUDA*` is basically all that's needed to run on the GPU,
and with `MGL-MAT:CUDA-AVAILABLE-P` one can check whether the GPU is
really being used.
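
A minimal sketch of that pattern (only `WITH-CUDA*` and
`CUDA-AVAILABLE-P` are the MGL-MAT entry points named above; the
matrix computation in the body is an illustrative placeholder):

```common-lisp
(mgl-mat:with-cuda* ()
  ;; Inside WITH-CUDA*, MGL-MAT operations run on the GPU when a
  ;; suitable device and the CUDA SDK are present; otherwise they
  ;; transparently fall back on BLAS and Lisp code.
  (format t "CUDA in use: ~S~%" (mgl-mat:cuda-available-p))
  ;; Placeholder computation: make a 2x2 matrix of ones and scale it.
  (let ((m (mgl-mat:make-mat '(2 2) :initial-element 1)))
    (mgl-mat:scal! 2 m)))
```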
### 1.4 Code Organization

MGL consists of several packages dedicated to different tasks.
For example, package `MGL-RESAMPLE` is about
[Resampling][a39b] and `MGL-GD` is about [Gradient Descent][10e7]
and so on. On one hand, having many packages makes it easier to
cleanly separate API and implementation and also to explore a
specific task. At other times, they can be a hassle, so the `MGL`
package itself reexports every external symbol found in all the
other packages that make up MGL and MGL-MAT (see
MAT Manual) on which it heavily relies.

One exception to this rule is the bundled, but independent
MGL-GNUPLOT library.

The built-in tests can be run with:

    (ASDF:OOS 'ASDF:TEST-OP '#:MGL)

Note that most of the tests are rather stochastic and can fail once
in a while.

### 1.5 Glossary

Ultimately machine learning is about creating **models** of some
domain. The observations in the modelled domain are called
**instances** (also known as examples or samples). Sets of instances
are called **datasets**. Datasets are used when fitting a model or
when making **predictions**.
Sometimes the word predictions is too
specific, and the results obtained from applying a model to some
instances are simply called **results**.

## 2 Common Stuff

###### \[in package MGL-COMMON\]

- [generic-function] **NAME** *OBJECT*

- [function] **NAME=** *X Y*

    Return `T` if X and Y are `EQL`([`0`][db03] [`1`][5fd4]) or if they are structured components whose
    elements are [`EQUAL`][3fb5]. Strings and bit-vectors are `EQUAL` if they are the same
    length and have identical components. Other arrays must be [`EQ`][5a82] to be `EQUAL`.

- [generic-function] **SIZE** *OBJECT*

- [generic-function] **NODES** *OBJECT*

    Returns a `MGL-MAT:MAT` object representing the state
    or result of `OBJECT`. The first dimension of the returned matrix is
    equal to the number of stripes.

- [generic-function] **DEFAULT-VALUE** *OBJECT*

- [generic-function] **GROUP-SIZE** *OBJECT*

- [generic-function] **BATCH-SIZE** *OBJECT*

- [generic-function] **WEIGHTS** *OBJECT*

- [generic-function] **SCALE** *OBJECT*

## 3 Datasets

###### \[in package MGL-DATASET\]
An instance can often be any kind of object of the user's choice.
It is typically represented by a set of numbers which is called a
feature vector or by a structure holding the feature vector, the
label, etc. A dataset is a [`SEQUENCE`][ae23] of such instances or a
[Samplers][7bc3] object that produces instances.

- [function] **MAP-DATASET** *FN DATASET*

    Call `FN` with each instance in `DATASET`. This is basically equivalent
    to iterating over the elements of a sequence or a sampler (see
    [Samplers][7bc3]).
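
For example, a trivial sketch in the manual's transcript style
(`PRIN1` just prints each instance in turn):

```common-lisp
(map-dataset #'prin1 '(0 1 2))
.. 012
```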
- [function] **MAP-DATASETS** *FN DATASETS &KEY (IMPUTE NIL IMPUTEP)*

    Call `FN` with a list of instances, one from each dataset in
    `DATASETS`. Return nothing. If `IMPUTE` is specified then iterate until
    the largest dataset is consumed, imputing `IMPUTE` for missing values.
    If `IMPUTE` is not specified then iterate until the smallest dataset
    runs out.
    
    ```common-lisp
    (map-datasets #'prin1 '((0 1 2) (:a :b)))
    .. (0 :A)(1 :B)
    
    (map-datasets #'prin1 '((0 1 2) (:a :b)) :impute nil)
    .. (0 :A)(1 :B)(2 NIL)
    ```
    
    It is of course allowed to mix sequences with samplers:
    
    ```common-lisp
    (map-datasets #'prin1
                  (list '(0 1 2)
                        (make-sequence-sampler '(:a :b) :max-n-samples 2)))
    .. (0 :A)(1 :B)
    ```

### 3.1 Samplers

Some algorithms do not need random access to the entire dataset and
can work with a stream of observations. Samplers are simple generators
providing two functions: [`SAMPLE`][f956] and [`FINISHEDP`][401f].

- [generic-function] **SAMPLE** *SAMPLER*

    If `SAMPLER` has not run out of data (see [`FINISHEDP`][401f]),
    `SAMPLE` returns an object that represents a sample from the world to
    be experienced or, in other words, simply something that can be used
    as input for training or prediction. It is not allowed to call
    `SAMPLE` if `SAMPLER` is `FINISHEDP`.

- [generic-function] **FINISHEDP** *SAMPLER*

    See if `SAMPLER` has run out of examples.

- [function] **LIST-SAMPLES** *SAMPLER MAX-SIZE*

    Return a list of samples of length at most `MAX-SIZE` or less if
    `SAMPLER` runs out.

- [function] **MAKE-SEQUENCE-SAMPLER** *SEQ &KEY MAX-N-SAMPLES*

    Create a sampler that returns elements of `SEQ` in their original
    order. If `MAX-N-SAMPLES` is non-nil, then at most `MAX-N-SAMPLES` are
    sampled.
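
Put together, the protocol looks like this in use (a sketch; the
`LOOP` is essentially what `LIST-SAMPLES` does for you):

```common-lisp
(let ((sampler (make-sequence-sampler '(:a :b :c) :max-n-samples 2)))
  (loop until (finishedp sampler)
        collect (sample sampler)))
=> (:A :B)
```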
- [function] **MAKE-RANDOM-SAMPLER** *SEQ &KEY MAX-N-SAMPLES (REORDER \#'MGL-RESAMPLE:SHUFFLE)*

    Create a sampler that returns elements of `SEQ` in random order. If
    `MAX-N-SAMPLES` is non-nil, then at most `MAX-N-SAMPLES` are sampled.
    The first pass is over a shuffled copy of `SEQ`, and this copy is
    reshuffled whenever the sampler reaches the end of it. Shuffling is
    performed by calling the `REORDER` function.

- [variable] **\*INFINITELY-EMPTY-DATASET\*** *\#\<FUNCTION-SAMPLER "infinitely empty" >*

    This is the default dataset for [`MGL-OPT:MINIMIZE`][46a4]. It's an infinite
    stream of `NIL`s.

#### 3.1.1 Function Sampler

- [class] **FUNCTION-SAMPLER**

    A sampler with a function in its [`GENERATOR`][08ac] that
    produces a stream of samples which may or may not be finite
    depending on [`MAX-N-SAMPLES`][1cab]. [`FINISHEDP`][401f] returns `T` iff `MAX-N-SAMPLES` is
    non-nil, and it's not greater than the number of samples
    generated ([`N-SAMPLES`][bdf9]).
    
        (list-samples (make-instance 'function-sampler
                                     :generator (lambda ()
                                                  (random 10))
                                     :max-n-samples 5)
                      10)
        => (3 5 2 3 3)

- [reader] **GENERATOR** *[FUNCTION-SAMPLER][715c] (:GENERATOR)*

    A generator function of no arguments that returns
    the next sample.

- [accessor] **MAX-N-SAMPLES** *[FUNCTION-SAMPLER][715c] (:MAX-N-SAMPLES = NIL)*

- [reader] **NAME** *[FUNCTION-SAMPLER][715c] (:NAME = NIL)*

    An arbitrary object naming the sampler. Only used
    for printing the sampler object.

- [reader] **N-SAMPLES** *[FUNCTION-SAMPLER][715c] (:N-SAMPLES = 0)*

## 4 Resampling

###### \[in package MGL-RESAMPLE\]
The focus of this package is on resampling methods such as
cross-validation and bagging which can be used for model evaluation,
model selection, and also as a simple form of ensembling. Data
partitioning and sampling functions are also provided because they
tend to be used together with resampling.

### 4.1 Shuffling

- [function] **SHUFFLE** *SEQ*

    Copy `SEQ` and shuffle it using the Fisher-Yates algorithm.

- [function] **SHUFFLE!** *SEQ*

    Shuffle `SEQ` using the Fisher-Yates algorithm.

### 4.2 Partitions

The following functions partition a dataset (currently only
[`SEQUENCE`][ae23]s are supported) into a number of partitions. For each
element in the original dataset there is exactly one partition that
contains it.

- [function] **FRACTURE** *FRACTIONS SEQ &KEY WEIGHT*

    Partition `SEQ` into a number of subsequences. `FRACTIONS` is either a
    positive integer or a list of non-negative real numbers. `WEIGHT` is
    `NIL` or a function that returns a non-negative real number when
    called with an element from `SEQ`. If `FRACTIONS` is a positive integer
    then return a list of that many subsequences with equal sum of
    weights bar rounding errors, else partition `SEQ` into subsequences,
    where the sum of weights of subsequence I is proportional to element
    I of `FRACTIONS`.
    If `WEIGHT` is `NIL`, then each element is assumed to
    have the same weight.
    
    To split into 5 sequences:
    
    ```common-lisp
    (fracture 5 '(0 1 2 3 4 5 6 7 8 9))
    => ((0 1) (2 3) (4 5) (6 7) (8 9))
    ```
    
    To split into two sequences whose lengths are proportional to 2 and
    3:
    
    ```common-lisp
    (fracture '(2 3) '(0 1 2 3 4 5 6 7 8 9))
    => ((0 1 2 3) (4 5 6 7 8 9))
    ```

- [function] **STRATIFY** *SEQ &KEY (KEY \#'IDENTITY) (TEST \#'EQL)*

    Return the list of strata of `SEQ`. `SEQ` is a sequence of elements for
    which the function `KEY` returns the class they belong to. Such
    classes are opaque objects compared for equality with `TEST`. A
    stratum is a sequence of elements with the same (under `TEST`) `KEY`.
    
    ```common-lisp
    (stratify '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
    => ((0 2 4 6 8) (1 3 5 7 9))
    ```

- [function] **FRACTURE-STRATIFIED** *FRACTIONS SEQ &KEY (KEY \#'IDENTITY) (TEST \#'EQL) WEIGHT*

    Similar to [`FRACTURE`][6f82], but also makes sure that keys are evenly
    distributed among the partitions (see [`STRATIFY`][ba91]). It can be useful
    for classification tasks to partition the data set while keeping the
    distribution of classes the same.
    
    Note that the sets returned are not in random order. In fact, they
    are sorted internally by `KEY`.
    
    For example, to make two splits with approximately the same number
    of even and odd numbers:
    
    ```common-lisp
    (fracture-stratified 2 '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
    => ((0 2 1 3) (4 6 8 5 7 9))
    ```

### 4.3 Cross-validation

- [function] **CROSS-VALIDATE** *DATA FN &KEY (N-FOLDS 5) (FOLDS (ALEXANDRIA:IOTA N-FOLDS)) (SPLIT-FN \#'SPLIT-FOLD/MOD) PASS-FOLD*

    Map `FN` over the `FOLDS` of `DATA` split with `SPLIT-FN` and collect the
    results in a list. The simplest demonstration is:
    
    ```common-lisp
    (cross-validate '(0 1 2 3 4)
                    (lambda (test training)
                     (list test training))
                    :n-folds 5)
    => (((0) (1 2 3 4))
        ((1) (0 2 3 4))
        ((2) (0 1 3 4))
        ((3) (0 1 2 4))
        ((4) (0 1 2 3)))
    ```
    
    Of course, in practice one would typically train a model and return
    the trained model and/or its score on `TEST`. Also, sometimes one may
    want to do only some of the folds and remember which ones they were:
    
    ```common-lisp
    (cross-validate '(0 1 2 3 4)
                    (lambda (fold test training)
                     (list :fold fold test training))
                    :folds '(2 3)
                    :pass-fold t)
    => ((:fold 2 (2) (0 1 3 4))
        (:fold 3 (3) (0 1 2 4)))
    ```
    
    Finally, the way the data is split can be customized. By default
    [`SPLIT-FOLD/MOD`][5ded] is called with the arguments `DATA`, the fold (from
    among `FOLDS`) and `N-FOLDS`. `SPLIT-FOLD/MOD` returns two values which
    are then passed on to `FN`.
    One can use [`SPLIT-FOLD/CONT`][5293] or
    [`SPLIT-STRATIFIED`][8cb8] or any other function that works with these
    arguments. The only real constraint is that `FN` has to take as many
    arguments (plus the fold argument if `PASS-FOLD`) as `SPLIT-FN`
    returns.

- [function] **SPLIT-FOLD/MOD** *SEQ FOLD N-FOLDS*

    Partition `SEQ` into two sequences: one with elements of `SEQ` with
    indices whose remainder is `FOLD` when divided with `N-FOLDS`, and a
    second one with the rest. The second one is the larger set. The
    order of elements remains stable. This function is suitable as the
    `SPLIT-FN` argument of [`CROSS-VALIDATE`][9524].

- [function] **SPLIT-FOLD/CONT** *SEQ FOLD N-FOLDS*

    Imagine dividing `SEQ` into `N-FOLDS` subsequences of the same
    size (bar rounding). Return the subsequence of index `FOLD` as the
    first value and all the other subsequences concatenated into one
    as the second value. The order of elements remains stable. This
    function is suitable as the `SPLIT-FN` argument of [`CROSS-VALIDATE`][9524].
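
To see how the two split functions differ (a sketch; the values
follow directly from the definitions above):

```common-lisp
;; Fold 0 of 3 with the modular split: elements at indices 0, 3, ...
(split-fold/mod '(0 1 2 3 4 5) 0 3)
=> (0 3)        ; second value: (1 2 4 5)

;; Fold 0 of 3 with the contiguous split: the first chunk of two.
(split-fold/cont '(0 1 2 3 4 5) 0 3)
=> (0 1)        ; second value: (2 3 4 5)
```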
- [function] **SPLIT-STRATIFIED** *SEQ FOLD N-FOLDS &KEY (KEY \#'IDENTITY) (TEST \#'EQL) WEIGHT*

    Split `SEQ` into `N-FOLDS` partitions (as in [`FRACTURE-STRATIFIED`][627a]).
    Return the partition of index `FOLD` as the first value, and the
    concatenation of the rest as the second value. This function is
    suitable as the `SPLIT-FN` argument of [`CROSS-VALIDATE`][9524] (most likely
    as a closure with `KEY`, `TEST`, `WEIGHT` bound).

### 4.4 Bagging

- [function] **BAG** *SEQ FN &KEY (RATIO 1) N WEIGHT (REPLACEMENT T) KEY (TEST \#'EQL) (RANDOM-STATE \*RANDOM-STATE\*)*

    Sample from `SEQ` with [`SAMPLE-FROM`][86fd] (passing `RATIO`, `WEIGHT`,
    `REPLACEMENT`), or [`SAMPLE-STRATIFIED`][aee6] if `KEY` is not `NIL`. Call `FN` with
    the sample. If `N` is `NIL` then keep repeating this until `FN` performs a
    non-local exit. Else `N` must be a non-negative integer, `N` iterations
    will be performed, and the primary values returned by `FN` collected into
    a list and returned. See `SAMPLE-FROM` and `SAMPLE-STRATIFIED` for
    examples.

- [function] **SAMPLE-FROM** *RATIO SEQ &KEY WEIGHT REPLACEMENT (RANDOM-STATE \*RANDOM-STATE\*)*

    Return a sequence constructed by sampling with or without
    `REPLACEMENT` from `SEQ`. The sum of weights in the result sequence will
    approximately be the sum of weights of `SEQ` times `RATIO`. If `WEIGHT` is
    `NIL` then elements are assumed to have equal weights, else `WEIGHT`
    should return a non-negative real number when called with an element
    of `SEQ`.
    
    To randomly select half of the elements:
    
    ```common-lisp
    (sample-from 1/2 '(0 1 2 3 4 5))
    => (5 3 2)
    ```
    
    To randomly select some elements such that the sum of their weights
    constitutes about half of the sum of weights across the whole
    sequence:
    
    ```common-lisp
    (sample-from 1/2 '(0 1 2 3 4 5 6 7 8 9) :weight #'identity)
    => ;; sums to 28 that's near 45/2
       (9 4 1 6 8)
    ```
    
    To sample with replacement (that is, allowing an element to be
    sampled multiple times):
    
    ```common-lisp
    (sample-from 1 '(0 1 2 3 4 5) :replacement t)
    => (1 1 5 1 4 4)
    ```

- [function] **SAMPLE-STRATIFIED** *RATIO SEQ &KEY WEIGHT REPLACEMENT (KEY \#'IDENTITY) (TEST \#'EQL) (RANDOM-STATE \*RANDOM-STATE\*)*

    Like [`SAMPLE-FROM`][86fd] but makes sure that the weighted proportion of
    classes in the result is approximately the same as the proportion in
    `SEQ`. See [`STRATIFY`][ba91] for the description of `KEY` and `TEST`.
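
For instance, to halve a dataset while keeping the even/odd ratio
roughly intact (a sketch; the result is random, so the exact output
below is only illustrative):

```common-lisp
(sample-stratified 1/2 '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)
=> (4 0 2 7 9)  ; always close to half evens, half odds
```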
### 4.5 CV Bagging

- [function] **BAG-CV** *DATA FN &KEY N (N-FOLDS 5) (FOLDS (ALEXANDRIA:IOTA N-FOLDS)) (SPLIT-FN \#'SPLIT-FOLD/MOD) PASS-FOLD (RANDOM-STATE \*RANDOM-STATE\*)*

    Perform cross-validation on different shuffles of `DATA` `N` times and
    collect the results. Since [`CROSS-VALIDATE`][9524] collects the return values
    of `FN`, the return value of this function is a list of lists of `FN`
    results. If `N` is `NIL`, don't collect anything, just keep doing
    repeated CVs until `FN` performs a non-local exit.
    
    The following example simply collects the test and training sets for
    2-fold CV repeated 3 times with shuffled data:
    
    ```common-lisp
    ;;; This is non-deterministic.
    (bag-cv '(0 1 2 3 4) #'list :n 3 :n-folds 2)
    => ((((2 3 4) (1 0))
         ((1 0) (2 3 4)))
        (((2 1 0) (4 3))
         ((4 3) (2 1 0)))
        (((1 0 3) (2 4))
         ((2 4) (1 0 3))))
    ```
    
    CV bagging is useful when a single CV is not producing stable
    results. As an ensemble method, CV bagging has the advantage over
    bagging that each example will occur the same number of times and
    after the first CV is complete there is a complete but less reliable
    estimate for each example which gets refined by further CVs.

### 4.6 Miscellaneous Operations

- [function] **SPREAD-STRATA** *SEQ &KEY (KEY \#'IDENTITY) (TEST \#'EQL)*

    Return a sequence that's a reordering of `SEQ` such that elements
    belonging to different strata (under `KEY` and `TEST`, see [`STRATIFY`][ba91]) are
    distributed evenly. The order of elements belonging to the same
    stratum is unchanged.
    
    For example, to make sure that even and odd numbers are distributed
    evenly:
    
    ```common-lisp
    (spread-strata '(0 2 4 6 8 1 3 5 7 9) :key #'evenp)
    => (0 1 2 3 4 5 6 7 8 9)
    ```
    
    Same thing with unbalanced classes:
    
    ```common-lisp
    (spread-strata (vector 0 2 3 5 6 1 4)
                   :key (lambda (x)
                          (if (member x '(1 4))
                              t
                              nil)))
    => #(0 1 2 3 4 5 6)
    ```

- [function] **ZIP-EVENLY** *SEQS &KEY RESULT-TYPE*

    Make a single sequence out of the sequences in `SEQS` so that in the
    returned sequence indices of elements belonging to the same source
    sequence are spread evenly across the whole range. The result is a
    list if `RESULT-TYPE` is `LIST`([`0`][79d8] [`1`][6d9f]), and a vector if `RESULT-TYPE` is `VECTOR`([`0`][6098] [`1`][6d31]).
    If `RESULT-TYPE` is `NIL`, then it's determined by the type of the first
    sequence in `SEQS`.
    
    ```common-lisp
    (zip-evenly '((0 2 4) (1 3)))
    => (0 1 2 3 4)
    ```

## 5 Core

###### \[in package MGL-CORE\]

### 5.1 Persistence

- [function] **LOAD-STATE** *FILENAME OBJECT*

    Load weights of `OBJECT` from `FILENAME`. Return `OBJECT`.

- [function] **SAVE-STATE** *FILENAME OBJECT &KEY (IF-EXISTS :ERROR) (ENSURE T)*

    Save weights of `OBJECT` to `FILENAME`. If `ENSURE`, then
    [`ENSURE-DIRECTORIES-EXIST`][876d] is called on `FILENAME`. `IF-EXISTS` is passed
    on to [`OPEN`][6547]. Return `OBJECT`.
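
A round-trip then looks like this (a sketch; `bpn` stands for any
model with learnt weights and is a hypothetical placeholder here):

```common-lisp
;; Save the learnt weights, overwriting any previous snapshot ...
(save-state "checkpoints/model.weights" bpn :if-exists :supersede)
;; ... and later restore them into a model of identical structure.
(load-state "checkpoints/model.weights" bpn)
```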
- [function] **READ-STATE** *OBJECT STREAM*

    Read the weights of `OBJECT` from the bivalent `STREAM` where weights
    mean the learnt parameters. There is currently no sanity checking of
    data, which will most certainly change in the future together with
    the serialization format. Return `OBJECT`.

- [function] **WRITE-STATE** *OBJECT STREAM*

    Write weights of `OBJECT` to the bivalent `STREAM`. Return `OBJECT`.

- [generic-function] **READ-STATE\*** *OBJECT STREAM CONTEXT*

    This is the extension point for [`READ-STATE`][8148]. It is
    guaranteed that primary `READ-STATE*` methods will be called only once
    for each `OBJECT` (under [`EQ`][5a82]). `CONTEXT` is an opaque object and must be
    passed on to any recursive `READ-STATE*` calls.

- [generic-function] **WRITE-STATE\*** *OBJECT STREAM CONTEXT*

    This is the extension point for [`WRITE-STATE`][95fe]. It is
    guaranteed that primary `WRITE-STATE*` methods will be called only
    once for each `OBJECT` (under [`EQ`][5a82]). `CONTEXT` is an opaque object and must
    be passed on to any recursive `WRITE-STATE*` calls.

### 5.2 Batch Processing

Processing instances one by one during training or prediction can
be slow. The models that support batch processing for greater
efficiency are said to be *striped*.

Typically, during or after creating a model, one sets [`MAX-N-STRIPES`][16c4]
on it to a positive integer. When a batch of instances is to be fed to
the model, it is first broken into subbatches of length at
most `MAX-N-STRIPES`. For each subbatch, [`SET-INPUT`][0c9e] (FIXDOC) is called
and a before method takes care of setting [`N-STRIPES`][8dd7] to the actual
number of instances in the subbatch. When `MAX-N-STRIPES` is set,
internal data structures may be resized, which is an expensive
operation. Setting `N-STRIPES` is a comparatively cheap operation,
often implemented as matrix reshaping.

Note that for models made of different parts (for example,
[`MGL-BP:BPN`][5187] consists of [`MGL-BP:LUMP`][c1ac]s), setting these
values affects the constituent parts, but one should never change
the number of stripes of the parts directly because that would lead to
an internal inconsistency in the model.

- [generic-function] **MAX-N-STRIPES** *OBJECT*

    The number of stripes with which the `OBJECT` is
    capable of dealing simultaneously.

- [generic-function] **SET-MAX-N-STRIPES** *MAX-N-STRIPES OBJECT*

    Allocate the necessary stuff to allow for
    `MAX-N-STRIPES` number of stripes to be worked with simultaneously in
    `OBJECT`. This is called when `MAX-N-STRIPES` is [`SETF`][a138]'ed.

- [generic-function] **N-STRIPES** *OBJECT*

    The number of stripes currently present in `OBJECT`.
    This is at most [`MAX-N-STRIPES`][16c4].

- [generic-function] **SET-N-STRIPES** *N-STRIPES OBJECT*

    Set the number of stripes (out of [`MAX-N-STRIPES`][16c4])
    that are in use in `OBJECT`. This is called when `N-STRIPES` is
    [`SETF`][a138]'ed.

- [macro] **WITH-STRIPES** *SPECS &BODY BODY*

    Bind start and optionally end indices belonging to stripes in
    striped objects.
    
        (WITH-STRIPES ((STRIPE1 OBJECT1 START1 END1)
                       (STRIPE2 OBJECT2 START2)
                       ...)
         ...)
    
    This is how one's supposed to find the index range corresponding to
    the Nth input in an input lump of a bpn:
    
        (with-stripes ((n input-lump start end))
          (loop for i upfrom start below end
                do (setf (mref (nodes input-lump) i) 0d0)))
    
    Note how the input lump is striped, but the matrix into which we are
    indexing ([`NODES`][cc1c]) is not known to `WITH-STRIPES`.
    In fact, for lumps
    the same stripe indices work with `NODES` and [`MGL-BP:DERIVATIVES`][a81b].

- [generic-function] **STRIPE-START** *STRIPE OBJECT*

    Return the start index of `STRIPE` in some array or
    matrix of `OBJECT`.

- [generic-function] **STRIPE-END** *STRIPE OBJECT*

    Return the end index (exclusive) of `STRIPE` in some
    array or matrix of `OBJECT`.

- [generic-function] **SET-INPUT** *INSTANCES MODEL*

    Set `INSTANCES` as inputs in `MODEL`. `INSTANCES` is
    always a [`SEQUENCE`][ae23] of instances, even for models not capable of batch
    operation. It sets [`N-STRIPES`][8dd7] to ([`LENGTH`][2f78] `INSTANCES`) in a `:BEFORE`
    method.

- [function] **MAP-BATCHES-FOR-MODEL** *FN DATASET MODEL*

    Call `FN` with batches of instances from `DATASET` suitable for `MODEL`.
    The number of instances in a batch is [`MAX-N-STRIPES`][16c4] of `MODEL` or less
    if there are no more instances left.

- [macro] **DO-BATCHES-FOR-MODEL** *(BATCH (DATASET MODEL)) &BODY BODY*

    Convenience macro over [`MAP-BATCHES-FOR-MODEL`][5fdc].
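
In use it looks something like this (a sketch; `model` and `dataset`
are hypothetical placeholders, and the body stands in for whatever
per-batch work one does):

```common-lisp
(do-batches-for-model (batch (dataset model))
  ;; Each BATCH has at most MAX-N-STRIPES instances; SET-INPUT's
  ;; :BEFORE method sets N-STRIPES to the actual batch length.
  (set-input batch model))
```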
### 5.3 Executors

- [generic-function] **MAP-OVER-EXECUTORS** *FN INSTANCES PROTOTYPE-EXECUTOR*

    Divide `INSTANCES` between executors that perform the
    same function as `PROTOTYPE-EXECUTOR` and call `FN` with the instances
    and the executor that the instances are for.
    
    Some objects conflate function and call: the forward pass of a
    [`MGL-BP:BPN`][5187] computes output from inputs, so it is like a
    function, but it also doubles as a function call in the sense that
    the bpn (function) object changes state during the computation of
    the output. Hence not even the forward pass of a bpn is thread safe.
    There is also the restriction that all inputs must be of the same
    size.
    
    For example, if we have a function that builds a bpn for an input of
    a certain size, then we can create a factory that creates bpns for a
    particular call. The factory probably wants to keep the weights the
    same though. In [Parameterized Executor Cache][ada2],
    [`MAKE-EXECUTOR-WITH-PARAMETERS`][331b] is this factory.
    
    Parallelization of execution is another possibility that
    `MAP-OVER-EXECUTORS` allows, but there is no prebuilt solution for it
    yet.
    
    The default implementation simply calls `FN` with `INSTANCES` and
    `PROTOTYPE-EXECUTOR`.

- [macro] **DO-EXECUTORS** *(INSTANCES OBJECT) &BODY BODY*

    Convenience macro on top of [`MAP-OVER-EXECUTORS`][b01b].

#### 5.3.1 Parameterized Executor Cache

- [class] **PARAMETERIZED-EXECUTOR-CACHE-MIXIN**

    Mix this into a model, implement
    [`INSTANCE-TO-EXECUTOR-PARAMETERS`][0078] and [`MAKE-EXECUTOR-WITH-PARAMETERS`][331b],
    and [`DO-EXECUTORS`][f98e] will be able to build executors suitable for
    different instances. The canonical example is using a BPN to compute
    the means and covariances of a gaussian process. Since each
    instance is made of a variable number of observations, the size of
    the input is not constant, thus we have a bpn (an executor) for each
    input dimension (the parameters).

- [generic-function] **MAKE-EXECUTOR-WITH-PARAMETERS** *PARAMETERS CACHE*

    Create a new executor for `PARAMETERS`. `CACHE` is a
    [`PARAMETERIZED-EXECUTOR-CACHE-MIXIN`][d3b2]. In the BPN gaussian process
    example, `PARAMETERS` would be a list of input dimensions.

- [generic-function] **INSTANCE-TO-EXECUTOR-PARAMETERS** *INSTANCE CACHE*

    Return the parameters for an executor able to
    handle `INSTANCE`. Called by [`MAP-OVER-EXECUTORS`][b01b] on `CACHE` (that's a
    [`PARAMETERIZED-EXECUTOR-CACHE-MIXIN`][d3b2]). The returned parameters are
    keys in an [`EQUAL`][3fb5] parameters->executor hash table.

## 6 Monitoring

###### \[in package MGL-CORE\]
When training or applying a model, one often wants to track various
statistics. For example, in the case of training a neural network
with cross-entropy loss, these statistics could be the average
cross-entropy loss itself, classification accuracy, or even the
entire confusion matrix and sparsity levels in hidden layers. Also,
there is the question of what to do with the measured values (log
and forget, add to some counter or a list).

So there may be several phases of operation that we want to keep an
eye on. Let's call these **events**. There can also be many fairly
independent things to do in response to an event. Let's call these
**monitors**. Some monitors are a composition of two operations: one
that extracts some measurements and another that aggregates those
measurements. Let's call these two **measurers** and **counters**,
respectively.

For example, consider training a backpropagation neural network. We
want to look at the state of the network just after the backward
pass. [`MGL-BP:BP-LEARNER`][00a0] has a [`MONITORS`][6202] event hook corresponding to the moment after
backpropagating the gradients.
Suppose we are interested in how the
training cost evolves:

    (push (make-instance 'monitor
                         :measurer (lambda (instances bpn)
                                     (declare (ignore instances))
                                     (mgl-bp:cost bpn))
                         :counter (make-instance 'basic-counter))
          (monitors learner))

During training, this monitor will track the cost of training
examples behind the scenes. If we want to print and reset this
monitor periodically, we can put another monitor on
[`MGL-OPT:ITERATIVE-OPTIMIZER`][8da0]'s [`MGL-OPT:ON-N-INSTANCES-CHANGED`][4f0b]
accessor:

    (push (lambda (optimizer gradient-source n-instances)
            (declare (ignore optimizer))
            (when (zerop (mod n-instances 1000))
              (format t "n-instances: ~S~%" n-instances)
              (dolist (monitor (monitors gradient-source))
                (when (counter monitor)
                  (format t "~A~%" (counter monitor))
                  (reset-counter (counter monitor)))))
          (mgl-opt:on-n-instances-changed optimizer))

Note that the monitor we push can be anything as long as
[`APPLY-MONITOR`][bbdf] is implemented on it with the appropriate signature.
Also note that the [`ZEROP`][ec8b] + `MOD`([`0`][80fa] [`1`][ee86]) logic is fragile, so you will likely
want to use [`MGL-OPT:MONITOR-OPTIMIZATION-PERIODICALLY`][4528] instead of
doing the above.

So that's the general idea. Concrete events are documented where
they are signalled. Often there are task specific utilities that
create a reasonable set of default monitors (see
[Classification Monitors][c573]).

- [function] **APPLY-MONITORS** *MONITORS &REST ARGUMENTS*

    Call [`APPLY-MONITOR`][bbdf] on each monitor in `MONITORS` and `ARGUMENTS`. This
    is how an event is fired.

- [generic-function] **APPLY-MONITOR** *MONITOR &REST ARGUMENTS*

    Apply `MONITOR` to `ARGUMENTS`. This sounds fairly
    generic, because it is. `MONITOR` can be anything, even a simple
    function or symbol, in which case this is just [`CL:APPLY`][d811]. See
    [Monitors][c701] for more.

- [generic-function] **COUNTER** *MONITOR*

    Return an object representing the state of `MONITOR`
    or `NIL`, if it doesn't have any (say because it's a simple logging
    function). Most monitors have counters into which they accumulate
    results until they are printed and reset. See [Counters][be95] for
    more.

- [function] **MONITOR-MODEL-RESULTS** *FN DATASET MODEL MONITORS*

    Call `FN` with batches of instances from `DATASET` until it runs
    out (as in [`DO-BATCHES-FOR-MODEL`][faaa]). `FN` is supposed to apply `MODEL` to
    the batch and return some kind of result (for neural networks, the
    result is the model state itself). Apply `MONITORS` to each batch and
    the result returned by `FN` for that batch.
    Finally, return the list
    of counters of `MONITORS`.
    
    The purpose of this function is to collect various results and
    statistics (such as error measures) efficiently by applying the
    model only once, leaving extraction of quantities of interest from
    the model's results to `MONITORS`.
    
    See the model specific versions of this function such as
    [`MGL-BP:MONITOR-BPN-RESULTS`][0933].

- [generic-function] **MONITORS** *OBJECT*

    Return monitors associated with `OBJECT`. See various
    methods such as [`MONITORS`][6202] for more
    documentation.

### 6.1 Monitors

- [class] **MONITOR**

    A monitor that has another monitor called [`MEASURER`][eb05]
    embedded in it. When this monitor is applied, it applies the
    measurer and passes the returned values to [`ADD-TO-COUNTER`][62de] called on
    its [`COUNTER`][a077] slot. One may further specialize [`APPLY-MONITOR`][bbdf] to change
    that.
    
    This class is useful when the same event monitor is applied
    repeatedly over a period and its results must be aggregated, such as
    when training statistics are being tracked or when predictions are
    being made. Note that the monitor must be compatible with the event
    it handles. That is, the embedded `MEASURER` must be prepared to take
    the arguments that are documented to come with the event.

- [reader] **MEASURER** *[MONITOR][7068] (:MEASURER)*

    This must be a monitor itself, which only means
    that [`APPLY-MONITOR`][bbdf] is defined on it (but see [Monitoring][e668]). The
    returned values are aggregated by [`COUNTER`][5752]. See
    [Measurers][cd3b] for a library of measurers.

- [reader] **COUNTER** *[MONITOR][7068] (:COUNTER)*

    The `COUNTER` of a monitor carries out the
    aggregation of results returned by [`MEASURER`][eb05]. See [Counters][be95]
    for a library of counters.

### 6.2 Measurers

[`MEASURER`][eb05] is a part of [`MONITOR`][7068] objects, an embedded monitor that
computes a specific quantity (e.g. classification accuracy) from the
arguments of the event it is applied to (e.g.
the model results).
Measurers are often implemented by combining some kind of model
specific extractor with a generic measurer function.

All generic measurer functions return their results as multiple
values matching the arguments of [`ADD-TO-COUNTER`][62de] for a counter of a
certain type (see [Counters][be95]) so as to make them easily used in a
`MONITOR`:

    (multiple-value-call #'add-to-counter <some-counter>
                         <call-to-some-measurer>)

The counter class compatible with the measurer this way is noted for
each function.

For a list of measurer functions see [Classification Measurers][0ba7].

### 6.3 Counters

- [generic-function] **ADD-TO-COUNTER** *COUNTER &REST ARGS*

    Add `ARGS` to `COUNTER` in some way. See specialized
    methods for type specific documentation. The kind of arguments to be
    supported is what the measurer functions (see [Measurers][cd3b])
    intended to be paired with the counter return as multiple values.

- [generic-function] **COUNTER-VALUES** *COUNTER*

    Return any number of values representing the state
    of `COUNTER`. See specialized methods for type specific
    documentation.

- [generic-function] **COUNTER-RAW-VALUES** *COUNTER*

    Return any number of values representing the state
    of `COUNTER` in such a way that passing the returned values as
    arguments to [`ADD-TO-COUNTER`][62de] on a fresh instance of the same type
    recreates the original state.

- [generic-function] **RESET-COUNTER** *COUNTER*

    Restore the state of `COUNTER` to what it was just after
    creation.
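
For example, a counter can be snapshotted and reconstructed through
its raw values (a sketch using the `BASIC-COUNTER` class documented
below; the printed form follows its examples):

```common-lisp
(let ((c (make-instance 'basic-counter)))
  (add-to-counter c 6.5 3)
  ;; Replay the raw values into a fresh counter of the same type.
  (let ((copy (make-instance 'basic-counter)))
    (multiple-value-call #'add-to-counter copy (counter-raw-values c))
    copy))
=> #<BASIC-COUNTER 2.16667e+0 (3)>
```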
#### 6.3.1 Attributes

- [class] **ATTRIBUTED**

    This is a utility class that all counters subclass.
    The [`ATTRIBUTES`][cc37] plist can hold basically anything. Currently the
    attributes are only used when printing and they can be specified by
    the user. The monitor maker functions such as those in
    [Classification Monitors][c573] also add attributes of their own to the
    counters they create.
    
    With the `:PREPEND-ATTRIBUTES` initarg one can easily add new
    attributes without clobbering those in the `:INITFORM`, (`:TYPE`
    "rmse") in this case.
    
        (princ (make-instance 'rmse-counter
                              :prepend-attributes '(:event "pred."
                                                    :dataset "test")))
        ;; pred. test rmse: 0.000e+0 (0)
        => #<RMSE-COUNTER pred. test rmse: 0.000e+0 (0)>

- [accessor] **ATTRIBUTES** *[ATTRIBUTED][9715] (:ATTRIBUTES = NIL)*

    A plist of attribute keys and values.

- [method] **NAME** *(ATTRIBUTED ATTRIBUTED)*

    Return a string assembled from the values of the [`ATTRIBUTES`][cc37] of
    `ATTRIBUTED`. If there are multiple entries with the same key, then
    they are printed near together.
    
    Values may be padded according to an enclosing
    [`WITH-PADDED-ATTRIBUTE-PRINTING`][2e8b].

- [macro] **WITH-PADDED-ATTRIBUTE-PRINTING** *(ATTRIBUTEDS) &BODY BODY*

    Note the width of values for each attribute key which is the number
    of characters in the value's [`PRINC-TO-STRING`][a541]'ed representation. In
    `BODY`, if attributes with the same key are printed, they are forced
    to be at least this wide. This allows for nice, table-like output:
    
        (let ((attributeds
                (list (make-instance 'basic-counter
                                     :attributes '(:a 1 :b 23 :c 456))
                      (make-instance 'basic-counter
                                     :attributes '(:a 123 :b 45 :c 6)))))
          (with-padded-attribute-printing (attributeds)
            (map nil (lambda (attributed)
                       (format t "~A~%" attributed))
                 attributeds)))
        ;; 1   23 456: 0.000e+0 (0)
        ;; 123 45 6  : 0.000e+0 (0)

- [function] **LOG-PADDED** *ATTRIBUTEDS*

    Log (see [`LOG-MSG`][f85e]) `ATTRIBUTEDS` non-escaped (as in [`PRINC`][676d] or ~A) with
    the output being as table-like as possible.

#### 6.3.2 Counter classes

In addition to the really basic ones here, also see
[Classification Counters][6598].

- [class] **BASIC-COUNTER** *[ATTRIBUTED][9715]*

    A simple counter whose [`ADD-TO-COUNTER`][62de] takes two
    additional parameters: increments to the internal sums called
    the [`NUMERATOR`][8af5] and [`DENOMINATOR`][5cd8]. [`COUNTER-VALUES`][20e8] returns two
    values:
    
    - `NUMERATOR` divided by `DENOMINATOR` (or 0 if `DENOMINATOR` is 0) and
    
    - `DENOMINATOR`
    
    Here is an example that computes the mean of 5 things received in two
    batches:
    
        (let ((counter (make-instance 'basic-counter)))
          (add-to-counter counter 6.5 3)
          (add-to-counter counter 3.5 2)
          counter)
        => #<BASIC-COUNTER 2.00000e+0 (5)>

- [class] **RMSE-COUNTER** *[BASIC-COUNTER][5979]*

    A [`BASIC-COUNTER`][5979] whose numerator accumulates
    the square of some statistics.
\n\n\u003Ca id=\"x-28MGL-CORE-3ARMSE-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **RMSE-COUNTER** *[BASIC-COUNTER][5979]*\n\n    A [`BASIC-COUNTER`][5979] whose numerator accumulates\n    the square of some statistics. It has the attribute `:TYPE` \"rmse\".\n    [`COUNTER-VALUES`][20e8] returns the square root of what `BASIC-COUNTER`'s\n    `COUNTER-VALUES` would return.\n    \n        (let ((counter (make-instance 'rmse-counter)))\n          (add-to-counter counter (+ (* 3 3) (* 4 4)) 2)\n          counter)\n        => #\u003CRMSE-COUNTER rmse: 3.53553e+0 (2)>\n\n\u003Ca id=\"x-28MGL-CORE-3ACONCAT-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **CONCAT-COUNTER** *[ATTRIBUTED][9715]*\n\n    A counter that simply concatenates\n    sequences.\n    \n    ```common-lisp\n    (let ((counter (make-instance 'concat-counter)))\n      (add-to-counter counter '(1 2 3) #(4 5))\n      (add-to-counter counter '(6 7))\n      (counter-values counter))\n    => (1 2 3 4 5 6 7)\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3ACONCATENATION-TYPE-20-28MGL-PAX-3AREADER-20MGL-CORE-3ACONCAT-COUNTER-29-29\">\u003C\u002Fa>\n\n- [reader] **CONCATENATION-TYPE** *[CONCAT-COUNTER][0f83] (:CONCATENATION-TYPE = 'LIST)*\n\n    A type designator suitable as the RESULT-TYPE\n    argument to [`CONCATENATE`][2ecb].\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 7 Classification\n\n###### \\[in package MGL-CORE\\]\nTo be able to measure classification related quantities, we need to\ndefine what the label of an instance is. Customization is possible\nby implementing a method for a specific type of instance, but these\nfunctions only ever appear as defaults that can be overridden.\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDEX-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **LABEL-INDEX** *INSTANCE*\n\n    Return the label of `INSTANCE` as a non-negative\n    integer.\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTION-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **LABEL-INDEX-DISTRIBUTION** *INSTANCE*\n\n    Return a one dimensional array of probabilities\n    representing the distribution of labels. The probability of the\n    label with [`LABEL-INDEX`][cc80] `I` is the element at index `I` of the returned\n    array.\n\nThe following two functions are basically the same as the previous\ntwo, but in batch mode: they return a sequence of label indices or\ndistributions. These are called on results produced by models.\nImplement these for a model and the monitor maker functions below\nwill automatically work. See FIXDOC: for bpn and boltzmann.\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDICES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **LABEL-INDICES** *RESULTS*\n\n    Return a sequence of label indices for `RESULTS`\n    produced by some model for a batch of instances. This is akin to\n    [`LABEL-INDEX`][cc80].\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTIONS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **LABEL-INDEX-DISTRIBUTIONS** *RESULT*\n\n    Return a sequence of label index distributions for\n    `RESULTS` produced by some model for a batch of instances. This is\n    akin to [`LABEL-INDEX-DISTRIBUTION`][caec].\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MONITOR-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 7.1 Classification Monitors\n\nThe following functions return a list of monitors. The monitors are\nfor events of signature (`INSTANCES` `MODEL`) such as those produced by\n[`MONITOR-MODEL-RESULTS`][e50c] and its various model specific variations.\nThey are model-agnostic functions, extensible to new classifier\ntypes.
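\n\nFor instance, a minimal sketch of their use (assuming a hypothetical\n`*MODEL*` for which the monitor makers below are supported, and\nassuming each monitor's counter is accessible with the `COUNTER`\nreader of `MONITOR`):\n\n```common-lisp\n(let ((monitors (make-classification-accuracy-monitors\n                 *model* :attributes '(:dataset \"test\"))))\n  ;; ... apply the monitors to (INSTANCES MODEL) events, for example\n  ;; with MONITOR-MODEL-RESULTS, so that their counters get updated ...\n  (log-padded (mapcar #'counter monitors)))\n```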
\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-CLASSIFICATION-ACCURACY-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-FN \\#'LABEL-INDEX)*\n\n    Return a list of [`MONITOR`][7068] objects associated with\n    [`CLASSIFICATION-ACCURACY-COUNTER`][430d]s. `LABEL-INDEX-FN` is a function\n    like [`LABEL-INDEX`][cc80]. See that function for more.\n    \n    Implemented in terms of [`MAKE-CLASSIFICATION-ACCURACY-MONITORS*`][2aa3].\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-CROSS-ENTROPY-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-DISTRIBUTION-FN \\#'LABEL-INDEX-DISTRIBUTION)*\n\n    Return a list of [`MONITOR`][7068] objects associated with\n    [`CROSS-ENTROPY-COUNTER`][b186]s. `LABEL-INDEX-DISTRIBUTION-FN` is a\n    function like [`LABEL-INDEX-DISTRIBUTION`][caec]. See that function for more.\n    \n    Implemented in terms of [`MAKE-CROSS-ENTROPY-MONITORS*`][e46f].\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-LABEL-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-LABEL-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-FN \\#'LABEL-INDEX) (LABEL-INDEX-DISTRIBUTION-FN \\#'LABEL-INDEX-DISTRIBUTION)*\n\n    Return classification accuracy and cross-entropy monitors. See\n    [`MAKE-CLASSIFICATION-ACCURACY-MONITORS`][911c] and\n    [`MAKE-CROSS-ENTROPY-MONITORS`][6004] for a description of parameters.\n\nThe monitor makers above can be extended to support new classifier\ntypes via the following generic functions.\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAKE-CLASSIFICATION-ACCURACY-MONITORS\\*** *MODEL OPERATION-MODE LABEL-INDEX-FN ATTRIBUTES*\n\n    Identical to [`MAKE-CLASSIFICATION-ACCURACY-MONITORS`][911c]\n    bar the keyword arguments. Specialize this to add support for\n    new model types. The default implementation also allows for some\n    extensibility: if [`LABEL-INDICES`][31ed] is defined on `MODEL`, then it will be\n    used to extract label indices from model results.\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAKE-CROSS-ENTROPY-MONITORS\\*** *MODEL OPERATION-MODE LABEL-INDEX-DISTRIBUTION-FN ATTRIBUTES*\n\n    Identical to [`MAKE-CROSS-ENTROPY-MONITORS`][6004] bar the\n    keyword arguments. Specialize this to add support for new model\n    types. The default implementation also allows for some\n    extensibility: if [`LABEL-INDEX-DISTRIBUTIONS`][9385] is defined on `MODEL`,\n    then it will be used to extract label distributions from model\n    results.\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MEASURER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 7.2 Classification Measurers\n\nThe functions here compare some known good solution (also known as\n*ground truth* or *target*) to a prediction or approximation and\nreturn some measure of their \\[dis\\]similarity. They are model\nindependent, hence one has to extract the ground truths and\npredictions first. Rarely used directly, they are mostly hidden\nbehind [Classification Monitors][c573].
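\n\nAs a concrete toy illustration of the measurer\u002Fcounter pairing from\n[Counters][be95], with literal label sequences standing in for the\nextracted ground truths and predictions (3 of the 4 predictions are\ncorrect):\n\n```common-lisp\n(let ((counter (make-instance 'classification-accuracy-counter)))\n  (multiple-value-call #'add-to-counter counter\n    (measure-classification-accuracy '(:a :b :b :c) '(:a :b :c :c)))\n  (counter-values counter))\n=> 3\u002F4\n   4\n```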
\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-CLASSIFICATION-ACCURACY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MEASURE-CLASSIFICATION-ACCURACY** *TRUTHS PREDICTIONS &KEY (TEST \\#'EQL) TRUTH-KEY PREDICTION-KEY WEIGHT*\n\n    Return the number of correct classifications and as the second\n    value the number of instances (equal to length of `TRUTHS` in the\n    non-weighted case). `TRUTHS` (keyed by `TRUTH-KEY`) is a sequence of\n    opaque class labels compared with `TEST` to another sequence of\n    class labels in `PREDICTIONS` (keyed by `PREDICTION-KEY`). If `WEIGHT`\n    is non-nil, then it is a function that returns the weight of an\n    element of `TRUTHS`. Weighted cases add their weight to both\n    counts (returned as the first and second values) instead of 1 as in\n    the non-weighted case.\n    \n    Note how the returned values are suitable for [`MULTIPLE-VALUE-CALL`][e4dd]\n    with #'[`ADD-TO-COUNTER`][62de] and a [`CLASSIFICATION-ACCURACY-COUNTER`][430d].\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-CROSS-ENTROPY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MEASURE-CROSS-ENTROPY** *TRUTHS PREDICTIONS &KEY TRUTH-KEY PREDICTION-KEY (MIN-PREDICTION-PR 1.0d-15)*\n\n    Return the sum of the cross-entropy between pairs of elements with\n    the same index of `TRUTHS` and `PREDICTIONS`. `TRUTH-KEY` is a function\n    that, when applied to an element of `TRUTHS`, returns a sequence\n    representing some kind of discrete target distribution (P in the\n    definition below). `TRUTH-KEY` may be `NIL` which is equivalent to the\n    [`IDENTITY`][8ae0] function. `PREDICTION-KEY` is the same kind of key for\n    `PREDICTIONS`, but the sequence it returns represents a distribution\n    that approximates (Q below) the true one.\n    \n    Cross-entropy of the true and approximating distributions is defined\n    as:\n    \n        cross-entropy(p,q) = - sum_i p(i) * log(q(i))\n    \n    of which this function returns the sum over the pairs of elements of\n    `TRUTHS` and `PREDICTIONS` keyed by `TRUTH-KEY` and `PREDICTION-KEY`.\n    \n    Due to the logarithm, if q(i) is close to zero, we run into\n    numerical problems. To prevent this, all q(i) that are less than\n    `MIN-PREDICTION-PR` are treated as if they were `MIN-PREDICTION-PR`.\n    \n    The second value returned is the sum of p(i) over all `TRUTHS` and all\n    `I`.
This is normally equal to `(LENGTH TRUTHS)`, since elements of\n    `TRUTHS` represent a probability distribution, but this is not\n    enforced, which allows the relative importance of elements to be\n    controlled.\n    \n    The third value returned is a plist that maps each index occurring\n    in the distribution sequences to a list of two elements:\n    \n        - sum_j p_j(i) * log(q_j(i))\n    \n    and\n    \n        sum_j p_j(i)\n    \n    where `J` indexes into `TRUTHS` and `PREDICTIONS`.\n    \n        (measure-cross-entropy '((0 1 0)) '((0.1 0.7 0.2)))\n        => 0.35667497\n           1\n           (2 (0.0 0)\n            1 (0.35667497 1)\n            0 (0.0 0))\n    \n    Note how the returned values are suitable for [`MULTIPLE-VALUE-CALL`][e4dd]\n    with #'[`ADD-TO-COUNTER`][62de] and a [`CROSS-ENTROPY-COUNTER`][b186].\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-ROC-AUC-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MEASURE-ROC-AUC** *PREDICTIONS PRED &KEY (KEY \\#'IDENTITY) WEIGHT*\n\n    Return the area under the ROC curve for `PREDICTIONS` representing\n    predictions for a binary classification problem. `PRED` is a predicate\n    function for deciding whether a prediction belongs to the so called\n    positive class. `KEY` returns a number for each element which is the\n    predictor's idea of how likely that element is to belong to the\n    class, although it's not necessarily a probability.\n    \n    If `WEIGHT` is `NIL`, then all elements of `PREDICTIONS` count as 1\n    towards the unnormalized sum within AUC. Else `WEIGHT` must be a\n    function like `KEY`, but it should return the importance (a positive\n    real number) of elements. If the weight of a prediction is 2 then\n    it's as if there were another identical copy of that prediction in\n    `PREDICTIONS`.\n    \n    The algorithm is based on algorithm 2 in the paper 'An introduction\n    to ROC analysis' by Tom Fawcett.\n    \n    ROC AUC is equal to the probability of a randomly chosen positive\n    having higher `KEY` (score) than a randomly chosen negative element.\n    With equal scores in mind, a more precise version is: AUC is the\n    expectation of the above probability over all possible sequences\n    sorted by scores.
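\n\nFor example, a sketch with four scored predictions for a binary\nproblem, where each prediction is a (LABEL . SCORE) cons, `PRED` picks\nout the positive class and `KEY` extracts the score:\n\n```common-lisp\n(measure-roc-auc '((:pos . 0.9) (:neg . 0.6) (:pos . 0.4) (:neg . 0.1))\n                 (lambda (prediction)\n                   (eq (car prediction) :pos))\n                 :key #'cdr)\n```\n\nThe result is 3\u002F4: of the four positive\u002Fnegative pairs, three have\nthe positive scored higher than the negative.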
\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-CONFUSION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MEASURE-CONFUSION** *TRUTHS PREDICTIONS &KEY (TEST \\#'EQL) TRUTH-KEY PREDICTION-KEY WEIGHT*\n\n    Create a [`CONFUSION-MATRIX`][60d2] from `TRUTHS` and `PREDICTIONS`.\n    `TRUTHS` (keyed by `TRUTH-KEY`) is a sequence of class labels compared\n    with `TEST` to another sequence of class labels in `PREDICTIONS` (keyed\n    by `PREDICTION-KEY`). If `WEIGHT` is non-nil, then it is a function that\n    returns the weight of an element of `TRUTHS`. Weighted cases add their\n    weight to the corresponding count in the matrix instead of 1.\n    \n    Note how the returned confusion matrix can be added to another with\n    [`ADD-TO-COUNTER`][62de].\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-COUNTER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 7.3 Classification Counters\n\n\u003Ca id=\"x-28MGL-CORE-3ACLASSIFICATION-ACCURACY-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **CLASSIFICATION-ACCURACY-COUNTER** *[BASIC-COUNTER][5979]*\n\n    A [`BASIC-COUNTER`][5979] with \"acc.\" as its `:TYPE`\n    attribute and a [`PRINT-OBJECT`][3f2e] method that prints percentages.\n\n\u003Ca id=\"x-28MGL-CORE-3ACROSS-ENTROPY-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **CROSS-ENTROPY-COUNTER** *[BASIC-COUNTER][5979]*\n\n    A [`BASIC-COUNTER`][5979] with \"xent\" as its `:TYPE`\n    attribute.\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CONFUSION-MATRIX-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 7.3.1 Confusion Matrices\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **CONFUSION-MATRIX**\n\n    A confusion matrix keeps count of classification\n    results. The correct class is called the `target` and the output of\n    the classifier is called the `prediction`.\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CONFUSION-MATRIX-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-CONFUSION-MATRIX** *&KEY (TEST \\#'EQL)*\n\n    Classes are compared with `TEST`.\n\n\u003Ca id=\"x-28MGL-CORE-3ASORT-CONFUSION-CLASSES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **SORT-CONFUSION-CLASSES** *MATRIX CLASSES*\n\n    Return a list of `CLASSES` sorted for presentation\n    purposes.\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-CLASS-NAME-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **CONFUSION-CLASS-NAME** *MATRIX CLASS*\n\n    Name of `CLASS` for presentation purposes.\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-COUNT-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **CONFUSION-COUNT** *MATRIX TARGET PREDICTION*\n\n\u003Ca id=\"x-28MGL-CORE-3AMAP-CONFUSION-MATRIX-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAP-CONFUSION-MATRIX** *FN MATRIX*\n\n    Call `FN` with `TARGET`, `PREDICTION` and\n    [`COUNT`][3155] parameters for each cell in the confusion matrix. Cells with a\n    zero count may be omitted.\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-CLASSES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **CONFUSION-MATRIX-CLASSES** *MATRIX*\n\n    A list of all classes. The default is to collect\n    classes from the counts. This can be overridden if, for instance,\n    some classes are not present in the results.\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-ACCURACY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **CONFUSION-MATRIX-ACCURACY** *MATRIX &KEY FILTER*\n\n    Return the overall accuracy of the results in `MATRIX`. It's computed\n    as the number of correctly classified cases (hits) divided by the\n    number of cases. Return the number of hits and the number of cases as\n    the second and third values. If the `FILTER` function is given, then call\n    it with the target and the prediction of the cell. Disregard cells\n    for which `FILTER` returns `NIL`.\n    \n    Precision and recall can be easily computed by giving the right\n    filter, although those are provided in separate convenience\n    functions.
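\n\nFor instance, a sketch of computing precision for class `:A` by hand\nwith a `FILTER` that keeps only the cells where the prediction was `:A`\n(this is what `CONFUSION-MATRIX-PRECISION` below does):\n\n```common-lisp\n(let ((matrix (measure-confusion '(:a :a :b) '(:a :b :a))))\n  (confusion-matrix-accuracy matrix\n                             :filter (lambda (target prediction)\n                                       (declare (ignore target))\n                                       (eq prediction :a))))\n;; of the two :A predictions, one was correct, so the accuracy is 1\u002F2\n```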
\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-PRECISION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **CONFUSION-MATRIX-PRECISION** *MATRIX PREDICTION*\n\n    Return the accuracy over the cases when the classifier said\n    `PREDICTION`.\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-RECALL-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **CONFUSION-MATRIX-RECALL** *MATRIX TARGET*\n\n    Return the accuracy over the cases when the correct class is\n    `TARGET`.\n\n\u003Ca id=\"x-28MGL-CORE-3AADD-CONFUSION-MATRIX-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **ADD-CONFUSION-MATRIX** *MATRIX RESULT-MATRIX*\n\n    Add `MATRIX` into `RESULT-MATRIX`.\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-FEATURES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 8 Features\n\n###### \\[in package MGL-CORE\\]\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-FEATURE-SELECTION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 8.1 Feature Selection\n\nThe following *scoring functions* all return an [`EQUAL`][3fb5] hash table\nthat maps features to scores.\n\n\u003Ca id=\"x-28MGL-CORE-3ACOUNT-FEATURES-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **COUNT-FEATURES** *DOCUMENTS MAPPER &KEY (KEY \\#'IDENTITY)*\n\n    Return scored features as an [`EQUAL`][3fb5] hash table whose keys are\n    features of `DOCUMENTS` and values are counts of occurrences of\n    features. `MAPPER` takes a function and a document and calls function\n    with features of the document.\n    \n    ```common-lisp\n    (sort (alexandria:hash-table-alist\n           (count-features '((\"hello\" \"world\")\n                             (\"this\" \"is\" \"our\" \"world\"))\n                           (lambda (fn document)\n                             (map nil fn document))))\n          #'string\u003C :key #'car)\n    => ((\"hello\" . 1) (\"is\" . 1) (\"our\" . 1) (\"this\" . 1) (\"world\" . 2))\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3AFEATURE-LLRS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **FEATURE-LLRS** *DOCUMENTS MAPPER CLASS-FN &KEY (CLASSES (ALL-DOCUMENT-CLASSES DOCUMENTS CLASS-FN))*\n\n    Return scored features as an [`EQUAL`][3fb5] hash table whose keys are\n    features of `DOCUMENTS` and values are their log likelihood ratios.\n    `MAPPER` takes a function and a document and calls function with\n    features of the document.\n    \n    ```common-lisp\n    (sort (alexandria:hash-table-alist\n           (feature-llrs '((:a \"hello\" \"world\")\n                           (:b \"this\" \"is\" \"our\" \"world\"))\n                         (lambda (fn document)\n                           (map nil fn (rest document)))\n                         #'first))\n          #'string\u003C :key #'car)\n    => ((\"hello\" . 2.6032386) (\"is\" . 2.6032386) (\"our\" . 2.6032386)\n        (\"this\" . 2.6032386) (\"world\" . 4.8428774e-8))\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3AFEATURE-DISAMBIGUITIES-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **FEATURE-DISAMBIGUITIES** *DOCUMENTS MAPPER CLASS-FN &KEY (CLASSES (ALL-DOCUMENT-CLASSES DOCUMENTS CLASS-FN))*\n\n    Return scored features as an [`EQUAL`][3fb5] hash table whose keys are\n    features of `DOCUMENTS` and values are their *disambiguities*.
`MAPPER`\n    takes a function and a document and calls function with features of\n    the document.\n    \n    From the paper 'Using Ambiguity Measure Feature Selection Algorithm\n    for Support Vector Machine Classifier'.\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-FEATURE-ENCODING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 8.2 Feature Encoding\n\nFeatures can rarely be fed directly to algorithms as is; they need\nto be transformed in some way. Suppose we have a simple language\nmodel that takes a single word as input and predicts the next word.\nHowever, both input and output are to be encoded as float vectors of\nlength 1000. What we do is find the top 1000 words by some\nmeasure (see [Feature Selection][1b5e]) and associate these words with\nthe integers in \\[0..999\\] (this is [`ENCODE`][fedd]ing). By using for\nexample [one-hot](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FOne-hot) encoding, we\ntranslate a word into a float vector when passing in the input. When\nthe model outputs the probability distribution of the next word, we\nfind the index of the max and find the word associated with it (this\nis [`DECODE`][1339]ing).\n\n\u003Ca id=\"x-28MGL-CORE-3AENCODE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **ENCODE** *ENCODER DECODED*\n\n    Encode `DECODED` with `ENCODER`. This interface is\n    generic enough to be almost meaningless. See [`ENCODER\u002FDECODER`][1beb] for a\n    simple, [`MGL-NLP:BAG-OF-WORDS-ENCODER`][cbb4] for a slightly more involved\n    example.\n    \n    If `ENCODER` is a function designator, then it's simply [`FUNCALL`][03c7]ed\n    with `DECODED`.\n\n\u003Ca id=\"x-28MGL-CORE-3ADECODE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **DECODE** *DECODER ENCODED*\n\n    Decode `ENCODED` with `DECODER`. For a `DECODER` \u002F\n    `ENCODER` pair, `(DECODE DECODER (ENCODE ENCODER OBJECT))` must be\n    equal in some sense to `OBJECT`.\n    \n    If `DECODER` is a function designator, then it's simply [`FUNCALL`][03c7]ed\n    with `ENCODED`.\n\n\u003Ca id=\"x-28MGL-CORE-3AENCODER-2FDECODER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **ENCODER\u002FDECODER**\n\n    Implements O(1) [`ENCODE`][fedd] and [`DECODE`][1339] by having an\n    internal decoded-to-encoded and an encoded-to-decoded [`EQUAL`][3fb5] hash\n    table. `ENCODER\u002FDECODER` objects can be saved and loaded (see\n    [Persistence][29a1]) as long as the elements in the hash tables have\n    read\u002Fwrite consistency.\n    \n    ```common-lisp\n    (let ((indexer\n            (make-indexer\n             (alexandria:alist-hash-table '((\"I\" . 3) (\"me\" . 2) (\"mine\" . 1)))\n             2)))\n      (values (encode indexer \"I\")\n              (encode indexer \"me\")\n              (encode indexer \"mine\")\n              (decode indexer 0)\n              (decode indexer 1)\n              (decode indexer 2)))\n    => 0\n    => 1\n    => NIL\n    => \"I\"\n    => \"me\"\n    => NIL\n    ```
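\n\nTo connect this to the one-hot encoding mentioned above, here is a\nsketch in plain Lisp (`ONE-HOT` is not part of MGL) that turns the\nindex produced by an indexer into a float vector:\n\n```common-lisp\n;; Using the indexer from the previous example, (one-hot indexer \"I\" 2)\n;; evaluates to #(1.0 0.0) and (one-hot indexer \"mine\" 2) to #(0.0 0.0)\n;; since \"mine\" did not make the top 2 features.\n(defun one-hot (indexer word length)\n  (let ((vector (make-array length :element-type 'single-float\n                                   :initial-element 0.0))\n        (index (encode indexer word)))\n    (when index\n      (setf (aref vector index) 1.0))\n    vector))\n```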
\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-INDEXER-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-INDEXER** *SCORED-FEATURES N &KEY (START 0) (CLASS 'ENCODER\u002FDECODER)*\n\n    Take the top `N` features from `SCORED-FEATURES` (see\n    [Feature Selection][1b5e]), assign indices to them starting from `START`.\n    Return an [`ENCODER\u002FDECODER`][1beb] (or another `CLASS`) that converts between\n    objects and indices.\n\nAlso see [Bag of Words][0784].\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 9 Gradient Based Optimization\n\n###### \\[in package MGL-OPT\\]\nWe have a real valued, differentiable function F and the task is to\nfind the parameters that minimize its value. Optimization starts\nfrom a single point in the parameter space of F, and this single\npoint is updated iteratively based on the gradient and value of F at\nor around the current point.\n\nNote that while the stated problem is that of global optimization,\nfor non-convex functions, most algorithms will tend to converge to a\nlocal optimum.\n\nCurrently, there are two optimization algorithms:\n[Gradient Descent][10e7] (with several variants) and [Conjugate Gradient][83e6] both of\nwhich are first order methods (they do not need second order\ngradients) but more can be added with the [Extension API][6a6f].\n\n\u003Ca id=\"x-28MGL-OPT-3AMINIMIZE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MINIMIZE** *OPTIMIZER GRADIENT-SOURCE &KEY (WEIGHTS (LIST-SEGMENTS GRADIENT-SOURCE)) (DATASET \\*INFINITELY-EMPTY-DATASET\\*)*\n\n    Minimize the value of the real valued function represented by\n    `GRADIENT-SOURCE` by updating some of its parameters in `WEIGHTS` (a `MAT`\n    or a sequence of `MAT`s). Return `WEIGHTS`. `DATASET` (see\n    [Datasets][109e]) is a set of unoptimized parameters of the\n    same function. For example, `WEIGHTS` may be the weights of a neural\n    network while `DATASET` is the training set consisting of inputs\n    suitable for [`SET-INPUT`][0c9e]. The default\n    `DATASET` ([`*INFINITELY-EMPTY-DATASET*`][ad8f]) is suitable for when all\n    parameters are optimized, so there is nothing left to come from the\n    environment.\n    \n    Optimization terminates if `DATASET` is a sampler and it runs out or\n    when some other condition is met (see [`TERMINATION`][9006], for example).
If\n    `DATASET` is a [`SEQUENCE`][ae23], then it is reused over and over again.\n    \n    Examples for various optimizers are provided in [Gradient Descent][10e7] and\n    [Conjugate Gradient][83e6].\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-ITERATIVE-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.1 Iterative Optimizer\n\n\u003Ca id=\"x-28MGL-OPT-3AITERATIVE-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **ITERATIVE-OPTIMIZER**\n\n    An abstract base class of [Gradient Descent][10e7] and\n    [Conjugate Gradient][83e6] based optimizers that iterate over instances until a\n    termination condition is met.\n\n\u003Ca id=\"x-28MGL-OPT-3AN-INSTANCES-20-28MGL-PAX-3AREADER-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [reader] **N-INSTANCES** *[ITERATIVE-OPTIMIZER][8da0] (:N-INSTANCES = 0)*\n\n    The number of instances this optimizer has seen so\n    far. Incremented automatically during optimization.\n\n\u003Ca id=\"x-28MGL-OPT-3ATERMINATION-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **TERMINATION** *[ITERATIVE-OPTIMIZER][8da0] (:TERMINATION = NIL)*\n\n    If a number, it's the number of instances to train\n    on in the sense of [`N-INSTANCES`][4c73]. If `N-INSTANCES` is equal to or greater\n    than this value, optimization stops. If `TERMINATION` is `NIL`, then\n    optimization will continue. If it is `T`, then optimization will\n    stop. If it is a function of no arguments, then its return value\n    is processed as if it was returned by `TERMINATION`.\n\n\u003Ca id=\"x-28MGL-OPT-3AON-OPTIMIZATION-STARTED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **ON-OPTIMIZATION-STARTED** *[ITERATIVE-OPTIMIZER][8da0] (:ON-OPTIMIZATION-STARTED = NIL)*\n\n    An event hook with parameters `(OPTIMIZER\n    GRADIENT-SOURCE N-INSTANCES)`. Called after initializations are\n    performed (INITIALIZE-OPTIMIZER*, INITIALIZE-GRADIENT-SOURCE*) but\n    before optimization is started.\n\n\u003Ca id=\"x-28MGL-OPT-3AON-OPTIMIZATION-FINISHED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **ON-OPTIMIZATION-FINISHED** *[ITERATIVE-OPTIMIZER][8da0] (:ON-OPTIMIZATION-FINISHED = NIL)*\n\n    An event hook with parameters `(OPTIMIZER\n    GRADIENT-SOURCE N-INSTANCES)`. Called when optimization has\n    finished.\n\n\u003Ca id=\"x-28MGL-OPT-3AON-N-INSTANCES-CHANGED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **ON-N-INSTANCES-CHANGED** *[ITERATIVE-OPTIMIZER][8da0] (:ON-N-INSTANCES-CHANGED = NIL)*\n\n    An event hook with parameters `(OPTIMIZER\n    GRADIENT-SOURCE N-INSTANCES)`. Called when optimization of a batch\n    of instances is done and [`N-INSTANCES`][4c73] is incremented.\n\nNow let's discuss a few handy utilities.\n\n\u003Ca id=\"x-28MGL-OPT-3AMONITOR-OPTIMIZATION-PERIODICALLY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MONITOR-OPTIMIZATION-PERIODICALLY** *OPTIMIZER PERIODIC-FNS*\n\n    For each periodic function in the list of `PERIODIC-FNS`, add a\n    monitor to `OPTIMIZER`'s [`ON-OPTIMIZATION-STARTED`][ebd4],\n    [`ON-OPTIMIZATION-FINISHED`][0072] and [`ON-N-INSTANCES-CHANGED`][4f0b] hooks.
The\n    monitors are simple functions that just call each periodic function\n    with the event parameters (`OPTIMIZER` `GRADIENT-SOURCE` [`N-INSTANCES`][4c73]).\n    Return `OPTIMIZER`.\n    \n    To log the test error every 2000 instances and reset the monitors\n    of the gradient source after every 1000 instances seen by\n    `OPTIMIZER`:\n    \n        (monitor-optimization-periodically optimizer\n                                           '((:fn log-my-test-error\n                                              :period 2000)\n                                             (:fn reset-optimization-monitors\n                                              :period 1000\n                                              :last-eval 0)))\n    \n    Note that it's allowed to just pass the initargs for a\n    `PERIODIC-FN` instead of a `PERIODIC-FN` object itself. The `:LAST-EVAL` 0 bit\n    prevents [`RESET-OPTIMIZATION-MONITORS`][ca09] from being called at the start\n    of the optimization when the monitors are empty anyway.\n\n\u003Ca id=\"x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **RESET-OPTIMIZATION-MONITORS** *OPTIMIZER GRADIENT-SOURCE*\n\n    Report the state of [`MONITORS`][8f37] of\n    `OPTIMIZER` and `GRADIENT-SOURCE` and reset their counters. See\n    [`MONITOR-OPTIMIZATION-PERIODICALLY`][4528] for an example of how this is\n    used.\n\n\u003Ca id=\"x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20-28METHOD-20-28MGL-OPT-3AITERATIVE-OPTIMIZER-20T-29-29-29\">\u003C\u002Fa>\n\n- [method] **RESET-OPTIMIZATION-MONITORS** *(OPTIMIZER ITERATIVE-OPTIMIZER) GRADIENT-SOURCE*\n\n    Log the counters of the monitors of `OPTIMIZER` and `GRADIENT-SOURCE`\n    and reset them.\n\n\u003Ca id=\"x-28MGL-OPT-3AREPORT-OPTIMIZATION-PARAMETERS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **REPORT-OPTIMIZATION-PARAMETERS** *OPTIMIZER GRADIENT-SOURCE*\n\n    A utility that's often called at the start of\n    optimization (from [`ON-OPTIMIZATION-STARTED`][ebd4]). The default\n    implementation logs the description of `GRADIENT-SOURCE` (as in\n    [`DESCRIBE`][6651]) and `OPTIMIZER` and calls [`LOG-MAT-ROOM`][ea7d].\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-COST-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.2 Cost Function\n\nThe function being minimized is often called the *cost* or the\n*loss* function.\n\n\u003Ca id=\"x-28MGL-COMMON-3ACOST-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **COST** *MODEL*\n\n    Return the value of the cost function being\n    minimized. Calling this only makes sense in the context of an\n    ongoing optimization (see [`MINIMIZE`][46a4]). The cost is that of a batch of\n    instances.\n\n\u003Ca id=\"x-28MGL-OPT-3AMAKE-COST-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-COST-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES*\n\n    Return a list of [`MONITOR`][7068] objects, each associated with one\n    [`BASIC-COUNTER`][5979] with attribute `:TYPE` \"cost\". Implemented in terms of\n    [`MAKE-COST-MONITORS*`][3815].\n\n\u003Ca id=\"x-28MGL-OPT-3AMAKE-COST-MONITORS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAKE-COST-MONITORS\\*** *MODEL OPERATION-MODE ATTRIBUTES*\n\n    Identical to [`MAKE-COST-MONITORS`][46c2] bar the keyword\n    arguments. Specialize this to add support for new model types.
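\n\nAs a sketch of such a specialization (for a hypothetical `MY-MODEL`\nclass, and assuming that the monitor's measurer is called with the\n(`INSTANCES` `MODEL`) event arguments and that `MONITOR` takes\n`:MEASURER` and `:COUNTER` initargs):\n\n```commonlisp\n(defmethod make-cost-monitors* ((model my-model) operation-mode\n                                attributes)\n  (declare (ignore operation-mode))\n  (list (make-instance 'monitor\n                       ;; cost of the batch and the batch size\n                       :measurer (lambda (instances model)\n                                   (values (cost model)\n                                           (length instances)))\n                       :counter (make-instance 'basic-counter\n                                               :attributes attributes))))\n```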
\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.3 Gradient Descent\n\n###### \\[in package MGL-GD\\]\nGradient descent is a first-order optimization algorithm. Relying\ncompletely on first derivatives, it does not even evaluate the\nfunction to be minimized. Let's see how to minimize a numerical lisp\nfunction with respect to some of its parameters.\n\n```commonlisp\n(cl:defpackage :mgl-example-sgd\n  (:use #:common-lisp #:mgl))\n\n(in-package :mgl-example-sgd)\n\n;;; Create an object representing the sine function.\n(defparameter *diff-fn-1*\n  (make-instance 'mgl-diffun:diffun\n                 :fn #'sin\n                 ;; We are going to optimize its only parameter.\n                 :weight-indices '(0)))\n\n;;; Minimize SIN. Note that there is no dataset involved because all\n;;; parameters are being optimized.\n(minimize (make-instance 'sgd-optimizer :termination 1000)\n          *diff-fn-1*\n          :weights (make-mat 1))\n;;; => A MAT with a single value of about -pi\u002F2.\n\n;;; Create a differentiable function for f(x,y)=(x-y)^2. X is a\n;;; parameter whose values come from the DATASET argument passed to\n;;; MINIMIZE. Y is a parameter to be optimized (a 'weight').\n(defparameter *diff-fn-2*\n  (make-instance 'mgl-diffun:diffun\n                 :fn (lambda (x y)\n                       (expt (- x y) 2))\n                 :parameter-indices '(0)\n                 :weight-indices '(1)))\n\n;;; Find the Y that minimizes the distance from the instances\n;;; generated by the sampler.\n(minimize (make-instance 'sgd-optimizer :batch-size 10)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset (make-instance 'function-sampler\n                                  :generator (lambda ()\n                                               (list (+ 10\n                                                        (gaussian-random-1))))\n                                  :max-n-samples 1000))\n;;; => A MAT with a single value of about 10, the expected value of\n;;; the instances in the dataset.\n\n;;; The dataset can be a SEQUENCE in which case we'd better set\n;;; TERMINATION else optimization would never finish.\n(minimize (make-instance 'sgd-optimizer :termination 1000)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset '((0) (1) (2) (3) (4) (5)))\n;;; => A MAT with a single value of about 2.5.\n```\n\nWe are going to see a number of accessors for optimizer parameters.\nIn general, it's allowed to [`SETF`][a138] real slot accessors (as opposed to\nreaders and writers) at any time during optimization, and so is\ndefining a method on an optimizer subclass that computes the value\nin any way. For example, to decay the learning rate on a per\nmini-batch basis:\n\n```commonlisp\n(defmethod learning-rate ((optimizer my-sgd-optimizer))\n  (* (slot-value optimizer 'learning-rate)\n     (expt 0.998\n           (\u002F (n-instances optimizer) 60000))))\n```\n\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.3.1 Batch Based Optimizers\n\nFirst let's see everything common to all batch based optimizers,\nthen discuss [SGD Optimizer][25fd], [Adam Optimizer][bd13] and\n[Normalized Batch Optimizer][0c91].
All batch based optimizers\nare [`ITERATIVE-OPTIMIZER`][8da0]s, so see\n[Iterative Optimizer][779d] too.\n\n\u003Ca id=\"x-28MGL-GD-3ABATCH-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **BATCH-GD-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n    Another abstract base class for gradient based\n    optimizers that update all weights simultaneously after chewing\n    through [`BATCH-SIZE`][fa6d] inputs. See subclasses [`SGD-OPTIMIZER`][2a2f],\n    [`ADAM-OPTIMIZER`][e0e6] and [`NORMALIZED-BATCH-GD-OPTIMIZER`][f6ae].\n    \n    [`PER-WEIGHT-BATCH-GD-OPTIMIZER`][5a43] may be a better choice when some\n    weights can go unused for instance due to missing input values.\n\n\u003Ca id=\"x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **BATCH-SIZE** *GD-OPTIMIZER (:BATCH-SIZE = 1)*\n\n    After having gone through `BATCH-SIZE` number of\n    inputs, weights are updated. With `BATCH-SIZE` 1, one gets\n    Stochastic Gradient Descent. With `BATCH-SIZE` equal to the number\n    of instances in the dataset, one gets standard, 'batch' gradient\n    descent. With `BATCH-SIZE` between these two extremes, one gets the\n    most practical 'mini-batch' compromise.\n\n\u003Ca id=\"x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **LEARNING-RATE** *GD-OPTIMIZER (:LEARNING-RATE = 0.1)*\n\n    This is the step size along the gradient. Decrease\n    it if optimization diverges, increase it if it doesn't make\n    progress.\n\n\u003Ca id=\"x-28MGL-GD-3AMOMENTUM-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **MOMENTUM** *GD-OPTIMIZER (:MOMENTUM = 0)*\n\n    A value in the \\[0, 1) interval. `MOMENTUM` times the\n    previous weight change is added to the gradient. 0 means no\n    momentum.\n\n\u003Ca id=\"x-28MGL-GD-3AMOMENTUM-TYPE-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [reader] **MOMENTUM-TYPE** *GD-OPTIMIZER (:MOMENTUM-TYPE = :NORMAL)*\n\n    One of `:NORMAL`, `:NESTEROV` or `:NONE`. For pure\n    optimization Nesterov's momentum may be better, but it may also\n    increase the chances of overfitting. Using `:NONE` is equivalent to 0\n    momentum, but it also uses less memory. Note that with `:NONE`,\n    [`MOMENTUM`][af05] is ignored even if it is non-zero.\n\n\u003Ca id=\"x-28MGL-GD-3AWEIGHT-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **WEIGHT-DECAY** *GD-OPTIMIZER (:WEIGHT-DECAY = 0)*\n\n    An L2 penalty. It discourages large weights, much\n    like a zero mean gaussian prior. `WEIGHT-DECAY` \\* WEIGHT is added to\n    the gradient to penalize large weights. It's as if the function\n    whose minimum is sought had `WEIGHT-DECAY`\\*sum\\_i{0.5 \\* WEIGHT\\_i^2}\n    added to it.\n\n\u003Ca id=\"x-28MGL-GD-3AWEIGHT-PENALTY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **WEIGHT-PENALTY** *GD-OPTIMIZER (:WEIGHT-PENALTY = 0)*\n\n    An L1 penalty. It encourages sparsity.\n    `SIGN`(WEIGHT) \\* `WEIGHT-PENALTY` is added to the gradient pushing the\n    weight towards negative infinity. It's as if the function whose\n    minimum is sought had `WEIGHT-PENALTY`\\*sum\\_i{abs(WEIGHT\\_i)} added to\n    it.
Putting it on feature biases constitutes a sparsity constraint\n    on the features.\n\n\u003Ca id=\"x-28MGL-GD-3AUSE-SEGMENT-DERIVATIVES-P-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [reader] **USE-SEGMENT-DERIVATIVES-P** *GD-OPTIMIZER (:USE-SEGMENT-DERIVATIVES-P = NIL)*\n\n    Save memory if both the gradient source (the model\n    being optimized) and the optimizer support this feature. It works\n    like this: the accumulator into which the gradient source is asked\n    to place the derivatives of a segment will be [`SEGMENT-DERIVATIVES`][9a5b]\n    of the segment. This allows the optimizer not to allocate an\n    accumulator matrix into which the derivatives are summed.\n\n\u003Ca id=\"x-28MGL-GD-3AAFTER-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **AFTER-UPDATE-HOOK** *GD-OPTIMIZER (:AFTER-UPDATE-HOOK = NIL)*\n\n    A list of functions with no arguments called after\n    each weight update.\n\n\u003Ca id=\"x-28MGL-GD-3ABEFORE-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3ABATCH-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **BEFORE-UPDATE-HOOK** *[BATCH-GD-OPTIMIZER][d94e] (:BEFORE-UPDATE-HOOK = NIL)*\n\n    A list of functions of no parameters. Each\n    function is called just before a weight update takes place (after\n    accumulated gradients have been divided by the length of the batch).\n    Convenient to hang some additional gradient accumulating code\n    on.\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-SGD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### SGD Optimizer\n\n\u003Ca id=\"x-28MGL-GD-3ASGD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **SGD-OPTIMIZER** *[BATCH-GD-OPTIMIZER][d94e]*\n\n    With [`BATCH-SIZE`][fa6d] 1 this is Stochastic Gradient\n    Descent. With higher batch sizes, one gets mini-batch and Batch\n    Gradient Descent.\n    \n    Assuming that `ACCUMULATOR` has the sum of gradients for a mini-batch,\n    the weight update looks like this:\n    \n    $$\n    \\Delta_w^{t+1} = momentum * \\Delta_w^t\n      + \\frac{accumulator}{batchsize}\n      + l_2 w + l_1 sign(w)\n    $$\n    \n    $$\n    w^{t+1} = w^{t} - learningrate * \\Delta_w,\n    $$\n    \n    which is the same as the more traditional formulation:\n    \n    $$\n    \\Delta_w^{t+1} = momentum * \\Delta_w^{t}\n      + learningrate * \\left(\\frac{\\frac{df}{dw}}{batchsize}\n                           + l_2 w + l_1 sign(w)\\right)\n    $$\n    \n    $$\n    w^{t+1} = w^{t} - \\Delta_w,\n    $$\n    \n    but the former works better when batch size, momentum or learning\n    rate change during the course of optimization. The above is with\n    normal momentum; Nesterov's momentum (see [`MOMENTUM-TYPE`][5611]) is\n    also available.\n    \n    See [Batch Based Optimizers][2c39] for the description of the various\n    options common to all batch based optimizers.
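\n\nTo make the first formulation concrete, here is a scalar sketch in\nplain Lisp (`SGD-STEP` is not part of MGL) of a single update step\ngiven the accumulated gradient sum for a batch:\n\n```commonlisp\n(defun sgd-step (w delta-w accumulated batch-size\n                 &key (momentum 0) (learning-rate 0.1) (l2 0) (l1 0))\n  ;; New step as in the first formula above, then the weight update.\n  (let ((new-delta (+ (* momentum delta-w)\n                      (\u002F accumulated batch-size)\n                      (* l2 w)\n                      (* l1 (signum w)))))\n    (values (- w (* learning-rate new-delta)) new-delta)))\n```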
\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-ADAM-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Adam Optimizer\n\n\u003Ca id=\"x-28MGL-GD-3AADAM-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **ADAM-OPTIMIZER** *[BATCH-GD-OPTIMIZER][d94e]*\n\n    Adam is a first-order stochastic gradient descent\n    optimizer. It maintains an internal estimation for the mean and raw\n    variance of each derivative as exponential moving averages. The step\n    it takes is basically `M\u002F(sqrt(V)+E)` where `M` is the estimated\n    mean, `V` is the estimated variance, and `E` is a small adjustment\n    factor to prevent the gradient from blowing up. See version 5 of the\n    [paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6980) for more.\n    \n    Note that using momentum is not supported with Adam. In fact, an\n    error is signalled if [`MOMENTUM-TYPE`][5611] is not `:NONE`.\n    \n    See [Batch Based Optimizers][2c39] for the description of the various\n    options common to all batch based optimizers.\n\n\u003Ca id=\"x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **LEARNING-RATE** *[ADAM-OPTIMIZER][e0e6] (= 2.0e-4)*\n\n    Same thing as [`LEARNING-RATE`][09ed] but with the default suggested by the Adam paper.\n\n\u003Ca id=\"x-28MGL-GD-3AMEAN-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **MEAN-DECAY** *[ADAM-OPTIMIZER][e0e6] (:MEAN-DECAY = 0.9)*\n\n    A number between 0 and 1 that determines how fast\n    the estimated mean of derivatives is updated. 0 basically gives\n    you `RMSPROP` (if [`VARIANCE-DECAY`][0900] is not too large) or AdaGrad (if\n    `VARIANCE-DECAY` is close to 1 and the learning rate is annealed).\n    This is $\\beta_1$ in the paper.\n\n\u003Ca id=\"x-28MGL-GD-3AMEAN-DECAY-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **MEAN-DECAY-DECAY** *[ADAM-OPTIMIZER][e0e6] (:MEAN-DECAY-DECAY = (- 1 1.0d-7))*\n\n    A value that should be close to 1. [`MEAN-DECAY`][011d] is\n    multiplied by this value after each update. This is $\\lambda$ in\n    the paper.\n\n\u003Ca id=\"x-28MGL-GD-3AVARIANCE-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **VARIANCE-DECAY** *[ADAM-OPTIMIZER][e0e6] (:VARIANCE-DECAY = 0.999)*\n\n    A number between 0 and 1 that determines how fast\n    the estimated variance of derivatives is updated. This is\n    $\\beta_2$ in the paper.\n\n\u003Ca id=\"x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **VARIANCE-ADJUSTMENT** *[ADAM-OPTIMIZER][e0e6] (:VARIANCE-ADJUSTMENT = 1.0d-7)*\n\n    Within the bowels of Adam, the estimated mean is\n    divided by the square root of the estimated variance (per weight)\n    which can lead to numerical problems if the denominator is near\n    zero. To avoid this, `VARIANCE-ADJUSTMENT`, which should be a small\n    positive number, is added to the denominator. This is `epsilon` in\n    the paper.\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-NORMALIZED-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Normalized Batch Optimizer\n\n\u003Ca id=\"x-28MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **NORMALIZED-BATCH-GD-OPTIMIZER** *[BATCH-GD-OPTIMIZER][d94e]*\n\n    Like [`BATCH-GD-OPTIMIZER`][d94e] but keeps count of how many\n    times each weight was used in the batch and divides the accumulated\n    gradient by this count instead of dividing by `N-INSTANCES-IN-BATCH`.\n    This only makes a difference if there are missing values in the\n    learner that's being trained.
The main feature that distinguishes\n    this class from [`PER-WEIGHT-BATCH-GD-OPTIMIZER`][5a43] is that batches end at the\n    same time for all weights.\n\n\u003Ca id=\"x-28MGL-GD-3AN-WEIGHT-USES-IN-BATCH-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **N-WEIGHT-USES-IN-BATCH** *[NORMALIZED-BATCH-GD-OPTIMIZER][f6ae]*\n\n    Number of uses of the weight in its current batch.\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-SEGMENTED-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.3.2 Segmented GD Optimizer\n\n\u003Ca id=\"x-28MGL-GD-3ASEGMENTED-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **SEGMENTED-GD-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n    An optimizer that delegates training of segments to\n    other optimizers. Useful to delegate training of different segments\n    to different optimizers (capable of working with segmentables) or\n    simply to not train all segments.\n\n\u003Ca id=\"x-28MGL-GD-3ASEGMENTER-20-28MGL-PAX-3AREADER-20MGL-GD-3ASEGMENTED-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [reader] **SEGMENTER** *[SEGMENTED-GD-OPTIMIZER][3ce0] (:SEGMENTER)*\n\n    When this optimizer is initialized it loops over\n    the segments of the learner with [`MAP-SEGMENTS`][2312]. `SEGMENTER` is a\n    function that is called with each segment and returns an optimizer\n    or `NIL`. Several segments may be mapped to the same optimizer.\n    After the segment->optimizer mappings are collected, each\n    optimizer is initialized by INITIALIZE-OPTIMIZER with the list of\n    segments mapped to it.\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENTS-20-28MGL-PAX-3AREADER-20MGL-GD-3ASEGMENTED-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [reader] **SEGMENTS** *[SEGMENTED-GD-OPTIMIZER][3ce0]*\n\n[`SEGMENTED-GD-OPTIMIZER`][3ce0] inherits from [`ITERATIVE-OPTIMIZER`][8da0], so see\n[Iterative Optimizer][779d] too.\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-PER-WEIGHT-OPTIMIZATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.3.3 Per-weight Optimization\n\n\u003Ca id=\"x-28MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **PER-WEIGHT-BATCH-GD-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n    This is much like [Batch Based Optimizers][2c39] but it\n    is more clever about when to update weights. Basically every weight\n    has its own batch independent from the batches of others. This has\n    desirable properties. One can for example put two neural networks\n    together without adding any connections between them and the\n    learning will produce results equivalent to the separated case.\n    Also, adding inputs with only missing values does not change\n    anything.\n    \n    Due to its very non-batch nature, there is no CUDA implementation of\n    this optimizer.\n\n\u003Ca id=\"x-28MGL-GD-3AN-WEIGHT-USES-IN-BATCH-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **N-WEIGHT-USES-IN-BATCH** *[PER-WEIGHT-BATCH-GD-OPTIMIZER][5a43]*\n\n    Number of uses of the weight in its current batch.\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-UTILITIES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.3.4 Utilities\n\n\u003Ca id=\"x-28MGL-GD-3ACLIP-L2-NORM-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **CLIP-L2-NORM** *MATS L2-UPPER-BOUND &KEY CALLBACK*\n\n    Scale `MATS` so that their $L_2$ norm does not exceed `L2-UPPER-BOUND`.\n    \n    Compute the norm of `MATS` as if they were a single vector. If the\n    norm is greater than `L2-UPPER-BOUND`, then scale each matrix\n    destructively by `L2-UPPER-BOUND` divided by the norm and, if\n    `CALLBACK` is non-`NIL`, call it with the scaling factor.
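\n\nFor example, a sketch (in the spirit of the earlier examples, with\nunqualified MGL-MAT and MGL-GD symbols): two single-element matrices\nwith combined $L_2$ norm 5 (a 3-4-5 triangle) clipped to norm 1:\n\n```commonlisp\n(let ((mats (list (make-mat 1 :initial-element 3)\n                  (make-mat 1 :initial-element 4))))\n  (clip-l2-norm mats 1 :callback (lambda (scale)\n                                   (format t \"scaled by ~A~%\" scale)))\n  ;; MATS now hold 0.6 and 0.8, so their combined norm is 1.\n  mats)\n```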
\n\n\u003Ca id=\"x-28MGL-GD-3AARRANGE-FOR-CLIPPING-GRADIENTS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **ARRANGE-FOR-CLIPPING-GRADIENTS** *BATCH-GD-OPTIMIZER L2-UPPER-BOUND &KEY CALLBACK*\n\n    Make it so that the norm of the batch normalized gradients\n    accumulated by `BATCH-GD-OPTIMIZER` is clipped to `L2-UPPER-BOUND`\n    before every update. See [`CLIP-L2-NORM`][af6b].\n\n\u003Ca id=\"x-28MGL-CG-3A-40MGL-CG-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.4 Conjugate Gradient\n\n###### \\[in package MGL-CG\\]\nConjugate gradient is a first-order optimization algorithm. It's\nmore advanced than gradient descent as it does line searches, which\nunfortunately also makes it unsuitable for non-deterministic\nfunctions. Let's see how to minimize a numerical lisp function with\nrespect to some of its parameters.\n\n```commonlisp\n;;; Create an object representing the sine function.\n(defparameter *diff-fn-1*\n  (make-instance 'mgl-diffun:diffun\n                 :fn #'sin\n                 ;; We are going to optimize its only parameter.\n                 :weight-indices '(0)))\n\n;;; Minimize SIN. Note that there is no dataset involved because all\n;;; parameters are being optimized.\n(minimize (make-instance 'cg-optimizer\n                         :batch-size 1\n                         :termination 1)\n          *diff-fn-1*\n          :weights (make-mat 1))\n;;; => A MAT with a single value of about -pi\u002F2.\n\n;;; Create a differentiable function for f(x,y)=(x-y)^2. X is a\n;;; parameter whose values come from the DATASET argument passed to\n;;; MINIMIZE. Y is a parameter to be optimized (a 'weight').\n(defparameter *diff-fn-2*\n  (make-instance 'mgl-diffun:diffun\n                 :fn (lambda (x y)\n                       (expt (- x y) 2))\n                 :parameter-indices '(0)\n                 :weight-indices '(1)))\n\n;;; Find the Y that minimizes the distance from the instances\n;;; generated by the sampler.\n(minimize (make-instance 'cg-optimizer :batch-size 10)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset (make-instance 'function-sampler\n                                  :generator (lambda ()\n                                               (list (+ 10\n                                                        (gaussian-random-1))))\n                                  :max-n-samples 1000))\n;;; => A MAT with a single value of about 10, the expected value of\n;;; the instances in the dataset.\n\n;;; The dataset can be a SEQUENCE in which case we'd better set\n;;; TERMINATION else optimization would never finish.
Note how a\n;;; single epoch suffices.\n(minimize (make-instance 'cg-optimizer :termination 6)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset '((0) (1) (2) (3) (4) (5)))\n;;; => A MAT with a single value of about 2.5.\n```\n\n\n\u003Ca id=\"x-28MGL-CG-3ACG-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **CG** *FN W &KEY (MAX-N-LINE-SEARCHES \\*DEFAULT-MAX-N-LINE-SEARCHES\\*) (MAX-N-EVALUATIONS-PER-LINE-SEARCH \\*DEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH\\*) (MAX-N-EVALUATIONS \\*DEFAULT-MAX-N-EVALUATIONS\\*) (SIG \\*DEFAULT-SIG\\*) (RHO \\*DEFAULT-RHO\\*) (INT \\*DEFAULT-INT\\*) (EXT \\*DEFAULT-EXT\\*) (RATIO \\*DEFAULT-RATIO\\*) SPARE-VECTORS*\n\n    [`CG-OPTIMIZER`][ee97] passes each batch of data to this function with its\n    [`CG-ARGS`][9749] passed on.\n    \n    Minimize a differentiable multivariate function with conjugate\n    gradient. The Polak-Ribiere flavour of conjugate gradients is used\n    to compute search directions, and a line search using quadratic and\n    cubic polynomial approximations and the Wolfe-Powell stopping\n    criteria is used together with the slope ratio method for guessing\n    initial step sizes. Additionally a bunch of checks are made to make\n    sure that exploration is taking place and that extrapolation will\n    not be unboundedly large.\n    \n    `FN` is a function of two parameters: [`WEIGHTS`][ab3c] and `DERIVATIVES`. `WEIGHTS`\n    is a `MAT` of the same size as `W` that is where the search starts from.\n    `DERIVATIVES` is also a `MAT` of that size and it is where `FN` shall\n    place the partial derivatives. `FN` returns the value of the function\n    that is being minimized.\n    \n    `CG` performs a number of line searches and invokes `FN` at each step. A\n    line search invokes `FN` at most `MAX-N-EVALUATIONS-PER-LINE-SEARCH`\n    times and can either succeed in improving the minimum by a\n    sufficient margin or fail. Note that even a failed line search may\n    improve the minimum further and hence change the weights; it's just\n    that the improvement was deemed too small. `CG` stops when any of the\n    following happens:\n    \n    - two line searches fail in a row\n    \n    - `MAX-N-LINE-SEARCHES` is reached\n    \n    - `MAX-N-EVALUATIONS` is reached\n    \n    `CG` returns a `MAT` that contains the best weights, the minimum, the\n    number of line searches performed, the number of successful line\n    searches and the number of evaluations.\n    \n    When using `MAX-N-EVALUATIONS` remember that there is an extra\n    evaluation of `FN` before the first line search.\n    \n    `SPARE-VECTORS` is a list of preallocated `MAT`s of the same size as `W`.\n    Passing 6 of them covers the current need of the algorithm and it\n    will not cons up vectors of size `W` at all.\n    \n    `NOTE`: If the function terminates within a few iterations, it could\n    be an indication that the function values and derivatives are not\n    consistent (i.e., there may be a bug in the implementation of `FN`).\n    \n    `SIG` and `RHO` are the constants controlling the Wolfe-Powell\n    conditions. `SIG` is the maximum allowed absolute ratio between\n    previous and new slopes (derivatives in the search direction), thus\n    setting `SIG` to low (positive) values forces higher precision in the\n    line-searches. `RHO` is the minimum allowed fraction of the expected\n    improvement (computed from the slope at the initial point in the\n    line search). Constants must satisfy 0 \\\u003C `RHO` \\\u003C `SIG` \\\u003C 1.
Tuning of `SIG` (depending\n    on the nature of the function to be optimized) may speed up the\n    minimization; it is probably not worth playing much with `RHO`.\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-INT-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-INT\\*** *0.1*\n\n    Don't reevaluate within `INT` of the limit of the current bracket.\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-EXT-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-EXT\\*** *3*\n\n    Extrapolate maximum `EXT` times the current step-size.\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-SIG-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-SIG\\*** *0.1*\n\n    `SIG` and `RHO` are the constants controlling the Wolfe-Powell\n    conditions. `SIG` is the maximum allowed absolute ratio between\n    previous and new slopes (derivatives in the search direction), thus\n    setting `SIG` to low (positive) values forces higher precision in the\n    line-searches.\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-RHO-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-RHO\\*** *0.05*\n\n    `RHO` is the minimum allowed fraction of the expected improvement\n    (computed from the slope at the initial point in the line search).\n    Constants must satisfy 0 \\\u003C `RHO` \\\u003C `SIG` \\\u003C 1.\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-RATIO-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-RATIO\\*** *10*\n\n    Maximum allowed slope ratio.\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-MAX-N-LINE-SEARCHES-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-MAX-N-LINE-SEARCHES\\*** *NIL*\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH\\*** *20*\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-MAX-N-EVALUATIONS-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*DEFAULT-MAX-N-EVALUATIONS\\*** *NIL*\n\n\u003Ca id=\"x-28MGL-CG-3ACG-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **CG-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n    Updates all weights simultaneously after chewing\n    through [`BATCH-SIZE`][fa6d] inputs.\n\n\u003Ca id=\"x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **BATCH-SIZE** *[CG-OPTIMIZER][ee97] (:BATCH-SIZE)*\n\n    After having gone through `BATCH-SIZE` number of\n    instances, weights are updated. Normally, [`CG`][4ffb] operates on all\n    available data, but it may be useful to introduce some noise into\n    the optimization to reduce overfitting by using smaller batch\n    sizes. If `BATCH-SIZE` is not set, it is initialized to the size of\n    the dataset at the start of optimization.\n\n\u003Ca id=\"x-28MGL-CG-3ACG-ARGS-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **CG-ARGS** *[CG-OPTIMIZER][ee97] (:CG-ARGS = 'NIL)*\n\n\u003Ca id=\"x-28MGL-CG-3AON-CG-BATCH-DONE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **ON-CG-BATCH-DONE** *[CG-OPTIMIZER][ee97] (:ON-CG-BATCH-DONE = NIL)*\n\n    An event hook called when processing a conjugate\n    gradient batch is done.
The handlers on the hook are called with 8\n    arguments:\n    \n        (optimizer gradient-source instances\n         best-w best-f n-line-searches\n         n-succesful-line-searches n-evaluations)\n    \n    The last 5 of these are the return values of the [`CG`][4ffb] function.\n\n\u003Ca id=\"x-28MGL-CG-3ALOG-CG-BATCH-DONE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **LOG-CG-BATCH-DONE** *OPTIMIZER GRADIENT-SOURCE INSTANCES BEST-W BEST-F N-LINE-SEARCHES N-SUCCESFUL-LINE-SEARCHES N-EVALUATIONS*\n\n    This is a function that can be added to\n    [`ON-CG-BATCH-DONE`][d10a]. The default implementation simply logs the event\n    arguments.\n\n\u003Ca id=\"x-28MGL-CG-3ASEGMENT-FILTER-20-28MGL-PAX-3AREADER-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [reader] **SEGMENT-FILTER** *[CG-OPTIMIZER][ee97] (:SEGMENT-FILTER = (CONSTANTLY T))*\n\n    A predicate function on segments that filters out\n    uninteresting segments. Called from [`INITIALIZE-OPTIMIZER*`][7c2f].\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-EXTENSION-API-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.5 Extension API\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.5.1 Implementing Optimizers\n\nThe following generic functions must be specialized for new\noptimizer types.\n\n\u003Ca id=\"x-28MGL-OPT-3AMINIMIZE-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MINIMIZE\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET*\n\n    Called by [`MINIMIZE`][46a4] after [`INITIALIZE-OPTIMIZER*`][7c2f] and\n    [`INITIALIZE-GRADIENT-SOURCE*`][dd95], this generic function is the main\n    extension point for writing optimizers.\n\n\u003Ca id=\"x-28MGL-OPT-3AINITIALIZE-OPTIMIZER-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **INITIALIZE-OPTIMIZER\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET*\n\n    Called automatically before training starts, this\n    function sets up `OPTIMIZER` to be suitable for optimizing\n    `GRADIENT-SOURCE`. It typically creates appropriately sized\n    accumulators for the gradients.\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENTS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **SEGMENTS** *OPTIMIZER*\n\n    Several weight matrices known as *segments* can be\n    optimized by a single optimizer. This function returns them as a\n    list.\n\nThe rest are utilities useful for implementing\noptimizers.\n\n\u003Ca id=\"x-28MGL-OPT-3ATERMINATE-OPTIMIZATION-P-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **TERMINATE-OPTIMIZATION-P** *N-INSTANCES TERMINATION*\n\n    Utility function for subclasses of [`ITERATIVE-OPTIMIZER`][8da0]. It returns\n    whether optimization is to be terminated based on `N-INSTANCES` and\n    `TERMINATION` that are values of the respective accessors of\n    `ITERATIVE-OPTIMIZER`.\n\n\u003Ca id=\"x-28MGL-OPT-3ASET-N-INSTANCES-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **SET-N-INSTANCES** *OPTIMIZER GRADIENT-SOURCE N-INSTANCES*\n\n    Set [`N-INSTANCES`][4c73] of `OPTIMIZER` and\n    fire [`ON-N-INSTANCES-CHANGED`][4f0b]. [`ITERATIVE-OPTIMIZER`][8da0] subclasses must\n    call this to increment [`N-INSTANCES`][4c73].
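\n\nTo sketch how these pieces fit together, the skeleton of a\nhypothetical `MINIMIZE*` method could look like this (`MY-OPTIMIZER`,\n`SAMPLE-BATCH` and `UPDATE-WEIGHTS` are made-up names, and the actual\ngradient accumulation is elided):\n\n```commonlisp\n(defmethod minimize* ((optimizer my-optimizer) gradient-source\n                      weights dataset)\n  (loop until (terminate-optimization-p (n-instances optimizer)\n                                        (termination optimizer))\n        do (let ((batch (sample-batch dataset)))\n             ;; ... accumulate gradients for BATCH and update WEIGHTS ...\n             (update-weights optimizer gradient-source weights batch)\n             ;; Increment N-INSTANCES and fire ON-N-INSTANCES-CHANGED.\n             (set-n-instances optimizer gradient-source\n                              (+ (n-instances optimizer)\n                                 (length batch))))))\n```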
\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-SET-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **SEGMENT-SET**\n\n    This is a utility class for optimizers that have a\n    list of [`SEGMENTS`][f00d] (the weights being optimized) and need to copy\n    back and forth between those segments and a single `MAT` (the\n    accumulator).\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENTS-20-28MGL-PAX-3AREADER-20MGL-OPT-3ASEGMENT-SET-29-29\">\u003C\u002Fa>\n\n- [reader] **SEGMENTS** *[SEGMENT-SET][418a] (:SEGMENTS)*\n\n    A list of weight matrices.\n\n\u003Ca id=\"x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-OPT-3ASEGMENT-SET-29-29\">\u003C\u002Fa>\n\n- [reader] **SIZE** *[SEGMENT-SET][418a]*\n\n    The sum of the sizes of the weight matrices of\n    [`SEGMENTS`][f00d].\n\n\u003Ca id=\"x-28MGL-OPT-3ADO-SEGMENT-SET-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [macro] **DO-SEGMENT-SET** *(SEGMENT &OPTIONAL START) SEGMENT-SET &BODY BODY*\n\n    Iterate over [`SEGMENTS`][f00d] in `SEGMENT-SET`. If `START` is specified, then it\n    is bound to the start index of `SEGMENT` within `SEGMENT-SET`. The start\n    index is the sum of the sizes of previous segments.\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-SET-3C-MAT-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **SEGMENT-SET\\\u003C-MAT** *SEGMENT-SET MAT*\n\n    Copy the values of `MAT` to the weight matrices of `SEGMENT-SET` as if\n    they were concatenated into a single `MAT`.\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-SET--3EMAT-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **SEGMENT-SET->MAT** *SEGMENT-SET MAT*\n\n    Copy the values of `SEGMENT-SET` to `MAT` as if they were concatenated\n    into a single `MAT`.\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SOURCE-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.5.2 Implementing Gradient Sources\n\nWeights can be stored in a multitude of ways. Optimizers need to\nupdate weights, so it is assumed that weights are stored in any\nnumber of `MAT` objects called segments.\n\nThe generic functions in this section must all be specialized for\nnew gradient sources except where noted.\n\n\u003Ca id=\"x-28MGL-OPT-3AMAP-SEGMENTS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAP-SEGMENTS** *FN GRADIENT-SOURCE*\n\n    Apply `FN` to each segment of `GRADIENT-SOURCE`.\n\n\u003Ca id=\"x-28MGL-OPT-3AMAP-SEGMENT-RUNS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAP-SEGMENT-RUNS** *FN SEGMENT*\n\n    Call `FN` with start and end of intervals of\n    consecutive indices that are not missing in `SEGMENT`. Called by\n    optimizers that support partial updates. The default implementation\n    assumes that all weights are present. This only needs to be\n    specialized if one plans to use an optimizer that knows how to deal\n    with unused\u002Fmissing weights such as [`MGL-GD:NORMALIZED-BATCH-GD-OPTIMIZER`][f6ae]\n    and [`MGL-GD:PER-WEIGHT-BATCH-GD-OPTIMIZER`][5a43].\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-WEIGHTS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **SEGMENT-WEIGHTS** *SEGMENT*\n\n    Return the weight matrix of `SEGMENT`. A segment\n    doesn't need to be a `MAT` object itself. 
For example, it may be a\n    `MGL-BM:CHUNK` of a `MGL-BM:BM` or a [`MGL-BP:LUMP`][c1ac] of a\n    [`MGL-BP:BPN`][5187] whose [`NODES`][cc1c] slot holds the weights.\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-WEIGHTS-20-28METHOD-20-28MGL-MAT-3AMAT-29-29-29\">\u003C\u002Fa>\n\n- [method] **SEGMENT-WEIGHTS** *(MAT MAT)*\n\n    When the segment is really a `MAT`, then just return it.\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-DERIVATIVES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **SEGMENT-DERIVATIVES** *SEGMENT*\n\n    Return the derivatives matrix of `SEGMENT`. A segment\n    doesn't need to be a `MAT` object itself. For example, it may be a\n    `MGL-BM:CHUNK` of a `MGL-BM:BM` or a [`MGL-BP:LUMP`][c1ac] of a\n    [`MGL-BP:BPN`][5187] whose DERIVATIVES slot holds the gradient.\n\n\u003Ca id=\"x-28MGL-OPT-3ALIST-SEGMENTS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **LIST-SEGMENTS** *GRADIENT-SOURCE*\n\n    A utility function that returns the list of segments from\n    [`MAP-SEGMENTS`][2312] on `GRADIENT-SOURCE`.\n\n\u003Ca id=\"x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **INITIALIZE-GRADIENT-SOURCE\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET*\n\n    Called automatically before [`MINIMIZE*`][ae3d] is called,\n    this function may be specialized if `GRADIENT-SOURCE` needs some kind\n    of setup.\n\n\u003Ca id=\"x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20-28METHOD-20-28T-20T-20T-20T-29-29-29\">\u003C\u002Fa>\n\n- [method] **INITIALIZE-GRADIENT-SOURCE\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET*\n\n    The default method does nothing.\n\n\u003Ca id=\"x-28MGL-OPT-3AACCUMULATE-GRADIENTS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **ACCUMULATE-GRADIENTS\\*** *GRADIENT-SOURCE SINK BATCH MULTIPLIER VALUEP*\n\n    Add `MULTIPLIER` times the sum of first-order\n    gradients to accumulators of `SINK` (normally accessed with\n    [`DO-GRADIENT-SINK`][20ca]) and if `VALUEP`, return the sum of values of the\n    function being optimized for a `BATCH` of instances. `GRADIENT-SOURCE`\n    is the object representing the function being optimized, `SINK` is\n    the gradient sink.\n    \n    Note that the number of instances in `BATCH` may be larger than what\n    `GRADIENT-SOURCE` can process in one go (in the sense of say,\n    [`MAX-N-STRIPES`][16c4]), so [`DO-BATCHES-FOR-MODEL`][faaa] or something like (`GROUP`\n    `BATCH` `MAX-N-STRIPES`) can be handy.\n\n
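As a rough illustration of the protocol (everything below is a hypothetical sketch, not part of MGL: the class, the trivial objective, and the `MGL-MAT` BLAS helpers used are assumptions), a gradient source with a single `MAT` segment might look like this. It relies on `DO-GRADIENT-SINK` from the next section:\n\n```commonlisp\n;;; Sketch of a gradient source whose objective is 0.5 * |W|^2 per\n;;; instance, so the per-instance gradient is W itself. Since segments\n;;; are plain MATs, the default SEGMENT-WEIGHTS method applies.\n(defclass toy-source ()\n  ((weights :initarg :weights :reader weights)))\n\n(defmethod map-segments (fn (source toy-source))\n  (funcall fn (weights source)))\n\n(defmethod accumulate-gradients* ((source toy-source) sink\n                                  batch multiplier valuep)\n  (do-gradient-sink ((segment accumulator) sink)\n    ;; Add MULTIPLIER times the gradient, once per instance in BATCH.\n    (loop repeat (length batch)\n          do (mgl-mat:axpy! multiplier segment accumulator)))\n  (when valuep\n    (* 0.5 (length batch) (expt (mgl-mat:nrm2 (weights source)) 2))))\n```\n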
\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SINK-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.5.3 Implementing Gradient Sinks\n\nOptimizers call [`ACCUMULATE-GRADIENTS*`][4bf1] on gradient sources. One\nparameter of `ACCUMULATE-GRADIENTS*` is the `SINK`. A gradient sink\nknows what accumulator matrix (if any) belongs to a segment. Sinks\nare defined entirely by [`MAP-GRADIENT-SINK`][aabd].\n\n\u003Ca id=\"x-28MGL-OPT-3AMAP-GRADIENT-SINK-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAP-GRADIENT-SINK** *FN SINK*\n\n    Call `FN` of lambda list (`SEGMENT` `ACCUMULATOR`) on\n    each segment and their corresponding accumulator `MAT` in `SINK`.\n\n\u003Ca id=\"x-28MGL-OPT-3ADO-GRADIENT-SINK-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [macro] **DO-GRADIENT-SINK** *((SEGMENT ACCUMULATOR) SINK) &BODY BODY*\n\n    A convenience macro on top of [`MAP-GRADIENT-SINK`][aabd].\n\n\u003Ca id=\"x-28MGL-DIFFUN-3A-40MGL-DIFFUN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 10 Differentiable Functions\n\n###### \\[in package MGL-DIFFUN\\]\n\u003Ca id=\"x-28MGL-DIFFUN-3ADIFFUN-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **DIFFUN**\n\n    `DIFFUN` dresses a lisp function (in its [`FN`][f491] slot) as\n    a gradient source (see [Implementing Gradient Sources][c58b]), which\n    allows it to be used in [`MINIMIZE`][46a4]. See the examples in\n    [Gradient Descent][10e7] and [Conjugate Gradient][83e6].\n\n\u003Ca id=\"x-28MGL-COMMON-3AFN-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29\">\u003C\u002Fa>\n\n- [reader] **FN** *[DIFFUN][1a61] (:FN)*\n\n    A real-valued lisp function. It may have any\n    number of parameters.\n\n\u003Ca id=\"x-28MGL-DIFFUN-3APARAMETER-INDICES-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29\">\u003C\u002Fa>\n\n- [reader] **PARAMETER-INDICES** *[DIFFUN][1a61] (:PARAMETER-INDICES = NIL)*\n\n    The list of indices of parameters that we don't\n    optimize. Values for these will come from the DATASET argument of\n    [`MINIMIZE`][46a4].\n\n\u003Ca id=\"x-28MGL-DIFFUN-3AWEIGHT-INDICES-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29\">\u003C\u002Fa>\n\n- [reader] **WEIGHT-INDICES** *[DIFFUN][1a61] (:WEIGHT-INDICES = NIL)*\n\n    The list of indices of parameters to be optimized,\n    the values of which will come from the `WEIGHTS`\n    argument of [`MINIMIZE`][46a4].\n\n
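To tie this together, here is a small sketch (the function, the data, and the exact call pattern are illustrative assumptions based on the readers above): fit a single weight `W` to minimize `(W - X)^2` over a dataset of `X` values, which converges to their mean. Parameter 0 of the function is the weight; parameter 1 is filled in from the dataset:\n\n```commonlisp\n;;; A sketch: minimize the squared distance of W to the dataset values.\n;;; The minimum is at their mean, here 3.\n(let ((diffun (make-instance 'diffun\n                             :fn (lambda (w x) (expt (- w x) 2))\n                             :weight-indices '(0)\n                             :parameter-indices '(1)))\n      (weights (mgl-mat:make-mat 1 :ctype :double)))\n  (minimize (make-instance 'cg-optimizer :batch-size 5\n                           :termination 1000)\n            diffun\n            :weights weights\n            :dataset '((1) (2) (3) (4) (5)))\n  weights)\n```\n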
\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 11 Backpropagation Neural Networks\n\n###### \\[in package MGL-BP\\]\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-OVERVIEW-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.1 Backprop Overview\n\nBackpropagation Neural Networks are just functions with lots of\nparameters called *weights* and a layered structure when presented\nas a [computational\ngraph](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FAutomatic_differentiation). The\nnetwork is trained to [`MINIMIZE`][46a4] some kind of *loss function* whose\nvalue the network computes.\n\nIn this implementation, a [`BPN`][5187] is assembled from several\n[`LUMP`][c1ac]s (roughly corresponding to layers). Both feed-forward and\nrecurrent neural nets are supported ([`FNN`][9de4] and [`RNN`][b0f3], respectively).\n`BPN`s can contain not only `LUMP`s but other `BPN`s, too. As we\nwill see, networks are composite objects and the abstract base class for\ncomposite and simple parts is called [`CLUMP`][a4fe].\n\n\u003Ca id=\"x-28MGL-BP-3ACLUMP-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **CLUMP**\n\n    A `CLUMP` is a [`LUMP`][c1ac] or a [`BPN`][5187]. It represents\n    a differentiable function. Arguments of clumps are given during\n    instantiation. Some arguments are clumps themselves, so they get\n    permanently wired together like this:\n    \n    ```commonlisp\n    (->v*m (->input :size 10 :name 'input)\n           (->weight :dimensions '(10 20) :name 'weight)\n           :name 'activation)\n    ```\n    \n    The above creates three clumps: the vector-matrix multiplication\n    clump called `ACTIVATION`, which has references to its operands:\n    `INPUT` and `WEIGHT`. Note that the example just defines a function; no\n    actual computation has taken place yet.\n    \n    This wiring of `CLUMP`s is how one builds feed-forward nets ([`FNN`][9de4]) or\n    recurrent neural networks ([`RNN`][b0f3]) that are `CLUMP`s themselves, so one\n    can build nets in a hierarchical style if desired. Non-composite\n    `CLUMP`s are called `LUMP` (note the loss of `C` that stands for\n    composite). The various `LUMP` subtypes correspond to different layer\n    types ([`->SIGMOID`][83f9], [`->DROPOUT`][441b], [`->RELU`][9d3a], [`->TANH`][5309], etc).\n\nAt this point, you may want to jump ahead to get a feel for how\nthings work by reading the [`FNN` Tutorial][6b38].\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-EXTENSION-API-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.2 Clump API\n\nThese are mostly for extension purposes. About the only thing\nneeded from here for normal operation is [`NODES`][cc1c] when clamping inputs\nor extracting predictions.\n\n\u003Ca id=\"x-28MGL-BP-3ASTRIPEDP-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **STRIPEDP** *CLUMP*\n\n    For efficiency, forward and backprop phases do\n    their stuff in batch mode: passing a number of instances through the\n    network in batches. Thus clumps must be able to store values of and\n    gradients for each of these instances. However, some clumps produce\n    the same result for each instance in a batch. These clumps are the\n    weights, the parameters of the network. `STRIPEDP` returns true iff\n    `CLUMP` does not represent weights (i.e. it's not a [`->WEIGHT`][b76f]).\n    \n    For striped clumps, their [`NODES`][cc1c] and [`DERIVATIVES`][a81b] are `MAT` objects with\n    a leading dimension (number of rows in the 2d case) equal to the\n    number of instances in the batch. Non-striped clumps have no\n    restriction on their shape apart from what their usage dictates.\n\n\u003Ca id=\"x-28MGL-COMMON-3ANODES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **NODES** *OBJECT*\n\n    Returns a `MGL-MAT:MAT` object representing the state\n    or result of `OBJECT`. The first dimension of the returned matrix is\n    equal to the number of stripes.\n\n[`CLUMP`][a4fe]s' [`NODES`][cc1c] holds the result computed by the most recent\n[`FORWARD`][c1ae]. For [`->INPUT`][f54e] lumps, this is where input values shall be\nplaced (see [`SET-INPUT`][0c9e]). Currently, the matrix is always two\ndimensional but this restriction may go away in the future.\n\n\u003Ca id=\"x-28MGL-BP-3ADERIVATIVES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **DERIVATIVES** *CLUMP*\n\n    Return the `MAT` object representing the partial\n    derivatives of the function `CLUMP` computes. 
The returned partial\n    derivatives were accumulated by previous [`BACKWARD`][5bd4] calls.\n    \n    This matrix is shaped like the matrix returned by [`NODES`][cc1c].\n\n\u003Ca id=\"x-28MGL-BP-3AFORWARD-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **FORWARD** *CLUMP*\n\n    Compute the values of the function represented by\n    `CLUMP` for all stripes and place the results into [`NODES`][cc1c] of `CLUMP`.\n\n\u003Ca id=\"x-28MGL-BP-3ABACKWARD-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **BACKWARD** *CLUMP*\n\n    Compute the partial derivatives of the function\n    represented by `CLUMP` and add them to [`DERIVATIVES`][a81b] of the\n    corresponding argument clumps. The `DERIVATIVES` of `CLUMP` contains the\n    sum of partial derivatives of all clumps by the corresponding\n    output. This function is intended to be called after a [`FORWARD`][c1ae] pass.\n    \n    Take the [`->SIGMOID`][83f9] clump for example when the network is being\n    applied to a batch of two instances `x1` and `x2`. `x1` and `x2` are\n    set in the [`->INPUT`][f54e] lump X. The sigmoid computes `1\u002F(1+exp(-x))`\n    where `X` is its only argument clump.\n    \n        f(x) = 1\u002F(1+exp(-x))\n    \n    When `BACKWARD` is called on the sigmoid lump, its `DERIVATIVES` is a\n    2x1 `MAT` object that contains the partial derivatives of the loss\n    function:\n    \n        dL(x1)\u002Fdf\n        dL(x2)\u002Fdf\n    \n    Now the `BACKWARD` method of the sigmoid needs to add `dL(x1)\u002Fdx1` and\n    `dL(x2)\u002Fdx2` to `DERIVATIVES` of `X`. Now, `dL(x1)\u002Fdx1 = dL(x1)\u002Fdf *\n    df(x1)\u002Fdx1` and the first term is what we have in `DERIVATIVES` of the\n    sigmoid so it only needs to calculate the second term.\n\nIn addition to the above, clumps also have to support [`SIZE`][019f],\n[`N-STRIPES`][8dd7], [`MAX-N-STRIPES`][16c4] (and the [`SETF`][a138] methods of the latter two)\nwhich can be accomplished just by inheriting from [`BPN`][5187], [`FNN`][9de4], [`RNN`][b0f3], or\na [`LUMP`][c1ac].\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BPN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.3 `BPN`s\n\n\u003Ca id=\"x-28MGL-BP-3ABPN-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **BPN** *[CLUMP][a4fe]*\n\n    Abstract base class for [`FNN`][9de4] and [`RNN`][b0f3].\n\n\u003Ca id=\"x-28MGL-CORE-3AN-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29\">\u003C\u002Fa>\n\n- [reader] **N-STRIPES** *[BPN][5187] (:N-STRIPES = 1)*\n\n    The current number of instances the network has.\n    This is automatically set to the number of instances passed to\n    [`SET-INPUT`][0c9e], so it rarely has to be manipulated directly although it\n    can be set. When set `N-STRIPES` of all [`CLUMPS`][f7c1] get set to the same\n    value.\n\n\u003Ca id=\"x-28MGL-CORE-3AMAX-N-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29\">\u003C\u002Fa>\n\n- [reader] **MAX-N-STRIPES** *[BPN][5187] (:MAX-N-STRIPES = NIL)*\n\n    The maximum number of instances the network can\n    operate on in parallel. 
Within [`BUILD-FNN`][606c] or [`BUILD-RNN`][764b], it defaults\n    to `MAX-N-STRIPES` of that parent network, else it defaults to 1.\n    When set `MAX-N-STRIPES` of all [`CLUMPS`][f7c1] get set to the same value.\n\n\u003Ca id=\"x-28MGL-BP-3ACLUMPS-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29\">\u003C\u002Fa>\n\n- [reader] **CLUMPS** *[BPN][5187] (:CLUMPS = (MAKE-ARRAY 0 :ELEMENT-TYPE 'CLUMP :ADJUSTABLE T :FILL-POINTER T))*\n\n    A topologically sorted adjustable array with a fill\n    pointer that holds the clumps that make up the network. Clumps are\n    added to it by [`ADD-CLUMP`][82d8] or, more often, automatically when within\n    a [`BUILD-FNN`][606c] or [`BUILD-RNN`][764b]. Rarely needed, [`FIND-CLUMP`][175f] takes care of\n    most uses.\n\n\u003Ca id=\"x-28MGL-BP-3AFIND-CLUMP-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **FIND-CLUMP** *NAME BPN &KEY (ERRORP T)*\n\n    Find the clump with `NAME` among [`CLUMPS`][f7c1] of `BPN`. As always, names are\n    compared with [`EQUAL`][3fb5]. If not found, then return `NIL` or signal an\n    error depending on `ERRORP`.\n\n\u003Ca id=\"x-28MGL-BP-3AADD-CLUMP-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **ADD-CLUMP** *CLUMP BPN*\n\n    Add `CLUMP` to `BPN`. [`MAX-N-STRIPES`][16c4] of `CLUMP` gets set to that of `BPN`.\n    It is an error to add a clump with a name already used by one of the\n    [`CLUMPS`][f7c1] of `BPN`.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-TRAINING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.1 Training\n\n[`BPN`][5187]s are trained to minimize the loss function they compute.\nBefore a `BPN` is passed to [`MINIMIZE`][46a4] (as its `GRADIENT-SOURCE`\nargument), it must be wrapped in a [`BP-LEARNER`][00a0] object. `BP-LEARNER` has a\n[`MONITORS`][6202] slot which is used for example by\n[`RESET-OPTIMIZATION-MONITORS`][d479].\n\nWithout the bells and whistles, the basic shape of training is this:\n\n```commonlisp\n(minimize optimizer (make-instance 'bp-learner :bpn bpn)\n          :dataset dataset)\n```\n\n\n\u003Ca id=\"x-28MGL-BP-3ABP-LEARNER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **BP-LEARNER**\n\n\u003Ca id=\"x-28MGL-BP-3ABPN-20-28MGL-PAX-3AREADER-20MGL-BP-3ABP-LEARNER-29-29\">\u003C\u002Fa>\n\n- [reader] **BPN** *[BP-LEARNER][00a0] (:BPN)*\n\n    The `BPN` for which this [`BP-LEARNER`][00a0] provides the\n    gradients.\n\n\u003Ca id=\"x-28MGL-CORE-3AMONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ABP-LEARNER-29-29\">\u003C\u002Fa>\n\n- [accessor] **MONITORS** *[BP-LEARNER][00a0] (:MONITORS = NIL)*\n\n    A list of [`MONITOR`][7068]s.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MONITORING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.2 Monitoring\n\n\u003Ca id=\"x-28MGL-BP-3AMONITOR-BPN-RESULTS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MONITOR-BPN-RESULTS** *DATASET BPN MONITORS*\n\n    For every batch (of size [`MAX-N-STRIPES`][16c4] of `BPN`) of instances in\n    `DATASET`, set the batch as the next input with [`SET-INPUT`][0c9e], perform a\n    [`FORWARD`][c1ae] pass and apply `MONITORS` to the `BPN` (with [`APPLY-MONITORS`][989c]).\n    Finally, return the counters of `MONITORS`. This is built on top of\n    [`MONITOR-MODEL-RESULTS`][e50c].\n\n\u003Ca id=\"x-28MGL-BP-3AMAKE-STEP-MONITOR-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-STEP-MONITOR-MONITORS** *RNN &KEY (COUNTER-VALUES-FN \\#'COUNTER-RAW-VALUES) (MAKE-COUNTER \\#'MAKE-STEP-MONITOR-MONITOR-COUNTER)*\n\n    Return a list of monitors, one for every monitor in [`STEP-MONITORS`][71f9]\n    of `RNN`. 
These monitors extract the results from their warp\n    counterparts with `COUNTER-VALUES-FN` and add them to their own\n    counter that's created by `MAKE-COUNTER`. Wow. Ew. The idea is that\n    one does something like this to monitor warped prediction:\n    \n    ```commonlisp\n    (let ((*warp-time* t))\n      (setf (step-monitors rnn)\n            (make-cost-monitors rnn :attributes '(:event \"warped pred.\")))\n      (monitor-bpn-results dataset rnn\n                           ;; Just collect and reset the warp\n                           ;; monitors after each batch of\n                           ;; instances.\n                           (make-step-monitor-monitors rnn)))\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3AMAKE-STEP-MONITOR-MONITOR-COUNTER-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **MAKE-STEP-MONITOR-MONITOR-COUNTER** *STEP-COUNTER*\n\n    In an [`RNN`][b0f3], `STEP-COUNTER` aggregates results of all\n    the time steps during the processing of instances in the current\n    batch. Return a new counter into which results from `STEP-COUNTER` can\n    be accumulated when the processing of the batch is finished. The\n    default implementation creates a copy of `STEP-COUNTER`.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-FNN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.3 Feed-Forward Nets\n\n[`FNN`][9de4] and [`RNN`][b0f3] have a lot in common (see their common superclass, [`BPN`][5187]).\nThere is very limited functionality that's specific to `FNN`s, so let's\nget it out of the way before we study a full example.\n\n\u003Ca id=\"x-28MGL-BP-3AFNN-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **FNN** *[BPN][5187]*\n\n    A feed-forward neural net (as opposed to a\n    recurrent one, see [`RNN`][b0f3]).\n\n\u003Ca id=\"x-28MGL-BP-3ABUILD-FNN-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [macro] **BUILD-FNN** *(&KEY FNN (CLASS ''FNN) INITARGS MAX-N-STRIPES NAME) &BODY CLUMPS*\n\n    Syntactic sugar to assemble `FNN`s from [`CLUMP`][a4fe]s. Like [`LET*`][49f5], it is a\n    sequence of bindings (of symbols to `CLUMP`s). The names of the clumps\n    created default to the symbol of the binding. In case a clump is not\n    bound to a symbol (because it was created in a nested expression),\n    the local function `CLUMP` can be used to find the clump with the\n    given name in the fnn being built. Example:\n    \n        (build-fnn ()\n          (features (->input :size n-features))\n          (biases (->weight :size n-features))\n          (weights (->weight :size (* n-hiddens n-features)))\n          (activations0 (->v*m :weights weights :x (clump 'features)))\n          (activations (->+ :args (list biases activations0)))\n          (output (->sigmoid :x activations)))\n\n
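A built network can also be run by hand, outside of [`MINIMIZE`][46a4]. The following is a rough sketch of a manual forward pass (it assumes a `SET-INPUT` method for the instances being clamped and uses `MAKE-DIGIT-FNN` from the tutorial below):\n\n```commonlisp\n;;; A sketch: clamp a batch of instances, run the forward pass, then\n;;; read the predictions out of the NODES of the output clump.\n(let ((fnn (make-digit-fnn)))\n  (setf (max-n-stripes fnn) 3)\n  (set-input '(0 1 2) fnn)   ; also sets N-STRIPES to 3\n  (forward fnn)\n  (nodes (find-clump 'output fnn)))\n```\n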
\u003Ca id=\"x-28MGL-BP-3A-40MGL-FNN-TUTORIAL-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### `FNN` Tutorial\n\nHopefully this example from `example\u002Fdigit-fnn.lisp` illustrates\nthe concepts involved. If it's too dense despite the comments, then\nread up on [Datasets][109e], [Gradient Based Optimization][c74a] and come back.\n\n```commonlisp\n(cl:defpackage :mgl-example-digit-fnn\n  (:use #:common-lisp #:mgl))\n\n(in-package :mgl-example-digit-fnn)\n\n;;; There are 10 possible digits used as inputs ...\n(defparameter *n-inputs* 10)\n;;; and we want to learn the rule that maps the input digit D to (MOD\n;;; (1+ D) 3).\n(defparameter *n-outputs* 3)\n\n;;; We define a feed-forward net to be able to specialize how inputs\n;;; are translated by adding a SET-INPUT method later.\n(defclass digit-fnn (fnn)\n  ())\n\n;;; Build a DIGIT-FNN with a single hidden layer of rectified linear\n;;; units and a softmax output.\n(defun make-digit-fnn (&key (n-hiddens 5))\n  (build-fnn (:class 'digit-fnn)\n    (input (->input :size *n-inputs*))\n    (hidden-activation (->activation input :size n-hiddens))\n    (hidden (->relu hidden-activation))\n    (output-activation (->activation hidden :size *n-outputs*))\n    (output (->softmax-xe-loss output-activation))))\n\n;;; This method is called with batches of 'instances' (input digits in\n;;; this case) by MINIMIZE and also by MONITOR-BPN-RESULTS before\n;;; performing a forward pass (i.e. computing the value of the\n;;; function represented by the network). Its job is to encode the\n;;; inputs by populating rows of the NODES matrix of the INPUT clump.\n;;;\n;;; Each input is encoded as a row of zeros with a single 1 at index\n;;; determined by the input digit. This is called one-hot encoding.\n;;; The TARGET could be encoded the same way, but instead we use the\n;;; sparse option supported by TARGET of ->SOFTMAX-XE-LOSS.\n(defmethod set-input (digits (fnn digit-fnn))\n  (let* ((input (nodes (find-clump 'input fnn)))\n         (output-lump (find-clump 'output fnn)))\n    (fill! 0 input)\n    (loop for i upfrom 0\n          for digit in digits\n          do (setf (mref input i digit) 1))\n    (setf (target output-lump)\n          (mapcar (lambda (digit)\n                    (mod (1+ digit) *n-outputs*))\n                  digits))))\n\n;;; Train the network by minimizing the loss (cross-entropy here) with\n;;; stochastic gradient descent.\n(defun train-digit-fnn ()\n  (let ((optimizer\n          ;; First create the optimizer for MINIMIZE.\n          (make-instance 'segmented-gd-optimizer\n                         :segmenter\n                         ;; We train each weight lump with the same\n                         ;; parameters and, in fact, the same\n                         ;; optimizer. But it need not be so, in\n                         ;; general.\n                         (constantly\n                          (make-instance 'sgd-optimizer\n                                         :learning-rate 1\n                                         :momentum 0.9\n                                         :batch-size 100))))\n        (fnn (make-digit-fnn)))\n    ;; The number of instances the FNN can work with in parallel. It's\n    ;; usually equal to the batch size or a divisor of it.\n    (setf (max-n-stripes fnn) 50)\n    ;; Initialize all weights randomly.\n    (map-segments (lambda (weights)\n                    (gaussian-random! 
(nodes weights) :stddev 0.01))\n                  fnn)\n    ;; Arrange for training and test error to be logged.\n    (monitor-optimization-periodically\n     optimizer '((:fn log-test-error :period 10000)\n                 (:fn reset-optimization-monitors :period 1000)))\n    ;; Finally, start the optimization.\n    (minimize optimizer\n              ;; Dress FNN in a BP-LEARNER and attach monitors for the\n              ;; cost to it. These monitors are going to be logged and\n              ;; reset after every 1000 training instances by\n              ;; RESET-OPTIMIZATION-MONITORS above.\n              (make-instance 'bp-learner\n                             :bpn fnn\n                             :monitors (make-cost-monitors\n                                        fnn :attributes `(:event \"train\")))\n              ;; Training stops when the sampler runs out (after 10000\n              ;; instances).\n              :dataset (make-sampler 10000))))\n\n;;; Return a sampler object that produces MAX-N-SAMPLES number of\n;;; random inputs (numbers between 0 and 9).\n(defun make-sampler (max-n-samples)\n  (make-instance 'function-sampler :max-n-samples max-n-samples\n                 :generator (lambda () (random *n-inputs*))))\n\n;;; Log the test error. Also, describe the optimizer and the bpn at\n;;; the beginning of training. Called periodically during training\n;;; (see above).\n(defun log-test-error (optimizer learner)\n  (when (zerop (n-instances optimizer))\n    (describe optimizer)\n    (describe (bpn learner)))\n  (log-padded\n   (monitor-bpn-results (make-sampler 1000) (bpn learner)\n                        (make-cost-monitors\n                         (bpn learner) :attributes `(:event \"pred.\")))))\n\n#|\n\n;;; Transcript follows:\n(repeatably ()\n  (let ((*log-time* nil))\n    (train-digit-fnn)))\n.. training at n-instances: 0\n.. train cost: 0.000e+0 (0)\n.. #\u003CSEGMENTED-GD-OPTIMIZER {100E112E93}>\n..  SEGMENTED-GD-OPTIMIZER description:\n..    N-INSTANCES = 0\n..    OPTIMIZERS = (#\u003CSGD-OPTIMIZER\n..                    #\u003CSEGMENT-SET\n..                      (#\u003C->WEIGHT # :SIZE 15 1\u002F1 :NORM 0.04473>\n..                       #\u003C->WEIGHT # :SIZE 3 1\u002F1 :NORM 0.01850>\n..                       #\u003C->WEIGHT # :SIZE 50 1\u002F1 :NORM 0.07159>\n..                       #\u003C->WEIGHT # :SIZE 5 1\u002F1 :NORM 0.03056>)\n..                      {100E335B73}>\n..                    {100E06DF83}>)\n..    SEGMENTS = (#\u003C->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE\n..                  15 1\u002F1 :NORM 0.04473>\n..                #\u003C->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE\n..                  3 1\u002F1 :NORM 0.01850>\n..                #\u003C->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE\n..                  50 1\u002F1 :NORM 0.07159>\n..                #\u003C->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE\n..                  5 1\u002F1 :NORM 0.03056>)\n..  \n.. #\u003CSGD-OPTIMIZER {100E06DF83}>\n..  GD-OPTIMIZER description:\n..    N-INSTANCES = 0\n..    SEGMENT-SET = #\u003CSEGMENT-SET\n..                    (#\u003C->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE\n..                       15 1\u002F1 :NORM 0.04473>\n..                     #\u003C->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE\n..                       3 1\u002F1 :NORM 0.01850>\n..                     #\u003C->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE\n..                       50 1\u002F1 :NORM 0.07159>\n..                     #\u003C->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE\n..                       
5 1\u002F1 :NORM 0.03056>)\n..                    {100E335B73}>\n..    LEARNING-RATE = 1.00000e+0\n..    MOMENTUM = 9.00000e-1\n..    MOMENTUM-TYPE = :NORMAL\n..    WEIGHT-DECAY = 0.00000e+0\n..    WEIGHT-PENALTY = 0.00000e+0\n..    N-AFTER-UPATE-HOOK = 0\n..    BATCH-SIZE = 100\n..  \n..  BATCH-GD-OPTIMIZER description:\n..    N-BEFORE-UPATE-HOOK = 0\n..  #\u003CDIGIT-FNN {100E11A423}>\n..   BPN description:\n..     CLUMPS = #(#\u003C->INPUT INPUT :SIZE 10 1\u002F50 :NORM 0.00000>\n..                #\u003C->ACTIVATION\n..                  (HIDDEN-ACTIVATION :ACTIVATION) :STRIPES 1\u002F50\n..                  :CLUMPS 4>\n..                #\u003C->RELU HIDDEN :SIZE 5 1\u002F50 :NORM 0.00000>\n..                #\u003C->ACTIVATION\n..                  (OUTPUT-ACTIVATION :ACTIVATION) :STRIPES 1\u002F50\n..                  :CLUMPS 4>\n..                #\u003C->SOFTMAX-XE-LOSS OUTPUT :SIZE 3 1\u002F50 :NORM 0.00000>)\n..     N-STRIPES = 1\n..     MAX-N-STRIPES = 50\n..   pred. cost: 1.100d+0 (1000.00)\n.. training at n-instances: 1000\n.. train cost: 1.093d+0 (1000.00)\n.. training at n-instances: 2000\n.. train cost: 5.886d-1 (1000.00)\n.. training at n-instances: 3000\n.. train cost: 3.574d-3 (1000.00)\n.. training at n-instances: 4000\n.. train cost: 1.601d-7 (1000.00)\n.. training at n-instances: 5000\n.. train cost: 1.973d-9 (1000.00)\n.. training at n-instances: 6000\n.. train cost: 4.882d-10 (1000.00)\n.. training at n-instances: 7000\n.. train cost: 2.771d-10 (1000.00)\n.. training at n-instances: 8000\n.. train cost: 2.283d-10 (1000.00)\n.. training at n-instances: 9000\n.. train cost: 2.123d-10 (1000.00)\n.. training at n-instances: 10000\n.. train cost: 2.263d-10 (1000.00)\n.. pred. cost: 2.210d-10 (1000.00)\n..\n==> (#\u003C->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE 5 1\u002F1 :NORM 2.94294>\n-->  #\u003C->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE 50 1\u002F1 :NORM 11.48995>\n-->  #\u003C->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE 3 1\u002F1 :NORM 3.39103>\n-->  #\u003C->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE 15 1\u002F1 :NORM 11.39339>)\n\n|#\n```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-RNN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.4 Recurrent Neural Nets\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-RNN-TUTORIAL-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### `RNN` Tutorial\n\nHopefully this example from `example\u002Fsum-sign-rnn.lisp` illustrates\nthe concepts involved. Make sure you are comfortable with the\n[`FNN` Tutorial][6b38] before reading this.\n\n```commonlisp\n(cl:defpackage :mgl-example-sum-sign-rnn\n  (:use #:common-lisp #:mgl))\n\n(in-package :mgl-example-sum-sign-rnn)\n\n;;; There is a single input at each time step...\n(defparameter *n-inputs* 1)\n;;; and we want to learn the rule that outputs the sign of the sum of\n;;; inputs so far in the sequence.\n(defparameter *n-outputs* 3)\n\n;;; Generate a training example that's a sequence of random length\n;;; between 1 and LENGTH. Elements of the sequence are lists of two\n;;; elements:\n;;;\n;;; 1. The input for the network (a single random number).\n;;;\n;;; 2. The sign of the sum of inputs so far encoded as 0, 1, 2 (for\n;;;    negative, zero and positive values). 
To add a twist, the sum is\n;;;    reset whenever a negative input is seen.\n(defun make-sum-sign-instance (&key (length 10))\n  (let ((length (max 1 (random length)))\n        (sum 0))\n    (loop for i below length\n          collect (let ((x (1- (* 2 (random 2)))))\n                    (incf sum x)\n                    (when (\u003C x 0)\n                      (setq sum x))\n                    (list x (cond ((minusp sum) 0)\n                                  ((zerop sum) 1)\n                                  (t 2)))))))\n\n;;; Build an RNN with a single LSTM hidden layer and softmax output.\n;;; For each time step, a SUM-SIGN-FNN will be instantiated.\n(defun make-sum-sign-rnn (&key (n-hiddens 1))\n  (build-rnn ()\n    (build-fnn (:class 'sum-sign-fnn)\n      (input (->input :size 1))\n      (h (->lstm input :name 'h :size n-hiddens))\n      (prediction (->softmax-xe-loss (->activation h :name 'prediction\n                                                   :size *n-outputs*))))))\n\n;;; We define this class to be able to specialize how inputs are\n;;; translated by adding a SET-INPUT method later.\n(defclass sum-sign-fnn (fnn)\n  ())\n\n;;; We have a batch of instances from MAKE-SUM-SIGN-INSTANCE for the\n;;; RNN. This function is invoked with elements of these instances\n;;; belonging to the same time step (i.e. at the same index) and sets\n;;; the input and target up.\n(defmethod set-input (instances (fnn sum-sign-fnn))\n  (let ((input-nodes (nodes (find-clump 'input fnn))))\n    (setf (target (find-clump 'prediction fnn))\n          (loop for stripe upfrom 0\n                for instance in instances\n                collect\n                ;; Sequences in the batch are not of equal length. The\n                ;; RNN sends a NIL our way if a sequence has run out.\n                (when instance\n                  (destructuring-bind (input target) instance\n                    (setf (mref input-nodes stripe 0) input)\n                    target))))))\n\n;;; Train the network by minimizing the loss (cross-entropy here) with\n;;; the Adam optimizer.\n(defun train-sum-sign-rnn ()\n  (let ((rnn (make-sum-sign-rnn)))\n    (setf (max-n-stripes rnn) 50)\n    ;; Initialize the weights uniformly in [-limit, limit], where\n    ;; limit is sqrt(6 \u002F fan-in).\n    (map-segments (lambda (weights)\n                    (let* ((fan-in (mat-dimension (nodes weights) 0))\n                           (limit (sqrt (\u002F 6 fan-in))))\n                      (uniform-random! (nodes weights)\n                                       :limit (* 2 limit))\n                      (.+! 
(- limit) (nodes weights))))\n                  rnn)\n    (minimize (monitor-optimization-periodically\n               (make-instance 'adam-optimizer\n                              :learning-rate 0.2\n                              :mean-decay 0.9\n                              :mean-decay-decay 0.9\n                              :variance-decay 0.9\n                              :batch-size 100)\n               '((:fn log-test-error :period 30000)\n                 (:fn reset-optimization-monitors :period 3000)))\n              (make-instance 'bp-learner\n                             :bpn rnn\n                             :monitors (make-cost-monitors rnn))\n              :dataset (make-sampler 30000))))\n\n;;; Return a sampler object that produces MAX-N-SAMPLES number of\n;;; random inputs.\n(defun make-sampler (max-n-samples &key (length 10))\n  (make-instance 'function-sampler :max-n-samples max-n-samples\n                 :generator (lambda ()\n                              (make-sum-sign-instance :length length))))\n\n;;; Log the test error. Also, describe the optimizer and the bpn at\n;;; the beginning of training. Called periodically during training\n;;; (see above).\n(defun log-test-error (optimizer learner)\n  (when (zerop (n-instances optimizer))\n    (describe optimizer)\n    (describe (bpn learner)))\n  (let ((rnn (bpn learner)))\n    (log-padded\n     (append\n      (monitor-bpn-results (make-sampler 1000) rnn\n                           (make-cost-monitors\n                            rnn :attributes '(:event \"pred.\")))\n      ;; Same result in a different way: monitor predictions for\n      ;; sequences up to length 20, but don't unfold the RNN\n      ;; unnecessarily to save memory.\n      (let ((*warp-time* t))\n        (monitor-bpn-results (make-sampler 1000 :length 20) rnn\n                             ;; Just collect and reset the warp\n                             ;; monitors after each batch of\n                             ;; instances.\n                             (make-cost-monitors\n                              rnn :attributes '(:event \"warped pred.\"))))))\n    ;; Verify that no further unfoldings took place.\n    (assert (\u003C= (length (clumps rnn)) 10)))\n  (log-mat-room))\n\n#|\n\n;;; Transcript follows:\n(let (;; Backprop nets do not need double float. Using single floats\n      ;; is faster and needs less memory.\n      (*default-mat-ctype* :float)\n      ;; Enable moving data in and out of GPU memory so that the RNN\n      ;; can work with sequences so long that the unfolded network\n      ;; wouldn't otherwise fit in the GPU.\n      (*cuda-window-start-time* 1)\n      (*log-time* nil))\n  ;; Seed the random number generators.\n  (repeatably ()\n    ;; Enable CUDA if available.\n    (with-cuda* ()\n      (train-sum-sign-rnn))))\n.. training at n-instances: 0\n.. cost: 0.000e+0 (0)\n.. #\u003CADAM-OPTIMIZER {1006CD5663}>\n..  GD-OPTIMIZER description:\n..    N-INSTANCES = 0\n..    SEGMENT-SET = #\u003CSEGMENT-SET\n..                    (#\u003C->WEIGHT (H #) :SIZE 1 1\u002F1 :NORM 1.73685>\n..                     #\u003C->WEIGHT (H #) :SIZE 1 1\u002F1 :NORM 0.31893>\n..                     #\u003C->WEIGHT (#1=# #2=# :PEEPHOLE) :SIZE\n..                       1 1\u002F1 :NORM 1.81610>\n..                     #\u003C->WEIGHT (H #2#) :SIZE 1 1\u002F1 :NORM 0.21965>\n..                     #\u003C->WEIGHT (#1# #3=# :PEEPHOLE) :SIZE\n..                       1 1\u002F1 :NORM 1.74939>\n..                     
#\u003C->WEIGHT (H #3#) :SIZE 1 1\u002F1 :NORM 0.40377>\n..                     #\u003C->WEIGHT (H PREDICTION) :SIZE\n..                       3 1\u002F1 :NORM 2.15898>\n..                     #\u003C->WEIGHT (:BIAS PREDICTION) :SIZE\n..                       3 1\u002F1 :NORM 2.94470>\n..                     #\u003C->WEIGHT (#1# #4=# :PEEPHOLE) :SIZE\n..                       1 1\u002F1 :NORM 0.97601>\n..                     #\u003C->WEIGHT (INPUT #4#) :SIZE 1 1\u002F1 :NORM 0.65261>\n..                     #\u003C->WEIGHT (:BIAS #4#) :SIZE 1 1\u002F1 :NORM 0.37653>\n..                     #\u003C->WEIGHT (INPUT #1#) :SIZE 1 1\u002F1 :NORM 0.92334>\n..                     #\u003C->WEIGHT (:BIAS #1#) :SIZE 1 1\u002F1 :NORM 0.01609>\n..                     #\u003C->WEIGHT (INPUT #5=#) :SIZE 1 1\u002F1 :NORM 1.09995>\n..                     #\u003C->WEIGHT (:BIAS #5#) :SIZE 1 1\u002F1 :NORM 1.41244>\n..                     #\u003C->WEIGHT (INPUT #6=#) :SIZE 1 1\u002F1 :NORM 0.40475>\n..                     #\u003C->WEIGHT (:BIAS #6#) :SIZE 1 1\u002F1 :NORM 1.75358>)\n..                    {1006CD8753}>\n..    LEARNING-RATE = 2.00000e-1\n..    MOMENTUM = NONE\n..    MOMENTUM-TYPE = :NONE\n..    WEIGHT-DECAY = 0.00000e+0\n..    WEIGHT-PENALTY = 0.00000e+0\n..    N-AFTER-UPATE-HOOK = 0\n..    BATCH-SIZE = 100\n..  \n..  BATCH-GD-OPTIMIZER description:\n..    N-BEFORE-UPATE-HOOK = 0\n..  \n..  ADAM-OPTIMIZER description:\n..    MEAN-DECAY-RATE = 1.00000e-1\n..    MEAN-DECAY-RATE-DECAY = 9.00000e-1\n..    VARIANCE-DECAY-RATE = 1.00000e-1\n..    VARIANCE-ADJUSTMENT = 1.00000d-7\n..  #\u003CRNN {10047C77E3}>\n..   BPN description:\n..     CLUMPS = #(#\u003CSUM-SIGN-FNN :STRIPES 1\u002F50 :CLUMPS 4>\n..                #\u003CSUM-SIGN-FNN :STRIPES 1\u002F50 :CLUMPS 4>)\n..     N-STRIPES = 1\n..     MAX-N-STRIPES = 50\n..   \n..   RNN description:\n..     MAX-LAG = 1\n..   pred.        cost: 1.223e+0 (4455.00)\n.. warped pred. cost: 1.228e+0 (9476.00)\n.. Foreign memory usage:\n.. foreign arrays: 162 (used bytes: 39,600)\n.. CUDA memory usage:\n.. device arrays: 114 (used bytes: 220,892, pooled bytes: 19,200)\n.. host arrays: 162 (used bytes: 39,600)\n.. host->device copies: 6,164, device->host copies: 4,490\n.. training at n-instances: 3000\n.. cost: 3.323e-1 (13726.00)\n.. training at n-instances: 6000\n.. cost: 3.735e-2 (13890.00)\n.. training at n-instances: 9000\n.. cost: 1.012e-2 (13872.00)\n.. training at n-instances: 12000\n.. cost: 3.026e-3 (13953.00)\n.. training at n-instances: 15000\n.. cost: 9.267e-4 (13948.00)\n.. training at n-instances: 18000\n.. cost: 2.865e-4 (13849.00)\n.. training at n-instances: 21000\n.. cost: 8.893e-5 (13758.00)\n.. training at n-instances: 24000\n.. cost: 2.770e-5 (13908.00)\n.. training at n-instances: 27000\n.. cost: 8.514e-6 (13570.00)\n.. training at n-instances: 30000\n.. cost: 2.705e-6 (13721.00)\n.. pred.        cost: 1.426e-6 (4593.00)\n.. warped pred. cost: 1.406e-6 (9717.00)\n.. Foreign memory usage:\n.. foreign arrays: 216 (used bytes: 52,800)\n.. CUDA memory usage:\n.. device arrays: 148 (used bytes: 224,428, pooled bytes: 19,200)\n.. host arrays: 216 (used bytes: 52,800)\n.. 
host->device copies: 465,818, device->host copies: 371,990\n..\n==> (#\u003C->WEIGHT (H (H :OUTPUT)) :SIZE 1 1\u002F1 :NORM 0.10624>\n-->  #\u003C->WEIGHT (H (H :CELL)) :SIZE 1 1\u002F1 :NORM 0.94460>\n-->  #\u003C->WEIGHT ((H :CELL) (H :FORGET) :PEEPHOLE) :SIZE 1 1\u002F1 :NORM 0.61312>\n-->  #\u003C->WEIGHT (H (H :FORGET)) :SIZE 1 1\u002F1 :NORM 0.38093>\n-->  #\u003C->WEIGHT ((H :CELL) (H :INPUT) :PEEPHOLE) :SIZE 1 1\u002F1 :NORM 1.17956>\n-->  #\u003C->WEIGHT (H (H :INPUT)) :SIZE 1 1\u002F1 :NORM 0.88011>\n-->  #\u003C->WEIGHT (H PREDICTION) :SIZE 3 1\u002F1 :NORM 49.93808>\n-->  #\u003C->WEIGHT (:BIAS PREDICTION) :SIZE 3 1\u002F1 :NORM 10.98112>\n-->  #\u003C->WEIGHT ((H :CELL) (H :OUTPUT) :PEEPHOLE) :SIZE 1 1\u002F1 :NORM 0.67996>\n-->  #\u003C->WEIGHT (INPUT (H :OUTPUT)) :SIZE 1 1\u002F1 :NORM 0.65251>\n-->  #\u003C->WEIGHT (:BIAS (H :OUTPUT)) :SIZE 1 1\u002F1 :NORM 10.23003>\n-->  #\u003C->WEIGHT (INPUT (H :CELL)) :SIZE 1 1\u002F1 :NORM 5.98116>\n-->  #\u003C->WEIGHT (:BIAS (H :CELL)) :SIZE 1 1\u002F1 :NORM 0.10681>\n-->  #\u003C->WEIGHT (INPUT (H :FORGET)) :SIZE 1 1\u002F1 :NORM 4.46301>\n-->  #\u003C->WEIGHT (:BIAS (H :FORGET)) :SIZE 1 1\u002F1 :NORM 1.57195>\n-->  #\u003C->WEIGHT (INPUT (H :INPUT)) :SIZE 1 1\u002F1 :NORM 0.36401>\n-->  #\u003C->WEIGHT (:BIAS (H :INPUT)) :SIZE 1 1\u002F1 :NORM 8.63833>)\n\n|#\n```\n\n\u003Ca id=\"x-28MGL-BP-3ARNN-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **RNN** *[BPN][5187]*\n\n    A recurrent neural net (as opposed to a\n    feed-forward one, see [`FNN`][9de4]). It is typically built with [`BUILD-RNN`][764b], which is no\n    more than a shallow convenience macro.\n    \n    An `RNN` takes instances as inputs that are sequences of variable\n    length. At each time step, the next unprocessed elements of these\n    sequences are set as input until all input sequences in the batch\n    run out. To be able to perform backpropagation, all intermediate\n    [`LUMP`][c1ac]s must be kept around, so the recursive connections are\n    transformed out by\n    [unfolding](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBackpropagation_through_time)\n    the network. Just how many lumps this means depends on the length of\n    the sequences.\n    \n    When an `RNN` is created, `MAX-LAG + 1` [`BPN`][5187]s are instantiated so\n    that all weights are present and one can start training it.\n\n\u003Ca id=\"x-28MGL-BP-3AUNFOLDER-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [reader] **UNFOLDER** *[RNN][b0f3] (:UNFOLDER)*\n\n    The `UNFOLDER` of an [`RNN`][b0f3] is a function of no arguments\n    that builds and returns a [`BPN`][5187]. The unfolder is allowed to create\n    networks with arbitrary topology, even different ones for different\n    [`TIME-STEP`][6e96]s with the help of [`LAG`][ff5a], or nested `RNN`s. Weights of\n    the same name are shared between the folds. That is, if a [`->WEIGHT`][b76f]\n    lump were to be created and a weight lump of the same name already\n    exists, then the existing lump will be added to the `BPN` created by\n    `UNFOLDER`.\n\n\u003Ca id=\"x-28MGL-BP-3AMAX-LAG-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [reader] **MAX-LAG** *[RNN][b0f3] (:MAX-LAG = 1)*\n\n    The networks built by [`UNFOLDER`][8e53] may contain new\n    weights up to time step `MAX-LAG`. Beyond that point, all weight\n    lumps must be reappearances of weight lumps with the same name at\n    previous time steps. Most recurrent networks reference only the\n    state of lumps at the previous time step (with the function [`LAG`][ff5a]),\n    hence the default of 1. But it is possible to have connections to\n    arbitrary time steps. The maximum connection lag must be specified\n    when creating the [`RNN`][b0f3].\n\n
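As a sketch of the plumbing (the network itself is meaningless and entirely hypothetical; see [`BUILD-RNN`][764b], [`LAG`][ff5a] and [`TIME-STEP`][6e96] below), a connection reaching two steps back requires `:MAX-LAG 2` and a `(LAG ... :LAG 2)` reference, guarded for the first two time steps where no such earlier `BPN` exists:\n\n```commonlisp\n;;; Hypothetical sketch: H sums the current input with the H of two\n;;; time steps ago, hence :MAX-LAG 2.\n(build-rnn (:max-lag 2)\n  (build-fnn ()\n    (input (->input :size 1))\n    (h (->+ :name 'h\n            :args (if (\u003C= 2 (time-step))\n                      (list input (lag 'h :lag 2))\n                      (list input))))))\n```\n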
\u003Ca id=\"x-28MGL-BP-3ACUDA-WINDOW-START-TIME-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [accessor] **CUDA-WINDOW-START-TIME** *[RNN][b0f3] (:CUDA-WINDOW-START-TIME = \\*CUDA-WINDOW-START-TIME\\*)*\n\n    Due to unfolding, the memory footprint of an [`RNN`][b0f3]\n    is almost linear in the number of time steps (i.e. the max\n    sequence length). For prediction, this is addressed by\n    [Time Warp][d0e3]. For training, we cannot discard results of\n    previous time steps because they are needed for backpropagation,\n    but we can at least move them out of GPU memory if they are not\n    going to be used for a while and copy them back before they are\n    needed. Obviously, this is only relevant if CUDA is being used.\n    \n    If `CUDA-WINDOW-START-TIME` is `NIL`, then this feature is turned off.\n    Else, during training, at `CUDA-WINDOW-START-TIME` or later time\n    steps, matrices belonging to non-weight lumps may be forced out of\n    GPU memory and later brought back as needed.\n    \n    This feature is implemented in terms of\n    `MGL-MAT:WITH-SYNCING-CUDA-FACETS` that uses CUDA host memory (also\n    known as *page-locked* or *pinned memory*) to do asynchronous\n    copies concurrently with normal computation. The consequence of\n    this is that it is now main memory usage that's unbounded, which\n    together with page-locking makes it a potent weapon to bring a\n    machine to a halt. You were warned.\n\n\u003Ca id=\"x-28MGL-BP-3A-2ACUDA-WINDOW-START-TIME-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*CUDA-WINDOW-START-TIME\\*** *NIL*\n\n    The default for [`CUDA-WINDOW-START-TIME`][f573].\n\n\u003Ca id=\"x-28MGL-BP-3ABUILD-RNN-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [macro] **BUILD-RNN** *(&KEY RNN (CLASS ''RNN) NAME INITARGS MAX-N-STRIPES (MAX-LAG 1)) &BODY BODY*\n\n    Create an `RNN` with `MAX-N-STRIPES` and `MAX-LAG` whose [`UNFOLDER`][8e53] is `BODY`\n    wrapped in a lambda. Bind the symbol given as the `RNN` argument to the\n    `RNN` object so that `BODY` can see it.\n\n\u003Ca id=\"x-28MGL-BP-3ALAG-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **LAG** *NAME &KEY (LAG 1) RNN PATH*\n\n    In `RNN`, or if it's `NIL`, in the `RNN` being extended with another\n    [`BPN`][5187] (called *unfolding*), look up the [`CLUMP`][a4fe] with `NAME` in the `BPN`\n    that's `LAG` number of time steps before the `BPN` being added. If this\n    function is called from [`UNFOLDER`][8e53] of an `RNN` (which is what happens\n    behind the scenes in the body of [`BUILD-RNN`][764b]), then it returns an\n    opaque object representing a lagged connection to a clump, else it\n    returns the `CLUMP` itself.\n    \n    FIXDOC: `PATH`\n\n\u003Ca id=\"x-28MGL-BP-3ATIME-STEP-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **TIME-STEP** *&KEY (RNN \\*RNN\\*)*\n\n    Return the time step `RNN` is currently executing or being unfolded for.\n    It is 0 when the `RNN` is being unfolded for the first time.\n\n\u003Ca id=\"x-28MGL-CORE-3ASET-INPUT-20-28METHOD-20-28T-20MGL-BP-3ARNN-29-29-29\">\u003C\u002Fa>\n\n- [method] **SET-INPUT** *INSTANCES (RNN RNN)*\n\n    `RNN`s operate on batches of instances just like [`FNN`][9de4]s. 
But the\n    instances here are like datasets: sequences or samplers and they are\n    turned into sequences of batches of instances with\n    [`MAP-DATASETS`][765c] `:IMPUTE` `NIL`. The batch of instances at index 2 is\n    clamped onto the [`BPN`][5187] at time step 2 with `SET-INPUT`.\n    \n    When the input sequences in the batch are not of the same length,\n    already exhausted sequences will produce `NIL` (due to `:IMPUTE` `NIL`)\n    above. When such a `NIL` is clamped with `SET-INPUT` on a `BPN` of the\n    `RNN`, `SET-INPUT` must set the [`IMPORTANCE`][038e] of the ->ERROR lumps to 0\n    else training would operate on the noise left there by previous\n    invocations.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-RNN-TIME-WARP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Time Warp\n\nThe unbounded memory usage of [`RNN`][b0f3]s with one [`BPN`][5187] allocated per\ntime step can become a problem. For training, where the gradients\noften have to be backpropagated from the last time step to the very\nbeginning, this is hard to solve but with [`CUDA-WINDOW-START-TIME`][f573] the\nlimit is no longer GPU memory.\n\nFor prediction on the other hand, one doesn't need to keep old steps\naround indefinitely: they can be discarded when future time steps\nwill never reference them again.\n\n\u003Ca id=\"x-28MGL-BP-3A-2AWARP-TIME-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*WARP-TIME\\*** *NIL*\n\n    Controls whether warping is enabled (see [Time Warp][d0e3]). Don't\n    enable it for training, as it would make backprop impossible.\n\n\u003Ca id=\"x-28MGL-BP-3AWARPED-TIME-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **WARPED-TIME** *&KEY (RNN \\*RNN\\*) (TIME (TIME-STEP :RNN RNN)) (LAG 0)*\n\n    Return the index of the [`BPN`][5187] in [`CLUMPS`][f7c1] of `RNN` whose task it is to\n    execute computation at `(- (TIME-STEP RNN) LAG)`. This is normally\n    the same as [`TIME-STEP`][6e96] (disregarding `LAG`). That is, `CLUMPS` can be\n    indexed by `TIME-STEP` to get the `BPN`. However, when [`*WARP-TIME*`][ed4f] is\n    true, execution proceeds in a cycle as the structure of the network\n    allows.\n    \n    Suppose we have a typical `RNN` that only ever references the previous\n    time step so its [`MAX-LAG`][084d] is 1. Its [`UNFOLDER`][8e53] returns `BPN`s of\n    identical structure bar a shift in their time lagged connections\n    except for the very first, so [`WARP-START`][d6e0] and [`WARP-LENGTH`][51d5] are both 1.\n    If `*WARP-TIME*` is `NIL`, then the mapping from `TIME-STEP` to the `BPN` in\n    `CLUMPS` is straightforward:\n    \n        time:   |  0 |  1 |  2 |  3 |  4 |  5\n        --------+----+----+----+----+----+----\n        warped: |  0 |  1 |  2 |  3 |  4 |  5\n        --------+----+----+----+----+----+----\n        bpn:    | b0 | b1 | b2 | b3 | b4 | b5\n    \n    When `*WARP-TIME*` is true, we reuse the `B1` - `B2` bpns in a loop:\n    \n        time:   |  0 |  1 |  2 |  3 |  4 |  5\n        --------+----+----+----+----+----+----\n        warped: |  0 |  1 |  2 |  1 |  2 |  1\n        --------+----+----+----+----+----+----\n        bpn:    | b0 | b1 | b2 | b1*| b2 | b1*\n    \n    `B1*` is the same `BPN` as `B1`, but its connections created by `LAG` go\n    through warped time and end up referencing `B2`. 
This way, memory\n    consumption is independent of the number of time steps needed to\n    process a sequence or make predictions.\n    \n    To be able to pull this trick off, `WARP-START` and `WARP-LENGTH` must be\n    specified when the `RNN` is instantiated. In general, with\n    `*WARP-TIME*`, `(+ WARP-START (MAX 2 WARP-LENGTH))` bpns are needed.\n    The 2 comes from the fact that with cycle length 1 a bpn would need\n    to take its input from itself, which is problematic because it has\n    [`NODES`][cc1c] for only one set of values.\n\n\u003Ca id=\"x-28MGL-BP-3AWARP-START-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [reader] **WARP-START** *[RNN][b0f3] (:WARP-START = 1)*\n\n    The [`TIME-STEP`][6e96] from which [`UNFOLDER`][8e53] will create\n    [`BPN`][5187]s that essentially repeat every [`WARP-LENGTH`][51d5] steps.\n\n\u003Ca id=\"x-28MGL-BP-3AWARP-LENGTH-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [reader] **WARP-LENGTH** *[RNN][b0f3] (:WARP-LENGTH = 1)*\n\n    An integer such that the [`BPN`][5187] that [`UNFOLDER`][8e53] creates at\n    time step `I` (where `(\u003C= WARP-START I)`) is identical to the `BPN`\n    created at time step `(+ WARP-START (MOD (- I WARP-START)\n    WARP-LENGTH))` except for a shift in its time lagged\n    connections.\n\n\u003Ca id=\"x-28MGL-BP-3ASTEP-MONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [accessor] **STEP-MONITORS** *[RNN][b0f3] (:STEP-MONITORS = NIL)*\n\n    During training, unfolded [`BPN`][5187]s corresponding to\n    previous time steps may be expensive to get at because they are no\n    longer in GPU memory. This consideration also applies to making\n    predictions, with the additional caveat that with [`*WARP-TIME*`][ed4f] true,\n    previous states are discarded so it's not possible to gather\n    statistics after [`FORWARD`][c1ae] has finished.\n    \n    Add monitor objects to this slot and they will be automatically\n    applied to the [`RNN`][b0f3] after each step when `FORWARD`ing the `RNN`\n    during training or prediction. To be able to easily switch between\n    sets of monitors, in addition to a list of monitors this can be a\n    symbol or a function, too. If it's a symbol, then it's a designator\n    for its [`SYMBOL-VALUE`][cee6]. If it's a function, then it must have no\n    arguments and it's a designator for its return value.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LUMPS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.4 Lumps\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.1 Lump Base Class\n\n\u003Ca id=\"x-28MGL-BP-3ALUMP-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **LUMP** *[CLUMP][a4fe]*\n\n    A `LUMP` is a simple, layerlike component of a neural\n    network. There are many kinds of lumps, each of which performs a\n    specific operation or just stores inputs and weights. By convention,\n    the names of lumps start with the prefix `->`. Defined as classes,\n    they also have a function of the same name as the class to create\n    them easily. These maker functions typically have keyword arguments\n    corresponding to initargs of the class, with some (mainly the input\n    lumps) turned into normal positional arguments. 
So instead of having\n    to do\n    \n        (make-instance '->tanh :x some-input :name 'my-tanh)\n    \n    one can simply write\n    \n        (->tanh some-input :name 'my-tanh)\n    \n    Lumps instantiated in any way within a [`BUILD-FNN`][606c] or [`BUILD-RNN`][764b] are\n    automatically added to the network being built.\n    \n    A lump has its own [`NODES`][cc1c] and [`DERIVATIVES`][a81b] matrices allocated for it\n    in which the results of the forward and backward passes are stored.\n    This is in contrast to a [`BPN`][5187] whose `NODES` and `DERIVATIVES`\n    are those of its last constituent [`CLUMP`][a4fe].\n    \n    Since lumps almost always live within a `BPN`, their\n    [`N-STRIPES`][07fb] and [`MAX-N-STRIPES`][91a3] are\n    handled automagically behind the scenes.\n\n\u003Ca id=\"x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **SIZE** *[LUMP][c1ac] (:SIZE)*\n\n    The number of values in a single stripe.\n\n\u003Ca id=\"x-28MGL-COMMON-3ADEFAULT-VALUE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **DEFAULT-VALUE** *[LUMP][c1ac] (:DEFAULT-VALUE = 0)*\n\n    Upon creation or resize the lump's nodes get\n    filled with this value.\n\n\u003Ca id=\"x-28MGL-BP-3ADEFAULT-SIZE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **DEFAULT-SIZE** *LUMP*\n\n    Return a default for the [`SIZE`][85d3] of\n    `LUMP` if one is not supplied at instantiation. The value is often\n    computed based on the sizes of the inputs. This function is for\n    implementing new lump types.\n\n\u003Ca id=\"x-28MGL-COMMON-3ANODES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **NODES** *[LUMP][c1ac] (= NIL)*\n\n    The values computed by the lump in the forward\n    pass are stored here. It is an `N-STRIPES * SIZE` matrix that has\n    storage allocated for `MAX-N-STRIPES * SIZE` elements for\n    non-weight lumps. [`->WEIGHT`][b76f] lumps have no stripes nor restrictions\n    on their shape.\n\n\u003Ca id=\"x-28MGL-BP-3ADERIVATIVES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **DERIVATIVES** *[LUMP][c1ac]*\n\n    The derivatives computed in the backward pass are\n    stored here. This matrix is very much like [`NODES`][d699]\n    in shape and size.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-INPUTS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.2 Inputs\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-INPUT-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Input Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EINPUT-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->INPUT** *[->DROPOUT][441b]*\n\n    A lump that has no input lumps, does not change its\n    values in the forward pass (except when [`DROPOUT`][e7f6] is non-zero), and does not compute derivatives. 
*Clamp*\n    inputs on [`NODES`][cc1c] of input lumps in [`SET-INPUT`][0c9e].\n    \n    For convenience, `->INPUT` can perform dropout itself although it\n    defaults to no dropout.\n    \n    ```common-lisp\n    (->input :size 10 :name 'some-input)\n    ==> #\u003C->INPUT SOME-INPUT :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EINPUT-29-29\">\u003C\u002Fa>\n\n- [accessor] **DROPOUT** *[->INPUT][f54e] (= NIL)*\n\n    See [`DROPOUT`][2481].\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-EMBEDDING-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Embedding Lump\n\nThis lump is like an input and a simple activation molded together\nin the name of efficiency.\n\n\u003Ca id=\"x-28MGL-BP-3A--3EEMBEDDING-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->EMBEDDING** *[LUMP][c1ac]*\n\n    Select rows of [`WEIGHTS`][ab3c], one row for each index in\n    [`INPUT-ROW-INDICES`][1a52]. This lump is equivalent to adding an [`->INPUT`][f54e] lump\n    with a one-hot encoding scheme and a [`->V*M`][dbc4] lump on top of it, but it\n    is more efficient in execution and in memory usage, because it works\n    with a sparse representation of the input.\n    \n    The [`SIZE`][019f] of this lump is the number of columns of `WEIGHTS` which is\n    determined automatically.\n    \n    ```common-lisp\n    (->embedding :weights (->weight :name 'embedding-weights\n                                    :dimensions '(3 5))\n                 :name 'embeddings)\n    ==> #\u003C->EMBEDDING EMBEDDINGS :SIZE 5 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-COMMON-3AWEIGHTS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EEMBEDDING-29-29\">\u003C\u002Fa>\n\n- [reader] **WEIGHTS** *[->EMBEDDING][f1c1] (:WEIGHTS)*\n\n    A weight lump whose rows indexed by\n    [`INPUT-ROW-INDICES`][1a52] are copied to the output of this lump.\n\n\u003Ca id=\"x-28MGL-BP-3AINPUT-ROW-INDICES-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EEMBEDDING-29-29\">\u003C\u002Fa>\n\n- [accessor] **INPUT-ROW-INDICES** *[->EMBEDDING][f1c1] (:INPUT-ROW-INDICES)*\n\n    A sequence of batch size length of row indices. To\n    be set in [`SET-INPUT`][0c9e].\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-WEIGHT-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.3 Weight Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EWEIGHT-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->WEIGHT** *[LUMP][c1ac]*\n\n    A set of optimizable parameters of some kind. When\n    a [`BPN`][5187] is trained (see [Training][0d82]) the [`NODES`][cc1c] of weight lumps\n    will be changed. Weight lumps perform no computation.\n    \n    Weights can be created by specifying the total size or the\n    dimensions:\n    \n    ```common-lisp\n    (dimensions (->weight :size 10 :name 'w))\n    => (1 10)\n    (dimensions (->weight :dimensions '(5 10) :name 'w))\n    => (5 10)\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3ADIMENSIONS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EWEIGHT-29-29\">\u003C\u002Fa>\n\n- [reader] **DIMENSIONS** *[->WEIGHT][b76f] (:DIMENSIONS)*\n\n    [`NODES`][cc1c] and [`DERIVATIVES`][a81b] of this lump will be\n    allocated with these dimensions.\n\n\u003Ca id=\"x-28MGL-BP-3AWITH-WEIGHTS-COPIED-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [macro] **WITH-WEIGHTS-COPIED** *(FROM-BPN) &BODY BODY*\n\n    In `BODY`, [`->WEIGHT`][b76f] will first check whether a weight lump of the same\n    name exists in `FROM-BPN` and return that, or else create a weight\n    lump normally. If `FROM-BPN` is `NIL`, then no weights are copied.
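\n    \n    A minimal sketch of reusing weights between two networks\n    (`TRAINED-FNN` is a made-up name for an existing network):\n    \n    ```common-lisp\n    ;; Weight lumps created while building this network whose names\n    ;; match weight lumps in TRAINED-FNN are looked up there instead\n    ;; of being created anew.\n    (with-weights-copied (trained-fnn)\n      (build-fnn (:class 'fnn)\n        (input (->input :size 10))\n        (output (->sigmoid (->activation input :name 'h1 :size 5)))))\n    ```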
\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ACTIVATIONS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.4 Activations\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ACTIVATION-SUBNET-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Activation Subnet\n\nSo we have some inputs. Usually the next step is to multiply the\ninput vector with a weight matrix and add biases. This can be done\ndirectly with ->+, [`->V*M`][dbc4] and [`->WEIGHT`][b76f], but it's more convenient to\nuse activation subnets to reduce the clutter.\n\n\u003Ca id=\"x-28MGL-BP-3A--3EACTIVATION-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->ACTIVATION** *[BPN][5187]*\n\n    Activation subnetworks are built by the function\n    [`->ACTIVATION`][b602] and they have a number of lumps hidden inside them.\n    Ultimately, this subnetwork computes a sum like `sum_i x_i * W_i +\n    sum_j y_j .* V_j + biases` where `x_i` are input lumps, `W_i` are\n    dense matrices representing connections, while `V_j` are peephole\n    connection vectors that are multiplied in an elementwise manner with\n    their corresponding input `y_j`.\n\n\u003Ca id=\"x-28MGL-BP-3A--3EACTIVATION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **->ACTIVATION** *INPUTS &KEY (NAME (GENSYM)) SIZE PEEPHOLES (ADD-BIAS-P T)*\n\n    Create a subnetwork of class [`->ACTIVATION`][7162] that computes the overall\n    activation from dense connections from lumps in `INPUTS`, and\n    elementwise connections from lumps in `PEEPHOLES`. Create new [`->WEIGHT`][b76f]\n    lumps as necessary. `INPUTS` and `PEEPHOLES` can be a single lump or a\n    list of lumps. Finally, if `ADD-BIAS-P`, then add an elementwise bias\n    too. `SIZE` must be specified explicitly, because it is not possible\n    to determine it unless there are peephole connections.\n    \n    ```common-lisp\n    (->activation (->input :size 10 :name 'input) :name 'h1 :size 4)\n    ==> #\u003C->ACTIVATION (H1 :ACTIVATION) :STRIPES 1\u002F1 :CLUMPS 4>\n    ```\n    \n    This is the basic workhorse of neural networks which takes care of\n    the linear transformation whose results are then fed to some\n    non-linearity ([`->SIGMOID`][83f9], [`->TANH`][5309], etc).\n    \n    The name of the subnetwork clump is `(,NAME :ACTIVATION)`. The bias\n    weight lump (if any) is named `(:BIAS ,NAME)`. Dense connection\n    weight lumps are named after the input and `NAME`: `(,(NAME\n    INPUT) ,NAME)`, while peephole weight lumps are named `(,(NAME\n    INPUT) ,NAME :PEEPHOLE)`. This is useful to know if, for example,\n    they are to be initialized differently.
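\n    \n    For instance, relying on the naming convention above, the bias\n    weight of the `H1` activation can be fetched with [`FIND-CLUMP`][175f].\n    A sketch; `FNN` stands for whatever network contains it:\n    \n    ```common-lisp\n    ;; The bias created for (->ACTIVATION ... :NAME 'H1) is named\n    ;; (:BIAS H1), so it can be looked up by that name.\n    (find-clump '(:bias h1) fnn)\n    ```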
\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-BATCH-NORMALIZATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Batch-Normalization\n\n\u003Ca id=\"x-28MGL-BP-3A--3EBATCH-NORMALIZED-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->BATCH-NORMALIZED** *[LUMP][c1ac]*\n\n    This is an implementation of v3 of the [Batch\n    Normalization paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03167). The output of\n    `->BATCH-NORMALIZED` is its input normalized so that for all elements\n    the mean across stripes is zero and the variance is 1. That is, the\n    mean of the batch is subtracted from the inputs and they are\n    rescaled by their sample stddev. Actually, after the normalization\n    step the values are rescaled and shifted (but this time with learnt\n    parameters) in order to keep the representational power of the model\n    the same. The primary purpose of this lump is to speed up learning,\n    but it also acts as a regularizer. See the paper for the details.\n    \n    To normalize the output of `LUMP` without any additional\n    regularizer effect:\n    \n    ```commonlisp\n    (->batch-normalized lump :batch-size :use-population)\n    ```\n    \n    The above uses an exponential moving average to estimate the mean\n    and variance of batches and these estimations are used at both\n    training and test time. In contrast to this, the published version\n    uses the sample mean and variance of the current batch at training\n    time which injects noise into the process. The noise is higher for\n    lower batch sizes and has a regularizing effect. This is the default\n    behavior (equivalent to `:BATCH-SIZE NIL`):\n    \n    ```commonlisp\n    (->batch-normalized lump)\n    ```\n    \n    For performance reasons one may wish to process a higher number of\n    instances in a batch (in the sense of [`N-STRIPES`][8dd7]) and get the\n    regularization effect associated with a lower batch size. This is\n    possible by setting `:BATCH-SIZE` to a divisor of the number of\n    stripes. Say, the number of stripes is 128, but we want as much\n    regularization as we would get with 32:\n    \n    ```commonlisp\n    (->batch-normalized lump :batch-size 32)\n    ```\n    \n    The primary input of `->BATCH-NORMALIZED` is often an `->ACTIVATION`([`0`][7162] [`1`][b602]) and\n    its output is fed into an activation function (see\n    [Activation Functions][5d86]).\n\n\u003Ca id=\"x-28MGL-BP-3ABATCH-NORMALIZATION-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZED-29-29\">\u003C\u002Fa>\n\n- [reader] **BATCH-NORMALIZATION** *[->BATCH-NORMALIZED][9da9] (:NORMALIZATION)*\n\n    The [`->BATCH-NORMALIZATION`][c469] of this lump. May be\n    shared between multiple [`->BATCH-NORMALIZED`][9da9] lumps.\n    \n    Batch normalization is special in that it has state apart from the\n    computed results ([`NODES`][cc1c]) and its derivatives ([`DERIVATIVES`][a81b]). This\n    state is the estimated mean and variance of its inputs and they\n    are encapsulated by `->BATCH-NORMALIZATION`.\n    \n    If `NORMALIZATION` is not given at instantiation, then a new\n    `->BATCH-NORMALIZATION` object will be created automatically,\n    passing `:BATCH-SIZE`, `:VARIANCE-ADJUSTMENT`, and `:POPULATION-DECAY`\n    arguments on to `->BATCH-NORMALIZATION`. See [`BATCH-SIZE`][c918], [`VARIANCE-ADJUSTMENT`][aa86] and [`POPULATION-DECAY`][46c4]. New scale and shift weight lumps will be\n    created with names:\n    \n        `(,name :scale)\n        `(,name :shift)\n    \n    where `NAME` is the [`NAME`][5842] of this lump.\n    \n    This default behavior covers the use-case where the statistics\n    kept by `->BATCH-NORMALIZATION` are to be shared only between time\n    steps of an [`RNN`][b0f3].\n\n\u003Ca id=\"x-28MGL-BP-3A--3EBATCH-NORMALIZATION-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->BATCH-NORMALIZATION** *[->WEIGHT][b76f]*\n\n    The primary purpose of this class is to hold the\n    estimated mean and variance of the inputs to be normalized and allow\n    them to be shared between multiple [`->BATCH-NORMALIZED`][9da9] lumps that\n    carry out the computation. 
These estimations are saved and loaded by\n    [`SAVE-STATE`][c102] and [`LOAD-STATE`][6bd7].\n    \n    ```commonlisp\n    (->batch-normalization (->weight :name '(h1 :scale) :size 10)\n                           (->weight :name '(h1 :shift) :size 10)\n                           :name '(h1 :batch-normalization))\n    ```\n\n\u003Ca id=\"x-28MGL-COMMON-3ASCALE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **SCALE** *[->BATCH-NORMALIZATION][c469] (:SCALE)*\n\n    A weight lump of the same size as [`SHIFT`][7960]. This is\n    $\\gamma$ in the paper.\n\n\u003Ca id=\"x-28MGL-BP-3ASHIFT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **SHIFT** *[->BATCH-NORMALIZATION][c469] (:SHIFT)*\n\n    A weight lump of the same size as [`SCALE`][8970]. This is\n    $\\beta$ in the paper.\n\n\u003Ca id=\"x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **BATCH-SIZE** *[->BATCH-NORMALIZATION][c469] (:BATCH-SIZE = NIL)*\n\n    Normally all stripes participate in the batch.\n    Lowering the number of stripes may increase the regularization\n    effect, but it also makes the computation less efficient. By\n    setting `BATCH-SIZE` to a divisor of [`N-STRIPES`][8dd7] one can decouple the\n    concern of efficiency from that of regularization. The default\n    value, `NIL`, is equivalent to `N-STRIPES`. `BATCH-SIZE` only affects\n    training.\n    \n    With the special value `:USE-POPULATION`, instead of the mean and\n    the variance of the current batch, use the population statistics\n    for normalization. This effectively cancels the regularization\n    effect, leaving only the faster learning.\n\n\u003Ca id=\"x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **VARIANCE-ADJUSTMENT** *[->BATCH-NORMALIZATION][c469] (:VARIANCE-ADJUSTMENT = 1.0e-4)*\n\n    A small positive real number that's added to the\n    sample variance. This is $\\epsilon$ in the paper.\n\n\u003Ca id=\"x-28MGL-BP-3APOPULATION-DECAY-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **POPULATION-DECAY** *[->BATCH-NORMALIZATION][c469] (:POPULATION-DECAY = 0.99)*\n\n    While training, an exponential moving average of\n    batch means and standard deviations (termed *population\n    statistics*) is updated. When making predictions, normalization is\n    performed using these statistics. These population statistics are\n    persisted by [`SAVE-STATE`][c102].\n\n\u003Ca id=\"x-28MGL-BP-3A--3EBATCH-NORMALIZED-ACTIVATION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **->BATCH-NORMALIZED-ACTIVATION** *INPUTS &KEY (NAME (GENSYM)) SIZE PEEPHOLES BATCH-SIZE VARIANCE-ADJUSTMENT POPULATION-DECAY*\n\n    A utility function that creates an `->ACTIVATION`([`0`][7162] [`1`][b602]), wraps it in\n    [`->BATCH-NORMALIZED`][9da9], and also creates the two weight lumps for the\n    scale and shift parameters of its [`BATCH-NORMALIZATION`][eaf1]. 
`(->BATCH-NORMALIZED-ACTIVATION INPUTS :NAME 'H1 :SIZE\n    10)` is equivalent to:\n    \n    ```commonlisp\n    (->batch-normalized (->activation inputs :name 'h1 :size 10 :add-bias-p nil)\n                        :name '(h1 :batch-normalized-activation))\n    ```\n    \n    Note how biases are turned off since normalization will cancel them\n    anyway (but a shift is added which amounts to the same effect).\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ACTIVATION-FUNCTIONS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.5 Activation Functions\n\nNow we are moving on to the most important non-linearities to which\nactivations are fed.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SIGMOID-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Sigmoid Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESIGMOID-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SIGMOID** *[->DROPOUT][441b] [LUMP][c1ac]*\n\n    Applies the `1\u002F(1 + e^{-x})` function elementwise\n    to its inputs. This is one of the classic non-linearities for neural\n    networks.\n    \n    For convenience, `->SIGMOID` can perform dropout itself although it\n    defaults to no dropout.\n    \n    ```common-lisp\n    (->sigmoid (->activation (->input :size 10) :size 5) :name 'this)\n    ==> #\u003C->SIGMOID THIS :SIZE 5 1\u002F1 :NORM 0.00000>\n    ```\n    \n    The [`SIZE`][019f] of this lump is the size of its input which is determined\n    automatically.\n\n\u003Ca id=\"x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESIGMOID-29-29\">\u003C\u002Fa>\n\n- [accessor] **DROPOUT** *[->SIGMOID][83f9] (= NIL)*\n\n    See [`DROPOUT`][2481].\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-TANH-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Tanh Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ETANH-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->TANH** *[LUMP][c1ac]*\n\n    Applies the [`TANH`][993b] function to its input in an\n    elementwise manner. The [`SIZE`][019f] of this lump is the size of its input\n    which is determined automatically.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SCALED-TANH-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Scaled Tanh Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESCALED-TANH-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SCALED-TANH** *[LUMP][c1ac]*\n\n    Pretty much like [`TANH`][993b] but its input and output are\n    scaled in such a way that the variance of its output is close to 1\n    if the variance of its input is close to 1, which is a nice property\n    to combat vanishing gradients. The actual function is `1.7159 *\n    tanh(2\u002F3 * x)`. The [`SIZE`][019f] of this lump is the size of its input which\n    is determined automatically.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-RELU-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Relu Lump\n\nWe are somewhere around year 2007 by now.\n\n\u003Ca id=\"x-28MGL-BP-3A--3ERELU-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->RELU** *[LUMP][c1ac]*\n\n    `max(0,x)` activation function. Be careful, relu\n    units can get stuck in the off state: if they move too far into\n    negative territory it can be very difficult to get out of it. The\n    [`SIZE`][019f] of this lump is the size of its input which is determined\n    automatically.
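\n    \n    A sketch analogous to the `->SIGMOID` example above:\n    \n    ```common-lisp\n    ;; Rectify a 5-unit activation computed from a 10-unit input.\n    (->relu (->activation (->input :size 10) :size 5) :name 'this)\n    ```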
\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MAX-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Max Lump\n\nWe are in about year 2011.\n\n\u003Ca id=\"x-28MGL-BP-3A--3EMAX-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->MAX** *[LUMP][c1ac]*\n\n    This is basically maxout without dropout (see\n    http:\u002F\u002Farxiv.org\u002Fabs\u002F1302.4389). It groups its inputs by\n    [`GROUP-SIZE`][59dd], and outputs the maximum of each group.\n    The [`SIZE`][019f] of the output is automatically calculated: it is the size\n    of the input divided by [`GROUP-SIZE`][59dd].\n    \n    ```common-lisp\n    (->max (->input :size 120) :group-size 3 :name 'my-max)\n    ==> #\u003C->MAX MY-MAX :SIZE 40 1\u002F1 :NORM 0.00000 :GROUP-SIZE 3>\n    ```\n    \n    The advantage of `->MAX` over [`->RELU`][9d3a] is that gradient flow is never\n    stopped, so there is no problem of units getting stuck in the off\n    state.\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-29-29\">\u003C\u002Fa>\n\n- [reader] **GROUP-SIZE** *[->MAX][f652] (:GROUP-SIZE)*\n\n    The number of inputs in each group.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MIN-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Min Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EMIN-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->MIN** *[LUMP][c1ac]*\n\n    Same as [`->MAX`][f652], but it computes the [`MIN`][115e] of groups.\n    Rarely useful.\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMIN-29-29\">\u003C\u002Fa>\n\n- [reader] **GROUP-SIZE** *[->MIN][9a84] (:GROUP-SIZE)*\n\n    The number of inputs in each group.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MAX-CHANNEL-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Max-Channel Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EMAX-CHANNEL-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->MAX-CHANNEL** *[LUMP][c1ac]*\n\n    Called LWTA (Local Winner Take All) or\n    Channel-Out (see http:\u002F\u002Farxiv.org\u002Fabs\u002F1312.1909) in the literature,\n    it is basically [`->MAX`][f652], but instead of producing one output per\n    group, it just produces zeros for all units but the one with the\n    maximum value in the group. This allows the next layer to get some\n    information about the path along which information flowed. The [`SIZE`][019f]\n    of this lump is the size of its input which is determined\n    automatically.\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-CHANNEL-29-29\">\u003C\u002Fa>\n\n- [reader] **GROUP-SIZE** *[->MAX-CHANNEL][6021] (:GROUP-SIZE)*\n\n    The number of inputs in each group.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LOSSES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.6 Losses\n\nUltimately, we need to tell the network what to learn which means\nthat the loss function to be minimized needs to be constructed as\npart of the network.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LOSS-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Loss Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ELOSS-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->LOSS** *[->SUM][edcf]*\n\n    Calculate the loss for the instances in the batch.\n    The main purpose of this lump is to provide a training signal.\n    \n    An error lump is usually a leaf in the graph of lumps (i.e. there\n    are no other lumps whose input is this one). 
The special thing about\n    error lumps is that 1 (but see [`IMPORTANCE`][038e]) is added automatically to\n    their derivatives. Error lumps have exactly one node (per stripe)\n    whose value is computed as the sum of nodes in their input lump.\n\n\u003Ca id=\"x-28MGL-BP-3AIMPORTANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ELOSS-29-29\">\u003C\u002Fa>\n\n- [accessor] **IMPORTANCE** *[->LOSS][2171] (:IMPORTANCE = NIL)*\n\n    This is to support weighted instances. That is,\n    when not all training instances are equally important. If non-`NIL`,\n    a 1d `MAT` with the importances of stripes of the batch. When\n    `IMPORTANCE` is given (typically in [`SET-INPUT`][0c9e]), then instead of\n    adding 1 to the derivatives of all stripes, `IMPORTANCE` is added\n    elementwise.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SQUARED-DIFFERENCE-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Squared Difference Lump\n\nIn regression, the squared error loss is most common. The squared\nerror loss can be constructed by combining [`->SQUARED-DIFFERENCE`][e8d2] with\na [`->LOSS`][2171].\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESQUARED-DIFFERENCE-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SQUARED-DIFFERENCE** *[LUMP][c1ac]*\n\n    This lump takes two input lumps and calculates\n    their squared difference `(x - y)^2` in an elementwise manner. The\n    [`SIZE`][019f] of this lump is automatically determined from the size of its\n    inputs. This lump is often fed into [`->LOSS`][2171] that sums the squared\n    differences and makes it part of the function to be minimized.\n    \n    ```common-lisp\n    (->loss (->squared-difference (->activation (->input :size 100)\n                                                :size 10)\n                                  (->input :name 'target :size 10))\n            :name 'squared-error)\n    ==> #\u003C->LOSS SQUARED-ERROR :SIZE 1 1\u002F1 :NORM 0.00000>\n    ```\n    \n    Currently this lump is not CUDAized, but it will copy data from the\n    GPU if it needs to.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SOFTMAX-XE-LOSS-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Softmax Cross-Entropy Loss Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESOFTMAX-XE-LOSS-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SOFTMAX-XE-LOSS** *[LUMP][c1ac]*\n\n    A specialized lump that computes the softmax of its\n    input in the forward pass and backpropagates a cross-entropy loss.\n    The advantage of doing these together is numerical stability. The\n    total cross-entropy is the sum of cross-entropies per group of\n    [`GROUP-SIZE`][a437] elements:\n    \n    $$\n    XE(x) = - \\sum_{i=1,g} t_i \\ln(s_i),\n    $$\n    \n    where `g` is the number of classes ([`GROUP-SIZE`][a437]), `t_i` are the targets (i.e. the true\n    probabilities of the class, often all zero but one), and `s_i` is the\n    output of softmax calculated from input `X`:\n    \n    $$\n    s_i = \\mathrm{softmax}(x_1, x_2, ..., x_g) =\n      \\frac{e^{x_i}}{\\sum_{j=1,g} e^{x_j}}\n    $$\n    \n    In other words, in the forward phase this lump takes input `X`,\n    computes its elementwise [`EXP`][bc8c], normalizes each group of\n    [`GROUP-SIZE`][a437] elements to sum to 1 to get\n    the softmax which is the result that goes into [`NODES`][cc1c]. In the\n    backward phase, there are two sources of gradients: the lumps that\n    use the output of this lump as their input (currently not\n    implemented and would result in an error) and an implicit\n    cross-entropy loss.\n    \n    One can get the cross-entropy calculated in the most recent forward\n    pass by calling [`COST`][410c] on this lump.\n    \n    This is the most common loss function for classification. In fact,\n    it is nearly ubiquitous. See the [`FNN` Tutorial][6b38] and the\n    [`RNN` Tutorial][9700] for how this loss and [`SET-INPUT`][0c9e] work together.
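\n    \n    A minimal sketch of a 10-class output layer (`HIDDEN` is a made-up\n    name for some existing lump):\n    \n    ```common-lisp\n    ;; The softmax with implicit cross-entropy loss sits on top of a\n    ;; linear activation of the hidden layer.\n    (->softmax-xe-loss (->activation hidden :name 'prediction0 :size 10)\n                       :name 'prediction)\n    ```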
\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29\">\u003C\u002Fa>\n\n- [reader] **GROUP-SIZE** *[->SOFTMAX-XE-LOSS][85d34] (:GROUP-SIZE)*\n\n    The number of elements in a softmax group. This is\n    the number of classes for classification. Often `GROUP-SIZE` is\n    equal to [`SIZE`][019f] (it is the default), but in general the only\n    constraint is that `SIZE` is a multiple of `GROUP-SIZE`.\n\n\u003Ca id=\"x-28MGL-COMMON-3ATARGET-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29\">\u003C\u002Fa>\n\n- [accessor] **TARGET** *[->SOFTMAX-XE-LOSS][85d34] (:TARGET = NIL)*\n\n    Set in [`SET-INPUT`][0c9e], this is either a `MAT` of the same\n    size as the input lump `X` or, if the target is very sparse, this\n    can also be a sequence of batch size length that contains the\n    index\u002Fvalue pairs of non-zero entries:\n    \n        (;; first instance in batch has two non-zero targets\n         (;; class 10 has 30% expected probability\n          (10 . 0.3)\n          ;; class 2 has 70% expected probability\n          (2 .  0.7))\n         ;; second instance in batch puts 100% on class 7\n         7\n         ;; more instances in the batch follow\n         ...)\n    \n    Actually, in the rare case where [`GROUP-SIZE`][a437] is not [`SIZE`][019f] (i.e. there are several softmax\n    normalization groups for every example), the length of the above\n    target sequence is [`BATCH-SIZE`][fa6d] \\* N-GROUPS. Indices are always\n    relative to the start of the group.\n    \n    If [`GROUP-SIZE`][a437] is large (for example,\n    in neural language models with a huge number of words), using\n    sparse targets can make things go much faster, because calculation\n    of the derivative is no longer quadratic.\n    \n    Giving different weights to training instances is implicitly\n    supported. While target values in a group should sum to 1,\n    multiplying all target values with a weight `W` is equivalent to\n    training that `W` times on the same example.\n\n\u003Ca id=\"x-28MGL-BP-3AENSURE-SOFTMAX-TARGET-MATRIX-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **ENSURE-SOFTMAX-TARGET-MATRIX** *SOFTMAX-XE-LOSS N*\n\n    Set [`TARGET`][b5c7] of `SOFTMAX-XE-LOSS` to a `MAT` capable of holding the dense\n    target values for `N` stripes.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-STOCHASTICITY-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.7 Stochasticity\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-DROPOUT-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Dropout Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EDROPOUT-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->DROPOUT** *[LUMP][c1ac]*\n\n    The output of this lump is identical to its input,\n    except that some of its elements are randomly zeroed out during\n    training, which acts as a very strong regularizer. See Geoffrey\n    Hinton's 'Improving neural networks by preventing co-adaptation of\n    feature detectors'.\n    \n    The [`SIZE`][019f] of this lump is the size of its input which is determined\n    automatically.
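\n    \n    A sketch (`HIDDEN` is a made-up name for some existing lump):\n    \n    ```common-lisp\n    ;; Zero out each node of HIDDEN with probability 0.5 during\n    ;; training.\n    (->dropout hidden :dropout 0.5 :name 'hidden-dropout)\n    ```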
\n\n\u003Ca id=\"x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EDROPOUT-29-29\">\u003C\u002Fa>\n\n- [accessor] **DROPOUT** *[->DROPOUT][441b] (:DROPOUT = 0.5)*\n\n    If non-`NIL`, then in the forward pass zero out each\n    node in this chunk with `DROPOUT` probability.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-GAUSSIAN-RANDOM-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Gaussian Random Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EGAUSSIAN-RANDOM-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->GAUSSIAN-RANDOM** *[LUMP][c1ac]*\n\n    This lump has no input; it produces normally\n    distributed independent random numbers with [`MEAN`][d96a] and [`VARIANCE`][404c] (or\n    [`VARIANCE-FOR-PREDICTION`][80e2]). This is a useful building block for noise\n    based regularization methods.\n    \n    ```common-lisp\n    (->gaussian-random :size 10 :name 'normal :mean 1 :variance 2)\n    ==> #\u003C->GAUSSIAN-RANDOM NORMAL :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3AMEAN-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29\">\u003C\u002Fa>\n\n- [accessor] **MEAN** *[->GAUSSIAN-RANDOM][feaa] (:MEAN = 0)*\n\n    The mean of the normal distribution.\n\n\u003Ca id=\"x-28MGL-BP-3AVARIANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29\">\u003C\u002Fa>\n\n- [accessor] **VARIANCE** *[->GAUSSIAN-RANDOM][feaa] (:VARIANCE = 1)*\n\n    The variance of the normal distribution.\n\n\u003Ca id=\"x-28MGL-BP-3AVARIANCE-FOR-PREDICTION-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29\">\u003C\u002Fa>\n\n- [accessor] **VARIANCE-FOR-PREDICTION** *[->GAUSSIAN-RANDOM][feaa] (:VARIANCE-FOR-PREDICTION = 0)*\n\n    If not `NIL`, then this value overrides [`VARIANCE`][404c]\n    when not in training (i.e. when making predictions).\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SAMPLE-BINARY-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Binary Sampling Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESAMPLE-BINARY-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SAMPLE-BINARY** *[LUMP][c1ac]*\n\n    Treating values of its input as probabilities,\n    sample independent binomials. Turn true into 1 and false into 0. The\n    [`SIZE`][019f] of this lump is determined automatically from the size of its\n    input.\n    \n    ```common-lisp\n    (->sample-binary (->input :size 10) :name 'binarized-input)\n    ==> #\u003C->SAMPLE-BINARY BINARIZED-INPUT :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ARITHMETIC-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.8 Arithmetic\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SUM-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Sum Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESUM-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SUM** *[LUMP][c1ac]*\n\n    Computes the sum of all nodes of its input per\n    stripe. The [`SIZE`][019f] of this lump is always 1.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-V-2AM-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Vector-Matrix Multiplication Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EV-2AM-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->V\\*M** *[LUMP][c1ac]*\n\n    Perform `X * WEIGHTS` where `X` (the input) is of\n    size `M` and [`WEIGHTS`][ab3c] is a [`->WEIGHT`][b76f] whose single stripe is taken to\n    be of dimensions `M x N` stored in row major order. `N` is the size\n    of this lump. If [`TRANSPOSE-WEIGHTS-P`][533e] then `WEIGHTS` is `N x M` and `X\n    * WEIGHTS'` is computed.
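\n    \n    A sketch, assuming (per the maker convention for input lumps) that\n    the input is positional and the weight lump is passed with the\n    `:WEIGHTS` initarg:\n    \n    ```common-lisp\n    ;; Multiply a 3-element input by a 3x2 weight matrix; the\n    ;; resulting lump has SIZE 2.\n    (->v*m (->input :size 3 :name 'x)\n           :weights (->weight :dimensions '(3 2) :name 'w)\n           :name 'xw)\n    ```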
\n\n\u003Ca id=\"x-28MGL-COMMON-3AWEIGHTS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29\">\u003C\u002Fa>\n\n- [reader] **WEIGHTS** *[->V\\*M][dbc4] (:WEIGHTS)*\n\n    A [`->WEIGHT`][b76f] lump.\n\n\u003Ca id=\"x-28MGL-BP-3ATRANSPOSE-WEIGHTS-P-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29\">\u003C\u002Fa>\n\n- [reader] **TRANSPOSE-WEIGHTS-P** *[->V\\*M][dbc4] (:TRANSPOSE-WEIGHTS-P = NIL)*\n\n    Determines whether the input is multiplied by\n    [`WEIGHTS`][ab3c] or its transpose.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP--2B-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Elementwise Addition Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3E-2B-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->+** *[LUMP][c1ac]*\n\n    Performs elementwise addition on its input lumps.\n    The [`SIZE`][019f] of this lump is automatically determined from the size of\n    its inputs if there is at least one. If one of the inputs is a\n    [`->WEIGHT`][b76f] lump, then it is added to every stripe.\n    \n    ```common-lisp\n    (->+ (list (->input :size 10) (->weight :size 10 :name 'bias))\n         :name 'plus)\n    ==> #\u003C->+ PLUS :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP--2A-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Elementwise Multiplication Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3E-2A-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->\\*** *[LUMP][c1ac]*\n\n    Performs elementwise multiplication on its two\n    input lumps. The [`SIZE`][019f] of this lump is automatically determined from\n    the size of its inputs. Either input can be a [`->WEIGHT`][b76f] lump.\n    \n    ```common-lisp\n    (->* (->input :size 10) (->weight :size 10 :name 'scale)\n         :name 'mult)\n    ==> #\u003C->* MULT :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ABS-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Abs Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EABS-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->ABS** *[LUMP][c1ac]*\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-EXP-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Exp Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3EEXP-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->EXP** *[LUMP][c1ac]*\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-NORMALIZED-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Normalized Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ENORMALIZED-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->NORMALIZED** *[LUMP][c1ac]*\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SINE-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Sine Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESIN-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SIN** *[LUMP][c1ac]*\n\n    Applies the [`SIN`][ece2] function to its input in an\n    elementwise manner. The [`SIZE`][019f] of this lump is the size of its input\n    which is determined automatically.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-RNN-OPERATIONS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.9 Operations for `RNN`s\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LSTM-SUBNET-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### LSTM Subnet\n\n\u003Ca id=\"x-28MGL-BP-3A--3ELSTM-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->LSTM** *[BPN][5187]*\n\n    Long Short-Term Memory subnetworks are built by the\n    function [`->LSTM`][2823] and they have many lumps hidden inside them. 
These\n    lumps are packaged into a subnetwork to reduce clutter.\n\n\u003Ca id=\"x-28MGL-BP-3A--3ELSTM-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **->LSTM** *INPUTS &KEY NAME CELL-INIT OUTPUT-INIT SIZE (ACTIVATION-FN '->ACTIVATION) (GATE-FN '->SIGMOID) (INPUT-FN '->TANH) (OUTPUT-FN '->TANH) (PEEPHOLES T)*\n\n    Create an LSTM layer consisting of input, forget, and output gates with\n    which input, cell state and output are scaled. Lots of lumps are\n    created; the final one, representing the output of the LSTM, has `NAME`.\n    The rest of the lumps are named automatically based on `NAME`. This\n    function returns only the output lump (`m`), but all created lumps\n    are added automatically to the [`BPN`][5187] being built.\n    \n    There are many papers and tutorials on LSTMs. This version is well\n    described in \"Long Short-Term Memory Recurrent Neural Network\n    Architectures for Large Scale Acoustic Modeling\" (2014, Hasim Sak,\n    Andrew Senior, Francoise Beaufays). Using the notation from that\n    paper:\n    \n    $$\n    i\\_t = s(W\\_{ix} x\\_t + W\\_{im} m\\_{t-1} + W\\_{ic} \\odot\n    c\\_{t-1} + b\\_i)\n    $$\n    \n    $$\n    f\\_t = s(W\\_{fx} x\\_t + W\\_{fm} m\\_{t-1} + W\\_{fc} \\odot\n    c\\_{t-1} + b\\_f)\n    $$\n    \n    $$\n    c\\_t = f\\_t \\odot c\\_{t-1} + i\\_t \\odot g(W\\_{cx} x\\_t +\n    W\\_{cm} m\\_{t-1} + b\\_c)\n    $$\n    \n    $$\n    o\\_t = s(W\\_{ox} x\\_t + W\\_{om} m\\_{t-1} + W\\_{oc} \\odot\n    c\\_t + b\\_o)\n    $$\n    \n    $$\n    m\\_t = o\\_t \\odot h(c\\_t),\n    $$\n    \n    where `i`, `f`, and `o` are the input, forget and output gates. `c`\n    is the cell state and `m` is the actual output.\n    \n    Weight matrices for connections from `c` (`W_ic`, `W_fc` and `W_oc`)\n    are diagonal and represented by just the vector of diagonal values.\n    These connections are only added if `PEEPHOLES` is true.\n    \n    A notable difference from the paper is that in addition to being a\n    single lump, `x_t` (`INPUTS`) can also be a list of lumps. Whenever\n    some activation is to be calculated based on `x_t`, it is going to\n    be the sum of individual activations. For example, `W_ix * x_t` is\n    really `sum_j W_ijx * inputs_j`.\n    \n    If `CELL-INIT` is non-`NIL`, then it must be a [`CLUMP`][a4fe] of size `SIZE` which\n    stands for the initial state of the value cell (`c_{-1}`). `CELL-INIT`\n    being `NIL` is equivalent to the state of all zeros.\n    \n    `ACTIVATION-FN` defaults to `->ACTIVATION`([`0`][7162] [`1`][b602]), but it can be for example\n    [`->BATCH-NORMALIZED-ACTIVATION`][0f0f]. In general, functions like the\n    aforementioned two with a signature like (`INPUTS` [`&KEY`][4336] `NAME` `SIZE`\n    `PEEPHOLES`) can be passed as `ACTIVATION-FN`.
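\n    \n    A minimal sketch of an LSTM layer inside a network being built\n    (sizes and names are made up; see the [`RNN` Tutorial][9700] for a\n    complete example):\n    \n    ```common-lisp\n    (build-rnn ()\n      (build-fnn (:class 'fnn)\n        (input (->input :size 10))\n        ;; One LSTM layer; the returned lump M is named H.\n        (h (->lstm input :name 'h :size 32))\n        (prediction (->softmax-xe-loss\n                     (->activation h :name 'prediction0 :size 5)\n                     :name 'prediction))))\n    ```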
\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SEQ-BARRIER-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Sequence Barrier Lump\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESEQ-BARRIER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->SEQ-BARRIER** *[LUMP][c1ac]*\n\n    In an [`RNN`][b0f3], processing of stripes (instances in the\n    batch) may require a different number of time steps, so the final state\n    for stripe 0 is in stripe 0 of some lump L at time step 7, while for\n    stripe 1 it is in stripe 1 of some lump L at time step 42.\n    \n    This lump copies the per-stripe states from different lumps into a\n    single lump so that further processing can take place (typically\n    when the `RNN` is embedded in another network).\n    \n    The [`SIZE`][019f] of this lump is automatically set to the size of the lump\n    returned by `(FUNCALL SEQ-ELT-FN 0)`.\n\n\u003Ca id=\"x-28MGL-BP-3ASEQ-ELT-FN-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESEQ-BARRIER-29-29\">\u003C\u002Fa>\n\n- [reader] **SEQ-ELT-FN** *[->SEQ-BARRIER][4e91] (:SEQ-ELT-FN)*\n\n    A function of an `INDEX` argument that returns the\n    lump with that index in some sequence.\n\n\u003Ca id=\"x-28MGL-BP-3ASEQ-INDICES-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESEQ-BARRIER-29-29\">\u003C\u002Fa>\n\n- [accessor] **SEQ-INDICES** *[->SEQ-BARRIER][4e91]*\n\n    A sequence of length batch size of indices. The\n    element at index `I` is the index to be passed to [`SEQ-ELT-FN`][29c0] to\n    find the lump whose stripe `I` is copied to stripe `I` of this\n    lump.\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-UTILITIES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.5 Utilities\n\n\u003Ca id=\"x-28MGL-BP-3ARENORMALIZE-ACTIVATIONS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **RENORMALIZE-ACTIVATIONS** *->V\\*M-LUMPS L2-UPPER-BOUND*\n\n    If the l2 norm of the incoming weight vector of a unit is\n    larger than `L2-UPPER-BOUND` then renormalize it to `L2-UPPER-BOUND`.\n    The list of `->V*M-LUMPS` is assumed to be eventually fed to the same\n    lump.\n    \n    To use it, group the activation clumps into the same GD-OPTIMIZER\n    and hang this function on [`AFTER-UPDATE-HOOK`][124f], the latter of which is\n    done for you by [`ARRANGE-FOR-RENORMALIZING-ACTIVATIONS`][8b55].\n    \n    See \"Improving neural networks by preventing co-adaptation of\n    feature detectors (Hinton, 2012)\",\n    \u003Chttp:\u002F\u002Farxiv.org\u002Fpdf\u002F1207.0580.pdf>.\n\n\u003Ca id=\"x-28MGL-BP-3AARRANGE-FOR-RENORMALIZING-ACTIVATIONS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **ARRANGE-FOR-RENORMALIZING-ACTIVATIONS** *BPN OPTIMIZER L2-UPPER-BOUND*\n\n    By pushing a lambda to [`AFTER-UPDATE-HOOK`][124f] of `OPTIMIZER`, arrange for\n    all weights being trained by `OPTIMIZER` to be renormalized (as in\n    [`RENORMALIZE-ACTIVATIONS`][c7fa] with `L2-UPPER-BOUND`).\n    \n    It is assumed that the weights either belong to an activation\n    lump or are simply added to the activations (i.e. they are biases).
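\n    \n    A sketch (`BPN` and `OPTIMIZER` are made-up names for an existing\n    network and its gradient descent optimizer):\n    \n    ```common-lisp\n    ;; After each update, renormalize incoming weight vectors whose\n    ;; l2 norm exceeds 3.\n    (arrange-for-renormalizing-activations bpn optimizer 3)\n    ```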
\n\n\u003Ca id=\"x-28MGL-3A-40MGL-BM-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 12 Boltzmann Machines\n\n\n\u003Ca id=\"x-28MGL-3A-40MGL-GP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 13 Gaussian Processes\n\n\n\u003Ca id=\"x-28MGL-NLP-3A-40MGL-NLP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 14 Natural Language Processing\n\n###### \\[in package MGL-NLP\\]\nThis is nothing more than a couple of utilities for now, which may\ngrow into a more serious toolset for NLP eventually.\n\n\u003Ca id=\"x-28MGL-NLP-3AMAKE-N-GRAM-MAPPEE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **MAKE-N-GRAM-MAPPEE** *FUNCTION N*\n\n    Make a function of a single argument that's suitable as the\n    function argument to a mapper function. It calls `FUNCTION` with every\n    `N` consecutive elements.\n    \n    ```common-lisp\n    (map nil (make-n-gram-mappee #'print 3) '(a b c d e))\n    ..\n    .. (A B C) \n    .. (B C D) \n    .. (C D E) \n    ```\n\n\u003Ca id=\"x-28MGL-NLP-3ABLEU-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **BLEU** *CANDIDATES REFERENCES &KEY CANDIDATE-KEY REFERENCE-KEY (N 4)*\n\n    Compute the [BLEU score](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBLEU) for\n    a bilingual corpus. BLEU measures how good a translation is compared\n    to human reference translations.\n    \n    `CANDIDATES` (keyed by `CANDIDATE-KEY`) and `REFERENCES` (keyed by\n    `REFERENCE-KEY`) are sequences of sentences. A sentence is a sequence\n    of words. Words are compared with [`EQUAL`][3fb5], and may be any kind of\n    object (not necessarily strings).\n    \n    Currently there is no support for multiple reference translations. `N`\n    determines the largest n-grams to consider.\n    \n    The first return value is the `BLEU` score (between 0 and 1, not as a\n    percentage). The second value is the sum of the lengths of\n    `CANDIDATES` divided by the sum of the lengths of `REFERENCES` (or `NIL`,\n    if the denominator is 0). The third is a list of n-gram\n    precisions (also between 0 and 1 or `NIL`), one for each element in\n    \\[1..`N`\\].\n    \n    This is basically a reimplementation of\n    [multi-bleu.perl](https:\u002F\u002Fgithub.com\u002Fmoses-smt\u002Fmosesdecoder\u002Fblob\u002Fmaster\u002Fscripts\u002Fgeneric\u002Fmulti-bleu.perl).\n    \n    ```common-lisp\n    (bleu '((1 2 3 4) (a b))\n          '((1 2 3 4) (1 2)))\n    => 0.8408964\n    => 1\n    => (;; 1-gram precision: 4\u002F6\n        2\u002F3\n        ;; 2-gram precision: 3\u002F4\n        3\u002F4\n        ;; 3-gram precision: 2\u002F2\n        1\n        ;; 4-gram precision: 1\u002F1\n        1)\n    ```\n\n\u003Ca id=\"x-28MGL-NLP-3A-40MGL-NLP-BAG-OF-WORDS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 14.1 Bag of Words\n\n\u003Ca id=\"x-28MGL-NLP-3ABAG-OF-WORDS-ENCODER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **BAG-OF-WORDS-ENCODER**\n\n    [`ENCODE`][fedd] all features of a document with a sparse\n    vector. Get the features of a document from `MAPPER`, encode each\n    feature with [`FEATURE-ENCODER`][96d0]. `FEATURE-ENCODER` may return `NIL` if the\n    feature is not used. The result is a vector of encoded-feature\u002Fvalue\n    conses. 
encoded-features are unique (under [`ENCODED-FEATURE-TEST`][21ca])\n    within the vector but are in no particular order.\n    \n    Depending on `KIND`, the value is calculated in various ways:\n    \n    - For `:FREQUENCY` it is the number of times the corresponding feature\n      was found in `DOCUMENT`.\n    \n    - For `:BINARY` it is always 1.\n    \n    - `:NORMALIZED-FREQUENCY` and `:NORMALIZED-BINARY` are like their\n      unnormalized counterparts except that as the final step values in\n      the assembled sparse vector are normalized to sum to 1.\n    \n    - Finally, `:COMPACTED-BINARY` is like `:BINARY` but the return value\n      is not a vector of conses, but a vector with element-type\n      [`ENCODED-FEATURE-TYPE`][d443].\n    \n    ```common-lisp\n    (let* ((feature-indexer\n             (make-indexer\n              (alexandria:alist-hash-table '((\"I\" . 3) (\"me\" . 2) (\"mine\" . 1)))\n              2))\n           (bag-of-words-encoder\n             (make-instance 'bag-of-words-encoder\n                            :feature-encoder feature-indexer\n                            :feature-mapper (lambda (fn document)\n                                              (map nil fn document))\n                            :kind :frequency)))\n      (encode bag-of-words-encoder '(\"All\" \"through\" \"day\" \"I\" \"me\" \"mine\"\n                                     \"I\" \"me\" \"mine\" \"I\" \"me\" \"mine\")))\n    => #((0 . 3.0d0) (1 . 3.0d0))\n    ```\n\n\u003Ca id=\"x-28MGL-NLP-3AFEATURE-ENCODER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [reader] **FEATURE-ENCODER** *[BAG-OF-WORDS-ENCODER][cbb4] (:FEATURE-ENCODER)*\n\n\u003Ca id=\"x-28MGL-NLP-3AFEATURE-MAPPER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [reader] **FEATURE-MAPPER** *[BAG-OF-WORDS-ENCODER][cbb4] (:FEATURE-MAPPER)*\n\n\u003Ca id=\"x-28MGL-NLP-3AENCODED-FEATURE-TEST-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [reader] **ENCODED-FEATURE-TEST** *[BAG-OF-WORDS-ENCODER][cbb4] (:ENCODED-FEATURE-TEST = #'EQL)*\n\n\u003Ca id=\"x-28MGL-NLP-3AENCODED-FEATURE-TYPE-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [reader] **ENCODED-FEATURE-TYPE** *[BAG-OF-WORDS-ENCODER][cbb4] (:ENCODED-FEATURE-TYPE = T)*\n\n\u003Ca id=\"x-28MGL-NLP-3ABAG-OF-WORDS-KIND-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [reader] **BAG-OF-WORDS-KIND** *[BAG-OF-WORDS-ENCODER][cbb4] (:KIND = :BINARY)*\n\n\u003Ca id=\"x-28MGL-LOG-3A-40MGL-LOG-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 15 Logging\n\n###### \\[in package MGL-LOG\\]\n\u003Ca id=\"x-28MGL-LOG-3ALOG-MSG-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **LOG-MSG** *FORMAT &REST ARGS*\n\n\u003Ca id=\"x-28MGL-LOG-3AWITH-LOGGING-ENTRY-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [macro] **WITH-LOGGING-ENTRY** *(STREAM) &BODY BODY*\n\n\u003Ca id=\"x-28MGL-LOG-3A-2ALOG-FILE-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*LOG-FILE\\*** *NIL*\n\n\u003Ca id=\"x-28MGL-LOG-3A-2ALOG-TIME-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [variable] **\\*LOG-TIME\\*** *T*\n\n\u003Ca id=\"x-28MGL-LOG-3ALOG-MAT-ROOM-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **LOG-MAT-ROOM** *&KEY (VERBOSE T)*\n\n  [0072]: #x-28MGL-OPT-3AON-OPTIMIZATION-FINISHED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:ON-OPTIMIZATION-FINISHED (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [0078]: 
#x-28MGL-CORE-3AINSTANCE-TO-EXECUTOR-PARAMETERS-20GENERIC-FUNCTION-29 \"MGL-CORE:INSTANCE-TO-EXECUTOR-PARAMETERS GENERIC-FUNCTION\"\n  [00a0]: #x-28MGL-BP-3ABP-LEARNER-20CLASS-29 \"MGL-BP:BP-LEARNER CLASS\"\n  [00ee]: #x-28MGL-3A-40MGL-LINKS-20MGL-PAX-3ASECTION-29 \"Links\"\n  [011d]: #x-28MGL-GD-3AMEAN-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29 \"MGL-GD:MEAN-DECAY (MGL-PAX:ACCESSOR MGL-GD:ADAM-OPTIMIZER)\"\n  [019f]: #x-28MGL-COMMON-3ASIZE-20GENERIC-FUNCTION-29 \"MGL-COMMON:SIZE GENERIC-FUNCTION\"\n  [038e]: #x-28MGL-BP-3AIMPORTANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ELOSS-29-29 \"MGL-BP:IMPORTANCE (MGL-PAX:ACCESSOR MGL-BP:->LOSS)\"\n  [03c7]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_funcal.htm \"FUNCALL (MGL-PAX:CLHS FUNCTION)\"\n  [0784]: #x-28MGL-NLP-3A-40MGL-NLP-BAG-OF-WORDS-20MGL-PAX-3ASECTION-29 \"Bag of Words\"\n  [07c7]: #x-28MGL-CORE-3A-40MGL-CONFUSION-MATRIX-20MGL-PAX-3ASECTION-29 \"Confusion Matrices\"\n  [07fb]: #x-28MGL-CORE-3AN-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29 \"MGL-CORE:N-STRIPES (MGL-PAX:READER MGL-BP:BPN)\"\n  [084d]: #x-28MGL-BP-3AMAX-LAG-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:MAX-LAG (MGL-PAX:READER MGL-BP:RNN)\"\n  [08ac]: #x-28MGL-DATASET-3AGENERATOR-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29 \"MGL-DATASET:GENERATOR (MGL-PAX:READER MGL-DATASET:FUNCTION-SAMPLER)\"\n  [0900]: #x-28MGL-GD-3AVARIANCE-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29 \"MGL-GD:VARIANCE-DECAY (MGL-PAX:ACCESSOR MGL-GD:ADAM-OPTIMIZER)\"\n  [0933]: #x-28MGL-BP-3AMONITOR-BPN-RESULTS-20FUNCTION-29 \"MGL-BP:MONITOR-BPN-RESULTS FUNCTION\"\n  [09ed]: #x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 \"MGL-GD:LEARNING-RATE (MGL-PAX:ACCESSOR MGL-GD::GD-OPTIMIZER)\"\n  [0ba7]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MEASURER-20MGL-PAX-3ASECTION-29 \"Classification Measurers\"\n  [0c91]: #x-28MGL-GD-3A-40MGL-GD-NORMALIZED-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Normalized Batch Optimizer\"\n  [0c9e]: #x-28MGL-CORE-3ASET-INPUT-20GENERIC-FUNCTION-29 \"MGL-CORE:SET-INPUT GENERIC-FUNCTION\"\n  [0d6a]: #x-28MGL-NLP-3A-40MGL-NLP-20MGL-PAX-3ASECTION-29 \"Natural Language Processing\"\n  [0d82]: #x-28MGL-BP-3A-40MGL-BP-TRAINING-20MGL-PAX-3ASECTION-29 \"Training\"\n  [0f0f]: #x-28MGL-BP-3A--3EBATCH-NORMALIZED-ACTIVATION-20FUNCTION-29 \"MGL-BP:->BATCH-NORMALIZED-ACTIVATION FUNCTION\"\n  [0f83]: #x-28MGL-CORE-3ACONCAT-COUNTER-20CLASS-29 \"MGL-CORE:CONCAT-COUNTER CLASS\"\n  [109e]: #x-28MGL-DATASET-3A-40MGL-DATASET-20MGL-PAX-3ASECTION-29 \"Datasets\"\n  [10e7]: #x-28MGL-GD-3A-40MGL-GD-20MGL-PAX-3ASECTION-29 \"Gradient Descent\"\n  [115e]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_max_m.htm \"MIN (MGL-PAX:CLHS FUNCTION)\"\n  [124f]: #x-28MGL-GD-3AAFTER-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 \"MGL-GD:AFTER-UPDATE-HOOK (MGL-PAX:ACCESSOR MGL-GD::GD-OPTIMIZER)\"\n  [1339]: #x-28MGL-CORE-3ADECODE-20GENERIC-FUNCTION-29 \"MGL-CORE:DECODE GENERIC-FUNCTION\"\n  [1355]: #x-28MGL-BP-3A-40MGL-FNN-20MGL-PAX-3ASECTION-29 \"Feed-Forward Nets\"\n  [16c4]: #x-28MGL-CORE-3AMAX-N-STRIPES-20GENERIC-FUNCTION-29 \"MGL-CORE:MAX-N-STRIPES GENERIC-FUNCTION\"\n  [175f]: #x-28MGL-BP-3AFIND-CLUMP-20FUNCTION-29 \"MGL-BP:FIND-CLUMP FUNCTION\"\n  [1a52]: #x-28MGL-BP-3AINPUT-ROW-INDICES-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EEMBEDDING-29-29 \"MGL-BP:INPUT-ROW-INDICES (MGL-PAX:ACCESSOR MGL-BP:->EMBEDDING)\"\n  
[1a61]: #x-28MGL-DIFFUN-3ADIFFUN-20CLASS-29 \"MGL-DIFFUN:DIFFUN CLASS\"\n  [1b5e]: #x-28MGL-CORE-3A-40MGL-FEATURE-SELECTION-20MGL-PAX-3ASECTION-29 \"Feature Selection\"\n  [1beb]: #x-28MGL-CORE-3AENCODER-2FDECODER-20CLASS-29 \"MGL-CORE:ENCODER\u002FDECODER CLASS\"\n  [1cab]: #x-28MGL-DATASET-3AMAX-N-SAMPLES-20-28MGL-PAX-3AACCESSOR-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29 \"MGL-DATASET:MAX-N-SAMPLES (MGL-PAX:ACCESSOR MGL-DATASET:FUNCTION-SAMPLER)\"\n  [207b]: #x-28MGL-BP-3A-40MGL-BP-INPUTS-20MGL-PAX-3ASECTION-29 \"Inputs\"\n  [20ca]: #x-28MGL-OPT-3ADO-GRADIENT-SINK-20MGL-PAX-3AMACRO-29 \"MGL-OPT:DO-GRADIENT-SINK MGL-PAX:MACRO\"\n  [20e8]: #x-28MGL-CORE-3ACOUNTER-VALUES-20GENERIC-FUNCTION-29 \"MGL-CORE:COUNTER-VALUES GENERIC-FUNCTION\"\n  [2171]: #x-28MGL-BP-3A--3ELOSS-20CLASS-29 \"MGL-BP:->LOSS CLASS\"\n  [21ca]: #x-28MGL-NLP-3AENCODED-FEATURE-TEST-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29 \"MGL-NLP:ENCODED-FEATURE-TEST (MGL-PAX:READER MGL-NLP:BAG-OF-WORDS-ENCODER)\"\n  [2312]: #x-28MGL-OPT-3AMAP-SEGMENTS-20GENERIC-FUNCTION-29 \"MGL-OPT:MAP-SEGMENTS GENERIC-FUNCTION\"\n  [2481]: #x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EDROPOUT-29-29 \"MGL-BP:DROPOUT (MGL-PAX:ACCESSOR MGL-BP:->DROPOUT)\"\n  [24aa]: #x-28MGL-CORE-3A-40MGL-FEATURE-ENCODING-20MGL-PAX-3ASECTION-29 \"Feature Encoding\"\n  [25fd]: #x-28MGL-GD-3A-40MGL-GD-SGD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"SGD Optimizer\"\n  [2823]: #x-28MGL-BP-3A--3ELSTM-20FUNCTION-29 \"MGL-BP:->LSTM FUNCTION\"\n  [2981]: #x-28MGL-DIFFUN-3A-40MGL-DIFFUN-20MGL-PAX-3ASECTION-29 \"Differentiable Functions\"\n  [29a1]: #x-28MGL-CORE-3A-40MGL-PERSISTENCE-20MGL-PAX-3ASECTION-29 \"Persistence\"\n  [29c0]: #x-28MGL-BP-3ASEQ-ELT-FN-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESEQ-BARRIER-29-29 \"MGL-BP:SEQ-ELT-FN (MGL-PAX:READER MGL-BP:->SEQ-BARRIER)\"\n  [2a2f]: #x-28MGL-GD-3ASGD-OPTIMIZER-20CLASS-29 \"MGL-GD:SGD-OPTIMIZER CLASS\"\n  [2aa3]: #x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-2A-20GENERIC-FUNCTION-29 \"MGL-CORE:MAKE-CLASSIFICATION-ACCURACY-MONITORS* GENERIC-FUNCTION\"\n  [2c39]: #x-28MGL-GD-3A-40MGL-GD-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Batch Based Optimizers\"\n  [2e8b]: #x-28MGL-CORE-3AWITH-PADDED-ATTRIBUTE-PRINTING-20MGL-PAX-3AMACRO-29 \"MGL-CORE:WITH-PADDED-ATTRIBUTE-PRINTING MGL-PAX:MACRO\"\n  [2ecb]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_concat.htm \"CONCATENATE (MGL-PAX:CLHS FUNCTION)\"\n  [2f78]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_length.htm \"LENGTH (MGL-PAX:CLHS FUNCTION)\"\n  [2fe9]: #x-28MGL-BP-3A-40MGL-BP-ARITHMETIC-20MGL-PAX-3ASECTION-29 \"Arithmetic\"\n  [3045]: #x-28MGL-BP-3A-40MGL-BP-LUMP-20MGL-PAX-3ASECTION-29 \"Lump Base Class\"\n  [3155]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_countc.htm \"COUNT (MGL-PAX:CLHS FUNCTION)\"\n  [31ed]: #x-28MGL-CORE-3ALABEL-INDICES-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDICES GENERIC-FUNCTION\"\n  [331b]: #x-28MGL-CORE-3AMAKE-EXECUTOR-WITH-PARAMETERS-20GENERIC-FUNCTION-29 \"MGL-CORE:MAKE-EXECUTOR-WITH-PARAMETERS GENERIC-FUNCTION\"\n  [332e]: #x-28MGL-3A-40MGL-BM-20MGL-PAX-3ASECTION-29 \"Boltzmann Machines\"\n  [3815]: #x-28MGL-OPT-3AMAKE-COST-MONITORS-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:MAKE-COST-MONITORS* GENERIC-FUNCTION\"\n  [3ce0]: #x-28MGL-GD-3ASEGMENTED-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:SEGMENTED-GD-OPTIMIZER CLASS\"\n  [3f2e]: 
http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_pr_obj.htm \"PRINT-OBJECT (MGL-PAX:CLHS GENERIC-FUNCTION)\"\n  [3f42]: #x-28MGL-LOG-3A-40MGL-LOG-20MGL-PAX-3ASECTION-29 \"Logging\"\n  [3f9f]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CV-BAGGING-20MGL-PAX-3ASECTION-29 \"CV Bagging\"\n  [3fb5]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_equal.htm \"EQUAL (MGL-PAX:CLHS FUNCTION)\"\n  [401f]: #x-28MGL-DATASET-3AFINISHEDP-20GENERIC-FUNCTION-29 \"MGL-DATASET:FINISHEDP GENERIC-FUNCTION\"\n  [404c]: #x-28MGL-BP-3AVARIANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29 \"MGL-BP:VARIANCE (MGL-PAX:ACCESSOR MGL-BP:->GAUSSIAN-RANDOM)\"\n  [410c]: #x-28MGL-COMMON-3ACOST-20GENERIC-FUNCTION-29 \"MGL-COMMON:COST GENERIC-FUNCTION\"\n  [418a]: #x-28MGL-OPT-3ASEGMENT-SET-20CLASS-29 \"MGL-OPT:SEGMENT-SET CLASS\"\n  [430d]: #x-28MGL-CORE-3ACLASSIFICATION-ACCURACY-COUNTER-20CLASS-29 \"MGL-CORE:CLASSIFICATION-ACCURACY-COUNTER CLASS\"\n  [4336]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002F03_da.htm '\"3.4.1\" (MGL-PAX:CLHS MGL-PAX:SECTION)'\n  [441b]: #x-28MGL-BP-3A--3EDROPOUT-20CLASS-29 \"MGL-BP:->DROPOUT CLASS\"\n  [443c]: #x-28MGL-3A-40MGL-CODE-ORGANIZATION-20MGL-PAX-3ASECTION-29 \"Code Organization\"\n  [4476]: #x-28MGL-CORE-3A-40MGL-EXECUTORS-20MGL-PAX-3ASECTION-29 \"Executors\"\n  [4528]: #x-28MGL-OPT-3AMONITOR-OPTIMIZATION-PERIODICALLY-20FUNCTION-29 \"MGL-OPT:MONITOR-OPTIMIZATION-PERIODICALLY FUNCTION\"\n  [46a4]: #x-28MGL-OPT-3AMINIMIZE-20FUNCTION-29 \"MGL-OPT:MINIMIZE FUNCTION\"\n  [46c2]: #x-28MGL-OPT-3AMAKE-COST-MONITORS-20FUNCTION-29 \"MGL-OPT:MAKE-COST-MONITORS FUNCTION\"\n  [46c4]: #x-28MGL-BP-3APOPULATION-DECAY-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-BP:POPULATION-DECAY (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [49f5]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Fs_let_l.htm \"LET* (MGL-PAX:CLHS MGL-PAX:MACRO)\"\n  [4a8e]: #x-28MGL-3A-40MGL-GLOSSARY-20MGL-PAX-3ASECTION-29 \"Glossary\"\n  [4bf1]: #x-28MGL-OPT-3AACCUMULATE-GRADIENTS-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:ACCUMULATE-GRADIENTS* GENERIC-FUNCTION\"\n  [4c73]: #x-28MGL-OPT-3AN-INSTANCES-20-28MGL-PAX-3AREADER-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:N-INSTANCES (MGL-PAX:READER MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [4e91]: #x-28MGL-BP-3A--3ESEQ-BARRIER-20CLASS-29 \"MGL-BP:->SEQ-BARRIER CLASS\"\n  [4f0b]: #x-28MGL-OPT-3AON-N-INSTANCES-CHANGED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:ON-N-INSTANCES-CHANGED (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [4f0e]: #x-28MGL-BP-3A-40MGL-BP-MONITORING-20MGL-PAX-3ASECTION-29 \"Monitoring\"\n  [4ffb]: #x-28MGL-CG-3ACG-20FUNCTION-29 \"MGL-CG:CG FUNCTION\"\n  [5187]: #x-28MGL-BP-3ABPN-20CLASS-29 \"MGL-BP:BPN CLASS\"\n  [51d5]: #x-28MGL-BP-3AWARP-LENGTH-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:WARP-LENGTH (MGL-PAX:READER MGL-BP:RNN)\"\n  [51f7]: #x-28MGL-BP-3A-40MGL-BP-RNN-OPERATIONS-20MGL-PAX-3ASECTION-29 \"Operations for `RNN`s\"\n  [5293]: #x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FCONT-20FUNCTION-29 \"MGL-RESAMPLE:SPLIT-FOLD\u002FCONT FUNCTION\"\n  [5309]: #x-28MGL-BP-3A--3ETANH-20CLASS-29 \"MGL-BP:->TANH CLASS\"\n  [533e]: #x-28MGL-BP-3ATRANSPOSE-WEIGHTS-P-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29 \"MGL-BP:TRANSPOSE-WEIGHTS-P (MGL-PAX:READER MGL-BP:->V*M)\"\n  [5611]: #x-28MGL-GD-3AMOMENTUM-TYPE-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 
\"MGL-GD:MOMENTUM-TYPE (MGL-PAX:READER MGL-GD::GD-OPTIMIZER)\"\n  [56b2]: #x-28MGL-BP-3A-40MGL-BP-OVERVIEW-20MGL-PAX-3ASECTION-29 \"Backprop Overview\"\n  [5748]: #x-28MGL-OPT-3A-40MGL-OPT-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Implementing Optimizers\"\n  [5752]: #x-28MGL-CORE-3ACOUNTER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29 \"MGL-CORE:COUNTER (MGL-PAX:READER MGL-CORE:MONITOR)\"\n  [5842]: #x-28MGL-COMMON-3ANAME-20GENERIC-FUNCTION-29 \"MGL-COMMON:NAME GENERIC-FUNCTION\"\n  [5979]: #x-28MGL-CORE-3ABASIC-COUNTER-20CLASS-29 \"MGL-CORE:BASIC-COUNTER CLASS\"\n  [59c2]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-MISC-20MGL-PAX-3ASECTION-29 \"Miscellaneous Operations\"\n  [59dd]: #x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-29-29 \"MGL-COMMON:GROUP-SIZE (MGL-PAX:READER MGL-BP:->MAX)\"\n  [5a43]: #x-28MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:PER-WEIGHT-BATCH-GD-OPTIMIZER CLASS\"\n  [5a82]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_eq.htm \"EQ (MGL-PAX:CLHS FUNCTION)\"\n  [5bd4]: #x-28MGL-BP-3ABACKWARD-20GENERIC-FUNCTION-29 \"MGL-BP:BACKWARD GENERIC-FUNCTION\"\n  [5cd8]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_numera.htm \"DENOMINATOR (MGL-PAX:CLHS FUNCTION)\"\n  [5d86]: #x-28MGL-BP-3A-40MGL-BP-ACTIVATION-FUNCTIONS-20MGL-PAX-3ASECTION-29 \"Activation Functions\"\n  [5ded]: #x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FMOD-20FUNCTION-29 \"MGL-RESAMPLE:SPLIT-FOLD\u002FMOD FUNCTION\"\n  [5fd4]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_eql.htm \"EQL (MGL-PAX:CLHS TYPE)\"\n  [5fdc]: #x-28MGL-CORE-3AMAP-BATCHES-FOR-MODEL-20FUNCTION-29 \"MGL-CORE:MAP-BATCHES-FOR-MODEL FUNCTION\"\n  [6004]: #x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-20FUNCTION-29 \"MGL-CORE:MAKE-CROSS-ENTROPY-MONITORS FUNCTION\"\n  [6021]: #x-28MGL-BP-3A--3EMAX-CHANNEL-20CLASS-29 \"MGL-BP:->MAX-CHANNEL CLASS\"\n  [606c]: #x-28MGL-BP-3ABUILD-FNN-20MGL-PAX-3AMACRO-29 \"MGL-BP:BUILD-FNN MGL-PAX:MACRO\"\n  [6098]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_vector.htm \"VECTOR (MGL-PAX:CLHS CLASS)\"\n  [60b3]: #x-28MGL-3A-40MGL-GP-20MGL-PAX-3ASECTION-29 \"Gaussian Processes\"\n  [60d2]: #x-28MGL-CORE-3ACONFUSION-MATRIX-20CLASS-29 \"MGL-CORE:CONFUSION-MATRIX CLASS\"\n  [60e3]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-20MGL-PAX-3ASECTION-29 \"Classification\"\n  [6202]: #x-28MGL-CORE-3AMONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ABP-LEARNER-29-29 \"MGL-CORE:MONITORS (MGL-PAX:ACCESSOR MGL-BP:BP-LEARNER)\"\n  [627a]: #x-28MGL-RESAMPLE-3AFRACTURE-STRATIFIED-20FUNCTION-29 \"MGL-RESAMPLE:FRACTURE-STRATIFIED FUNCTION\"\n  [62de]: #x-28MGL-CORE-3AADD-TO-COUNTER-20GENERIC-FUNCTION-29 \"MGL-CORE:ADD-TO-COUNTER GENERIC-FUNCTION\"\n  [6547]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_open.htm \"OPEN (MGL-PAX:CLHS FUNCTION)\"\n  [6598]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-COUNTER-20MGL-PAX-3ASECTION-29 \"Classification Counters\"\n  [6651]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_descri.htm \"DESCRIBE (MGL-PAX:CLHS FUNCTION)\"\n  [676d]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_wr_pr.htm \"PRINC (MGL-PAX:CLHS FUNCTION)\"\n  [6872]: #x-28MGL-BP-3A-40MGL-BP-WEIGHT-LUMP-20MGL-PAX-3ASECTION-29 \"Weight Lump\"\n  [6a6f]: #x-28MGL-OPT-3A-40MGL-OPT-EXTENSION-API-20MGL-PAX-3ASECTION-29 \"Extension API\"\n  [6b38]: 
#x-28MGL-BP-3A-40MGL-FNN-TUTORIAL-20MGL-PAX-3ASECTION-29 \"`FNN` Tutorial\"\n  [6bd7]: #x-28MGL-CORE-3ALOAD-STATE-20FUNCTION-29 \"MGL-CORE:LOAD-STATE FUNCTION\"\n  [6d31]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_vector.htm \"VECTOR (MGL-PAX:CLHS FUNCTION)\"\n  [6d9f]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_list_.htm \"LIST (MGL-PAX:CLHS FUNCTION)\"\n  [6da5]: #x-28MGL-CORE-3A-40MGL-ATTRIBUTES-20MGL-PAX-3ASECTION-29 \"Attributes\"\n  [6e96]: #x-28MGL-BP-3ATIME-STEP-20FUNCTION-29 \"MGL-BP:TIME-STEP FUNCTION\"\n  [6f82]: #x-28MGL-RESAMPLE-3AFRACTURE-20FUNCTION-29 \"MGL-RESAMPLE:FRACTURE FUNCTION\"\n  [7068]: #x-28MGL-CORE-3AMONITOR-20CLASS-29 \"MGL-CORE:MONITOR CLASS\"\n  [715c]: #x-28MGL-DATASET-3AFUNCTION-SAMPLER-20CLASS-29 \"MGL-DATASET:FUNCTION-SAMPLER CLASS\"\n  [7162]: #x-28MGL-BP-3A--3EACTIVATION-20CLASS-29 \"MGL-BP:->ACTIVATION CLASS\"\n  [71f9]: #x-28MGL-BP-3ASTEP-MONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29 \"MGL-BP:STEP-MONITORS (MGL-PAX:ACCESSOR MGL-BP:RNN)\"\n  [764b]: #x-28MGL-BP-3ABUILD-RNN-20MGL-PAX-3AMACRO-29 \"MGL-BP:BUILD-RNN MGL-PAX:MACRO\"\n  [765c]: #x-28MGL-DATASET-3AMAP-DATASETS-20FUNCTION-29 \"MGL-DATASET:MAP-DATASETS FUNCTION\"\n  [779d]: #x-28MGL-OPT-3A-40MGL-OPT-ITERATIVE-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Iterative Optimizer\"\n  [7960]: #x-28MGL-BP-3ASHIFT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-BP:SHIFT (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [79d8]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_list.htm \"LIST (MGL-PAX:CLHS CLASS)\"\n  [7a28]: #x-28MGL-BP-3A-40MGL-BP-EXTENSION-API-20MGL-PAX-3ASECTION-29 \"Clump API\"\n  [7bc3]: #x-28MGL-DATASET-3A-40MGL-SAMPLER-20MGL-PAX-3ASECTION-29 \"Samplers\"\n  [7c2f]: #x-28MGL-OPT-3AINITIALIZE-OPTIMIZER-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:INITIALIZE-OPTIMIZER* GENERIC-FUNCTION\"\n  [7ee3]: #x-28MGL-CORE-3A-40MGL-COUNTER-CLASSES-20MGL-PAX-3ASECTION-29 \"Counter classes\"\n  [80e2]: #x-28MGL-BP-3AVARIANCE-FOR-PREDICTION-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29 \"MGL-BP:VARIANCE-FOR-PREDICTION (MGL-PAX:ACCESSOR MGL-BP:->GAUSSIAN-RANDOM)\"\n  [80fa]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_mod_r.htm \"MOD (MGL-PAX:CLHS FUNCTION)\"\n  [8148]: #x-28MGL-CORE-3AREAD-STATE-20FUNCTION-29 \"MGL-CORE:READ-STATE FUNCTION\"\n  [82d8]: #x-28MGL-BP-3AADD-CLUMP-20FUNCTION-29 \"MGL-BP:ADD-CLUMP FUNCTION\"\n  [83e6]: #x-28MGL-CG-3A-40MGL-CG-20MGL-PAX-3ASECTION-29 \"Conjugate Gradient\"\n  [83f9]: #x-28MGL-BP-3A--3ESIGMOID-20CLASS-29 \"MGL-BP:->SIGMOID CLASS\"\n  [85d3]: #x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29 \"MGL-COMMON:SIZE (MGL-PAX:READER MGL-BP:LUMP)\"\n  [85d34]: #x-28MGL-BP-3A--3ESOFTMAX-XE-LOSS-20CLASS-29 \"MGL-BP:->SOFTMAX-XE-LOSS CLASS\"\n  [8611]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-SHUFFLING-20MGL-PAX-3ASECTION-29 \"Shuffling\"\n  [86fd]: #x-28MGL-RESAMPLE-3ASAMPLE-FROM-20FUNCTION-29 \"MGL-RESAMPLE:SAMPLE-FROM FUNCTION\"\n  [871e]: #x-28MGL-BP-3A-40MGL-RNN-20MGL-PAX-3ASECTION-29 \"Recurrent Neural Nets\"\n  [876d]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_ensu_1.htm \"ENSURE-DIRECTORIES-EXIST (MGL-PAX:CLHS FUNCTION)\"\n  [8788]: #x-28MGL-BP-3A-40MGL-BP-20MGL-PAX-3ASECTION-29 \"Backpropagation Neural Networks\"\n  [8970]: #x-28MGL-COMMON-3ASCALE-20GENERIC-FUNCTION-29 \"MGL-COMMON:SCALE GENERIC-FUNCTION\"\n  [8ae0]: 
http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_identi.htm \"IDENTITY (MGL-PAX:CLHS FUNCTION)\"\n  [8af5]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_numera.htm \"NUMERATOR (MGL-PAX:CLHS FUNCTION)\"\n  [8b55]: #x-28MGL-BP-3AARRANGE-FOR-RENORMALIZING-ACTIVATIONS-20FUNCTION-29 \"MGL-BP:ARRANGE-FOR-RENORMALIZING-ACTIVATIONS FUNCTION\"\n  [8cb8]: #x-28MGL-RESAMPLE-3ASPLIT-STRATIFIED-20FUNCTION-29 \"MGL-RESAMPLE:SPLIT-STRATIFIED FUNCTION\"\n  [8da0]: #x-28MGL-OPT-3AITERATIVE-OPTIMIZER-20CLASS-29 \"MGL-OPT:ITERATIVE-OPTIMIZER CLASS\"\n  [8dd7]: #x-28MGL-CORE-3AN-STRIPES-20GENERIC-FUNCTION-29 \"MGL-CORE:N-STRIPES GENERIC-FUNCTION\"\n  [8e53]: #x-28MGL-BP-3AUNFOLDER-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:UNFOLDER (MGL-PAX:READER MGL-BP:RNN)\"\n  [8f37]: #x-28MGL-CORE-3AMONITORS-20GENERIC-FUNCTION-29 \"MGL-CORE:MONITORS GENERIC-FUNCTION\"\n  [9006]: #x-28MGL-OPT-3ATERMINATION-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:TERMINATION (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [9105]: #x-28MGL-BP-3A-40MGL-BP-ACTIVATIONS-20MGL-PAX-3ASECTION-29 \"Activations\"\n  [911c]: #x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-20FUNCTION-29 \"MGL-CORE:MAKE-CLASSIFICATION-ACCURACY-MONITORS FUNCTION\"\n  [9192]: #x-28MGL-3A-40MGL-OVERVIEW-20MGL-PAX-3ASECTION-29 \"Overview\"\n  [91a3]: #x-28MGL-CORE-3AMAX-N-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29 \"MGL-CORE:MAX-N-STRIPES (MGL-PAX:READER MGL-BP:BPN)\"\n  [91f3]: #x-28MGL-BP-3A-40MGL-BP-UTILITIES-20MGL-PAX-3ASECTION-29 \"Utilities\"\n  [9385]: #x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTIONS-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDEX-DISTRIBUTIONS GENERIC-FUNCTION\"\n  [93a7]: #x-28MGL-BP-3A-40MGL-BP-LOSSES-20MGL-PAX-3ASECTION-29 \"Losses\"\n  [9524]: #x-28MGL-RESAMPLE-3ACROSS-VALIDATE-20FUNCTION-29 \"MGL-RESAMPLE:CROSS-VALIDATE FUNCTION\"\n  [95fe]: #x-28MGL-CORE-3AWRITE-STATE-20FUNCTION-29 \"MGL-CORE:WRITE-STATE FUNCTION\"\n  [9641]: #x-28MGL-BP-3A-40MGL-BP-LUMPS-20MGL-PAX-3ASECTION-29 \"Lumps\"\n  [96d0]: #x-28MGL-NLP-3AFEATURE-ENCODER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29 \"MGL-NLP:FEATURE-ENCODER (MGL-PAX:READER MGL-NLP:BAG-OF-WORDS-ENCODER)\"\n  [9700]: #x-28MGL-BP-3A-40MGL-RNN-TUTORIAL-20MGL-PAX-3ASECTION-29 \"`RNN` Tutorial\"\n  [9715]: #x-28MGL-CORE-3AATTRIBUTED-20CLASS-29 \"MGL-CORE:ATTRIBUTED CLASS\"\n  [9749]: #x-28MGL-CG-3ACG-ARGS-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29 \"MGL-CG:CG-ARGS (MGL-PAX:ACCESSOR MGL-CG:CG-OPTIMIZER)\"\n  [989a]: #x-28MGL-GD-3A-40MGL-GD-SEGMENTED-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Segmented GD Optimizer\"\n  [989c]: #x-28MGL-CORE-3AAPPLY-MONITORS-20FUNCTION-29 \"MGL-CORE:APPLY-MONITORS FUNCTION\"\n  [993b]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_sinh_.htm \"TANH (MGL-PAX:CLHS FUNCTION)\"\n  [9a5b]: #x-28MGL-OPT-3ASEGMENT-DERIVATIVES-20GENERIC-FUNCTION-29 \"MGL-OPT:SEGMENT-DERIVATIVES GENERIC-FUNCTION\"\n  [9a84]: #x-28MGL-BP-3A--3EMIN-20CLASS-29 \"MGL-BP:->MIN CLASS\"\n  [9d3a]: #x-28MGL-BP-3A--3ERELU-20CLASS-29 \"MGL-BP:->RELU CLASS\"\n  [9da9]: #x-28MGL-BP-3A--3EBATCH-NORMALIZED-20CLASS-29 \"MGL-BP:->BATCH-NORMALIZED CLASS\"\n  [9de4]: #x-28MGL-BP-3AFNN-20CLASS-29 \"MGL-BP:FNN CLASS\"\n  [a077]: #x-28MGL-CORE-3ACOUNTER-20GENERIC-FUNCTION-29 \"MGL-CORE:COUNTER GENERIC-FUNCTION\"\n  [a138]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Fm_setf_.htm \"SETF (MGL-PAX:CLHS 
MGL-PAX:MACRO)\"\n  [a210]: #x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SINK-20MGL-PAX-3ASECTION-29 \"Implementing Gradient Sinks\"\n  [a39b]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-20MGL-PAX-3ASECTION-29 \"Resampling\"\n  [a437]: #x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29 \"MGL-COMMON:GROUP-SIZE (MGL-PAX:READER MGL-BP:->SOFTMAX-XE-LOSS)\"\n  [a4fe]: #x-28MGL-BP-3ACLUMP-20CLASS-29 \"MGL-BP:CLUMP CLASS\"\n  [a541]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_wr_to_.htm \"PRINC-TO-STRING (MGL-PAX:CLHS FUNCTION)\"\n  [a81b]: #x-28MGL-BP-3ADERIVATIVES-20GENERIC-FUNCTION-29 \"MGL-BP:DERIVATIVES GENERIC-FUNCTION\"\n  [a884]: #x-28MGL-GD-3A-40MGL-GD-PER-WEIGHT-OPTIMIZATION-20MGL-PAX-3ASECTION-29 \"Per-weight Optimization\"\n  [aa2e]: #x-28MGL-BP-3A-40MGL-BP-STOCHASTICITY-20MGL-PAX-3ASECTION-29 \"Stochasticity\"\n  [aa86]: #x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-GD:VARIANCE-ADJUSTMENT (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [aabd]: #x-28MGL-OPT-3AMAP-GRADIENT-SINK-20GENERIC-FUNCTION-29 \"MGL-OPT:MAP-GRADIENT-SINK GENERIC-FUNCTION\"\n  [ab3c]: #x-28MGL-COMMON-3AWEIGHTS-20GENERIC-FUNCTION-29 \"MGL-COMMON:WEIGHTS GENERIC-FUNCTION\"\n  [ad8f]: #x-28MGL-DATASET-3A-2AINFINITELY-EMPTY-DATASET-2A-20VARIABLE-29 \"MGL-DATASET:*INFINITELY-EMPTY-DATASET* VARIABLE\"\n  [ada2]: #x-28MGL-CORE-3A-40MGL-PARAMETERIZED-EXECUTOR-CACHE-20MGL-PAX-3ASECTION-29 \"Parameterized Executor Cache\"\n  [ae23]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_seq.htm \"SEQUENCE (MGL-PAX:CLHS CLASS)\"\n  [ae3d]: #x-28MGL-OPT-3AMINIMIZE-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:MINIMIZE* GENERIC-FUNCTION\"\n  [aee6]: #x-28MGL-RESAMPLE-3ASAMPLE-STRATIFIED-20FUNCTION-29 \"MGL-RESAMPLE:SAMPLE-STRATIFIED FUNCTION\"\n  [af05]: #x-28MGL-GD-3AMOMENTUM-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 \"MGL-GD:MOMENTUM (MGL-PAX:ACCESSOR MGL-GD::GD-OPTIMIZER)\"\n  [af6b]: #x-28MGL-GD-3ACLIP-L2-NORM-20FUNCTION-29 \"MGL-GD:CLIP-L2-NORM FUNCTION\"\n  [b01b]: #x-28MGL-CORE-3AMAP-OVER-EXECUTORS-20GENERIC-FUNCTION-29 \"MGL-CORE:MAP-OVER-EXECUTORS GENERIC-FUNCTION\"\n  [b0f3]: #x-28MGL-BP-3ARNN-20CLASS-29 \"MGL-BP:RNN CLASS\"\n  [b186]: #x-28MGL-CORE-3ACROSS-ENTROPY-COUNTER-20CLASS-29 \"MGL-CORE:CROSS-ENTROPY-COUNTER CLASS\"\n  [b5c7]: #x-28MGL-COMMON-3ATARGET-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29 \"MGL-COMMON:TARGET (MGL-PAX:ACCESSOR MGL-BP:->SOFTMAX-XE-LOSS)\"\n  [b602]: #x-28MGL-BP-3A--3EACTIVATION-20FUNCTION-29 \"MGL-BP:->ACTIVATION FUNCTION\"\n  [b647]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-BAGGING-20MGL-PAX-3ASECTION-29 \"Bagging\"\n  [b76f]: #x-28MGL-BP-3A--3EWEIGHT-20CLASS-29 \"MGL-BP:->WEIGHT CLASS\"\n  [ba91]: #x-28MGL-RESAMPLE-3ASTRATIFY-20FUNCTION-29 \"MGL-RESAMPLE:STRATIFY FUNCTION\"\n  [bbdf]: #x-28MGL-CORE-3AAPPLY-MONITOR-20GENERIC-FUNCTION-29 \"MGL-CORE:APPLY-MONITOR GENERIC-FUNCTION\"\n  [bc8c]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_exp_e.htm \"EXP (MGL-PAX:CLHS FUNCTION)\"\n  [bd13]: #x-28MGL-GD-3A-40MGL-GD-ADAM-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Adam Optimizer\"\n  [bdf9]: #x-28MGL-DATASET-3AN-SAMPLES-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29 \"MGL-DATASET:N-SAMPLES (MGL-PAX:READER MGL-DATASET:FUNCTION-SAMPLER)\"\n  [be8d]: #x-28MGL-DATASET-3A-40MGL-SAMPLER-FUNCTION-SAMPLER-20MGL-PAX-3ASECTION-29 \"Function Sampler\"\n  [be95]: 
#x-28MGL-CORE-3A-40MGL-COUNTER-20MGL-PAX-3ASECTION-29 \"Counters\"\n  [c102]: #x-28MGL-CORE-3ASAVE-STATE-20FUNCTION-29 \"MGL-CORE:SAVE-STATE FUNCTION\"\n  [c1ac]: #x-28MGL-BP-3ALUMP-20CLASS-29 \"MGL-BP:LUMP CLASS\"\n  [c1ae]: #x-28MGL-BP-3AFORWARD-20GENERIC-FUNCTION-29 \"MGL-BP:FORWARD GENERIC-FUNCTION\"\n  [c40e]: #x-28MGL-GD-3A-40MGL-GD-UTILITIES-20MGL-PAX-3ASECTION-29 \"Utilities\"\n  [c469]: #x-28MGL-BP-3A--3EBATCH-NORMALIZATION-20CLASS-29 \"MGL-BP:->BATCH-NORMALIZATION CLASS\"\n  [c573]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MONITOR-20MGL-PAX-3ASECTION-29 \"Classification Monitors\"\n  [c58b]: #x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SOURCE-20MGL-PAX-3ASECTION-29 \"Implementing Gradient Sources\"\n  [c701]: #x-28MGL-CORE-3A-40MGL-MONITOR-20MGL-PAX-3ASECTION-29 \"Monitors\"\n  [c74a]: #x-28MGL-OPT-3A-40MGL-OPT-20MGL-PAX-3ASECTION-29 \"Gradient Based Optimization\"\n  [c7fa]: #x-28MGL-BP-3ARENORMALIZE-ACTIVATIONS-20FUNCTION-29 \"MGL-BP:RENORMALIZE-ACTIVATIONS FUNCTION\"\n  [c8db]: #x-28MGL-CORE-3A-40MGL-FEATURES-20MGL-PAX-3ASECTION-29 \"Features\"\n  [c918]: #x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-COMMON:BATCH-SIZE (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [ca09]: #x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20GENERIC-FUNCTION-29 \"MGL-OPT:RESET-OPTIMIZATION-MONITORS GENERIC-FUNCTION\"\n  [caec]: #x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTION-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDEX-DISTRIBUTION GENERIC-FUNCTION\"\n  [cbb4]: #x-28MGL-NLP-3ABAG-OF-WORDS-ENCODER-20CLASS-29 \"MGL-NLP:BAG-OF-WORDS-ENCODER CLASS\"\n  [cc1c]: #x-28MGL-COMMON-3ANODES-20GENERIC-FUNCTION-29 \"MGL-COMMON:NODES GENERIC-FUNCTION\"\n  [cc37]: #x-28MGL-CORE-3AATTRIBUTES-20-28MGL-PAX-3AACCESSOR-20MGL-CORE-3AATTRIBUTED-29-29 \"MGL-CORE:ATTRIBUTES (MGL-PAX:ACCESSOR MGL-CORE:ATTRIBUTED)\"\n  [cc80]: #x-28MGL-CORE-3ALABEL-INDEX-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDEX GENERIC-FUNCTION\"\n  [cd3b]: #x-28MGL-CORE-3A-40MGL-MEASURER-20MGL-PAX-3ASECTION-29 \"Measurers\"\n  [cee6]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_symb_5.htm \"SYMBOL-VALUE (MGL-PAX:CLHS FUNCTION)\"\n  [d0e3]: #x-28MGL-BP-3A-40MGL-RNN-TIME-WARP-20MGL-PAX-3ASECTION-29 \"Time Warp\"\n  [d10a]: #x-28MGL-CG-3AON-CG-BATCH-DONE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29 \"MGL-CG:ON-CG-BATCH-DONE (MGL-PAX:ACCESSOR MGL-CG:CG-OPTIMIZER)\"\n  [d1e0]: #x-28MGL-BP-3A-40MGL-BPN-20MGL-PAX-3ASECTION-29 \"`BPN`s\"\n  [d3b2]: #x-28MGL-CORE-3APARAMETERIZED-EXECUTOR-CACHE-MIXIN-20CLASS-29 \"MGL-CORE:PARAMETERIZED-EXECUTOR-CACHE-MIXIN CLASS\"\n  [d443]: #x-28MGL-NLP-3AENCODED-FEATURE-TYPE-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29 \"MGL-NLP:ENCODED-FEATURE-TYPE (MGL-PAX:READER MGL-NLP:BAG-OF-WORDS-ENCODER)\"\n  [d479]: #x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20-28METHOD-20-28MGL-OPT-3AITERATIVE-OPTIMIZER-20T-29-29-29 \"MGL-OPT:RESET-OPTIMIZATION-MONITORS (METHOD (MGL-OPT:ITERATIVE-OPTIMIZER T))\"\n  [d699]: #x-28MGL-COMMON-3ANODES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29 \"MGL-COMMON:NODES (MGL-PAX:READER MGL-BP:LUMP)\"\n  [d6e0]: #x-28MGL-BP-3AWARP-START-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:WARP-START (MGL-PAX:READER MGL-BP:RNN)\"\n  [d811]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_apply.htm \"APPLY (MGL-PAX:CLHS FUNCTION)\"\n  [d94e]: #x-28MGL-GD-3ABATCH-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:BATCH-GD-OPTIMIZER CLASS\"\n  [d96a]: 
#x-28MGL-BP-3AMEAN-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29 \"MGL-BP:MEAN (MGL-PAX:ACCESSOR MGL-BP:->GAUSSIAN-RANDOM)\"\n  [db03]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_eql.htm \"EQL (MGL-PAX:CLHS FUNCTION)\"\n  [dbc4]: #x-28MGL-BP-3A--3EV-2AM-20CLASS-29 \"MGL-BP:->V*M CLASS\"\n  [dd95]: #x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:INITIALIZE-GRADIENT-SOURCE* GENERIC-FUNCTION\"\n  [e0e6]: #x-28MGL-GD-3AADAM-OPTIMIZER-20CLASS-29 \"MGL-GD:ADAM-OPTIMIZER CLASS\"\n  [e198]: #x-28MGL-COMMON-3A-40MGL-COMMON-20MGL-PAX-3ASECTION-29 \"Common Stuff\"\n  [e46f]: #x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-2A-20GENERIC-FUNCTION-29 \"MGL-CORE:MAKE-CROSS-ENTROPY-MONITORS* GENERIC-FUNCTION\"\n  [e4dd]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Fs_multip.htm \"MULTIPLE-VALUE-CALL (MGL-PAX:CLHS MGL-PAX:MACRO)\"\n  [e50c]: #x-28MGL-CORE-3AMONITOR-MODEL-RESULTS-20FUNCTION-29 \"MGL-CORE:MONITOR-MODEL-RESULTS FUNCTION\"\n  [e668]: #x-28MGL-CORE-3A-40MGL-MONITORING-20MGL-PAX-3ASECTION-29 \"Monitoring\"\n  [e746]: #x-28MGL-OPT-3A-40MGL-OPT-COST-20MGL-PAX-3ASECTION-29 \"Cost Function\"\n  [e7ea]: #x-28MGL-3A-40MGL-DEPENDENCIES-20MGL-PAX-3ASECTION-29 \"Dependencies\"\n  [e7f6]: #x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EINPUT-29-29 \"MGL-BP:DROPOUT (MGL-PAX:ACCESSOR MGL-BP:->INPUT)\"\n  [e8d2]: #x-28MGL-BP-3A--3ESQUARED-DIFFERENCE-20CLASS-29 \"MGL-BP:->SQUARED-DIFFERENCE CLASS\"\n  [ea7d]: #x-28MGL-LOG-3ALOG-MAT-ROOM-20FUNCTION-29 \"MGL-LOG:LOG-MAT-ROOM FUNCTION\"\n  [eaf1]: #x-28MGL-BP-3ABATCH-NORMALIZATION-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZED-29-29 \"MGL-BP:BATCH-NORMALIZATION (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZED)\"\n  [eb05]: #x-28MGL-CORE-3AMEASURER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29 \"MGL-CORE:MEASURER (MGL-PAX:READER MGL-CORE:MONITOR)\"\n  [ebd4]: #x-28MGL-OPT-3AON-OPTIMIZATION-STARTED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:ON-OPTIMIZATION-STARTED (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [ec8b]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_zerop.htm \"ZEROP (MGL-PAX:CLHS FUNCTION)\"\n  [ece2]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_sin_c.htm \"SIN (MGL-PAX:CLHS FUNCTION)\"\n  [ed4f]: #x-28MGL-BP-3A-2AWARP-TIME-2A-20VARIABLE-29 \"MGL-BP:*WARP-TIME* VARIABLE\"\n  [edcf]: #x-28MGL-BP-3A--3ESUM-20CLASS-29 \"MGL-BP:->SUM CLASS\"\n  [ee86]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_mod.htm \"MOD (MGL-PAX:CLHS TYPE)\"\n  [ee97]: #x-28MGL-CG-3ACG-OPTIMIZER-20CLASS-29 \"MGL-CG:CG-OPTIMIZER CLASS\"\n  [f00d]: #x-28MGL-OPT-3ASEGMENTS-20GENERIC-FUNCTION-29 \"MGL-OPT:SEGMENTS GENERIC-FUNCTION\"\n  [f17b]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CROSS-VALIDATION-20MGL-PAX-3ASECTION-29 \"Cross-validation\"\n  [f1c1]: #x-28MGL-BP-3A--3EEMBEDDING-20CLASS-29 \"MGL-BP:->EMBEDDING CLASS\"\n  [f257]: #x-28MGL-CORE-3A-40MGL-CORE-20MGL-PAX-3ASECTION-29 \"Core\"\n  [f491]: #x-28MGL-COMMON-3AFN-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29 \"MGL-COMMON:FN (MGL-PAX:READER MGL-DIFFUN:DIFFUN)\"\n  [f54e]: #x-28MGL-BP-3A--3EINPUT-20CLASS-29 \"MGL-BP:->INPUT CLASS\"\n  [f573]: #x-28MGL-BP-3ACUDA-WINDOW-START-TIME-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29 \"MGL-BP:CUDA-WINDOW-START-TIME (MGL-PAX:ACCESSOR MGL-BP:RNN)\"\n  [f652]: #x-28MGL-BP-3A--3EMAX-20CLASS-29 
\"MGL-BP:->MAX CLASS\"\n  [f6ae]: #x-28MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:NORMALIZED-BATCH-GD-OPTIMIZER CLASS\"\n  [f790]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-PARTITIONS-20MGL-PAX-3ASECTION-29 \"Partitions\"\n  [f7aa]: #x-28MGL-3A-40MGL-INTRODUCTION-20MGL-PAX-3ASECTION-29 \"Introduction\"\n  [f7c1]: #x-28MGL-BP-3ACLUMPS-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29 \"MGL-BP:CLUMPS (MGL-PAX:READER MGL-BP:BPN)\"\n  [f85e]: #x-28MGL-LOG-3ALOG-MSG-20FUNCTION-29 \"MGL-LOG:LOG-MSG FUNCTION\"\n  [f956]: #x-28MGL-DATASET-3ASAMPLE-20GENERIC-FUNCTION-29 \"MGL-DATASET:SAMPLE GENERIC-FUNCTION\"\n  [f98e]: #x-28MGL-CORE-3ADO-EXECUTORS-20MGL-PAX-3AMACRO-29 \"MGL-CORE:DO-EXECUTORS MGL-PAX:MACRO\"\n  [fa6d]: #x-28MGL-COMMON-3ABATCH-SIZE-20GENERIC-FUNCTION-29 \"MGL-COMMON:BATCH-SIZE GENERIC-FUNCTION\"\n  [faaa]: #x-28MGL-CORE-3ADO-BATCHES-FOR-MODEL-20MGL-PAX-3AMACRO-29 \"MGL-CORE:DO-BATCHES-FOR-MODEL MGL-PAX:MACRO\"\n  [feaa]: #x-28MGL-BP-3A--3EGAUSSIAN-RANDOM-20CLASS-29 \"MGL-BP:->GAUSSIAN-RANDOM CLASS\"\n  [fedd]: #x-28MGL-CORE-3AENCODE-20GENERIC-FUNCTION-29 \"MGL-CORE:ENCODE GENERIC-FUNCTION\"\n  [ff5a]: #x-28MGL-BP-3ALAG-20FUNCTION-29 \"MGL-BP:LAG FUNCTION\"\n  [ff82]: #x-28MGL-CORE-3A-40MGL-MODEL-STRIPE-20MGL-PAX-3ASECTION-29 \"Batch Processing\"\n\n* * *\n###### \\[generated by [MGL-PAX](https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl-pax)\\]\n","\u003Ca id=\"x-28MGL-3A-40MGL-MANUAL-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n# MGL 手册\n\n## 目录\n\n- [1 介绍][f7aa]\n    - [1.1 概述][9192]\n    - [1.2 链接][00ee]\n    - [1.3 依赖][e7ea]\n    - [1.4 代码组织][443c]\n    - [1.5 术语表][4a8e]\n- [2 常用内容][e198]\n- [3 数据集][109e]\n    - [3.1 抽样器][7bc3]\n        - [3.1.1 函数抽样器][be8d]\n- [4 重采样][a39b]\n    - [4.1 打乱顺序][8611]\n    - [4.2 划分][f790]\n    - [4.3 交叉验证][f17b]\n    - [4.4 装袋法][b647]\n    - [4.5 CV 装袋法][3f9f]\n    - [4.6 其他操作][59c2]\n- [5 核心][f257]\n    - [5.1 持久化][29a1]\n    - [5.2 批处理][ff82]\n    - [5.3 执行器][4476]\n        - [5.3.1 参数化执行器缓存][ada2]\n- [6 监控][e668]\n    - [6.1 监控器][c701]\n    - [6.2 测量器][cd3b]\n    - [6.3 计数器][be95]\n        - [6.3.1 属性][6da5]\n        - [6.3.2 计数器类][7ee3]\n- [7 分类][60e3]\n    - [7.1 分类监控器][c573]\n    - [7.2 分类测量器][0ba7]\n    - [7.3 分类计数器][6598]\n        - [7.3.1 混淆矩阵][07c7]\n- [8 特征][c8db]\n    - [8.1 特征选择][1b5e]\n    - [8.2 特征编码][24aa]\n- [9 基于梯度的优化][c74a]\n    - [9.1 迭代优化器][779d]\n    - [9.2 损失函数][e746]\n    - [9.3 梯度下降][10e7]\n        - [9.3.1 基于批处理的优化器][2c39]\n        - [9.3.2 分段式梯度下降优化器][989a]\n        - [9.3.3 按权重优化][a884]\n        - [9.3.4 工具函数][c40e]\n    - [9.4 共轭梯度][83e6]\n    - [9.5 扩展 API][6a6f]\n        - [9.5.1 实现优化器][5748]\n        - [9.5.2 实现梯度源][c58b]\n        - [9.5.3 实现梯度汇][a210]\n- [10 可微函数][2981]\n- [11 反向传播神经网络][8788]\n    - [11.1 反向传播概述][56b2]\n    - [11.2 Clump API][7a28]\n    - [11.3 `BPN`s][d1e0]\n        - [11.3.1 训练][0d82]\n        - [11.3.2 监控][4f0e]\n        - [11.3.3 前馈网络][1355]\n        - [11.3.4 循环神经网络][871e]\n    - [11.4 Lumps][9641]\n        - [11.4.1 Lump 基类][3045]\n        - [11.4.2 输入][207b]\n        - [11.4.3 权重 Lump][6872]\n        - [11.4.4 激活函数][9105]\n        - [11.4.5 激活函数][5d86]\n        - [11.4.6 损失][93a7]\n        - [11.4.7 随机性][aa2e]\n        - [11.4.8 算术][2fe9]\n        - [11.4.9 用于 `RNN`s 的操作][51f7]\n    - [11.5 工具函数][91f3]\n- [12 玻尔兹曼机][332e]\n- [13 高斯过程][60b3]\n- [14 自然语言处理][0d6a]\n    - [14.1 词袋模型][0784]\n- [15 日志记录][3f42]\n\n###### \\[在 MGL 包中\\]\n\u003Ca id=\"x-28-22mgl-22-20ASDF-2FSYSTEM-3ASYSTEM-29\">\u003C\u002Fa>\n\n- [system] **\"mgl\"**\n    - _版本:_ 0.1.0\n    - _描述:_ `MGL` 
是一个用于反向传播神经网络、玻尔兹曼机、高斯过程等的机器学习库。\n    - _许可证:_ MIT，详见 COPYING。\n    - _作者:_ Gábor Melis \u003Cmega@retes.hu>\n    - _邮箱:_ [mega@retes.hu](mailto:mega@retes.hu)\n    - _主页:_ [http:\u002F\u002Fmelisgl.github.io\u002Fmgl](http:\u002F\u002Fmelisgl.github.io\u002Fmgl)\n    - _问题跟踪器:_ [https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues](https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues)\n    - _源码控制:_ [GIT](https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl.git)\n    - *依赖于:* alexandria, array-operations, cl-reexport, closer-mop, lla, mgl-gnuplot, mgl-mat, mgl-pax, named-readtables, num-utils, pythonic-string-reader, swank\n\n\u003Ca id=\"x-28MGL-3A-40MGL-INTRODUCTION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 1 介绍\n\n\u003Ca id=\"x-28MGL-3A-40MGL-OVERVIEW-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 1.1 概述\n\nMGL 是由 [Gábor Melis](http:\u002F\u002Fquotenil.com) 开发的 Common Lisp 机器学习库，其中部分代码最初由 Ravenpack International 贡献。它主要专注于各种形式的神经网络（玻尔兹曼机、前馈和循环反向传播网络）。MGL 的大部分功能都建立在 MGL-MAT 之上，因此支持 BLAS 和 CUDA。\n\n总体而言，MGL 的重点在于功能的强大和性能的高效，而非易用性。也许在未来，如果能在功能与易用性之间找到合理的平衡，就会出现一个功能受限但易于使用的标准化接口。\n\n\u003Ca id=\"x-28MGL-3A-40MGL-LINKS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 1.2 链接\n\n以下是最新版本的 [官方仓库](https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl) 和 [HTML 文档](http:\u002F\u002Fmelisgl.github.io\u002Fmgl-pax-world\u002Fmgl-manual.html)。\n\n\u003Ca id=\"x-28MGL-3A-40MGL-DEPENDENCIES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 1.3 依赖\n\nMGL 曾经依赖 [LLA](https:\u002F\u002Fgithub.com\u002Ftpapp\u002Flla) 来与 BLAS 和 LAPACK 接口。如今这已经基本成为历史，但对外部库的配置仍然通过 LLA 完成。请参阅 LLA 的 README 文件以了解如何进行设置。需要注意的是，现在 OpenBLAS 的安装更加简单，且速度与 ATLAS 不相上下。\n\n[CL-CUDA](https:\u002F\u002Fgithub.com\u002Ftakagi\u002Fcl-cuda) 和 [MGL-MAT](https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl) 是两个主要的依赖项，同时也是尚未加入 Quicklisp 的库，因此只需将它们放入 `quicklisp\u002Flocal-projects\u002F` 目录即可。如果系统中没有合适的 GPU 或未安装 CUDA SDK，MGL 将自动回退到使用 BLAS 和 Lisp 代码。只需将代码包裹在 `MGL-MAT:WITH-CUDA*` 中，即可在 GPU 上运行；而通过 `MGL-MAT:CUDA-AVAILABLE-P` 可以检查是否正在使用 GPU。\n\n\u003Ca id=\"x-28MGL-3A-40MGL-CODE-ORGANIZATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 1.4 代码组织\n\nMGL 由多个专门用于不同任务的包组成。例如，`MGL-RESAMPLE` 包负责 [重采样][a39b]，而 `MGL-GD` 包则负责 [梯度下降][10e7]，以此类推。一方面，多个包有助于清晰地分离 API 和实现，并便于深入研究特定任务。另一方面，过多的包也可能带来不便，因此 `MGL` 包本身会重新导出构成 MGL 和 MGL-MAT（参见 MAT 手册）的所有其他包中的所有外部符号，因为 MGL 严重依赖于 MGL-MAT。\n\n不过，捆绑在一起但独立的 MGL-GNUPLOT 库是一个例外。\n\n内置测试可以通过以下方式运行：\n\n    (ASDF:OOS 'ASDF:TEST-OP '#:MGL)\n\n请注意，大多数测试具有一定的随机性，偶尔可能会失败。\n\n\u003Ca id=\"x-28MGL-3A-40MGL-GLOSSARY-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 1.5 术语表\n\n归根结底，机器学习就是为某个领域创建 **模型**。模型中观察到的数据称为 **实例**（也称为示例或样本）。实例的集合被称为 **数据集**。数据集用于拟合模型或进行 **预测**。有时“预测”一词过于具体，因此将模型应用于某些实例后得到的结果通常被称为 **结果**。\n\n\u003Ca id=\"x-28MGL-COMMON-3A-40MGL-COMMON-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 2 常用内容\n\n###### \\[在包 MGL-COMMON 中\\]\n\u003Ca id=\"x-28MGL-COMMON-3ANAME-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **NAME** *OBJECT*\n\n\u003Ca id=\"x-28MGL-COMMON-3ANAME-3D-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **NAME=** *X Y*\n\n    如果 X 和 Y 是 `EQL`([`0`][db03] [`1`][5fd4])，或者它们是元素为[`EQUAL`][3fb5]的结构化组件，则返回 `T`。字符串和位向量如果长度相同且各成分完全一致，则被认为是 `EQUAL`。其他数组只有当它们是[`EQ`][5a82]时才被视为 `EQUAL`。\n\n\u003Ca id=\"x-28MGL-COMMON-3ASIZE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SIZE** *OBJECT*\n\n\u003Ca id=\"x-28MGL-COMMON-3ANODES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **NODES** *OBJECT*\n\n    返回一个 `MGL-MAT:MAT` 对象，表示 `OBJECT` 的状态或结果。返回矩阵的第一维等于条带的数量。\n\n\u003Ca 
id=\"x-28MGL-COMMON-3ADEFAULT-VALUE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **DEFAULT-VALUE** *OBJECT*\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **GROUP-SIZE** *OBJECT*\n\n\u003Ca id=\"x-28MGL-COMMON-3ABATCH-SIZE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **BATCH-SIZE** *OBJECT*\n\n\u003Ca id=\"x-28MGL-COMMON-3AWEIGHTS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **WEIGHTS** *OBJECT*\n\n\u003Ca id=\"x-28MGL-COMMON-3ASCALE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SCALE** *OBJECT*\n\n\u003Ca id=\"x-28MGL-DATASET-3A-40MGL-DATASET-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 3 数据集\n\n###### \\[在包 MGL-DATASET 中\\]\n实例通常是用户选择的任何类型的对象。它通常由一组数字表示，称为特征向量，或者由包含特征向量、标签等信息的结构表示。数据集是由这样的实例组成的[`SEQUENCE`][ae23]，或者是产生实例的[Samplers][7bc3]对象。\n\n\u003Ca id=\"x-28MGL-DATASET-3AMAP-DATASET-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAP-DATASET** *FN DATASET*\n\n    对 `DATASET` 中的每个实例调用 `FN`。这基本上等同于遍历序列或采样器（参见[Samplers][7bc3]）的元素。\n\n\u003Ca id=\"x-28MGL-DATASET-3AMAP-DATASETS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAP-DATASETS** *FN DATASETS &KEY (IMPUTE NIL IMPUTEP)*\n\n    对 `DATASETS` 中每个数据集的一个实例列表调用 `FN`。不返回任何值。如果指定了 `IMPUTE`，则会一直迭代到最大的数据集被消耗完，并用 `IMPUTE` 来填补缺失值。如果没有指定 `IMPUTE`，则迭代直到最小的数据集耗尽为止。\n    \n    ```common-lisp\n    (map-datasets #'prin1 '((0 1 2) (:a :b)))\n    .. (0 :A)(1 :B)\n    \n    (map-datasets #'prin1 '((0 1 2) (:a :b)) :impute nil)\n    .. (0 :A)(1 :B)(2 NIL)\n    ```\n    \n    当然也可以将序列与采样器混合使用：\n    \n    ```common-lisp\n    (map-datasets #'prin1\n                  (list '(0 1 2)\n                        (make-sequence-sampler '(:a :b) :max-n-samples 2)))\n    .. (0 :A)(1 :B)\n    ```\n\n\u003Ca id=\"x-28MGL-DATASET-3A-40MGL-SAMPLER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 3.1 采样器\n\n有些算法不需要对整个数据集进行随机访问，而是可以通过流式观察来工作。采样器是简单的生成器，提供两个函数：[`SAMPLE`][f956] 和 [`FINISHEDP`][401f]。\n\n\u003Ca id=\"x-28MGL-DATASET-3ASAMPLE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SAMPLE** *SAMPLER*\n\n    如果 `SAMPLER` 还没有用完数据（参见[`FINISHEDP`][401f]），`SAMPLE` 会返回一个代表来自世界样本的对象，也就是可以用于训练或预测的输入。如果 `SAMPLER` 已经 `FINISHEDP`，则不允许调用 `SAMPLE`。\n\n\u003Ca id=\"x-28MGL-DATASET-3AFINISHEDP-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **FINISHEDP** *SAMPLER*\n\n    检查 `SAMPLER` 是否已经用完了所有样本。\n\n\u003Ca id=\"x-28MGL-DATASET-3ALIST-SAMPLES-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **LIST-SAMPLES** *SAMPLER MAX-SIZE*\n\n    返回最多 `MAX-SIZE` 长度的样本列表，如果 `SAMPLER` 提前用完，则返回更少的样本。\n\n\u003Ca id=\"x-28MGL-DATASET-3AMAKE-SEQUENCE-SAMPLER-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-SEQUENCE-SAMPLER** *SEQ &KEY MAX-N-SAMPLES*\n\n    创建一个按原始顺序返回 `SEQ` 元素的采样器。如果 `MAX-N-SAMPLES` 不为零，则最多抽取 `MAX-N-SAMPLES` 个样本。\n\n\u003Ca id=\"x-28MGL-DATASET-3AMAKE-RANDOM-SAMPLER-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-RANDOM-SAMPLER** *SEQ &KEY MAX-N-SAMPLES (REORDER \\#'MGL-RESAMPLE:SHUFFLE)*\n\n    创建一个以随机顺序返回 `SEQ` 元素的采样器。如果 `MAX-N-SAMPLES` 不为零，则最多抽取 `MAX-N-SAMPLES` 个样本。采样器首先遍历一次打乱顺序的 `SEQ` 副本，每当采样器到达副本末尾时，就会再次打乱顺序。打乱顺序是通过调用 `REORDER` 函数来完成的。\n\n\u003Ca id=\"x-28MGL-DATASET-3A-2AINFINITELY-EMPTY-DATASET-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*INFINITELY-EMPTY-DATASET\\*** *\\#\\\u003CFUNCTION-SAMPLER \"infinitely empty\" >*\n\n    这是[`MGL-OPT:MINIMIZE`][46a4]的默认数据集。它是一个无限的 `NIL` 流。\n\n\u003Ca id=\"x-28MGL-DATASET-3A-40MGL-SAMPLER-FUNCTION-SAMPLER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 3.1.1 函数采样器\n\n\u003Ca id=\"x-28MGL-DATASET-3AFUNCTION-SAMPLER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **FUNCTION-SAMPLER**\n\n   
 一种带有函数作为其[`GENERATOR`][08ac]的采样器，该函数会产生一个可能有限也可能无限的样本流，具体取决于[`MAX-N-SAMPLES`][1cab]。[`FINISHEDP`][401f]会在 `MAX-N-SAMPLES` 非 `NIL` 且不大于已生成的样本数量（[`N-SAMPLES`][bdf9]）时返回 `T`。\n    \n        (list-samples (make-instance 'function-sampler\n                                     :generator (lambda ()\n                                                  (random 10))\n                                     :max-n-samples 5)\n                      10)\n        => (3 5 2 3 3)\n\n\u003Ca id=\"x-28MGL-DATASET-3AGENERATOR-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29\">\u003C\u002Fa>\n\n- [读取器] **GENERATOR** *[FUNCTION-SAMPLER][715c] (:GENERATOR)*\n\n    一个无参数的生成函数，用于返回下一个样本。\n\n\u003Ca id=\"x-28MGL-DATASET-3AMAX-N-SAMPLES-20-28MGL-PAX-3AACCESSOR-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29\">\u003C\u002Fa>\n\n- [访问器] **MAX-N-SAMPLES** *[FUNCTION-SAMPLER][715c] (:MAX-N-SAMPLES = NIL)*\n\n\u003Ca id=\"x-28MGL-COMMON-3ANAME-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29\">\u003C\u002Fa>\n\n- [读取器] **NAME** *[FUNCTION-SAMPLER][715c] (:NAME = NIL)*\n\n    一个任意对象，用于命名采样器。仅用于打印采样器对象。\n\n\u003Ca id=\"x-28MGL-DATASET-3AN-SAMPLES-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29\">\u003C\u002Fa>\n\n- [读取器] **N-SAMPLES** *[FUNCTION-SAMPLER][715c] (:N-SAMPLES = 0)*\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 4 重采样\n\n###### \\[在 MGL-RESAMPLE 包中\\]\n本包的核心是重采样方法，例如交叉验证和自助法，这些方法可用于模型评估、模型选择，也可作为一种简单的集成学习方式。此外，还提供了数据划分和采样函数，因为它们通常与重采样一起使用。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-SHUFFLING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 4.1 打乱顺序\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASHUFFLE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SHUFFLE** *SEQ*\n\n    复制 `SEQ` 并使用费雪-耶茨算法对其进行打乱。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASHUFFLE-21-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SHUFFLE!** *SEQ*\n\n    使用费雪-耶茨算法对 `SEQ` 进行原地打乱。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-PARTITIONS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 4.2 划分\n\n以下函数将数据集（目前仅支持 [`SEQUENCE`][ae23]）划分为若干个子集。原始数据集中每个元素恰好属于其中一个子集。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3AFRACTURE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **FRACTURE** *FRACTIONS SEQ &KEY WEIGHT*\n\n    将 `SEQ` 划分为若干个子序列。`FRACTIONS` 可以是正整数，也可以是非负实数列表。`WEIGHT` 可以是 `NIL`，或者是一个函数，当以 `SEQ` 中的元素作为参数调用时，返回非负实数。如果 `FRACTIONS` 是一个正整数，则返回该数量的子序列列表，各子序列的权重之和大致相等（可能因舍入误差而略有偏差）；否则，按照 `FRACTIONS` 中各元素的比例来划分 `SEQ` 的子序列，即第 I 个子序列的权重之和与 `FRACTIONS` 中的第 I 个元素成正比。若 `WEIGHT` 为 `NIL`，则假定所有元素的权重相同。\n\n    例如，要将数据划分为 5 个序列：\n\n    ```common-lisp\n    (fracture 5 '(0 1 2 3 4 5 6 7 8 9))\n    => ((0 1) (2 3) (4 5) (6 7) (8 9))\n    ```\n\n    要将数据划分为两个长度比例为 2:3 的序列：\n\n    ```common-lisp\n    (fracture '(2 3) '(0 1 2 3 4 5 6 7 8 9))\n    => ((0 1 2 3) (4 5 6 7 8 9))\n    ```\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASTRATIFY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **STRATIFY** *SEQ &KEY (KEY \\#'IDENTITY) (TEST \\#'EQL)*\n\n    返回 `SEQ` 的分层列表。`SEQ` 是一个元素序列，其中每个元素通过函数 `KEY` 来确定其所属的类别。这些类别是不可见的对象，通过 `TEST` 函数进行比较以判断是否相等。一个分层是由具有相同（在 `TEST` 下）`KEY` 值的元素组成的序列。\n\n    例如：\n\n    ```common-lisp\n    (stratify '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)\n    => ((0 2 4 6 8) (1 3 5 7 9))\n    ```\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3AFRACTURE-STRATIFIED-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **FRACTURE-STRATIFIED** *FRACTIONS SEQ &KEY (KEY \\#'IDENTITY) (TEST \\#'EQL) WEIGHT*\n\n    类似于 [`FRACTURE`][6f82]，但同时确保各个类别的样本在各个子集中均匀分布（参见 [`STRATIFY`][ba91]）。这在分类任务中非常有用，可以在保持各类别分布不变的情况下对数据集进行划分。\n\n    注意，返回的集合并不是随机顺序的，实际上它们会根据 `KEY` 值进行内部排序。\n\n    
例如，要将数据划分为两个子集，使偶数和奇数的数量大致相等：\n\n    ```common-lisp\n    (fracture-stratified 2 '(0 1 2 3 4 5 6 7 8 9) :key #'evenp)\n    => ((0 2 1 3) (4 6 8 5 7 9))\n    ```\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CROSS-VALIDATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 4.3 交叉验证\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ACROSS-VALIDATE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **CROSS-VALIDATE** *DATA FN &KEY (N-FOLDS 5) (FOLDS (ALEXANDRIA:IOTA N-FOLDS)) (SPLIT-FN \\#'SPLIT-FOLD\u002FMOD) PASS-FOLD*\n\n    将 `FN` 映射到由 `SPLIT-FN` 划分的 `DATA` 的各个折中，并将结果收集到一个列表中。最简单的示例是：\n\n    ```common-lisp\n    (cross-validate '(0 1 2 3 4)\n                    (lambda (test training)\n                     (list test training))\n                    :n-folds 5)\n    => (((0) (1 2 3 4))\n        ((1) (0 2 3 4))\n        ((2) (0 1 3 4))\n        ((3) (0 1 2 4))\n        ((4) (0 1 2 3)))\n    ```\n\n    当然，在实际应用中，人们通常会训练模型，并返回训练好的模型和\u002F或其在 `TEST` 上的得分。此外，有时也可能只执行部分折，并记录具体是哪些折：\n\n    ```common-lisp\n    (cross-validate '(0 1 2 3 4)\n                    (lambda (fold test training)\n                     (list :fold fold test training))\n                    :folds '(2 3)\n                    :pass-fold t)\n    => ((:fold 2 (2) (0 1 3 4))\n        (:fold 3 (3) (0 1 2 4)))\n    ```\n\n    最后，还可以自定义数据的划分方式。默认情况下，会调用 [`SPLIT-FOLD\u002FMOD`][5ded]，传入 `DATA`、当前折（来自 `FOLDS`）以及折数 `N-FOLDS` 作为参数。`SPLIT-FOLD\u002FMOD` 返回两个值，然后传递给 `FN`。用户也可以使用 [`SPLIT-FOLD\u002FCONT`][5293] 或 [`SPLIT-STRATIFIED`][8cb8]，或其他接受这些参数的函数。唯一的限制是，`FN` 必须接收与 `SPLIT-FN` 返回值数量相同的参数（如果设置了 `PASS-FOLD`，则还需额外接收当前折的编号作为参数）。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FMOD-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SPLIT-FOLD\u002FMOD** *SEQ FOLD N-FOLDS*\n\n    将 `SEQ` 划分为两个序列：一个包含索引除以 `N-FOLDS` 后余数为 `FOLD` 的元素，另一个包含其余元素。第二个序列通常是较大的那个。元素的相对顺序保持不变。此函数适合作为 [`CROSS-VALIDATE`][9524] 的 `SPLIT-FN` 参数。\n
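\n一个最小示意（结果可由上述定义直接推得）：索引对 `N-FOLDS` 取余等于 `FOLD` 的元素构成第一个返回值：\n\n```common-lisp\n;; 6 个元素、3 折、取编号为 1 的折：索引 1 和 4 的元素归入第一个返回值。\n(split-fold\u002Fmod '(0 1 2 3 4 5) 1 3)\n;; => (1 4)\n;;    (0 2 3 5)\n```\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FCONT-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SPLIT-FOLD\u002FCONT** *SEQ FOLD N-FOLDS*\n\n    想象将 `SEQ` 划分成 `N-FOLDS` 个大小相同（可能因四舍五入而略有差异）的子序列。将索引为 `FOLD` 的子序列作为第一个返回值，其余子序列拼接成一个整体作为第二个返回值。元素的相对顺序保持不变。此函数适合作为 [`CROSS-VALIDATE`][9524] 的 `SPLIT-FN` 参数。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASPLIT-STRATIFIED-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SPLIT-STRATIFIED** *SEQ FOLD N-FOLDS &KEY (KEY \\#'IDENTITY) (TEST \\#'EQL) WEIGHT*\n\n    将 `SEQ` 划分为 `N-FOLDS` 个子集（类似于 [`FRACTURE-STRATIFIED`][627a]）。将索引为 `FOLD` 的子集作为第一个返回值，其余子集拼接成一个整体作为第二个返回值。此函数适合作为 [`CROSS-VALIDATE`][9524] 的 `SPLIT-FN` 参数（通常以闭包形式使用，绑定 `KEY`、`TEST` 和 `WEIGHT` 参数）。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-BAGGING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 4.4 装袋法\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ABAG-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **BAG** *SEQ FN &KEY (RATIO 1) N WEIGHT (REPLACEMENT T) KEY (TEST \\#'EQL) (RANDOM-STATE \\*RANDOM-STATE\\*)*\n\n    使用[`SAMPLE-FROM`][86fd]（传递 `RATIO`、`WEIGHT`、`REPLACEMENT`）从 `SEQ` 中采样，如果 `KEY` 不为 `NIL`，则使用[`SAMPLE-STRATIFIED`][aee6]。然后调用 `FN` 函数处理采样结果。如果 `N` 为 `NIL`，则会不断重复此过程，直到 `FN` 执行非局部退出。否则，`N` 必须是一个非负整数，表示要执行的迭代次数，`FN` 返回的主要值将被收集到一个列表中并返回。示例请参见 `SAMPLE-FROM` 和 `SAMPLE-STRATIFIED`。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASAMPLE-FROM-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SAMPLE-FROM** *RATIO SEQ &KEY WEIGHT REPLACEMENT (RANDOM-STATE \\*RANDOM-STATE\\*)*\n\n    从 `SEQ` 中有放回或无放回地进行采样，返回一个新的序列。结果序列中元素的权重总和大约是 `SEQ` 中权重总和乘以 `RATIO`。如果 `WEIGHT` 为 `NIL`，则假定所有元素具有相等的权重；否则，`WEIGHT` 应在被调用时返回一个非负实数，该实数对应于 `SEQ` 中某个元素的权重。\n    \n    随机选择一半元素：\n    \n    ```common-lisp\n    (sample-from 1\u002F2 '(0 1 2 3 4 5))\n  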
  => (5 3 2)\n    ```\n    \n    随机选择一些元素，使其权重之和约为整个序列权重之和的一半：\n    \n    ```common-lisp\n    (sample-from 1\u002F2 '(0 1 2 3 4 5 6 7 8 9) :weight #'identity)\n    => ;; 权重之和为 28，约为 45\u002F2\n       (9 4 1 6 8)\n    ```\n    \n    进行有放回采样（即允许同一个元素被多次采样）：\n    \n    ```common-lisp\n    (sample-from 1 '(0 1 2 3 4 5) :replacement t)\n    => (1 1 5 1 4 4)\n    ```\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASAMPLE-STRATIFIED-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SAMPLE-STRATIFIED** *RATIO SEQ &KEY WEIGHT REPLACEMENT (KEY \\#'IDENTITY) (TEST \\#'EQL) (RANDOM-STATE \\*RANDOM-STATE\\*)*\n\n    类似于[`SAMPLE-FROM`][86fd]，但确保结果中各类别的加权比例与 `SEQ` 中的比例大致相同。关于 `KEY` 和 `TEST` 的说明，请参阅[`STRATIFY`][ba91]。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CV-BAGGING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 4.5 CV 装袋法\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ABAG-CV-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **BAG-CV** *DATA FN &KEY N (N-FOLDS 5) (FOLDS (ALEXANDRIA:IOTA N-FOLDS)) (SPLIT-FN \\#'SPLIT-FOLD\u002FMOD) PASS-FOLD (RANDOM-STATE \\*RANDOM-STATE\\*)*\n\n    对 `DATA` 的不同随机排列进行交叉验证 `N` 次，并收集结果。由于[`CROSS-VALIDATE`][9524] 会收集 `FN` 的返回值，因此本函数的返回值是一个包含多个 `FN` 结果列表的列表。如果 `N` 为 `NIL`，则不收集任何结果，而是持续进行交叉验证，直到 `FN` 执行非局部退出。\n    \n    下面的例子简单地收集了 2 折交叉验证中测试集和训练集的结果，共重复 3 次：\n    \n    ```common-lisp\n    ;;; 这是非确定性的。\n    (bag-cv '(0 1 2 3 4) #'list :n 3 :n-folds 2)\n    => ((((2 3 4) (1 0))\n         ((1 0) (2 3 4)))\n        (((2 1 0) (4 3))\n         ((4 3) (2 1 0)))\n        (((1 0 3) (2 4))\n         ((2 4) (1 0 3))))\n    ```\n    \n    当单次交叉验证无法产生稳定结果时，CV 装袋法就显得非常有用。作为一种集成方法，CV 装袋法相比普通装袋法的优势在于：每个样本都会出现相同的次数，并且在第一次交叉验证完成后，每个样本都会有一个完整但不太可靠的估计值，随后通过进一步的交叉验证逐步完善。\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-MISC-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 4.6 其他操作\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3ASPREAD-STRATA-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SPREAD-STRATA** *SEQ &KEY (KEY \\#'IDENTITY) (TEST \\#'EQL)*\n\n    返回一个重新排序后的 `SEQ` 序列，使得属于不同层（根据 `KEY` 和 `TEST`，参见 [`STRATIFY`][ba91]）的元素均匀分布。同一层内的元素顺序保持不变。\n    \n    例如，为了使偶数和奇数均匀分布：\n    \n    ```common-lisp\n    (spread-strata '(0 2 4 6 8 1 3 5 7 9) :key #'evenp)\n    => (0 1 2 3 4 5 6 7 8 9)\n    ```\n    \n    对于类别不平衡的情况也适用：\n    \n    ```common-lisp\n    (spread-strata (vector 0 2 3 5 6 1 4)\n                   :key (lambda (x)\n                          (if (member x '(1 4))\n                              t\n                              nil)))\n    => #(0 1 2 3 4 5 6)\n    ```\n\n\u003Ca id=\"x-28MGL-RESAMPLE-3AZIP-EVENLY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **ZIP-EVENLY** *SEQS &KEY RESULT-TYPE*\n\n    将 `SEQS` 中的多个序列合并成一个单一的序列，使得来自同一源序列的元素索引在整个序列中均匀分布。如果 `RESULT-TYPE` 是 `LIST`（[`0`][79d8] [`1`][6d9f]），则返回一个列表；如果 `RESULT-TYPE` 是 `VECTOR`（[`0`][6098] [`1`][6d31]），则返回一个向量。如果 `RESULT-TYPE` 为 `NIL`，则由 `SEQS` 中第一个序列的类型决定。\n    \n    ```common-lisp\n    (zip-evenly '((0 2 4) (1 3)))\n    => (0 1 2 3 4)\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CORE-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 5 核心\n\n###### \\[在 MGL-CORE 包中\\]\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-PERSISTENCE-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 5.1 持久化\n\n\u003Ca id=\"x-28MGL-CORE-3ALOAD-STATE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **LOAD-STATE** *FILENAME OBJECT*\n\n    从 `FILENAME` 加载 `OBJECT` 的权重。返回 `OBJECT`。\n\n\u003Ca id=\"x-28MGL-CORE-3ASAVE-STATE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SAVE-STATE** *FILENAME OBJECT &KEY (IF-EXISTS :ERROR) (ENSURE T)*\n\n    将 `OBJECT` 的权重保存到 `FILENAME`。如果 `ENSURE` 为真，则会对 `FILENAME` 调用 [`ENSURE-DIRECTORIES-EXIST`][876d]。`IF-EXISTS` 会被传递给 [`OPEN`][6547]。返回 `OBJECT`。\n
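\n下面是一个保存并恢复权重的最小示意（假设 `bpn` 是一个已构建好的网络对象；文件路径仅为示例）：\n\n```common-lisp\n;; 将已训练的权重写入文件（:if-exists :supersede 允许覆盖同名旧文件），\n;; 之后可在结构相同的对象上用 LOAD-STATE 恢复。\n(save-state \"\u002Ftmp\u002Fbpn-weights.bin\" bpn :if-exists :supersede)\n(load-state \"\u002Ftmp\u002Fbpn-weights.bin\" bpn)\n```\n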
\n\u003Ca id=\"x-28MGL-CORE-3AREAD-STATE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **READ-STATE** *OBJECT STREAM*\n\n    从双值（bivalent）`STREAM` 中读取 `OBJECT` 的权重，这里的权重指的是学习得到的参数。目前对数据尚未进行任何校验，未来随着序列化格式的变化，这一点肯定会有所改变。返回 `OBJECT`。\n\n\u003Ca id=\"x-28MGL-CORE-3AWRITE-STATE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **WRITE-STATE** *OBJECT STREAM*\n\n    将 `OBJECT` 的权重写入双值 `STREAM`。返回 `OBJECT`。\n\n\u003Ca id=\"x-28MGL-CORE-3AREAD-STATE-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **READ-STATE\\*** *OBJECT STREAM CONTEXT*\n\n    这是 [`READ-STATE`][8148] 的扩展点。可以保证对于每个 `OBJECT`（在 [`EQ`][5a82] 的意义上），主方法 `READ-STATE*` 只会被调用一次。`CONTEXT` 是一个不透明的对象，必须传递给任何递归调用的 `READ-STATE*`。\n\n\u003Ca id=\"x-28MGL-CORE-3AWRITE-STATE-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **WRITE-STATE\\*** *OBJECT STREAM CONTEXT*\n\n    这是 [`WRITE-STATE`][95fe] 的扩展点。可以保证对于每个 `OBJECT`（在 [`EQ`][5a82] 的意义上），主方法 `WRITE-STATE*` 只会被调用一次。`CONTEXT` 是一个不透明的对象，必须传递给任何递归调用的 `WRITE-STATE*`。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-MODEL-STRIPE-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 5.2 批处理\n\n在训练或预测过程中逐个处理样本可能会很慢。支持批量处理以提高效率的模型被称为“条带化”模型。\n\n通常，在创建模型时或之后，会为其设置 [`MAX-N-STRIPES`][16c4]，该值应为一个正整数。当一批样本需要输入到模型中时，首先会被分割成长度不超过 `MAX-N-STRIPES` 的子批次。对于每个子批次，会调用 [`SET-INPUT`][0c9e]（FIXDOC），并且一个前置方法会负责将 [`N-STRIPES`][8dd7] 设置为该子批次的实际样本数量。当 `MAX-N-STRIPES` 被设置时，内部数据结构可能会被重新调整大小，这是一项开销较大的操作。而设置 `N-STRIPES` 则相对便宜，通常通过矩阵重塑来实现。\n\n需要注意的是，对于由不同部分组成的模型（例如，[`MGL-BP:BPN`][5187] 由 [`MGL-BP:LUMP`][c1ac] 组成），设置这些值会影响其组成部分，但不应直接更改各部分的条带数，因为那样会导致模型内部的一致性问题。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAX-N-STRIPES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAX-N-STRIPES** *OBJECT*\n\n    `OBJECT` 同时能够处理的最大条带数。\n\n\u003Ca id=\"x-28MGL-CORE-3ASET-MAX-N-STRIPES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SET-MAX-N-STRIPES** *MAX-N-STRIPES OBJECT*\n\n    分配必要的资源，以便在 `OBJECT` 中同时处理 `MAX-N-STRIPES` 个条带。当 `MAX-N-STRIPES` 被 [`SETF`][a138] 修改时，会调用此函数。\n\n\u003Ca id=\"x-28MGL-CORE-3AN-STRIPES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **N-STRIPES** *OBJECT*\n\n    当前存在于 `OBJECT` 中的条带数量。该值最多等于 [`MAX-N-STRIPES`][16c4]。\n\n\u003Ca id=\"x-28MGL-CORE-3ASET-N-STRIPES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SET-N-STRIPES** *N-STRIPES OBJECT*\n\n    设置 `OBJECT` 中正在使用的条带数量（不超过 [`MAX-N-STRIPES`][16c4]）。当 `N-STRIPES` 被 [`SETF`][a138] 修改时，会调用此函数。\n\n\u003Ca id=\"x-28MGL-CORE-3AWITH-STRIPES-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **WITH-STRIPES** *SPECS &BODY BODY*\n\n    在条带化对象中绑定条带的起始索引，并可选地绑定结束索引。\n    \n        (WITH-STRIPES ((STRIPE1 OBJECT1 START1 END1)\n                       (STRIPE2 OBJECT2 START2)\n                       ...)\n         ...)\n    \n    以下是如何在一个 bpn 的输入块中找到第 N 个输入对应的索引范围：\n    \n         (with-stripes ((n input-lump start end))\n           (loop for i upfrom start below end\n                 do (setf (mref (nodes input-lump) i) 0d0)))\n    \n    注意，输入块是条带化的，但我们要索引的矩阵（[`NODES`][cc1c]）并不为 `WITH-STRIPES` 所知。事实上，对于块来说，相同的条带索引同样适用于 `NODES` 和 [`MGL-BP:DERIVATIVES`][a81b]。\n\n\u003Ca id=\"x-28MGL-CORE-3ASTRIPE-START-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **STRIPE-START** *STRIPE OBJECT*\n\n    返回 `STRIPE` 在 `OBJECT` 的某个数组或矩阵中的起始索引。\n\n\u003Ca id=\"x-28MGL-CORE-3ASTRIPE-END-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **STRIPE-END** *STRIPE OBJECT*\n\n    返回 `STRIPE` 在 `OBJECT` 的某个数组或矩阵中的结束索引（不包括该索引）。\n\n\u003Ca id=\"x-28MGL-CORE-3ASET-INPUT-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SET-INPUT** *INSTANCES MODEL*\n\n 
   将 `INSTANCES` 设置为 `MODEL` 的输入。无论模型是否支持批量操作，`INSTANCES` 始终是一个实例的 [`SEQUENCE`][ae23]。它会在一个 `:BEFORE` 方法中将 [`N-STRIPES`][8dd7] 设置为 `INSTANCES` 的长度（[`LENGTH`][2f78]）。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAP-BATCHES-FOR-MODEL-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAP-BATCHES-FOR-MODEL** *FN DATASET MODEL*\n\n    使用适合 `MODEL` 的 `DATASET` 中的样本批次调用 `FN`。每个批次的样本数量不超过 `MODEL` 的 [`MAX-N-STRIPES`][16c4]，或者如果没有更多样本，则少于这个数量。\n\n\u003Ca id=\"x-28MGL-CORE-3ADO-BATCHES-FOR-MODEL-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **DO-BATCHES-FOR-MODEL** *(BATCH (DATASET MODEL)) &BODY BODY*\n\n    是 [`MAP-BATCHES-FOR-MODEL`][5fdc] 的便捷宏。\n
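\n下面是一个批处理遍历的示意（假设 `dataset` 与 `model` 均已就绪；循环体内对批次做什么取决于具体任务，这里仅以 `SET-INPUT` 为例）：\n\n```common-lisp\n;; 以不超过 MAX-N-STRIPES 的批次遍历数据集；\n;; 对每个批次先设置输入，之后即可执行前向计算等操作。\n(do-batches-for-model (batch (dataset model))\n  (set-input batch model))\n```\n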
\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-EXECUTORS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 5.3 执行器\n\n\u003Ca id=\"x-28MGL-CORE-3AMAP-OVER-EXECUTORS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAP-OVER-EXECUTORS** *FN INSTANCES PROTOTYPE-EXECUTOR*\n\n    将 `INSTANCES` 分配给与 `PROTOTYPE-EXECUTOR` 执行相同功能的执行器，并使用这些实例以及对应的执行器调用 `FN`。\n\n    某些对象将功能和调用混为一谈：例如，[`MGL-BP:BPN`][5187] 的前向传播会根据输入计算输出，因此它类似于一个函数；但同时它也充当函数调用的角色——在计算输出的过程中，该 bpn（函数）对象本身的状态会发生变化。因此，即使是 bpn 的前向传播也不是线程安全的。此外，还有一个限制，即所有输入必须具有相同的大小。\n\n    例如，如果我们有一个函数可以根据特定大小的输入构建一个 bpn，那么我们可以创建一个工厂来为特定的调用创建 bpn。不过，这个工厂很可能希望保持权重不变。在 [参数化执行器缓存][ada2] 中，[`MAKE-EXECUTOR-WITH-PARAMETERS`][331b] 就是这样一个工厂。\n\n    另一种可能性是并行化执行，这也是 `MAP-OVER-EXECUTORS` 允许的，但目前还没有现成的解决方案。\n\n    默认实现只是简单地使用 `INSTANCES` 和 `PROTOTYPE-EXECUTOR` 调用 `FN`。\n\n\u003Ca id=\"x-28MGL-CORE-3ADO-EXECUTORS-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **DO-EXECUTORS** *(INSTANCES OBJECT) &BODY BODY*\n\n    基于 [`MAP-OVER-EXECUTORS`][b01b] 的便捷宏。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-PARAMETERIZED-EXECUTOR-CACHE-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 5.3.1 参数化执行器缓存\n\n\u003Ca id=\"x-28MGL-CORE-3APARAMETERIZED-EXECUTOR-CACHE-MIXIN-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **PARAMETERIZED-EXECUTOR-CACHE-MIXIN**\n\n    将此混入模型中，实现 [`INSTANCE-TO-EXECUTOR-PARAMETERS`][0078] 和 [`MAKE-EXECUTOR-WITH-PARAMETERS`][331b] 之后，[`DO-EXECUTORS`][f98e] 就能构建适用于不同实例的执行器。最典型的例子是使用 BPN 来计算高斯过程的均值和协方差。由于每个实例由不同数量的观测值组成，输入的大小并不固定，因此我们需要为每个输入维度（即参数）创建一个相应的 BPN（执行器）。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-EXECUTOR-WITH-PARAMETERS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAKE-EXECUTOR-WITH-PARAMETERS** *PARAMETERS CACHE*\n\n    为 `PARAMETERS` 创建一个新的执行器。`CACHE` 是一个 [`PARAMETERIZED-EXECUTOR-CACHE-MIXIN`][d3b2]。在 BPN 高斯过程的例子中，`PARAMETERS` 就是输入维度的列表。\n\n\u003Ca id=\"x-28MGL-CORE-3AINSTANCE-TO-EXECUTOR-PARAMETERS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **INSTANCE-TO-EXECUTOR-PARAMETERS** *INSTANCE CACHE*\n\n    返回能够处理 `INSTANCE` 的执行器所需的参数。由 [`MAP-OVER-EXECUTORS`][b01b] 在 `CACHE`（即一个 [`PARAMETERIZED-EXECUTOR-CACHE-MIXIN`][d3b2]）上调用。返回的参数是 [`EQUAL`][3fb5] 参数到执行器哈希表中的键。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-MONITORING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 6 监控\n\n###### \\[在 MGL-CORE 包中\\]\n在训练或应用模型时，人们通常希望跟踪各种统计信息。例如，在使用交叉熵损失函数训练神经网络的情况下，这些统计信息可能包括平均交叉熵损失值、分类准确率，甚至是整个混淆矩阵以及隐藏层中的稀疏性水平。此外，还有一个问题：如何处理这些测量值（记录后丢弃、添加到某个计数器或列表中）。\n\n因此，在运行过程中，可能会有多个阶段需要我们关注。我们可以将这些阶段称为**事件**。针对每个事件，也可能有许多相对独立的操作可以执行。我们可以将这些操作称为**监控器**。有些监控器是由两个操作组成的：一个用于提取某些度量，另一个用于汇总这些度量。我们分别称这两个部分为**测量器**和**计数器**。\n\n例如，考虑训练一个反向传播神经网络。我们希望在反向传播完成后立即查看网络的状态。[`MGL-BP:BP-LEARNER`][00a0] 提供了一个名为 [`MONITORS`][6202] 的事件钩子，对应于梯度反向传播完成后的时刻。假设我们对训练代价的变化感兴趣：\n\n    (push (make-instance 'monitor\n                         :measurer (lambda (instances bpn)\n                                     (declare (ignore instances))\n                                     (mgl-bp:cost bpn))\n                         :counter (make-instance 'basic-counter))\n          (monitors learner))\n\n在训练过程中，这个监控器会在后台跟踪训练样本的代价。如果我们希望定期打印并重置这个监控器，可以在 [`MGL-OPT:ITERATIVE-OPTIMIZER`][8da0] 的 [`MGL-OPT:ON-N-INSTANCES-CHANGED`][4f0b] 访问器上再添加一个监控器：\n\n    (push (lambda (optimizer gradient-source n-instances)\n            (declare (ignore optimizer))\n            (when (zerop (mod n-instances 1000))\n              (format t \"n-instances: ~S~%\" n-instances)\n              (dolist (monitor (monitors gradient-source))\n                (when (counter monitor)\n                  (format t \"~A~%\" (counter monitor))\n                  (reset-counter (counter monitor)))))\n          (mgl-opt:on-n-instances-changed optimizer))\n\n需要注意的是，只要实现了具有相应签名的 [`APPLY-MONITOR`][bbdf]，我们推入的监控器可以是任何东西。另外，[`ZEROP`][ec8b] + `MOD`([`0`][80fa] [`1`][ee86]) 的逻辑比较脆弱，因此你很可能更倾向于使用 [`MGL-OPT:MONITOR-OPTIMIZATION-PERIODICALLY`][4528]，而不是手动实现上述逻辑。\n\n这就是总体思路。具体的事件会在其触发位置进行文档说明。通常，还会有一些特定任务的工具函数来创建一组合理的默认监控器（参见 [分类监控器][c573]）。\n\n\u003Ca id=\"x-28MGL-CORE-3AAPPLY-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **APPLY-MONITORS** *MONITORS &REST ARGUMENTS*\n\n    对 `MONITORS` 和 `ARGUMENTS` 中的每个监控器调用 [`APPLY-MONITOR`][bbdf]。这就是触发事件的方式。\n\n\u003Ca id=\"x-28MGL-CORE-3AAPPLY-MONITOR-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **APPLY-MONITOR** *MONITOR &REST ARGUMENTS*\n\n    将 `MONITOR` 应用于 `ARGUMENTS`。这听起来相当通用，因为它确实如此。`MONITOR` 可以是任何东西，甚至只是一个简单的函数或符号，在这种情况下，它就等同于 [`CL:APPLY`][d811]。更多内容请参阅 [监控器][c701]。\n\n\u003Ca id=\"x-28MGL-CORE-3ACOUNTER-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **COUNTER** *MONITOR*\n\n    返回表示 `MONITOR` 状态的对象，或者如果它没有任何状态（例如，它只是一个简单的日志记录函数），则返回 `NIL`。大多数监控器都有计数器，它们会将结果累积起来，直到被打印并重置。更多内容请参阅 [计数器][be95]。\n\n\u003Ca id=\"x-28MGL-CORE-3AMONITOR-MODEL-RESULTS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MONITOR-MODEL-RESULTS** *FN DATASET MODEL MONITORS*\n\n    使用 `DATASET` 中的批次数据调用 `FN`，直到数据用完（类似于 [`DO-BATCHES-FOR-MODEL`][faaa]）。`FN` 应该将 `MODEL` 应用于当前批次，并返回某种结果（对于神经网络来说，结果就是模型本身的状态）。对每个批次以及 `FN` 为该批次返回的结果应用 `MONITORS`。最后，返回 `MONITORS` 的计数器列表。\n\n    这个函数的目的是通过仅对模型进行一次应用，高效地收集各种结果和统计信息（如误差度量），并将从模型结果中提取感兴趣量的工作交给 `MONITORS` 来完成。\n\n    请参阅针对特定模型的版本，例如 [`MGL-BP:MONITOR-BPN-RESULTS`][0933]。\n\n\u003Ca id=\"x-28MGL-CORE-3AMONITORS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MONITORS** *OBJECT*\n\n    返回与 `OBJECT` 关联的监控器。更多文档请参阅各种方法，例如 [`MONITORS`][6202]。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-MONITOR-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 6.1 监控器\n\n\u003Ca id=\"x-28MGL-CORE-3AMONITOR-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **MONITOR**\n\n    一种包含嵌套监控器 [`MEASURER`][eb05] 的监控器。当此监控器被应用时，它会先应用测量器，并将返回的值传递给其 [`COUNTER`][a077] 插槽上调用的 [`ADD-TO-COUNTER`][62de]。用户可以通过进一步特化 [`APPLY-MONITOR`][bbdf] 来改变这一行为。\n\n    当同一个事件监控器需要在一段时间内反复应用，并且其结果需要被汇总时，此类监控器非常有用，例如在跟踪训练统计信息或进行预测时。需要注意的是，监控器必须与其所处理的事件兼容。也就是说，嵌入的 `MEASURER` 必须能够接受与该事件相关联的参数。\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29\">\u003C\u002Fa>\n\n- [读取器] **MEASURER** *[MONITOR][7068] (:MEASURER)*\n\n    它本身必须是一个监控器，这意味着在其上定义了 [`APPLY-MONITOR`][bbdf]（但请参阅 [监控][e668]）。返回的值由 [`COUNTER`][5752] 进行汇总。有关测量器的库，请参阅 [测量器][cd3b]。\n\n\u003Ca id=\"x-28MGL-CORE-3ACOUNTER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29\">\u003C\u002Fa>\n\n- [读取器] **COUNTER** *[MONITOR][7068] (:COUNTER)*\n\n    监控器的 `COUNTER` 负责汇总由 [`MEASURER`][eb05] 返回的结果。有关计数器的库，请参阅 [计数器][be95]。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-MEASURER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 6.2 测量器\n\n[`MEASURER`][eb05] 是 [`MONITOR`][7068] 
对象的一部分，是一种嵌入式监控器，它根据所应用事件的参数（例如模型结果）计算特定的度量值（例如分类准确率）。测量器通常通过将某种模型特定的提取器与通用的测量函数结合来实现。\n\n所有通用的测量函数都会将其结果作为多个值返回，这些值与特定类型计数器的 [`ADD-TO-COUNTER`][62de] 参数相匹配（参见 [计数器][be95]），以便于在 `MONITOR` 中使用：\n\n    (multiple-value-call #'add-to-counter \u003Csome-counter>\n                         \u003Ccall-to-some-measurer>)\n\n以这种方式与测量器兼容的计数器类会在每个函数中注明。\n\n有关测量函数的列表，请参阅 [分类测量器][0ba7]。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-COUNTER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 6.3 计数器\n\n\u003Ca id=\"x-28MGL-CORE-3AADD-TO-COUNTER-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **ADD-TO-COUNTER** *COUNTER &REST ARGS*\n\n    以某种方式将 `ARGS` 加到 `COUNTER` 上。具体的行为请参阅针对不同类型的专用方法文档。所支持的参数类型，正是测量函数（见 [测量器][cd3b]）预期与该计数器配对时返回的多值。\n\n\u003Ca id=\"x-28MGL-CORE-3ACOUNTER-VALUES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **COUNTER-VALUES** *COUNTER*\n\n    返回任意数量的值，用于表示 `COUNTER` 的状态。具体的行为请参阅针对不同类型的专用方法文档。\n\n\u003Ca id=\"x-28MGL-CORE-3ACOUNTER-RAW-VALUES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **COUNTER-RAW-VALUES** *COUNTER*\n\n    返回任意数量的值，这些值能够精确地表示 `COUNTER` 的当前状态；如果将这些返回值作为参数传递给同一类型的全新实例上的 [`ADD-TO-COUNTER`][62de]，就能完全恢复到原始状态。\n\n\u003Ca id=\"x-28MGL-CORE-3ARESET-COUNTER-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **RESET-COUNTER** *COUNTER*\n\n    将 `COUNTER` 的状态重置为刚创建时的状态。\n
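\n下面是一个把测量函数与计数器配合使用的最小示意（结果可由上文文档语义推得：`MEASURE-CLASSIFICATION-ACCURACY`（见[分类测量器][0ba7]）返回命中数与实例数两个值，恰好与 [`CLASSIFICATION-ACCURACY-COUNTER`][430d] 的 [`ADD-TO-COUNTER`][62de] 参数相匹配）：\n\n```common-lisp\n;; 真值 (1 2 3) 与预测 (1 2 4) 有 2 个命中、共 3 个实例；\n;; 两个返回值经 MULTIPLE-VALUE-CALL 一并累加进计数器。\n(let ((counter (make-instance 'classification-accuracy-counter)))\n  (multiple-value-call #'add-to-counter counter\n    (measure-classification-accuracy '(1 2 3) '(1 2 4)))\n  (counter-values counter))\n;; => 2\u002F3（第二个值为实例数 3；具体数值类型以实现为准）\n```\n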
\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-ATTRIBUTES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 6.3.1 属性\n\n\u003Ca id=\"x-28MGL-CORE-3AATTRIBUTED-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **ATTRIBUTED**\n\n    这是一个所有计数器都继承的工具类。其 [`ATTRIBUTES`][cc37] plist 可以存储几乎任何内容。目前，这些属性仅在打印时使用，并且可以由用户自定义。诸如 [分类监控器][c573] 中的监控器生成函数也会在其创建的计数器中添加额外的属性。\n\n    使用 `:PREPEND-ATTRIBUTES` 初始化参数，可以轻松添加新属性，而不会覆盖 `:INITFORM` 中已有的属性（例如本例中的 `:TYPE` \"rmse\"）。\n\n        (princ (make-instance 'rmse-counter\n                              :prepend-attributes '(:event \"pred.\"\n                                                    :dataset \"test\")))\n        ;; pred. test rmse: 0.000e+0 (0)\n        => #\u003CRMSE-COUNTER pred. test rmse: 0.000e+0 (0)>\n\n\u003Ca id=\"x-28MGL-CORE-3AATTRIBUTES-20-28MGL-PAX-3AACCESSOR-20MGL-CORE-3AATTRIBUTED-29-29\">\u003C\u002Fa>\n\n- [访问器] **ATTRIBUTES** *[ATTRIBUTED][9715] (:ATTRIBUTES = NIL)*\n\n    一个包含属性键和值的 plist。\n\n\u003Ca id=\"x-28MGL-COMMON-3ANAME-20-28METHOD-20-28MGL-CORE-3AATTRIBUTED-29-29-29\">\u003C\u002Fa>\n\n- [方法] **NAME** *(ATTRIBUTED ATTRIBUTED)*\n\n    从 `ATTRIBUTED` 的 [`ATTRIBUTES`][cc37] 中提取值并拼接成字符串。如果存在多个具有相同键的条目，则它们会紧密相邻地显示。\n\n    值可以根据外部包裹的 [`WITH-PADDED-ATTRIBUTE-PRINTING`][2e8b] 进行填充。\n\n\u003Ca id=\"x-28MGL-CORE-3AWITH-PADDED-ATTRIBUTE-PRINTING-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **WITH-PADDED-ATTRIBUTE-PRINTING** *(ATTRIBUTEDS) &BODY BODY*\n\n    记录每个属性键对应的值宽度，即该值通过 [`PRINC-TO-STRING`][a541] 转换后的字符数。在 `BODY` 中，如果打印具有相同键的属性，则强制使它们至少达到这个宽度。这样可以产生类似表格的美观输出：\n\n        (let ((attributeds\n                (list (make-instance 'basic-counter\n                                     :attributes '(:a 1 :b 23 :c 456))\n                      (make-instance 'basic-counter\n                                     :attributes '(:a 123 :b 45 :c 6)))))\n          (with-padded-attribute-printing (attributeds)\n            (map nil (lambda (attributed)\n                       (format t \"~A~%\" attributed))\n                 attributeds)))\n        ;; 1   23 456: 0.000e+0 (0)\n        ;; 123 45 6  : 0.000e+0 (0)\n\n\u003Ca id=\"x-28MGL-CORE-3ALOG-PADDED-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **LOG-PADDED** *ATTRIBUTEDS*\n\n    使用非转义格式（类似于 [`PRINC`][676d] 或 ~A）记录 `ATTRIBUTEDS`，并通过 [`LOG-MSG`][f85e] 输出，尽量使输出呈现表格状。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-COUNTER-CLASSES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 6.3.2 计数器类\n\n除了这里介绍的基本计数器外，还可参阅 [分类计数器][6598]。\n\n\u003Ca id=\"x-28MGL-CORE-3ABASIC-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **BASIC-COUNTER** *[ATTRIBUTED][9715]*\n\n    一种简单的计数器，其 [`ADD-TO-COUNTER`][62de] 接受两个额外的参数：分别称为 [`NUMERATOR`][8af5] 和 [`DENOMINATOR`][5cd8] 的内部累加值。[`COUNTER-VALUES`][20e8] 返回两个值：\n\n    - `NUMERATOR` 除以 `DENOMINATOR`（如果 `DENOMINATOR` 为 0，则返回 0），以及\n\n    - `DENOMINATOR`\n\n    下面是一个示例，用于计算分两批接收的 5 个数值的平均值：\n\n         (let ((counter (make-instance 'basic-counter)))\n           (add-to-counter counter 6.5 3)\n           (add-to-counter counter 3.5 2)\n           counter)\n         => #\u003CBASIC-COUNTER 2.00000e+0 (5)>\n\n\u003Ca id=\"x-28MGL-CORE-3ARMSE-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **RMSE-COUNTER** *[BASIC-COUNTER][5979]*\n\n    一种 [`BASIC-COUNTER`][5979]，其分子部分累积的是某些统计量的平方。它具有属性 `:TYPE` \"rmse\"。[`COUNTER-VALUES`][20e8] 返回的是 `BASIC-COUNTER` 的 `COUNTER-VALUES` 所得结果的平方根。\n\n        (let ((counter (make-instance 'rmse-counter)))\n          (add-to-counter counter (+ (* 3 3) (* 4 4)) 2)\n          counter)\n        => #\u003CRMSE-COUNTER rmse: 3.53553e+0 (2)>\n\n\u003Ca id=\"x-28MGL-CORE-3ACONCAT-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **CONCAT-COUNTER** *[ATTRIBUTED][9715]*\n\n    一种简单地将序列连接起来的计数器。\n\n    ```common-lisp\n    (let ((counter (make-instance 'concat-counter)))\n      (add-to-counter counter '(1 2 3) #(4 5))\n      (add-to-counter counter '(6 7))\n      (counter-values counter))\n    => (1 2 3 4 5 6 7)\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3ACONCATENATION-TYPE-20-28MGL-PAX-3AREADER-20MGL-CORE-3ACONCAT-COUNTER-29-29\">\u003C\u002Fa>\n\n- [读取器] **CONCATENATION-TYPE** *[CONCAT-COUNTER][0f83] (:CONCATENATION-TYPE = 'LIST)*\n\n    一个适合作为 [`CONCATENATE`][2ecb] 函数 `RESULT-TYPE` 参数的类型指示符。\n\n\u003Ca 
id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 7 分类\n\n###### \\[在包 MGL-CORE 中\\]\n为了能够度量与分类相关的指标，我们需要定义实例的标签是什么。可以通过为特定类型的实例实现一个方法来进行自定义，但这些函数通常只作为默认实现出现，可以被覆盖。\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDEX-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **LABEL-INDEX** *INSTANCE*\n\n    返回 `INSTANCE` 的标签，表示为一个非负整数。\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTION-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **LABEL-INDEX-DISTRIBUTION** *INSTANCE*\n\n    返回一个一维概率数组，表示标签的分布。其中，标签索引为 [`LABEL-INDEX`][cc80] `I` 的概率，即为返回数组中索引为 `I` 的元素值。\n\n以下两个函数基本上与前两个函数相同，但以批处理模式运行：它们分别返回标签索引序列或标签分布序列。这些函数通常用于模型生成的结果上。为某个模型实现这些函数后，下面的监控器生成函数将自动生效。参见 FIXDOC: for bpn and boltzmann。\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDICES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **LABEL-INDICES** *RESULTS*\n\n    返回由某个模型对一批实例生成的 `RESULTS` 的标签索引序列。这类似于 [`LABEL-INDEX`][cc80]。\n\n\u003Ca id=\"x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTIONS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **LABEL-INDEX-DISTRIBUTIONS** *RESULT*\n\n    返回由某个模型对一批实例生成的 `RESULTS` 的标签索引分布序列。这类似于 [`LABEL-INDEX-DISTRIBUTION`][caec]。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MONITOR-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 7.1 分类监控器\n\n以下函数会返回一组监控器列表。这些监控器适用于签名为 (`INSTANCES` `MODEL`) 的事件，例如由 [`MONITOR-MODEL-RESULTS`][e50c] 及其各种模型特定变体产生的事件。它们是与模型无关的函数，可以扩展到新的分类器类型。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-CLASSIFICATION-ACCURACY-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-FN \\#'LABEL-INDEX)*\n\n    返回与 [`CLASSIFICATION-ACCURACY-COUNTER`][430d] 关联的一组 [`MONITOR`][7068] 对象。`LABEL-INDEX-FN` 是一个类似于 [`LABEL-INDEX`][cc80] 的函数。更多信息请参阅该函数。\n\n    实现基于 [`MAKE-CLASSIFICATION-ACCURACY-MONITORS*`][2aa3]。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-CROSS-ENTROPY-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-DISTRIBUTION-FN \\#'LABEL-INDEX-DISTRIBUTION)*\n\n    返回与 [`CROSS-ENTROPY-COUNTER`][b186] 关联的一组 [`MONITOR`][7068] 对象。`LABEL-INDEX-DISTRIBUTION-FN` 是一个类似于 [`LABEL-INDEX-DISTRIBUTION`][caec] 的函数。更多信息请参阅该函数。\n\n    实现基于 [`MAKE-CROSS-ENTROPY-MONITORS*`][e46f]。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-LABEL-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-LABEL-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES (LABEL-INDEX-FN \\#'LABEL-INDEX) (LABEL-INDEX-DISTRIBUTION-FN \\#'LABEL-INDEX-DISTRIBUTION)*\n\n    返回分类准确率和交叉熵监控器。参数说明请参阅 [`MAKE-CLASSIFICATION-ACCURACY-MONITORS`][911c] 和 [`MAKE-CROSS-ENTROPY-MONITORS`][6004]。\n\n上述监控器生成函数可以通过以下通用函数扩展，以支持新的分类器类型。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAKE-CLASSIFICATION-ACCURACY-MONITORS\\*** *MODEL OPERATION-MODE LABEL-INDEX-FN ATTRIBUTES*\n\n    与 [`MAKE-CLASSIFICATION-ACCURACY-MONITORS`][911c] 完全相同，只是缺少关键字参数。可通过专门化此函数来增加对新模型类型的支持。默认实现也允许一定的扩展性：如果 `MODEL` 上定义了 [`LABEL-INDICES`][31ed]，则会使用它从模型结果中提取标签索引。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAKE-CROSS-ENTROPY-MONITORS\\*** *MODEL OPERATION-MODE LABEL-INDEX-DISTRIBUTION-FN ATTRIBUTES*\n\n    与 [`MAKE-CROSS-ENTROPY-MONITORS`][6004] 完全相同，只是缺少关键字参数。可通过专门化此函数来增加对新模型类型的支持。默认实现同样具有一定的扩展性：如果 `MODEL` 上定义了 [`LABEL-INDEX-DISTRIBUTIONS`][9385]，则会使用它从模型结果中提取标签分布。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MEASURER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 7.2 
分类度量\n\n此处的函数将某个已知的良好解（也称为*真值*或*目标*）与预测或近似值进行比较，并返回它们之间[不]相似性的度量。这些函数与模型无关，因此需要先提取真值和预测值。它们很少被直接使用，通常隐藏在[分类监控器][c573]之后。\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-CLASSIFICATION-ACCURACY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MEASURE-CLASSIFICATION-ACCURACY** *TRUTHS PREDICTIONS &KEY (TEST \\#'EQL) TRUTH-KEY PREDICTION-KEY WEIGHT*\n\n    返回正确分类的数量，第二个值为实例数量（在非加权情况下等于`TRUTHS`的长度）。`TRUTHS`（由`TRUTH-KEY`键控）是一个不透明的类别标签序列，通过`TEST`与`PREDICTIONS`中的另一个类别标签序列（由`PREDICTION-KEY`键控）进行比较。如果`WEIGHT`非空，则它是一个函数，用于返回`TRUTHS`中元素的权重。在加权情况下，两个计数（分别作为第一个和第二个值返回）会加上该元素的权重，而不是像非加权情况那样只加1。\n\n    注意，返回的值非常适合与[`MULTIPLE-VALUE-CALL`][e4dd]结合使用，配合[`ADD-TO-COUNTER`][62de]和一个[`CLASSIFICATION-ACCURACY-COUNTER`][430d]。\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-CROSS-ENTROPY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MEASURE-CROSS-ENTROPY** *TRUTHS PREDICTIONS &KEY TRUTH-KEY PREDICTION-KEY (MIN-PREDICTION-PR 1.0d-15)*\n\n    返回`TRUTHS`和`PREDICTIONS`中具有相同索引的元素对之间的交叉熵之和。`TRUTH-KEY`是一个函数，当应用于`TRUTHS`中的元素时，会返回一个表示某种离散目标分布（下文中的P）的序列。`TRUTH-KEY`可以是`NIL`，这等同于[`IDENTITY`][8ae0]函数。`PREDICTION-KEY`则是`PREDICTIONS`中类似的键，但它返回的序列代表了对真实分布的近似（下文中的Q）。\n\n    真实分布与近似分布的交叉熵定义如下：\n\n        cross-entropy(p,q) = - sum_i p(i) * log(q(i))\n\n    本函数返回的是根据`TRUTH-KEY`和`PREDICTION-KEY`键控的`TRUTHS`和`PREDICTIONS`中元素对的交叉熵之和。\n\n    由于涉及对数运算，当q(i)接近零时，可能会出现数值问题。为避免这种情况，所有小于`MIN-PREDICTION-PR`的q(i)都会被视为等于`MIN-PREDICTION-PR`。\n\n    函数返回的第二个值是所有`TRUTHS`中p(i)的总和。这通常等于`(LENGTH TRUTHS)`，因为`TRUTHS`中的元素应构成一个概率分布，但这一约束并未强制执行，从而允许控制各元素的相对重要性。\n\n    函数返回的第三个值是一个属性列表，将分布序列中出现的每个索引映射到一个包含两个元素的列表：\n\n        - sum_j p_j(i) * log(q_j(i))\n    \n    和\n    \n        sum_j p_j(i)\n    \n    其中`J`索引`TRUTHS`和`PREDICTIONS`。\n\n        (measure-cross-entropy '((0 1 0)) '((0.1 0.7 0.2)))\n        => 0.35667497\n           1\n           (2 (0.0 0)\n            1 (0.35667497 1)\n            0 (0.0 0))\n    \n    注意，返回的值非常适合与[`MULTIPLE-VALUE-CALL`][e4dd]结合使用，配合[`ADD-TO-COUNTER`][62de]和一个[`CROSS-ENTROPY-COUNTER`][b186]。\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-ROC-AUC-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MEASURE-ROC-AUC** *PREDICTIONS PRED &KEY (KEY \\#'IDENTITY) WEIGHT*\n\n    返回代表二分类问题预测的`PREDICTIONS`的ROC曲线下的面积。`PRED`是一个谓词函数，用于判断某个预测是否属于所谓的正类。`KEY`为每个元素返回一个数值，表示预测者认为该元素属于该类的可能性大小，尽管这并不一定是概率。\n\n    如果`WEIGHT`为`NIL`，则`PREDICTIONS`中的所有元素在AUC的未归一化求和中都按1计算。否则，`WEIGHT`必须是一个类似于`KEY`的函数，但它应返回元素的重要性（一个正实数）。如果某个预测的权重为2，则相当于在`PREDICTIONS`中存在另一份完全相同的副本。\n\n    该算法基于 Tom Fawcett 所著论文《An introduction to ROC analysis》中的算法2。\n\n    ROC AUC等于随机选择的一个正例其`KEY`（得分）高于随机选择的一个负例的概率。考虑到得分可能相同的情况，更精确的说法是：AUC是上述概率在所有可能的按得分排序的序列上的期望值。\n\n\u003Ca id=\"x-28MGL-CORE-3AMEASURE-CONFUSION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MEASURE-CONFUSION** *TRUTHS PREDICTIONS &KEY (TEST \\#'EQL) TRUTH-KEY PREDICTION-KEY WEIGHT*\n\n    根据`TRUTHS`和`PREDICTIONS`创建一个[`CONFUSION-MATRIX`][60d2]。`TRUTHS`（由`TRUTH-KEY`键控）是一个类别标签序列，通过`TEST`与`PREDICTIONS`中的另一个类别标签序列（由`PREDICTION-KEY`键控）进行比较。如果`WEIGHT`非空，则它是一个函数，用于返回`TRUTHS`中元素的权重。在加权情况下，相应的混淆计数会加上该元素的权重，而不是加1。\n\n    注意，返回的混淆矩阵可以使用[`ADD-TO-COUNTER`][62de]添加到另一个混淆矩阵中。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-CLASSIFICATION-COUNTER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 7.3 分类计数器\n\n\u003Ca id=\"x-28MGL-CORE-3ACLASSIFICATION-ACCURACY-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **CLASSIFICATION-ACCURACY-COUNTER** *[BASIC-COUNTER][5979]*\n\n    一个 [`BASIC-COUNTER`][5979]，其 `:TYPE` 属性为 \"acc.\"，并具有一个打印百分比的 [`PRINT-OBJECT`][3f2e] 方法。\n\n\u003Ca id=\"x-28MGL-CORE-3ACROSS-ENTROPY-COUNTER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **CROSS-ENTROPY-COUNTER** *[BASIC-COUNTER][5979]*\n\n    一个 [`BASIC-COUNTER`][5979]，其 `:TYPE` 属性为 \"xent\"。\n\n
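一个补充示意（假设真值与预测值已提取为简单序列，计数器的打印格式以实际实现为准）：按照上文的建议，用 [`MULTIPLE-VALUE-CALL`][e4dd] 把 `MEASURE-CLASSIFICATION-ACCURACY` 的两个返回值直接累加进一个 [`CLASSIFICATION-ACCURACY-COUNTER`][430d]：\n\n```common-lisp\n(let ((counter (make-instance 'classification-accuracy-counter)))\n  (multiple-value-call #'add-to-counter counter\n    (measure-classification-accuracy '(1 2 3 4) '(1 2 3 0)))\n  counter)\n;; 4 个实例中有 3 个分类正确，计数器的值即 75% 的准确率。\n```\n\n\u003Ca 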
id=\"x-28MGL-CORE-3A-40MGL-CONFUSION-MATRIX-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 7.3.1 混淆矩阵\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **混淆矩阵**\n\n    混淆矩阵用于记录分类结果。正确的类别称为 `target`，分类器的输出称为 `prediction`。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-CONFUSION-MATRIX-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-CONFUSION-MATRIX** *&KEY (TEST \\#'EQL)*\n\n    类别使用 `TEST` 进行比较。\n\n\u003Ca id=\"x-28MGL-CORE-3ASORT-CONFUSION-CLASSES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [泛型函数] **SORT-CONFUSION-CLASSES** *MATRIX CLASSES*\n\n    返回一个按展示目的排序的 `CLASSES` 列表。\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-CLASS-NAME-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [泛型函数] **CONFUSION-CLASS-NAME** *MATRIX CLASS*\n\n    为展示目的命名 `CLASS`。\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-COUNT-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [泛型函数] **CONFUSION-COUNT** *MATRIX TARGET PREDICTION*\n\n\u003Ca id=\"x-28MGL-CORE-3AMAP-CONFUSION-MATRIX-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [泛型函数] **MAP-CONFUSION-MATRIX** *FN MATRIX*\n\n    对混淆矩阵中的每个单元格调用 `FN`，传入 `TARGET`、`PREDICTION` 和 [`COUNT`][3155] 参数。可以省略计数为零的单元格。\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-CLASSES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [泛型函数] **CONFUSION-MATRIX-CLASSES** *MATRIX*\n\n    所有类别的列表。默认是从计数中收集类别，但如果某些类别在结果中不存在，则可以覆盖此行为。\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-ACCURACY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **CONFUSION-MATRIX-ACCURACY** *MATRIX &KEY FILTER*\n\n    返回 `MATRIX` 中结果的整体准确率。计算方法是正确分类的案例数（命中）除以总案例数。同时返回命中数和总案例数作为第二个和第三个值。如果提供了 `FILTER` 函数，则将其与单元格的目标和预测一起调用；对于 `FILTER` 返回 `NIL` 的单元格，将忽略该单元格。\n\n    虽然可以通过提供适当的过滤器轻松计算精确度和召回率，但这些指标也有专门的便捷函数提供。\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-PRECISION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **CONFUSION-MATRIX-PRECISION** *MATRIX PREDICTION*\n\n    返回当分类器预测为 `PREDICTION` 时的准确率。\n\n\u003Ca id=\"x-28MGL-CORE-3ACONFUSION-MATRIX-RECALL-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **CONFUSION-MATRIX-RECALL** *MATRIX TARGET*\n\n    返回当正确类别为 `TARGET` 时的准确率。\n\n\u003Ca id=\"x-28MGL-CORE-3AADD-CONFUSION-MATRIX-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **ADD-CONFUSION-MATRIX** *MATRIX RESULT-MATRIX*\n\n    将 `MATRIX` 添加到 `RESULT-MATRIX` 中。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-FEATURES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 8 特征\n\n###### \\[在 MGL-CORE 包中\\]\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-FEATURE-SELECTION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 8.1 特征选择\n\n以下所有*评分函数*均返回一个 [`EQUAL`][3fb5] 哈希表，将特征映射到分数。\n\n\u003Ca id=\"x-28MGL-CORE-3ACOUNT-FEATURES-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **COUNT-FEATURES** *DOCUMENTS MAPPER &KEY (KEY \\#'IDENTITY)*\n\n    返回加权特征，形式为一个 [`EQUAL`][3fb5] 哈希表，其键为 `DOCUMENTS` 的特征，值为特征出现的次数。`MAPPER` 接受一个函数和一份文档，并用文档的特征调用该函数。\n\n    ```common-lisp\n    (sort (alexandria:hash-table-alist\n           (count-features '((\"hello\" \"world\")\n                             (\"this\" \"is\" \"our\" \"world\"))\n                           (lambda (fn document)\n                             (map nil fn document))))\n          #'string\u003C :key #'car)\n    => ((\"hello\" . 1) (\"is\" . 1) (\"our\" . 1) (\"this\" . 1) (\"world\" . 
2))\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3AFEATURE-LLRS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **FEATURE-LLRS** *DOCUMENTS MAPPER CLASS-FN &KEY (CLASSES (ALL-DOCUMENT-CLASSES DOCUMENTS CLASS-FN))*\n\n    返回评分特征，形式为一个 [`EQUAL`][3fb5] 哈希表，其键为 `DOCUMENTS` 的特征，值为它们的对数似然比。`MAPPER` 接受一个函数和一份文档，并用文档的特征调用该函数。\n\n    ```common-lisp\n    (sort (alexandria:hash-table-alist\n           (feature-llrs '((:a \"hello\" \"world\")\n                           (:b \"this\" \"is\" \"our\" \"world\"))\n                         (lambda (fn document)\n                           (map nil fn (rest document)))\n                         #'first))\n          #'string\u003C :key #'car)\n    => ((\"hello\" . 2.6032386) (\"is\" . 2.6032386) (\"our\" . 2.6032386)\n        (\"this\" . 2.6032386) (\"world\" . 4.8428774e-8))\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3AFEATURE-DISAMBIGUITIES-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **FEATURE-DISAMBIGUITIES** *DOCUMENTS MAPPER CLASS-FN &KEY (CLASSES (ALL-DOCUMENT-CLASSES DOCUMENTS CLASS-FN))*\n\n    返回评分特征，形式为一个 [`EQUAL`][3fb5] 哈希表，其键为 `DOCUMENTS` 的特征，值为它们的*歧义性*。`MAPPER` 接受一个函数和一份文档，并用文档的特征调用该函数。\n\n    引自论文《Using Ambiguity Measure Feature Selection Algorithm for Support Vector Machine Classifier》。\n\n\u003Ca id=\"x-28MGL-CORE-3A-40MGL-FEATURE-ENCODING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 8.2 特征编码\n\n特征很少能够直接输入算法，通常需要经过某种形式的转换。假设我们有一个简单的语言模型，它以单个词作为输入，并预测下一个词。然而，输入和输出都需要被编码为长度为1000的浮点向量。我们的做法是根据某种度量（参见[特征选择][1b5e]）找出出现频率最高的1000个词，并将这些词与整数\\[0..999\\]一一对应起来（这就是[`ENCODE`][fedd]的过程）。例如，通过使用[独热编码](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FOne-hot)，我们在输入时会将一个词转换成一个浮点向量。当模型输出下一个词的概率分布时，我们会找到概率最大的索引，并查找出与之对应的词（这就是[`DECODE`][1339]的过程）。\n\n\u003Ca id=\"x-28MGL-CORE-3AENCODE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **ENCODE** *ENCODER DECODED*\n\n    使用`ENCODER`对`DECODED`进行编码。这个接口足够通用，以至于几乎没有任何实际意义。有关简单示例，请参阅[`ENCODER\u002FDECODER`][1beb]；更复杂的示例则可参考[`MGL-NLP:BAG-OF-WORDS-ENCODER`][cbb4]。\n\n    如果`ENCODER`是一个函数标识符，则只需用`DECODED`对其调用[`FUNCALL`][03c7]即可。\n\n\u003Ca id=\"x-28MGL-CORE-3ADECODE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **DECODE** *DECODER ENCODED*\n\n    使用`DECODER`对`ENCODED`进行解码。对于一对`DECODER`和`ENCODER`来说，`(DECODE DECODER (ENCODE ENCODER OBJECT))`在某种意义上必须等于`OBJECT`。\n\n    如果`DECODER`是一个函数标识符，则只需用`ENCODED`对其调用[`FUNCALL`][03c7]即可。\n\n\u003Ca id=\"x-28MGL-CORE-3AENCODER-2FDECODER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **ENCODER\u002FDECODER**\n\n    通过内部维护一个从解码值到编码值以及从编码值到解码值的[`EQUAL`][3fb5]哈希表，实现O(1)时间复杂度的[`ENCODE`][fedd]和[`DECODE`][1339]操作。只要哈希表中的元素具有读写一致性，`ENCODER\u002FDECODER`对象就可以被保存和加载（参见[持久化][29a1]）。\n\n    ```common-lisp\n    (let ((indexer\n            (make-indexer\n             (alexandria:alist-hash-table '((\"I\" . 3) (\"me\" . 2) (\"mine\" . 
1)))\n             2)))\n      (values (encode indexer \"I\")\n              (encode indexer \"me\")\n              (encode indexer \"mine\")\n              (decode indexer 0)\n              (decode indexer 1)\n              (decode indexer 2)))\n    => 0\n    => 1\n    => NIL\n    => \"I\"\n    => \"me\"\n    => NIL\n    ```\n\n\u003Ca id=\"x-28MGL-CORE-3AMAKE-INDEXER-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-INDEXER** *SCORED-FEATURES N &KEY (START 0) (CLASS 'ENCODER\u002FDECODER)*\n\n    从`SCORED-FEATURES`中选取前`N`个特征（参见[特征选择][1b5e]），并从`START`开始为其分配索引。返回一个[`ENCODER\u002FDECODER`][1beb]（或其它`CLASS`），用于在对象和索引之间进行转换。\n\n    另请参阅[词袋模型][0784]。\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 9 基于梯度的优化\n\n###### \\[在包 MGL-OPT 中\\]\n我们有一个实值可微函数F，任务是找到使其取最小值的参数。优化过程从F的参数空间中的某一点开始，然后基于当前点或其附近点的梯度和函数值，迭代地更新该点。\n\n需要注意的是，尽管问题描述为全局优化，但对于非凸函数而言，大多数优化算法往往会收敛到局部最优解。\n\n目前有两种优化算法：\n[梯度下降法][10e7]（及其多种变体）和[共轭梯度法][83e6]，它们都属于一阶方法（不需要二阶导数），但可以通过[扩展API][6a6f]添加更多算法。\n\n\u003Ca id=\"x-28MGL-OPT-3AMINIMIZE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MINIMIZE** *OPTIMIZER GRADIENT-SOURCE &KEY (WEIGHTS (LIST-SEGMENTS GRADIENT-SOURCE)) (DATASET \\*INFINITELY-EMPTY-DATASET\\*)*\n\n    通过更新`GRADIENT-SOURCE`所代表的实值函数的部分参数（即`WEIGHTS`，可以是`MAT`或一系列`MAT`），来最小化该函数的值。最后返回`WEIGHTS`。`DATASET`（参见[数据集][109e]）是同一函数的一组未优化参数。例如，`WEIGHTS`可能是神经网络的权重，而`DATASET`则是由适合[`SET-INPUT`][0c9e]的输入组成的训练集。默认的`DATASET`（[`*INFINITELY-EMPTY-DATASET*`][ad8f]）适用于所有参数均已优化的情况，此时环境中已无任何可供获取的信息。\n\n    当`DATASET`为采样器且样本耗尽时，或者满足其他终止条件时（参见[`TERMINATION`][9006]等），优化过程将终止。如果`DATASET`是`SEQUENCE`类型，则会反复循环使用。\n\n    各种优化器的示例分别在[梯度下降法][10e7]和[共轭梯度法][83e6]中提供。\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-ITERATIVE-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.1 迭代优化器\n\n\u003Ca id=\"x-28MGL-OPT-3AITERATIVE-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **ITERATIVE-OPTIMIZER**\n\n    是基于[梯度下降][10e7]和[共轭梯度][83e6]的优化器的抽象基类，它会迭代处理实例，直到满足终止条件为止。\n\n\u003Ca id=\"x-28MGL-OPT-3AN-INSTANCES-20-28MGL-PAX-3AREADER-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [读取器] **N-INSTANCES** *[ITERATIVE-OPTIMIZER][8da0] (:N-INSTANCES = 0)*\n\n    此优化器迄今为止已处理的实例数量。在优化过程中自动递增。\n\n\u003Ca id=\"x-28MGL-OPT-3ATERMINATION-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **TERMINATION** *[ITERATIVE-OPTIMIZER][8da0] (:TERMINATION = NIL)*\n\n    如果是一个数字，则表示以[`N-INSTANCES`][4c73]为单位的训练实例数。当`N-INSTANCES`等于或大于该值时，优化将停止。如果`TERMINATION`为`NIL`，则优化将继续进行。如果为`T`，则优化将停止。如果它是一个无参数的函数，则其返回值将被视为由`TERMINATION`返回的值来处理。\n\n\u003Ca id=\"x-28MGL-OPT-3AON-OPTIMIZATION-STARTED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **ON-OPTIMIZATION-STARTED** *[ITERATIVE-OPTIMIZER][8da0] (:ON-OPTIMIZATION-STARTED = NIL)*\n\n    一个带有参数`(OPTIMIZER GRADIENT-SOURCE N-INSTANCES)`的事件钩子。在完成初始化（INITIALIZE-OPTIMIZER*、INITIALIZE-GRADIENT-SOURCE*）之后，但在优化开始之前调用。\n\n\u003Ca id=\"x-28MGL-OPT-3AON-OPTIMIZATION-FINISHED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **ON-OPTIMIZATION-FINISHED** *[ITERATIVE-OPTIMIZER][8da0] (:ON-OPTIMIZATION-FINISHED = NIL)*\n\n    一个带有参数`(OPTIMIZER GRADIENT-SOURCE N-INSTANCES)`的事件钩子。在优化完成后调用。\n\n\u003Ca id=\"x-28MGL-OPT-3AON-N-INSTANCES-CHANGED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **ON-N-INSTANCES-CHANGED** *[ITERATIVE-OPTIMIZER][8da0] (:ON-N-INSTANCES-CHANGED = NIL)*\n\n    一个带有参数`(OPTIMIZER GRADIENT-SOURCE N-INSTANCES)`的事件钩子。在一批实例的优化完成且[`N-INSTANCES`][4c73]递增时调用。\n\n
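例如（仅为示意，这里借用了后文才介绍的 `SGD-OPTIMIZER`），可以向 [`ON-N-INSTANCES-CHANGED`][4f0b] 钩子里加入一个简单的进度打印函数：\n\n```commonlisp\n(let ((optimizer (make-instance 'sgd-optimizer :termination 1000)))\n  ;; 钩子函数以 (OPTIMIZER GRADIENT-SOURCE N-INSTANCES) 为参数被调用。\n  (push (lambda (optimizer gradient-source n-instances)\n          (declare (ignore optimizer gradient-source))\n          (format t \"已处理 ~S 个实例~%\" n-instances))\n        (on-n-instances-changed optimizer))\n  optimizer)\n```\n\n现在让我们讨论几个实用的工具。\n\n\u003Ca 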
id=\"x-28MGL-OPT-3AMONITOR-OPTIMIZATION-PERIODICALLY-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MONITOR-OPTIMIZATION-PERIODICALLY** *OPTIMIZER PERIODIC-FNS*\n\n    对于`PERIODIC-FNS`列表中的每个周期性函数，将其监控添加到`OPTIMIZER`的[`ON-OPTIMIZATION-STARTED`][ebd4]、[`ON-OPTIMIZATION-FINISHED`][0072]和[`ON-N-INSTANCES-CHANGED`][4f0b]钩子中。这些监控是简单的函数，它们只是使用事件参数（`OPTIMIZER` `GRADIENT-SOURCE` [`N-INSTANCES`][4c73]）调用每个周期性函数。返回`OPTIMIZER`。\n\n    要在`OPTIMIZER`每看到1000个实例后记录并重置梯度源的监控：\n\n        (monitor-optimization-periodically optimizer\n                                           '((:fn log-my-test-error\n                                              :period 2000)\n                                             (:fn reset-optimization-monitors\n                                              :period 1000\n                                              :last-eval 0)))\n    \n    注意，我们不需要传递`PERIODIC-FN`本身，而是可以直接传递`PERIODIC-FN`的初始化参数。`:LAST-EVAL`为0的部分可以防止[`RESET-OPTIMIZATION-MONITORS`][ca09]在优化开始时被调用，因为那时监控本来就是空的。\n\n\u003Ca id=\"x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **RESET-OPTIMIZATION-MONITORS** *OPTIMIZER GRADIENT-SOURCE*\n\n    报告`OPTIMIZER`和`GRADIENT-SOURCE`的[`MONITORS`][8f37]状态，并重置它们的计数器。参见[`MONITOR-OPTIMIZATION-PERIODICALLY`][4528]了解其使用示例。\n\n\u003Ca id=\"x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20-28METHOD-20-28MGL-OPT-3AITERATIVE-OPTIMIZER-20T-29-29-29\">\u003C\u002Fa>\n\n- [方法] **RESET-OPTIMIZATION-MONITORS** *(OPTIMIZER ITERATIVE-OPTIMIZER) GRADIENT-SOURCE*\n\n    记录`OPTIMIZER`和`GRADIENT-SOURCE`的监控计数器，并将其重置。\n\n\u003Ca id=\"x-28MGL-OPT-3AREPORT-OPTIMIZATION-PARAMETERS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **REPORT-OPTIMIZATION-PARAMETERS** *OPTIMIZER GRADIENT-SOURCE*\n\n    一种常在优化开始时调用的实用函数（来自[`ON-OPTIMIZATION-STARTED`][ebd4]）。默认实现会记录`GRADIENT-SOURCE`（如[`DESCRIBE`][6651]所示）和`OPTIMIZER`的描述，并调用[`LOG-MAT-ROOM`][ea7d]。\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-COST-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.2 损失函数\n\n通常将被最小化的函数称为*成本*或*损失*函数。\n\n\u003Ca id=\"x-28MGL-COMMON-3ACOST-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **COST** *MODEL*\n\n    返回正在被最小化的成本函数的值。只有在正在进行的优化过程中调用此函数才有意义（参见[`MINIMIZE`][46a4]）。成本是指一批实例的成本。\n\n\u003Ca id=\"x-28MGL-OPT-3AMAKE-COST-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-COST-MONITORS** *MODEL &KEY OPERATION-MODE ATTRIBUTES*\n\n    返回一个[`MONITOR`][7068]对象列表，每个对象都与一个属性为`:TYPE` \"cost\"的[`BASIC-COUNTER`][5979]相关联。基于[`MAKE-COST-MONITORS*`][3815]实现。\n\n\u003Ca id=\"x-28MGL-OPT-3AMAKE-COST-MONITORS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAKE-COST-MONITORS\\*** *MODEL OPERATION-MODE ATTRIBUTES*\n\n    与[`MAKE-COST-MONITORS`][46c2]相同，只是多了一些关键字参数。可以通过专门化此函数来支持新的模型类型。\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 9.3 梯度下降\n\n###### \\[在包 MGL-GD 中\\]\n梯度下降是一种一阶优化算法。它完全依赖于一阶导数，甚至不会评估要被最小化的函数。让我们看看如何针对某些参数最小化一个数值 Lisp 函数。\n\n```commonlisp\n(cl:defpackage :mgl-example-sgd\n  (:use #:common-lisp #:mgl))\n\n(in-package :mgl-example-sgd)\n\n;;; 创建一个表示正弦函数的对象。\n(defparameter *diff-fn-1*\n  (make-instance 'mgl-diffun:diffun\n                 :fn #'sin\n                 ;; 我们将优化它的唯一参数。\n                 :weight-indices '(0)))\n\n;;; 最小化 SIN。注意这里没有数据集参与，因为所有参数都在被优化。\n(minimize (make-instance 'sgd-optimizer :termination 1000)\n          *diff-fn-1*\n          :weights (make-mat 1))\n;;; => 一个 MAT，其中只有一个值，约为 -pi\u002F2。\n\n;;; 创建一个可微函数 f(x,y)=(x-y)^2。其中 x 是一个参数，\n;;; 其值来自传递给 MINIMIZE 的 DATASET 参数。y 是需要优化的参数（即“权重”）。\n(defparameter *diff-fn-2*\n  (make-instance 
'mgl-diffun:diffun\n                 :fn (lambda (x y)\n                       (expt (- x y) 2))\n                 :parameter-indices '(0)\n                 :weight-indices '(1)))\n\n;;; 找到与采样器生成的样本距离最小的 y 值。\n(minimize (make-instance 'sgd-optimizer :batch-size 10)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset (make-instance 'function-sampler\n                                  :generator (lambda ()\n                                               (list (+ 10\n                                                        (gaussian-random-1))))\n                                  :max-n-samples 1000))\n;;; => 一个包含单个值约为 10 的 MAT，这是数据集中样本的期望值。\n\n;;; 数据集也可以是一个 SEQUENCE，在这种情况下我们最好设置 TERMINATION，\n;;; 否则优化过程将永远不会结束。\n(minimize (make-instance 'sgd-optimizer :termination 1000)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset '((0) (1) (2) (3) (4) (5)))\n;;; => 一个包含单个值约为 2.5 的 MAT。\n```\n\n我们将看到一些用于访问优化器参数的访问器。\n一般来说，在优化过程中可以随时使用 [`SETF`][a138] 修改可变槽访问器（与只读和只写访问器不同），\n也可以在优化器子类上定义方法来以任何方式计算该值。例如，让学习率随已处理的实例数逐渐衰减：\n\n```commonlisp\n(defmethod learning-rate ((optimizer my-sgd-optimizer))\n  (* (slot-value optimizer 'learning-rate)\n     (expt 0.998\n           (\u002F (n-instances optimizer) 60000))))\n```\n\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.3.1 基于批次的优化器\n\n首先让我们看看所有基于批次的优化器共有的内容，\n然后再讨论 [SGD 优化器][25fd]、[Adam 优化器][bd13] 和\n[归一化批次优化器][0c91]。所有基于批次的优化器都是\n[`ITERATIVE-OPTIMIZER`][8da0]，因此也请参阅\n[迭代优化器][779d]。\n\n\u003Ca id=\"x-28MGL-GD-3ABATCH-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **BATCH-GD-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n    这是另一个基于梯度的优化器的抽象基类，\n    它会在处理完 [`BATCH-SIZE`][fa6d] 个输入后同时更新所有权重。其子类包括\n    [`SGD-OPTIMIZER`][2a2f]、[`ADAM-OPTIMIZER`][e0e6] 和\n    [`NORMALIZED-BATCH-GD-OPTIMIZER`][f6ae]。\n    \n    当某些权重可能由于缺少输入值而未被使用时，\n    [`PER-WEIGHT-BATCH-GD-OPTIMIZER`][5a43] 可能是更好的选择。\n\n\u003Ca id=\"x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **BATCH-SIZE** *GD-OPTIMIZER (:BATCH-SIZE = 1)*\n\n    在处理完 `BATCH-SIZE` 个输入后，权重会被更新。当 `BATCH-SIZE` 为 1 时，就得到随机梯度下降；\n    当 `BATCH-SIZE` 等于数据集中的实例数量时，就得到标准的“批”梯度下降；\n    而当 `BATCH-SIZE` 大小介于两者之间时，就得到了最实用的“小批量”折中方案。\n\n\u003Ca id=\"x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **LEARNING-RATE** *GD-OPTIMIZER (:LEARNING-RATE = 0.1)*\n\n    这是沿着梯度方向的步长。如果优化发散，就减小它；\n    如果优化没有进展，则增大它。\n\n\u003Ca id=\"x-28MGL-GD-3AMOMENTUM-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **MOMENTUM** *GD-OPTIMIZER (:MOMENTUM = 0)*\n\n    取值范围为 [0, 1)。`MOMENTUM` 倍的前一次权重变化会加到梯度上。0 表示没有动量。\n\n\u003Ca id=\"x-28MGL-GD-3AMOMENTUM-TYPE-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [读取器] **MOMENTUM-TYPE** *GD-OPTIMIZER (:MOMENTUM-TYPE = :NORMAL)*\n\n    可以是 `:NORMAL`、`:NESTEROV` 或 `:NONE` 中的一种。对于纯粹的优化问题，Nesterov 动量可能更好，\n    但同时也可能增加过拟合的风险。使用 `:NONE` 相当于动量为 0，而且占用更少的内存。需要注意的是，\n    即使 `MOMENTUM` 不为零，使用 `:NONE` 时也会忽略 `MOMENTUM`。\n\n
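把上面几个参数放在一起看一个小示意（数值仅为演示，`WEIGHT-DECAY` 见下文）：\n\n```commonlisp\n;; 一个带 Nesterov 动量与权重衰减的小批量 SGD 优化器。\n(make-instance 'sgd-optimizer\n               :batch-size 100\n               :learning-rate 0.01\n               :momentum 0.9\n               :momentum-type :nesterov\n               :weight-decay 0.0001)\n```\n\n\u003Ca 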
id=\"x-28MGL-GD-3AWEIGHT-PENALTY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **WEIGHT-PENALTY** *GD-OPTIMIZER (:WEIGHT-PENALTY = 0)*\n\n    这是一种 L1 正则化惩罚项。它会鼓励稀疏性。\n    `SIGN`(WEIGHT) 乘以 `WEIGHT-PENALTY` 会被加到梯度上，从而推动权重趋向于负无穷。这相当于在所求极小值的函数中添加了\n    `WEIGHT-PENALTY`\\*sum\\_i{abs(WEIGHT\\_i)}。将其应用于特征偏置，则会对特征施加稀疏性约束。\n\n\u003Ca id=\"x-28MGL-GD-3AUSE-SEGMENT-DERIVATIVES-P-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [读者] **USE-SEGMENT-DERIVATIVES-P** *GD-OPTIMIZER (:USE-SEGMENT-DERIVATIVES-P = NIL)*\n\n    如果梯度来源（即被优化的模型）和优化器都支持此功能，则可以节省内存。其工作原理如下：\n    梯度来源会被要求将某个片段的导数放入一个累加器中，这个累加器就是该片段的 [`SEGMENT-DERIVATIVES`][9a5b]。\n    这样优化器就不必再分配一个用于汇总导数的矩阵。\n\n\u003Ca id=\"x-28MGL-GD-3AAFTER-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **AFTER-UPDATE-HOOK** *GD-OPTIMIZER (:AFTER-UPDATE-HOOK = NIL)*\n\n    一组无参数函数，在每次权重更新后被调用。\n\n\u003Ca id=\"x-28MGL-GD-3ABEFORE-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3ABATCH-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **BEFORE-UPDATE-HOOK** *[BATCH-GD-OPTIMIZER][d94e] (:BEFORE-UPDATE-HOOK = NIL)*\n\n    一组无参数函数。每个函数在权重更新之前被调用（即在累积的梯度被除以批次长度之后）。\n    这很方便用来附加一些额外的梯度累积代码。\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-SGD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### SGD 优化器\n\n\u003Ca id=\"x-28MGL-GD-3ASGD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **SGD-OPTIMIZER** *[BATCH-GD-OPTIMIZER][d94e]*\n\n当[`BATCH-SIZE`][fa6d]为1时，这就是随机梯度下降。当批量大小增大时，就变成了小批量梯度下降和批量梯度下降。\n\n假设`ACCUMULATOR`包含了某个小批量的梯度之和，那么权重更新公式如下：\n\n$$\n\\Delta_w^{t+1} = momentum * \\Delta_w^t\n  + \\frac{accumulator}{batchsize}\n  + l_2 w + l_1 sign(w)\n$$\n\n$$\nw^{t+1} = w^{t} - learningrate * \\Delta_w,\n$$\n\n这与更传统的形式相同：\n\n$$\n\\Delta_w^{t+1} = momentum * \\Delta_w^{t}\n  + learningrate * \\left(\\frac{\\frac{df}{dw}}{batchsize}\n                       + l_2 w + l_1 sign(w)\\right)\n$$\n\n$$\nw^{t+1} = w^{t} - \\Delta_w,\n$$\n\n不过，在优化过程中批量大小、动量或学习率发生变化时，前者的效果更好。上述是使用普通动量的情况；此外，还有Nesterov动量（参见[`MOMENTUM-TYPE`][5611]）可供选择。\n\n有关所有基于批量的优化器共有的各种选项的说明，请参阅[Batch Based Optimizers][2c39]。\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-ADAM-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Adam优化器\n\n\u003Ca id=\"x-28MGL-GD-3AADAM-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **ADAM-OPTIMIZER** *[BATCH-GD-OPTIMIZER][d94e]*\n\n    Adam是一种一阶随机梯度下降优化器。它通过指数移动平均法维护每个参数导数的均值和原始方差的内部估计。其步长基本上是`M\u002F(sqrt(V)+E)`，其中`M`是估计的均值，`V`是估计的方差，而`E`是一个小调整因子，用于防止梯度爆炸。更多信息请参阅该论文的第5版（http:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6980）。\n\n    需要注意的是，Adam不支持使用动量。事实上，如果动量不是`:NONE`，系统会报错。\n\n    有关所有基于批量的优化器共有的各种选项的说明，请参阅[Batch Based Optimizers][2c39]。\n\n\u003Ca id=\"x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **LEARNING-RATE** *[ADAM-OPTIMIZER][e0e6] (= 2.0e-4)*\n\n    与[`LEARNING-RATE`][09ed]相同，但采用了Adam论文中建议的默认值。\n\n\u003Ca id=\"x-28MGL-GD-3AMEAN-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **MEAN-DECAY** *[ADAM-OPTIMIZER][e0e6] (:MEAN-DECAY = 0.9)*\n\n    一个介于0和1之间的数，决定了导数估计均值的更新速度。若设为0，则基本等同于`RMSPROP`（前提是[`VARIANCE-DECAY`][0900]不太大），或者AdaGrad（若`VARIANCE-DECAY`接近1且学习率逐渐衰减）。这在论文中对应$\\beta_1$。\n\n\u003Ca id=\"x-28MGL-GD-3AMEAN-DECAY-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [accessor] **MEAN-DECAY-DECAY** *[ADAM-OPTIMIZER][e0e6] (:MEAN-DECAY-DECAY = (- 1 1.0d-7))*\n\n    
一个应接近1的值。每次更新后，[`MEAN-DECAY`][011d]都会乘以此值。这在论文中对应$\\lambda$。\n\n\u003Ca id=\"x-28MGL-GD-3AVARIANCE-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **VARIANCE-DECAY** *[ADAM-OPTIMIZER][e0e6] (:VARIANCE-DECAY = 0.999)*\n\n    一个介于0和1之间的数，决定了导数估计方差的更新速度。这在论文中对应$\\beta_2$。\n\n\u003Ca id=\"x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **VARIANCE-ADJUSTMENT** *[ADAM-OPTIMIZER][e0e6] (:VARIANCE-ADJUSTMENT = 1.0d-7)*\n\n    在Adam内部，估计的均值会被除以估计的方差的平方根（按每个权重计算），如果分母接近零，可能会导致数值问题。为了避免这种情况，会在分母上加上一个很小的正数`VARIANCE-ADJUSTMENT`。这在论文中对应`epsilon`。\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-NORMALIZED-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 归一化批量优化器\n\n\u003Ca id=\"x-28MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **NORMALIZED-BATCH-GD-OPTIMIZER** *[BATCH-GD-OPTIMIZER][d94e]*\n\n    类似于[`BATCH-GD-OPTIMIZER`][d94e]，但它会记录每个权重在批次中被使用的次数，并用这个次数来除累积梯度，而不是除以`N-INSTANCES-IN-BATCH`。只有当所训练的学习器中存在缺失值时，这种做法才会产生差异。该类与[`PER-WEIGHT-BATCH-GD-OPTIMIZER`][5a43]的主要区别在于，所有权重的批次结束时间都是一致的。\n\n\u003Ca id=\"x-28MGL-GD-3AN-WEIGHT-USES-IN-BATCH-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **N-WEIGHT-USES-IN-BATCH** *[NORMALIZED-BATCH-GD-OPTIMIZER][f6ae]*\n\n    当前批次中该权重被使用的次数。\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-SEGMENTED-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.3.2 分段GD优化器\n\n\u003Ca id=\"x-28MGL-GD-3ASEGMENTED-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **SEGMENTED-GD-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n    这种优化器会将各个分段的训练委托给其他优化器。它可用于将不同分段的训练委托给不同的优化器（这些优化器必须能够处理可分段的数据），或者仅仅是为了不训练所有分段。\n\n\u003Ca id=\"x-28MGL-GD-3ASEGMENTER-20-28MGL-PAX-3AREADER-20MGL-GD-3ASEGMENTED-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [读取器] **SEGMENTER** *[SEGMENTED-GD-OPTIMIZER][3ce0] (:SEGMENTER)*\n\n    当此优化器初始化时，它会遍历学习器的分段，使用[`MAP-SEGMENTS`][2312]。`SEGMENTER`是一个函数，会对每个分段调用一次，并返回一个优化器或`NIL`。多个分段可以映射到同一个优化器。在收集完分段到优化器的映射关系后，每个优化器都会通过INITIALIZE-OPTIMIZER*，用分配给它的分段列表进行初始化。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENTS-20-28MGL-PAX-3AREADER-20MGL-GD-3ASEGMENTED-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [读取器] **SEGMENTS** *[SEGMENTED-GD-OPTIMIZER][3ce0]*\n\n[`SEGMENTED-GD-OPTIMIZER`][3ce0]继承自[`ITERATIVE-OPTIMIZER`][8da0]，因此也请参阅[迭代优化器][779d]。\n\n\u003Ca id=\"x-28MGL-GD-3A-40MGL-GD-PER-WEIGHT-OPTIMIZATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.3.3 按权重优化\n\n\u003Ca id=\"x-28MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **PER-WEIGHT-BATCH-GD-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n这与[基于批次的优化器][2c39]非常相似，但它在何时更新权重方面更加智能。基本上，每个权重都有自己的批处理，与其他权重的批处理无关。这种设计具有一些理想的性质。例如，可以将两个神经网络组合在一起，而无需在它们之间添加任何连接，这样学习的结果将等同于它们各自独立训练的情况。此外，添加仅包含缺失值的输入也不会改变任何结果。\n\n由于其完全非批处理的特性，目前没有该优化器的CUDA实现。\n\n\u003Ca id=\"x-28MGL-GD-3AN-WEIGHT-USES-IN-BATCH-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **N-WEIGHT-USES-IN-BATCH** *[PER-WEIGHT-BATCH-GD-OPTIMIZER][5a43]*\n\n    该权重在其当前批处理中被使用的次数。\n\n\u003Ca id=\"x-28MGL-GD-3ACLIP-L2-NORM-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **CLIP-L2-NORM** *MATS L2-UPPER-BOUND &KEY CALLBACK*\n\n    将`MATS`缩放，使其$L_2$范数不超过`L2-UPPER-BOUND`。\n\n    将`MATS`视为一个单一向量来计算其范数。如果范数大于`L2-UPPER-BOUND`，则以范数除以`L2-UPPER-BOUND`的比例对每个矩阵进行破坏性缩放；如果`CALLBACK`不为`NIL`，则使用缩放因子调用`CALLBACK`函数。\n\n
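    一个小示意（数值仅为演示）：两个矩阵被视为拼接后的单一向量来计算范数并裁剪：\n\n    ```commonlisp\n    (let ((w1 (make-mat 3 :initial-element 3d0))\n          (w2 (make-mat 4)))\n      ;; 拼接后的 L2 范数为 sqrt(27)，约 5.2，超过上界 5，\n      ;; 因此每个矩阵都会被破坏性地按同一比例缩放。\n      (clip-l2-norm (list w1 w2) 5)\n      w1)\n    ```\n\n\u003Ca id=\"x-28MGL-GD-3AARRANGE-FOR-CLIPPING-GRADIENTS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] 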
**ARRANGE-FOR-CLIPPING-GRADIENTS** *BATCH-GD-OPTIMIZER L2-UPPER-BOUND &KEY CALLBACK*\n\n    确保由`BATCH-GD-OPTIMIZER`累积的批次归一化梯度的范数在每次更新前都被裁剪到`L2-UPPER-BOUND`。参见[`CLIP-L2-NORM`][af6b]。\n\n\u003Ca id=\"x-28MGL-CG-3A-40MGL-CG-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n\n\n### 9.4 共轭梯度法\n\n###### \\[在包 MGL-CG 中\\]\n共轭梯度法是一种一阶优化算法。它比梯度下降法更为先进，因为它会进行线搜索，但这也使得它不适合用于非确定性函数。下面我们来看看如何针对某些参数最小化一个数值Lisp函数。\n\n```commonlisp\n;;; 创建一个表示正弦函数的对象。\n(defparameter *diff-fn-1*\n  (make-instance 'mgl-diffun:diffun\n                 :fn #'sin\n                 ;; 我们将优化其唯一的参数。\n                 :weight-indices '(0)))\n\n;;; 最小化SIN。注意这里没有数据集参与，因为所有参数都在被优化。\n(minimize (make-instance 'cg-optimizer\n                         :batch-size 1\n                         :termination 1)\n          *diff-fn-1*\n          :weights (make-mat 1))\n;;; => 一个MAT，其中只有一个值，约为-pi\u002F2。\n\n;;; 创建一个可微分的函数f(x,y)=(x-y)^2。x是一个来自传递给MINIMIZE的DATASET参数的值，y则是需要优化的参数（即“权重”）。\n(defparameter *diff-fn-2*\n  (make-instance 'mgl-diffun:diffun\n                 :fn (lambda (x y)\n                       (expt (- x y) 2))\n                 :parameter-indices '(0)\n                 :weight-indices '(1)))\n\n;;; 找出与采样器生成的实例之间距离最小的y值。\n(minimize (make-instance 'cg-optimizer :batch-size 10)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset (make-instance 'function-sampler\n                                  :generator (lambda ()\n                                               (list (+ 10\n                                                        (gaussian-random-1))))\n                                  :max-n-samples 1000))\n;;; => 一个MAT，其中只有一个值，约为10，即数据集中实例的期望值。\n\n;;; 数据集也可以是一个SEQUENCE，在这种情况下最好设置TERMINATION，否则优化将永远不会结束。请注意，只需一个epoch就足够了。\n(minimize (make-instance 'cg-optimizer :termination 6)\n          *diff-fn-2*\n          :weights (make-mat 1)\n          :dataset '((0) (1) (2) (3) (4) (5)))\n;;; => 一个MAT，其中只有一个值，约为2.5。\n```\n\n\n\u003Ca id=\"x-28MGL-CG-3ACG-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **CG** *FN W &KEY (MAX-N-LINE-SEARCHES \\*DEFAULT-MAX-N-LINE-SEARCHES\\*) (MAX-N-EVALUATIONS-PER-LINE-SEARCH \\*DEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH\\*) (MAX-N-EVALUATIONS \\*DEFAULT-MAX-N-EVALUATIONS\\*) (SIG \\*DEFAULT-SIG\\*) (RHO \\*DEFAULT-RHO\\*) (INT \\*DEFAULT-INT\\*) (EXT \\*DEFAULT-EXT\\*) (RATIO \\*DEFAULT-RATIO\\*) SPARE-VECTORS*\n\n[`CG-OPTIMIZER`][ee97] 会将每一批数据传递给此函数，并同时传递其\n    [`CG-ARGS`][9749]。\n    \n    使用共轭梯度法最小化一个可微的多元函数。采用 Polak-Ribiere 型共轭梯度来计算搜索方向，结合二次和三次多项式近似以及 Wolfe-Powell 停止准则进行线搜索，并使用斜率比方法来猜测初始步长。此外，还进行一系列检查以确保探索正在进行，且外推不会无限制地增大。\n    \n    `FN` 是一个接受两个参数的函数：[`WEIGHTS`][ab3c] 和 `DERIVATIVES`。`WEIGHTS` 是一个与 `W` 大小相同的 `MAT`，表示搜索的起始点。`DERIVATIVES` 也是一个相同大小的 `MAT`，用于存放 `FN` 计算出的偏导数。`FN` 返回待最小化的函数值。\n    \n    `CG` 会执行多次线搜索，并在每一步调用 `FN`。一次线搜索最多调用 `MAX-N-EVALUATIONS-PER-LINE-SEARCH` 次，可能会成功将最小值改进到足够大的幅度，也可能失败。需要注意的是，即使线搜索失败，仍有可能进一步改进结果，只是认为改进幅度太小而已。`CG` 会在以下任一情况下停止：\n    \n    - 连续两次线搜索失败\n    \n    - 达到 `MAX-N-LINE-SEARCHES`\n    \n    - 达到 `MAX-N-EVALUATIONS`\n    \n    `CG` 返回一个 `MAT`，其中包含最佳权重、最小值、已执行的线搜索次数、成功的线搜索次数以及评估次数。\n    \n    使用 `MAX-N-EVALUATIONS` 时，请注意，在第一次线搜索之前还会额外评估一次 `FN`。\n    \n    `SPARE-VECTORS` 是一组预先分配的与 `W` 大小相同的 `MAT`。传递 6 个即可满足当前算法的需求，从而完全避免动态分配大小为 `W` 的向量。\n    \n    `注意`：如果函数在少数几次迭代内就终止，这可能表明函数值和导数值不一致（即 `FN` 函数的实现可能存在错误）。\n    \n    `SIG` 和 `RHO` 是控制 Wolfe-Powell 条件的常数。`SIG` 是允许的前一次与新一次斜率（搜索方向上的导数）之间绝对值的最大比值，因此将 `SIG` 设置为较低的正值会提高线搜索的精度。`RHO` 是允许的最低分数，表示线搜索中从初始点斜率所预期的变化比例。常数必须满足 0 \u003C `RHO` \u003C `SIG` \u003C 1。根据待优化函数的性质调整 `SIG` 可以加快最小化过程；而 `RHO` 则不太值得过多调整。\n\n
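    通常经由 [`CG-OPTIMIZER`][ee97] 间接使用 `CG`；下面是一个直接调用的小示意，最小化单权重函数 f(w) = (w - 3)^2：\n\n    ```commonlisp\n    ;; FN 要把偏导数填入 DERIVATIVES 并返回当前函数值。\n    (cg (lambda (weights derivatives)\n          (let ((w (mref weights 0)))\n            (setf (mref derivatives 0) (* 2 (- w 3)))\n            (expt (- w 3) 2)))\n        (make-mat 1))\n    ;; => 最佳权重（约为 3）、最小值、线搜索次数、成功次数与评估次数。\n    ```\n\n\u003Ca 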
id=\"x-28MGL-CG-3A-2ADEFAULT-INT-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-INT\\*** *0.1*\n\n    在当前区间极限的 `INT` 范围内，不再重新评估。\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-EXT-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-EXT\\*** *3*\n\n    最多可以将当前步长外推 `EXT` 倍。\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-SIG-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-SIG\\*** *0.1*\n\n    `SIG` 和 `RHO` 是控制 Wolfe-Powell 条件的常数。`SIG` 是允许的前一次与新一次斜率（搜索方向上的导数）之间绝对值的最大比值，因此将 `SIG` 设置为较低的正值会提高线搜索的精度。\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-RHO-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-RHO\\*** *0.05*\n\n    `RHO` 是允许的最低分数，表示线搜索中从初始点斜率所预期的变化比例。常数必须满足 0 \u003C `RHO` \u003C `SIG` \u003C 1。\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-RATIO-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-RATIO\\*** *10*\n\n    允许的最大斜率比。\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-MAX-N-LINE-SEARCHES-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-MAX-N-LINE-SEARCHES\\*** *NIL*\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-MAX-N-EVALUATIONS-PER-LINE-SEARCH\\*** *20*\n\n\u003Ca id=\"x-28MGL-CG-3A-2ADEFAULT-MAX-N-EVALUATIONS-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*DEFAULT-MAX-N-EVALUATIONS\\*** *NIL*\n\n\u003Ca id=\"x-28MGL-CG-3ACG-OPTIMIZER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **CG-OPTIMIZER** *[ITERATIVE-OPTIMIZER][8da0]*\n\n    在处理完 [`BATCH-SIZE`][fa6d] 个输入后，同时更新所有权重。\n\n\u003Ca id=\"x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **BATCH-SIZE** *[CG-OPTIMIZER][ee97] (:BATCH-SIZE)*\n\n    处理完 `BATCH-SIZE` 个实例后，权重就会更新。通常，[`CG`][4ffb] 会作用于所有可用数据，但为了减少过拟合，使用较小的批次大小可以在优化过程中引入一些噪声。如果未设置 `BATCH-SIZE`，则会在优化开始时将其初始化为数据集的大小。\n\n\u003Ca id=\"x-28MGL-CG-3ACG-ARGS-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **CG-ARGS** *[CG-OPTIMIZER][ee97] (:CG-ARGS = 'NIL)*\n\n\u003Ca id=\"x-28MGL-CG-3AON-CG-BATCH-DONE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [访问器] **ON-CG-BATCH-DONE** *[CG-OPTIMIZER][ee97] (:ON-CG-BATCH-DONE = NIL)*\n\n    当共轭梯度批次处理完成后触发的事件钩子。该钩子上的处理器会被调用，并传入 8 个参数：\n    \n        (优化器 梯度源 实例\n         最佳权重 最佳函数值 线搜索次数\n         成功线搜索次数 评估次数)\n    \n    其中后 5 个参数正是 [`CG`][4ffb] 函数的返回值。\n\n\u003Ca id=\"x-28MGL-CG-3ALOG-CG-BATCH-DONE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **LOG-CG-BATCH-DONE** *优化器 梯度源 实例 最佳权重 最佳函数值 线搜索次数 成功线搜索次数 评估次数*\n\n    这是一个可以添加到 [`ON-CG-BATCH-DONE`][d10a] 的函数。默认实现只是简单地记录事件参数。\n\n\u003Ca id=\"x-28MGL-CG-3ASEGMENT-FILTER-20-28MGL-PAX-3AREADER-20MGL-CG-3ACG-OPTIMIZER-29-29\">\u003C\u002Fa>\n\n- [读取器] **SEGMENT-FILTER** *[CG-OPTIMIZER][ee97] (:SEGMENT-FILTER = (CONSTANTLY T))*\n\n    一个用于筛选段落的兴趣性与否的谓词函数。由 [`INITIALIZE-OPTIMIZER*`][7c2f] 调用。\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-EXTENSION-API-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n\n\n### 9.5 扩展 API\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-optimizer-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.5.1 实现优化器\n\n对于新的优化器类型，必须对以下通用函数进行专门化定义。\n\n\u003Ca id=\"x-28MGL-OPT-3AMINIMIZE-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MINIMIZE\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET*\n\n    由[`MINIMIZE`][46a4]在调用[`INITIALIZE-OPTIMIZER*`][7c2f]和[`INITIALIZE-GRADIENT-SOURCE*`][dd95]之后调用，此通用函数是编写优化器的主要扩展点。\n\n\u003Ca id=\"x-28MGL-OPT-3AINITIALIZE-OPTIMIZER-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **INITIALIZE-OPTIMIZER\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS 
DATASET*\n\n    此函数会在训练开始前自动调用，用于设置`OPTIMIZER`以适于优化`GRADIENT-SOURCE`。通常会为梯度创建适当大小的累加器。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENTS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SEGMENTS** *OPTIMIZER*\n\n    单个优化器可以优化多个被称为“段”的权重矩阵。此函数将它们作为一个列表返回。\n\n其余部分仅作为实现优化器的实用工具。\n\n\u003Ca id=\"x-28MGL-OPT-3ATERMINATE-OPTIMIZATION-P-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **TERMINATE-OPTIMIZATION-P** *N-INSTANCES TERMINATION*\n\n    用于[`ITERATIVE-OPTIMIZER`][8da0]子类的实用函数。它根据`N-INSTANCES`和`TERMINATION`（分别为`ITERATIVE-OPTIMIZER`相应访问器的值）来决定是否终止优化。\n\n\u003Ca id=\"x-28MGL-OPT-3ASET-N-INSTANCES-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SET-N-INSTANCES** *OPTIMIZER GRADIENT-SOURCE N-INSTANCES*\n\n    设置`OPTIMIZER`的[`N-INSTANCES`][4c73]，并触发[`ON-N-INSTANCES-CHANGED`][4f0b]。[`ITERATIVE-OPTIMIZER`][8da0]的子类必须调用此函数来递增[`N-INSTANCES`][4c73]。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-SET-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **SEGMENT-SET**\n\n    这是一个用于具有[`SEGMENTS`][f00d]列表的优化器的工具类，该优化器能够在这几个段与一个单独的`MAT`（累加器）之间来回复制权重。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENTS-20-28MGL-PAX-3AREADER-20MGL-OPT-3ASEGMENT-SET-29-29\">\u003C\u002Fa>\n\n- [读取器] **SEGMENTS** *[SEGMENT-SET][418a] (:SEGMENTS)*\n\n    权重矩阵的列表。\n\n\u003Ca id=\"x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-OPT-3ASEGMENT-SET-29-29\">\u003C\u002Fa>\n\n- [读取器] **SIZE** *[SEGMENT-SET][418a]*\n\n    [`SEGMENTS`][f00d]中所有权重矩阵大小之和。\n\n\u003Ca id=\"x-28MGL-OPT-3ADO-SEGMENT-SET-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **DO-SEGMENT-SET** *(SEGMENT &OPTIONAL START) SEGMENT-SET &BODY BODY*\n\n    遍历`SEGMENT-SET`中的[`SEGMENTS`][f00d]。如果指定了`START`，则将其绑定到`SEGMENT`在`SEGMENT-SET`中的起始索引。该起始索引是前面各段大小之和。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-SET\\\u003C-MAT-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SEGMENT-SET\\\u003C-MAT** *SEGMENT-SET MAT*\n\n    将`MAT`中的值复制到`SEGMENT-SET`的权重矩阵中，就好像它们被连接成一个单独的`MAT`一样。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-SET->MAT-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **SEGMENT-SET->MAT** *SEGMENT-SET MAT*\n\n    将`SEGMENT-SET`中的值复制到`MAT`中，就好像它们被连接成一个单独的`MAT`一样。\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SOURCE-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.5.2 实现梯度源\n\n权重可以以多种方式存储。优化器需要更新权重，因此假设权重存储在任意数量的称为“段”的`MAT`对象中。\n\n本节中的通用函数除特别说明外，都必须针对新的梯度源进行专门化。\n\n\u003Ca id=\"x-28MGL-OPT-3AMAP-SEGMENTS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAP-SEGMENTS** *FN GRADIENT-SOURCE*\n\n    对`GRADIENT-SOURCE`的每个段应用`FN`。\n\n\u003Ca id=\"x-28MGL-OPT-3AMAP-SEGMENT-RUNS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAP-SEGMENT-RUNS** *FN SEGMENT*\n\n    调用`FN`时传入`SEGMENT`中连续且不缺失的索引区间的起始和结束位置。由支持部分更新的优化器调用。默认实现假定所有权重都存在。只有当计划使用能够处理未使用或缺失权重的优化器时，才需要对此进行专门化，例如[`MGL-GD:NORMALIZED-BATCH-GD-OPTIMIZER`][f6ae]和[`MGL-GD:PER-WEIGHT-BATCH-GD-OPTIMIZER`][5a43]。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-WEIGHTS-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SEGMENT-WEIGHTS** *SEGMENT*\n\n    返回`SEGMENT`的权重矩阵。一个段本身不一定是一个`MAT`对象。例如，它可以是`MGL-BM:BM`的一个`MGL-BM:CHUNK`，或者[`MGL-BP:BPN`][5187]的一个[`MGL-BP:LUMP`][c1ac]，其[`NODES`][cc1c]槽位保存着权重。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-WEIGHTS-20-28METHOD-20-28MGL-MAT-3AMAT-29-29-29\">\u003C\u002Fa>\n\n- [方法] **SEGMENT-WEIGHTS** *(MAT MAT)*\n\n    当段确实是一个`MAT`时，直接返回它。\n\n\u003Ca id=\"x-28MGL-OPT-3ASEGMENT-DERIVATIVES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **SEGMENT-DERIVATIVES** *SEGMENT*\n\n    返回`SEGMENT`的导数矩阵。一个段本身不一定是一个`MAT`对象。例如，它可以是`MGL-BM:BM`的一个`MGL-BM:CHUNK`，或者[`MGL-BP:BPN`][5187]的一个[`MGL-BP:LUMP`][c1ac]，其DERIVATIVES槽位保存着梯度。\n\n
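下面是一个假想的最小骨架，仅示意如何专门化 [`MAP-SEGMENTS`][2312]（完整的梯度源还需实现本节其余函数，例如后文的 `ACCUMULATE-GRADIENTS*`）：\n\n```commonlisp\n;; 假想的梯度源：只有一个权重段，即一个 MAT。\n(defclass my-gradient-source ()\n  ((weights :initarg :weights :reader weights)))\n\n;; 唯一的段就是权重 MAT 本身；MAT 上已有现成的 SEGMENT-WEIGHTS 方法。\n(defmethod map-segments (fn (gradient-source my-gradient-source))\n  (funcall fn (weights gradient-source)))\n```\n\n\u003Ca 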
id=\"x-28MGL-OPT-3ALIST-SEGMENTS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **LIST-SEGMENTS** *GRADIENT-SOURCE*\n\n    一个实用函数，通过对`GRADIENT-SOURCE`调用[`MAP-SEGMENTS`][2312]来返回段的列表。\n\n\u003Ca id=\"x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **INITIALIZE-GRADIENT-SOURCE\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET*\n\n    在调用[`MINIMIZE*`][ae3d]之前自动调用此函数，如果`GRADIENT-SOURCE`需要某种设置，则可以对其进行专门化。\n\n\u003Ca id=\"x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20-28METHOD-20-28T-20T-20T-20T-29-29-29\">\u003C\u002Fa>\n\n- [方法] **INITIALIZE-GRADIENT-SOURCE\\*** *OPTIMIZER GRADIENT-SOURCE WEIGHTS DATASET*\n\n    默认方法不做任何操作。\n\n\u003Ca id=\"x-28MGL-OPT-3AACCUMULATE-GRADIENTS-2A-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **ACCUMULATE-GRADIENTS\\*** *GRADIENT-SOURCE SINK BATCH MULTIPLIER VALUEP*\n\n    将一阶梯度之和乘以`MULTIPLIER`后加到`SINK`的累加器中（通常通过[`DO-GRADIENT-SINK`][20ca]访问），并且如果`VALUEP`为真，则返回针对一批实例所优化函数的值之和。`GRADIENT-SOURCE`是表示所优化函数的对象，`SINK`则是梯度汇合处。\n    \n    注意，`BATCH`中的实例数量可能大于`GRADIENT-SOURCE`一次能处理的数量（例如，根据[`MAX-N-STRIPES`][16c4]的限制），因此使用[`DO-BATCHES-FOR-MODEL`][faaa]或类似的方法（如将`BATCH`按`MAX-N-STRIPES`分组）可能会很有用。\n\n\u003Ca id=\"x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SINK-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 9.5.3 实现梯度源\n\n优化器会在梯度源上调用 [`ACCUMULATE-GRADIENTS*`][4bf1]。`ACCUMULATE-GRADIENTS*` 的一个参数是 `SINK`。梯度源知道哪个分段对应哪个累加矩阵（如果有的话）。梯度源完全由 [`MAP-GRADIENT-SINK`][aabd] 定义。\n\n\u003Ca id=\"x-28MGL-OPT-3AMAP-GRADIENT-SINK-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAP-GRADIENT-SINK** *FN SINK*\n\n    对 `SINK` 中的每个分段及其对应的累加矩阵 `MAT`，调用带有 (`SEGMENT` `ACCUMULATOR`) 参数列表的 `FN`。\n\n\u003Ca id=\"x-28MGL-OPT-3ADO-GRADIENT-SINK-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **DO-GRADIENT-SINK** *((SEGMENT ACCUMULATOR) SINK) &BODY BODY*\n\n    是基于 [`MAP-GRADIENT-SINK`][aabd] 的便捷宏。\n\n\u003Ca id=\"x-28MGL-DIFFUN-3A-40MGL-DIFFUN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n\n\n## 10 可微函数\n\n###### \\[在 MGL-DIFFUN 包中\\]\n\u003Ca id=\"x-28MGL-DIFFUN-3ADIFFUN-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **DIFFUN**\n\n    `DIFFUN` 将一个 Lisp 函数（存储在其 [`FN`][f491] slot 中）包装成一个梯度源（参见 [实现梯度源][c58b]），从而使其能够用于 [`MINIMIZE`][46a4]。请参阅 [梯度下降][10e7] 和 [共轭梯度法][83e6] 中的示例。\n\n\u003Ca id=\"x-28MGL-COMMON-3AFN-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29\">\u003C\u002Fa>\n\n- [读取器] **FN** *[DIFFUN][1a61] (:FN)*\n\n    一个实值 Lisp 函数。它可以有任意数量的参数。\n\n\u003Ca id=\"x-28MGL-DIFFUN-3APARAMETER-INDICES-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29\">\u003C\u002Fa>\n\n- [读取器] **PARAMETER-INDICES** *[DIFFUN][1a61] (:PARAMETER-INDICES = NIL)*\n\n    不参与优化的参数索引列表。这些参数的值将来自 [`MINIMIZE`][46a4] 的 `DATASET` 参数。\n\n\u003Ca id=\"x-28MGL-DIFFUN-3AWEIGHT-INDICES-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29\">\u003C\u002Fa>\n\n- [读取器] **WEIGHT-INDICES** *[DIFFUN][1a61] (:WEIGHT-INDICES = NIL)*\n\n    需要优化的参数索引列表，其值将来自 [`MINIMIZE`][46a4] 的 `WEIGHTS` 参数。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 11 反向传播神经网络\n\n###### \\[在 MGL-BP 包中\\]\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-OVERVIEW-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.1 反向传播概述\n\n反向传播神经网络只是一些具有大量称为“权重”的参数的函数，并且以层状结构呈现，可以表示为 [计算图](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FAutomatic_differentiation)。该网络被训练用来 [`MINIMIZE`][46a4] 某种由网络计算出的 *损失函数* 值。\n\n在本实现中，一个 [`BPN`][5187] 由多个 [`LUMP`][c1ac] 组成（大致对应于各层）。支持前馈神经网络和循环神经网络（分别为 [`FNN`][9de4] 和 [`RNN`][b0f3]）。`BPN` 不仅可以包含 `LUMP`，还可以包含其他 `BPN`。正如我们所见，网络是由复合对象组成的，而复合与简单部分的抽象基类称为 [`CLUMP`][a4fe]。\n\n\u003Ca 
id=\"x-28MGL-BP-3ACLUMP-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **CLUMP**\n\n    `CLUMP` 是一个 [`LUMP`][c1ac] 或者一个 [`BPN`][5187]。它代表一个可微函数。`CLUMP` 的参数在实例化时指定。有些参数本身就是 `CLUMP`，因此它们会永久性地连接在一起，例如：\n\n    ```commonlisp\n    (->v*m (->input :size 10 :name 'input)\n           (->weight :dimensions '(10 20) :name 'weight)\n           :name 'activation)\n    ```\n    \n    上述代码创建了三个 `CLUMP`：名为 `ACTIVATION` 的向量-矩阵乘法 `CLUMP`，它引用了其操作数：`INPUT` 和 `WEIGHT`。请注意，这个例子只是定义了一个函数，尚未进行实际计算。\n    \n    通过这种 `CLUMP` 的连接方式，可以构建前馈网络（[`FNN`][9de4]）或循环神经网络（[`RNN`][b0f3]），而这些网络本身也是 `CLUMP`，因此可以根据需要以层次化的方式构建网络。非复合的 `CLUMP` 称为 `LUMP`（注意去掉了代表复合的字母 `C`）。各种 `LUMP` 子类型对应不同的层类型（[`->SIGMOID`][83f9]、[`->DROPOUT`][441b]、[`->RELU`][9d3a]、[`->TANH`][5309] 等）。\n\n此时，您不妨先阅读 [`FNN 教程`][6b38]，以便对整个流程有一个直观的了解。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-EXTENSION-API-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.2 团块 API\n\n这些主要是用于扩展目的。在正常操作中，从这里唯一需要用到的是在钳位输入或提取预测时的 [`NODES`][cc1c]。\n\n\u003Ca id=\"x-28MGL-BP-3ASTRIPEDP-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **STRIPEDP** *CLUMP*\n\n    为了效率，前向和反向传播阶段以批处理模式进行：将多个实例分批送入网络。因此，团块必须能够存储每个实例的值及其梯度。然而，有些团块对一个批次中的每个实例都产生相同的结果。这些团块就是权重，即网络的参数。`STRIPEDP` 在 `CLUMP` 不代表权重（即它不是 [`->WEIGHT`][b76f]）时返回真。\n\n    对于条纹团块，它们的 [`NODES`][cc1c] 和 [`DERIVATIVES`][a81b] 是 `MAT` 对象，其第一维度（二维情况下的行数）等于批次中的实例数量。非条纹团块的形状则没有限制，仅受其用途所决定。\n\n\u003Ca id=\"x-28MGL-COMMON-3ANODES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **NODES** *OBJECT*\n\n    返回一个 `MGL-MAT:MAT` 对象，表示 `OBJECT` 的状态或结果。返回矩阵的第一维度等于条纹的数量。\n\n[`CLUMP`][a4fe] 的 [`NODES`][cc1c] 存储了最近一次 [`FORWARD`][c1ae] 计算出的结果。对于 [`->INPUT`][f54e] 团块，输入值应放置于此处（参见 [`SET-INPUT`][0c9e]）。目前，该矩阵始终是二维的，但这一限制未来可能会取消。\n\n\u003Ca id=\"x-28MGL-BP-3ADERIVATIVES-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **DERIVATIVES** *CLUMP*\n\n    返回表示 `CLUMP` 所计算函数的偏导数的 `MAT` 对象。返回的偏导数是由之前的 [`BACKWARD`][5bd4] 调用累积而来的。\n\n    该矩阵的形状与 [`NODES`][cc1c] 返回的矩阵相同。\n\n\u003Ca id=\"x-28MGL-BP-3AFORWARD-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **FORWARD** *CLUMP*\n\n    计算由 `CLUMP` 表示的函数在所有条纹上的值，并将结果放入 `CLUMP` 的 [`NODES`][cc1c] 中。\n\n\u003Ca id=\"x-28MGL-BP-3ABACKWARD-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **BACKWARD** *CLUMP*\n\n    计算由 `CLUMP` 表示的函数的偏导数，并将其加到相应参数团块的 [`DERIVATIVES`][a81b] 中。`CLUMP` 的 `DERIVATIVES` 包含所有团块对该输出的偏导数之和。此函数应在一次 [`FORWARD`][c1ae] 之后调用。\n\n    以应用网络到两个实例 `x1` 和 `x2` 的批次时的 [`->SIGMOID`][83f9] 团块为例。`x1` 和 `x2` 被设置在 [`->INPUT`][f54e] 团块 X 中。Sigmoid 计算 `1\u002F(1+exp(-x))`，其中 `X` 是其唯一的参数团块。\n\n        f(x) = 1\u002F(1+exp(-x))\n\n    当对 Sigmoid 团块调用 `BACKWARD` 时，其 `DERIVATIVES` 是一个 2x1 的 `MAT` 对象，包含损失函数的偏导数：\n\n        dL(x1)\u002Fdf\n        dL(x2)\u002Fdf\n\n    现在，Sigmoid 的 `BACKWARD` 方法需要将 `dL(x1)\u002Fdx1` 和 `dL(x2)\u002Fdx2` 加到 `X` 的 `DERIVATIVES` 中。由于 `dL(x1)\u002Fdx1 = dL(x1)\u002Fdf * df(x1)\u002Fdx1`，而第一项已经在 Sigmoid 的 `DERIVATIVES` 中，因此它只需要计算第二项。\n\n此外，团块还需支持 [`SIZE`][019f]、[`N-STRIPES`][8dd7]、[`MAX-N-STRIPES`][16c4]（以及后两者的 [`SETF`][a138] 方法），这些功能只需通过继承 [`BPN`][5187]、[`FNN`][9de4]、[`RNN`][b0f3] 或者 [`LUMP`][c1ac] 即可实现。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BPN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 11.3 `BPN`s\n\n\u003Ca id=\"x-28MGL-BP-3ABPN-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **BPN** *[CLUMP][a4fe]*\n\n    [`FNN`][9de4] 和 [`RNN`][b0f3] 的抽象基类。\n\n\u003Ca id=\"x-28MGL-CORE-3AN-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29\">\u003C\u002Fa>\n\n- [读取器] **N-STRIPES** *[BPN][5187] (:N-STRIPES = 1)*\n\n    网络当前拥有的实例数量。此值会自动设置为传递给 [`SET-INPUT`][0c9e] 的实例数量，因此很少需要直接操作，尽管也可以手动设置。当设置时，所有 [`CLUMPS`][f7c1] 的 `N-STRIPES` 
都会被设置为相同的值。\n\n\u003Ca id=\"x-28MGL-CORE-3AMAX-N-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29\">\u003C\u002Fa>\n\n- [读取器] **MAX-N-STRIPES** *[BPN][5187] (:MAX-N-STRIPES = NIL)*\n\n    网络可以并行处理的最大实例数量。在 [`BUILD-FNN`][606c] 或 [`BUILD-RNN`][764b] 内部，它默认为父网络的 `MAX-N-STRIPES`，否则默认为 1。当设置时，所有 [`CLUMPS`][f7c1] 的 `MAX-N-STRIPES` 都会被设置为相同的值。\n\n\u003Ca id=\"x-28MGL-BP-3ACLUMPS-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29\">\u003C\u002Fa>\n\n- [读取器] **CLUMPS** *[BPN][5187] (:CLUMPS = (MAKE-ARRAY 0 :ELEMENT-TYPE 'CLUMP :ADJUSTABLE T :FILL-POINTER T))*\n\n    一个具有填充指针的拓扑排序可调整数组，其中保存着构成网络的团块。团块可以通过 [`ADD-CLUMP`][82d8] 添加进去，或者更常见的是在 [`BUILD-FNN`][606c] 或 [`BUILD-RNN`][764b] 过程中自动添加。由于需求较少，大多数情况下使用 [`FIND-CLUMP`][175f] 就足够了。\n\n\u003Ca id=\"x-28MGL-BP-3AFIND-CLUMP-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **FIND-CLUMP** *NAME BPN &KEY (ERRORP T)*\n\n    在 `BPN` 的 [`CLUMPS`][f7c1] 中查找名称为 `NAME` 的团块。如同往常一样，名称比较使用 [`EQUAL`][3fb5]。如果未找到，则返回 `NIL` 或根据 `ERRORP` 抛出错误。\n\n\u003Ca id=\"x-28MGL-BP-3AADD-CLUMP-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **ADD-CLUMP** *CLUMP BPN*\n\n    将 `CLUMP` 添加到 `BPN` 中。`CLUMP` 的 [`MAX-N-STRIPES`][16c4] 会被设置为与 `BPN` 相同的值。如果尝试添加一个名称已被 `BPN` 的某个 [`CLUMPS`][f7c1] 使用过的团块，则会引发错误。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-TRAINING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.1 训练\n\n[`BPN`][5187] 被训练以最小化其所计算的损失函数。在将 `BPN` 传递给 [`MINIMIZE`][46a4]（作为其 `GRADIENT-SOURCE` 参数）之前，必须先将其包装在一个 [`BP-LEARNER`][00a0] 对象中。`BP-LEARNER` 拥有 [`MONITORS`][6202] 插槽，例如被 [`RESET-OPTIMIZATION-MONITORS`][d479] 所使用。\n\n不考虑其他复杂因素，基本的训练流程如下：\n\n```commonlisp\n(minimize optimizer (make-instance 'bp-learner :bpn bpn)\n          :dataset dataset)\n```\n\n\n\u003Ca id=\"x-28MGL-BP-3ABP-LEARNER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **BP-LEARNER**\n\n\u003Ca id=\"x-28MGL-BP-3ABPN-20-28MGL-PAX-3AREADER-20MGL-BP-3ABP-LEARNER-29-29\">\u003C\u002Fa>\n\n- [读取器] **BPN** *[BP-LEARNER][00a0] (:BPN)*\n\n    此 [`BP-LEARNER`][00a0] 提供梯度的 `BPN`。\n\n\u003Ca id=\"x-28MGL-CORE-3AMONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ABP-LEARNER-29-29\">\u003C\u002Fa>\n\n- [访问器] **MONITORS** *[BP-LEARNER][00a0] (:MONITORS = NIL)*\n\n    一个包含[`MONITOR`][7068]的列表。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MONITORING-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.2 监控\n\n\u003Ca id=\"x-28MGL-BP-3AMONITOR-BPN-RESULTS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MONITOR-BPN-RESULTS** *DATASET BPN MONITORS*\n\n    对于`DATASET`中的每一批（大小为[`MAX-N-STRIPES`][16c4]的`BPN`）实例，将该批设置为下一个输入，使用[`SET-INPUT`][0c9e]，执行一次[`FORWARD`][c1ae]传递，并对`BPN`应用`MONITORS`（通过[`APPLY-MONITORS`][989c]）。最后，返回`MONITORS`的计数器。这是基于[`MONITOR-MODEL-RESULTS`][e50c]构建的。\n\n\u003Ca id=\"x-28MGL-BP-3AMAKE-STEP-MONITOR-MONITORS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-STEP-MONITOR-MONITORS** *RNN &KEY (COUNTER-VALUES-FN \\#'COUNTER-RAW-VALUES) (MAKE-COUNTER \\#'MAKE-STEP-MONITOR-MONITOR-COUNTER)*\n\n    返回一个监控器列表，每个监控器对应于`RNN`中的每一个[`STEP-MONITORS`][71f9]。这些监控器使用`COUNTER-VALUES-FN`从其对应的“扭曲”计数器中提取结果，并将其添加到由`MAKE-COUNTER`创建的自身计数器中。哇。呃。其想法是，可以这样做来监控扭曲的预测：\n\n    ```commonlisp\n    (let ((*warp-time* t))\n      (setf (step-monitors rnn)\n            (make-cost-monitors rnn :attributes '(:event \"warped pred.\")))\n      (monitor-bpn-results dataset rnn\n                           ;; 只需在每批实例后收集并重置扭曲监控器。\n                           (make-step-monitor-monitors rnn)))\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3AMAKE-STEP-MONITOR-MONITOR-COUNTER-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [通用函数] **MAKE-STEP-MONITOR-MONITOR-COUNTER** *STEP-COUNTER*\n\n    
在[`RNN`][b0f3]中，`STEP-COUNTER`会汇总当前批次处理过程中所有时间步的结果。当批次处理完成后，返回一个新的计数器，用于累积来自`STEP-COUNTER`的结果。默认实现会创建`STEP-COUNTER`的一个副本。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-FNN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.3 前馈网络\n\n[`FNN`][9de4]和[`RNN`][b0f3]有很多共同之处（参见它们的共同超类[`BPN`][5187]）。专门针对`FNN`的功能非常有限，所以在我们研究完整示例之前，先简单介绍一下它们吧。\n\n\u003Ca id=\"x-28MGL-BP-3AFNN-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **FNN** *[BPN][5187]*\n\n    一种前馈神经网络（与递归神经网络相对，参见[`RNN`][b0f3]）。\n\n\u003Ca id=\"x-28MGL-BP-3ABUILD-FNN-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **BUILD-FNN** *(&KEY FNN (CLASS ''FNN) INITARGS MAX-N-STRIPES NAME) &BODY CLUMPS*\n\n    用于从[`CLUMP`][a4fe]组装`FNN`的语法糖。类似于[`LET*`][49f5]，它是一系列绑定（将符号绑定到`CLUMP`）。默认情况下，所创建的clump名称与绑定的符号名称相同。如果某个clump没有绑定到符号（因为它是在嵌套表达式中创建的），则可以使用局部函数`CLUMP`在正在构建的fnn中找到具有给定名称的clump。例如：\n\n        (build-fnn ()\n          (features (->input :size n-features))\n          (biases (->weight :size n-features))\n          (weights (->weight :size (* n-hiddens n-features)))\n          (activations0 (->v*m :weights weights :x (clump 'features)))\n          (activations (->+ :args (list biases activations0)))\n          (output (->sigmoid :x activations)))\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-FNN-TUTORIAL-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### `FNN`教程\n\n希望这个来自`example\u002Fdigit-fnn.lisp`的例子能够说明相关概念。如果尽管有注释，仍然觉得过于密集，请先了解一下[数据集][109e]、[基于梯度的优化][c74a]，然后再回来阅读。\n\n```commonlisp\n(cl:defpackage :mgl-example-digit-fnn\n  (:use #:common-lisp #:mgl))\n\n(in-package :mgl-example-digit-fnn)\n\n;;; 输入中有10种可能的数字 ...\n(defparameter *n-inputs* 10)\n;;; 我们想学习将输入数字D映射到(MOD (1+ D) 3)的规则。\n(defparameter *n-outputs* 3)\n\n;;; 定义一个前馈网络，以便以后可以通过添加SET-INPUT方法来专门化输入的转换方式。\n(defclass digit-fnn (fnn)\n  ())\n\n;;; 构建一个带有单层修正线性单元和softmax输出的DIGIT-FNN。\n(defun make-digit-fnn (&key (n-hiddens 5))\n  (build-fnn (:class 'digit-fnn)\n    (input (->input :size *n-inputs*))\n    (hidden-activation (->activation input :size n-hiddens))\n    (hidden (->relu hidden-activation))\n    (output-activation (->activation hidden :size *n-outputs*))\n    (output (->softmax-xe-loss output-activation))))\n\n;;; 此方法由MINIMIZE以及MONITOR-BPN-RESULTS在执行前向传递（即计算网络所表示的函数值）之前，以“实例”批次（本例中为输入数字）调用。它的任务是通过填充INPUT clump的NODES矩阵的行来编码输入。\n;;;\n;;; 每个输入被编码为一行全零，其中只有一个1位于由输入数字决定的索引位置。这被称为独热编码。TARGET也可以用同样的方式编码，但这里我们使用了->SOFTMAX-XE-LOSS的TARGET支持的稀疏选项。\n(defmethod set-input (digits (fnn digit-fnn))\n  (let* ((input (nodes (find-clump 'input fnn)))\n         (output-lump (find-clump 'output fnn)))\n    (fill! 0 input)\n    (loop for i upfrom 0\n          for digit in digits\n          do (setf (mref input i digit) 1))\n    (setf (target output-lump)\n          (mapcar (lambda (digit)\n                    (mod (1+ digit) *n-outputs*))\n                  digits))))\n\n;;; 使用随机梯度下降法最小化损失（这里为交叉熵）来训练网络。\n(defun train-digit-fnn ()\n  (let ((optimizer\n          ;; 首先创建用于 MINIMIZE 的优化器。\n          (make-instance 'segmented-gd-optimizer\n                         :segmenter\n                         ;; 我们用相同的参数，实际上也是同一个优化器，\n                         ;; 来训练每一组权重。不过一般情况下并不一定如此。\n                         (constantly\n                          (make-instance 'sgd-optimizer\n                                         :learning-rate 1\n                                         :momentum 0.9\n                                         :batch-size 100))))\n        (fnn (make-digit-fnn)))\n    ;; FNN 可以并行处理的实例数量。通常等于批次大小或其因数。\n    (setf (max-n-stripes fnn) 50)\n    ;; 随机初始化所有权重。\n    (map-segments (lambda (weights)\n                    (gaussian-random! 
(nodes weights) :stddev 0.01))\n                  fnn)\n    ;; 设置记录训练和测试误差的机制。\n    (monitor-optimization-periodically\n     optimizer '((:fn log-test-error :period 10000)\n                 (:fn reset-optimization-monitors :period 1000)))\n    ;; 最后，开始优化过程。\n    (minimize optimizer\n              ;; 将 FNN 包装成一个 BP 学习器，并为其附加成本监控器。\n              ;; 这些监控器会在每训练 100 个样本后由上述 RESET-OPTIMIZATION-MONITORS 重置并记录。\n              (make-instance 'bp-learner\n                             :bpn fnn\n                             :monitors (make-cost-monitors\n                                        fnn :attributes `(:event \"train\")))\n              ;; 训练将在采样器用尽数据时停止（10000 个样本后）。\n              :dataset (make-sampler 10000))))\n\n;;; 返回一个采样器对象，该对象会生成 MAX-N-SAMPLES 个随机输入（0 到 9 之间的数字）。\n(defun make-sampler (max-n-samples)\n  (make-instance 'function-sampler :max-n-samples max-n-samples\n                 :generator (lambda () (random *n-inputs*))))\n\n;;; 记录测试误差。同时，在训练开始时描述优化器和 BPN。在训练期间定期调用（见上文）。\n(defun log-test-error (optimizer learner)\n  (when (zerop (n-instances optimizer))\n    (describe optimizer)\n    (describe (bpn learner)))\n  (log-padded\n   (monitor-bpn-results (make-sampler 1000) (bpn learner)\n                        (make-cost-monitors\n                         (bpn learner) :attributes `(:event \"pred.\")))))\n\n#|\n\n;;; 以下是运行日志：\n(repeatably ()\n  (let ((*log-time* nil))\n    (train-digit-fnn)))\n.. 训练开始时，n-instances: 0\n.. 训练成本：0.000e+0 (0)\n.. #\u003CSEGMENTED-GD-OPTIMIZER {100E112E93}>\n.. SEGMENTED-GD-OPTIMIZER 描述：\n..    N-INSTANCES = 0\n..    OPTIMIZERS = (#\u003CSGD-OPTIMIZER\n..                    #\u003CSEGMENT-SET\n..                      (#\u003C->WEIGHT # :SIZE 15 1\u002F1 :NORM 0.04473>\n..                       #\u003C->WEIGHT # :SIZE 3 1\u002F1 :NORM 0.01850>\n..                       #\u003C->WEIGHT # :SIZE 50 1\u002F1 :NORM 0.07159>\n..                       #\u003C->WEIGHT # :SIZE 5 1\u002F1 :NORM 0.03056>)\n..                      {100E335B73}>\n..                    {100E06DF83}>)\n..    SEGMENTS = (#\u003C->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE\n..                  15 1\u002F1 :NORM 0.04473>\n..                #\u003C->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE\n..                  3 1\u002F1 :NORM 0.01850>\n..                #\u003C->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE\n..                  50 1\u002F1 :NORM 0.07159>\n..                #\u003C->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE\n..                  5 1\u002F1 :NORM 0.03056>)\n..  \n.. #\u003CSGD-OPTIMIZER {100E06DF83}>\n.. GD-OPTIMIZER 描述：\n..    N-INSTANCES = 0\n..    SEGMENT-SET = #\u003CSEGMENT-SET\n..                    (#\u003C->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE\n..                       15 1\u002F1 :NORM 0.04473>\n..                     #\u003C->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE\n..                       3 1\u002F1 :NORM 0.01850>\n..                     #\u003C->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE\n..                       50 1\u002F1 :NORM 0.07159>\n..                     #\u003C->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE\n..                       5 1\u002F1 :NORM 0.03056>)\n..                    {100E335B73}>\n..    LEARNING-RATE = 1.00000e+0\n..    MOMENTUM = 9.00000e-1\n..    MOMENTUM-TYPE = :NORMAL\n..    WEIGHT-DECAY = 0.00000e+0\n..    WEIGHT-PENALTY = 0.00000e+0\n..    N-AFTER-UPATE-HOOK = 0\n..    BATCH-SIZE = 100\n..  \n..  BATCH-GD-OPTIMIZER 描述：\n..    N-BEFORE-UPATE-HOOK = 0\n..  #\u003CDIGIT-FNN {100E11A423}>\n..   BPN 描述：\n..     
CLUMPS = #(#\u003C->INPUT INPUT :SIZE 10 1\u002F50 :NORM 0.00000>\n..                #\u003C->ACTIVATION\n..                  (HIDDEN-ACTIVATION :ACTIVATION) :STRIPES 1\u002F50\n..                  :CLUMPS 4>\n..                #\u003C->RELU HIDDEN :SIZE 5 1\u002F50 :NORM 0.00000>\n..                #\u003C->ACTIVATION\n..                  (OUTPUT-ACTIVATION :ACTIVATION) :STRIPES 1\u002F50\n..                  :CLUMPS 4>\n..                #\u003C->SOFTMAX-XE-LOSS OUTPUT :SIZE 3 1\u002F50 :NORM 0.00000>)\n..     N-STRIPES = 1\n..     MAX-N-STRIPES = 50\n..   预测成本：1.100d+0 (1000.00)\n.. 训练开始时，n-instances: 1000\n.. 训练成本：1.093d+0 (1000.00)\n.. 训练开始时，n-instances: 2000\n.. 训练成本：5.886d-1 (1000.00)\n.. 训练开始时，n-instances: 3000\n.. 训练成本：3.574d-3 (1000.00)\n.. 训练开始时，n-instances: 4000\n.. 训练成本：1.601d-7 (1000.00)\n.. 训练开始时，n-instances: 5000\n.. 训练成本：1.973d-9 (1000.00)\n.. 训练开始时，n-instances: 6000\n.. 训练成本：4.882d-10 (1000.00)\n.. 训练开始时，n-instances: 7000\n.. 训练成本：2.771d-10 (1000.00)\n.. 训练开始时，n-instances: 8000\n.. 训练成本：2.283d-10 (1000.00)\n.. 训练开始时，n-instances: 9000\n.. 训练成本：2.123d-10 (1000.00)\n.. 训练开始时，n-instances: 10000\n.. 训练成本：2.263d-10 (1000.00)\n.. 预测成本：2.210d-10 (1000.00)\n..\n==> (#\u003C->WEIGHT (:BIAS HIDDEN-ACTIVATION) :SIZE 5 1\u002F1 :NORM 2.94294>\n-->  #\u003C->WEIGHT (INPUT HIDDEN-ACTIVATION) :SIZE 50 1\u002F1 :NORM 11.48995>\n-->  #\u003C->WEIGHT (:BIAS OUTPUT-ACTIVATION) :SIZE 3 1\u002F1 :NORM 3.39103>\n-->  #\u003C->WEIGHT (HIDDEN OUTPUT-ACTIVATION) :SIZE 15 1\u002F1 :NORM 11.39339>)\n\n|#\n```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-RNN-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.3.4 循环神经网络\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-RNN-TUTORIAL-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### `RNN` 教程\n\n希望这个来自 `example\u002Fsum-sign-rnn.lisp` 的示例能够说明相关概念。\n在阅读本节之前，请确保你已经熟悉了 [`FNN` 教程][6b38]。\n\n```commonlisp\n(cl:defpackage :mgl-example-sum-sign-rnn\n  (:use #:common-lisp #:mgl))\n\n(in-package :mgl-example-sum-sign-rnn)\n\n;;; 在每个时间步只有一个输入...\n(defparameter *n-inputs* 1)\n;;; 我们希望学习一种规则，该规则输出序列中到目前为止所有输入之和的符号。\n(defparameter *n-outputs* 3)\n\n;;; 生成一个训练示例，该示例是一个随机长度的序列，\n;;; 长度介于 1 和 LENGTH 之间。序列中的元素是包含两个元素的列表：\n;;;\n;;; 1. 网络的输入（一个随机数）。\n;;;\n;;; 2. 
到目前为止所有输入之和的符号，编码为 0、1、2（分别表示负值、零和正值）。为了增加一些变化，每当遇到一个负输入时，总和就会被重置。\n(defun make-sum-sign-instance (&key (length 10))\n  (let ((length (max 1 (random length)))\n        (sum 0))\n    (loop for i below length\n          collect (let ((x (1- (* 2 (random 2)))))\n                    (incf sum x)\n                    (when (\u003C x 0)\n                      (setq sum x))\n                    (list x (cond ((minusp sum) 0)\n                                  ((zerop sum) 1)\n                                  (t 2)))))))\n\n;;; 构建一个具有单个 LSTM 隐藏层和 softmax 输出的 RNN。\n;;; 对于每个时间步，都会实例化一个 SUM-SIGN-FNN。\n(defun make-sum-sign-rnn (&key (n-hiddens 1))\n  (build-rnn ()\n    (build-fnn (:class 'sum-sign-fnn)\n      (input (->input :size 1))\n      (h (->lstm input :name 'h :size n-hiddens))\n      (prediction (->softmax-xe-loss (->activation h :name 'prediction\n                                                   :size *n-outputs*))))))\n\n;;; 我们定义这个类是为了能够在稍后通过添加 SET-INPUT 方法来专门化输入的转换方式。\n(defclass sum-sign-fnn (fnn)\n  ())\n\n;;; 我们有一批来自 MAKE-SUM-SIGN-INSTANCE 的 RNN 实例。此函数使用属于同一时间步（即在同一索引处）的这些实例的元素调用，并设置输入和目标。\n(defmethod set-input (instances (fnn sum-sign-fnn))\n  (let ((input-nodes (nodes (find-clump 'input fnn))))\n    (setf (target (find-clump 'prediction fnn))\n          (loop for stripe upfrom 0\n                for instance in instances\n                collect\n                ;; 批次中的序列长度不一致。如果某个序列已经结束，RNN 会向我们发送 NIL。\n                (when instance\n                  (destructuring-bind (input target) instance\n                    (setf (mref input-nodes stripe 0) input)\n                    target))))))\n\n;;; 使用 Adam 优化器最小化损失（这里是交叉熵）来训练网络。\n(defun train-sum-sign-rnn ()\n  (let ((rnn (make-sum-sign-rnn)))\n    (setf (max-n-stripes rnn) 50)\n    ;; 按照通常的 sqrt(1 \u002F fan-in) 方式初始化权重。\n    (map-segments (lambda (weights)\n                    (let* ((fan-in (mat-dimension (nodes weights) 0))\n                           (limit (sqrt (\u002F 6 fan-in))))\n                      (uniform-random! (nodes weights)\n                                       :limit (* 2 limit))\n                      (.+! 
(- limit) (nodes weights))))\n                  rnn)\n    (minimize (monitor-optimization-periodically\n               (make-instance 'adam-optimizer\n                              :learning-rate 0.2\n                              :mean-decay 0.9\n                              :mean-decay-decay 0.9\n                              :variance-decay 0.9\n                              :batch-size 100)\n               '((:fn log-test-error :period 30000)\n                 (:fn reset-optimization-monitors :period 3000)))\n              (make-instance 'bp-learner\n                             :bpn rnn\n                             :monitors (make-cost-monitors rnn))\n              :dataset (make-sampler 30000))))\n\n;;; 返回一个采样器对象，该对象可以产生 MAX-N-SAMPLES 个随机输入。\n(defun make-sampler (max-n-samples &key (length 10))\n  (make-instance 'function-sampler :max-n-samples max-n-samples\n                 :generator (lambda ()\n                              (make-sum-sign-instance :length length))))\n\n;;; 记录测试误差。此外，在训练开始时描述优化器和 bpn。在训练期间定期调用（见上文）。\n(defun log-test-error (optimizer learner)\n  (when (zerop (n-instances optimizer))\n    (describe optimizer)\n    (describe (bpn learner)))\n  (let ((rnn (bpn learner)))\n    (log-padded\n     (append\n      (monitor-bpn-results (make-sampler 1000) rnn\n                           (make-cost-monitors\n                            rnn :attributes '(:event \"pred.\")))\n      ;; 以另一种方式得到相同的结果：监测长度不超过 20 的序列的预测结果，但不要不必要地展开 RNN，以节省内存。\n      (let ((*warp-time* t))\n        (monitor-bpn-results (make-sampler 1000 :length 20) rnn\n                             ;; 只需在每个实例批次之后收集并重置扭曲监控器。\n                             (make-cost-monitors\n                              rnn :attributes '(:event \"warped pred.\"))))))\n    ;; 验证没有进一步的展开发生。\n    (assert (\u003C= (length (clumps rnn)) 10)))\n  (log-mat-room))\n\n#|\n\n;;; 以下为记录：\n(let (;; 反向传播网络不需要双精度浮点数。使用单精度浮点数速度更快且所需内存更少。\n      (*default-mat-ctype* :float)\n      ;; 启用数据在 GPU 内存中的移动，使 RNN 能够处理长度超出展开后网络所能容纳的序列。\n      (*cuda-window-start-time* 1)\n      (*log-time* nil))\n  ;; 初始化随机数生成器。\n  (repeatably ()\n    ;; 如果可用，则启用 CUDA。\n    (with-cuda* ()\n      (train-sum-sign-rnn))))\n.. 在 n-instances: 0 处进行训练\n.. 成本：0.000e+0 (0)\n.. #\u003CADAM-OPTIMIZER {1006CD5663}>\n..  GD-OPTIMIZER 描述：\n..    N-INSTANCES = 0\n..    SEGMENT-SET = #\u003CSEGMENT-SET\n..                    (#\u003C->WEIGHT (H #) :SIZE 1 1\u002F1 :NORM 1.73685>\n..                     #\u003C->WEIGHT (H #) :SIZE 1 1\u002F1 :NORM 0.31893>\n..                     #\u003C->WEIGHT (#1=# #2=# :PEEPHOLE) :SIZE\n..                       1 1\u002F1 :NORM 1.81610>\n..                     #\u003C->WEIGHT (H #2#) :SIZE 1 1\u002F1 :NORM 0.21965>\n..                     #\u003C->WEIGHT (#1# #3=# :PEEPHOLE) :SIZE\n..                       1 1\u002F1 :NORM 1.74939>\n..                     #\u003C->WEIGHT (H #3#) :SIZE 1 1\u002F1 :NORM 0.40377>\n..                     #\u003C->WEIGHT (H PREDICTION) :SIZE\n..                       3 1\u002F1 :NORM 2.15898>\n..                     #\u003C->WEIGHT (:BIAS PREDICTION) :SIZE\n..                       3 1\u002F1 :NORM 2.94470>\n..                     #\u003C->WEIGHT (#1# #4=# :PEEPHOLE) :SIZE\n..                       1 1\u002F1 :NORM 0.97601>\n..                     #\u003C->WEIGHT (INPUT #4#) :SIZE 1 1\u002F1 :NORM 0.65261>\n..                     #\u003C->WEIGHT (:BIAS #4#) :SIZE 1 1\u002F1 :NORM 0.37653>\n..                     #\u003C->WEIGHT (INPUT #1#) :SIZE 1 1\u002F1 :NORM 0.92334>\n..                     
#\u003C->WEIGHT (:BIAS #1#) :SIZE 1 1\u002F1 :NORM 0.01609>\n..                     #\u003C->WEIGHT (INPUT #5=#) :SIZE 1 1\u002F1 :NORM 1.09995>\n..                     #\u003C->WEIGHT (:BIAS #5#) :SIZE 1 1\u002F1 :NORM 1.41244>\n..                     #\u003C->WEIGHT (INPUT #6=#) :SIZE 1 1\u002F1 :NORM 0.40475>\n..                     #\u003C->WEIGHT (:BIAS #6#) :SIZE 1 1\u002F1 :NORM 1.75358>)\n..                    {1006CD8753}>\n..    学习率 = 2.00000e-1\n..    动量 = 无\n..    动量类型 = :NONE\n..    权重衰减 = 0.00000e+0\n..    权重惩罚 = 0.00000e+0\n..    更新后钩子数量 = 0\n..    批量大小 = 100\n..  \n..  BATCH-GD-OPTIMIZER 描述：\n..    更新前钩子数量 = 0\n..  \n..  ADAM-OPTIMIZER 描述：\n..    均值衰减率 = 1.00000e-1\n..    均值衰减率衰减 = 9.00000e-1\n..    方差衰减率 = 1.00000e-1\n..    方差调整 = 1.00000d-7\n..  #\u003CRNN {10047C77E3}>\n..   BPN 描述：\n..     团块 = #(#\u003CSUM-SIGN-FNN :条纹 1\u002F50 :团块 4>\n..                #\u003CSUM-SIGN-FNN :条纹 1\u002F50 :团块 4>)\n..     条纹数 = 1\n..     最大条纹数 = 50\n..   \n..   RNN 描述：\n..     最大滞后 = 1\n..   预测成本：1.223e+0 (4455.00)\n..   矫正后的预测成本：1.228e+0 (9476.00)\n..   外部内存使用情况：\n..   外部数组：162 个（已使用字节数：39,600）\n..   CUDA 内存使用情况：\n..   设备数组：114 个（已使用字节数：220,892，池化字节数：19,200）\n..   主机数组：162 个（已使用字节数：39,600）\n..   主机到设备复制：6,164 次，设备到主机复制：4,490 次\n..   在 n-instances: 3000 处进行训练\n..   成本：3.323e-1 (13726.00)\n..   在 n-instances: 6000 处进行训练\n..   成本：3.735e-2 (13890.00)\n..   在 n-instances: 9000 处进行训练\n..   成本：1.012e-2 (13872.00)\n..   在 n-instances: 12000 处进行训练\n..   成本：3.026e-3 (13953.00)\n..   在 n-instances: 15000 处进行训练\n..   成本：9.267e-4 (13948.00)\n..   在 n-instances: 18000 处进行训练\n..   成本：2.865e-4 (13849.00)\n..   在 n-instances: 21000 处进行训练\n..   成本：8.893e-5 (13758.00)\n..   在 n-instances: 24000 处进行训练\n..   成本：2.770e-5 (13908.00)\n..   在 n-instances: 27000 处进行训练\n..   成本：8.514e-6 (13570.00)\n..   在 n-instances: 30000 处进行训练\n..   成本：2.705e-6 (13721.00)\n..   预测成本：1.426e-6 (4593.00)\n..   矫正后的预测成本：1.406e-6 (9717.00)\n..   外部内存使用情况：\n..   外部数组：216 个（已使用字节数：52,800）\n..   CUDA 内存使用情况：\n..   设备数组：148 个（已使用字节数：224,428，池化字节数：19,200）\n..   主机数组：216 个（已使用字节数：52,800）\n..   
主机到设备复制：465,818 次，设备到主机复制：371,990 次\n..\n==> (#\u003C->WEIGHT (H (H :OUTPUT)) :SIZE 1 1\u002F1 :NORM 0.10624>\n-->  #\u003C->WEIGHT (H (H :CELL)) :SIZE 1 1\u002F1 :NORM 0.94460>\n-->  #\u003C->WEIGHT ((H :CELL) (H :FORGET) :PEEPHOLE) :SIZE 1 1\u002F1 :NORM 0.61312>\n-->  #\u003C->WEIGHT (H (H :FORGET)) :SIZE 1 1\u002F1 :NORM 0.38093>\n-->  #\u003C->WEIGHT ((H :CELL) (H :INPUT) :PEEPHOLE) :SIZE 1 1\u002F1 :NORM 1.17956>\n-->  #\u003C->WEIGHT (H (H :INPUT)) :SIZE 1 1\u002F1 :NORM 0.88011>\n-->  #\u003C->WEIGHT (H PREDICTION) :SIZE 3 1\u002F1 :NORM 49.93808>\n-->  #\u003C->WEIGHT (:BIAS PREDICTION) :SIZE 3 1\u002F1 :NORM 10.98112>\n-->  #\u003C->WEIGHT ((H :CELL) (H :OUTPUT) :PEEPHOLE) :SIZE 1 1\u002F1 :NORM 0.67996>\n-->  #\u003C->WEIGHT (INPUT (H :OUTPUT)) :SIZE 1 1\u002F1 :NORM 0.65251>\n-->  #\u003C->WEIGHT (:BIAS (H :OUTPUT)) :SIZE 1 1\u002F1 :NORM 10.23003>\n-->  #\u003C->WEIGHT (INPUT (H :CELL)) :SIZE 1 1\u002F1 :NORM 5.98116>\n-->  #\u003C->WEIGHT (:BIAS (H :CELL)) :SIZE 1 1\u002F1 :NORM 0.10681>\n-->  #\u003C->WEIGHT (INPUT (H :FORGET)) :SIZE 1 1\u002F1 :NORM 4.46301>\n-->  #\u003C->WEIGHT (:BIAS (H :FORGET)) :SIZE 1 1\u002F1 :NORM 1.57195>\n-->  #\u003C->WEIGHT (INPUT (H :INPUT)) :SIZE 1 1\u002F1 :NORM 0.36401>\n-->  #\u003C->WEIGHT (:BIAS (H :INPUT)) :SIZE 1 1\u002F1 :NORM 8.63833>)\n\n|#\n```\n\n\u003Ca id=\"x-28MGL-BP-3ARNN-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **RNN** *[BPN][5187]*\n\n    循环神经网络（与前馈神经网络相对）。它通常由 [`BUILD-RNN`][764b] 构建，后者不过是一个简单的便捷宏。\n    \n    `RNN` 的输入是可变长度的序列。在每个时间步，这些序列中尚未处理的下一个元素会被设置为输入，直到批次中的所有输入序列都被处理完。为了能够进行反向传播，必须保留所有的中间 [`LUMP`][c1ac]，因此通过[时间反向传播](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBackpropagation_through_time)将递归连接展开。具体需要多少个团块取决于序列的长度。\n    \n    当创建一个 `RNN` 时，会实例化 `MAX-LAG + 1` 个 [`BPN`][5187]，以确保所有权重都存在，并可以开始训练。\n\n\u003Ca id=\"x-28MGL-BP-3AUNFOLDER-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [读取器] **UNFOLDER** *[RNN][b0f3] (:UNFOLDER)*\n\n    `RNN` 的 `UNFOLDER` 是一个无参数函数，用于构建并返回一个 [`BPN`][5187]。该解折叠器允许创建具有任意拓扑结构的网络，甚至可以根据不同的 [`TIME-STEP`][6e96] 使用不同的拓扑结构，或嵌套 `RNN`s。同名的权重会在各个折叠之间共享。也就是说，如果要创建一个 [`->WEIGHT`][b76f] 团块，而已经存在同名的权重团块，则现有的团块会被添加到由 `UNFOLDER` 创建的 `BPN` 中。\n\n\u003Ca id=\"x-28MGL-BP-3AMAX-LAG-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [读者] **MAX-LAG** *[RNN][b0f3] (:MAX-LAG = 1)*\n\n    由[`UNFOLDER`][8e53]构建的网络可能在时间步`MAX-LAG`之前包含新的权重。超过这个时间步之后，所有的权重块都必须是之前时间步中同名权重块的重复出现。大多数循环网络只引用前一个时间步的块状态（通过函数[`LAG`][ff5a]），因此默认值为1。不过，也可以建立到任意时间步的连接。创建[`RNN`][b0f3]时必须指定最大连接滞后。\n\n\u003Ca id=\"x-28MGL-BP-3ACUDA-WINDOW-START-TIME-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [访问器] **CUDA-WINDOW-START-TIME** *[RNN][b0f3] (:CUDA-WINDOW-START-TIME = \\*CUDA-WINDOW-START-TIME\\*)*\n\n    由于展开操作，[`RNN`][b0f3]的内存占用几乎与时间步数（即最大序列长度）成正比。对于预测任务，这可以通过[时间扭曲][d0e3]来解决。而对于训练，我们不能丢弃先前时间步的结果，因为它们对反向传播至关重要，但我们至少可以在这些结果暂时不再使用时将其移出GPU内存，并在需要时再复制回来。显然，这一功能仅在使用CUDA时才有意义。\n\n    如果`CUDA-WINDOW-START-TIME`为`NIL`，则此功能关闭。否则，在训练过程中，当达到或超过`CUDA-WINDOW-START-TIME`的时间步时，属于非权重块的矩阵可能会被强制移出GPU内存，并在需要时再重新加载。\n\n    该功能是通过`MGL-MAT:WITH-SYNCING-CUDA-FACETS`实现的，它利用CUDA主机内存（也称为*页锁定*或*固定内存*）来进行异步复制，同时进行正常的计算。其结果是，现在不受限制的是主内存的使用量，再加上页锁定机制，这使得系统很容易被耗尽而停止运行。请务必注意这一点。\n\n\u003Ca id=\"x-28MGL-BP-3A-2ACUDA-WINDOW-START-TIME-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*CUDA-WINDOW-START-TIME\\*** *NIL*\n\n    是[`CUDA-WINDOW-START-TIME`][f573]的默认值。\n\n\u003Ca id=\"x-28MGL-BP-3ABUILD-RNN-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **BUILD-RNN** *(&KEY RNN (CLASS ''RNN) NAME INITARGS MAX-N-STRIPES (MAX-LAG 1)) &BODY 
BODY*\n\n    创建一个具有`MAX-N-STRIPES`和`MAX-LAG`的`RNN`，其[`UNFOLDER`][8e53]是将`BODY`包裹在一个 lambda 中得到的函数。作为`RNN`参数给出的符号会被绑定到该`RNN`对象上，以便`BODY`可以访问它。\n\n\u003Ca id=\"x-28MGL-BP-3ALAG-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **LAG** *NAME &KEY (LAG 1) RNN PATH*\n\n    在`RNN`中（若其为`NIL`，则指正在通过添加新的 [`BPN`][5187] 进行扩展（这一过程称为*展开*）的那个`RNN`）查找名为`NAME`的 [`CLUMP`][a4fe]：它位于比当前正在添加的`BPN`早`LAG`个时间步的`BPN`中。如果此函数是从`RNN`的[`UNFOLDER`][8e53]调用的（这是在[`BUILD-RNN`][764b]主体中幕后发生的情况），则返回一个表示到该块的延迟连接的不透明对象；否则，直接返回该`CLUMP`本身。\n    \n    FIXDOC：`PATH`\n\n\u003Ca id=\"x-28MGL-BP-3ATIME-STEP-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **TIME-STEP** *&KEY (RNN \*RNN\*)*\n\n    返回`RNN`当前正在执行或被展开的时间步。当`RNN`首次被展开时，该值为0。\n\n\u003Ca id=\"x-28MGL-CORE-3ASET-INPUT-20-28METHOD-20-28T-20MGL-BP-3ARNN-29-29-29\">\u003C\u002Fa>\n\n- [方法] **SET-INPUT** *INSTANCES (RNN RNN)*\n\n    `RNN`s像[`FNN`][9de4]一样，也是以实例批次为单位进行操作。但这里的实例更类似于数据集：序列或采样器，它们会通过[`MAP-DATASETS`][765c]（以`:IMPUTE` `NIL`）转换为实例批次的序列。例如，索引为 2 的实例批次会通过`SET-INPUT`被固定到`RNN`的第 2 个时间步的`BPN`上。\n\n    当批次中的输入序列长度不一致时，已经耗尽的序列会在上述过程中产生`NIL`（由于`:IMPUTE`设置为`NIL`）。当这样的`NIL`通过`SET-INPUT`被固定到`RNN`的`BPN`上时，`SET-INPUT`必须将->ERROR块的[`IMPORTANCE`][038e]设置为 0，否则训练过程就会受到之前调用遗留下来的噪声干扰。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-RNN-TIME-WARP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 时间扭曲\n\n每个时间步分配一个[`BPN`][5187]的[`RNN`][b0f3]会导致内存使用量无限制增长，这可能成为一个问题。对于训练而言，由于梯度通常需要从最后一个时间步反向传播到最开始的时间步，这个问题很难解决，但借助[`CUDA-WINDOW-START-TIME`][f573]，限制已不再局限于GPU内存。\n\n另一方面，对于预测任务来说，没有必要无限期地保留旧的时间步：当未来的时间步不会再引用它们时，就可以将其丢弃。\n\n\u003Ca id=\"x-28MGL-BP-3A-2AWARP-TIME-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\*WARP-TIME\*** *NIL*\n\n    控制是否启用时间扭曲（参见[时间扭曲][d0e3]）。不要在训练时启用它，因为这会使反向传播无法进行。\n\n\u003Ca id=\"x-28MGL-BP-3AWARPED-TIME-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **WARPED-TIME** *&KEY (RNN \*RNN\*) (TIME (TIME-STEP :RNN RNN)) (LAG 0)*\n\n    返回 [`BPN`][5187] 在 `RNN` 的 [`CLUMPS`][f7c1] 中的索引，该 `BPN` 的任务是在 `(- (TIME-STEP RNN) LAG)` 时刻执行计算。通常情况下，这与 [`TIME-STEP`][6e96] 相同（忽略 `LAG`）。也就是说，可以通过 `TIME-STEP` 来索引 `CLUMPS` 以获取对应的 `BPN`。然而，当 [`*WARP-TIME*`][ed4f] 为真时，执行会按照网络结构允许的方式循环进行。\n    \n    假设我们有一个典型的 `RNN`，它只引用前一个时间步，因此其 [`MAX-LAG`][084d] 为 1。它的 [`UNFOLDER`][8e53] 会返回结构相同但时间延迟连接有所偏移的 `BPN`，除了第一个之外。因此，[`WARP-START`][d6e0] 和 [`WARP-LENGTH`][51d5] 都为 1。如果 `*WARP-TIME*` 为 `NIL`，那么从 `TIME-STEP` 到 `CLUMPS` 中 `BPN` 的映射就非常直接：\n    \n        时间:   |  0 |  1 |  2 |  3 |  4 |  5\n        --------+----+----+----+----+----+----\n        变形后: |  0 |  1 |  2 |  3 |  4 |  5\n        --------+----+----+----+----+----+----\n        BPN:    | b0 | b1 | b2 | b3 | b4 | b5\n    \n    而当 `*WARP-TIME*` 为真时，则会循环复用 `B1`、`B2` 这两个 `BPN`：\n    \n        时间:   |  0 |  1 |  2 |  3 |  4 |  5\n        --------+----+----+----+----+----+----\n        变形后: |  0 |  1 |  2 |  1 |  2 |  1\n        --------+----+----+----+----+----+----\n        BPN:    | b0 | b1 | b2 | b1*| b2 | b1*\n    \n    其中，`B1*` 与 `B1` 是同一个 `BPN`，但其由 `LAG` 创建的连接经过变形时间后，最终指向 `B2`。这样，内存消耗就与处理序列或进行预测所需的时间步数无关了。\n    \n    为了实现这一技巧，在实例化 `RNN` 时必须指定 `WARP-START` 和 `WARP-LENGTH`。一般来说，当 `*WARP-TIME*` 为真时，需要 `(+ WARP-START (MAX 2 WARP-LENGTH))` 个 `BPN`。这里的下限之所以是 2，是因为当循环长度为 1 时，某个 `BPN` 就需要从自身获取输入，而这是有问题的，因为它仅为其一组值分配了 [`NODES`][cc1c]。\n\n\u003Ca id=\"x-28MGL-BP-3AWARP-START-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [读取器] **WARP-START** *[RNN][b0f3] (:WARP-START = 1)*\n\n    表示从哪个 [`TIME-STEP`][6e96] 开始，[`UNFOLDER`][8e53] 将创建每隔 [`WARP-LENGTH`][51d5] 步骤就会重复的 [`BPN`][5187]。\n\n\u003Ca id=\"x-28MGL-BP-3AWARP-LENGTH-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- [读取器] **WARP-LENGTH** *[RNN][b0f3] (:WARP-LENGTH = 1)*\n\n    一个整数，表示在时间步 `I`（其中 `(\u003C= WARP-START I)`）时，[`UNFOLDER`][8e53] 所创建的 [`BPN`][5187] 与在时间步 `(+ WARP-START (MOD (- I WARP-START) WARP-LENGTH))` 时所创建的 `BPN` 完全相同，唯一的区别在于其时间延迟连接的位置有所不同。
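\n\n下面用一个说明性的小函数复现上面的映射表。这是一个假设性的示例函数（并非 MGL 导出的 API），它按照上文 `(+ WARP-START (MAX 2 WARP-LENGTH))` 的说明，把时间步折算成 `CLUMPS` 中的索引：\n\n```commonlisp\n;;; 假设性示例：*WARP-TIME* 为真时，时间步到 CLUMPS 索引映射的草图。\n;;; 循环长度至少取 2，原因见上文关于 (MAX 2 WARP-LENGTH) 的说明。\n(defun warped-index-sketch (time &key (warp-start 1) (warp-length 1))\n  (if (\u003C time warp-start)\n      time\n      (+ warp-start (mod (- time warp-start) (max 2 warp-length)))))\n\n;;; 复现上表：时间步 0..5 依次映射到 0 1 2 1 2 1。\n(loop for time below 6 collect (warped-index-sketch time))\n;;; => (0 1 2 1 2 1)\n```\n\n\u003Ca id=\"x-28MGL-BP-3ASTEP-MONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29\">\u003C\u002Fa>\n\n- 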
[accessor] **STEP-MONITORS** *[RNN][b0f3] (:STEP-MONITORS = NIL)*\n\n    在训练过程中，对应于先前时间步的展开后的 [`BPN`][5187] 可能因为不再位于 GPU 内存中而难以访问。这一考虑同样适用于预测阶段，而且还有一个额外的问题：当 [`*WARP-TIME*`][ed4f] 为真时，先前的状态会被丢弃，因此在 [`FORWARD`][c1ae] 结束后无法再收集统计信息。\n\n    如果在此槽位中添加监控对象，它们将在训练或预测期间每次调用 `FORWARD` 处理 `RNN` 时自动应用于该 `RNN`。为了便于在不同的监控集合之间切换，除了可以使用监控列表外，还可以使用符号或函数。如果是符号，则表示其 [`SYMBOL-VALUE`][cee6]；如果是函数，则必须无参数，并表示其返回值。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LUMPS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n\n\n### 11.4 块状组件\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.1 块状组件基类\n\n\u003Ca id=\"x-28MGL-BP-3ALUMP-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **LUMP** *[CLUMP][a4fe]*\n\n    `LUMP` 是神经网络中一种简单的、类似层的组件。块状组件种类繁多，每种都执行特定的操作，或者只是存储输入和权重。按照惯例，块状组件的名称通常以 `->` 作为前缀。这些组件被定义为类，同时也有与类同名的函数，以便于创建它们。这些创建函数通常带有与类初始化参数相对应的关键词参数，其中一些（主要是输入类组件）则被转换为普通的位置参数。因此，与其写\n    \n        (make-instance '->tanh :x some-input :name 'my-tanh)\n    \n    不如直接写\n    \n        (->tanh some-input :name 'my-tanh)\n    \n    在 [`BUILD-FNN`][606c] 或 [`BUILD-RNN`][764b] 中以任何方式实例化的块状组件都会自动添加到正在构建的网络中。\n\n    每个块状组件都有自己的 [`NODES`][cc1c] 和 [`DERIVATIVES`][a81b] 矩阵，用于存储前向和反向传播的结果。这与 [`BPN`][5187] 不同，后者使用的 `NODES` 和 `DERIVATIVES` 是其最后一个组成成分 [`CLUMP`][a4fe] 的。\n\n    由于块状组件几乎总是存在于 `BPN` 中，因此它们的 [`N-STRIPES`][07fb] 和 [`MAX-N-STRIPES`][91a3] 会在后台自动处理。\n\n\u003Ca id=\"x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **SIZE** *[LUMP][c1ac] (:SIZE)*\n\n    单条纹中的数值数量。\n\n\u003Ca id=\"x-28MGL-COMMON-3ADEFAULT-VALUE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **DEFAULT-VALUE** *[LUMP][c1ac] (:DEFAULT-VALUE = 0)*\n\n    在创建或调整大小时，块状组件的节点将被填充为此值。\n\n\u003Ca id=\"x-28MGL-BP-3ADEFAULT-SIZE-20GENERIC-FUNCTION-29\">\u003C\u002Fa>\n\n- [generic-function] **DEFAULT-SIZE** *LUMP*\n\n    如果在实例化时未提供 [`SIZE`][85d3]，则返回 `LUMP` 的默认值。该值通常根据输入的大小来计算。此函数用于实现新的块状组件类型。\n\n\u003Ca id=\"x-28MGL-COMMON-3ANODES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **NODES** *[LUMP][c1ac] (= NIL)*\n\n    前向传播中由块状组件计算出的值会存储在这里。对于非权重类块状组件，这是一个 `N-STRIPES * SIZE` 的矩阵，预先分配了 `MAX-N-STRIPES * SIZE` 个元素的空间。而 [`->WEIGHT`][b76f] 类型的块状组件则没有条纹，也没有形状上的限制。\n\n\u003Ca id=\"x-28MGL-BP-3ADERIVATIVES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29\">\u003C\u002Fa>\n\n- [reader] **DERIVATIVES** *[LUMP][c1ac]*\n\n    反向传播中计算出的导数会存储在这里。这个矩阵的形状和大小与 [`NODES`][d699] 非常相似。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-INPUTS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.2 输入类组件\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-INPUT-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 输入块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EINPUT-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->INPUT** *[->DROPOUT][441b]*\n\n    一个没有输入块、在前向传播中不改变其值（除非[`DROPOUT`][e7f6]不为零）且不计算导数的块。*钳位*\n    在[`SET-INPUT`][0c9e]中对输入块的[`NODES`][cc1c]进行输入限制。\n    \n    为了方便起见，`->INPUT`本身可以执行dropout，尽管默认情况下不执行dropout。\n    \n    ```common-lisp\n    (->input :size 10 :name 'some-input)\n    ==> #\u003C->INPUT SOME-INPUT :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EINPUT-29-29\">\u003C\u002Fa>\n\n- [访问器] **DROPOUT** *[->INPUT][f54e] (= NIL)*\n\n    参见[`DROPOUT`][2481]。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-EMBEDDING-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 嵌入块\n\n这个块就像一个输入块和一个简单激活块为了效率而结合在一起。\n\n\u003Ca id=\"x-28MGL-BP-3A--3EEMBEDDING-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->EMBEDDING** *[LUMP][c1ac]*\n\n    
根据 [`INPUT-ROW-INDICES`][1a52] 中的每个索引，从 [`WEIGHTS`][ab3c] 中选取对应的行。这个块等同于添加一个带有独热编码方案的[`->INPUT`][f54e]块，并在其上叠加一个[`->V*M`][dbc4]块，但由于它使用输入的稀疏表示，在执行速度和内存使用上都更加高效。\n    \n    这个块的[`SIZE`][019f]是`WEIGHTS`的列数，会自动确定。\n    \n    ```common-lisp\n    (->embedding :weights (->weight :name 'embedding-weights\n                                    :dimensions '(3 5))\n                 :name 'embeddings)\n    ==> #\u003C->EMBEDDING EMBEDDINGS :SIZE 5 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-COMMON-3AWEIGHTS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EEMBEDDING-29-29\">\u003C\u002Fa>\n\n- [读取器] **WEIGHTS** *[->EMBEDDING][f1c1] (:WEIGHTS)*\n\n    一个权重块，其行由[`INPUT-ROW-INDICES`][1a52]索引，并复制到该块的输出中。\n\n\u003Ca id=\"x-28MGL-BP-3AINPUT-ROW-INDICES-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EEMBEDDING-29-29\">\u003C\u002Fa>\n\n- [访问器] **INPUT-ROW-INDICES** *[->EMBEDDING][f1c1] (:INPUT-ROW-INDICES)*\n\n    一个批次长度的行索引序列，将在[`SET-INPUT`][0c9e]中设置。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-WEIGHT-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.3 权重块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EWEIGHT-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->WEIGHT** *[LUMP][c1ac]*\n\n    一组可优化的参数。当一个[`BPN`][5187]被训练时（参见[训练][0d82]），权重块的[`NODES`][cc1c]将会被改变。权重块不进行任何计算。\n    \n    权重可以通过指定总大小或维度来创建：\n    \n    ```common-lisp\n    (dimensions (->weight :size 10 :name 'w))\n    => (1 10)\n    (dimensions (->weight :dimensions '(5 10) :name 'w))\n    => (5 10)\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3ADIMENSIONS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EWEIGHT-29-29\">\u003C\u002Fa>\n\n- [读取器] **DIMENSIONS** *[->WEIGHT][b76f] (:DIMENSIONS)*\n\n    该块的[`NODES`][cc1c]和[`DERIVATIVES`][a81b]将按照这些维度分配。\n\n\u003Ca id=\"x-28MGL-BP-3AWITH-WEIGHTS-COPIED-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **WITH-WEIGHTS-COPIED** *(FROM-BPN) &BODY BODY*\n\n    在`BODY`中，`->WEIGHT`会首先查找`FROM-BPN`中是否存在同名的权重块，如果存在则返回该块，否则正常创建一个新的权重块。如果`FROM-BPN`为`NIL`，则不会复制任何权重。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ACTIVATIONS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.4 激活\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ACTIVATION-SUBNET-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 激活子网络\n\n我们已经有了输入。通常下一步是将输入向量与权重矩阵相乘并加上偏置。这可以直接用`->+`、[`->V*M`][dbc4]和[`->WEIGHT`][b76f]完成，但使用激活子网络可以减少样板代码，更为方便。\n\n\u003Ca id=\"x-28MGL-BP-3A--3EACTIVATION-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->ACTIVATION** *[BPN][5187]*\n\n    激活子网络由函数[`->ACTIVATION`][b602]构建，内部隐藏着多个块。最终，这个子网络会计算一个类似于`sum_i x_i * W_i + sum_j y_j .* V_j + biases`的总和，其中`x_i`是输入块，`W_i`是代表连接的稠密矩阵，而`V_j`则是以元素方式与其对应的输入`y_j`相乘的窥孔连接向量。\n\n\u003Ca id=\"x-28MGL-BP-3A--3EACTIVATION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **->ACTIVATION** *INPUTS &KEY (NAME (GENSYM)) SIZE PEEPHOLES (ADD-BIAS-P T)*\n\n    创建一个属于[`->ACTIVATION`][7162]类的子网络，用于计算来自`INPUTS`中的稠密连接块的激活，以及来自`PEEPHOLES`中的元素级连接块的激活。必要时创建新的[`->WEIGHT`][b76f]块。`INPUTS`和`PEEPHOLES`可以是一个单独的块，也可以是一组块。最后，如果`ADD-BIAS-P`为真，则还会添加一个元素级偏置。`SIZE`必须显式指定，因为如果没有窥孔连接，就无法确定它。\n    \n    ```common-lisp\n    (->activation (->input :size 10 :name 'input) :name 'h1 :size 4)\n    ==> #\u003C->ACTIVATION (H1 :ACTIVATION) :STRIPES 1\u002F1 :CLUMPS 4>\n    ```\n    \n    这是神经网络的基本工作单元，负责线性变换，并将结果传递给非线性函数（如[`->SIGMOID`][83f9]、[`->TANH`][5309]等）。\n    \n    子网络块的名字是`(,NAME :ACTIVATION)`。偏置权重块（如果有）名为`(:BIAS ,NAME)`。稠密连接权重块以输入名称和`NAME`命名：`(,(NAME INPUT) ,NAME)`，而窥孔连接权重块则命名为`(,(NAME INPUT) ,NAME :PEEPHOLE)`。这一点很有用，例如在需要对它们进行不同初始化时。
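\n\n作为这些命名规则的一个用法示意，下面的草图（假设性示例，借用前文 `FNN` 教程中出现过的 MAP-SEGMENTS，以及 MGL-MAT 的 GAUSSIAN-RANDOM!、FILL!）按名字区分偏置与稠密连接权重，分别采用不同的初始化：\n\n```commonlisp\n;;; 草图：名字形如 (:BIAS ...) 的权重块置零，\n;;; 其余稠密连接权重用小方差的高斯分布初始化。\n(defun init-weights-by-name-sketch (bpn)\n  (map-segments (lambda (weights)\n                  (if (and (listp (name weights))\n                           (eq (first (name weights)) :bias))\n                      (fill! 0 (nodes weights))\n                      (gaussian-random! (nodes weights) :stddev 0.01)))\n                bpn))\n```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-BATCH-NORMALIZATION-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 批量归一化\n\n\u003Ca 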
id=\"x-28MGL-BP-3A--3EBATCH-NORMALIZED-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->BATCH-NORMALIZED** *[LUMP][c1ac]*\n\n这是[批量归一化论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03167)第3版的实现。`->BATCH-NORMALIZED` 的输出是对其输入进行归一化后的结果，使得所有元素在条带维度上的均值为零、方差为1。也就是说，从输入中减去批次均值，并根据样本标准差对其进行缩放。实际上，在归一化步骤之后，数值会再次通过学习得到的参数进行缩放和偏移（但这次使用的是可学习的参数），以保持模型的表示能力不变。该模块的主要目的是加速训练，同时它也起到正则化的作用。详细信息请参阅论文。\n\n要在不引入额外正则化效果的情况下对 `LUMP` 的输出进行归一化：\n\n```commonlisp\n(->batch-normalized lump :batch-size :use-population)\n```\n\n上述代码使用指数移动平均来估计批次的均值和方差，并在训练和测试时都使用这些估计值。与此不同的是，发表版本在训练时使用当前批次的样本均值和方差，这会在过程中引入噪声。噪声的程度与批次大小成反比，批次越小，噪声越大，从而产生正则化效果。这是默认行为（等同于 `:BATCH-SIZE NIL`）：\n\n```commonlisp\n(->batch-normalized lump)\n```\n\n出于性能考虑，有时希望在一个批次中处理更多的实例（按照 [`N-STRIPES`][8dd7] 的概念），同时获得较小批次大小带来的正则化效果。可以通过将 `:BATCH-SIZE` 设置为条带数量的因数来实现。例如，假设条带数量为128，但我们希望获得与批次大小为32时相同的正则化效果：\n\n```commonlisp\n(->batch-normalized lump :batch-size 32)\n```\n\n`->BATCH-NORMALIZED` 的主要输入通常是 `->ACTIVATION`（[`0`][7162] 或 [`1`][b602]），其输出会传递给激活函数（参见 [Activation Functions][5d86]）。\n\n\u003Ca id=\"x-28MGL-BP-3ABATCH-NORMALIZATION-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZED-29-29\">\u003C\u002Fa>\n\n- [reader] **BATCH-NORMALIZATION** *[->BATCH-NORMALIZED][9da9] (:NORMALIZATION)*\n\n    此模块的 [`->BATCH-NORMALIZATION`][c469]。可以被多个 [`->BATCH-NORMALIZED`][9da9] 模块共享。\n    \n    批量归一化的一个特殊之处在于，它除了计算结果（[`NODES`][cc1c]）及其导数（[`DERIVATIVES`][a81b]）之外，还具有状态。这些状态包括其输入的估计均值和方差，它们被封装在 `->BATCH-NORMALIZATION` 中。\n    \n    如果在实例化时未指定 `NORMALIZATION`，则会自动创建一个新的 `->BATCH-NORMALIZATION` 对象，并将 `:BATCH-SIZE`、`:VARIANCE-ADJUSTMENT` 和 `:POPULATION-DECAY` 参数传递给 `->BATCH-NORMALIZATION`。请参阅 [`BATCH-SIZE`][c918]、[`VARIANCE-ADJUSTMENT`][aa86] 和 [`POPULATION-DECAY`][46c4]。新的尺度和偏移权重模块将以以下名称创建：\n    \n        `(,name :scale)\n        `(,name :shift)\n    \n    其中 `NAME` 是此模块的 [`NAME`][5842]。\n    \n    这种默认行为适用于 `->BATCH-NORMALIZATION` 所保存的统计信息仅在 [`RNN`][b0f3] 的各个时间步之间共享的情况。\n\n\u003Ca id=\"x-28MGL-BP-3A--3EBATCH-NORMALIZATION-20CLASS-29\">\u003C\u002Fa>\n\n- [class] **->BATCH-NORMALIZATION** *[->WEIGHT][b76f]*\n\n    该类的主要用途是存储待归一化输入的估计均值和方差，并允许这些信息在多个执行计算的 [`->BATCH-NORMALIZED`][9da9] 模块之间共享。这些估计值由 [`SAVE-STATE`][c102] 和 [`LOAD-STATE`][6bd7] 进行保存和加载。\n    \n    ```commonlisp\n    (->batch-normalization (->weight :name '(h1 :scale) :size 10)\n                           (->weight :name '(h1 :shift) :size 10)\n                           :name '(h1 :batch-normalization))\n    ```\n\n\u003Ca id=\"x-28MGL-COMMON-3ASCALE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **SCALE** *[->BATCH-NORMALIZATION][c469] (:SCALE)*\n\n    一个与 [`SHIFT`][7960] 大小相同的权重模块。这就是论文中的 $\\gamma$。\n\n\u003Ca id=\"x-28MGL-BP-3ASHIFT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **SHIFT** *[->BATCH-NORMALIZATION][c469] (:SHIFT)*\n\n    一个与 [`SCALE`][8970] 大小相同的权重模块。这就是论文中的 $\\beta$。\n\n\u003Ca id=\"x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **BATCH-SIZE** *[->BATCH-NORMALIZATION][c469] (:BATCH-SIZE = NIL)*\n\n    通常情况下，所有条带都会参与批次计算。降低条带数量可能会增加正则化效果，但也会使计算效率降低。通过将 `BATCH-SIZE` 设置为 [`N-STRIPES`][8dd7] 的因数，可以将效率问题与正则化问题分开考虑。默认值 `NIL` 等同于 `N-STRIPES`。`BATCH-SIZE` 只影响训练过程。\n    \n    使用特殊值 `:USE-POPULATION` 时，归一化将不再使用当前批次的均值和方差，而是使用总体统计信息。这将有效地消除正则化效果，只留下加速训练的作用。\n\n\u003Ca id=\"x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **VARIANCE-ADJUSTMENT** 
*[->BATCH-NORMALIZATION][c469] (:VARIANCE-ADJUSTMENT = 1.0e-4)*\n\n    一个很小的正实数，会被加到样本方差上。这就是论文中的 $\\epsilon$。\n\n\u003Ca id=\"x-28MGL-BP-3APOPULATION-DECAY-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29\">\u003C\u002Fa>\n\n- [reader] **POPULATION-DECAY** *[->BATCH-NORMALIZATION][c469] (:POPULATION-DECAY = 0.99)*\n\n    在训练过程中，会不断更新批次均值和标准差的指数移动平均值（称为“总体统计信息”）。在进行预测时，会使用这些统计信息来进行归一化。这些总体统计信息会通过 [`SAVE-STATE`][c102] 被持久化保存。\n\n\u003Ca id=\"x-28MGL-BP-3A--3EBATCH-NORMALIZED-ACTIVATION-20FUNCTION-29\">\u003C\u002Fa>\n\n- [function] **->BATCH-NORMALIZED-ACTIVATION** *INPUTS &KEY (NAME (GENSYM)) SIZE PEEPHOLES BATCH-SIZE VARIANCE-ADJUSTMENT POPULATION-DECAY*\n\n一个工具函数，用于创建并包装一个 `->ACTIVATION`（[`0`][7162] [`1`][b602]），使其包含\n    [`->BATCH-NORMALIZED`][9da9]，并通过其[`BATCH-NORMALIZATION`][eaf1] 对缩放和偏移参数的两个权重块进行归一化。\n    `(->BATCH-NORMALIZED-ACTIVATION INPUTS :NAME 'H1 :SIZE 10)` 等价于：\n    \n    ```commonlisp\n    (->batch-normalized (->activation inputs :name 'h1 :size 10 :add-bias-p nil)\n                        :name '(h1 :batch-normalized-activation))\n    ```\n    \n    注意，偏置被关闭了，因为归一化无论如何都会将其抵消（但会添加一个偏移量，其效果与偏置相同）。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ACTIVATION-FUNCTIONS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.5 激活函数\n\n现在我们进入最重要的非线性变换阶段，这些变换将应用于激活值。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SIGMOID-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Sigmoid 块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESIGMOID-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SIGMOID** *[->DROPOUT][441b] [LUMP][c1ac]*\n\n    对其输入逐元素应用 `1\u002F(1 + e^{-x})` 函数。这是神经网络中经典的非线性激活函数之一。\n    \n    为了方便起见，`->SIGMOID` 可以自行执行丢弃操作，但默认情况下不进行丢弃。\n    \n    ```common-lisp\n    (->sigmoid (->activation (->input :size 10) :size 5) :name 'this)\n    ==> #\u003C->SIGMOID THIS :SIZE 5 1\u002F1 :NORM 0.00000>\n    ```\n    \n    该块的[`SIZE`][019f] 是其输入的大小，由系统自动确定。\n\n\u003Ca id=\"x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESIGMOID-29-29\">\u003C\u002Fa>\n\n- [访问器] **DROPOUT** *[->SIGMOID][83f9] (= NIL)*\n\n    参见[`DROPOUT`][2481]。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-TANH-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Tanh 块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ETANH-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->TANH** *[LUMP][c1ac]*\n\n    对其输入逐元素应用[`TANH`][993b] 函数。该块的[`SIZE`][019f] 是其输入的大小，由系统自动确定。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SCALED-TANH-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 缩放 Tanh 块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESCALED-TANH-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SCALED-TANH** *[LUMP][c1ac]*\n\n    类似于[`TANH`][993b]，但其输入和输出经过缩放处理，使得当输入的方差接近 1 时，输出的方差也接近 1，这一特性有助于缓解梯度消失问题。实际函数为 `1.7159 * tanh(2\u002F3 * x)`。该块的[`SIZE`][019f] 是其输入的大小，由系统自动确定。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-RELU-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Relu 块\n\n此时大约是 2007 年左右。\n\n\u003Ca id=\"x-28MGL-BP-3A--3ERELU-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->RELU** *[LUMP][c1ac]*\n\n    `max(0,x)` 激活函数。需要注意的是，Relu 单元可能会陷入关闭状态：如果它们的值变得过于负数，就很难再恢复到激活状态。该块的[`SIZE`][019f] 是其输入的大小，由系统自动确定。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MAX-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Max 块\n\n时间大约在 2011 年左右。\n\n\u003Ca id=\"x-28MGL-BP-3A--3EMAX-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->MAX** *[LUMP][c1ac]*\n\n    这基本上就是没有丢弃操作的 maxout 层（参见 http:\u002F\u002Farxiv.org\u002Fabs\u002F1302.4389）。它将输入按[`GROUP-SIZE`][59dd] 分组，并输出每个组的最大值。\n    输出的[`SIZE`][019f] 会自动计算，等于输入大小除以[`GROUP-SIZE`][59dd]。\n    \n    ```common-lisp\n    (->max (->input :size 120) :group-size 3 :name 'my-max)\n    ==> #\u003C->MAX 
MY-MAX :SIZE 40 1\u002F1 :NORM 0.00000 :GROUP-SIZE 3>\n    ```\n    \n    与[`->RELU`][9d3a] 相比，`->MAX` 的优势在于梯度永远不会被阻断，因此不存在单元陷入关闭状态的问题。\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-29-29\">\u003C\u002Fa>\n\n- [读取器] **GROUP-SIZE** *[->MAX][f652] (:GROUP-SIZE)*\n\n    每个组中的输入数量。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MIN-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Min 块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EMIN-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->MIN** *[LUMP][c1ac]*\n\n    与[`->MAX`][f652] 类似，但它计算的是各组的[`MIN`][115e]。通常用处较少。\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMIN-29-29\">\u003C\u002Fa>\n\n- [读取器] **GROUP-SIZE** *[->MIN][9a84] (:GROUP-SIZE)*\n\n    每个组中的输入数量。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-MAX-CHANNEL-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Max-Channel 块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EMAX-CHANNEL-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->MAX-CHANNEL** *[LUMP][c1ac]*\n\n    在文献中被称为 LWTA（局部胜者全得）或 Channel-Out（参见 http:\u002F\u002Farxiv.org\u002Fabs\u002F1312.1909）。它本质上是[`->MAX`][f652]，但不同之处在于，它不是为每个组生成一个输出，而是只对组中数值最大的单元输出 1，其余单元输出 0。这样可以让下一层了解信息流动的路径。该块的[`SIZE`][019f] 是其输入的大小，由系统自动确定。\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-CHANNEL-29-29\">\u003C\u002Fa>\n\n- [读取器] **GROUP-SIZE** *[->MAX-CHANNEL][6021] (:GROUP-SIZE)*\n\n    每个组中的输入数量。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LOSSES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.6 损失函数\n\n最终，我们需要告诉网络应该学习什么，这就意味着需要在网络中构建一个要最小化的损失函数。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LOSS-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 损失块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ELOSS-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->LOSS** *[->SUM][edcf]*\n\n    计算批次中各个样本的损失。该块的主要作用是提供训练信号。\n    \n    误差块通常是块图中的叶节点（即没有其他块将其作为输入）。误差块的一个特殊之处在于，其导数会自动加上 1（但请参阅[`IMPORTANCE`][038e]）。误差块恰好有一个节点（每条带）的值是其输入块中所有节点值的总和。\n\n\u003Ca id=\"x-28MGL-BP-3AIMPORTANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ELOSS-29-29\">\u003C\u002Fa>\n\n- [访问器] **IMPORTANCE** *[->LOSS][2171] (:IMPORTANCE = NIL)*\n\n    用于支持加权样本。也就是说，并非所有训练样本都同等重要。如果设置为非 `NIL`，将会提供一个 1 维的 `MAT`，其中包含批次中各条带的重要性权重。当提供了`IMPORTANCE`（通常通过[`SET-INPUT`][0c9e] 设置）时，就不会简单地给所有条带的导数加上 1，而是按照`IMPORTANCE` 的值逐元素相加。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SQUARED-DIFFERENCE-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 平方差块\n\n在回归任务中，平方误差损失是最常用的。平方误差损失可以通过将[`->SQUARED-DIFFERENCE`][e8d2]与[`->LOSS`][2171]结合来构建。\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESQUARED-DIFFERENCE-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SQUARED-DIFFERENCE** *[LUMP][c1ac]*\n\n    该块接受两个输入块，并以逐元素的方式计算它们的平方差 `(x - y)^2`。此块的[`SIZE`][019f]会自动根据其输入的大小确定。该块通常会被送入[`->LOSS`][2171]，后者会对平方差求和，使其成为待最小化的函数的一部分。\n    \n    ```common-lisp\n    (->loss (->squared-difference (->activation (->input :size 100)\n                                                :size 10)\n                                  (->input :name 'target :size 10))\n            :name 'squared-error)\n    ==> #\u003C->LOSS SQUARED-ERROR :SIZE 1 1\u002F1 :NORM 0.00000>\n    ```\n    \n    目前该块尚未进行CUDA加速，但如果需要，它会将数据从GPU复制过来。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SOFTMAX-XE-LOSS-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Softmax交叉熵损失块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESOFTMAX-XE-LOSS-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SOFTMAX-XE-LOSS** *[LUMP][c1ac]*\n\n    这是一个专用块，在前向传播中计算其输入的Softmax值，并反向传播交叉熵损失。将这两步合并的好处在于数值稳定性。总交叉熵是按每组[`GROUP-SIZE`][a437]个元素计算的交叉熵之和：\n    \n    $$\n    XE(x) = - \\sum_{i=1,g} t_i \\ln(s_i),\n    $$\n    \n    其中 `g` 
是类别数（[`GROUP-SIZE`][a437]），`t_i` 是目标值（即该类别的真实概率，通常只有一个为1，其余为0），`s_i` 是由输入 `X` 计算出的Softmax输出：\n    \n    $$\n    s_i = {softmax}(x_1, x_2, ..., x_g) =\n      \frac{e^{x_i}}{\sum_{j=1,g} e^{x_j}}\n    $$\n    \n    换句话说，在前向阶段，该块接收输入 `X`，对其每个元素执行[`EXP`][bc8c]操作，然后对每组[`GROUP-SIZE`][a437]个元素进行归一化，使其总和为1，得到Softmax结果，该结果会传递到[`NODES`][cc1c]中。在反向传播阶段，梯度来源有两个：使用该块输出作为输入的其他块（目前尚未实现，会导致错误）以及隐式的交叉熵损失。\n    \n    可以通过调用该块上的[`COST`][410c]来获取最近一次前向传播中计算出的交叉熵。\n    \n    这是分类任务中最常见的损失函数，几乎无处不在。有关该损失函数与[`SET-INPUT`][0c9e]如何协同工作的详细信息，请参阅[`FNN`教程][6b38]和[`RNN`教程][9700]。\n\n\u003Ca id=\"x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29\">\u003C\u002Fa>\n\n- [读取器] **GROUP-SIZE** *[->SOFTMAX-XE-LOSS][85d34] (:GROUP-SIZE)*\n\n    Softmax分组中的元素数量。对于分类任务来说，这即是类别数。通常情况下，`GROUP-SIZE` 等于 [`SIZE`][019f]（默认设置），但总体而言，唯一的约束条件是 `SIZE` 必须是 `GROUP-SIZE` 的整数倍。\n\n\u003Ca id=\"x-28MGL-COMMON-3ATARGET-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29\">\u003C\u002Fa>\n\n- [访问器] **TARGET** *[->SOFTMAX-XE-LOSS][85d34] (:TARGET = NIL)*\n\n    在[`SET-INPUT`][0c9e]中设置，目标可以是与输入块 `X` 大小相同的 `MAT`，或者如果目标非常稀疏，也可以是一个批次长度的序列，其中包含非零条目的索引值对：\n    \n        (;; 批次中的第一个实例有两个非零目标\n         (;; 类别10有30%的预期概率\n          (10 . 0.3)\n          ;; 类别2有70%的预期概率\n          (2 .  0.7))\n         ;; 批次中的第二个实例将100%的概率分配给类别7\n         7\n         ;; 批次中还有更多实例…\n         ...)\n    \n    实际上，在极少数情况下，如果[`GROUP-SIZE`][a437]不等于[`SIZE`][019f]（即每个样本有多个Softmax归一化组），那么上述目标序列的长度将是[`BATCH-SIZE`][fa6d]乘以组数。索引始终相对于该组的起始位置。\n    \n    如果[`GROUP-SIZE`][a437]很大（例如，在具有大量词汇的神经语言模型中），使用稀疏目标可以使计算速度大大加快，因为导数的计算不再是二次复杂度。\n    \n    隐式支持对不同训练实例赋予不同的权重。虽然一个组内的目标值之和应为1，但将所有目标值乘以权重 `W` 等同于在同一实例上训练 `W` 次。\n\n\u003Ca id=\"x-28MGL-BP-3AENSURE-SOFTMAX-TARGET-MATRIX-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **ENSURE-SOFTMAX-TARGET-MATRIX** *SOFTMAX-XE-LOSS N*\n\n    将`SOFTMAX-XE-LOSS`的[`TARGET`][b5c7]设置为一个能够容纳 `N` 个条带的密集目标值的 `MAT`。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-STOCHASTICITY-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.7 随机性\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-DROPOUT-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### Dropout块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EDROPOUT-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->DROPOUT** *[LUMP][c1ac]*\n\n    该块的输出与其输入完全相同，只是在训练过程中会随机地将其中一部分置零，从而起到非常强的正则化作用。请参考Geoffrey Hinton的文章《通过防止特征检测器的协同适应来改进神经网络》。\n    \n    该块的[`SIZE`][019f]与其输入的大小相同，且会自动确定。\n\n\u003Ca id=\"x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EDROPOUT-29-29\">\u003C\u002Fa>\n\n- [访问器] **DROPOUT** *[->DROPOUT][441b] (:DROPOUT = 0.5)*\n\n    如果不为`NIL`，则在前向传播中以`DROPOUT`概率将该块中的每个节点置零。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-GAUSSIAN-RANDOM-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 高斯随机块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EGAUSSIAN-RANDOM-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->GAUSSIAN-RANDOM** *[LUMP][c1ac]*\n\n    该块没有输入，它会生成服从正态分布的独立随机数，均值为[`MEAN`][d96a]，方差为[`VARIANCE`][404c]（或[`VARIANCE-FOR-PREDICTION`][80e2]）。这对于基于噪声的正则化方法来说是非常有用的构建模块。\n    \n    ```common-lisp\n    (->gaussian-random :size 10 :name 'normal :mean 1 :variance 2)\n    ==> #\u003C->GAUSSIAN-RANDOM NORMAL :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3AMEAN-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29\">\u003C\u002Fa>\n\n- [访问器] **MEAN** *[->GAUSSIAN-RANDOM][feaa] (:MEAN = 0)*\n\n    正态分布的均值。\n\n\u003Ca id=\"x-28MGL-BP-3AVARIANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29\">\u003C\u002Fa>\n\n- [访问器] **VARIANCE** *[->GAUSSIAN-RANDOM][feaa] (:VARIANCE = 1)*\n\n    正态分布的方差。
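\n\n作为用法示意，下面是一个最小草图（假设性示例）：用 `->GAUSSIAN-RANDOM` 与 `->*`（见下文算术运算一节）给输入乘上均值为 1 的噪声，实现基于噪声的正则化。由于 `VARIANCE-FOR-PREDICTION`（见下一个条目）默认为 0，预测时噪声块会退化为常数 1，相当于自动关闭噪声：\n\n```commonlisp\n;;; 草图（假设性示例）：乘性高斯噪声作为正则化手段。\n;;; 训练时 NOISE 的方差为 0.1；预测时按 VARIANCE-FOR-PREDICTION = 0\n;;; 退化为常数 1。\n(defun make-noisy-fnn-sketch ()\n  (build-fnn ()\n    (input (->input :size 10 :name 'input))\n    (noise (->gaussian-random :size 10 :name 'noise :mean 1 :variance 0.1))\n    (noisy (->* input noise :name 'noisy))\n    (prediction (->softmax-xe-loss\n                 (->activation noisy :name 'prediction :size 3)))))\n```\n\n\u003Ca 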
id=\"x-28MGL-BP-3AVARIANCE-FOR-PREDICTION-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29\">\u003C\u002Fa>\n\n- [访问器] **预测用方差** *[->高斯随机][feaa] (:预测用方差 = 0)*\n\n    如果不为 `NIL`，则此值会在非训练期间（即进行预测时）覆盖 [`方差`][404c]。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SAMPLE-BINARY-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 二值采样块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESAMPLE-BINARY-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SAMPLE-BINARY** *[LUMP][c1ac]*\n\n    将其输入的值视为概率，独立地进行二项式采样。将 `true` 转换为 1，`false` 转换为 0。该块的 [`SIZE`][019f] 会根据其输入的大小自动确定。\n    \n    ```common-lisp\n    (->sample-binary (->input :size 10) :name 'binarized-input)\n    ==> #\u003C->SAMPLE-BINARY BINARIZED-INPUT :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ARITHMETIC-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.8 算术运算\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SUM-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 求和块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESUM-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SUM** *[LUMP][c1ac]*\n\n    对其输入的每个条带的所有节点求和。该块的 [`SIZE`][019f] 始终为 1。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-V-2AM-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 向量-矩阵乘法块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EV-2AM-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->V\\*M** *[LUMP][c1ac]*\n\n    执行 `X * WEIGHTS`，其中 `X`（输入）的大小为 `M`，而 [`WEIGHTS`][ab3c] 是一个 [`->WEIGHT`][b76f]，其唯一的条带被视为尺寸为 `M x N` 的矩阵，按行优先顺序存储。`N` 是该块的大小。如果 [`TRANSPOSE-WEIGHTS-P`][533e] 为真，则 `WEIGHTS` 的尺寸为 `N x M`，并计算 `X * WEIGHTS'`。\n\n\u003Ca id=\"x-28MGL-COMMON-3AWEIGHTS-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29\">\u003C\u002Fa>\n\n- [读取器] **WEIGHTS** *[->V\\*M][dbc4] (:WEIGHTS)*\n\n    一个 [`->WEIGHT`][b76f] 块。\n\n\u003Ca id=\"x-28MGL-BP-3ATRANSPOSE-WEIGHTS-P-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29\">\u003C\u002Fa>\n\n- [读取器] **TRANSPOSE-WEIGHTS-P** *[->V\\*M][dbc4] (:TRANSPOSE-WEIGHTS-P = NIL)*\n\n    决定是将输入与 [`WEIGHTS`][ab3c] 相乘，还是与其转置相乘。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP--2B-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 元素级加法块\n\n\u003Ca id=\"x-28MGL-BP-3A--3E-2B-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->+** *[LUMP][c1ac]*\n\n    对其输入的块进行元素级加法。如果至少有一个输入，该块的 [`SIZE`][019f] 会自动从输入的大小中确定。如果其中一个输入是 [`->WEIGHT`][b76f] 块，则它会被加到每一个条带上。\n    \n    ```common-lisp\n    (->+ (list (->input :size 10) (->weight :size 10 :name 'bias))\n         :name 'plus)\n    ==> #\u003C->+ PLUS :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP--2A-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 元素级乘法块\n\n\u003Ca id=\"x-28MGL-BP-3A--3E-2A-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->\\*** *[LUMP][c1ac]*\n\n    对其两个输入的块进行元素级乘法。该块的 [`SIZE`][019f] 会自动从输入的大小中确定。任一输入都可以是 [`->WEIGHT`][b76f] 块。\n    \n    ```common-lisp\n    (->* (->input :size 10) (->weight :size 10 :name 'scale)\n         :name 'mult)\n    ==> #\u003C->* MULT :SIZE 10 1\u002F1 :NORM 0.00000>\n    ```\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-ABS-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 绝对值块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EABS-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->ABS** *[LUMP][c1ac]*\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-EXP-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 指数块\n\n\u003Ca id=\"x-28MGL-BP-3A--3EEXP-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->EXP** *[LUMP][c1ac]*\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-NORMALIZED-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 归一化块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ENORMALIZED-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->NORMALIZED** *[LUMP][c1ac]*\n\n\u003Ca 
id=\"x-28MGL-BP-3A-40MGL-BP-SINE-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 正弦块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESIN-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SIN** *[LUMP][c1ac]*\n\n    对其输入以元素级方式应用 [`SIN`][ece2] 函数。该块的 [`SIZE`][019f] 为其输入的大小，且自动确定。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-RNN-OPERATIONS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n#### 11.4.9 用于 `RNN` 的操作\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-LSTM-SUBNET-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### LSTM 子网络\n\n\u003Ca id=\"x-28MGL-BP-3A--3ELSTM-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->LSTM** *[BPN][5187]*\n\n    长短期记忆子网络由函数 [`->LSTM`][2823] 构建，内部包含许多块。这些块被封装成一个子网络，以减少复杂性。\n\n\u003Ca id=\"x-28MGL-BP-3A--3ELSTM-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **->LSTM** *INPUTS &KEY NAME CELL-INIT OUTPUT-INIT SIZE (ACTIVATION-FN '->ACTIVATION) (GATE-FN '->SIGMOID) (INPUT-FN '->TANH) (OUTPUT-FN '->TANH) (PEEPHOLES T)*\n\n创建一个由输入门、遗忘门和输出门组成的 LSTM 层，这些门会对输入、单元状态和输出进行缩放。会生成多个小块，其中代表 LSTM 输出的最后一个名为 `NAME`，其余的小块则根据 `NAME` 自动命名。该函数仅返回输出小块 (`m`)，但所有生成的小块都会自动添加到正在构建的 [`BPN`][5187] 中。\n\n关于 LSTM 有许多论文和教程。本版本在《用于大规模声学建模的长短期记忆循环神经网络架构》（2014 年，Hasim Sak、Andrew Senior、Francoise Beaufays）中有详细描述。使用该论文中的符号：\n\n$$\ni_t = s(W\\_{ix} x\\_t + W\\_{im} m\\_{t-1} + W\\_{ic} \\odot\nc\\_{t-1} + b\\_i)\n$$\n\n$$\nf\\_t = s(W\\_{fx} x\\_t + W\\_{fm} m\\_{t-1} + W\\_{fc} \\odot\nc\\_{t-1} + b\\_f)\n$$\n\n$$\nc\\_t = f\\_t \\odot c\\_{t-1} + i\\_t \\odot g(W\\_{cx} x\\_t +\nW\\_{cm} m\\_{t-1} + b\\_c)\n$$\n\n$$\no\\_t = s(W\\_{ox} x\\_t + W\\_{om} m\\_{t-1} + W\\_{oc} \\odot\nc\\_t + b\\_o)\n$$\n\n$$\nm\\_t = o\\_t \\odot h(c\\_t),\n$$\n\n其中 `i`、`f` 和 `o` 分别是输入门、遗忘门和输出门。`c` 是单元状态，而 `m` 则是实际的输出。\n\n从 `c` 连接的权重矩阵（`W_ic`、`W_fc` 和 `W_oc`）为对角矩阵，仅用对角线元素的向量表示。这些连接仅在 `PEEPHOLES` 为真时才会被添加。\n\n与论文的一个显著区别在于，除了可以作为一个单独的小块外，`x_t`（`INPUTS`）也可以是一个小块列表。每当需要基于 `x_t` 计算激活值时，实际上都是各个激活值的总和。例如，`W_ix * x_t` 实际上是 `sum_j W_ijx * inputs_j`。\n\n如果 `CELL-INIT` 不为 `NIL`，则它必须是一个大小为 `SIZE` 的 [`CLUMP`][a4fe]，用来表示初始的细胞状态 (`c_{-1}`)。若 `CELL-INIT` 为 `NIL`，则等同于所有状态均为零。\n\n`ACTIVATION-FN` 默认为 `->ACTIVATION`([`0`][7162] [`1`][b602])，但也可以是例如 [`->BATCH-NORMALIZED-ACTIVATION`][0f0f]。一般来说，像上述两种具有签名（`INPUTS` [`&KEY`][4336] `NAME` `SIZE` `PEEPHOLES`）的函数都可以作为 `ACTIVATION-FN` 传递。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-SEQ-BARRIER-LUMP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n##### 序列屏障小块\n\n\u003Ca id=\"x-28MGL-BP-3A--3ESEQ-BARRIER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **->SEQ-BARRIER** *[LUMP][c1ac]*\n\n    在 [`RNN`][b0f3] 中，处理批次中的不同条带（实例）可能需要不同的时间步数，因此条带 0 的最终状态可能位于某个小块 L 的第 7 个时间步，而条带 1 的最终状态则可能位于另一个小块 L 的第 42 个时间步。\n\n    该小块将来自不同小块的每个条带状态复制到一个单一的小块中，以便后续处理能够继续进行（通常是在 `RNN` 嵌入到其他网络中时）。\n\n    此小块的 [`SIZE`][019f] 会自动设置为 `(FUNCALL SEQ-ELT-FN 0)` 返回的小块的大小。\n\n\u003Ca id=\"x-28MGL-BP-3ASEQ-ELT-FN-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESEQ-BARRIER-29-29\">\u003C\u002Fa>\n\n- [读取器] **SEQ-ELT-FN** *[->SEQ-BARRIER][4e91] (:SEQ-ELT-FN)*\n\n    一个接受 `INDEX` 参数的函数，用于返回序列中具有该索引的小块。\n\n\u003Ca id=\"x-28MGL-BP-3ASEQ-INDICES-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESEQ-BARRIER-29-29\">\u003C\u002Fa>\n\n- [访问者] **SEQ-INDICES** *[->SEQ-BARRIER][4e91]*\n\n    一个长度等于批次大小的索引序列。索引 `I` 处的元素是要传递给 [`SEQ-ELT-FN`][29c0] 的索引，以找到其条带 `I` 被复制到此小块条带 `I` 中的小块。\n\n\u003Ca id=\"x-28MGL-BP-3A-40MGL-BP-UTILITIES-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n\n\n### 11.5 工具函数\n\n\u003Ca id=\"x-28MGL-BP-3ARENORMALIZE-ACTIVATIONS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **RENORMALIZE-ACTIVATIONS** *->V\\*M-LUMPS L2-UPPER-BOUND*\n\n    如果某个单元的输入权重向量的 l2 范数大于 `L2-UPPER-BOUND`，则将其重新归一化至 `L2-UPPER-BOUND`。假设 `->V*M-LUMPS` 
列表最终会被馈送到同一个小块中。\n\n    使用方法是将激活小块分组到同一个 GD-OPTIMIZER 中，并将此函数挂载到 [`AFTER-UPDATE-HOOK`][124f] 上，后者可以通过 [`ARRANGE-FOR-RENORMALIZING-ACTIVATIONS`][8b55] 自动完成。\n\n    参见“通过防止特征检测器的协同适应来改进神经网络”（Hinton，2012），\u003Chttp:\u002F\u002Farxiv.org\u002Fpdf\u002F1207.0580.pdf>。\n\n\u003Ca id=\"x-28MGL-BP-3AARRANGE-FOR-RENORMALIZING-ACTIVATIONS-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **ARRANGE-FOR-RENORMALIZING-ACTIVATIONS** *BPN OPTIMIZER L2-UPPER-BOUND*\n\n    通过将一个 lambda 函数推送到 `OPTIMIZER` 的 [`AFTER-UPDATE-HOOK`][124f]，安排对 `OPTIMIZER` 训练的所有权重进行重新归一化（如 [`RENORMALIZE-ACTIVATIONS`][c7fa] 所述，使用 `L2-UPPER-BOUND`）。\n\n    假设这些权重要么属于某个激活小块，要么只是简单地添加到激活中（即它们是偏置项）。\n\n\u003Ca id=\"x-28MGL-3A-40MGL-BM-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 12 玻尔兹曼机\n\n\n\u003Ca id=\"x-28MGL-3A-40MGL-GP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 13 高斯过程\n\n\n\u003Ca id=\"x-28MGL-NLP-3A-40MGL-NLP-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 14 自然语言处理\n\n###### \\[在包 MGL-NLP 中\\]\n目前这只是一个包含几个实用工具的简单模块，未来可能会发展成一套更完善的自然语言处理工具集。\n\n\u003Ca id=\"x-28MGL-NLP-3AMAKE-N-GRAM-MAPPEE-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **MAKE-N-GRAM-MAPPEE** *FUNCTION N*\n\n    创建一个适用于映射函数作为参数的单参数函数。它会以每 `N` 个元素为一组调用 `FUNCTION`。\n    \n    ```common-lisp\n    (map nil (make-n-gram-mappee #'print 3) '(a b c d e))\n    ..\n    .. (A B C) \n    .. (B C D) \n    .. (C D E) \n    ```\n\n\u003Ca id=\"x-28MGL-NLP-3ABLEU-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **BLEU** *CANDIDATES REFERENCES &KEY CANDIDATE-KEY REFERENCE-KEY (N 4)*\n\n    计算双语语料库的 [BLEU 分数](http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FBLEU)。BLEU 用于衡量机器翻译相对于人工参考译文的质量。\n    \n    `CANDIDATES`（由 `CANDIDATE-KEY` 索引）和 `REFERENCES`（由 `REFERENCE-KEY` 索引）都是句子序列。句子是由单词组成的序列。单词通过 [`EQUAL`][3fb5] 进行比较，可以是任何类型的对象（不一定是字符串）。\n    \n    目前尚不支持多条参考译文。`N` 决定了要考虑的最大 n-gram 长度。\n    \n    第一个返回值是 `BLEU` 分数（介于 0 和 1 之间，不是百分比）。第二个值是 `CANDIDATES` 的总长度除以 `REFERENCES` 的总长度（如果分母为 0，则返回 `NIL`）。第三个值是一个 n-gram 精确率列表（同样介于 0 和 1 之间或 `NIL`），每个元素对应 \\[1..`N`\\] 中的一个值。\n    \n    这基本上是对 [multi-bleu.perl](https:\u002F\u002Fgithub.com\u002Fmoses-smt\u002Fmosesdecoder\u002Fblob\u002Fmaster\u002Fscripts\u002Fgeneric\u002Fmulti-bleu.perl) 的重新实现。\n    \n    ```common-lisp\n    (bleu '((1 2 3 4) (a b))\n          '((1 2 3 4) (1 2)))\n    => 0.8408964\n    => 1\n    => (;; 1-gram precision: 4\u002F6\n        2\u002F3\n        ;; 2-gram precision: 3\u002F4\n        3\u002F4\n        ;; 3-gram precision: 2\u002F2\n        1\n        ;; 4-gram precision: 1\u002F1\n        1)\n    ```\n\n\u003Ca id=\"x-28MGL-NLP-3A-40MGL-NLP-BAG-OF-WORDS-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n### 14.1 词袋模型\n\n\u003Ca id=\"x-28MGL-NLP-3ABAG-OF-WORDS-ENCODER-20CLASS-29\">\u003C\u002Fa>\n\n- [类] **BAG-OF-WORDS-ENCODER**\n\n    使用稀疏向量对文档的所有特征进行编码。从 `MAPPER` 获取文档的特征，并使用 `FEATURE-ENCODER` 对每个特征进行编码。如果该特征未被使用，`FEATURE-ENCODER` 可能会返回 `NIL`。结果是一个由编码后的特征与值组成的 cons 列表。编码后的特征在向量中是唯一的（根据 [`ENCODED-FEATURE-TEST`][21ca]），但顺序并不固定。\n    \n    根据 `KIND` 的不同，值的计算方式也有所不同：\n    \n    - 对于 `:FREQUENCY`，它是相应特征在 `DOCUMENT` 中出现的次数。\n    \n    - 对于 `:BINARY`，其值始终为 1。\n    \n    - 对于 `:NORMALIZED-FREQUENCY` 和 `:NORMALIZED-BINARY`，它们与非归一化的版本类似，只是在最终步骤中，组装好的稀疏向量中的值会被归一化，使其总和为 1。\n    \n    - 最后，`:COMPACTED-BINARY` 类似于 `:BINARY`，但返回值不是 cons 列表，而是一个元素类型为 [`ENCODED-FEATURE-TYPE`][d443] 的向量。\n    \n    ```common-lisp\n    (let* ((feature-indexer\n             (make-indexer\n              (alexandria:alist-hash-table '((\"I\" . 3) (\"me\" . 2) (\"mine\" . 
1)))\n              2))\n           (bag-of-words-encoder\n             (make-instance 'bag-of-words-encoder\n                            :feature-encoder feature-indexer\n                            :feature-mapper (lambda (fn document)\n                                              (map nil fn document))\n                            :kind :frequency)))\n      (encode bag-of-words-encoder '(\"All\" \"through\" \"day\" \"I\" \"me\" \"mine\"\n                                     \"I\" \"me\" \"mine\" \"I\" \"me\" \"mine\")))\n    => #((0 . 3.0d0) (1 . 3.0d0))\n    ```\n\n\u003Ca id=\"x-28MGL-NLP-3AFEATURE-ENCODER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [读取器] **FEATURE-ENCODER** *[BAG-OF-WORDS-ENCODER][cbb4] (:FEATURE-ENCODER)*\n\n\u003Ca id=\"x-28MGL-NLP-3AFEATURE-MAPPER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [读取器] **FEATURE-MAPPER** *[BAG-OF-WORDS-ENCODER][cbb4] (:FEATURE-MAPPER)*\n\n\u003Ca id=\"x-28MGL-NLP-3AENCODED-FEATURE-TEST-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [读取器] **ENCODED-FEATURE-TEST** *[BAG-OF-WORDS-ENCODER][cbb4] (:ENCODED-FEATURE-TEST = #'EQL)*\n\n\u003Ca id=\"x-28MGL-NLP-3AENCODED-FEATURE-TYPE-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [读取器] **ENCODED-FEATURE-TYPE** *[BAG-OF-WORDS-ENCODER][cbb4] (:ENCODED-FEATURE-TYPE = T)*\n\n\u003Ca id=\"x-28MGL-NLP-3ABAG-OF-WORDS-KIND-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29\">\u003C\u002Fa>\n\n- [读取器] **BAG-OF-WORDS-KIND** *[BAG-OF-WORDS-ENCODER][cbb4] (:KIND = :BINARY)*\n\n\u003Ca id=\"x-28MGL-LOG-3A-40MGL-LOG-20MGL-PAX-3ASECTION-29\">\u003C\u002Fa>\n\n## 15 日志记录\n\n###### \\[在包 MGL-LOG 中\\]\n\u003Ca id=\"x-28MGL-LOG-3ALOG-MSG-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **LOG-MSG** *FORMAT &REST ARGS*\n\n\u003Ca id=\"x-28MGL-LOG-3AWITH-LOGGING-ENTRY-20MGL-PAX-3AMACRO-29\">\u003C\u002Fa>\n\n- [宏] **WITH-LOGGING-ENTRY** *(STREAM) &BODY BODY*\n\n\u003Ca id=\"x-28MGL-LOG-3A-2ALOG-FILE-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*LOG-FILE\\*** *NIL*\n\n\u003Ca id=\"x-28MGL-LOG-3A-2ALOG-TIME-2A-20VARIABLE-29\">\u003C\u002Fa>\n\n- [变量] **\\*LOG-TIME\\*** *T*\n\n\u003Ca id=\"x-28MGL-LOG-3ALOG-MAT-ROOM-20FUNCTION-29\">\u003C\u002Fa>\n\n- [函数] **LOG-MAT-ROOM** *&KEY (VERBOSE T)*\n\n[0072]: #x-28MGL-OPT-3AON-OPTIMIZATION-FINISHED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:ON-OPTIMIZATION-FINISHED (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [0078]: #x-28MGL-CORE-3AINSTANCE-TO-EXECUTOR-PARAMETERS-20GENERIC-FUNCTION-29 \"MGL-CORE:INSTANCE-TO-EXECUTOR-PARAMETERS GENERIC-FUNCTION\"\n  [00a0]: #x-28MGL-BP-3ABP-LEARNER-20CLASS-29 \"MGL-BP:BP-LEARNER CLASS\"\n  [00ee]: #x-28MGL-3A-40MGL-LINKS-20MGL-PAX-3ASECTION-29 \"Links\"\n  [011d]: #x-28MGL-GD-3AMEAN-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29 \"MGL-GD:MEAN-DECAY (MGL-PAX:ACCESSOR MGL-GD:ADAM-OPTIMIZER)\"\n  [019f]: #x-28MGL-COMMON-3ASIZE-20GENERIC-FUNCTION-29 \"MGL-COMMON:SIZE GENERIC-FUNCTION\"\n  [038e]: #x-28MGL-BP-3AIMPORTANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ELOSS-29-29 \"MGL-BP:IMPORTANCE (MGL-PAX:ACCESSOR MGL-BP:->LOSS)\"\n  [03c7]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_funcal.htm \"FUNCALL (MGL-PAX:CLHS FUNCTION)\"\n  [0784]: #x-28MGL-NLP-3A-40MGL-NLP-BAG-OF-WORDS-20MGL-PAX-3ASECTION-29 \"Bag of Words\"\n  [07c7]: #x-28MGL-CORE-3A-40MGL-CONFUSION-MATRIX-20MGL-PAX-3ASECTION-29 \"Confusion 
Matrices\"\n  [07fb]: #x-28MGL-CORE-3AN-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29 \"MGL-CORE:N-STRIPES (MGL-PAX:READER MGL-BP:BPN)\"\n  [084d]: #x-28MGL-BP-3AMAX-LAG-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:MAX-LAG (MGL-PAX:READER MGL-BP:RNN)\"\n  [08ac]: #x-28MGL-DATASET-3AGENERATOR-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29 \"MGL-DATASET:GENERATOR (MGL-PAX:READER MGL-DATASET:FUNCTION-SAMPLER)\"\n  [0900]: #x-28MGL-GD-3AVARIANCE-DECAY-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3AADAM-OPTIMIZER-29-29 \"MGL-GD:VARIANCE-DECAY (MGL-PAX:ACCESSOR MGL-GD:ADAM-OPTIMIZER)\"\n  [0933]: #x-28MGL-BP-3AMONITOR-BPN-RESULTS-20FUNCTION-29 \"MGL-BP:MONITOR-BPN-RESULTS FUNCTION\"\n  [09ed]: #x-28MGL-GD-3ALEARNING-RATE-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 \"MGL-GD:LEARNING-RATE (MGL-PAX:ACCESSOR MGL-GD::GD-OPTIMIZER)\"\n  [0ba7]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MEASURER-20MGL-PAX-3ASECTION-29 \"Classification Measurers\"\n  [0c91]: #x-28MGL-GD-3A-40MGL-GD-NORMALIZED-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Normalized Batch Optimizer\"\n  [0c9e]: #x-28MGL-CORE-3ASET-INPUT-20GENERIC-FUNCTION-29 \"MGL-CORE:SET-INPUT GENERIC-FUNCTION\"\n  [0d6a]: #x-28MGL-NLP-3A-40MGL-NLP-20MGL-PAX-3ASECTION-29 \"Natural Language Processing\"\n  [0d82]: #x-28MGL-BP-3A-40MGL-BP-TRAINING-20MGL-PAX-3ASECTION-29 \"Training\"\n  [0f0f]: #x-28MGL-BP-3A--3EBATCH-NORMALIZED-ACTIVATION-20FUNCTION-29 \"MGL-BP:->BATCH-NORMALIZED-ACTIVATION FUNCTION\"\n  [0f83]: #x-28MGL-CORE-3ACONCAT-COUNTER-20CLASS-29 \"MGL-CORE:CONCAT-COUNTER CLASS\"\n  [109e]: #x-28MGL-DATASET-3A-40MGL-DATASET-20MGL-PAX-3ASECTION-29 \"Datasets\"\n  [10e7]: #x-28MGL-GD-3A-40MGL-GD-20MGL-PAX-3ASECTION-29 \"Gradient Descent\"\n  [115e]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_max_m.htm \"MIN (MGL-PAX:CLHS FUNCTION)\"\n  [124f]: #x-28MGL-GD-3AAFTER-UPDATE-HOOK-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 \"MGL-GD:AFTER-UPDATE-HOOK (MGL-PAX:ACCESSOR MGL-GD::GD-OPTIMIZER)\"\n  [1339]: #x-28MGL-CORE-3ADECODE-20GENERIC-FUNCTION-29 \"MGL-CORE:DECODE GENERIC-FUNCTION\"\n  [1355]: #x-28MGL-BP-3A-40MGL-FNN-20MGL-PAX-3ASECTION-29 \"Feed-Forward Nets\"\n  [16c4]: #x-28MGL-CORE-3AMAX-N-STRIPES-20GENERIC-FUNCTION-29 \"MGL-CORE:MAX-N-STRIPES GENERIC-FUNCTION\"\n  [175f]: #x-28MGL-BP-3AFIND-CLUMP-20FUNCTION-29 \"MGL-BP:FIND-CLUMP FUNCTION\"\n  [1a52]: #x-28MGL-BP-3AINPUT-ROW-INDICES-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EEMBEDDING-29-29 \"MGL-BP:INPUT-ROW-INDICES (MGL-PAX:ACCESSOR MGL-BP:->EMBEDDING)\"\n  [1a61]: #x-28MGL-DIFFUN-3ADIFFUN-20CLASS-29 \"MGL-DIFFUN:DIFFUN CLASS\"\n  [1b5e]: #x-28MGL-CORE-3A-40MGL-FEATURE-SELECTION-20MGL-PAX-3ASECTION-29 \"Feature Selection\"\n  [1beb]: #x-28MGL-CORE-3AENCODER-2FDECODER-20CLASS-29 \"MGL-CORE:ENCODER\u002FDECODER CLASS\"\n  [1cab]: #x-28MGL-DATASET-3AMAX-N-SAMPLES-20-28MGL-PAX-3AACCESSOR-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29 \"MGL-DATASET:MAX-N-SAMPLES (MGL-PAX:ACCESSOR MGL-DATASET:FUNCTION-SAMPLER)\"\n  [207b]: #x-28MGL-BP-3A-40MGL-BP-INPUTS-20MGL-PAX-3ASECTION-29 \"Inputs\"\n  [20ca]: #x-28MGL-OPT-3ADO-GRADIENT-SINK-20MGL-PAX-3AMACRO-29 \"MGL-OPT:DO-GRADIENT-SINK MGL-PAX:MACRO\"\n  [20e8]: #x-28MGL-CORE-3ACOUNTER-VALUES-20GENERIC-FUNCTION-29 \"MGL-CORE:COUNTER-VALUES GENERIC-FUNCTION\"\n  [2171]: #x-28MGL-BP-3A--3ELOSS-20CLASS-29 \"MGL-BP:->LOSS CLASS\"\n  [21ca]: #x-28MGL-NLP-3AENCODED-FEATURE-TEST-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29 \"MGL-NLP:ENCODED-FEATURE-TEST (MGL-PAX:READER 
MGL-NLP:BAG-OF-WORDS-ENCODER)\"\n  [2312]: #x-28MGL-OPT-3AMAP-SEGMENTS-20GENERIC-FUNCTION-29 \"MGL-OPT:MAP-SEGMENTS GENERIC-FUNCTION\"\n  [2481]: #x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EDROPOUT-29-29 \"MGL-BP:DROPOUT (MGL-PAX:ACCESSOR MGL-BP:->DROPOUT)\"\n  [24aa]: #x-28MGL-CORE-3A-40MGL-FEATURE-ENCODING-20MGL-PAX-3ASECTION-29 \"Feature Encoding\"\n  [25fd]: #x-28MGL-GD-3A-40MGL-GD-SGD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"SGD Optimizer\"\n  [2823]: #x-28MGL-BP-3A--3ELSTM-20FUNCTION-29 \"MGL-BP:->LSTM FUNCTION\"\n  [2981]: #x-28MGL-DIFFUN-3A-40MGL-DIFFUN-20MGL-PAX-3ASECTION-29 \"Differentiable Functions\"\n  [29a1]: #x-28MGL-CORE-3A-40MGL-PERSISTENCE-20MGL-PAX-3ASECTION-29 \"Persistence\"\n  [29c0]: #x-28MGL-BP-3ASEQ-ELT-FN-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESEQ-BARRIER-29-29 \"MGL-BP:SEQ-ELT-FN (MGL-PAX:READER MGL-BP:->SEQ-BARRIER)\"\n  [2a2f]: #x-28MGL-GD-3ASGD-OPTIMIZER-20CLASS-29 \"MGL-GD:SGD-OPTIMIZER CLASS\"\n  [2aa3]: #x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-2A-20GENERIC-FUNCTION-29 \"MGL-CORE:MAKE-CLASSIFICATION-ACCURACY-MONITORS* GENERIC-FUNCTION\"\n  [2c39]: #x-28MGL-GD-3A-40MGL-GD-BATCH-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Batch Based Optimizers\"\n  [2e8b]: #x-28MGL-CORE-3AWITH-PADDED-ATTRIBUTE-PRINTING-20MGL-PAX-3AMACRO-29 \"MGL-CORE:WITH-PADDED-ATTRIBUTE-PRINTING MGL-PAX:MACRO\"\n  [2ecb]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_concat.htm \"CONCATENATE (MGL-PAX:CLHS FUNCTION)\"\n  [2f78]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_length.htm \"LENGTH (MGL-PAX:CLHS FUNCTION)\"\n  [2fe9]: #x-28MGL-BP-3A-40MGL-BP-ARITHMETIC-20MGL-PAX-3ASECTION-29 \"Arithmetic\"\n  [3045]: #x-28MGL-BP-3A-40MGL-BP-LUMP-20MGL-PAX-3ASECTION-29 \"Lump Base Class\"\n  [3155]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_countc.htm \"COUNT (MGL-PAX:CLHS FUNCTION)\"\n  [31ed]: #x-28MGL-CORE-3ALABEL-INDICES-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDICES GENERIC-FUNCTION\"\n  [331b]: #x-28MGL-CORE-3AMAKE-EXECUTOR-WITH-PARAMETERS-20GENERIC-FUNCTION-29 \"MGL-CORE:MAKE-EXECUTOR-WITH-PARAMETERS GENERIC-FUNCTION\"\n  [332e]: #x-28MGL-3A-40MGL-BM-20MGL-PAX-3ASECTION-29 \"Boltzmann Machines\"\n  [3815]: #x-28MGL-OPT-3AMAKE-COST-MONITORS-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:MAKE-COST-MONITORS* GENERIC-FUNCTION\"\n  [3ce0]: #x-28MGL-GD-3ASEGMENTED-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:SEGMENTED-GD-OPTIMIZER CLASS\"\n  [3f2e]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_pr_obj.htm \"PRINT-OBJECT (MGL-PAX:CLHS GENERIC-FUNCTION)\"\n  [3f42]: #x-28MGL-LOG-3A-40MGL-LOG-20MGL-PAX-3ASECTION-29 \"Logging\"\n  [3f9f]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CV-BAGGING-20MGL-PAX-3ASECTION-29 \"CV Bagging\"\n  [3fb5]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_equal.htm \"EQUAL (MGL-PAX:CLHS FUNCTION)\"\n  [401f]: #x-28MGL-DATASET-3AFINISHEDP-20GENERIC-FUNCTION-29 \"MGL-DATASET:FINISHEDP GENERIC-FUNCTION\"\n  [404c]: #x-28MGL-BP-3AVARIANCE-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29 \"MGL-BP:VARIANCE (MGL-PAX:ACCESSOR MGL-BP:->GAUSSIAN-RANDOM)\"\n  [410c]: #x-28MGL-COMMON-3ACOST-20GENERIC-FUNCTION-29 \"MGL-COMMON:COST GENERIC-FUNCTION\"\n  [418a]: #x-28MGL-OPT-3ASEGMENT-SET-20CLASS-29 \"MGL-OPT:SEGMENT-SET CLASS\"\n  [430d]: #x-28MGL-CORE-3ACLASSIFICATION-ACCURACY-COUNTER-20CLASS-29 \"MGL-CORE:CLASSIFICATION-ACCURACY-COUNTER CLASS\"\n  [4336]: 
http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002F03_da.htm '\"3.4.1\" (MGL-PAX:CLHS MGL-PAX:SECTION)'\n  [441b]: #x-28MGL-BP-3A--3EDROPOUT-20CLASS-29 \"MGL-BP:->DROPOUT CLASS\"\n  [443c]: #x-28MGL-3A-40MGL-CODE-ORGANIZATION-20MGL-PAX-3ASECTION-29 \"Code Organization\"\n  [4476]: #x-28MGL-CORE-3A-40MGL-EXECUTORS-20MGL-PAX-3ASECTION-29 \"Executors\"\n  [4528]: #x-28MGL-OPT-3AMONITOR-OPTIMIZATION-PERIODICALLY-20FUNCTION-29 \"MGL-OPT:MONITOR-OPTIMIZATION-PERIODICALLY FUNCTION\"\n  [46a4]: #x-28MGL-OPT-3AMINIMIZE-20FUNCTION-29 \"MGL-OPT:MINIMIZE FUNCTION\"\n  [46c2]: #x-28MGL-OPT-3AMAKE-COST-MONITORS-20FUNCTION-29 \"MGL-OPT:MAKE-COST-MONITORS FUNCTION\"\n  [46c4]: #x-28MGL-BP-3APOPULATION-DECAY-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-BP:POPULATION-DECAY (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [49f5]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Fs_let_l.htm \"LET* (MGL-PAX:CLHS MGL-PAX:MACRO)\"\n  [4a8e]: #x-28MGL-3A-40MGL-GLOSSARY-20MGL-PAX-3ASECTION-29 \"Glossary\"\n  [4bf1]: #x-28MGL-OPT-3AACCUMULATE-GRADIENTS-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:ACCUMULATE-GRADIENTS* GENERIC-FUNCTION\"\n  [4c73]: #x-28MGL-OPT-3AN-INSTANCES-20-28MGL-PAX-3AREADER-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:N-INSTANCES (MGL-PAX:READER MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [4e91]: #x-28MGL-BP-3A--3ESEQ-BARRIER-20CLASS-29 \"MGL-BP:->SEQ-BARRIER CLASS\"\n  [4f0b]: #x-28MGL-OPT-3AON-N-INSTANCES-CHANGED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:ON-N-INSTANCES-CHANGED (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [4f0e]: #x-28MGL-BP-3A-40MGL-BP-MONITORING-20MGL-PAX-3ASECTION-29 \"Monitoring\"\n  [4ffb]: #x-28MGL-CG-3ACG-20FUNCTION-29 \"MGL-CG:CG FUNCTION\"\n  [5187]: #x-28MGL-BP-3ABPN-20CLASS-29 \"MGL-BP:BPN CLASS\"\n  [51d5]: #x-28MGL-BP-3AWARP-LENGTH-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:WARP-LENGTH (MGL-PAX:READER MGL-BP:RNN)\"\n  [51f7]: #x-28MGL-BP-3A-40MGL-BP-RNN-OPERATIONS-20MGL-PAX-3ASECTION-29 \"Operations for `RNN`s\"\n  [5293]: #x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FCONT-20FUNCTION-29 \"MGL-RESAMPLE:SPLIT-FOLD\u002FCONT FUNCTION\"\n  [5309]: #x-28MGL-BP-3A--3ETANH-20CLASS-29 \"MGL-BP:->TANH CLASS\"\n  [533e]: #x-28MGL-BP-3ATRANSPOSE-WEIGHTS-P-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EV-2AM-29-29 \"MGL-BP:TRANSPOSE-WEIGHTS-P (MGL-PAX:READER MGL-BP:->V*M)\"\n  [5611]: #x-28MGL-GD-3AMOMENTUM-TYPE-20-28MGL-PAX-3AREADER-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 \"MGL-GD:MOMENTUM-TYPE (MGL-PAX:READER MGL-GD::GD-OPTIMIZER)\"\n  [56b2]: #x-28MGL-BP-3A-40MGL-BP-OVERVIEW-20MGL-PAX-3ASECTION-29 \"Backprop Overview\"\n  [5748]: #x-28MGL-OPT-3A-40MGL-OPT-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Implementing Optimizers\"\n  [5752]: #x-28MGL-CORE-3ACOUNTER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29 \"MGL-CORE:COUNTER (MGL-PAX:READER MGL-CORE:MONITOR)\"\n  [5842]: #x-28MGL-COMMON-3ANAME-20GENERIC-FUNCTION-29 \"MGL-COMMON:NAME GENERIC-FUNCTION\"\n  [5979]: #x-28MGL-CORE-3ABASIC-COUNTER-20CLASS-29 \"MGL-CORE:BASIC-COUNTER CLASS\"\n  [59c2]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-MISC-20MGL-PAX-3ASECTION-29 \"Miscellaneous Operations\"\n  [59dd]: #x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EMAX-29-29 \"MGL-COMMON:GROUP-SIZE (MGL-PAX:READER MGL-BP:->MAX)\"\n  [5a43]: #x-28MGL-GD-3APER-WEIGHT-BATCH-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:PER-WEIGHT-BATCH-GD-OPTIMIZER CLASS\"\n  [5a82]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_eq.htm 
\"EQ (MGL-PAX:CLHS FUNCTION)\"\n  [5bd4]: #x-28MGL-BP-3ABACKWARD-20GENERIC-FUNCTION-29 \"MGL-BP:BACKWARD GENERIC-FUNCTION\"\n  [5cd8]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_numera.htm \"DENOMINATOR (MGL-PAX:CLHS FUNCTION)\"\n  [5d86]: #x-28MGL-BP-3A-40MGL-BP-ACTIVATION-FUNCTIONS-20MGL-PAX-3ASECTION-29 \"Activation Functions\"\n  [5ded]: #x-28MGL-RESAMPLE-3ASPLIT-FOLD-2FMOD-20FUNCTION-29 \"MGL-RESAMPLE:SPLIT-FOLD\u002FMOD FUNCTION\"\n  [5fd4]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_eql.htm \"EQL (MGL-PAX:CLHS TYPE)\"\n  [5fdc]: #x-28MGL-CORE-3AMAP-BATCHES-FOR-MODEL-20FUNCTION-29 \"MGL-CORE:MAP-BATCHES-FOR-MODEL FUNCTION\"\n  [6004]: #x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-20FUNCTION-29 \"MGL-CORE:MAKE-CROSS-ENTROPY-MONITORS FUNCTION\"\n  [6021]: #x-28MGL-BP-3A--3EMAX-CHANNEL-20CLASS-29 \"MGL-BP:->MAX-CHANNEL CLASS\"\n  [606c]: #x-28MGL-BP-3ABUILD-FNN-20MGL-PAX-3AMACRO-29 \"MGL-BP:BUILD-FNN MGL-PAX:MACRO\"\n  [6098]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_vector.htm \"VECTOR (MGL-PAX:CLHS CLASS)\"\n  [60b3]: #x-28MGL-3A-40MGL-GP-20MGL-PAX-3ASECTION-29 \"Gaussian Processes\"\n  [60d2]: #x-28MGL-CORE-3ACONFUSION-MATRIX-20CLASS-29 \"MGL-CORE:CONFUSION-MATRIX CLASS\"\n  [60e3]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-20MGL-PAX-3ASECTION-29 \"Classification\"\n  [6202]: #x-28MGL-CORE-3AMONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ABP-LEARNER-29-29 \"MGL-CORE:MONITORS (MGL-PAX:ACCESSOR MGL-BP:BP-LEARNER)\"\n  [627a]: #x-28MGL-RESAMPLE-3AFRACTURE-STRATIFIED-20FUNCTION-29 \"MGL-RESAMPLE:FRACTURE-STRATIFIED FUNCTION\"\n  [62de]: #x-28MGL-CORE-3AADD-TO-COUNTER-20GENERIC-FUNCTION-29 \"MGL-CORE:ADD-TO-COUNTER GENERIC-FUNCTION\"\n  [6547]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_open.htm \"OPEN (MGL-PAX:CLHS FUNCTION)\"\n  [6598]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-COUNTER-20MGL-PAX-3ASECTION-29 \"Classification Counters\"\n  [6651]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_descri.htm \"DESCRIBE (MGL-PAX:CLHS FUNCTION)\"\n  [676d]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_wr_pr.htm \"PRINC (MGL-PAX:CLHS FUNCTION)\"\n  [6872]: #x-28MGL-BP-3A-40MGL-BP-WEIGHT-LUMP-20MGL-PAX-3ASECTION-29 \"Weight Lump\"\n  [6a6f]: #x-28MGL-OPT-3A-40MGL-OPT-EXTENSION-API-20MGL-PAX-3ASECTION-29 \"Extension API\"\n  [6b38]: #x-28MGL-BP-3A-40MGL-FNN-TUTORIAL-20MGL-PAX-3ASECTION-29 \"`FNN` Tutorial\"\n  [6bd7]: #x-28MGL-CORE-3ALOAD-STATE-20FUNCTION-29 \"MGL-CORE:LOAD-STATE FUNCTION\"\n  [6d31]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_vector.htm \"VECTOR (MGL-PAX:CLHS FUNCTION)\"\n  [6d9f]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_list_.htm \"LIST (MGL-PAX:CLHS FUNCTION)\"\n  [6da5]: #x-28MGL-CORE-3A-40MGL-ATTRIBUTES-20MGL-PAX-3ASECTION-29 \"Attributes\"\n  [6e96]: #x-28MGL-BP-3ATIME-STEP-20FUNCTION-29 \"MGL-BP:TIME-STEP FUNCTION\"\n  [6f82]: #x-28MGL-RESAMPLE-3AFRACTURE-20FUNCTION-29 \"MGL-RESAMPLE:FRACTURE FUNCTION\"\n  [7068]: #x-28MGL-CORE-3AMONITOR-20CLASS-29 \"MGL-CORE:MONITOR CLASS\"\n  [715c]: #x-28MGL-DATASET-3AFUNCTION-SAMPLER-20CLASS-29 \"MGL-DATASET:FUNCTION-SAMPLER CLASS\"\n  [7162]: #x-28MGL-BP-3A--3EACTIVATION-20CLASS-29 \"MGL-BP:->ACTIVATION CLASS\"\n  [71f9]: #x-28MGL-BP-3ASTEP-MONITORS-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29 
\"MGL-BP:STEP-MONITORS (MGL-PAX:ACCESSOR MGL-BP:RNN)\"\n  [764b]: #x-28MGL-BP-3ABUILD-RNN-20MGL-PAX-3AMACRO-29 \"MGL-BP:BUILD-RNN MGL-PAX:MACRO\"\n  [765c]: #x-28MGL-DATASET-3AMAP-DATASETS-20FUNCTION-29 \"MGL-DATASET:MAP-DATASETS FUNCTION\"\n  [779d]: #x-28MGL-OPT-3A-40MGL-OPT-ITERATIVE-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Iterative Optimizer\"\n  [7960]: #x-28MGL-BP-3ASHIFT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-BP:SHIFT (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [79d8]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_list.htm \"LIST (MGL-PAX:CLHS CLASS)\"\n  [7a28]: #x-28MGL-BP-3A-40MGL-BP-EXTENSION-API-20MGL-PAX-3ASECTION-29 \"Clump API\"\n  [7bc3]: #x-28MGL-DATASET-3A-40MGL-SAMPLER-20MGL-PAX-3ASECTION-29 \"Samplers\"\n  [7c2f]: #x-28MGL-OPT-3AINITIALIZE-OPTIMIZER-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:INITIALIZE-OPTIMIZER* GENERIC-FUNCTION\"\n  [7ee3]: #x-28MGL-CORE-3A-40MGL-COUNTER-CLASSES-20MGL-PAX-3ASECTION-29 \"Counter classes\"\n  [80e2]: #x-28MGL-BP-3AVARIANCE-FOR-PREDICTION-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29 \"MGL-BP:VARIANCE-FOR-PREDICTION (MGL-PAX:ACCESSOR MGL-BP:->GAUSSIAN-RANDOM)\"\n  [80fa]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_mod_r.htm \"MOD (MGL-PAX:CLHS FUNCTION)\"\n  [8148]: #x-28MGL-CORE-3AREAD-STATE-20FUNCTION-29 \"MGL-CORE:READ-STATE FUNCTION\"\n  [82d8]: #x-28MGL-BP-3AADD-CLUMP-20FUNCTION-29 \"MGL-BP:ADD-CLUMP FUNCTION\"\n  [83e6]: #x-28MGL-CG-3A-40MGL-CG-20MGL-PAX-3ASECTION-29 \"Conjugate Gradient\"\n  [83f9]: #x-28MGL-BP-3A--3ESIGMOID-20CLASS-29 \"MGL-BP:->SIGMOID CLASS\"\n  [85d3]: #x-28MGL-COMMON-3ASIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29 \"MGL-COMMON:SIZE (MGL-PAX:READER MGL-BP:LUMP)\"\n  [85d34]: #x-28MGL-BP-3A--3ESOFTMAX-XE-LOSS-20CLASS-29 \"MGL-BP:->SOFTMAX-XE-LOSS CLASS\"\n  [8611]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-SHUFFLING-20MGL-PAX-3ASECTION-29 \"Shuffling\"\n  [86fd]: #x-28MGL-RESAMPLE-3ASAMPLE-FROM-20FUNCTION-29 \"MGL-RESAMPLE:SAMPLE-FROM FUNCTION\"\n  [871e]: #x-28MGL-BP-3A-40MGL-RNN-20MGL-PAX-3ASECTION-29 \"Recurrent Neural Nets\"\n  [876d]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_ensu_1.htm \"ENSURE-DIRECTORIES-EXIST (MGL-PAX:CLHS FUNCTION)\"\n  [8788]: #x-28MGL-BP-3A-40MGL-BP-20MGL-PAX-3ASECTION-29 \"Backpropagation Neural Networks\"\n  [8970]: #x-28MGL-COMMON-3ASCALE-20GENERIC-FUNCTION-29 \"MGL-COMMON:SCALE GENERIC-FUNCTION\"\n  [8ae0]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_identi.htm \"IDENTITY (MGL-PAX:CLHS FUNCTION)\"\n  [8af5]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_numera.htm \"NUMERATOR (MGL-PAX:CLHS FUNCTION)\"\n  [8b55]: #x-28MGL-BP-3AARRANGE-FOR-RENORMALIZING-ACTIVATIONS-20FUNCTION-29 \"MGL-BP:ARRANGE-FOR-RENORMALIZING-ACTIVATIONS FUNCTION\"\n  [8cb8]: #x-28MGL-RESAMPLE-3ASPLIT-STRATIFIED-20FUNCTION-29 \"MGL-RESAMPLE:SPLIT-STRATIFIED FUNCTION\"\n  [8da0]: #x-28MGL-OPT-3AITERATIVE-OPTIMIZER-20CLASS-29 \"MGL-OPT:ITERATIVE-OPTIMIZER CLASS\"\n  [8dd7]: #x-28MGL-CORE-3AN-STRIPES-20GENERIC-FUNCTION-29 \"MGL-CORE:N-STRIPES GENERIC-FUNCTION\"\n  [8e53]: #x-28MGL-BP-3AUNFOLDER-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:UNFOLDER (MGL-PAX:READER MGL-BP:RNN)\"\n  [8f37]: #x-28MGL-CORE-3AMONITORS-20GENERIC-FUNCTION-29 \"MGL-CORE:MONITORS GENERIC-FUNCTION\"\n  [9006]: 
#x-28MGL-OPT-3ATERMINATION-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:TERMINATION (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [9105]: #x-28MGL-BP-3A-40MGL-BP-ACTIVATIONS-20MGL-PAX-3ASECTION-29 \"Activations\"\n  [911c]: #x-28MGL-CORE-3AMAKE-CLASSIFICATION-ACCURACY-MONITORS-20FUNCTION-29 \"MGL-CORE:MAKE-CLASSIFICATION-ACCURACY-MONITORS FUNCTION\"\n  [9192]: #x-28MGL-3A-40MGL-OVERVIEW-20MGL-PAX-3ASECTION-29 \"Overview\"\n  [91a3]: #x-28MGL-CORE-3AMAX-N-STRIPES-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29 \"MGL-CORE:MAX-N-STRIPES (MGL-PAX:READER MGL-BP:BPN)\"\n  [91f3]: #x-28MGL-BP-3A-40MGL-BP-UTILITIES-20MGL-PAX-3ASECTION-29 \"Utilities\"\n  [9385]: #x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTIONS-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDEX-DISTRIBUTIONS GENERIC-FUNCTION\"\n  [93a7]: #x-28MGL-BP-3A-40MGL-BP-LOSSES-20MGL-PAX-3ASECTION-29 \"Losses\"\n  [9524]: #x-28MGL-RESAMPLE-3ACROSS-VALIDATE-20FUNCTION-29 \"MGL-RESAMPLE:CROSS-VALIDATE FUNCTION\"\n  [95fe]: #x-28MGL-CORE-3AWRITE-STATE-20FUNCTION-29 \"MGL-CORE:WRITE-STATE FUNCTION\"\n  [9641]: #x-28MGL-BP-3A-40MGL-BP-LUMPS-20MGL-PAX-3ASECTION-29 \"Lumps\"\n  [96d0]: #x-28MGL-NLP-3AFEATURE-ENCODER-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29 \"MGL-NLP:FEATURE-ENCODER (MGL-PAX:READER MGL-NLP:BAG-OF-WORDS-ENCODER)\"\n  [9700]: #x-28MGL-BP-3A-40MGL-RNN-TUTORIAL-20MGL-PAX-3ASECTION-29 \"`RNN` Tutorial\"\n  [9715]: #x-28MGL-CORE-3AATTRIBUTED-20CLASS-29 \"MGL-CORE:ATTRIBUTED CLASS\"\n  [9749]: #x-28MGL-CG-3ACG-ARGS-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29 \"MGL-CG:CG-ARGS (MGL-PAX:ACCESSOR MGL-CG:CG-OPTIMIZER)\"\n  [989a]: #x-28MGL-GD-3A-40MGL-GD-SEGMENTED-GD-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Segmented GD Optimizer\"\n  [989c]: #x-28MGL-CORE-3AAPPLY-MONITORS-20FUNCTION-29 \"MGL-CORE:APPLY-MONITORS FUNCTION\"\n  [993b]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_sinh_.htm \"TANH (MGL-PAX:CLHS FUNCTION)\"\n  [9a5b]: #x-28MGL-OPT-3ASEGMENT-DERIVATIVES-20GENERIC-FUNCTION-29 \"MGL-OPT:SEGMENT-DERIVATIVES GENERIC-FUNCTION\"\n  [9a84]: #x-28MGL-BP-3A--3EMIN-20CLASS-29 \"MGL-BP:->MIN CLASS\"\n  [9d3a]: #x-28MGL-BP-3A--3ERELU-20CLASS-29 \"MGL-BP:->RELU CLASS\"\n  [9da9]: #x-28MGL-BP-3A--3EBATCH-NORMALIZED-20CLASS-29 \"MGL-BP:->BATCH-NORMALIZED CLASS\"\n  [9de4]: #x-28MGL-BP-3AFNN-20CLASS-29 \"MGL-BP:FNN CLASS\"\n  [a077]: #x-28MGL-CORE-3ACOUNTER-20GENERIC-FUNCTION-29 \"MGL-CORE:COUNTER GENERIC-FUNCTION\"\n  [a138]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Fm_setf_.htm \"SETF (MGL-PAX:CLHS MGL-PAX:MACRO)\"\n  [a210]: #x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SINK-20MGL-PAX-3ASECTION-29 \"Implementing Gradient Sinks\"\n  [a39b]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-20MGL-PAX-3ASECTION-29 \"Resampling\"\n  [a437]: #x-28MGL-COMMON-3AGROUP-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29 \"MGL-COMMON:GROUP-SIZE (MGL-PAX:READER MGL-BP:->SOFTMAX-XE-LOSS)\"\n  [a4fe]: #x-28MGL-BP-3ACLUMP-20CLASS-29 \"MGL-BP:CLUMP CLASS\"\n  [a541]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_wr_to_.htm \"PRINC-TO-STRING (MGL-PAX:CLHS FUNCTION)\"\n  [a81b]: #x-28MGL-BP-3ADERIVATIVES-20GENERIC-FUNCTION-29 \"MGL-BP:DERIVATIVES GENERIC-FUNCTION\"\n  [a884]: #x-28MGL-GD-3A-40MGL-GD-PER-WEIGHT-OPTIMIZATION-20MGL-PAX-3ASECTION-29 \"Per-weight Optimization\"\n  [aa2e]: #x-28MGL-BP-3A-40MGL-BP-STOCHASTICITY-20MGL-PAX-3ASECTION-29 \"Stochasticity\"\n  [aa86]: 
#x-28MGL-GD-3AVARIANCE-ADJUSTMENT-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-GD:VARIANCE-ADJUSTMENT (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [aabd]: #x-28MGL-OPT-3AMAP-GRADIENT-SINK-20GENERIC-FUNCTION-29 \"MGL-OPT:MAP-GRADIENT-SINK GENERIC-FUNCTION\"\n  [ab3c]: #x-28MGL-COMMON-3AWEIGHTS-20GENERIC-FUNCTION-29 \"MGL-COMMON:WEIGHTS GENERIC-FUNCTION\"\n  [ad8f]: #x-28MGL-DATASET-3A-2AINFINITELY-EMPTY-DATASET-2A-20VARIABLE-29 \"MGL-DATASET:*INFINITELY-EMPTY-DATASET* VARIABLE\"\n  [ada2]: #x-28MGL-CORE-3A-40MGL-PARAMETERIZED-EXECUTOR-CACHE-20MGL-PAX-3ASECTION-29 \"Parameterized Executor Cache\"\n  [ae23]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_seq.htm \"SEQUENCE (MGL-PAX:CLHS CLASS)\"\n  [ae3d]: #x-28MGL-OPT-3AMINIMIZE-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:MINIMIZE* GENERIC-FUNCTION\"\n  [aee6]: #x-28MGL-RESAMPLE-3ASAMPLE-STRATIFIED-20FUNCTION-29 \"MGL-RESAMPLE:SAMPLE-STRATIFIED FUNCTION\"\n  [af05]: #x-28MGL-GD-3AMOMENTUM-20-28MGL-PAX-3AACCESSOR-20MGL-GD-3A-3AGD-OPTIMIZER-29-29 \"MGL-GD:MOMENTUM (MGL-PAX:ACCESSOR MGL-GD::GD-OPTIMIZER)\"\n  [af6b]: #x-28MGL-GD-3ACLIP-L2-NORM-20FUNCTION-29 \"MGL-GD:CLIP-L2-NORM FUNCTION\"\n  [b01b]: #x-28MGL-CORE-3AMAP-OVER-EXECUTORS-20GENERIC-FUNCTION-29 \"MGL-CORE:MAP-OVER-EXECUTORS GENERIC-FUNCTION\"\n  [b0f3]: #x-28MGL-BP-3ARNN-20CLASS-29 \"MGL-BP:RNN CLASS\"\n  [b186]: #x-28MGL-CORE-3ACROSS-ENTROPY-COUNTER-20CLASS-29 \"MGL-CORE:CROSS-ENTROPY-COUNTER CLASS\"\n  [b5c7]: #x-28MGL-COMMON-3ATARGET-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3ESOFTMAX-XE-LOSS-29-29 \"MGL-COMMON:TARGET (MGL-PAX:ACCESSOR MGL-BP:->SOFTMAX-XE-LOSS)\"\n  [b602]: #x-28MGL-BP-3A--3EACTIVATION-20FUNCTION-29 \"MGL-BP:->ACTIVATION FUNCTION\"\n  [b647]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-BAGGING-20MGL-PAX-3ASECTION-29 \"Bagging\"\n  [b76f]: #x-28MGL-BP-3A--3EWEIGHT-20CLASS-29 \"MGL-BP:->WEIGHT CLASS\"\n  [ba91]: #x-28MGL-RESAMPLE-3ASTRATIFY-20FUNCTION-29 \"MGL-RESAMPLE:STRATIFY FUNCTION\"\n  [bbdf]: #x-28MGL-CORE-3AAPPLY-MONITOR-20GENERIC-FUNCTION-29 \"MGL-CORE:APPLY-MONITOR GENERIC-FUNCTION\"\n  [bc8c]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_exp_e.htm \"EXP (MGL-PAX:CLHS FUNCTION)\"\n  [bd13]: #x-28MGL-GD-3A-40MGL-GD-ADAM-OPTIMIZER-20MGL-PAX-3ASECTION-29 \"Adam Optimizer\"\n  [bdf9]: #x-28MGL-DATASET-3AN-SAMPLES-20-28MGL-PAX-3AREADER-20MGL-DATASET-3AFUNCTION-SAMPLER-29-29 \"MGL-DATASET:N-SAMPLES (MGL-PAX:READER MGL-DATASET:FUNCTION-SAMPLER)\"\n  [be8d]: #x-28MGL-DATASET-3A-40MGL-SAMPLER-FUNCTION-SAMPLER-20MGL-PAX-3ASECTION-29 \"Function Sampler\"\n  [be95]: #x-28MGL-CORE-3A-40MGL-COUNTER-20MGL-PAX-3ASECTION-29 \"Counters\"\n  [c102]: #x-28MGL-CORE-3ASAVE-STATE-20FUNCTION-29 \"MGL-CORE:SAVE-STATE FUNCTION\"\n  [c1ac]: #x-28MGL-BP-3ALUMP-20CLASS-29 \"MGL-BP:LUMP CLASS\"\n  [c1ae]: #x-28MGL-BP-3AFORWARD-20GENERIC-FUNCTION-29 \"MGL-BP:FORWARD GENERIC-FUNCTION\"\n  [c40e]: #x-28MGL-GD-3A-40MGL-GD-UTILITIES-20MGL-PAX-3ASECTION-29 \"Utilities\"\n  [c469]: #x-28MGL-BP-3A--3EBATCH-NORMALIZATION-20CLASS-29 \"MGL-BP:->BATCH-NORMALIZATION CLASS\"\n  [c573]: #x-28MGL-CORE-3A-40MGL-CLASSIFICATION-MONITOR-20MGL-PAX-3ASECTION-29 \"Classification Monitors\"\n  [c58b]: #x-28MGL-OPT-3A-40MGL-OPT-GRADIENT-SOURCE-20MGL-PAX-3ASECTION-29 \"Implementing Gradient Sources\"\n  [c701]: #x-28MGL-CORE-3A-40MGL-MONITOR-20MGL-PAX-3ASECTION-29 \"Monitors\"\n  [c74a]: #x-28MGL-OPT-3A-40MGL-OPT-20MGL-PAX-3ASECTION-29 \"Gradient Based Optimization\"\n  [c7fa]: 
#x-28MGL-BP-3ARENORMALIZE-ACTIVATIONS-20FUNCTION-29 \"MGL-BP:RENORMALIZE-ACTIVATIONS FUNCTION\"\n  [c8db]: #x-28MGL-CORE-3A-40MGL-FEATURES-20MGL-PAX-3ASECTION-29 \"Features\"\n  [c918]: #x-28MGL-COMMON-3ABATCH-SIZE-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZATION-29-29 \"MGL-COMMON:BATCH-SIZE (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZATION)\"\n  [ca09]: #x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20GENERIC-FUNCTION-29 \"MGL-OPT:RESET-OPTIMIZATION-MONITORS GENERIC-FUNCTION\"\n  [caec]: #x-28MGL-CORE-3ALABEL-INDEX-DISTRIBUTION-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDEX-DISTRIBUTION GENERIC-FUNCTION\"\n  [cbb4]: #x-28MGL-NLP-3ABAG-OF-WORDS-ENCODER-20CLASS-29 \"MGL-NLP:BAG-OF-WORDS-ENCODER CLASS\"\n  [cc1c]: #x-28MGL-COMMON-3ANODES-20GENERIC-FUNCTION-29 \"MGL-COMMON:NODES GENERIC-FUNCTION\"\n  [cc37]: #x-28MGL-CORE-3AATTRIBUTES-20-28MGL-PAX-3AACCESSOR-20MGL-CORE-3AATTRIBUTED-29-29 \"MGL-CORE:ATTRIBUTES (MGL-PAX:ACCESSOR MGL-CORE:ATTRIBUTED)\"\n  [cc80]: #x-28MGL-CORE-3ALABEL-INDEX-20GENERIC-FUNCTION-29 \"MGL-CORE:LABEL-INDEX GENERIC-FUNCTION\"\n  [cd3b]: #x-28MGL-CORE-3A-40MGL-MEASURER-20MGL-PAX-3ASECTION-29 \"Measurers\"\n  [cee6]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_symb_5.htm \"SYMBOL-VALUE (MGL-PAX:CLHS FUNCTION)\"\n  [d0e3]: #x-28MGL-BP-3A-40MGL-RNN-TIME-WARP-20MGL-PAX-3ASECTION-29 \"Time Warp\"\n  [d10a]: #x-28MGL-CG-3AON-CG-BATCH-DONE-20-28MGL-PAX-3AACCESSOR-20MGL-CG-3ACG-OPTIMIZER-29-29 \"MGL-CG:ON-CG-BATCH-DONE (MGL-PAX:ACCESSOR MGL-CG:CG-OPTIMIZER)\"\n  [d1e0]: #x-28MGL-BP-3A-40MGL-BPN-20MGL-PAX-3ASECTION-29 \"`BPN`s\"\n  [d3b2]: #x-28MGL-CORE-3APARAMETERIZED-EXECUTOR-CACHE-MIXIN-20CLASS-29 \"MGL-CORE:PARAMETERIZED-EXECUTOR-CACHE-MIXIN CLASS\"\n  [d443]: #x-28MGL-NLP-3AENCODED-FEATURE-TYPE-20-28MGL-PAX-3AREADER-20MGL-NLP-3ABAG-OF-WORDS-ENCODER-29-29 \"MGL-NLP:ENCODED-FEATURE-TYPE (MGL-PAX:READER MGL-NLP:BAG-OF-WORDS-ENCODER)\"\n  [d479]: #x-28MGL-OPT-3ARESET-OPTIMIZATION-MONITORS-20-28METHOD-20-28MGL-OPT-3AITERATIVE-OPTIMIZER-20T-29-29-29 \"MGL-OPT:RESET-OPTIMIZATION-MONITORS (METHOD (MGL-OPT:ITERATIVE-OPTIMIZER T))\"\n  [d699]: #x-28MGL-COMMON-3ANODES-20-28MGL-PAX-3AREADER-20MGL-BP-3ALUMP-29-29 \"MGL-COMMON:NODES (MGL-PAX:READER MGL-BP:LUMP)\"\n  [d6e0]: #x-28MGL-BP-3AWARP-START-20-28MGL-PAX-3AREADER-20MGL-BP-3ARNN-29-29 \"MGL-BP:WARP-START (MGL-PAX:READER MGL-BP:RNN)\"\n  [d811]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_apply.htm \"APPLY (MGL-PAX:CLHS FUNCTION)\"\n  [d94e]: #x-28MGL-GD-3ABATCH-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:BATCH-GD-OPTIMIZER CLASS\"\n  [d96a]: #x-28MGL-BP-3AMEAN-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EGAUSSIAN-RANDOM-29-29 \"MGL-BP:MEAN (MGL-PAX:ACCESSOR MGL-BP:->GAUSSIAN-RANDOM)\"\n  [db03]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_eql.htm \"EQL (MGL-PAX:CLHS FUNCTION)\"\n  [dbc4]: #x-28MGL-BP-3A--3EV-2AM-20CLASS-29 \"MGL-BP:->V*M CLASS\"\n  [dd95]: #x-28MGL-OPT-3AINITIALIZE-GRADIENT-SOURCE-2A-20GENERIC-FUNCTION-29 \"MGL-OPT:INITIALIZE-GRADIENT-SOURCE* GENERIC-FUNCTION\"\n  [e0e6]: #x-28MGL-GD-3AADAM-OPTIMIZER-20CLASS-29 \"MGL-GD:ADAM-OPTIMIZER CLASS\"\n  [e198]: #x-28MGL-COMMON-3A-40MGL-COMMON-20MGL-PAX-3ASECTION-29 \"Common Stuff\"\n  [e46f]: #x-28MGL-CORE-3AMAKE-CROSS-ENTROPY-MONITORS-2A-20GENERIC-FUNCTION-29 \"MGL-CORE:MAKE-CROSS-ENTROPY-MONITORS* GENERIC-FUNCTION\"\n  [e4dd]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Fs_multip.htm \"MULTIPLE-VALUE-CALL (MGL-PAX:CLHS 
MGL-PAX:MACRO)\"\n  [e50c]: #x-28MGL-CORE-3AMONITOR-MODEL-RESULTS-20FUNCTION-29 \"MGL-CORE:MONITOR-MODEL-RESULTS FUNCTION\"\n  [e668]: #x-28MGL-CORE-3A-40MGL-MONITORING-20MGL-PAX-3ASECTION-29 \"Monitoring\"\n  [e746]: #x-28MGL-OPT-3A-40MGL-OPT-COST-20MGL-PAX-3ASECTION-29 \"Cost Function\"\n  [e7ea]: #x-28MGL-3A-40MGL-DEPENDENCIES-20MGL-PAX-3ASECTION-29 \"Dependencies\"\n  [e7f6]: #x-28MGL-BP-3ADROPOUT-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3A--3EINPUT-29-29 \"MGL-BP:DROPOUT (MGL-PAX:ACCESSOR MGL-BP:->INPUT)\"\n  [e8d2]: #x-28MGL-BP-3A--3ESQUARED-DIFFERENCE-20CLASS-29 \"MGL-BP:->SQUARED-DIFFERENCE CLASS\"\n  [ea7d]: #x-28MGL-LOG-3ALOG-MAT-ROOM-20FUNCTION-29 \"MGL-LOG:LOG-MAT-ROOM FUNCTION\"\n  [eaf1]: #x-28MGL-BP-3ABATCH-NORMALIZATION-20-28MGL-PAX-3AREADER-20MGL-BP-3A--3EBATCH-NORMALIZED-29-29 \"MGL-BP:BATCH-NORMALIZATION (MGL-PAX:READER MGL-BP:->BATCH-NORMALIZED)\"\n  [eb05]: #x-28MGL-CORE-3AMEASURER-20-28MGL-PAX-3AREADER-20MGL-CORE-3AMONITOR-29-29 \"MGL-CORE:MEASURER (MGL-PAX:READER MGL-CORE:MONITOR)\"\n  [ebd4]: #x-28MGL-OPT-3AON-OPTIMIZATION-STARTED-20-28MGL-PAX-3AACCESSOR-20MGL-OPT-3AITERATIVE-OPTIMIZER-29-29 \"MGL-OPT:ON-OPTIMIZATION-STARTED (MGL-PAX:ACCESSOR MGL-OPT:ITERATIVE-OPTIMIZER)\"\n  [ec8b]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_zerop.htm \"ZEROP (MGL-PAX:CLHS FUNCTION)\"\n  [ece2]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ff_sin_c.htm \"SIN (MGL-PAX:CLHS FUNCTION)\"\n  [ed4f]: #x-28MGL-BP-3A-2AWARP-TIME-2A-20VARIABLE-29 \"MGL-BP:*WARP-TIME* VARIABLE\"\n  [edcf]: #x-28MGL-BP-3A--3ESUM-20CLASS-29 \"MGL-BP:->SUM CLASS\"\n  [ee86]: http:\u002F\u002Fwww.lispworks.com\u002Fdocumentation\u002FHyperSpec\u002FBody\u002Ft_mod.htm \"MOD (MGL-PAX:CLHS TYPE)\"\n  [ee97]: #x-28MGL-CG-3ACG-OPTIMIZER-20CLASS-29 \"MGL-CG:CG-OPTIMIZER CLASS\"\n  [f00d]: #x-28MGL-OPT-3ASEGMENTS-20GENERIC-FUNCTION-29 \"MGL-OPT:SEGMENTS GENERIC-FUNCTION\"\n  [f17b]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-CROSS-VALIDATION-20MGL-PAX-3ASECTION-29 \"Cross-validation\"\n  [f1c1]: #x-28MGL-BP-3A--3EEMBEDDING-20CLASS-29 \"MGL-BP:->EMBEDDING CLASS\"\n  [f257]: #x-28MGL-CORE-3A-40MGL-CORE-20MGL-PAX-3ASECTION-29 \"Core\"\n  [f491]: #x-28MGL-COMMON-3AFN-20-28MGL-PAX-3AREADER-20MGL-DIFFUN-3ADIFFUN-29-29 \"MGL-COMMON:FN (MGL-PAX:READER MGL-DIFFUN:DIFFUN)\"\n  [f54e]: #x-28MGL-BP-3A--3EINPUT-20CLASS-29 \"MGL-BP:->INPUT CLASS\"\n  [f573]: #x-28MGL-BP-3ACUDA-WINDOW-START-TIME-20-28MGL-PAX-3AACCESSOR-20MGL-BP-3ARNN-29-29 \"MGL-BP:CUDA-WINDOW-START-TIME (MGL-PAX:ACCESSOR MGL-BP:RNN)\"\n  [f652]: #x-28MGL-BP-3A--3EMAX-20CLASS-29 \"MGL-BP:->MAX CLASS\"\n  [f6ae]: #x-28MGL-GD-3ANORMALIZED-BATCH-GD-OPTIMIZER-20CLASS-29 \"MGL-GD:NORMALIZED-BATCH-GD-OPTIMIZER CLASS\"\n  [f790]: #x-28MGL-RESAMPLE-3A-40MGL-RESAMPLE-PARTITIONS-20MGL-PAX-3ASECTION-29 \"Partitions\"\n  [f7aa]: #x-28MGL-3A-40MGL-INTRODUCTION-20MGL-PAX-3ASECTION-29 \"Introduction\"\n  [f7c1]: #x-28MGL-BP-3ACLUMPS-20-28MGL-PAX-3AREADER-20MGL-BP-3ABPN-29-29 \"MGL-BP:CLUMPS (MGL-PAX:READER MGL-BP:BPN)\"\n  [f85e]: #x-28MGL-LOG-3ALOG-MSG-20FUNCTION-29 \"MGL-LOG:LOG-MSG FUNCTION\"\n  [f956]: #x-28MGL-DATASET-3ASAMPLE-20GENERIC-FUNCTION-29 \"MGL-DATASET:SAMPLE GENERIC-FUNCTION\"\n  [f98e]: #x-28MGL-CORE-3ADO-EXECUTORS-20MGL-PAX-3AMACRO-29 \"MGL-CORE:DO-EXECUTORS MGL-PAX:MACRO\"\n  [fa6d]: #x-28MGL-COMMON-3ABATCH-SIZE-20GENERIC-FUNCTION-29 \"MGL-COMMON:BATCH-SIZE GENERIC-FUNCTION\"\n  [faaa]: #x-28MGL-CORE-3ADO-BATCHES-FOR-MODEL-20MGL-PAX-3AMACRO-29 \"MGL-CORE:DO-BATCHES-FOR-MODEL MGL-PAX:MACRO\"\n  
[feaa]: #x-28MGL-BP-3A--3EGAUSSIAN-RANDOM-20CLASS-29 \"MGL-BP:->GAUSSIAN-RANDOM CLASS\"\n  [fedd]: #x-28MGL-CORE-3AENCODE-20GENERIC-FUNCTION-29 \"MGL-CORE:ENCODE GENERIC-FUNCTION\"\n  [ff5a]: #x-28MGL-BP-3ALAG-20FUNCTION-29 \"MGL-BP:LAG FUNCTION\"\n  [ff82]: #x-28MGL-CORE-3A-40MGL-MODEL-STRIPE-20MGL-PAX-3ASECTION-29 \"Batch Processing\"\n\n* * *\n###### \\[generated by [MGL-PAX](https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl-pax)\\]","# MGL 快速上手指南\n\nMGL 是一个基于 Common Lisp 的机器学习库，专注于反向传播神经网络、玻尔兹曼机和高斯过程等模型。它构建在 `MGL-MAT` 之上，支持 BLAS 和 CUDA 加速，旨在提供高性能和强大的功能。\n\n## 1. 环境准备\n\n### 系统要求\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Lisp 实现**：推荐使用 SBCL (Steel Bank Common Lisp)\n*   **硬件加速（可选）**：\n    *   **CPU**：需安装 BLAS\u002FLAPACK 库（推荐 OpenBLAS，配置简单且性能优异）。\n    *   **GPU**：若需 CUDA 加速，需安装 NVIDIA GPU 驱动及 CUDA SDK。若无 GPU 或未安装 SDK，MGL 会自动回退到 BLAS 或纯 Lisp 代码运行。\n\n### 前置依赖\nMGL 依赖以下关键库，其中部分库尚未收录于 Quicklisp 官方仓库，需手动放置：\n*   `CL-CUDA`\n*   `MGL-MAT`\n*   其他依赖（通常可通过 Quicklisp 自动解决）：`alexandria`, `lla`, `mgl-gnuplot`, `mgl-pax` 等。\n\n> **注意**：BLAS\u002FLAPACK 的配置通过 `LLA` 库进行。请参考 [LLA README](https:\u002F\u002Fgithub.com\u002Ftpapp\u002Flla) 设置外部库路径。\n\n## 2. 安装步骤\n\n### 第一步：获取未收录于 Quicklisp 的依赖\n将 `CL-CUDA` 和 `MGL-MAT` 克隆或下载到 Quicklisp 的本地项目目录中：\n\n```bash\n# 假设 QUICKLISP_HOME 为 ~\u002Fquicklisp\ncd ~\u002Fquicklisp\u002Flocal-projects\u002F\ngit clone https:\u002F\u002Fgithub.com\u002Ftakagi\u002Fcl-cuda.git\ngit clone https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl-mat.git\n```\n\n### 第二步：获取 MGL\n同样将 MGL 放入本地项目目录：\n\n```bash\ncd ~\u002Fquicklisp\u002Flocal-projects\u002F\ngit clone https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl.git\n```\n\n### 第三步：加载系统\n启动你的 Common Lisp REPL（如 SBCL），执行以下命令加载 MGL：\n\n```common-lisp\n(ql:quickload :mgl)\n```\n\n### 第四步：验证安装（可选）\n运行内置测试套件以验证安装是否正确（注意：由于测试包含随机性，偶尔可能会失败）：\n\n```common-lisp\n(asdf:oos 'asdf:test-op '#:mgl)\n```\n\n## 3. 基本使用\n\nMGL 的核心概念包括数据集（Datasets）、采样器（Samplers）和模型训练。以下是一个简单的使用流程示例。\n\n### 3.1 数据准备与采样器\nMGL 使用采样器（Samplers）来流式处理数据，避免一次性加载大量数据到内存。\n\n```common-lisp\n(in-package :mgl)\n\n;; 创建一个简单的序列采样器\n;; 从列表 '(1 2 3 4 5) 中按顺序获取样本，最多获取 3 个\n(defparameter *my-sampler*\n  (make-sequence-sampler '(1 2 3 4 5) :max-n-samples 3))\n\n;; 获取样本\n(sample *my-sampler*) ;; => 1\n(sample *my-sampler*) ;; => 2\n(finishedp *my-sampler*) ;; => NIL\n\n;; 创建随机采样器（每次遍历结束后重新洗牌）\n(defparameter *random-sampler*\n  (make-random-sampler '(1 2 3 4 5)))\n```\n\n### 3.2 数据集操作\n你可以使用 `map-dataset` 遍历数据集中的每个实例。\n\n```common-lisp\n;; 遍历打印数据集中的每个元素\n;; 注意：采样器是有状态的。下例假设 *my-sampler* 是新创建的；\n;; 若沿用 3.1 中已取过两个样本的采样器，则只会打印剩余的 3\n(map-dataset #'print *my-sampler*)\n;; 输出:\n;; 1 \n;; 2 \n;; 3 \n```\n\n### 3.3 GPU 加速检查\n如果你配置了 CUDA，可以通过以下方式检查是否正在使用 GPU：\n\n```common-lisp\n;; 检查 CUDA 是否可用\n(mgl-mat:cuda-available-p)\n\n;; 在 CUDA 上下文中执行代码\n;; 注意：WITH-CUDA* 的第一个参数是选项列表，此处留空\n(mgl-mat:with-cuda* ()\n  ;; 在此处执行矩阵运算或模型训练\n  (format t \"Running on GPU if available.~%\"))\n```\n\n### 3.4 简单模型训练概念\nMGL 的训练通常涉及定义优化器（Optimizer）和成本函数（Cost Function）。虽然具体模型构建较为复杂，但基本流程如下：\n\n1.  定义神经网络结构（使用 `BPN` 或 `Lumps` API）。\n2.  定义损失函数。\n3.  使用梯度下降优化器（如 `MGL-GD` 包中的工具）进行训练（三步串联的最小示意见下方代码草图）。
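\n\n下面是一个把上述三步串起来的最小示意草图，结构仿照官方手册中的 `FNN` Tutorial；其中各层尺寸（4\u002F10\u002F3）与 `*training-sampler*` 均为假设性示例，真正运行前还需为自己的样本类型特化 `SET-INPUT`，请以官方手册为准：\n\n```common-lisp\n(in-package :mgl)\n\n;; 1. 网络结构：输入 -> 线性激活 -> ReLU -> softmax 交叉熵输出\n;;    （尺寸 4\u002F10\u002F3 为假设的玩具参数）\n(defparameter *fnn*\n  (build-fnn (:class 'fnn :max-n-stripes 10)\n    (input (->input :size 4))\n    (hidden-activation (->activation input :size 10))\n    (hidden (->relu hidden-activation))\n    (output-activation (->activation hidden :size 3))\n    ;; 2. 损失函数：softmax 交叉熵（训练时需向其 TARGET 写入标签）\n    (output (->softmax-xe-loss output-activation))))\n\n;; 3. 用梯度下降优化器最小化损失。此调用以 #+nil 屏蔽：\n;;    运行前需先为自己的数据特化 MGL-CORE:SET-INPUT，\n;;    并把 *training-sampler* 换成真实的样本采样器（此处仅为假设名）。\n#+nil\n(minimize (make-instance 'sgd-optimizer :learning-rate 0.1 :batch-size 10)\n          (make-instance 'bp-learner :bpn *fnn*)\n          :dataset *training-sampler*)\n```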
\n\n*注：由于 MGL 侧重底层性能和灵活性，构建完整神经网络需要较多样板代码，建议参考官方 HTML 文档中的 \"Backpropagation Neural Networks\" 章节获取详细建模示例。*\n\n---\n**更多资源：**\n*   [官方仓库](https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl)\n*   [HTML 文档](http:\u002F\u002Fmelisgl.github.io\u002Fmgl-pax-world\u002Fmgl-manual.html)","某量化交易团队的核心算法工程师正致力于在现有的 Common Lisp 高频交易系统中原生集成一个基于循环神经网络（RNN）的市场情绪预测模块，以利用 Lisp 强大的宏系统和实时数据处理能力。\n\n### 没有 mgl 时\n- **生态割裂严重**：必须通过外部进程调用 Python 或 C++ 编写的机器学习模型，导致 Lisp 主程序与模型服务间存在高昂的 IPC（进程间通信）延迟，难以满足微秒级交易需求。\n- **手动实现复杂**：若坚持纯 Lisp 实现，需从零编写反向传播算法、梯度下降优化器及矩阵运算底层逻辑，开发周期长达数月且极易引入数值计算错误。\n- **缺乏调试监控**：训练过程中无法直观监控损失函数收敛情况或混淆矩阵变化，只能依靠打印原始日志，难以快速定位模型过拟合或梯度消失问题。\n- **数据预处理繁琐**：缺少内置的数据重采样、交叉验证和特征编码工具，每次实验前需手写大量样板代码来处理时间序列数据的洗牌与分区。\n\n### 使用 mgl 后\n- **原生无缝集成**：直接利用 mgl 提供的 RNN 和反向传播网络组件，在 Lisp 环境中原生构建和训练模型，消除了跨语言调用开销，显著降低推理延迟。\n- **开箱即用算法**：直接调用封装好的梯度下降优化器、共轭梯度法及玻尔兹曼机等核心算法，将开发重心从底层数学实现转移到模型架构设计上，效率提升数倍。\n- **可视化监控完善**：借助内置的 Monitor 和 Measurer 模块，实时追踪分类准确率与损失值，并可通过集成的 gnuplot 接口即时生成训练曲线，快速迭代调优。\n- **标准化数据流**：利用内置的 Sampler 和 Partition 功能轻松实现数据的交叉验证与 Bagging 处理，配合特征编码工具，大幅简化了从原始行情数据到模型输入的预处理流程。\n\nmgl 让 Common Lisp 开发者无需跳出熟悉的语言生态，即可高效构建、训练并部署高性能神经网络，实现了系统低延迟与开发高效率的统一。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmelisgl_mgl_d8a3b34b.png","melisgl","Gábor Melis","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmelisgl_a96054b4.png",null,"mega@retes.hu","GaborMelis","http:\u002F\u002Fquotenil.com","https:\u002F\u002Fgithub.com\u002Fmelisgl",[85],{"name":86,"color":87,"percentage":88},"Common Lisp","#3fb68b",100,643,40,"2026-04-01T12:28:59","MIT",4,"未说明","非必需。支持 NVIDIA GPU 加速（通过 CL-CUDA），若无 GPU 或未安装 CUDA SDK，将回退使用 BLAS 和 Lisp 代码。具体显卡型号、显存大小及 CUDA 版本未在文中明确指定。",{"notes":97,"python":98,"dependencies":99},"1. 这是一个 Common Lisp 机器学习库，主要专注于神经网络（玻尔兹曼机、前馈和循环反向传播网络）。\n2. 核心依赖 MGL-MAT 以提供 BLAS 和 CUDA 支持。\n3. 依赖项中的 CL-CUDA 和 MGL-MAT 尚未收录于 Quicklisp，需手动下载并放入 quicklisp\u002Flocal-projects\u002F 目录。\n4. 外部线性代数库（BLAS\u002FLAPACK）的配置通过 LLA 进行，推荐使用 OpenBLAS（配置方式见下方示意片段）。\n5. 内置测试具有随机性，可能会偶尔失败。
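\n\n关于第 4 点，下面是一个遵循 LLA README 所述 `CL-USER::*LLA-CONFIGURATION*` 约定的配置示意片段；其中共享库路径为假设示例，请按本机实际情况调整，并以 LLA 当前文档为准：\n\n```common-lisp\n;; 放在 init 文件（如 ~\u002F.sbclrc）中，或在加载 LLA\u002FMGL 之前求值\n;; 路径为假设示例：应指向本机实际安装的 OpenBLAS 共享库\n(defvar cl-user::*lla-configuration*\n  '(:libraries (\"\u002Fusr\u002Flib\u002Flibopenblas.so\")))\n\n;; 随后正常加载，LLA 将使用上面指定的外部库\n(ql:quickload :mgl)\n```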
","不适用 (基于 Common Lisp)",[100,101,102,103,104,105,106,107,108,109],"alexandria","array-operations","cl-reexport","closer-mop","lla","mgl-gnuplot","mgl-mat","mgl-pax","named-readtables","num-utils",[13],"2026-03-27T02:49:30.150509","2026-04-06T05:32:26.208635",[114,119,124,129,134,139],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},11369,"在 SBCL\u002FLinux 上运行测试时出现内存错误（MEMORY-FAULT-ERROR）怎么办？","这个问题通常与 OpenBLAS 有关。尝试更换其他的 libblas 实现（例如不使用 OpenBLAS），问题通常即可解决。如果怀疑是堆损坏导致的，可以尝试注释掉 mgl-mat\u002Ftest\u002Ftest-mat.lisp 中的部分测试用例以定位具体问题。","https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues\u002F8",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},11370,"加载 MGL 时报错找不到 \"MGL-GNUPLOT\" 包怎么办？","这通常是本地配置问题。确保你使用了 `ql:register-local-projects` 来注册本地项目。如果你将 MGL 链接到了 ~\u002Fquicklisp\u002Flocal-projects\u002F，ASDF 可能无法自动找到 mgl-gnuplot.asd。建议检查是否错误地使用了 `REQUIRE` 而不是 `QL:QUICKLOAD`，并确保正确注册了本地项目路径（注册与加载的示意片段见文末）。","https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues\u002F10",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},11371,"编译时出现 \"Package SWANK-MOP does not exist\" 错误如何解决？","这是因为 MGL 存在未声明的 swank-mop 依赖。该问题已在后续提交中修复（commit 2995d1a）。请更新到最新版本以解决此构建错误。","https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues\u002F15",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},11372,"编译时出现 \"Package CLNU does not exist\" 错误如何解决？","这是一个已知的构建依赖问题，已在 commit 4022eda 中修复。请更新代码库到最新提交以解决此问题。","https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues\u002F14",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},11373,"MGL 现在可以通过 Quicklisp 安装吗？","是的，MGL 现在已经包含在 Quicklisp 中。此前因为依赖项 CL-CUDA 和 MGL-MAT 未在 Quicklisp 中而导致无法直接安装，但现在这些依赖问题已解决，可以直接通过 Quicklisp 加载 MGL。","https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues\u002F11",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},11374,"HTML 文档中的数学公式（LaTeX）没有正确渲染怎么办？","这个问题已通过更新 mgl-pax 和应用 3bmd 的变通方法修复。请确保你使用的是最新版本的 mgl-pax 和相关文档生成工具，数学符号现在应该能正常显示。","https:\u002F\u002Fgithub.com\u002Fmelisgl\u002Fmgl\u002Fissues\u002F9",[]]
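附：针对上文 FAQ 中找不到 "MGL-GNUPLOT" 包的问题，下面是一个在 REPL 中注册并加载本地项目的最小示意（假设各仓库已克隆到 ~/quicklisp/local-projects/；`ql:register-local-projects` 与 `ql:quickload` 均为 Quicklisp 自带函数）：

```common-lisp
;; 让 Quicklisp 重新扫描 local-projects 目录并重建 system 索引
(ql:register-local-projects)

;; 随后用 QL:QUICKLOAD（而非 REQUIRE）加载 MGL 及其依赖
(ql:quickload :mgl)
```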