[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-antirez--neural-redis":3,"tool-antirez--neural-redis":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 
50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":75,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":97,"env_os":98,"env_gpu":99,"env_ram":100,"env_deps":101,"category_tags":107,"github_topics":108,"view_count":23,"oss_zip_url":108,"oss_zip_packed_at":108,"status":16,"created_at":109,"updated_at":110,"faqs":111,"releases":112},3824,"antirez\u002Fneural-redis","neural-redis","Neural networks module for Redis","neural-redis 是一个为 Redis 设计的开源模块，它将前馈神经网络作为原生数据类型直接集成到数据库中。该项目旨在打破机器学习的高门槛，让开发者无需搭建复杂的外部系统，即可在 Redis 内部完成数据收集、模型训练与预测执行的全流程。\n\n它主要解决了传统机器学习流程繁琐、部署困难的问题，特别适用于需要快速响应用户行为数据的场景。无论是判断“向哪位用户推送何种促销最有效”，还是预测数据趋势，neural-redis 都能通过简单的 API 调用轻松实现。它非常适合移动端和 Web 应用开发者，帮助他们在几分钟内为应用赋予智能决策能力，而无需深厚的算法背景。\n\n在技术亮点上，neural-redis 支持自动数据归一化和在线多线程训练，允许在模型持续学习的同时进行实时预测，并具备防止过拟合的自动检测机制。不过需要注意的是，目前该工具处于早期测试阶段，主要针对中小型回归与分类问题优化，并不适合处理复杂的计算机视觉等重型任务。对于希望以最低成本尝试机器学习的团队而言，这是一个极具探索价值的轻量级方案。","Neural Redis\n===\n\n*Machine learning is like highschool sex. Everyone says they do it, nobody really does, and no one knows what it actually is.* -- [@Mikettownsend](https:\u002F\u002Ftwitter.com\u002FMikettownsend\u002Fstatus\u002F780453119238955008).\n\nNeural Redis is a Redis loadable module that implements feed forward neural\nnetworks as a native data type for Redis. The project goal is to provide\nRedis users with an extremely simple to use machine learning experience.\n\nNormally machine learning is operated by collecting data, training some\nsystem, and finally executing the resulting program in order to solve actual\nproblems. 
In Neural Redis all these phases are compressed into a single API:
the data collection and training all happen inside the Redis server.
Neural networks can be executed while there is an ongoing training, and can
be re-trained multiple times as new data from the outside is collected
(for instance user events).

The project starts from the observation that, while complex problems like
computer vision need complex, slow to train neural network setups, many
regression and classification problems that can enhance the user
experience in many applications are approachable by small, fully
connected feed forward networks, which are very fast to train, very generic,
and robust against non optimal parameter configurations.

Neural Redis implements:

* A very simple to use API.
* Automatic data normalization.
* Online training of neural networks in different threads.
* Ability to use the neural network while the system is training it (we train a copy and only later merge the weights).
* Fully connected neural networks using the RPROP (resilient backpropagation) learning algorithm.
* Automatic training with simple overtraining detection.

The goal is to help developers, especially of mobile and web applications, to
have simple access to machine learning, in order to answer questions like:

* What promotion is most likely to work with this user?
* What ad should I display to obtain the best conversion?
* What template is the user likely to appreciate?
* What is a likely future trend for these data points?

Of course you can do more, since neural networks are pretty flexible. You
can even have fun with computer vision datasets like
[MNIST](http://yann.lecun.com/exdb/mnist/), however keep in mind that
the neural networks implemented in Neural Redis are not optimized for
complex computer vision tasks the way convolutional networks are (it will
score a 2.3% error rate, very far from the state of the art!), nor does
Neural Redis implement the wonders of recurrent neural networks.

However you'll be surprised by the number of tasks in which a simple
neural network that can be trained in minutes will be able to discover
linear and non linear correlations.

Loading the extension
===

To run this extension you need Redis `unstable`: grab it from GitHub, it
is the default branch. Then compile the extension, and load it by starting
Redis with:

    redis-server --loadmodule /path/to/neuralredis.so

Alternatively add the following to your `redis.conf` file:

    loadmodule /path/to/neuralredis.so

WARNING: alpha code
===

**WARNING:** this is alpha code. It is likely to contain bugs and may
easily crash the Redis server. Also note that currently only
RDB persistence is implemented in the module, while AOF rewrite
is not implemented at all. Use at your own risk.

If you are still not scared enough, please consider that I wrote the
more than 1000 lines of C code composing this extension, and this
README file, in roughly two days.

Note that this implementation could be hugely improved.
For instance,
currently only the sigmoid activation function and the root mean
square loss function are supported: while for the problems this
module aims to address this limited neural network implementation
is proving to be quite flexible, it is possible to do much better
depending on the problem at hand.

Hello World
===

In order to understand how the API works, here is a hello world example
where we'll teach our neural network to do... additions :-)

To create a new neural network we use the following command:

    > NR.CREATE net REGRESSOR 2 3 -> 1 NORMALIZE DATASET 50 TEST 10
    (integer) 13

The command creates a neural network configured for regression tasks
(as opposed to classification: we'll explain what this means
in the course of this tutorial).

Note that the command replied with "13". It means that the network
has a total of 13 tunable parameters, considering all the weights
that go from units or biases to other units. Larger networks
will have a lot more parameters.

The neural network has 2 inputs, a single hidden layer of 3 units, and one
output. Regression means that, given certain inputs and desired outputs, we
want the neural network to be able to *understand* the function that
computes the outputs from the inputs, and to compute this function when
new inputs are presented to it.

The `NORMALIZE` option means that it is up to Redis to normalize the
data it receives, so there is no need to provide data in the -/+ 1 range.
The options `DATASET 50` and `TEST 10` mean that we want an internal
memory of 50 items for the training dataset and of 10 items for the
testing dataset.

The learning happens using the training dataset, while the testing dataset
is used in order to detect if the network is able to generalize, that is,
is really able to understand how to approximate a given function.
At the same time, the testing dataset is useful to avoid training the
network too much, a problem known as *overfitting*. Overfitting means that
the network becomes too specific, to the point of being only capable of
replying correctly to the exact inputs and outputs it was presented with.

Now it is time to provide the network with some data, so that it can learn
the function we want it to approximate:

    > NR.OBSERVE net 1 2 -> 3
    1) (integer) 1
    2) (integer) 0

We are saying: given the inputs 1 and 2, the output is 3.
The reply to the `NR.OBSERVE` command is the number of data items
stored in the neural network memory, respectively in the training
and testing datasets.

We continue like that with other examples:

    > NR.OBSERVE net 4 5 -> 9
    > NR.OBSERVE net 3 4 -> 7
    > NR.OBSERVE net 1 1 -> 2
    > NR.OBSERVE net 2 2 -> 4
    > NR.OBSERVE net 0 9 -> 9
    > NR.OBSERVE net 7 5 -> 12
    > NR.OBSERVE net 3 1 -> 4
    > NR.OBSERVE net 5 6 -> 11

At this point we need to train the neural network, so that it
can learn:

    > NR.TRAIN net AUTOSTOP

The `NR.TRAIN` command starts a training thread. The `AUTOSTOP` option
means that we want the training to stop before overfitting starts
to happen.

Using the `NR.INFO` command you can see if the network is still training.
However in this specific case the network will take a few milliseconds to
train, so we can immediately check whether it actually learned how to add
two numbers:

    > NR.RUN net 1 1
    1) "2.0776522297040843"

    > NR.RUN net 3 2
    1) "5.1765427204933099"

Well, more or less it works.
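By the way, the same hello world can be driven from Ruby, in the call
style used by the scripts in the `examples` directory. The following is
just a sketch, assuming the `redis` gem and a server started with the
module loaded:

```
require 'redis'

r = Redis.new

# 2 inputs, one hidden layer of 3 units, 1 output, auto-normalization.
# NR.CREATE returns an error if the key already exists.
r.send('nr.create', :net, 'REGRESSOR', 2, 3, '->', 1,
       'NORMALIZE', 'DATASET', 50, 'TEST', 10)

# Observe a few (a, b) -> a+b samples.
[[1,2],[4,5],[3,4],[1,1],[2,2],[0,9],[7,5],[3,1],[5,6]].each do |a,b|
    r.send('nr.observe', :net, a, b, '->', a+b)
end

r.send('nr.train', :net, 'AUTOSTOP')
sleep(0.1) # this tiny network trains in a few milliseconds

puts r.send('nr.run', :net, 3, 2) # prints something close to 5
```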
Let's look at some internal info now:

    > NR.INFO net
     1) id
     2) (integer) 1
     3) type
     4) regressor
     5) auto-normalization
     6) (integer) 1
     7) training
     8) (integer) 0
     9) layout
    10) 1) (integer) 2
        2) (integer) 3
        3) (integer) 1
    11) training-dataset-maxlen
    12) (integer) 50
    13) training-dataset-len
    14) (integer) 6
    15) test-dataset-maxlen
    16) (integer) 10
    17) test-dataset-len
    18) (integer) 2
    19) training-total-steps
    20) (integer) 1344
    21) training-total-seconds
    22) 0.00
    23) dataset-error
    24) "7.5369825612397299e-05"
    25) test-error
    26) "0.00042670663615723583"
    27) classification-errors-perc
    28) 0.00

As you can see we have 6 training items and 2 test items. We configured
the network at creation time to have space for 50 and 10 items. As you add
items with `NR.OBSERVE`, the network distributes them between the two
datasets, proportionally to their respective sizes. Finally, when the
datasets are full, random old entries are replaced with new ones.

We can also see that the network was trained with 1344 steps in 0 seconds
(just a few milliseconds). Each step is the training performed with a single
data item, so the same 6 items were presented to the network for 224 cycles
in total.

A few words about normalization
===

If we try to use our network with values outside the range it learned with,
we'll see it failing:

    > NR.RUN net 10 10
    1) "12.855978185382257"

This happens because the automatic normalization considers the maximum
values seen in the training dataset. So if you plan to use auto
normalization, make sure to show the network samples with different values,
including inputs at the maximum of the data you'll want to use the network
with in the future.

Classification tasks
===

Regression approximates a function given certain inputs and outputs in the
training dataset. Classification instead is the task of, given a set of
inputs representing *something*, labeling it with one of a fixed set of
labels.

For example the inputs may be features of Greek jars, and the classification
output could be one of the following three jar types:

* Type 0: Kylix type A
* Type 1: Kylix type B
* Type 2: Kassel cup

As a programmer you may think that the output class is just a single output
number.
However, neural networks don't work well this way: for example,
classifying type 0 with an output between 0 and 0.33, type 1 with an output
between 0.33 and 0.66, and finally type 2 with an output between 0.66 and 1
will not work well at all.

The way to go instead is to use three distinct outputs, where we always set
two to 0 and a single one to 1, corresponding to the type the output
represents, so:

* Type 0: [1, 0, 0]
* Type 1: [0, 1, 0]
* Type 2: [0, 0, 1]

When you create a neural network with the `NR.CREATE` command and use
`CLASSIFIER` instead of `REGRESSOR` as the second argument, Neural Redis
will do the above transformation for you, so when you train your network
with `NR.OBSERVE` you'll just use a single number as output: 0, 1 or 2.

Of course, you need to create the network with three outputs, like this:

    > NR.CREATE mynet CLASSIFIER 5 10 -> 3
    (integer) 93

Our network is currently untrained, but it can already be run, even if the
replies it will provide are totally random:

    > NR.RUN mynet 0 1 1 0 1
    1) "0.50930603602918945"
    2) "0.48879876200255651"
    3) "0.49534453421381375"

As you can see, the network *voted* for type 0, since the first output is
greater than the others. There is a Neural Redis command that saves you the
work of finding the greatest output client side in order to interpret the
result as a number between 0 and 2. It is identical to `NR.RUN` but directly
outputs the class ID, and is called `NR.CLASS`:

    > NR.CLASS mynet 0 1 1 0 1
    (integer) 0

However note that often `NR.RUN` is useful for classification problems too.
For example a blogging platform may want to train a neural network to
predict the template that will appeal most to the user, based on the
registration data we just obtained, which includes the country, sex, age
and category of the blog.

While the prediction of the network will be the output with the highest
value, if we want to present different templates it makes sense to
present, in the listing, as the second one the one with the second
highest output value, and so forth (a small sketch of this is shown at the
end of this section).

Before diving into a practical classification example, there is one last
thing to say. Networks of type CLASSIFIER are also trained in a different
way: instead of giving as output a list of zeros and ones, you directly
provide to `NR.OBSERVE` the class ID as a number. So, in the example
of the jars, we don't need to write `NR.OBSERVE mynet 1 0.4 .2 0 1 -> 0 0 1`
to specify that the provided data sample belongs to the third class, but
should just write:

    > NR.OBSERVE mynet 1 0.4 .2 0 1 -> 2

The "2" will be translated into "0 0 1" automatically, just as "1"
would be translated to "0 1 0" and so forth.
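Client side, the ranking trick mentioned above is a couple of lines of
Ruby. A sketch, in the call style of the repo's examples (the `inputs`
encoding of the registration data is hypothetical):

```
require 'redis'

r = Redis.new

# NR.CLASS only returns the single best class. To order all the
# templates by predicted appeal, rank the raw NR.RUN outputs instead.
inputs = [0, 1, 1, 0, 1]  # hypothetical encoded registration data
outputs = r.send('nr.run', :mynet, *inputs).map(&:to_f)
ranking = outputs.each_with_index.sort_by { |score, _id| -score }
ranking.each { |score, id| puts "template #{id}: #{score}" }
```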
A practical example: the Titanic dataset
===

[Kaggle.com](https://www.kaggle.com/) is hosting a machine learning
competition. One of the datasets they use is the list of the Titanic
passengers: their ticket class, fare, number of relatives, age,
sex and other information, and... whether or not they survived the
Titanic disaster.

You can find both the code and a CSV with a reduced dataset of 891
entries in the `examples` directory of this GitHub repository.

In this example we are going to try to predict, given a few input
variables, whether a specific person is going to survive or not, so this
is a classification task, where we label persons with two different
labels: survived or died.

This problem is pretty similar, even if a bit more scary, to the
problem of labeling users, or predicting their response in some web
application, according to their behavior and the other data we collected
in the past (hint: machine learning is all about collecting data...).

The CSV contains a number of pieces of information about each passenger,
but here, in order to make the example simpler, we'll use just the
following fields:

* Ticket class (1st, 2nd, 3rd).
* Sex.
* Age.
* Sibsp (number of siblings and spouses aboard).
* Parch (number of parents and children aboard).
* Ticket fare.

If there is a correlation between these input variables and the
chances of survival, our neural network should find it.

Note that while we have six fields, we'll need a network
with 9 total inputs, since sex and ticket class are actually
*input classes*: what we did for the output, we'll need to
do for the input as well. Each such input signals whether the passenger
belongs to one of the possible classes. These are our nine inputs:

* Traveled in first class? (0 or 1).
* Traveled in second class? (0 or 1).
* Traveled in third class? (0 or 1).
* Is male? (0 or 1).
* Is female? (0 or 1).
* Age.
* Number of siblings / spouses.
* Number of parents / children.
* Ticket fare.

We have a bit fewer than 900 passengers (I'm using a reduced
dataset here), and we want to keep roughly a third of them for verification
at the application side, without sending them to Redis at all.

The neural network will also use part of the dataset for
verification, since here I'm planning to use the automatic training
stop feature in order to detect overfitting.

Such a network can be created with:

    > NR.CREATE mynet CLASSIFIER 9 15 -> 2 DATASET 1000 TEST 500 NORMALIZE

Also note that we are using a neural network with a single hidden
layer (the layers between inputs and outputs are called hidden, in
case you are new to neural networks). The hidden layer has 15 units.
This is still a pretty small network, but we expect that, for the
amount of data and the kind of correlations that there could be in
this data, this could be enough.
It's possible to test with
different parameters, and I plan to implement an `NR.CONFIGURE`
command so that it will be possible to change these things on the fly.

Also note that, since we defined the testing dataset maximum size to be
half that of the training dataset (1000 vs 500), `NR.OBSERVE` will
automatically put one third of the entries in the testing dataset.

If you check the Ruby program that implements this example inside the
source distribution, you'll see how data is fed to the network exactly as
it is, since we asked for auto normalization:

```
def feed_data(r,dataset,mode)
    errors = 0
    dataset.each{|d|
        # One-hot encode the ticket class (1st, 2nd, 3rd).
        pclass = [0,0,0]
        pclass[d[:pclass]-1] = 1
        inputs = pclass +
                 [d[:male],d[:female]] +
                 [d[:age],
                  d[:sibsp],
                  d[:parch],
                  d[:fare]]
        outputs = d[:survived]
        if mode == :observe
            # Add the sample to the network datasets.
            r.send('nr.observe',:mynet,*inputs,'->',outputs)
        elsif mode == :test
            # Ask the network for the class and count mispredictions.
            res = r.send('nr.class',:mynet,*inputs)
            if res != outputs
                errors += 1
            end
        end
    }
    if mode == :test
        puts "#{errors} prediction errors on #{dataset.length} items"
    end
end
```

The function is able to both send data and evaluate the error rate.

After we load 601 entries from the dataset, before any training, the output
of `NR.INFO` will look like this:

    > NR.INFO mynet
     1) id
     2) (integer) 5
     3) type
     4) classifier
     5) auto-normalization
     6) (integer) 1
     7) training
     8) (integer) 0
     9) layout
    10) 1) (integer) 9
        2) (integer) 15
        3) (integer) 2
    11) training-dataset-maxlen
    12) (integer) 1000
    13) training-dataset-len
    14) (integer) 401
    15) test-dataset-maxlen
    16) (integer) 500
    17) test-dataset-len
    18) (integer) 200
    19) training-total-steps
    20) (integer) 0
    21) training-total-seconds
    22) 0.00
    23) dataset-error
    24) "0"
    25) test-error
    26) "0"
    27) classification-errors-perc
    28) 0.00
    29) overfitting-detected
    30) no

As expected, we have 401 training items and 200 testing items.
Note that for networks declared as classifiers we have an additional
field in the info output, which is `classification-errors-perc`. Once
we train the network, this field will be populated with the percentage
(from 0% to 100%) of items in the testing dataset which were misclassified
by the neural network. It's time to train our network:

    > NR.TRAIN mynet AUTOSTOP
    Training has started

If we check the `NR.INFO` output after the training, we'll discover a few
interesting things (only quoting the relevant part of the output):

    19) training-total-steps
    20) (integer) 64160
    21) training-total-seconds
    22) 0.29
    23) dataset-error
    24) "0.1264141065389438"
    25) test-error
    26) "0.13803731074639586"
    27) classification-errors-perc
    28) 19.00
    29) overfitting-detected
    30) yes

The network was trained for 0.29 seconds. At the end of the training,
which was stopped because of overfitting, the error rate on the testing
dataset was 19%.

You can also specify to train for a given amount of seconds or cycles.
For now we just used the `AUTOSTOP` feature since it is simpler; we'll
dig into more details in the next section.
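A side note: `NR.TRAIN` returns immediately, so a client that wants to
block until the training thread is done can poll the `training` field of
`NR.INFO`. A small sketch, assuming the flat field/value reply layout
shown above and the call style of the repo's Ruby examples:

```
require 'redis'

# Block until the background training of `key` is over, by polling
# the `training` field of NR.INFO (1 while a trainer thread runs).
def wait_for_training(r, key)
    loop do
        info = r.send('nr.info', key).each_slice(2).to_h
        break if info['training'].to_i == 0
        sleep(0.05)
    end
end

r = Redis.new
r.send('nr.train', :mynet, 'AUTOSTOP')
wait_for_training(r, :mynet)
```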
We can now show the output of the Ruby program after its full execution:

    47 prediction errors on 290 items

That does not look too bad, considering how simple our model is and the
fact that we trained with just 401 data points. Modeling just on the
percentage of people that survived vs the ones that died, we would
mispredict more than 100 passengers.

We can also play with a few variables interactively in order to check
which inputs make a difference according to our trained
neural network.

Let's start by asking the probable outcome for a woman, 30 years old,
first class, without siblings and parents:

    > NR.RUN mynet 1 0 0 0 1 30 0 0 200
    1) "0.093071778873849084"
    2) "0.90242156736283008"

The network is positive she survived, with 90% probability.
What if she is a lot older than 30 years old, let's say 70?

    > NR.RUN mynet 1 0 0 0 1 70 0 0 200
    1) "0.11650946245068818"
    2) "0.88784839170875851"

This lowers her probability to 88.7%.
And if she traveled in third class with a very cheap ticket?

    > NR.RUN mynet 0 0 1 0 1 70 0 0 20
    1) "0.53693405013043127"
    2) "0.51547605838387811"

This time it is 50% and 50%... flip a coin.

The gist of this example is that many problems you face as a developer,
in order to optimize your application and make better choices in the
interaction with your users, are Titanic problems: not in their
size, but in the fact that a simple model can "solve" them.

Overfitting detection and training tricks
===

One thing that makes neural networks hard to use in an interactive
way like the one proposed in this Redis module is, for sure,
overfitting. If you train too much, the neural network ends up being
like that student who can tell you the exact words of the
lesson, but who, if you ask a more generic question about the topic,
just wanders and can't reply.

So the `NR.TRAIN` command's `AUTOSTOP` option attempts to detect
overfitting and stop the training before it's too late. How is this
performed? Well, the current solution is pretty trivial: as the training
happens, we compare the current error of the neural network between
the training dataset and the testing dataset.

When overfitting kicks in, usually what we see is that the network error
rate gets lower and lower on the training dataset, but instead
of also decreasing on the testing dataset, it inverts the tendency and
starts to grow. Detecting this turning point is not simple, for two
reasons:

1. The error may fluctuate as the network learns.
2. The network error may just go higher on the testing dataset because the learning is trapped in a *local minimum*, but then a better solution may be found.
So while `AUTOSTOP` kinda does what it advertises (but I'll work on
improving it in the future, and there are neural network experts who
know much better than me and can submit a kind pull request :-), there
are also ways to manually train the network, and to see how the error
changes with training.

For instance, this is the error rate in the Titanic dataset after
the automatic stop:

    21) training-total-seconds
    22) 0.17
    23) dataset-error
    24) "0.13170509045457734"
    25) test-error
    26) "0.13433443241900492"
    27) classification-errors-perc
    28) 18.50

We can use the `MAXTIME` and `MAXCYCLES` options in order to train for
a specific amount of time (note that these options are also applicable
when `AUTOSTOP` is specified). Normally `MAXTIME` is set to 10000
milliseconds, that is, 10 seconds of total training before killing the
training thread. Let's train our network for 30 seconds, without auto stop:

    > NR.TRAIN mynet MAXTIME 30000
    Training has started

As a side note, while one or more trainings are in progress, we can
list them:

    > NR.THREADS
    1) nn_id=9 key=mynet db=0 maxtime=30000 maxcycles=0

After the training stops, let's show the info again:

    21) training-total-seconds
    22) 30.17
    23) dataset-error
    24) "0.0674554189303056"
    25) test-error
    26) "0.20468644603795394"
    27) classification-errors-perc
    28) 21.50

You can see that our network overtrained: the error rate on the training
dataset is now lower, at 0.06, but the error on data it never saw before,
that is the testing dataset, is greater, at 0.20!

And indeed, it now misclassifies 21.5% of the entries instead of 18.5%.

However it's not always like that, so testing things manually is a good
idea when working on machine learning experiments, especially with this
module, which is experimental.

An interesting example is the `iris.rb` program inside the `examples`
directory: it will load the `Iris.csv` dataset into Redis, which is
a very popular dataset with three variants of Iris flowers and their
sepal and petal features. If you run the program, the percentage of
entries classified in a wrong way will be 4%, however if you train the
network for a few more cycles with:

    NR.TRAIN iris MAXCYCLES 100

you'll see that the error will often drop to 2%.
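This kind of incremental training is easy to script. Here is a sketch
that trains in short bursts and prints both errors after each burst,
assuming the `iris` network loaded by `iris.rb` and the call style of the
repo's Ruby examples:

```
require 'redis'

r = Redis.new

# Train in short bursts and print both errors after each burst,
# to see when the test error stops improving.
10.times do
    r.send('nr.train', :iris, 'MAXCYCLES', 100)
    info = nil
    loop do # wait for the trainer thread to finish
        info = r.send('nr.info', :iris).each_slice(2).to_h
        break if info['training'].to_i == 0
        sleep(0.05)
    end
    puts "dataset-error=#{info['dataset-error']} test-error=#{info['test-error']}"
end
```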
Better overfitting detection with the BACKTRACK option
===

When using `AUTOSTOP`, there is an additional option that can be specified
(it has no effect alone): `BACKTRACK`. When backtracking is
enabled, while the network is trained, every time there is some hint
that the network may be starting to overfit, the current version of the
network is saved. At the end of the training, if the saved network is
better (has a smaller error) compared to the current one, it is used
instead of the final version of the trained network.

This avoids certain pathological runs where `AUTOSTOP` is used but
overfitting is not detected. However, it adds running time, since we
need to clone the NN from time to time during the training.

For example, using `BACKTRACK` with the Iris dataset (see the `iris.rb`
file inside the examples directory) the network never overtrains, while
without it about 2% of the runs may overtrain.

A more complex non linear classification example
===

The Titanic example is surely more interesting, however it is possible
that most relations between its inputs and outputs are linear, so we'll
now try a non linear classification task, just for the sake of showing
the capabilities of a small neural network.

In the examples directory of this source distribution there is an example
called `circles.rb`; we'll use it as a reference.

We'll set up a classification problem where the neural network
will be asked to classify two inputs, which are from our point of
view two coordinates in a 2D space, into three different classes:
0, 1 and 2.

While the neural network does not know this, we'll generate the data
so that the different classes actually map to three different circles
in the 2D space; the circles also intersect. The function
that generates the dataset is the following:

```
    point_class = rand(3) # Class will be 0, 1 or 2
    if point_class == 0
        x = Math.sin(k)/2+rand()/10;
        y = Math.cos(k)/2+rand()/10;
    elsif point_class == 1
        x = Math.sin(k)/3+0.4+rand()/8;
        y = Math.cos(k)/4+0.4+rand()/6;
    else
        x = Math.sin(k)/3-0.5+rand()/30;
        y = Math.cos(k)/3+rand()/40;
    end
```

The basic trigonometric functions:

    x = Math.sin(k)
    y = Math.cos(k)

with `k` going from 0 to 2*PI, describe a circle, so the above functions
are just circles, plus the `rand()` calls in order to introduce noise.
If I trace the above three classes of points graphically
with [load81](https://github.com/antirez/load81), I obtain the
following image:

![Circles plotted with LOAD81](https://oss.gittoolsai.com/images/antirez_neural-redis_readme_a7e7db23efc1.png)

The program `circles.rb` will generate the same set of points and
will feed them into a neural network configured to accept 2 inputs
and output one of three possible classes.

After about 2 seconds of training, we try to visualize what the neural
network has learned (this is also part of `circles.rb`) in this way: for
each point in an `80x80` grid, we ask the network to classify the point.
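In Ruby, that visualization loop looks roughly like the following sketch
(the grid range, the key name and the class-to-character mapping are my
assumptions here; `circles.rb` may differ in the details):

```
require 'redis'

r = Redis.new
chars = ['/', '.', 'O'] # one character per class (assumed mapping)

80.times do |gy|
    row = ''
    80.times do |gx|
        x = gx / 40.0 - 1.0 # map the grid cell to the [-1, 1) range
        y = gy / 40.0 - 1.0
        row << chars[r.send('nr.class', :mynet, x, y).to_i]
    end
    puts row
end
```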
This is the ASCII-art result:

    [80x80 ASCII grid: three regions drawn with "/", "." and "O"
    characters, one per class, roughly matching the three circles]

As you can see, while the problem had no linear solution, the neural
network was able to split the 2D space into areas, with *holes* where the
circle areas intersect, and thinner surfaces where the circles actually
cross each other (at the intersection between the two circumferences there
are points of two different classes).
This example was perhaps not practical, but it shows well the power of a
neural network on non linear tasks.

Case study: sentiment analysis
===

Neural Redis is not the right tool for advanced NLP tasks, and for
sentiment analysis, which is a very hard problem, there are RNNs and other
more complex tools that can provide state of the art results.

However, exactly for the same reason, sentiment analysis is a very good
example to show how to model problems, and how even the simplest of
intuitions can allow Neural Redis to handle problems in a decent way
(even if far from the top specialized systems) after a training of
5 minutes or so.

This case study is based on the source code inside the examples directory
called `sentiment.rb`. It uses a very popular dataset used for sentiment
analysis benchmarking, composed of 2000 movie reviews, 1000 of which are
positive, and 1000 negative.

The reviews are like the following:

    It must be some sort of warped critical nightmare: the best movie of
    the year would be a summer vehicle, a jim carrey vehicle at that.
    And so it is. The truman show is the most perplexing, crazed, paranoid
    and rib-tickling morality play i've seen since i-don't-know-when.

Normally we should try to do the best we can in order to pre-process
the data, but we are lazy dogs, so we don't do anything at all.
However we still need to map our inputs and outputs to meaningful
parameters. For the outputs it's trivial, it is a categorization task:
*negative or positive*. But how do we map words to inputs?

Normally you assign different words different IDs, and then use such
IDs as indexes. This creates two problems in our case:

1. We need to select a vocabulary. Usually this is done in a pre-processing stage where we potentially examine a large non-annotated corpus of text. But remember that we are lazy?
2. "Very good" and "not good" have very different meanings: we can't stop at single words, otherwise our result is likely to be disappointing.

So I did the following. Let's say our network is composed of 3000 inputs,
100 hidden units, and 2 outputs for the classification.

We split the inputs into two halves: the first 1500 inputs just take the
single words; the other 1500 inputs we use for combinations of two words.
What I did was to just use hashing to map the words in the text to the
input units:

    INDEX_1 = HASH(word) % 1500
    INDEX_2 = 1500 + (HASH(word + next_word) % 1500)

This is a bit crazy, and I'm curious to know whether it's something people
have tried in the past: different words and different combinations of
words will hash to the same inputs, so we'll get somewhat less precise
results; however it is unlikely that words *highly polarized* in opposite
directions (positive vs negative) will hash to the same bucket, if we use
enough inputs.

So each single word and combination of words is a "vote" on its input
unit. As we scan the sentences to give the votes, we also sum all the
single votes we gave, so that we can finally normalize and make sure all
our inputs sum to "1". This way the sentiment analysis does not
depend on the length of the sentence.
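Here is a sketch of this feature hashing in Ruby, with `String#hash`
standing in for `HASH` and a naive tokenization; `sentiment.rb` may differ
in the details:

```
# Map a review to 3000 inputs: single words hash into the first 1500
# units, two-word combinations into the other 1500, and the vector is
# normalized so that all the inputs sum to 1.
def vectorize(text)
    words = text.downcase.scan(/[a-z']+/)
    v = Array.new(3000, 0.0)
    words.each_with_index do |w, i|
        v[w.hash % 1500] += 1
        v[1500 + (w + words[i+1]).hash % 1500] += 1 if words[i+1]
    end
    sum = v.sum
    sum > 0 ? v.map { |x| x / sum } : v
end
```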
While this approach is very simple, it works, and in a matter of seconds
produces an NN that can score 80% on the 2000 movies dataset. I just spent
a couple of hours on it; it's probably possible to do much better with
a more advanced scheme. However the gist of this use case is: be creative
when trying to map your data to the neural network.

If you run `sentiment.rb` you'll see the network quickly converging,
and at the end you'll be able to type sentences that the NN will
classify as positive or negative:

    nn_id=7 cycle=61 key=sentiment ... classerr=21.500000
    nn_id=7 cycle=62 key=sentiment ... classerr=20.333334

    Best net so far can predict sentiment polarity 78.17 of times

    Imagine and type a film review sentence:

    > WTF this movie was terrible
    Negativity: 0.99966669082641602
    Positivity: 0.00037576013710349798

    > Good one
    Negativity: 0.28475716710090637
    Positivity: 0.73368257284164429

    > This is a masterpiece
    Negativity: 2.219095662781001e-08
    Positivity: 0.99999994039535522

Of course you'll find a number of sentences that the net classifies
in the wrong way... However, the longer the sentence you type, and the
more similar it is to an actual movie review, the more likely it is that
the network predicts it correctly.

API reference
===

Not all the options of all the commands are covered in the above tutorial,
so here is a small reference of all the commands supported by this
extension and their options.

### NR.CREATE key [CLASSIFIER|REGRESSOR] inputs [hidden-layer-units ...] -> outputs [NORMALIZE] [DATASET maxlen] [TEST maxlen]

Creates a new neural network if the target key is empty, otherwise returns
an error.

* key - The key name holding the neural network.
* CLASSIFIER or REGRESSOR - The network type; read this tutorial for more info.
* inputs - Number of input units.
* hidden-layer-units - Zero or more arguments indicating the number of hidden units, one number for each layer.
* outputs - Number of output units.
* NORMALIZE - Specify this if you want the network to normalize your inputs. Use it if you don't know what we are talking about.
* DATASET maxlen - Max number of data samples in the training dataset.
* TEST maxlen - Max number of data samples in the testing dataset.

Example:

    NR.CREATE mynet CLASSIFIER 64 100 -> 10 NORMALIZE DATASET 1000 TEST 500

### NR.OBSERVE key i0 i1 i2 i3 i4 ... iN -> o0 o1 o2 ... oN [TRAIN|TEST]

Adds a data sample to the training or testing dataset (if specified as the
last argument), or evenly to one or the other, according to their
respective sizes, if no target is specified.

For neural networks of type CLASSIFIER the output must be a single number,
in the range from 0 to `number-of-outputs - 1`. It's up to the network to
translate the class ID into a set of zeros and ones.

The command returns the number of data samples inside the training and
testing datasets. If the target dataset is already full, a random entry is
evicted and substituted with the new data.

### NR.RUN key i0 i1 i2 i3 i4 ... iN

Runs the network stored at key, returning an array of outputs.

### NR.CLASS key i0 i1 i2 i3 i4 ... iN

Like `NR.RUN`, but can be used only with NNs of type CLASSIFIER. Instead
of outputting the raw neural network outputs, the command returns the
output class directly, that is, the index of the output with the greatest
value.

### NR.TRAIN key [MAXCYCLES count] [MAXTIME milliseconds] [AUTOSTOP] [BACKTRACK]

Trains a network in a background thread.
When the training finishes,
it automatically updates the weights of the trained network with the
new ones and updates the training statistics.

The command works on a copy of the network, so it is possible to
use the network while it is undergoing training.

If AUTOSTOP is not specified, the network is trained until the maximum
number of cycles or milliseconds is reached. If no maximum number of
cycles is specified, there is no cycle limit. If no milliseconds are
specified, the limit is set to 10000 milliseconds (10 seconds).

If AUTOSTOP is specified, the training will still stop when the maximum
number of cycles or milliseconds is reached, but it will also try to stop
the training if overfitting is detected. Check the previous sections for
a description of the (still naive) algorithm the implementation uses in
order to stop.

If BACKTRACK is specified together with AUTOSTOP, while the network
is trained the trainer thread saves a copy of the neural network every
time it has a better score compared to the previously saved one and there
are hints suggesting that overfitting may happen soon. This saved network
is used at the end of the training if it is found to have a smaller error
than the final one.

### NR.INFO key

Shows a lot of internal information about the neural network. Just try
it :-)

### NR.THREADS

Shows all the active training threads.

### NR.RESET key

Sets the neural network weights to random ones (that is, the network will
completely unlearn what it learned so far) and resets the training
statistics. However the datasets are not touched at all. This is useful
when you want to retrain a network from scratch.

Contributing
===

The main aim of Neural Redis, which is currently just a 48h personal
hackathon, is to show the potential of an accessible API
that provides a simple to use machine learning tool which can be used
and trained interactively.

However, the neural network implementation can surely be improved in
different ways, so if you are an expert in this field, feel free to submit
changes or ideas. One thing that I want to retain is the simplicity of the
outer layer: the API. However the techniques used in the internals can be
more complex in order to improve the results.

Note that, given the API exported, the implementation of the neural
network should be designed, more than to be state of the art in solving a
specific problem, to work well enough in a large set of conditions.
While the current fully connected network has its limits,
together with RPROP learning it shows to be quite resistant to misuse.
An improved version should be able to retain, and extend, this quality.
The simplest way to guarantee this is to have a set of benchmarks of
different types using open datasets, and to score different
implementations against them.

Plans
===

* Better overfitting detection.
* Implement RNNs with a simpler to use API.
* Use a different loss function for classification NNs.
* Get some ML expert who is sensitive to simple APIs involved.

Have fun with machine learning,

Salvatore
此时，我们需要开始训练神经网络，让它学习：\n\n    > NR.TRAIN net AUTOSTOP\n\n`NR.TRAIN` 命令会启动一个训练线程。`AUTOSTOP` 选项表示我们希望在出现过拟合之前停止训练。\n\n你可以使用 `NR.INFO` 命令查看网络是否仍在训练中。不过在这个例子中，网络只需要几毫秒就能完成训练，因此我们可以立即测试它是否真的学会了如何将两个数字相加：\n\n    > NR.RUN net 1 1\n    1) \"2.0776522297040843\"\n\n    > NR.RUN net 3 2\n    1) \"5.1765427204933099\"\n\n嗯，大致上是能工作的。现在我们来看看一些内部信息：\n\n    > NR.INFO net\n     1) id\n     2) (integer) 1\n     3) type\n     4) regressor\n     5) auto-normalization\n     6) (integer) 1\n     7) training\n     8) (integer) 0\n     9) layout\n    10) 1) (integer) 2\n        2) (integer) 3\n        3) (integer) 1\n    11) training-dataset-maxlen\n    12) (integer) 50\n    13) training-dataset-len\n    14) (integer) 6\n    15) test-dataset-maxlen\n    16) (integer) 10\n    17) test-dataset-len\n    18) (integer) 2\n    19) training-total-steps\n    20) (integer) 1344\n    21) training-total-seconds\n    22) 0.00\n    23) dataset-error\n    24) \"7.5369825612397299e-05\"\n    25) test-error\n    26) \"0.00042670663615723583\"\n    27) classification-errors-perc\n    28) 0.00\n\n如你所见，我们有6个训练数据集条目和2个测试条目。我们在创建网络时配置了分别可容纳50和10个条目的空间。当你使用`NR.OBSERVE`添加条目时，网络会根据各自数据集的大小比例，将条目均匀地分配到两个数据集中。当数据集满时，随机选出的旧条目会被新条目替换。\n\n我们还可以看到，网络经过1344步训练，耗时0秒（实际上只有几毫秒）。每一步都是用单个数据条目进行的训练，因此相同的6个条目总共被呈现给网络224次（1344 ÷ 6 = 224）。\n\n关于归一化的几点说明\n===\n\n如果我们尝试用超出网络学习范围的值来使用我们的网络，就会发现它会失效：\n\n    > NR.RUN net 10 10\n    1) \"12.855978185382257\"\n\n这是因为自动归一化会以训练数据集中观察到的最大值为基准。所以如果你打算使用自动归一化功能，务必向网络展示不同范围的样本，包括未来你希望网络处理的数据的最大值。\n\n分类任务\n===\n\n回归是在训练数据集中通过已知的输入和输出来近似一个函数。而分类则是给定一组代表“某种事物”的输入，将其标记为一组固定标签中的某一类的任务。\n\n例如，输入可以是希腊陶罐的特征，而分类输出可能是以下三种陶罐类型之一：\n\n* 类型0：A型基利克斯杯\n* 类型1：B型基利克斯杯\n* 类型2：卡塞尔杯\n\n作为程序员，你可能会认为输出类别只是一个单一的数字。然而，神经网络并不适合这样工作。比如，用0到0.33之间的输出表示类型0，0.33到0.66之间表示类型1，最后0.66到1之间表示类型2，这样的方式根本无法很好地完成任务。\n\n正确的做法是使用三个独立的输出，其中两个始终设为0，只有一个设为1，对应于输出所代表的类型，即：\n\n* 类型0：[1, 0, 0]\n* 类型1：[0, 1, 0]\n* 类型2：[0, 0, 1]\n\n当你使用`NR.CREATE`命令创建神经网络，并将第二个参数设置为`CLASSIFIER`而不是`REGRESSOR`时，Neural Redis会为你完成上述转换。因此，当你使用`NR.OBSERVE`训练网络时，只需将输出设置为0、1或2即可。\n\n当然，你需要以这种方式创建具有三个输出的网络：\n\n    > NR.CREATE mynet CLASSIFIER 5 10 -> 3\n    (integer) 93\n\n目前我们的网络尚未训练，但已经可以运行，尽管它给出的回复完全是随机的：\n\n    > NR.RUN mynet 0 1 1 0 1\n    1) \"0.50930603602918945\"\n    2) \"0.48879876200255651\"\n    3) \"0.49534453421381375\"\n\n正如你所见，网络“投票”选择了类型0，因为第一个输出大于其他输出。Neural Redis提供了一个命令，可以帮你省去在客户端寻找最大输出并将其解释为0到2之间数字的工作。这个命令与`NR.RUN`相同，但直接输出类别ID，称为`NR.CLASS`：\n\n    > NR.CLASS mynet 0 1 1 0 1\n    (integer) 0\n\n不过请注意，`NR.RUN`在分类问题中也很有用。例如，一个博客平台可能希望训练一个神经网络，根据刚刚收集到的注册数据（包括国家、性别、年龄和博客类别）预测用户最感兴趣的模板。\n\n虽然网络的预测结果将是具有最高值的输出，但如果我们想展示不同的模板，那么在列表中按输出值从高到低依次展示第二、第三等模板就很有意义了。\n\n在进入实际的分类示例之前，还有一点需要说明。CLASSIFIER类型的网络在训练时也有不同的方式：你不需要像前面描述的那样提供一组由0和1组成的输出，而是可以直接将类别ID作为数字传递给`NR.OBSERVE`。例如，在陶罐的例子中，我们不需要写`NR.OBSERVE mynet 1 0.4 .2 0 1 -> 0 0 1`来指定提供的数据样本属于第三类，而只需写：\n\n    > NR.OBSERVE mynet 1 0.4 .2 0 1 -> 2\n\n这里的“2”会被自动转换为“0 0 1”，就像“1”会被转换为“0 1 0”一样。\n\n实际示例：泰坦尼克号数据集\n===\n\n[Kaggle.com](https:\u002F\u002Fwww.kaggle.com\u002F) 正在举办一场机器学习竞赛。他们使用的其中一个数据集是泰坦尼克号乘客的信息，包括他们的船票等级、票价、亲属数量、年龄、性别以及其他信息，还有他们在泰坦尼克号沉没事件中是否幸存。\n\n你可以在这个Github仓库的`examples`目录中找到代码以及包含891条记录的简化数据集的CSV文件。\n\n在这个例子中，我们将尝试根据几个输入变量来预测某个人是否会幸存，因此这是一个分类任务，我们需要用两种不同的标签来标记人员：幸存或死亡。\n\n这个问题与许多 Web 应用中的问题非常相似：根据用户行为及其他历史数据给用户打标签，或预测他们接下来的反应，而后者甚至更令人担忧（提示：机器学习的关键就在于数据的收集……）。\n\nCSV文件中包含了每位乘客的大量信息，但为了简化示例，我们只使用以下字段：\n\n* 船票等级（1等、2等、3等）\n* 性别\n* 年龄\n* Sibsp（船上兄弟姐妹及配偶的数量）\n* Parch（船上父母及子女的数量）\n* 票价\n\n如果这些输入变量与是否生还之间存在相关性，那么我们的神经网络就应该能够找到这种关系。\n\n需要注意的是，虽然我们有六个输入，但实际上我们需要一个总共有九个输入的网络，因为性别和船票等级实际上是“输入类别”，所以我们需要像处理输出那样，在输入端也进行类似的处理。每个输入都会指示乘客是否属于某个可能的类别。这就是我们的九个输入（顺序与下文 Ruby 代码中构造 inputs 的顺序一致）：\n\n* 是否乘坐头等舱？（0 或 1）。\n* 是否乘坐二等舱？（0 或 1）。\n* 是否乘坐三等舱？（0 或 1）。\n* 是否为男性？（0 或 1）。\n* 是否为女性？（0 或 1）。\n* 年龄。\n* 兄弟姐妹\u002F配偶的数量。\n* 父母\u002F子女的数量。\n* 票价。\n\n
我们手头有不到900名乘客的数据（这里使用的是一个精简的数据集），但我们希望在应用端保留约290条记录用于验证，而完全不将它们发送到Redis中。\n\n神经网络也会使用一部分数据集来进行验证，因为我计划利用自动停止训练的功能来检测过拟合现象。\n\n这样的网络可以通过以下命令创建：\n\n    > NR.CREATE mynet CLASSIFIER 9 15 -> 2 DATASET 1000 TEST 500 NORMALIZE\n\n另外需要注意的是，我们使用的是一个单隐藏层的神经网络（对于刚接触神经网络的朋友来说，输入层和输出层之间的那一层就称为隐藏层）。这个隐藏层包含15个单元。尽管这仍然是一个规模较小的网络，但考虑到数据量以及数据中可能存在的相关性，我们认为这已经足够了。当然，我们也可以尝试不同的参数配置，并且我计划实现一个`NR.CONFIGURE`命令，以便能够动态地调整这些设置。\n\n此外，由于我们设定的测试数据集最大容量是训练数据集的一半（1000条对比500条），`NR.OBSERVE`命令会自动将三分之一的记录放入测试数据集中。\n\n如果你查看源代码分发包中实现这个示例的Ruby程序，就会发现数据是如何直接原样输入到网络中的，因为我们要求了自动归一化功能：\n\n```\ndef feed_data(r,dataset,mode)\n    errors = 0\n    dataset.each{|d|\n        pclass = [0,0,0]\n        pclass[d[:pclass]-1] = 1\n        inputs = pclass +\n                 [d[:male],d[:female]] +\n                 [d[:age],\n                  d[:sibsp],\n                  d[:parch],\n                  d[:fare]]\n        outputs = d[:survived]\n        if mode == :observe\n            r.send('nr.observe',:mynet,*inputs,'->',outputs)\n        elsif mode == :test\n            res = r.send('nr.class',:mynet,*inputs)\n            if res != outputs\n                errors += 1\n            end\n        end\n    }\n    if mode == :test\n        puts \"#{errors} prediction errors on #{dataset.length} items\"\n    end\nend\n```\n\n该函数既可以用来输入数据，也可以用来评估错误率。\n\n当我们从数据集中加载601条记录后，在尚未开始任何训练之前，执行`NR.INFO`命令的输出将会是这样的：\n\n    > NR.INFO mynet\n     1) id\n     2) (integer) 5\n     3) type\n     4) classifier\n     5) auto-normalization\n     6) (integer) 1\n     7) training\n     8) (integer) 0\n     9) layout\n    10) 1) (integer) 9\n        2) (integer) 15\n        3) (integer) 2\n    11) training-dataset-maxlen\n    12) (integer) 1000\n    13) training-dataset-len\n    14) (integer) 401\n    15) test-dataset-maxlen\n    16) (integer) 500\n    17) test-dataset-len\n    18) (integer) 200\n    19) training-total-steps\n    20) (integer) 0\n    21) training-total-seconds\n    22) 0.00\n    23) dataset-error\n    24) \"0\"\n    25) test-error\n    26) \"0\"\n    27) classification-errors-perc\n    28) 0.00\n    29) overfitting-detected\n    30) no\n\n正如预期的那样，我们有401条训练数据和200条测试数据。请注意，对于被声明为分类器的网络，其信息输出中会多出一项`classification-errors-perc`字段。一旦我们对网络进行训练，这一字段就会显示测试数据集中被神经网络错误分类的样本所占的百分比（范围从0%到100%）。现在是时候开始训练我们的网络了：\n\n    > NR.TRAIN mynet AUTOSTOP\n    Training has started\n\n训练完成后再次检查`NR.INFO`的输出，我们会发现一些有趣的信息（仅摘录相关部分）：\n\n    19) training-total-steps\n    20) (integer) 64160\n    21) training-total-seconds\n    22) 0.29\n    23) dataset-error\n    24) \"0.1264141065389438\"\n    25) test-error\n    26) \"0.13803731074639586\"\n    27) classification-errors-perc\n    28) 19.00\n    29) overfitting-detected\n    30) yes\n\n该网络总共训练了0.29秒。在因过拟合而自动停止训练时，测试数据集上的分类错误率为19%。\n\n你也可以指定按照固定的秒数或迭代次数来训练网络。目前我们只是简单地使用了`AUTOSTOP`功能，因为它更为便捷。不过，我们将在下一节中更深入地探讨相关内容。\n\n现在我们可以展示Ruby程序完整执行后的输出：\n\n    47 prediction errors on 290 items\n\n考虑到我们的模型非常简单，而且只用401个数据点进行了训练，这样的结果还算不错（错误率约16%）。如果仅仅根据幸存者与遇难者的总体比例来猜测，我们可能会误判超过100名乘客。\n\n我们还可以通过交互式地调整几个变量，来查看哪些输入因素会对经过训练的神经网络产生显著影响。\n\n让我们先来预测一位30岁、乘坐头等舱、没有兄弟姐妹和父母随行的女性的生存概率：\n\n    > NR.RUN mynet 1 0 0 0 1 30 0 0 200\n    1) \"0.093071778873849084\"\n    2) \"0.90242156736283008\"\n\n网络认为她有90%的概率会存活。那么如果她年纪更大一些，比如70岁呢？\n\n    > NR.RUN mynet 1 0 0 0 1 70 0 0 200\n    1) \"0.11650946245068818\"\n    2) \"0.88784839170875851\"\n\n这样一来，她的生存概率就降到了88.7%。如果她乘坐的是三等舱，而且票价非常便宜呢？\n\n
\"0.51547605838387811\"\n\n这次的结果变成了五五开……只能靠抛硬币来决定了。\n\n这个示例的核心在于：作为开发者，你在优化应用程序、改善与用户交互体验时所面临的许多问题，其实都类似于“泰坦尼克号”问题——并不是因为问题本身的复杂性，而是因为一个简单的模型就能“解决”它们。\n\n过拟合检测与训练技巧\n===\n\n让神经网络难以像本Redis模块中所建议的那样以交互方式使用的因素之一，无疑是过拟合现象。如果你过度训练，神经网络就会变得像那种能准确复述课文所有内容的学生一样；然而，当被问及关于文章主旨的更概括性问题时，她或他却茫然不知所措，无法作答。\n\n因此，`NR.TRAIN`命令中的`AUTOSTOP`选项旨在检测过拟合情况，从而在为时已晚之前停止训练。它是如何实现这一点的呢？目前的解决方案相当简单：在训练过程中，我们会持续比较神经网络在训练数据集和测试数据集上的误差。\n\n通常情况下，当过拟合开始发生时，我们会看到训练数据集上的误差率不断下降，而测试数据集上的误差率却不降反升，甚至开始上升。要准确捕捉到这个转折点并不容易，主要有两个原因：\n\n1. 随着网络的学习，误差可能会波动。\n2. 在测试数据集上，网络的误差可能只会更高，因为学习过程陷入了*局部最小值*，但随后可能会找到更好的解决方案。\n\n因此，虽然`AUTOSTOP`大致实现了其宣传的效果（不过我将来会继续改进它；而且确实有一些神经网络专家比我更懂行，他们完全可以提交一个优秀的拉取请求 :-)），我们也可以手动训练网络，并观察误差随训练的变化情况。\n\n例如，这是在泰坦尼克号数据集上自动停止训练后的错误率：\n\n    21) training-total-seconds\n    22) 0.17\n    23) dataset-error\n    24) \"0.13170509045457734\"\n    25) test-error\n    26) \"0.13433443241900492\"\n    27) classification-errors-perc\n    28) 18.50\n\n我们可以使用`MAXTIME`和`MAXCYCLES`选项来指定训练的时间长度（需要注意的是，即使指定了`AUTOSTOP`，这些选项仍然适用）。通常情况下，`MAXTIME`被设置为10000毫秒，也就是总共10秒钟后就会终止训练线程。现在让我们不启用自动停止功能，而是将网络训练30秒钟。\n\n    > NR.TRAIN mynet MAXTIME 30000\n    Training has started\n\n顺带一提，在一个或多个训练正在进行时，我们可以列出它们的状态：\n\n    > NR.THREADS\n    1) nn_id=9 key=mynet db=0 maxtime=30000 maxcycles=0\n\n训练结束后，我们再次查看相关信息：\n\n    21) training-total-seconds\n    22) 30.17\n    23) dataset-error\n    24) \"0.0674554189303056\"\n    25) test-error\n    26) \"0.20468644603795394\"\n    27) classification-errors-perc\n    28) 21.50\n\n你可以看到，我们的网络出现了过拟合：训练集上的错误率现在降到了0.06。然而，对于从未见过的数据——即测试集——其表现却变得更差了，错误率上升到了0.20！\n\n实际上，网络对21%的样本分类错误，而之前只有18.50%。\n\n不过这种情况并不总是如此，因此在进行机器学习实验时，手动测试是个不错的想法，尤其是在使用这个尚处于实验阶段的模块时。\n\n一个有趣的例子是`examples`目录下的`iris.rb`程序：它会将著名的鸢尾花数据集`Iris.csv`加载到Redis中。该数据集包含了三种不同类型的鸢尾花，每种都有各自的萼片和花瓣特征。如果你运行这个程序，分类错误率大约为4%。但是，如果你再让网络多训练几个周期：\n\n    NR.TRAIN iris MAXCYCLES 100\n\n你会发现，错误率往往会降到2%。\n\n使用BACKTRACK选项更好地检测过拟合\n===\n\n在使用`AUTOSTOP`时，还有一个可以额外指定的选项（单独使用并无效果），那就是`BACKTRACK`。当启用回溯功能时，网络在训练过程中，每当出现可能开始过拟合的迹象时，当前的网络版本都会被保存下来。训练结束时，如果保存下来的网络比当前的网络更好（即具有更小的误差），那么就会用保存的版本来代替最终的训练结果。\n\n这样可以避免在使用`AUTOSTOP`时，由于未能检测到过拟合而导致的一些异常情况。不过，这也增加了运行时间，因为在训练过程中需要不时地复制神经网络。\n\n例如，在使用`BACKTRACK`处理鸢尾花数据集时（参见`examples`目录下的`iris.rb`文件），几乎不会发生过拟合；而在没有启用该选项的情况下，大约有2%的运行可能会出现过拟合。\n\n一个更复杂的非线性分类示例\n===\n\n泰坦尼克号的例子当然很有意思，但其中输入与输出之间的关系很可能主要是线性的。因此，我们现在尝试一个非线性的分类任务，只是为了展示小型神经网络的能力。\n\n在这个源代码发行版的`examples`目录中，有一个名为`circles.rb`的示例，我们将以此作为参考。\n\n我们将设置一个分类问题：要求神经网络根据两个输入（从我们的角度来看，它们是二维空间中的两个坐标）将其分为三类：0、1和2。\n\n尽管神经网络并不知道这一点，我们会生成数据，使得不同的类别实际上对应于二维空间中的三个圆圈，且这些圆圈之间存在交叠。生成数据的函数如下：\n\n```\n    point_class = rand(3) # 类别为0、1或2\n    if point_class == 0\n        x = Math.sin(k)\u002F2+rand()\u002F10;\n        y = Math.cos(k)\u002F2+rand()\u002F10;\n    elsif point_class == 1\n        x = Math.sin(k)\u002F3+0.4+rand()\u002F8;\n        y = Math.cos(k)\u002F4+0.4+rand()\u002F6;\n    else\n        x = Math.sin(k)\u002F3-0.5+rand()\u002F30;\n        y = Math.cos(k)\u002F3+rand()\u002F40;\n    end\n```\n\n基本的三角函数：\n\n    x = Math.sin(k)\n    y = 
Math.cos(k)\n\n当`k`从0变化到2π时，就描绘出一个圆。上述函数只是在基本圆的基础上加上了一些随机噪声。如果我用[load81](https:\u002F\u002Fgithub.com\u002Fantirez\u002Fload81)以图形方式绘制这三类点，就会得到如下图像：\n\n![用LOAD81绘制的圆圈](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fantirez_neural-redis_readme_a7e7db23efc1.png)\n\n接下来，`circles.rb`程序会生成同样的点集，并将其输入到配置为接受2个输入、输出3个类别之一的神经网络中。\n\n经过大约2秒钟的训练后，我们尝试可视化神经网络所学到的内容（这也是`circles.rb`命令的一部分）：对于一个80×80的网格中的每个点，我们都让网络对其进行分类。以下是ASCII艺术形式的结果：\n\n```\n................................................................................\n................................................................................\n................................................................................\n................................................................................\n................................................................................\n................................................................................\n................................................................................\n................................................................................\n\u002F...............................................................................\n\u002F\u002F\u002F.............................................................................\n\u002F\u002F\u002F\u002F............................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F..........................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F.........................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.......................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F......................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F....................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.................................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...............................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..............................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.............................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F............................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................................................\n\u002F\u
002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F........................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.......................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F......................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.....................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F....\u002F\u002F\u002F\u002F\u002F\u002F\u002F.................................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F........\u002F\u002F\u002F\u002F\u002F\u002F...............................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.........\u002F\u002F\u002F\u002F\u002F\u002F\u002F.............................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F............\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F............\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.............\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.............\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F............\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F............\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F............\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..................
........................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.........................................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..OOOOOOOOO..............................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOO..........................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOO.....................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOOOOOOO.................\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F........\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO..\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F........\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F......\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOOOOOOOOOOOOO..OOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F......\u002F\u002F\u002F\u002F\u002F\u002F..OOOOOOOOOOOOOOOOOOOOOOO......OOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F....\u002F\u002F\u002F\u002F\u002F\u002F...OOOOOOOOOOOOOOOOOOOOOO........OOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F......OOOOOOOOOOOOOOOOOOOO.........OOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F........OOOOOOOOOOOOOOOOOOO.........OOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F...........OOOOOOOOOOOOOOOOO.........OOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..............OOOOOOOOOOOOOO..........OOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F....................OOOOOOOO..........OOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.....................OOOOOOOO.........OOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F
\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F......................OOOOOOO........OOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F........................OOOOO........OOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.........................OOOO.......OOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................OOOO.....OOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F............................OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.............................OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................\u002F...OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................\u002F\u002F...OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F..OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.........................\u002F\u002F\u002F\u002F\u002F\u002F..OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F..OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F.OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n\u002F\u002F..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n...........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n..........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n.........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOO\n........................\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002F\u002FOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO\n```\n\n正如你所见，虽然这个问题没有线性解法，但神经网络能够把二维空间划分成不同的区域，其中的“孔洞”对应圆与圆交叠的部分，而细窄的分界面出现在圆周真正相交的地方（因为在那里同时存在属于不同类别的点）。\n\n这个例子可能并不实用，但它很好地展示了神经网络在非线性任务中的强大能力。\n\n案例研究：情感分析\n===\n\nNeural Redis 并不是处理高级自然语言处理任务的理想工具。对于情感分析——这一极具挑战性的难题——循环神经网络（RNN）及其他更为复杂的模型能够提供最先进的结果。\n\n然而，也正因为如此，情感分析恰恰是一个很好的示例，用以展示如何建模问题，以及即使是最简单的直觉，在经过大约 5 分钟的训练后，也能让 Neural Redis 以相当不错的方式处理问题（尽管与顶尖的专业系统相比仍有差距）。\n\n本案例研究基于 examples 目录下的 `sentiment.rb` 源代码。它使用了一个非常流行的情感分析基准数据集，该数据集包含 2000 条电影评论，其中 1000 条为正面评价，1000 条为负面评价。\n\n这些评论大致如下：\n\n    这一定是一场扭曲的影评噩梦：年度最佳影片竟会是一部暑期商业片，而且还是金·凯瑞主演的！事实的确如此。《楚门的世界》是我自不知何时以来见过的最令人费解、疯狂、偏执又让人捧腹大笑的道德寓言剧。\n\n通常情况下，我们应当尽力对数据进行预处理，但这次我们偷了个懒，什么都没做。不过，我们仍然需要将输入和输出映射到有意义的参数上。对于输出而言，这很简单——毕竟这是一个分类任务：*正面或负面*。那么，我们该如何将单词映射为输入呢？\n\n一般的做法是为不同的单词分配不同的 ID，然后使用这些 ID 作为索引。但在我们的场景中，这种方法会带来两个问题：\n\n1. 我们需要选定一个词汇表。通常这一步会在预处理阶段完成，可能会考察大量未标注的文本语料库。但别忘了，我们可是很懒的！\n2. “非常好”和“不好”的含义截然不同，如果只关注单个单词，结果很可能不尽如人意。\n\n于是，我采取了以下做法。假设我们的网络由 3000 个输入单元、100 个隐藏单元以及用于分类的 2 个输出单元组成。\n\n我们将初始输入分为两部分：1500 个输入单元直接接收单个单词；另外 1500 个输入单元则用于处理两个单词的组合。我的做法是利用哈希函数将文本中的单词映射到输入单元上：\n\n    INDEX_1 = HASH(单词) % 1500\n    INDEX_2 = 1500 + (HASH(当前单词 + 下一个单词) % 1500)\n\n这种做法听起来有些疯狂，我也很好奇过去是否有人尝试过类似的方法。由于不同的单词及其组合可能会被哈希到同一个单元，因此结果的精确度会有所降低。不过，只要输入单元足够多，那些在情感上高度对立的词语（例如正面与负面）不太可能被哈希到同一桶中。\n\n这样一来，每个单独的单词以及每一对单词组合都相当于在对应的输入单元上投了一票。我们在遍历句子并为每个单词投票的同时，还会累加所有投票数，最后进行归一化处理，确保所有输入之和等于 1。这样，情感分析的结果就不会受到句子长度的影响。\n\n尽管这种方法非常简单，但它确实有效，并且只需几秒钟就能构建出一个在 2000 条电影评论数据集上准确率达到 80% 的神经网络。我只花了几个小时就完成了这个实验，想必采用更高级的方案还能取得更好的效果。不过，这个用例的核心在于：在将数据映射到神经网络时，要充分发挥创造力。\n\n如果你运行 `sentiment.rb`，就会看到网络迅速收敛。最终，你可以输入一些句子，让神经网络判断其情感倾向是正面还是负面：\n\n    nn_id=7 cycle=61 key=sentiment ... classerr=21.500000\n    nn_id=7 cycle=62 key=sentiment ... classerr=20.333334\n\n    目前表现最好的网络能够以 78.17% 的准确率预测情感极性。\n\n    请想象并输入一条电影评论：\n\n    > 这部电影太糟糕了，简直离谱！\n    负面情绪：0.99966669082641602\n    正面情绪：0.00037576013710349798\n\n    > 不错啊！\n    负面情绪：0.28475716710090637\n    正面情绪：0.73368257284164429\n\n    > 这真是一部杰作！\n    负面情绪：2.219095662781001e-08\n    正面情绪：0.99999994039535522\n\n当然，你也会发现一些网络判断错误的句子……不过，你输入的句子越长、越接近真实的电影评论，它就越有可能被正确分类。\n\n
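为了更具体一点，下面用一段示意性的 Ruby 代码还原上文描述的哈希映射（这并不是 `sentiment.rb` 的原文，函数名和细节均为假设；另外，Ruby 内置的 `hash` 方法在不同进程间结果不稳定，实际实现应改用确定性的哈希函数）：\n\n```\n# 把一段文本映射为 3000 维的归一化输入向量：\n# 前 1500 个单元对应单个单词，后 1500 个对应相邻的双词组合。\ndef text_to_inputs(text)\n    words = text.downcase.split\n    inputs = Array.new(3000, 0.0)\n    words.each_with_index{|w,i|\n        inputs[w.hash % 1500] += 1                # 单词投票\n        if i < words.length - 1\n            pair = w + words[i + 1]\n            inputs[1500 + pair.hash % 1500] += 1  # 双词组合投票\n        end\n    }\n    total = inputs.sum\n    total > 0 ? inputs.map{|v| v \u002F total} : inputs  # 归一化，使输入之和为 1\nend\n```\n\n这样得到的 3000 个数值可以直接作为 `NR.OBSERVE` 与 `NR.RUN` 的输入参数。\n\n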
API 参考\n===\n\n在上述教程中，并未涵盖所有命令的所有选项，因此这里提供了一份简要参考，列出了此扩展支持的所有命令及其相关选项。\n\n### NR.CREATE key [CLASSIFIER|REGRESSOR] inputs [hidden-layer-units ...] -> outputs [NORMALIZE] [DATASET maxlen] [TEST maxlen]\n\n如果目标键为空，则创建一个新的神经网络；否则返回错误。\n\n* key — 存储神经网络的键名。\n* CLASSIFIER 或 REGRESSOR 是网络类型，更多信息请参阅本教程。\n* inputs — 输入单元的数量。\n* hidden-layer-units — 零个或多个参数，分别表示每一层的隐藏单元数量。\n* outputs — 输出单元的数量。\n* NORMALIZE — 指定是否希望网络对输入进行归一化处理。如果你不清楚这是什么意思，就启用它吧。\n* DATASET maxlen — 训练数据集的最大样本数。\n* TEST maxlen — 测试数据集的最大样本数。\n\n示例：\n\n    NR.CREATE mynet CLASSIFIER 64 100 -> 10 NORMALIZE DATASET 1000 TEST 500\n\n### NR.OBSERVE key i0 i1 i2 i3 i4 ... iN -> o0 o1 o2 ... oN [TRAIN|TEST]\n\n将一个数据样本添加到训练或测试数据集中（如果指定了最后一个参数），或者根据两者的规模比例，均匀地分配到其中一个数据集中（如果没有指定目标）。对于分类型神经网络，输出必须是一个介于 0 到 `输出单元数 - 1` 之间的整数。至于如何将类别 ID 转换为一组二进制值，则由网络自行处理。\n\n该命令会返回训练和测试数据集中当前的数据样本数量。如果目标数据集已满，则会随机移除一条记录，并用新数据替换。\n\n### NR.RUN key i0 i1 i2 i3 i4 ... iN\n\n运行存储在指定键下的神经网络，返回一个输出数组。\n\n### NR.CLASS key i0 i1 i2 i3 i4 ... iN\n\n类似于 `NR.RUN`，但仅适用于分类型神经网络。与直接输出神经网络的原始结果不同，该命令会直接返回预测的类别，即具有最大值的输出索引。\n\n### NR.TRAIN key [MAXCYCLES count] [MAXTIME milliseconds] [AUTOSTOP] [BACKTRACK]\n\n在后台线程中训练网络。训练完成后，会自动用新权重更新已训练的网络，并更新训练统计信息。\n\n该命令操作的是网络的副本，因此在网络训练期间仍可正常使用该网络。\n\n如果未指定 AUTOSTOP，则网络将一直训练，直到达到最大循环次数或毫秒数。若未指定最大循环次数，则无循环限制；若未指定毫秒数，则默认为 10000 毫秒（10 秒）。\n\n如果指定了 AUTOSTOP，网络仍会在达到最大循环次数或毫秒数时停止训练，同时还会尝试在检测到过拟合时提前停止。有关实现所采用的（目前仍较为简单）停止算法的说明，请参阅前面的相关章节。\n\n如果同时指定了 BACKTRACK 和 AUTOSTOP，在网络训练过程中，训练线程会在每次得分优于之前保存的版本且有迹象表明即将发生过拟合时，保存一份神经网络的副本。训练结束后，若发现该副本的误差更小，则会采用它。\n\n### NR.INFO key\n\n显示关于神经网络的大量内部信息。不妨亲自试一试 :-)\n\n### NR.THREADS\n\n显示所有正在运行的训练线程。\n\n### NR.RESET key\n\n将神经网络的权重重置为随机值（即网络将完全遗忘此前学到的内容），并重置训练统计信息。但数据集本身不会受到影响。这在需要从头开始重新训练网络时非常有用。\n\n贡献\n===\n\nNeural Redis 目前只是一个为期 48 小时的个人黑客马拉松项目，其主要目标是展示一个易于上手、提供简单易用机器学习工具的 API 所蕴含的潜力，这样的工具可以交互式地使用和训练。\n\n然而，神经网络的实现无疑还有许多改进空间。如果你是该领域的专家，请随时提交修改建议或想法。我希望能够保持外部接口层——即 API——的简洁性，而内部实现则可以更加复杂，以提升效果。\n\n需要注意的是，鉴于当前公开的 API，神经网络的实现不应仅仅追求在特定问题上达到最先进水平，而应更多地考虑在各种条件下都能良好工作。尽管目前的全连接网络存在局限性，但结合 RPROP 学习算法，它对误用具有相当的鲁棒性。因此，改进后的版本应当能够保留并进一步扩展这一特性。确保这一点的最简单方法是使用开放数据集构建一套涵盖不同类型问题的基准测试，并以此来评估各种实现方案。\n\n计划\n===\n\n* 改进过拟合检测机制。\n* 实现 RNN，并提供更易用的 API。\n* 为分类神经网络采用不同的损失函数。\n* 邀请重视简洁 API 的机器学习专家参与合作。\n\n祝你机器学习玩得开心！\n\n萨尔瓦托雷","# Neural Redis 快速上手指南\n\nNeural Redis 是一个 Redis 可加载模块，它将前馈神经网络实现为 Redis 的原生数据类型。该工具旨在让开发者无需离开 Redis 环境即可轻松完成数据收集、训练和预测，特别适合移动端和 Web 应用中的回归与分类场景（如推荐系统、广告转化预测等）。\n\n> **注意**：本项目目前处于 **Alpha 阶段**，代码尚不稳定，可能导致 Redis 服务崩溃。仅支持 RDB 持久化，不支持 AOF 重写。请在非生产环境中谨慎试用。\n\n## 环境准备\n\n*   **操作系统**：Linux \u002F macOS (需支持编译 C 扩展)\n*   **Redis 版本**：必须使用 **Redis unstable** 分支（即 GitHub 上的默认开发分支）。稳定版 Redis 可能不兼容此模块。\n*   **构建依赖**：\n    *   `gcc` 或 `clang` 编译器\n    *   `make`\n    *   Redis 源码（用于编译模块）\n\n## 安装步骤\n\n1.  **获取 Redis unstable 源码**\n    从 GitHub 克隆最新的不稳定分支：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fredis\u002Fredis.git\n    cd redis\n    make\n    ```\n\n2.  **获取并编译 Neural Redis 模块**\n    克隆项目并编译生成 `.so` 文件：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fantirez\u002Fneural-redis.git\n    cd neural-redis\n    make\n    ```\n    *注：若国内访问 GitHub 较慢，可尝试使用 Gitee 镜像或配置代理加速。*\n\n3.  **加载模块启动 Redis**\n    编译完成后，通过命令行参数加载模块启动 Redis：\n    ```bash\n    redis-server --loadmodule \u002Fpath\u002Fto\u002Fneural-redis\u002Fsrc\u002Fneuralredis.so\n    ```\n    或者在 `redis.conf` 配置文件中添加以下行后重启：\n    ```text\n    loadmodule \u002Fpath\u002Fto\u002Fneural-redis\u002Fsrc\u002Fneuralredis.so\n    ```\n\n## 基本使用\n\n以下示例演示如何创建一个简单的神经网络，教它学习“加法”运算（回归任务）。\n\n### 1. 创建神经网络\n创建一个名为 `net` 的网络，配置为回归模式（`REGRESSOR`），拥有 2 个输入节点、3 个隐藏层节点和 1 个输出节点。开启自动归一化（`NORMALIZE`），并设置训练集和测试集的内存容量。\n\n```redis\n> NR.CREATE net REGRESSOR 2 3 -> 1 NORMALIZE DATASET 50 TEST 10\n(integer) 13\n```\n*返回值为 13，表示该网络共有 13 个可调整参数。*\n\n### 2. 提供训练数据\n使用 `NR.OBSERVE` 命令向网络输入样本数据（输入 -> 期望输出）。这里我们输入几组加法算式：\n\n```redis\n> NR.OBSERVE net 1 2 -> 3\n1) (integer) 1\n2) (integer) 0\n\n> NR.OBSERVE net 4 5 -> 9\n> NR.OBSERVE net 3 4 -> 7\n> NR.OBSERVE net 1 1 -> 2\n> NR.OBSERVE net 2 2 -> 4\n> NR.OBSERVE net 0 9 -> 9\n> NR.OBSERVE net 7 5 -> 12\n> NR.OBSERVE net 3 1 -> 4\n> NR.OBSERVE net 5 6 -> 11\n```\n*返回值分别表示当前训练集和测试集中存储的数据项数量。*\n\n### 3. 开始训练\n启动后台训练线程。使用 `AUTOSTOP` 参数可防止过拟合（当检测到泛化能力不再提升时自动停止）。\n\n```redis\n> NR.TRAIN net AUTOSTOP\n```\n*对于小型网络，训练通常在几毫秒内完成。*\n\n### 4. 执行预测\n使用 `NR.RUN` 命令输入新数据进行预测：\n\n```redis\n> NR.RUN net 1 1\n1) \"2.0776522297040843\"\n\n> NR.RUN net 3 2\n1) \"5.1765427204933099\"\n```\n*结果接近真实值（2 和 5），表明网络已学会加法逻辑。*\n\n### 5. 
查看状态\n使用 `NR.INFO` 查看网络详细信息，包括训练步数、误差率等：\n\n```redis\n> NR.INFO net\n 1) id\n 2) (integer) 1\n 3) type\n 4) regressor\n ...\n23) dataset-error\n24) \"7.5369825612397299e-05\"\n25) test-error\n26) \"0.00042670663615723583\"\n```\n\n### 进阶提示：分类任务\n若需处理分类问题（如判断用户类型），创建网络时将 `REGRESSOR` 替换为 `CLASSIFIER`，并设置对应的输出节点数量（类别数）。预测时可使用 `NR.CLASS` 直接获取类别 ID，无需手动比较输出向量最大值。\n\n```redis\n> NR.CREATE mynet CLASSIFIER 5 10 -> 3\n> NR.CLASS mynet 0 1 1 0 1\n(integer) 0\n```","某电商初创团队希望在用户浏览商品时，实时预测并展示最可能促成购买的个性化促销文案。\n\n### 没有 neural-redis 时\n- **架构复杂且延迟高**：需要独立部署 Python 机器学习服务，Redis 与应用层需频繁跨网络调用，导致推荐响应延迟增加。\n- **数据流转割裂**：用户行为数据先写入 Redis，再导出到外部系统训练，模型更新滞后，无法利用最新的点击反馈。\n- **开发维护成本高**：团队需同时维护数据库、消息队列和 ML 训练管道，小团队难以承受多套系统的运维压力。\n- **实时性差**：模型通常按天或按周批量重训，无法在用户产生新行为的分钟级时间内调整策略。\n\n### 使用 neural-redis 后\n- **架构极简零延迟**：直接在 Redis 内部加载神经网络模块，数据存储与模型推理在同一进程完成，毫秒级返回推荐结果。\n- **在线实时训练**：利用 neural-redis 的在线学习特性，用户每次点击或购买行为可直接作为新样本输入，模型在后台线程自动增量更新。\n- **单一技术栈**：开发人员只需掌握 Redis API 即可完成数据采集、训练和预测，无需引入额外的 AI 基础设施。\n- **动态适应趋势**：模型能即时捕捉突发流量或用户偏好变化（如节假日效应），并在训练过程中不影响线上服务的正常推理。\n\nneural-redis 将复杂的机器学习流程压缩为简单的 Redis 命令，让中小团队也能以极低门槛实现真正的实时智能决策。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fantirez_neural-redis_9c995e67.png","antirez","Salvatore Sanfilippo","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fantirez_bbe22c7e.png","Computer programmer based in Sicily, Italy. I mostly write OSS software. Born 1977. Not a puritan.","Redis Labs","Catania, Sicily, Italy","antirez@gmail.com","http:\u002F\u002Finvece.org","https:\u002F\u002Fgithub.com\u002Fantirez",[85,89],{"name":86,"color":87,"percentage":88},"C","#555555",98.7,{"name":90,"color":91,"percentage":92},"Makefile","#427819",1.3,2232,101,"2026-04-01T10:04:11","BSD-3-Clause",4,"Linux, macOS","不需要 GPU，仅使用 CPU 运行","未说明",{"notes":102,"python":103,"dependencies":104},"该工具是 Redis 的 C 语言加载模块（.so 文件），非 Python 包。必须使用 Redis 的 'unstable' 分支（GitHub 默认分支）进行编译和加载。目前处于 Alpha 阶段，代码不稳定可能导致服务器崩溃，且仅支持 RDB 持久化，不支持 AOF 重写。网络结构仅限于全连接前馈神经网络，使用 RPROP 算法，不支持卷积或循环神经网络。","不需要 Python",[105,106],"Redis unstable 版本","GCC (用于编译 C 扩展)",[13,51],null,"2026-03-27T02:49:30.150509","2026-04-06T08:18:29.639652",[],[]]