[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-antirez--ttt-rl":3,"tool-antirez--ttt-rl":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,2,"2026-04-05T10:45:23",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[19,13,20,18],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,1,"2026-04-03T21:50:24",[20,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},2234,"scikit-learn","scikit-learn\u002Fscikit-learn","scikit-learn 是一个基于 Python 构建的开源机器学习库，依托于 SciPy、NumPy 等科学计算生态，旨在让机器学习变得简单高效。它提供了一套统一且简洁的接口，涵盖了从数据预处理、特征工程到模型训练、评估及选择的全流程工具，内置了包括线性回归、支持向量机、随机森林、聚类等在内的丰富经典算法。\n\n对于希望快速验证想法或构建原型的数据科学家、研究人员以及 Python 开发者而言，scikit-learn 是不可或缺的基础设施。它有效解决了机器学习入门门槛高、算法实现复杂以及不同模型间调用方式不统一的痛点，让用户无需重复造轮子，只需几行代码即可调用成熟的算法解决分类、回归、聚类等实际问题。\n\n其核心技术亮点在于高度一致的 API 
设计风格，所有估算器（Estimator）均遵循相同的调用逻辑，极大地降低了学习成本并提升了代码的可读性与可维护性。此外，它还提供了强大的模型选择与评估工具，如交叉验证和网格搜索，帮助用户系统地优化模型性能。作为一个由全球志愿者共同维护的成熟项目，scikit-learn 以其稳定性、详尽的文档和活跃的社区支持，成为连接理论学习与工业级应用的最佳桥梁。",65628,"2026-04-05T10:10:46",[20,18,14],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":10,"last_commit_at":63,"category_tags":64,"status":22},3364,"keras","keras-team\u002Fkeras","Keras 是一个专为人类设计的深度学习框架，旨在让构建和训练神经网络变得简单直观。它解决了开发者在不同深度学习后端之间切换困难、模型开发效率低以及难以兼顾调试便捷性与运行性能的痛点。\n\n无论是刚入门的学生、专注算法的研究人员，还是需要快速落地产品的工程师，都能通过 Keras 轻松上手。它支持计算机视觉、自然语言处理、音频分析及时间序列预测等多种任务。\n\nKeras 3 的核心亮点在于其独特的“多后端”架构。用户只需编写一套代码，即可灵活选择 TensorFlow、JAX、PyTorch 或 OpenVINO 作为底层运行引擎。这一特性不仅保留了 Keras 一贯的高层易用性，还允许开发者根据需求自由选择：利用 JAX 或 PyTorch 的即时执行模式进行高效调试，或切换至速度最快的后端以获得最高 350% 的性能提升。此外，Keras 具备强大的扩展能力，能无缝从本地笔记本电脑扩展至大规模 GPU 或 TPU 集群，是连接原型开发与生产部署的理想桥梁。",63927,"2026-04-04T15:24:37",[20,14,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":76,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":96,"env_ram":97,"env_deps":98,"category_tags":102,"github_topics":103,"view_count":10,"oss_zip_url":103,"oss_zip_packed_at":103,"status":22,"created_at":104,"updated_at":105,"faqs":106,"releases":107},1385,"antirez\u002Fttt-rl","ttt-rl","Reinforcement Learning example in C, playing tic tac toe","ttt-rl 是一个用纯 C 语言编写的井字棋强化学习示例项目。它旨在通过一个极简的实战案例，帮助开发者从零开始理解强化学习与神经网络的核心原理。\n\n该项目解决了传统机器学习示例往往依赖庞大框架（如 PyTorch）、代码复杂且难以窥探底层细节的痛点。ttt-rl 完全不使用任何外部库，仅在 400 行代码内实现了一个完整的智能体。其独特之处在于“白板”学习模式：神经网络初始权重随机，除了基本规则外对游戏策略一无所知，仅依靠胜负平结果的奖励信号进行自我进化。经过约 200 万局与随机对手的对抗训练，它能达到近乎完美的胜率，几乎不再输掉比赛。\n\n代码内部注释丰富，平均每两行代码就有一行解释，并特别标注了关键的学习机会点。这使得 ttt-rl 非常适合希望深入探究 AI 算法本质的程序员、学生或研究人员作为入门教材。如果你厌倦了黑盒式的调用，想亲手拆解并理解强化学习如何从无到有地掌握技能，这个轻量级项目将是极佳的学习起点。","# Tic Tac Toe with Reinforcement Learning\n\n*The only winning move is not to play*\n\nThis code implements a neural network that learns to play tic-tac-toe using\nreinforcement learning, just playing against a random adversary, in **under\n400 lines of C code**, without any external library used. I guess there are\nmany examples of RL out there, written in PyTorch or with other ML frameworks,\nhowever what I wanted to accomplish here was to have the whole thing\nimplemented from scratch, so that each part is understandable and self-evident.\n\nWhile the code is a toy, designed to help interested people to learn the basics\nof reinforcement learning, it actually showcases the power of RL in learning\nthings without any pre-existing clue about the game:\n\n1. It uses cold start: the neural network is initialized with random weights.\n2. *Tabula rasa* learning: no knowledge of the game is put into the program, if not the fact that X or O can't be put into an already used tile, and the fact that a victory or a tie is reached when there are three same symbols in a row or all the tiles are used.\n3. 
The only signal used to train the neural network is the reward of the game: win, lose, tie.\n\nIn my past experience with [the Kilo editor](https:\u002F\u002Fgithub.com\u002Fantirez\u002Fkilo) and the [Picol interpreter](https:\u002F\u002Fgithub.com\u002Fantirez\u002Fpicol) I noticed that for programmers that want to understand new fields (especially young programmers) small C programs without dependencies, clearly written, commented and *very short* are a good starting point, so, in order to celebrate the Turing Award assigned to Sutton and Barto, I thought of writing this one.\n\nTo try this program, compile with:\n\n    cc ttt.c -o ttt -O3 -Wall -W -ffast-math -lm\n\nThen run with\n\n    .\u002Fttt\n\nBy default, the program plays against a random opponent (an opponent just\nthrowing random \"X\" at random places at each move) for 150k games. Then it\nstarts a CLI interface to play with the human user. You can specify how many\ngames you want it to play against the random opponent before playing with\nthe human specifying it as first argument:\n\n    .\u002Fttt 2000000\n\nWith 2 million games (a few seconds required) it usually no longer loses\na game.\n\nAfter playing against itself for a few iterations, the program achieves\nwhat is likely perfect playing:\n\n    Games: 2000000, Wins: 1756049 (87.8%)\n                    Losses: 731 (0.0%)\n                    Ties: 243220 (12.2%)\n\nNote that there are runs that are more lucky than others, likely because of\nweights initialization in the neural network. Run the program multiple times\nif you can't reach 0 losses.\n\n# How it works\n\nThe code tries hard to be simple, and is well commented, with about one line of comment for each two lines of code: to understand how it works should be relatively easy. Yet, in this README, I try to outline a few important things. Also make sure to check the *LEARNING OPPORTUNITY* comments inside the code: there, I tried to highlight important results or techniques in the field of neural networks that you may want to study better.\n\n## Board representation\n\nThe state of the game is just that:\n\n```c\ntypedef struct {\n    char board[9];          \u002F\u002F Can be \".\" (empty) or \"X\", \"O\".\n    int current_player;     \u002F\u002F 0 for player (X), 1 for computer (O).\n} GameState;\n```\n\nThe human and computer play always in the same order: the human starts,\nthe computer replies to the move. They also play always with the same\nsymbol: \"X\" for the human, \"O\" for the computer.\n\nThe board itself is just represented by 9 characters, depending on the\nfact the tile is empty, or contains X or O.\n\n## Neural network\n\nThe neural network is very hard-coded, because the code really wants to be\nsimple: it only have a single hidden layer, which is enough to model\na so simple game (adding more layers don't help to converge faster nor\nto play better).\n\nNote that tic tac toe has only 5478 possible states, and by default with\n100 hidden neurons our neural network has:\n\n    18 (inputs) * 100 (hidden) +\n    100 (hidden) * 9 (outputs) weights +\n    100 + 9 biases\n\nFor a total of 2809 parameters, so our neural network is very near to be able to\nmemorize each state of the game. 
However you can reduce the hidden size\nto 25 (or less) and it is still able to play well (but not perfectly) with\naround 700 parameters (or less).\n\n```c\ntypedef struct {\n    \u002F\u002F Weights and biases.\n    float weights_ih[NN_INPUT_SIZE * NN_HIDDEN_SIZE];\n    float weights_ho[NN_HIDDEN_SIZE * NN_OUTPUT_SIZE];\n    float biases_h[NN_HIDDEN_SIZE];\n    float biases_o[NN_OUTPUT_SIZE];\n\n    \u002F\u002F Activations are part of the structure itself for simplicity.\n    float inputs[NN_INPUT_SIZE];\n    float hidden[NN_HIDDEN_SIZE];\n    float raw_logits[NN_OUTPUT_SIZE]; \u002F\u002F Outputs before softmax().\n    float outputs[NN_OUTPUT_SIZE];    \u002F\u002F Outputs after softmax().\n} NeuralNetwork;\n```\n\nActivations are always memorized directly inside the neural network,\nso calculating the deltas and performing the backpropagation is very\nsimple.\n\nWe use RELU because of simple derivative. Almost everything would work in this\ncase. Weights initialization don't care about RELU, they are just random\nfrom -0.5 to 0.5 (no He weights initialization).\n\nThe output is computed using softmax(), since this neural network basically\nassigns probabilities to every possible next move. In theory we use cross\nentropy to calculate the loss function, but in practice we evaluate our\n*agent* based on the results of the games, so we only use it implicitly here:\n\n```c\n        deltas[i] = output[i] - target[i]\n```\n\nThat is the delta in case of softmax and cross entropy.\n\n## Reinforcement learning policy\n\nThis is the reward policy used:\n\n```c\n    if (winner == 'T') {\n        reward = 0.2f;  \u002F\u002F Small reward for draw\n    } else if (winner == nn_symbol) {\n        reward = 1.0f;  \u002F\u002F Large reward for win\n    } else {\n        reward = -1.0f; \u002F\u002F Negative reward for loss\n    }\n```\n\nWhen rewarding, we create all the states of the game where the neural network was about to move, and for each state, we reward the winning moves (not just the\nfinal move that won, but *all* the moves performed in the game we won) using as\ntarget output all the other moves set to 0, and the move we want to reward\nset to 1. Then we do a pass of backpropagation and update the weights.\n\nFor ties, it's like winning, but the reward is scaled. Instead, when the game\nwas lost, we use as a target the move set to 0, all the invalid moves set to\n0 as well, and all the other valid moves set to `1\u002F(number-of-valid-moves)`.\n\nHowever, we also perform scaling according to how early the move was performed: for moves that are near the start of the game, we give smaller rewards, and for moves that are later in the game (near the end of the game) we provide a stronger reward:\n\n        float move_importance = 0.5f + 0.5f * (float)move_idx\u002F(float)num_moves;\n        float scaled_reward = reward * move_importance;\n\nNote that the above makes a lot of difference in the way the program works.\nAlso note that while this may seem similar to Time Difference in reinforcement\nlearning, it is not: we don't have a simple way in this case to evaluate if\na single step provided a positive or negative reward: we need to wait for\neach game to finish. 
The temporal scaling above is just a way to code inside\nthe network that early moves are more open, while, as the game goes on, we\nneed to play more selectively.\n\n## Weights updating\n\nWe just use trivial backpropagation, and the code is designed in order to\nshow very clearly that, after all, things work in a VERY similar way to\nwhat happens with supervised learning: the difference is just the input\u002Foutput\npairs are not known beforehand, but they are provided on the fly based on the\nreward policy of reinforcement learning.\n\nPlease check the code for more information.\n\n## Future work\n\nThings I didn't test because the complexity would kinda sabotage the educational value of the program and\u002For for lack of time, but that could be interesting exercises and interesting other projects \u002F branches:\n\n* Can this approach work with connect four as well? The much larger space of the problem would be really interesting and less of a toy.\n* Train the network to play both sides by having an additional input set, that is the symbol that is going to do the move (useful especially in the case of connect four) so that we can use the network itself as opponent, instead of playing against random moves.\n* Implement proper sampling, in the case above, so that initially moves are quite random, later they start to pick more consistently the predicted move.\n* MCTS.\n","# 基于强化学习的井字棋游戏\n\n*唯一获胜的走法，就是不走棋*\n\n这段代码实现了一个神经网络，利用强化学习来学习井字棋的游戏规则——只需与随机对手对弈，仅用**不到400行C语言代码**完成，且未使用任何外部库。虽然市面上已有大量基于强化学习的示例代码，这些代码或以PyTorch为框架编写，或采用其他机器学习框架，但我真正想实现的是从零开始完整地构建这个系统，让每一部分都清晰易懂、一目了然。\n\n尽管这段代码只是一个“玩具”级的程序，旨在帮助有兴趣的读者掌握强化学习的基础知识，但它却充分展示了强化学习在无需事先了解游戏规则的情况下，仍能高效学习并掌握新事物的强大能力：\n\n1. 采用冷启动策略：神经网络的权重初始化为随机值。\n2. “白手起家”的学习方式：程序中并未预设任何关于游戏规则的知识，除了明确X和O不能落在已被占用的方格中；同时，当连续出现三个相同符号，或所有方格都被填满时，即判定为胜利或平局。\n3. 
训练神经网络所依赖的唯一信号，便是游戏的奖励：赢、输、平。\n\n在我过去编写[Kilo编辑器](https:\u002F\u002Fgithub.com\u002Fantirez\u002Fkilo)和[Picol解释器](https:\u002F\u002Fgithub.com\u002Fantirez\u002Fpicol)的经历中，我注意到：对于希望了解新领域的程序员（尤其是年轻程序员）来说，简洁、无依赖、结构清晰、注释详尽且“极其短小”的C语言程序，往往是一个很好的入门起点。因此，为了庆祝萨顿和巴托荣获图灵奖，我决定编写这段代码。\n\n要运行此程序，请使用以下命令进行编译：\n\n```bash\ncc ttt.c -o ttt -O3 -Wall -W -ffast-math -lm\n```\n\n然后执行：\n\n```bash\n.\u002Fttt\n```\n\n默认情况下，程序会先与随机对手（即每一步都在随机位置落下“X”的对手）对弈 15 万局，随后启动一个命令行界面，供人类玩家与其对弈。您可以通过第一个命令行参数指定在进入人机对弈之前与随机对手对弈的局数：\n\n```bash\n.\u002Fttt 2000000\n```\n\n在运行200万局后（所需时间仅几秒），程序通常已不再输掉任何一局。\n\n经过多次自我对弈，程序最终达到了近乎完美的水平：\n\n```bash\nGames: 2000000, Wins: 1756049 (87.8%)\n                    Losses: 731 (0.0%)\n                    Ties: 243220 (12.2%)\n```\n\n需要注意的是，有些运行结果比其他运行更幸运，这很可能与神经网络的权重初始化有关。如果您无法达到0局输棋，不妨多运行几次程序。\n\n## 程序的工作原理\n\n代码力求简单，并附有详尽的注释，平均每两行代码就配有约一行注释：理解其工作原理应该相当容易。不过，在本README中，我仍将重点阐述几个关键点。此外，请务必仔细阅读代码内部的“LEARNING OPPORTUNITY”注释——在那里，我尝试突出显示神经网络领域中一些重要的研究成果或技术，这些内容或许值得您进一步深入研究。\n\n### 方格板的表示\n\n游戏的状态仅由以下数据构成：\n\n```c\ntypedef struct {\n    char board[9];          \u002F\u002F 可以是“.”（空位）或“X”、“O”。\n    int current_player;     \u002F\u002F 0 表示人类玩家（X），1 表示电脑玩家（O）。\n} GameState;\n```\n\n人类玩家和电脑玩家始终按相同的顺序进行游戏：人类先手，电脑则根据人类的走法作出回应。双方使用的符号也固定不变：人类玩家用“X”，电脑玩家用“O”。\n\n方格板本身仅由9个字符表示，每个字符标记该方格为空，还是已放置“X”或“O”。\n\n### 神经网络\n\n神经网络的实现高度硬编码，因为代码的设计初衷就是追求简洁：它仅包含一层隐藏层，而这一层足以建模这款如此简单的游戏（增加更多层既不能加快收敛速度，也不能提升对弈水平）。\n\n值得注意的是，井字棋共有5478种可能的状态；默认情况下，我们的神经网络拥有100个隐藏神经元，参数量为：\n\n- 18（输入）× 100（隐藏）个权重 +\n- 100（隐藏）× 9（输出）个权重 +\n- 100 + 9 个偏置\n\n总计2809个参数，因此我们的神经网络几乎能够记住游戏中的每一个状态。不过，您也可以将隐藏层的神经元数量减少至25个（甚至更少），即便如此，网络依然能够以约700个参数（或更少）的规模很好地完成对弈——尽管未必能达到完美水平。\n\n```c\ntypedef struct {\n    \u002F\u002F 权重与偏置。\n    float weights_ih[NN_INPUT_SIZE * NN_HIDDEN_SIZE];\n    float weights_ho[NN_HIDDEN_SIZE * NN_OUTPUT_SIZE];\n    float biases_h[NN_HIDDEN_SIZE];\n    float biases_o[NN_OUTPUT_SIZE];\n\n    \u002F\u002F 为了简化起见，激活值直接存储在结构体中。\n    float inputs[NN_INPUT_SIZE];\n    float hidden[NN_HIDDEN_SIZE];\n    float raw_logits[NN_OUTPUT_SIZE]; \u002F\u002F softmax() 之前的输出。\n    float outputs[NN_OUTPUT_SIZE];    \u002F\u002F softmax() 之后的输出。\n} NeuralNetwork;\n```\n\n激活值始终直接存储在神经网络结构体内部，因此计算各层误差（delta）并进行反向传播的过程极为简单。\n\n我们选择ReLU作为激活函数，因为其导数简单。在本例中，几乎任何激活函数都能正常工作。权重初始化并未针对ReLU做特殊处理，只是在-0.5到0.5之间随机取值（没有采用He初始化）。\n\n输出通过softmax函数计算得出，因为该神经网络本质上会为每一种可能的下一步行动分配概率。理论上，我们应使用交叉熵来计算损失函数，但在实际应用中，我们主要依据对局的结果来评估“智能体”的表现，因此在这里我们只是隐式地使用了交叉熵：\n\n```c\n        deltas[i] = output[i] - target[i]\n```\n\n这就是在softmax与交叉熵组合下的误差（delta）计算公式。\n\n## 强化学习策略\n\n此处采用的奖励策略如下：\n\n```c\n    if (winner == 'T') {\n        reward = 0.2f;  \u002F\u002F 对于平局，给予较小的奖励\n    } else if (winner == nn_symbol) {\n        reward = 1.0f;  \u002F\u002F 对于获胜，给予较大的奖励\n    } else {\n        reward = -1.0f; \u002F\u002F 对于失败，给予负值奖励\n    }\n```\n\n在分配奖励时，我们会重建本局中神经网络即将落子时的所有游戏状态；针对每个状态，我们都会奖励获胜的走法（不仅仅是最终获胜的那一步，而是在赢得的对局中执行的所有走法）：目标输出中，其他所有走法设为 0，而要奖励的那一步设为 1。随后，我们通过一次反向传播来更新权重。\n\n对于平局的情况，其奖励机制与胜利时类似，但奖励会被缩放。相反，在游戏失败时，我们将实际执行的那一步在目标中设为 0，同时将所有无效走法也设为 0，并将所有其他有效走法设为 `1\u002F(有效走法数量)`。\n\n不过，我们还会根据走法完成的时机进行相应的缩放：对于那些接近游戏开局的走法，我们给予较小的奖励；而对于那些在游戏后期（接近游戏结束时）才出现的走法，则会提供更高的奖励：\n\n        float move_importance = 0.5f + 0.5f * (float)move_idx\u002F(float)num_moves;\n        float scaled_reward = reward * move_importance;\n\n需要注意的是，上述设计对程序的运行方式产生了显著的影响。\n此外，请注意，尽管这一方法看似与强化学习中的时序差分（Temporal Difference）方法相似，但实际上并不相同：在这种情况下，我们并没有一种简单的方法来评估单个步骤是否带来了正向或负向的奖励——我们需要等待每一场游戏全部结束。上述时间缩放只是一种向网络传达以下信息的编码方式：开局阶段的走法更为开放，而随着对局推进，则需要更有选择地落子。\n\n## 
权重更新\n\n我们仅使用简单的反向传播算法，而代码的设计初衷正是为了清晰地展示：归根结底，程序的工作方式与监督学习非常相似：唯一的区别在于，输入\u002F输出对并非事先已知，而是根据强化学习的奖励策略，实时动态地提供给模型。\n\n如需了解更多信息，请查看相关代码。\n\n## 未来工作\n\n以下是我尚未进行测试的领域，因为这些内容若未充分测试，可能会因复杂度过高而削弱本程序的教育价值，或者由于时间不足而无法实现。不过，这些领域同样可以成为有趣的练习课题，甚至为其他项目或分支带来启发：\n\n* 这种方法是否也能应用于四子棋？问题空间的规模要大得多，这将极具研究价值，而且也更贴近实际应用。\n* 通过增加一个额外的输入变量——即即将执行走法的棋子符号（尤其在四子棋中很有用），训练网络以同时应对双方的博弈。这样一来，我们就可以直接利用网络本身作为对手，而非单纯地与随机走法进行对抗。\n* 在上述场景中，实施合理的采样策略：初始阶段，走法应较为随机；随着游戏进程的推进，走法的选择也会逐渐变得更加稳定、更具针对性。\n* 实现 MCTS 算法。","# ttt-rl 快速上手指南\n\n`ttt-rl` 是一个用纯 C 语言实现的井字棋（Tic-Tac-Toe）强化学习项目。它不依赖任何外部机器学习库，仅用不到 400 行代码展示了神经网络如何通过自我对弈从零开始学会玩游戏。\n\n## 环境准备\n\n本项目设计为无依赖的轻量级工具，对环境要求极低。\n\n*   **操作系统**：Linux、macOS 或带有 GCC 的 Windows (如 WSL, MinGW)。\n*   **编译器**：需要安装 `gcc` 或 `clang`。\n    *   Ubuntu\u002FDebian: `sudo apt-get install build-essential`\n    *   macOS: 安装 Xcode Command Line Tools (`xcode-select --install`)\n    *   CentOS\u002FRHEL: `sudo yum groupinstall \"Development Tools\"`\n*   **前置依赖**：无第三方库依赖，仅需标准 C 数学库（通常由 `-lm` 链接）。\n\n## 安装步骤\n\n该项目无需复杂的安装过程，只需下载源码并编译即可。\n\n1.  **获取源码**\n    克隆仓库或下载 `ttt.c` 文件到本地目录。\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fantirez\u002Fttt-rl.git\n    cd ttt-rl\n    ```\n\n2.  **编译程序**\n    使用以下命令进行优化编译：\n    ```bash\n    cc ttt.c -o ttt -O3 -Wall -W -ffast-math -lm\n    ```\n\n## 基本使用\n\n编译成功后，可直接运行程序。程序默认会先进行自我训练，然后进入人机对战模式。\n\n### 1. 默认运行（快速体验）\n程序将自动与随机对手进行 15 万局训练，随后启动命令行界面供用户试玩。\n```bash\n.\u002Fttt\n```\n*注：训练过程通常在几秒内完成。*\n\n### 2. 增加训练量（达到完美胜率）\n若希望 AI 达到几乎不败的水平（损失率接近 0%），可指定更大的训练对局数（例如 200 万局）：\n```bash\n.\u002Fttt 2000000\n```\n运行后，你将看到类似以下的统计信息，表明 AI 已掌握完美策略：\n```text\nGames: 2000000, Wins: 1756049 (87.8%)\n                Losses: 731 (0.0%)\n                Ties: 243220 (12.2%)\n```\n\n### 3. 人机对战\n训练结束后，程序会自动进入 CLI 交互模式。\n*   **人类玩家**：始终执 \"X\"，先行。\n*   **AI 玩家**：始终执 \"O\"，后手。\n*   **操作方式**：根据提示输入棋盘位置索引（0-8）进行落子。","某高校计算机系讲师计划开设强化学习入门课，希望学生能透过代码直观理解算法核心，而非被复杂的框架依赖劝退。\n\n### 没有 ttt-rl 时\n- 教学门槛过高：现有示例多基于 PyTorch 等大型框架，学生需先耗费数周配置环境和学习 API，难以聚焦算法逻辑本身。\n- 代码黑盒严重：动辄数千行的封装代码掩盖了神经网络权重更新、奖励反馈等关键细节，初学者如同“盲人摸象”。\n- 缺乏从零构建的视角：学生习惯了调用现成库，无法理解如何在不依赖外部库的情况下，仅用几百行 C 代码实现完整的智能体训练闭环。\n- 调试与修改困难：庞大的依赖树使得在嵌入式设备或受限环境中演示变得几乎不可能，限制了实验场景的灵活性。\n\n### 使用 ttt-rl 后\n- 极简上手体验：ttt-rl 仅用不到 400 行无依赖的 C 代码即可运行，学生编译即用，将精力完全集中在强化学习原理上。\n- 逻辑透明可见：从冷启动随机权重到仅凭输赢信号进行“白板”学习，每一行代码都清晰展示了状态表示、网络结构及奖励机制的实现细节。\n- 完整复现算法精髓：通过观察 ttt-rl 在与随机对手博弈 200 万局后达到近乎零失误的过程，学生能亲眼见证智能体如何从无知识状态进化为完美玩家。\n- 灵活的教学扩展：由于代码短小精悍且无外部依赖，教师可轻松带领学生在任何环境下修改参数、调整网络层数，甚至将其移植到其他轻量级项目中。\n\nttt-rl 通过极致精简的代码实现，将强化学习从复杂的框架束缚中解放出来，成为连接理论概念与工程实现的完美桥梁。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fantirez_ttt-rl_8cf8707d.png","antirez","Salvatore Sanfilippo","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fantirez_bbe22c7e.png","Computer programmer based in Sicily, Italy. I mostly write OSS software. Born 1977. Not a puritan.","Redis Labs","Catania, Sicily, Italy","antirez@gmail.com","http:\u002F\u002Finvece.org","https:\u002F\u002Fgithub.com\u002Fantirez",[86],{"name":87,"color":88,"percentage":89},"C","#555555",100,578,60,"2026-03-29T08:21:22","BSD-2-Clause",4,"Linux, macOS, Windows","不需要 GPU，仅使用 CPU 运行","未说明（因程序极小且无外部依赖，常规内存即可）",{"notes":99,"python":100,"dependencies":101},"该工具完全由 C 语言编写，无任何外部库依赖。只需标准 C 编译器（如 gcc 或 clang）即可编译运行。编译命令示例：cc ttt.c -o ttt -O3 -Wall -W -ffast-math -lm。程序通过自我对弈进行强化学习，默认对弈 15 万局后进入人机交互模式。","不需要 Python",[],[18],null,"2026-03-27T02:49:30.150509","2026-04-06T09:45:07.596466",[],[]]