[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-dickreuter--neuron_poker":3,"tool-dickreuter--neuron_poker":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",152630,2,"2026-04-12T23:33:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器消费而优化。",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":75,"owner_location":76,"owner_email":75,"owner_twitter":75,"owner_website":77,"owner_url":78,"languages":79,"stars":100,"forks":101,"last_commit_at":102,"license":103,"difficulty_score":104,"env_os":105,"env_gpu":106,"env_ram":106,"env_deps":107,"category_tags":117,"github_topics":118,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":126,"updated_at":127,"faqs":128,"releases":158},6996,"dickreuter\u002Fneuron_poker","neuron_poker","Texas holdem OpenAi gym poker environment with reinforcement learning based on keras-rl. Includes virtual rendering and montecarlo for equity calculation.","Neuron Poker 是一个专为德州扑克设计的开源人工智能训练环境，基于 OpenAI Gym 构建。它旨在解决如何让机器通过自我对弈学习复杂扑克策略的难题，为开发者提供了一个完整的实验平台，用于训练和测试各类扑克 AI 代理。\n\n该项目非常适合人工智能研究人员、强化学习开发者以及扑克算法爱好者使用。用户不仅可以运行内置的随机玩家或基于规则的策略模型，还能利用键盘实时操控玩家进行观察，甚至部署基于 Keras-RL 的深度 Q 网络（DQN）智能体进行端到端的强化学习训练。\n\nNeuron Poker 的核心技术亮点在于其高效的胜率计算能力。项目集成了蒙特卡洛模拟算法，并特别提供了 C++ 版本实现，相比纯 Python 版本速度提升约 500 倍，显著加速了训练过程。此外，它还支持遗传算法优化策略阈值、虚拟图形渲染复盘以及对局数据可视化分析。无论是想探索博弈论算法，还是希望协作开发更强的扑克机器人，Neuron Poker 都提供了灵活且专业的底层支持。","Neuron Poker: OpenAi gym environment for texas holdem poker\n===========================================================\n\nThis is an environment for training neural networks to play texas\nholdem. Please try to model your own players and create a pull request\nso we can collaborate and create the best possible player.\n\nUsage:\n------\n\nRun:\n\n- Install Python 3.11. I would also recommend installing PyCharm.\n- Install uv with ``curl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh``\n- Create a virtual environment and install dependencies with ``uv sync``\n- Run 6 random players playing against each other:\n  ``uv run poker-random-render`` or\n- To manually control the players: ``uv run poker-keypress-render``\n- Example of genetic algorithm with self improvement: ``uv run poker-equity-improvement``\n- In order to use the C++ version of the equity calculator, you will also need to install Visual Studio 2019 (or GCC over Cygwin may work as well). 
To use it, use the -c option when running main.py.\n- For more advanced users: ``uv run poker-dqn-train-cpp`` will start training the deep Q agent with C++ Monte Carlo for faster calculation\n- Run all tests: ``uv run pytest`` (use -n to run tests in parallel)\n\n.. figure:: doc\u002Ftable.gif\n   :alt:\n\n\nAnalysis of the run\n~~~~~~~~~~~~~~~~~~~\n\nAt the end of an episode, the performance of the players can be observed via the summary plot.\n|image0|\n\nPackages and modules:\n~~~~~~~~~~~~~~~~~~~~~\n\nmain.py: entry point and command line interpreter. Runs agents with the gym. The docstring at the top of the file describes the command line options.\nThey are interpreted by docopt.\n\ngym\\_env\n~~~~~~~~\n\n-  ``env.py``: Texas Hold’em unlimited openai gym environment &\n   ``rendering.py``: rendering graphics while playing\n\nagents\n~~~~~~\nPlease add your model based agents here.\n\n-  ``agent_random.py``: an agent making random decisions\n-  ``agent_keypress.py``: an agent taking decisions via keypress\n-  ``agent_consider_equity.py``: an agent considering equity information\n-  ``agent_keras_rl_dqn.py``: Deep Q learning agent, using keras-rl for deep reinforcement learning\n-  ``agent_custom_q1.py``: Custom implementation of deep q learning\n\nNote that the observation property is a dictionary that contains all the information about the players and table that can be used to make a decision.\n\nCustom implementation of q learning\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCustom implementation of reinforcement learning. This package is now in a separate repo:\nwww.github.com\u002Fdickreuter\u002Ftf_rl\n\n\ntools\n~~~~~\n\n-  ``hand_evaluator.py``: evaluate the best hand of multiple players\n-  ``helper.py``: helper functions\n-  ``montecarlo_numpy2.py``: fast numpy based montecarlo simulation to\n   calculate equity. Not yet working correctly. Some tests are failing. Feel free to fix them.\n-  ``montecarlo_python.py``: relatively slow python based montecarlo for equity calculation. Supports\n   preflop ranges for other players.\n-  ``montecarlo_cpp``: C++ implementation of the equity calculator. Around 500x faster than the python version\n\ntests\n^^^^^\n\n-  ``test_gym_env.py``: tests for the env.\n-  ``test_montecarlo.py``: tests for the hands evaluator and python\n   based equity calculator.\n-  ``test_montecarlo_numpy.py``: tests for the numpy montecarlo\n\n\nRoadmap\n-------\n\nAgents\n~~~~~~\n\n- [x] Agent based on user interaction (keypress)\n- [x] Random agent\n- [x] Equity based strategy (i.e. call and bet above threshold)\n- [x] Equity based strategy with genetic algorithm, adjusting the threshold based on winning agent.\n- [x] C++ implementation of equity calculator to significantly speed up runs\n- [x] Agent based on reinforcement learning with experience replay (Deep Q learning, based on keras-rl)\n- [\u002F] Custom agents (see above section for more details)\n\nReinforcement learning: Deep Q agent\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n``neuron_poker.agents.agent_dqn`` implements a deep q agent with the help of keras-rl.\nA number of parameters can be set:\n\n- nb_max_start_steps = 20  # maximum of random actions at the beginning\n- nb_steps_warmup = 75  # before training starts, should be higher than start steps\n- nb_steps = 10000  # total number of steps\n- memory_limit = int(nb_steps \u002F 3)  # limiting the memory of experience replay\n- batch_size = 500  # number of items sampled from memory to train
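\n\nA minimal sketch of how these parameters map onto the keras-rl API (``model``, ``env`` and the ``Adam`` optimizer import are assumed to already exist; the exact wiring in ``agent_keras_rl_dqn.py`` may differ):\n\n.. code:: python\n\n    from rl.agents.dqn import DQNAgent\n    from rl.memory import SequentialMemory\n    from rl.policy import BoltzmannQPolicy\n\n    nb_steps = 10000\n    memory = SequentialMemory(limit=int(nb_steps \u002F 3), window_length=1)  # experience replay buffer\n    dqn = DQNAgent(model=model, nb_actions=env.action_space.n, memory=memory,\n                   nb_steps_warmup=75, batch_size=500, policy=BoltzmannQPolicy())\n    dqn.compile(Adam(lr=1e-3), metrics=['mae'])\n    dqn.fit(env, nb_steps=nb_steps, nb_max_start_steps=20)  # up to 20 random actions at the start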
\n\nTraining can be observed via tensorboard (run ``tensorboard --logdir=.\u002FGraph`` from command line)\n|image2|\n\n\nHow to contribute\n-----------------\n\nLaunching from main.py\n~~~~~~~~~~~~~~~~~~~~~~\n\nIn ``main.py`` an agent is launched as follows (here adding 6 random\nagents to the table). To edit what is accepted to main.py via command\nline, simply add another line in the docstring at the top of main.py.\n\n.. code:: python\n\n    def random_action(render):\n        \"\"\"Create an environment with 6 random players\"\"\"\n        env_name = 'neuron_poker-v0'\n        num_of_players = 6\n        stack = 500\n        env = gym.make(env_name, num_of_players=num_of_players, initial_stacks=stack)\n        for _ in range(num_of_players):\n            player = RandomPlayer(stack)\n            env.add_player(player)\n\n        env.reset()\n\nAs you can see, as a first step, the environment needs to be created. As a second step, different agents need to be\nadded to the table. As a third step the game is kicked off with a reset. Agents with autoplay set to True will automatically\nplay, by having the action method of their class called. Alternatively you can use the PlayerShell class,\nand the environment will require you to call the step function manually and loop over it. This may be helpful\nwhen using other packages which are designed to interface with the gym, such as keras-rl.\n\nAdding a new model \u002F agent\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAn example agent can be seen in ``agent_random.py``.\n\nTo build a new agent, a new agent class needs to be created, where the following\nmethod is modified. You will need to use the observation parameter,\nwhich contains the current state of the table, the players and the\nagent itself, to determine the best action.\n\n.. code:: python\n\n    def action(self, action_space, observation):  # pylint: disable=no-self-use\n        \"\"\"Mandatory method that calculates the move based on the observation array and the action space.\"\"\"\n        _ = observation  # not using the observation for random decision\n        this_player_action_space = {Action.FOLD, Action.CHECK, Action.CALL, Action.RAISE_POT, Action.RAISE_HAlF_POT}\n        possible_moves = this_player_action_space.intersection(set(action_space))\n        action = random.choice(list(possible_moves))\n        return action
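\n\nIf the agent should actually use the observation, the same hook can implement, for example, a simple equity threshold. A rough sketch (the observation keys here are illustrative; see the state classes in the next section for what is actually available):\n\n.. code:: python\n\n    def action(self, action_space, observation):\n        \"\"\"Toy equity-threshold strategy: raise strong hands, call medium ones, otherwise check\u002Ffold.\"\"\"\n        equity = observation['player_data']['equity_to_river']  # montecarlo estimate of win probability\n        if equity > 0.7 and Action.RAISE_POT in action_space:\n            return Action.RAISE_POT\n        if equity > 0.4 and Action.CALL in action_space:\n            return Action.CALL\n        return Action.CHECK if Action.CHECK in action_space else Action.FOLD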
\n\nObserving the state\n~~~~~~~~~~~~~~~~~~~\n\nThe state is represented as a numpy array that contains the following\ninformation:\n\n.. code:: python\n\n    class CommunityData:\n        def __init__(self, num_players):\n            self.current_player_position = [False] * num_players  # ix[0] = dealer\n            self.stage = [False] * 4  # one hot: preflop, flop, turn, river\n            self.community_pot = 0.0  # the full pot of this hand\n            self.current_round_pot = 0.0  # the pot of funds added in this round\n            self.active_players = [False] * num_players  # one hot encoded, 0 = dealer\n            self.big_blind = 0\n            self.small_blind = 0\n\n\n    class StageData:  # kept as a list of 8:\n        \"\"\"Preflop, flop, turn and river, 2 rounds each\"\"\"\n\n        def __init__(self, num_players):\n            self.calls = [False] * num_players  # ix[0] = dealer\n            self.raises = [False] * num_players  # ix[0] = dealer\n            self.min_call_at_action = [0] * num_players  # ix[0] = dealer\n            self.contribution = [0] * num_players  # ix[0] = dealer\n            self.stack_at_action = [0] * num_players  # ix[0] = dealer\n            self.community_pot_at_action = [0] * num_players  # ix[0] = dealer\n\n\n    class PlayerData:\n        \"\"\"Player specific information\"\"\"\n\n        def __init__(self):\n            self.position = None  # one hot encoded, 0 = dealer\n            self.equity_to_river = 0.0  # montecarlo\n            self.equity_to_river_2plr = 0.0  # montecarlo\n            self.equity_to_river_3plr = 0.0  # montecarlo\n            self.stack = 0  # current player stack
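\n\nThe classes above are descriptive; before they reach the agent they are flattened into a single numpy vector. A rough sketch of that flattening (illustrative only; the authoritative ordering lives in ``gym_env\u002Fenv.py``):\n\n.. code:: python\n\n    import numpy as np\n\n    def flatten_state(community, stages, player):\n        \"\"\"Concatenate the state objects into one flat feature vector (illustrative).\"\"\"\n        parts = []\n        parts += community.current_player_position + community.stage + community.active_players\n        parts += [community.community_pot, community.current_round_pot,\n                  community.big_blind, community.small_blind]\n        for stage in stages:  # 8 StageData entries: 2 betting rounds for each of the 4 stages\n            parts += stage.calls + stage.raises + stage.min_call_at_action\n            parts += stage.contribution + stage.stack_at_action + stage.community_pot_at_action\n        parts += [player.equity_to_river, player.stack]\n        return np.array(parts, dtype=np.float32)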
\n\nHow to integrate your code on Github\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIt will be hard for one person alone to beat the world at poker. That's\nwhy this repo aims to have a collaborative environment, where models can\nbe added and evaluated.\n\nTo contribute do the following:\n\n- Get PyCharm and build the virtual python environment. You can do: ``uv sync``\n- If you want to use the 500x faster C++ based equity calculator, also install Visual Studio, but this is not necessary\n- Clone your fork to your local machine. You can do this directly from pycharm: VCS --> check out from version control --> git\n- Add as remote the original repository where you created the fork from and call it upstream (the connection to your fork should be called origin). This can be done with vcs --> git --> remotes\n- Create a new branch: click on master at the bottom right, and then click on 'new branch'\n- Make your edits.\n- Ensure all tests pass. Under file --> settings --> python integrated tools switch to pytest (see screenshot). |image1| You can then just right click on the tests folder and run all tests. All tests need to pass. Make sure to add your own tests by simply naming the function test\\_...\n- Make sure all the tests are passing. Best run pytest as described above (in pycharm just right click on the tests folder and run it). If a test fails, you can debug the test, by right clicking on it and putting breakpoints, or even open a console at the breakpoint: https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F19329601\u002Finteractive-shell-debugging-with-pycharm\n- Commit your changes (CTRL+K)\n- Push your changes to your origin (your fork) (CTRL+SHIFT+K)\n- To bring your branch up to date with upstream master, if it has moved on: rebase onto upstream master: click on your branch name at the bottom right of pycharm, then click on upstream\u002Fmaster, then rebase onto. You may need to resolve some conflicts. Once this is done, make sure to always force-push (ctrl+shift+k), (not just push). This can be done by selecting the dropdown next to push and choosing force-push (important: don't push and merge a rebased branch with your remote)\n- Create a pull request on github.com to merge your branch with the upstream master.\n- When your pull request is approved, it will be merged into the upstream\u002Fmaster.\n\n.. |image0| image:: doc\u002Fpots.png\n.. |image1| image:: doc\u002Fpytest.png\n.. |image2| image:: doc\u002Ftensorboard-example.png\n","神经元扑克：用于德州扑克的 OpenAI Gym 环境\n===========================================================\n\n这是一个用于训练神经网络玩德州扑克的环境。请尝试构建您自己的玩家模型，并提交拉取请求，以便我们共同协作，打造出最优秀的扑克玩家。\n\n使用方法：\n------\n\n运行：\n\n- 安装 Python 3.11，同时建议安装 PyCharm。\n- 使用命令 ``curl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh`` 安装 uv。\n- 创建虚拟环境并使用 ``uv sync`` 安装依赖项。\n- 运行 6 名随机玩家相互对战：``uv run poker-random-render``；或\n- 手动控制玩家：``uv run poker-keypress-render``。\n- 自我改进的遗传算法示例：``uv run poker-equity-improvement``。\n- 若要使用 C++ 版本的胜率计算器，还需安装 Visual Studio 2019（或者通过 Cygwin 使用 GCC 也可能可行）。使用时，在运行 main.py 时添加 -c 参数。\n- 对于高级用户：``uv run poker-dqn-train-cpp`` 将启动深度 Q 学习智能体的训练，采用 C++ 蒙特卡洛方法以加快计算速度。\n- 运行所有测试：``uv run pytest``（使用 -n 参数可并行运行测试）。\n\n.. figure:: doc\u002Ftable.gif\n   :alt:\n\n\n运行分析\n~~~~~~~~~~~~~~~~~~~\n\n在每局游戏结束时，可以通过汇总图表观察玩家的表现。\n|image0|\n\n包和模块：\n~~~~~~~~~~~~~~~~~~~~~\n\nmain.py：入口点及命令行解释器。它通过 gym 运行智能体。文件顶部的文档字符串描述了命令行选项，这些选项由 docopt 解析。\n\ngym_env\n~~~~~~~~\n\n- ``env.py``：德州扑克无限注版 OpenAI Gym 环境；\n- ``rendering.py``：游戏过程中的图形渲染。\n\nagents\n~~~~~~\n请在此处添加基于您模型的智能体。\n\n- ``agent_random.py``：随机决策智能体。\n- ``agent_keypress.py``：通过按键输入做出决策的智能体。\n- ``agent_consider_equity.py``：考虑胜率信息的智能体。\n- ``agent_keras_rl_dqn.py``：基于 Keras-RL 的深度强化学习智能体。\n- ``agent_custom_q1.py``：自定义实现的深度 Q 学习智能体。\n\n请注意，observation 属性是一个字典，包含可用于决策的所有玩家和牌桌信息。\n\n自定义 Q 学习实现\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n强化学习的自定义实现。该包现已迁至独立仓库：www.github.com\u002Fdickreuter\u002Ftf_rl\n\n\ntools\n~~~~~\n\n- ``hand_evaluator.py``：评估多名玩家的最佳牌型。\n- ``helper.py``：辅助函数。\n- ``montecarlo_numpy2.py``：基于 NumPy 的快速蒙特卡洛模拟，用于计算胜率。目前尚未完全正确，部分测试失败。欢迎修复。\n- ``montecarlo_python.py``：相对较慢的 Python 版本蒙特卡洛胜率计算工具，支持设置其他玩家的翻前（preflop）范围。\n- ``montecarlo_cpp``：C++ 实现的胜率计算器，速度约为 Python 版本的 500 倍。\n\ntests\n^^^^^\n\n- ``test_gym_env.py``：针对环境的测试。\n- ``test_montecarlo.py``：针对牌型评估器和 Python 版本胜率计算器的测试。\n- ``test_montecarlo_numpy.py``：针对 NumPy 版本蒙特卡洛的测试。\n\n\n路线图\n-------\n\nAgents\n~~~~~~\n\n- [x] 基于用户交互（按键）的智能体。\n- [x] 随机智能体。\n- [x] 基于胜率的策略（例如高于阈值时跟注或加注）。\n- [x] 结合遗传算法的胜率策略，根据获胜智能体调整阈值。\n- [x] 使用 C++ 实现的胜率计算器，显著提升运行速度。\n- [x] 基于经验回放的强化学习智能体（深度 Q 学习，基于 Keras-RL）。\n- [\u002F] 自定义智能体（详情见上文）。\n\n强化学习：深度 Q 智能体\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n``neuron_poker.agents.agent_dqn`` 在 Keras-RL 的帮助下实现了深度 Q 学习智能体。可以设置以下参数：\n\n- nb_max_start_steps = 20  # 开始阶段的最大随机动作次数。\n- nb_steps_warmup = 75  # 训练开始前的预热步数，应大于 start steps。\n- nb_steps = 10000  # 总步数。\n- memory_limit = int(nb_steps \u002F 3)  # 限制经验回放缓存大小。\n- batch_size = 500  # 从缓存中采样的训练批次大小。\n\n可通过 TensorBoard 观察训练过程（在命令行中运行 ``tensorboard --logdir=.\u002FGraph``）。\n|image2|\n\n\n如何贡献\n-----------------\n\n从 main.py 启动\n~~~~~~~~~~~~~~~~~~~~~~\n\n在 ``main.py`` 中，智能体的启动方式如下（此处向牌桌添加 6 名随机智能体）。若要编辑通过命令行传递给 main.py 的内容，只需在 main.py 文件顶部的文档字符串中添加一行即可。
\n\n.. code:: python\n\n    def random_action(render):\n        \"\"\"创建一个有 6 名随机玩家的环境\"\"\"\n        env_name = 'neuron_poker-v0'\n        num_of_players = 6\n        stack = 500\n        env = gym.make(env_name, num_of_players=num_of_players, initial_stacks=stack)\n        for _ in range(num_of_players):\n            player = RandomPlayer(stack)\n            env.add_player(player)\n\n        env.reset()\n\n如您所见，首先需要创建环境，其次将不同智能体添加到牌桌上，最后通过 reset 方法启动游戏。设置 autoplay 为 True 的智能体会自动进行游戏，调用其类中的 action 方法。此外，您也可以使用 PlayerShell 类，此时环境会要求您手动调用 step 函数并循环执行。这在使用其他专为与 gym 接口设计的包时可能很有帮助，例如 Keras-RL。
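\n\n如果使用 PlayerShell，则需要自己驱动 step 循环。下面是一个极简示意（my_agent 及其 choose_action 方法为假设；PlayerShell 的导入路径与构造参数以 gym_env 中的实际代码为准）：\n\n.. code:: python\n\n    from gym_env.env import PlayerShell  # 假设的导入路径\n\n    env = gym.make('neuron_poker-v0', num_of_players=2, initial_stacks=500)\n    env.add_player(PlayerShell(name='keras-rl', stack_size=500))  # 占位玩家，由外部代码驱动\n    observation = env.reset()\n    done = False\n    while not done:\n        action = my_agent.choose_action(env.action_space, observation)  # 自定义决策逻辑\n        observation, reward, done, info = env.step(action)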
\n\n添加新模型\u002F智能体\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n示例智能体可在 agent_random.py 中找到。\n\n要构建新的智能体，需创建一个智能体类，并修改以下方法。您需要使用 observation 参数——它包含了当前牌桌、玩家以及智能体自身的状态——作为决定最佳行动的依据。\n\n.. code:: python\n\n    def action(self, action_space, observation):  # pylint: disable=no-self-use\n        \"\"\"必须的方法，根据观察数组和动作空间计算下一步行动。\"\"\"\n        _ = observation  # 不使用观察结果进行随机决策\n        this_player_action_space = {Action.FOLD, Action.CHECK, Action.CALL, Action.RAISE_POT, Action.RAISE_HAlF_POT}\n        possible_moves = this_player_action_space.intersection(set(action_space))\n        action = random.choice(list(possible_moves))\n        return action\n\n观察状态\n~~~~~~~~~~~~~~~~~~~\n\n状态以 NumPy 数组的形式表示，包含以下信息：\n\n.. code:: python\n\n    class CommunityData:\n        def __init__(self, num_players):\n            self.current_player_position = [False] * num_players  # ix[0] = 庄家\n            self.stage = [False] * 4  # one hot：翻牌前、翻牌圈、转牌圈、河牌圈\n            self.community_pot = 0.0  # 当前牌局的总底池金额\n            self.current_round_pot = 0.0  # 本轮加入的底池金额\n            self.active_players = [False] * num_players  # one hot 编码，0 = 庄家\n            self.big_blind = 0\n            self.small_blind = 0\n\n\n    class StageData:  # 以列表形式存在，共 8 个：\n        \"\"\"翻牌前、翻牌圈、转牌圈和河牌圈，每轮各 2 次\"\"\"\n\n        def __init__(self, num_players):\n            self.calls = [False] * num_players  # ix[0] = 庄家\n            self.raises = [False] * num_players  # ix[0] = 庄家\n            self.min_call_at_action = [0] * num_players  # ix[0] = 庄家\n            self.contribution = [0] * num_players  # ix[0] = 庄家\n            self.stack_at_action = [0] * num_players  # ix[0] = 庄家\n            self.community_pot_at_action = [0] * num_players  # ix[0] = 庄家\n\n\n    class PlayerData:\n        \"\"\"玩家特定信息\"\"\"\n\n        def __init__(self):\n            self.position = None  # one hot 编码，0 = 庄家\n            self.equity_to_river = 0.0  # 蒙特卡洛模拟\n            self.equity_to_river_2plr = 0.0  # 蒙特卡洛模拟\n            self.equity_to_river_3plr = 0.0  # 蒙特卡洛模拟\n            self.stack = 0  # 当前玩家筹码量\n\n如何在 GitHub 上集成你的代码\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n单靠一个人很难在扑克比赛中战胜世界顶尖水平。因此，这个仓库旨在创建一个协作环境，允许大家添加和评估不同的模型。\n\n要贡献代码，请按照以下步骤操作：\n\n- 安装 PyCharm 并搭建虚拟 Python 环境。可以使用命令 ``uv sync``。\n- 如果你想使用快 500 倍的基于 C++ 的胜率计算器，还需要安装 Visual Studio，但这不是必需的。\n- 将你的 Fork 克隆到本地机器上。你可以在 PyCharm 中直接操作：VCS --> 从版本控制检出 --> Git。\n- 添加原始仓库作为远程仓库，并命名为 upstream（与你的 Fork 的连接应称为 origin）。这可以通过 VCS --> Git --> Remotes 来完成。\n- 创建新分支：点击右下角的 master，然后选择“新建分支”。\n- 进行代码修改。\n- 确保所有测试都通过。在 File --> Settings --> Python 集成工具中切换到 pytest（见截图）。|image1| 之后，只需右键点击 tests 文件夹并运行所有测试即可。所有测试必须全部通过。记得为你的代码添加相应的测试，只需将函数命名为 test\\_...。\n- 确保所有测试都成功通过。最好按照上述方法运行 pytest（在 PyCharm 中右键点击 tests 文件夹并执行）。如果测试失败，可以通过右键点击测试并设置断点来调试，甚至可以在断点处打开控制台：https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F19329601\u002Finteractive-shell-debugging-with-pycharm。\n- 提交更改（CTRL+K）。\n- 将更改推送到你的 origin（即你的 Fork）（CTRL+SHIFT+K）。\n- 若要使你的分支与上游 master 保持同步，尤其是当上游 master 发生更新时：执行 rebase 操作，将你的分支变基到 upstream master。这可能需要解决一些冲突。完成后，务必使用强制推送（ctrl+shift+k），而不是普通的推送。可以通过选择推送旁边的下拉菜单，然后选择“强制推送”来完成（重要提示：不要简单地推送并合并变基后的分支）。\n- 在 GitHub 上创建一个 Pull Request，将你的分支合并到 upstream master。\n- 当你的 Pull Request 被批准后，它将会被合并到 upstream\u002Fmaster 中。\n\n.. |image0| image:: doc\u002Fpots.png\n.. |image1| image:: doc\u002Fpytest.png\n.. |image2| image:: doc\u002Ftensorboard-example.png","# Neuron Poker 快速上手指南\n\nNeuron Poker 是一个基于 OpenAI Gym 的德州扑克环境，专为训练神经网络玩德州扑克而设计。本指南将帮助你快速搭建环境并运行示例。\n\n## 环境准备\n\n在开始之前，请确保你的系统满足以下要求：\n\n*   **操作系统**: Windows, macOS 或 Linux\n*   **Python 版本**: Python 3.11 (必须)\n*   **推荐 IDE**: PyCharm (可选，但推荐用于开发和调试)\n*   **可选依赖 (高性能模式)**:\n    *   若需使用速度提升约 500 倍的 C++ 胜率计算器，Windows 用户需安装 **Visual Studio 2019** (包含 C++ 构建工具)，或尝试通过 Cygwin 使用 GCC。\n    *   Linux\u002FmacOS 用户可使用系统自带的 GCC\u002FClang 编译工具链。\n\n## 安装步骤\n\n本项目推荐使用 `uv` 进行依赖管理和虚拟环境创建，以获得更快的安装速度。\n\n1.  **安装 uv 工具**\n    在终端中运行以下命令安装 `uv`：\n    ```bash\n    curl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh\n    ```\n    *(注：国内用户若下载缓慢，可手动访问 https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh 下载脚本后本地运行)*\n\n2.  **克隆项目并同步依赖**\n    进入项目目录，创建虚拟环境并安装所有依赖：\n    ```bash\n    uv sync\n    ```\n\n## 基本使用\n\n安装完成后，你可以尝试以下几种运行模式：\n\n### 1. 运行随机玩家对战（最简单示例）\n启动一个包含 6 个随机决策玩家的牌局，并渲染图形界面：\n```bash\nuv run poker-random-render\n```\n\n### 2. 手动键盘控制\n通过键盘按键手动控制玩家决策：\n```bash\nuv run poker-keypress-render\n```\n\n### 3. 遗传算法自我进化示例\n运行基于遗传算法的胜率改进示例：\n```bash\nuv run poker-equity-improvement\n```\n\n### 4. 高级：训练深度 Q 学习代理 (DQN)\n如果你已安装 C++ 编译环境，可以使用蒙特卡洛加速训练深度强化学习代理：\n```bash\nuv run poker-dqn-train-cpp\n```\n*提示：训练过程可通过 TensorBoard 观察，运行 `tensorboard --logdir=.\u002FGraph` 即可查看可视化图表。*\n\n### 5. 运行测试\n确保所有功能正常：\n```bash\nuv run pytest\n```\n*(可使用 `-n` 参数并行运行测试以加快速度)*\n\n---\n**下一步建议**：\n查看 `agents` 文件夹中的现有代理实现（如 `agent_random.py`），复制并修改 `action` 方法来创建你自己的智能扑克玩家，然后提交 Pull Request 参与协作。
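\n\n动手前可以先参考下面的最小骨架（仅为示意：`Action` 枚举的导入路径与基类接口以仓库中 `agents\u002Fagent_random.py` 的实际代码为准）：\n\n```python\nimport random\n\nfrom gym_env.enums import Action  # 假设的导入路径\n\n\nclass Player:\n    \"\"\"最小自定义智能体骨架：能跟注时偏好跟注，否则随机行动。\"\"\"\n\n    def __init__(self, name='my-agent'):\n        self.name = name\n        self.autoplay = True  # 由环境自动调用本类的 action 方法\n\n    def action(self, action_space, observation):\n        \"\"\"根据动作空间与观察值返回一个动作。\"\"\"\n        _ = observation\n        if Action.CALL in action_space:\n            return Action.CALL\n        return random.choice(list(action_space))\n```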
监控。","3.11",[111,112,113,114,115,116],"uv","gym","keras-rl","numpy","pytest","tensorboard",[14,13],[119,120,121,122,123,124,125],"poker","openai-gym","holdem","reinforcement-learning","neural-network","gym-environment","pokerbot","2026-03-27T02:49:30.150509","2026-04-13T13:39:21.483690",[129,134,139,144,149,154],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},31509,"运行推理或 selfplay 时出现 'AssertionError: The environment must specify an observation space' 错误怎么办？","这是因为新版本的 gym 库破坏了向后兼容性。请编辑项目根目录下的 requirements.txt 文件，将 'gym' 这一行修改为指定版本 'gym==0.23.1'，然后重新运行安装命令：pip install -r requirements.txt。","https:\u002F\u002Fgithub.com\u002Fdickreuter\u002Fneuron_poker\u002Fissues\u002F73",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},31510,"训练 DQN 后加载模型玩牌时出现 'AttributeError: NoneType object has no attribute load_weights' 错误如何解决？","这通常是由于 keras-rl 与 TensorFlow 2 不兼容导致的。建议尝试使用以下依赖组合：tensorflow==2.0.0b1, Keras==2.3.1, keras-rl==0.4.2。如果问题依旧，可能需要检查代码中是否正确初始化了 DQN 代理，或者暂时回退到兼容的 TensorFlow 版本。","https:\u002F\u002Fgithub.com\u002Fdickreuter\u002Fneuron_poker\u002Fissues\u002F16",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},31511,"为什么在对手加注后，我无法再次加注（Re-raise），选项里只有跟注和弃牌？","这是环境参数限制导致的。维护者已在环境中添加了一个参数，默认限制了加注轮数。你可以通过修改环境配置将该参数值增大（例如设置为 9999）来允许无限次或更多次的再加注。相关修复已合并到 PR #14 中。","https:\u002F\u002Fgithub.com\u002Fdickreuter\u002Fneuron_poker\u002Fissues\u002F13",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},31512,"DQN 训练日志显示最后一个加注者在所有人都跟注或弃牌后还能再次行动（如跟注或弃牌），这是 Bug 吗？","是的，这是一个逻辑判断错误。代码中判断回合结束的条件使用了大于号（>），导致最后一步未被正确识别。解决方法是修改 gym_env\u002Fenv.py 文件中 PlayerCycle.next_player 方法里的判断逻辑，将 'self.counter > self.max_steps_after_raiser + raiser_reference' 改为 'self.counter >= self.max_steps_after_raiser + raiser_reference'。","https:\u002F\u002Fgithub.com\u002Fdickreuter\u002Fneuron_poker\u002Fissues\u002F25",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},31513,"为什么权益（Equity）计算的迭代次数默认只有 10 次，导致结果方差很大？","默认设置确实较低以追求速度，但这会导致高方差。用户可以通过修改环境变量或配置参数来提高蒙特卡洛模拟的迭代次数（例如提高到 5000 次），以获得更准确的权益估计，特别是在使用 C++ 实现时成本增加不明显。","https:\u002F\u002Fgithub.com\u002Fdickreuter\u002Fneuron_poker\u002Fissues\u002F32",{"id":155,"question_zh":156,"answer_zh":157,"source_url":133},31514,"运行程序时提示缺少 'GLU' 模块或导入错误怎么办？","这是因为依赖文件中遗漏了 'gluon' 或相关库。请在 requirements.txt 文件中添加 'GLU'（或具体的 gluonnlp 等相关包名，视具体报错而定），然后重新运行 pip install -r requirements.txt 进行安装。",[]]