[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-NeymarL--ChineseChess-AlphaZero":3,"tool-NeymarL--ChineseChess-AlphaZero":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":10,"env_os":93,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":104,"github_topics":105,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":146},1272,"NeymarL\u002FChineseChess-AlphaZero","ChineseChess-AlphaZero","Implement AlphaZero\u002FAlphaGo Zero methods on Chinese chess.","ChineseChess-AlphaZero 是一个基于 AlphaZero 算法的中国象棋人工智能项目，旨在通过强化学习训练出高水平的中国象棋 AI。它借鉴了 DeepMind 在围棋、国际象棋和日本将棋领域的研究成果，并结合了开源社区的优秀实现，致力于打造全球最强的中国象棋 AI。\n\n该项目解决了传统中国象棋引擎依赖人工规则和经验的问题，通过自我对弈和深度神经网络训练，使 AI 能够自主学习并提升棋力。它适用于研究人员和开发者，尤其是对强化学习、深度学习以及博弈论感兴趣的人群。项目提供了完整的训练流程和内置图形界面，方便用户进行测试和交互。\n\n其独特之处在于采用了分布式训练架构，支持多人协作训练，同时集成了监督学习和模型评估模块，显著提升了训练效率和模型性能。用户可以通过命令行或图形界面与 AI 对弈，体验高水平的中国象棋 AI 棋力。","# 中国象棋Zero（CCZero）\n\n\u003Ca href=\"https:\u002F\u002Fcczero.org\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNeymarL_ChineseChess-AlphaZero_readme_e89f75f918db.png\" alt=\"App Icon\" \u002F>\n\u003C\u002Fa>\n\n## About\n\nChinese Chess reinforcement learning 
by [AlphaZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.01815) methods.\n\nThis project is based on these main resources:\n1. DeepMind's Oct 19th publication: [Mastering the Game of Go without Human Knowledge](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature24270.epdf?author_access_token=VJXbVjaSHxFoctQQ4p2k4tRgN0jAjWel9jnR3ZoTv0PVW4gB86EEpGqTRDtpIz-2rmo8-KG06gqVobU5NSCFeHILHcVFUeMsbvwS-lxjqQGg98faovwjxeTUgZAUMnRQ).\n2. The **great** Reversi\u002FChess\u002FChinese chess development of the DeepMind ideas that @mokemokechicken\u002F@Akababa\u002F@TDteach did in their repos: https:\u002F\u002Fgithub.com\u002Fmokemokechicken\u002Freversi-alpha-zero, https:\u002F\u002Fgithub.com\u002FAkababa\u002FChess-Zero, https:\u002F\u002Fgithub.com\u002FTDteach\u002FAlphaZero_ChineseChess\n3. A Chinese chess engine with a GUI: https:\u002F\u002Fgithub.com\u002Fmm12432\u002FMyChess\n\n\n## Help to train\n\nIn order to build a strong Chinese chess AI following the same techniques as AlphaZero, we need to run this as a distributed project, as it requires a huge amount of computation.\n\nIf you want to join us in building the best Chinese chess AI in the world:\n\n* For instructions, see the [wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki)\n* For live status, see https:\u002F\u002Fcczero.org\n\n![elo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNeymarL_ChineseChess-AlphaZero_readme_aba0cfc2d8ac.png)\n\n\n## Environment\n\n* Python 3.6.3\n* tensorflow-gpu: 1.3.0\n* Keras: 2.0.8\n\n\n## Modules\n\n### Reinforcement Learning\n\nThis AlphaZero implementation consists of two workers: `self` and `opt`.\n\n* `self` is the Self-Play worker, which generates training data through self-play using the BestModel.\n* `opt` is the Trainer worker, which trains the model and generates new models.\n\nTo speed up training, two more workers are involved:\n\n* `sl` is the Supervised Learning worker, which trains on game records crawled from the Internet.\n* `eval` is the Evaluator worker, which evaluates the 
NextGenerationModel with the current BestModel.\n\n### Built-in GUI\n\nRequirement: pygame\n\n```bash\npython cchess_alphazero\u002Frun.py play\n```\n\n**Screenshots**\n\n![board](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNeymarL_ChineseChess-AlphaZero_readme_64b13597dcd6.png)\n\nYou can choose different board\u002Fpiece styles and sides; see [play with human](#play-with-human).\n\n\n## How to use\n\n### Setup\n\n#### Install libraries\n```bash\npip install -r requirements.txt\n```\n\nIf you want to use the CPU only, replace `tensorflow-gpu` with `tensorflow` in `requirements.txt`.\n\nMake sure Keras is using TensorFlow and you have Python 3.6.3+.\n\n### Configuration\n\n**PlayDataConfig**\n\n* `nb_game_in_file, max_file_num`: The maximum number of games in the training data is `nb_game_in_file * max_file_num`.\n\n**PlayConfig, PlayWithHumanConfig**\n\n* `simulation_num_per_move`: number of MCTS simulations per move.\n* `c_puct`: balance parameter between the value network and policy network in MCTS.\n* `search_threads`: trade-off parameter between speed and accuracy in MCTS.\n* `dirichlet_alpha`: randomness parameter for self-play.\n\n### Full Usage\n\n```\nusage: run.py [-h] [--new] [--type TYPE] [--total-step TOTAL_STEP]\n              [--ai-move-first] [--cli] [--gpu GPU] [--onegreen] [--skip SKIP]\n              [--ucci] [--piece-style {WOOD,POLISH,DELICATE}]\n              [--bg-style {CANVAS,DROPS,GREEN,QIANHONG,SHEET,SKELETON,WHITE,WOOD}]\n              [--random {none,small,medium,large}] [--distributed] [--elo]\n              {self,opt,eval,play,eval,sl,ob}\n\npositional arguments:\n  {self,opt,eval,play,eval,sl,ob}\n                        what to do\n\noptional arguments:\n  -h, --help            show this help message and exit\n  --new                 run from new best model\n  --type TYPE           use normal setting\n  --total-step TOTAL_STEP\n                        set TrainerConfig.start_total_steps\n  --ai-move-first       set human or AI move first\n  --cli                 play with 
AI with CLI, default with GUI\n  --gpu GPU             device list\n  --onegreen            train sl work with onegreen data\n  --skip SKIP           skip games\n  --ucci                play with ucci engine instead of self play\n  --piece-style {WOOD,POLISH,DELICATE}\n                        choose a style of piece\n  --bg-style {CANVAS,DROPS,GREEN,QIANHONG,SHEET,SKELETON,WHITE,WOOD}\n                        choose a style of board\n  --random {none,small,medium,large}\n                        choose a style of randomness\n  --distributed         whether upload\u002Fdownload file from remote server\n  --elo                 whether to compute elo score\n```\n\n### Self-Play\n\n```\npython cchess_alphazero\u002Frun.py self\n```\n\nWhen executed, self-play will start using the BestModel. If the BestModel does not exist, a new random model will be created and set as the BestModel. Self-play records will be stored in `data\u002Fplay_record` and the BestModel will be stored in `data\u002Fmodel`.\n\nOptions\n\n* `--new`: create a new BestModel\n* `--type mini`: use the mini config (see `cchess_alphazero\u002Fconfigs\u002Fmini.py`)\n* `--gpu '1'`: specify which GPU to use\n* `--ucci`: play against a ucci engine (rather than self-play; see `cchess_alphazero\u002Fworker\u002Fplay_with_ucci_engine.py`)\n* `--distributed`: run self-play in distributed mode, uploading play data to the remote server and downloading the latest model from it\n\n**Note1**: To help with training, you should run `python cchess_alphazero\u002Frun.py --type distribute --distributed self` (and do not change the configuration file `configs\u002Fdistribute.py`); for more info, see the [wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FFor-Developers).\n\n**Note2**: If you want to view the self-play records in the GUI, see the [wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FView-self-play-games-in-GUI).\n\n### Trainer\n\n```\npython 
cchess_alphazero\u002Frun.py opt\n```\n\nWhen executed, training will start. The current BestModel will be loaded, and the trained model will be saved every epoch as the new BestModel.\n\nOptions\n\n* `--type mini`: use the mini config (see `cchess_alphazero\u002Fconfigs\u002Fmini.py`)\n* `--total-step TOTAL_STEP`: specify the total number of steps (mini-batches); the total step count affects the learning rate.\n* `--gpu '1'`: specify which GPU to use\n\n**View training logs in TensorBoard**\n\n```\ntensorboard --logdir logs\u002F\n```\n\nThen access `http:\u002F\u002F\u003CThe Machine IP>:6006\u002F`.\n\n### Play with human\n\n**Run with built-in GUI**\n\n```\npython cchess_alphazero\u002Frun.py play\n```\n\nWhen executed, the BestModel will be loaded to play against a human.\n\nOptions\n\n* `--ai-move-first`: if this option is set, the AI moves first; otherwise the human moves first.\n* `--type mini`: use the mini config (see `cchess_alphazero\u002Fconfigs\u002Fmini.py`)\n* `--gpu '1'`: specify which GPU to use\n* `--piece-style WOOD`: choose a piece style; default is `WOOD`\n* `--bg-style CANVAS`: choose a board style; default is `CANVAS`\n* `--cli`: if this flag is set, play against the AI in a CLI environment rather than the GUI\n\n**Note**: Before you start, you need to download or find a font file (`.ttc`) and rename it to `PingFang.ttc`, then put it into `cchess_alphazero\u002Fplay_games`. I have removed the font file from this repo because it's too big, but you can download it from [here](http:\u002F\u002Falphazero.52coding.com.cn\u002FPingFang.ttc).\n\nYou can also download a Windows executable directly from [here](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1uE_zmkn0x9Be_olRL9U9cQ). 
For more information, see the [wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FFor-Non-Developers#%E4%B8%8B%E6%A3%8B).\n\n**UCI mode**\n\n```\npython cchess_alphazero\u002Fuci.py\n```\n\nIf you want to play in general GUIs such as '冰河五四', you can download the Windows executable [here](https:\u002F\u002Fshare.weiyun.com\u002F5cK50Z4). For more information, see the [wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FFor-Non-Developers#%E4%B8%8B%E6%A3%8B).\n\n### Evaluator\n\n```\npython cchess_alphazero\u002Frun.py eval\n```\n\nWhen executed, the NextGenerationModel will be evaluated against the current BestModel. If the NextGenerationModel does not exist, the worker will wait until it exists, checking every 5 minutes.\n\nOptions\n\n* `--type mini`: use the mini config (see `cchess_alphazero\u002Fconfigs\u002Fmini.py`)\n* `--gpu '1'`: specify which GPU to use\n\n### Supervised Learning\n\n```\npython cchess_alphazero\u002Frun.py sl\n```\n\nWhen executed, training will start. The current SLBestModel will be loaded. 
The trained model will be saved every epoch as the new SLBestModel.\n\n*About the data*\n\nI have two data sources: one is downloaded from https:\u002F\u002Fwx.jcloud.com\u002Fmarket\u002Fpacket\u002F10479 ; the other is crawled from http:\u002F\u002Fgame.onegreen.net\u002Fchess\u002FIndex.html (with the option --onegreen).\n\nOptions\n\n* `--type mini`: use the mini config (see `cchess_alphazero\u002Fconfigs\u002Fmini.py`)\n* `--gpu '1'`: specify which GPU to use\n* `--onegreen`: if this flag is set, the `sl_onegreen` worker will train on data crawled from `game.onegreen.net`\n* `--skip SKIP`: if this flag is set, games whose index is less than `SKIP` will not be used for training (only valid when the `onegreen` flag is set)\n","# 中国象棋Zero（CCZero）\n\n\u003Ca href=\"https:\u002F\u002Fcczero.org\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNeymarL_ChineseChess-AlphaZero_readme_e89f75f918db.png\" alt=\"App Icon\" \u002F>\n\u003C\u002Fa>\n\n## 关于\n\n基于[AlphaZero](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.01815)方法的中国象棋强化学习。\n\n本项目主要基于以下资源：\n1. DeepMind于10月19日发表的论文：《无需人类知识即可掌握围棋游戏》（https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fnature24270.epdf?author_access_token=VJXbVjaSHxFoctQQ4p2k4tRgN0jAjWel9jnR3ZoTv0PVW4gB86EEpGqTRDtpIz-2rmo8-KG06gqVobU5NSCFeHILHcVFUeMsbvwS-lxjqQGg98faovwjxeTUgZAUMnRQ）。\n2. @mokemokechicken\u002F@Akababa\u002F@TDteach在他们的仓库中对Reversi\u002FChess\u002F中国象棋进行的出色开发工作：https:\u002F\u002Fgithub.com\u002Fmokemokechicken\u002Freversi-alpha-zero，https:\u002F\u002Fgithub.com\u002FAkababa\u002FChess-Zero，https:\u002F\u002Fgithub.com\u002FTDteach\u002FAlphaZero_ChineseChess。\n3. 
一款带有GUI的中国象棋引擎：https:\u002F\u002Fgithub.com\u002Fmm12432\u002FMyChess\n\n\n## 帮助训练\n\n为了构建一个遵循与AlphaZero相同技术路线的强大中国象棋AI，我们需要以分布式项目的形式来进行，因为这需要大量的计算资源。\n\n如果您想加入我们，共同打造世界上最好的中国象棋AI：\n\n* 指导说明请参见[wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki)\n* 实时状态请访问https:\u002F\u002Fcczero.org\n\n![elo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNeymarL_ChineseChess-AlphaZero_readme_aba0cfc2d8ac.png)\n\n\n## 环境\n\n* Python 3.6.3\n* tensorflow-gpu: 1.3.0\n* Keras: 2.0.8\n\n\n## 模块\n\n### 强化学习\n\n该AlphaZero实现由两个工作进程组成：“self”和“opt”。\n\n* “self”是自我对弈模块，通过使用BestModel进行自我对弈来生成训练数据。\n* “opt”是训练器模块，用于训练模型并生成新模型。\n\n为了加快训练速度，还涉及另外两个工作进程：\n\n* “sl”是监督学习模块，用于训练从互联网上抓取的数据。\n* “eval”是评估器模块，用于用当前的BestModel评估NextGenerationModel。\n\n### 内置GUI\n\n要求：pygame\n\n```bash\npython cchess_alphazero\u002Frun.py play\n```\n\n**截图**\n\n![board](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNeymarL_ChineseChess-AlphaZero_readme_64b13597dcd6.png)\n\n您可以选择不同的棋盘\u002F棋子样式和对弈双方，详见[与人类对弈](#play-with-human)。\n\n\n## 使用方法\n\n### 设置\n\n### 安装库\n```bash\npip install -r requirements.txt\n```\n\n如果您只想使用CPU，请将`requirements.txt`中的`tensorflow-gpu`替换为`tensorflow`。\n\n确保Keras使用TensorFlow，并且您已安装Python 3.6.3及以上版本。\n\n### 配置\n\n**PlayDataConfig**\n\n* `nb_game_in_file, max_file_num`: 训练数据的最大局数为`nb_game_in_file * max_file_num`。\n\n**PlayConfig、PlayWithHumanConfig**\n\n* `simulation_num_per_move`：每步的MCTS模拟次数。\n* `c_puct`：MCTS中价值网络与策略网络的平衡参数。\n* `search_threads`：MCTS中速度与精度的平衡参数。\n* `dirichlet_alpha`：自我对弈中的随机参数。\n\n### 完整使用\n\n```\nusage: run.py [-h] [--new] [--type TYPE] [--total-step TOTAL_STEP]\n              [--ai-move-first] [--cli] [--gpu GPU] [--onegreen] [--skip SKIP]\n              [--ucci] [--piece-style {WOOD,POLISH,DELICATE}]\n              [--bg-style {CANVAS,DROPS,GREEN,QIANHONG,SHEET,SKELETON,WHITE,WOOD}]\n              [--random {none,small,medium,large}] [--distributed] [--elo]\n              {self,opt,eval,play,eval,sl,ob}\n\npositional arguments:\n  
{self,opt,eval,play,eval,sl,ob}\n                        what to do\n\noptional arguments:\n  -h, --help            show this help message and exit\n  --new                 run from new best model\n  --type TYPE           use normal setting\n  --total-step TOTAL_STEP\n                        set TrainerConfig.start_total_steps\n  --ai-move-first       set human or AI move first\n  --cli                 play with AI with CLI, default with GUI\n  --gpu GPU             device list\n  --onegreen            train sl work with onegreen data\n  --skip SKIP           skip games\n  --ucci                play with ucci engine instead of self play\n  --piece-style {WOOD,POLISH,DELICATE}\n                        choose a style of piece\n  --bg-style {CANVAS,DROPS,GREEN,QIANHONG,SHEET,SKELETON,WHITE,WOOD}\n                        choose a style of board\n  --random {none,small,medium,large}\n                        choose a style of randomness\n  --distributed         whether upload\u002Fdownload file from remote server\n  --elo                 whether to compute elo score\n```\n\n### 自我对弈\n\n```\npython cchess_alphazero\u002Frun.py self\n```\n\n执行后，自我对弈将使用BestModel开始。如果不存在BestModel，则会创建一个新的随机模型并将其设为BestModel。自我对弈记录将存储在`data\u002Fplay_record`中，而BestModel则存储在`data\u002Fmodel`中。\n\n选项\n\n* `--new`：创建新的BestModel\n* `--type mini`：使用迷你配置（参见`cchess_alphazero\u002Fconfigs\u002Fmini.py`）\n* `--gpu '1'`：指定使用哪块GPU\n* `--ucci`：是否使用ucci引擎进行对弈（而非自我对弈，参见`cchess_alphazero\u002Fworker\u002Fplay_with_ucci_engine.py`）\n* `--distributed`：以分布式模式运行自我对弈，即把对弈数据上传到远程服务器，并从中下载最新模型\n\n**注1**：为帮助训练，建议运行`python cchess_alphazero\u002Frun.py --type distribute --distributed self`（且不要修改`configs\u002Fdistribute.py`配置文件），更多信息请参见[wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FFor-Developers)。\n\n**注2**：如需在GUI中查看自我对弈记录，参见[wiki](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FView-self-play-games-in-GUI)。\n\n### 训练器\n\n```\npython 
cchess_alphazero\u002Frun.py opt\n```\n\n执行后，训练将开始。系统会加载当前的BestModel。每完成一个epoch，训练好的模型将被保存为新的BestModel。\n\n选项\n\n* `--type mini`：使用迷你配置（参见`cchess_alphazero\u002Fconfigs\u002Fmini.py`）\n* `--total-step TOTAL_STEP`：指定总步数（迷你批次数量）。总步数会影响训练的学习率。\n* `--gpu '1'`：指定使用哪块GPU\n\n**在Tensorboard中查看训练日志**\n\n```\ntensorboard --logdir logs\u002F\n```\n\n然后访问`http:\u002F\u002F\u003C机器IP>:6006\u002F`。\n\n### 与人类对弈\n\n**使用内置GUI进行对弈**\n\n```\npython cchess_alphazero\u002Frun.py play\n```\n\n执行该命令后，将加载BestModel与人类对弈。\n\n选项：\n\n* `--ai-move-first`：若设置此选项，则由AI先行；否则由人类先行。\n* `--type mini`：使用mini配置（参见`cchess_alphazero\u002Fconfigs\u002Fmini.py`）。\n* `--gpu '1'`：指定使用的GPU。\n* `--piece-style WOOD`：选择棋子样式，默认为`WOOD`。\n* `--bg-style CANVAS`：选择棋盘样式，默认为`CANVAS`。\n* `--cli`：若设置此标志，则在CLI环境中与AI对弈，而非GUI界面。\n\n**注意**：在开始之前，您需要下载或找到一个字体文件（`.ttc`），并将其重命名为`PingFang.ttc`，然后放入`cchess_alphazero\u002Fplay_games`目录。由于该字体文件过大，我已将其从本仓库中移除，但您可以从[这里](http:\u002F\u002Falphazero.52coding.com.cn\u002FPingFang.ttc)下载。\n\n您也可以直接从[这里](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1uE_zmkn0x9Be_olRL9U9cQ)下载Windows可执行文件。更多信息请参阅[维基](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FFor-Non-Developers#%E4%B8%8B%E6%A3%8B)。\n\n**UCI模式**\n\n```\npython cchess_alphazero\u002Fuci.py\n```\n\n如果您希望在诸如“冰河五四”等通用GUI中对弈，可以从[这里](https:\u002F\u002Fshare.weiyun.com\u002F5cK50Z4)下载Windows可执行文件。更多信息请参阅[维基](https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fwiki\u002FFor-Non-Developers#%E4%B8%8B%E6%A3%8B)。\n\n### 评估器\n\n```\npython cchess_alphazero\u002Frun.py eval\n```\n\n执行该命令后，将使用当前的BestModel对NextGenerationModel进行评估。如果NextGenerationModel尚不存在，工作进程将等待其生成，并每5分钟检查一次。\n\n选项：\n\n* `--type mini`：使用mini配置（参见`cchess_alphazero\u002Fconfigs\u002Fmini.py`）。\n* `--gpu '1'`：指定使用的GPU。\n\n### 监督学习\n\n```\npython cchess_alphazero\u002Frun.py 
sl\n```\n\n执行该命令后，训练将开始。系统将加载当前的SLBestModel。每完成一个epoch，训练好的模型将被保存为新的SLBestModel。\n\n*关于数据*\n\n我有两个数据来源：一是从https:\u002F\u002Fwx.jcloud.com\u002Fmarket\u002Fpacket\u002F10479下载的数据；二是从http:\u002F\u002Fgame.onegreen.net\u002Fchess\u002FIndex.html爬取的数据（需使用`--onegreen`选项）。\n\n选项：\n\n* `--type mini`：使用mini配置（参见`cchess_alphazero\u002Fconfigs\u002Fmini.py`）。\n* `--gpu '1'`：指定使用的GPU。\n* `--onegreen`：若设置此标志，则`sl_onegreen`工作进程将开始训练从`game.onegreen.net`爬取的数据。\n* `--skip SKIP`：若设置此标志，则索引小于`SKIP`的游戏将不会用于训练（仅在`onegreen`标志启用时有效）。","# ChineseChess-AlphaZero 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- Python 3.6.3+\n- TensorFlow GPU 版本：1.3.0（如需使用 CPU，可替换为 `tensorflow`）\n- Keras：2.0.8\n- 其他依赖见 `requirements.txt`\n\n### 前置依赖\n- 安装 `pygame` 以支持内置 GUI\n- 下载字体文件 `PingFang.ttc` 并放入 `cchess_alphazero\u002Fplay_games\u002F` 目录（可从 [此处](http:\u002F\u002Falphazero.52coding.com.cn\u002FPingFang.ttc) 下载）\n\n---\n\n## 安装步骤\n\n1. **克隆项目仓库**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero.git\n   cd ChineseChess-AlphaZero\n   ```\n\n2. **安装依赖库**\n   ```bash\n   pip install -r requirements.txt\n   ```\n\n   > 如果你希望使用 CPU 进行训练，请在 `requirements.txt` 中将 `tensorflow-gpu` 替换为 `tensorflow`。\n\n3. 
**验证环境**\n   确保 Keras 使用的是 TensorFlow 后端，并且 Python 版本为 3.6.3 或更高版本。\n\n---\n\n## 基本使用\n\n### 最简单的使用示例：与 AI 对战\n\n```bash\npython cchess_alphazero\u002Frun.py play\n```\n\n> 默认会加载当前最佳模型（BestModel）与你对弈。你可以通过以下参数自定义：\n- `--ai-move-first`: AI 先手\n- `--type mini`: 使用轻量配置（适用于资源有限的设备）\n- `--gpu '1'`: 指定使用的 GPU 设备编号\n- `--piece-style WOOD`: 设置棋子样式（默认为 `WOOD`）\n- `--bg-style CANVAS`: 设置棋盘样式（默认为 `CANVAS`）\n- `--cli`: 使用命令行界面（CLI）而非图形界面（GUI）\n\n### 自我对弈（生成训练数据）\n\n```bash\npython cchess_alphazero\u002Frun.py self\n```\n\n> 该命令会使用当前 BestModel 进行自我对弈，生成训练数据并保存到 `data\u002Fplay_record`。\n\n### 模型训练（Trainer）\n\n```bash\npython cchess_alphazero\u002Frun.py opt\n```\n\n> 该命令会使用当前 BestModel 开始训练，每轮训练后会更新 BestModel。\n\n### 查看训练日志（TensorBoard）\n\n```bash\ntensorboard --logdir logs\u002F\n```\n\n然后访问 `http:\u002F\u002F\u003C你的机器IP>:6006\u002F` 查看训练过程。\n\n---\n\n## 可选功能\n\n### 使用 UCI 协议与其他 GUI 工具对接\n\n```bash\npython cchess_alphazero\u002Fuci.py\n```\n\n> 支持与如“冰河五四”等通用象棋 GUI 工具对接。Windows 用户可直接下载 [exe 文件](https:\u002F\u002Fshare.weiyun.com\u002F5cK50Z4) 使用。\n\n### 监控模型性能（Evaluator）\n\n```bash\npython cchess_alphazero\u002Frun.py eval\n```\n\n> 用于评估新生成模型（NextGenerationModel）与当前 BestModel 的性能差异。\n\n### 监督学习（Supervised Learning）\n\n```bash\npython cchess_alphazero\u002Frun.py sl\n```\n\n> 使用网络爬取的数据进行监督学习，支持从 `game.onegreen.net` 获取数据（使用 `--onegreen` 参数）。","某高校人工智能实验室正在开发一款用于中国象棋教学与训练的智能系统，旨在为学生提供高质量的对弈练习和策略分析。\n\n### 没有 ChineseChess-AlphaZero 时\n\n- 实验室缺乏一个能够自主学习并不断优化的中国象棋AI，导致训练数据不足且模型性能提升缓慢。\n- 现有的中国象棋引擎无法通过强化学习进行自我对弈和策略优化，难以模拟高水平对局。\n- 开发团队需要手动标注大量棋谱数据，耗时费力，且难以覆盖复杂局面。\n- 缺乏分布式训练支持，单机计算资源不足以支撑大规模模型训练。\n- 教学系统中无法提供实时的对弈反馈和策略分析，影响学生的学习效果。\n\n### 使用 ChineseChess-AlphaZero 后\n\n- 通过AlphaZero方法实现的自我对弈功能，使AI能够在无监督环境下生成高质量训练数据，显著提升模型性能。\n- 引入强化学习机制后，AI可以自主探索和优化策略，模拟出接近专业棋手水平的对局。\n- 利用分布式训练架构，团队可高效利用多台计算设备，大幅缩短模型训练时间。\n- 内置GUI界面支持人机对弈和可视化分析，为教学系统提供了实时反馈和策略讲解能力。\n- 
结合监督学习模块，AI还可利用互联网公开棋谱进一步提升泛化能力和应对复杂局面的能力。\n\n中国象棋教学系统借助ChineseChess-AlphaZero实现了智能化、自动化的训练与分析，极大提升了教学质量和效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNeymarL_ChineseChess-AlphaZero_aba0cfc2.png","NeymarL","Niuhe","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FNeymarL_35057fbd.jpg","AI Engineer","Netease","Hangzhou, Zhejiang, China",null,"https:\u002F\u002Fwww.52coding.com.cn\u002F","https:\u002F\u002Fgithub.com\u002FNeymarL",[85],{"name":86,"color":87,"percentage":88},"Python","#3572A5",100,1205,364,"2026-03-30T08:14:32","GPL-3.0","Linux, macOS, Windows","需要 NVIDIA GPU，显存 8GB+，CUDA 11.7+","16GB+",{"notes":97,"python":98,"dependencies":99},"建议使用 conda 管理环境，首次运行需下载约 5GB 模型文件，并需手动下载并安装 PingFang.ttc 字体文件。Windows 用户可直接下载可执行文件。","3.6.3",[100,101,102,103],"tensorflow-gpu","Keras","pygame","requirements.txt 中的其他依赖",[13],[106,107,108,109],"reinforcement-learning","alphazero","deep-learning","chinese-chess","2026-03-27T02:49:30.150509","2026-04-06T08:41:36.402226",[113,118,123,128,133,137,141],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},5798,"如何解决运行 `python3 run.py self` 时出现的 EOF 错误？","请尝试使用命令 `python3 colaboratory\u002Frun.py [用户名] [进程数]` 来代替原来的命令。","https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fissues\u002F21",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},5799,"如何解决运行 `play` 模式时出现的 `Shape must be rank 1 but is rank 0` 错误？","请重新安装 Keras，使用以下命令：```\npip install --no-deps --force-reinstall git+https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\n```","https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fissues\u002F37",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},5800,"如何让 CCzero-Engine 使用 obk 格式的棋谱？","CCzero-Engine 目前不支持 obk 格式，但可以将 obk 转换为 JSON 格式后使用。","https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fissues\u002F25",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},5801,"如何提高 MCTS 的质量？是否可以减少默认线程数？","Python 
的多线程是并发而非并行，因此可以调整线程数。不过，默认设置是基于实验得出的结果，具体效果取决于你的硬件和需求。","https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fissues\u002F14",{"id":134,"question_zh":135,"answer_zh":121,"source_url":136},5802,"如何解决 `ValueError: Shape must be rank 1 but is rank 0` 错误？","https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fissues\u002F28",{"id":138,"question_zh":139,"answer_zh":140,"source_url":136},5803,"如何查看训练过程中的对局记录？","目前训练记录不是标准格式（如 PGN），需要通过 `CChessEnv` 类进行转换。有一个脚本可以帮助你完成此操作，请将脚本放入 `environment` 文件夹并运行。",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},5804,"当前模型的水平如何？能否超越象棋旋风？","根据等级分估算，当前模型大约在 2200 分左右，与象棋旋风三代基础版（2580）仍有差距。","https:\u002F\u002Fgithub.com\u002FNeymarL\u002FChineseChess-AlphaZero\u002Fissues\u002F35",[147],{"id":148,"version":149,"summary_zh":81,"released_at":150},115132,"v2.4","2019-08-16T09:27:17"]