[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-dromara--Omega-AI":3,"tool-dromara--Omega-AI":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":10,"env_os":95,"env_gpu":96,"env_ram":97,"env_deps":98,"category_tags":105,"github_topics":106,"view_count":113,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":114,"updated_at":115,"faqs":116,"releases":132},591,"dromara\u002FOmega-AI","Omega-AI","Omega-AI：基于java打造的深度学习框架，帮助你快速搭建神经网络，实现模型推理与训练，引擎支持自动求导，多线程与GPU运算，GPU支持CUDA，CUDNN。","Omega-AI 是基于 Java 语言打造的深度学习框架，旨在帮助开发者利用熟悉的语言快速搭建神经网络，实现模型训练与推理。长期以来，AI 领域主要由 Python 主导，这给 Java 开发者带来了较高的入门门槛。Omega-AI 有效解决了这一问题，不仅让 Java 工程师能轻松接入人工智能技术，还能通过阅读源码深入理解算法的实现原理。\n\n技术上，Omega-AI 内置自动求导引擎，支持多线程与 GPU 并行计算，完美适配 CUDA 和 CUDNN 加速环境。其模型支持库极为丰富，涵盖 CNN、RNN、YOLO 等传统网络，也包含 Transformer、Llama 大模型及 Stable Diffusion 等前沿架构。值得一提的是，核心引擎除必要的 CUDA 依赖外，极少引入第三方包，保证了运行环境的纯净与稳定。\n\n无论是希望钻研深度学习原理的研究者，还是需要为企业系统无缝集成 AI 能力的 Java 开发者，Omega-AI 都能提供强有力的支持。项目提供了从图像识别到文本生成的丰富示例，欢迎加入社区共同探索。","![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cbeb2b0d6b2e.png)\n\n# 自己打造一个深度学习框架 for java\n\n##  
前言\n从2016年开始利用空余时间研究深度学习的方面，由于工作的原因，最熟悉的编程语言就是java，所以框架的编程语言自然而然就使用了java。自己打造框架的初衷就是为了更加深入了解各个算法、模型、实现的原理和思路，同时让java开发者更加容易接触AI领域。\n## 框架介绍\nOmega-AI：基于java打造的深度学习框架，帮助你快速搭建神经网络，实现训练或测试模型，支持多GPU训练。框架目前支持BP神经网络、卷积神经网络、循环神经网络、vgg16、resnet、yolo、lstm、transformer、gpt、llama、diffusion、stable diffusion等模型的构建，目前引擎最新版本支持CUDA和CUDNN两种GPU加速方式，关于GPU加速的环境配置与jcuda版本jar包的对应依赖，引擎中所实现的模型和算法除了使用cuda和cudnn相关依赖包之外均不使用任何api和第三方依赖包。欢迎添加QQ群([119593195]())进行技术讨论和交流，别忘了给Omega-AI项目点个star，项目需要你们的支持。\n\n## 官方网站：\n\n[https:\u002F\u002Fomega-ai.dromara.org](https:\u002F\u002Fomega-ai.dromara.org)\n\n## 源码地址：\n\n[https:\u002F\u002Fgitee.com\u002Fdromara\u002Fomega-ai](https:\u002F\u002Fgitee.com\u002Fdromara\u002Fomega-ai)\n\n[https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI](https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI)\n\n[https:\u002F\u002Fgitcode.com\u002Fdromara\u002Fomega-ai](https:\u002F\u002Fgitcode.com\u002Fdromara\u002Fomega-ai)\n\n## 依赖\n由于omega-engine-v4-gpu加入了jcuda支持，所以omega-engine-v4-gpu需要安装与jcuda版本对应的cuda，如果您的机器安装的CUDA版本是11.7.x，那么对应omega-engine需要引入的jcuda 11.7.0版本。\n\n## 快速开始\n##### 1.检查当前CUDA版本\n```txt\nnvcc --version\n```\n##### 2.安装CUDA与CUDNN\nhttps:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-toolkit-archive\n##### 3.引入或下载与当前CUDA版本对应的omega-engine包\n[win-cu-x.x 版本包列表](#版本依赖包)\n```xml\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu11.7-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n##### 4.初始化GPU环境与释放显存\n```java\npublic static void main(String[] args) {\n    try {\n        \u002F\u002F初始化GPU环境获取Context对象\n        CUDAModules.initContext();\n        CNNTest cnn = new CNNTest();\n        cnn.cnnNetwork_cifar10();\n    } finally {\n        \u002F\u002F释放所有显存\n        CUDAMemoryManager.free();\n    }\n}\n```\n\n## 系统参数\n由于训练vgg16模型的参数比较庞大，所以在部署项目的时候需要对jvm内存进行调整.\n调整事例如：-Xmx20480m -Xms20480m -Xmn10240m\n\n## Demo展示\n\n### 卷积神经网络系列\n#### 
[基于卷积神经网络mnist手写数字识别](http:\u002F\u002F120.237.148.121:8011\u002Fmnist)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_8eaa953c1c90.png)\n\n### yolo目标识别算法系列\n#### [基于yolo算法目标识别](#yolo-banana-detection-demo)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cad1064b1fa4.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_a76e2ee05e9f.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_0b355e2640d1.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_1d31d0d0ec81.png)\n\n#### [基于yolov3口罩佩戴识别](#yolov3-mask-demo口罩佩戴识别)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_ed149eef8ad0.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_764cc62f2369.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_6dd24bdac5ee.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_556dfe9717d4.png)\n\n#### [基于yolov3安全帽佩戴识别](#yolov3-helmet-demo安全帽佩戴识别)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_62beb03796b4.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_82c2bae9a3a5.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_1fa21febae89.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_7e0bc5ed88a4.png)\n\n#### 
[基于yolov7智能冰柜商品识别](#yolov7-sm-demo智能冰柜商品识别)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_925e30dc7ee1.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_2a8f03b23938.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_464627ddc245.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_c24865f07fd0.png)\n\n### GAN对抗生成神经网络系列\n#### [基于GAN生成对抗神经网络实现生成手写体数字图片](#gan-mnist-demo-生成手写数字)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_65b14e89d1e8.gif)\n\n#### [基于DCGAN生成对抗神经网络实现生成动漫头像图片](#dcgan-anime-demo-生成动漫头像)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_9ed314924911.gif)\n\n### 时序模型系列\n#### [基于RNN循环神经网络实现小说生成器](#rnn-中文小说生成器)\n##### 斗破苍穹前50章原文\n```txt\n    月如银盘，漫天繁星。山崖之颠，萧炎斜躺在草地之上，嘴中叼中一根青草，微微嚼动，任由那淡淡的苦涩在嘴中弥漫开来举起有些白皙的手掌，挡在眼前，目光透过手指缝隙，遥望着天空上那轮巨大的银月。唉想起下午的测试，萧炎轻叹了一口气，懒懒的抽回手掌，双手枕着脑袋，眼神有些恍惚十五年了呢低低的自喃声，忽然毫无边际的从少年嘴中轻吐了出来。在萧炎的心中，有一个仅有他自己知道的秘密：他并不是这个世界的人，或者说，萧炎的灵魂，并不属于这个世界，他来自一个名叫地球的蔚蓝星球，至于为什么会来到这里，这种离奇经过，他也无法解释，不过在生活了一段时间之后，他还是后知后觉的明白了过来：他穿越了！随着年龄的增长，对这块大陆，萧炎也是有了些模糊的了解大陆名为斗气大陆，大陆上并没有小说中常见的各系魔法，而斗气，才是大陆的唯一主调！在这片大陆上，斗气的修炼，几乎已经在无数代人的努力之下，发展到了巅峰地步，而且由于斗气的不断繁衍，最后甚至扩散到了民间之中，这也导致，斗气，与人类的日常生活，变得息息相关，如此，斗气在大陆中的重要性，更是变得无可替代！因为斗气的极端繁衍，同时也导致从这条主线中分化出了无数条斗气修炼之法，所谓手有长短，分化出来的斗气修炼之法，自然也是有强有弱。经过归纳统计，斗气大陆将斗气功法的等级，由高到低分为四阶十二级：天.地.玄.黄！而每一阶，又分初，中，高三级....................\n```\n##### 生成器效果(pickTopN:N=3,狗屁不通)\n```txt\n    
这个故事所造成的后果，便是造就了大批每天东在这样年，前，萧仅有是自己的萧的摇了摇头，道，就等因为炼了，才造就出三的天修炼天，的同样非也是有些有些异的儿一直在倒是，废，的分了，然便想要不定斗气大月月月月的定。透明的，方价脸有多中为不可是。你说完师到后气会让对，我不可以时，他倒是在乎这种高到功法的斗技出其种有些不愿的吸手一道，斗气，萧家现上，是这事，不是这个修有程体的什纸契到这片的小脸！三老，我光在萧战一巴掌，双中，是一个灵到的常识。心吧？望着萧炎那些神有些恍点不想受你的美的，用气忽然，传进你耳枚的属散，另次我前便是对着身空的长出身也只有想起，不，萧炎哥以说的造，的时候，他的道：你修门成为自然是各种天材少年老，一声冷静的望着对面的在一，手中，了下来的事，，你向了角落阵嘲笑，微有着不份还眼角散的，萧炎牙齿在桌面，上下没被等级之人的强化，并且他这老难，还是难去人的说过别的功，而且这几年，还要是分，同，你的要求，这几年实条，听过你有一年的，，你成就是我萧炎的面庞，萧战叹了口沾染鲜之的手一，在白纸之名为斗成为你！你是没搞的鬼？嘿人当失也了口之事发。萧动那小娃冷的的老头，笑眯眯凝重的道，这是这事所的的事，，你当还在一年时知，三年之前，你成年自然宛如疯天阶十属，所以，有云岚宗宗，更强有的么还年轻指的戒路，萧炎愕然了转。萧叔之时，萧炎却才有一星大者，在真真切切的。当药的，庞一瞪，手指惊颤的斗着萧炎心里一好气得俏脸忽些，不炎轻重的：自然也造就了他不的老师，云岚一宗，虽然有家，小脸，那双宛如轻疑般待遇这老然药，所的，这里，有种，都会身到这里许，自会不攻，微！父头一动容。丹有一种条件。首位的上，然必要进道，斗气大陆人，一种个灵魂，竟与什天，今事悔婚之种事，总的记不得萧，也将会被各方势力可惜手间中到时的变，那将老：想家老师？闻言一笑声，竟一手掌萧上，猛的之时响，再让你看看你就身也只清出了岁的事方与大一家，，萧叔叔，今天这种高深的吐了一口气少，那便是以事再次开始修炼中，萧后，萧炎会了一辈子不废物玩区，当然还是在炼黄之气！炼药之术之神，而有的得发了，那便请回好下去。药时，需前说自身属性的灵魂重却，火焰属于他便是一种发愣到斗者更让修！一老人手中有聚药老成年自己这几年，看来到，你以为了天他在云月上片刻下，也并不少在纳低嫣道对明公你，纳兰上然了起着些白老的魔此，你这本我还年轻间，还今的你我已经知为，至九品的先是，萧炎那些回成，无奈的身视了可\n```\n\n#### [基于SEQ2SEQ模型实现英文翻译器](#seq2seq-英文翻译器)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_d1d66e1d9a2b.png)\n\n### GPT系列\n#### [基于微型GPT2架构实现小说生成器](#gpt-中文小说生成器)\n##### 斗破苍穹前50章原文\n```txt\n    月如银盘，漫天繁星。山崖之颠，萧炎斜躺在草地之上，嘴中叼中一根青草，微微嚼动，任由那淡淡的苦涩在嘴中弥漫开来举起有些白皙的手掌，挡在眼前，目光透过手指缝隙，遥望着天空上那轮巨大的银月。唉想起下午的测试，萧炎轻叹了一口气，懒懒的抽回手掌，双手枕着脑袋，眼神有些恍惚十五年了呢低低的自喃声，忽然毫无边际的从少年嘴中轻吐了出来。在萧炎的心中，有一个仅有他自己知道的秘密：他并不是这个世界的人，或者说，萧炎的灵魂，并不属于这个世界，他来自一个名叫地球的蔚蓝星球，至于为什么会来到这里，这种离奇经过，他也无法解释，不过在生活了一段时间之后，他还是后知后觉的明白了过来：他穿越了！随着年龄的增长，对这块大陆，萧炎也是有了些模糊的了解大陆名为斗气大陆，大陆上并没有小说中常见的各系魔法，而斗气，才是大陆的唯一主调！在这片大陆上，斗气的修炼，几乎已经在无数代人的努力之下，发展到了巅峰地步，而且由于斗气的不断繁衍，最后甚至扩散到了民间之中，这也导致，斗气，与人类的日常生活，变得息息相关，如此，斗气在大陆中的重要性，更是变得无可替代！因为斗气的极端繁衍，同时也导致从这条主线中分化出了无数条斗气修炼之法，所谓手有长短，分化出来的斗气修炼之法，自然也是有强有弱。经过归纳统计，斗气大陆将斗气功法的等级，由高到低分为四阶十二级：天.地.玄.黄！而每一阶，又分初，中，高三级....................\n```\n##### 生成器效果(embedDim:128,max_len:64,headNum:8,decoderNum:8,pickTopN:N=1,颇为接近原文)\n```txt\n    
萧炎的目光，依然是若无其事的，而即使是这样的话，也是一位炼药师再好气的确是。萧炎嘴角一裂，却是忽然话音有心急促的抽动，让得少年呼吸微微急促。少年缓缓抬起头，目光淡然的转过身来，眼瞳中浮现许些阴冷，让得无奈的摊贩前，毫不留下无视。你这即将衣心中的淡然，不过，以萧炎此刻比看来，对方的男子接受尽了重伤，看得体内那些通气的一些奇异的能量气足够。斗气的带动在体内之后，萧炎的温养我的骨骼。眼眸子中，无奈静的深陷入了一些铁片之中的深蓝色物体，而在空中纠缠成的印，可见加列家族的两人打发现，在她的名声中，正常生长，比他毛而不可行走，在恐怕的狼头佣兵团，绝对会遭受到毁灭般的打击。只要一想到日后那铺天盖地的报复，穆力心头便是杀意狂涌。听得穆力的喝声，萧炎嘴角挑起一抹嘲讽与森然，嘴唇微动：爆！嘭！又是一声闷响乍然响起，不过这记闷响，竟然是从穆力的身体之内传出。噗嗤！忽然在体内爆炸的劲气，让得穆力脸色瞬间惨白，原来，脚步骤然一顿，身在半空划起一道抛物线。杀了他！飞起的瞬间，这名佣兵急忙冲着被这突然变故搞得发愣了一下，旋即满脸垂涎的笑问道，他对那位脸庞上半晌啊，脑袋笑容的带着，小医仙两人影一闪电般的对着黑暗地面的巨型与肉体型洒进，巨剑的剑柄，萧炎身便是一道地面，任由铁剑携带着劲气掠来。在魔猿身前一次攻击之下，一地面几米距离时，一团森白火焰猛的凭空腾现，箭支穿进火焰中，瞬间，便是化为了漆黑粉末。望着这一幕，加列怒脸色微变，心头泛起一股不安，看来这位黑袍人，也是一位不弱于大斗师的强者。缓缓吐了一口气，加列怒从身后的侍从手中拿起一把深蓝色的长枪，身体之上，淡淡的蓝色斗气渗发而出，顿时，附近的空气都为之湿润了不少，显然，他的斗气功法是偏向略微阴寒的水属性。手掌紧握着长枪，加列怒死死的盯着黑袍人，身体在略微调整之后，脚掌在地面突兀一踏，身形不断的对着萧炎两人群中挤去。如此多的人数进入魔兽山脉，普通魔兽定然不敢轻易袭击，如此，生命也就多了几分保障，只要等自己在路途中寻找到前段何种级别搞定，不过却取三年了，可以再出现在学院里吃了亏，还得怪我们。那名叫做戈剌的青年，上前一步，对着萧炎不怀好意的笑道。缓缓的吐了一口气，在众人的注视下，萧炎无奈的耸了耸肩，上前两步，在行至萧玉身旁时，忽然手臂一伸，狠狠的揽住 那柔软的纤腰，将之勒进怀中。被萧炎骤然偷袭，萧玉先是一愣，紧接着俏脸布满晕红，考虑到罗布在一旁，她只得停止挣扎\n```\n\n#### [基于GPT2架构实现聊天机器人](#gpt-中文聊天机器人)\n##### 训练数据：50W日常聊天语料\n###### 备注:以下是训练数据事例，每一个回复以\" \"空格分隔，每一段对话以换行\u002Fn分隔，以一段对话为一条训练数据\n```txt\n少侠好眼力\t少侠啥时候来北京\t遥遥无期你又没时间\t\n哥怎么这么帅\t是吗？谢谢嘞\t和小鲜肉一样。嫩嫩的\t\n你不怕掉下去啊\t这是海拔米我觉得不够高\t注意安全\t\n你这文案写的我有点感动是怎么回事\t哭没得\t没有咧\t\n都考上\t小仙女决定满足你这个愿望\t因为我有魔法棒\t\n啥时候看演唱会\t上海站好像延期了，不知延到啥时候本来是五月中旬\t靠你了\t\n大哥难道是求婚啦！\t不不不大哥还没有这么速度呢随便拼着玩儿的\t嘻嘻好看\t\n中午老大爷遛弯去了么\t对呀，哈哈。\t转发这条咸鱼，今年必有好事儿发生。\n我的爱情独白就是清空我的购物车\t沉迷于一夜暴富不可自拔的身家过百元的贵妇\t只想发财只想发财只想发财，对脱单好无兴趣\t\n自己用啊\t我有\t可是那张不用钱的嘢\t那要是里面没钱呢\t无钱再刷自己的卡\t哈哈哈哈哈哈哈哈这样就很不道德了\t没有没有\t\t\n第一张是藤椒鸡吗！\t嘻嘻嘻对一家好次川菜的椒麻鸡！\t这几天牙疼但是一直在想这种辣辣的鸡\t嘤嘤嘤就是这种时候会想吃辣\t\n```\n###### 模型参数\n```java\n\u002F\u002F gpt 124M参数量\nmaxLen = 128  \u002F\u002F最大token数\nembedDim = 768 \u002F\u002Fembeding编码维度\nheadNum = 12  \u002F\u002F多头注意力头数\ndecoderNum = 12  \u002F\u002F解码器层数\nlearnRate = 0.0001f  \u002F\u002F学习率\nepoch = 3 \u002F\u002F循环训练次数\ndropoutRate = 0.1f\ntrain_data = 450000 \u002F\u002F训练集数量\nvail_data = 50000  \u002F\u002F验证集数量\ntrain_loss = 1.08f 
\u002F\u002F最终训练集损失在1.0左右\nvail_loss = 1.2f  \u002F\u002F最终验证集损失在1.2左右\n````\n###### 推理效果图\n![GPT2聊天机器人](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_e2f41e7240aa.png)\n\n#### [基于gpt2-medium实现医疗问答系统](#gpt-医疗问答系统)\n##### 训练数据：20W医疗问答语料\n###### 模型参数\n```java\n\u002F\u002F gpt2-medium 350M参数量\nmaxLen = 256  \u002F\u002F最大token数\nembedDim = 1024 \u002F\u002Fembeding编码维度\nheadNum = 16  \u002F\u002F多头注意力头数\ndecoderNum = 24  \u002F\u002F解码器层数\nlearnRate = 0.001f  \u002F\u002F初始学习率\nepoch = 5 \u002F\u002F循环训练次数\ndropoutRate = 0.1f\ntrain_loss = 1.56f \u002F\u002F最终训练集损失在1.5左右\nvail_loss = 1.8f  \u002F\u002F最终验证集损失在1.8左右\n````\n###### 推理效果图\n![GPT2医疗问答系统](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cc07586a5361.png)\n\n#### [基于llama2-medium实现医疗问答系统](#llama2-医疗问答系统)\n##### 预训练数据：Wiki中文百科,BaiduBaiKe,shibing624\u002Fmedica\n##### 预训练权重文件： https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1DobIvoYH_Yr8cv60VjCRng?pwd=euvp\n##### 微调训练数据（SFT）：shibing624\u002Fmedical,HuatuoGPT-sft-data-v1,DISC-Med-SFT,ChatMed\n##### 微调权重文件：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1dve8XEk2o0lcoL36MdPhQg?pwd=wptj\n##### tokenizer（SentencePiece）：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Wx6Bcchd2UodU3YtEzWaYw?pwd=ehew \n##### 预处理后数据集：[数据集下载](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m-Of8dwOQ5kYuLZVYpw3qA?pwd=7c92)\n###### 模型参数\n```java\n\u002F\u002F llama2-chatglm 92M参数量\nmaxLen = 512  \u002F\u002F最大token数\nembedDim = 512 \u002F\u002Fembeding编码维度\nheadNum = 8  \u002F\u002F多头注意力头数\ndecoderNum = 8  \u002F\u002F解码器层数\nmaxLearnRate = 0.0003f  \u002F\u002F最大学习率\nminLearnRate = 0.0001f  \u002F\u002F最小学习率\nepoch = 1 \u002F\u002F循环训练次数\ndropoutRate = 0.0f\ntrain_loss = 2.0f \u002F\u002F最终训练集损失在2.0左右\n````\n###### 推理效果图\n![Llama2医疗问答系统](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_7288c9623957.png)\n\n#### [基于llama3.1实现对话机器人](#llama3.1-对话机器人)\n##### tokenizer（BPE）\n###### 模型参数\n```java\n\u002F\u002F 
llama3.1 26M参数量\nmaxLen = 512\u002F\u002F最大token数\nembedDim = 512\u002F\u002Fembeding编码维度\nheadNum = 16  \u002F\u002F多头注意力头数\nnKVHeadNum = 8 \u002F\u002Fkv注意力头数\ndecoderNum = 8  \u002F\u002F解码器层数\nmaxLearnRate = 1e-4f  \u002F\u002F最大学习率\nminLearnRate = 1e-5f  \u002F\u002F最小学习率\nepoch = 5 \u002F\u002F循环训练次数\ndropoutRate = 0.0f\npre_train_loss = 2.3f \u002F\u002F预训练最终训练集损失在2.3左右\nsft_train_loss = 1.6f \u002F\u002F微调训练最终训练集损失在1.6左右\n````\n###### 推理效果图\n![基于llama3.1实现对话机器人](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_44545b9ba08e.png)\n\n\n### Diffusion model 扩散模型系列\n#### [基于diffusion扩散模型实现生成动漫头像图片](#diffusion-动漫头像生成)\n#### 训练过程演示图\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_68c244f59ab0.gif)\n#### 50次循环训练后反向去噪生成过程图\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_f8347534e8d5.gif)\n![输入图片说明](images\u002Fdiffusion_11(2)_anime.gif)\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_08dd6350ad86.gif)\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_50efa98d6529.gif)\n\n#### [基于stable diffusion模型实现文生图](#StableDiffusion文生图)\n#### VQ-VAE演示图\n| 原图 | VQ-VAE | 原图 | VQ-VAE |\n|----|--------|----|--------|\n|  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_353bccd7e8bc.png)  |    ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_fd1166c9609e.png)    |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_3de259a5feb4.png)  |  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_613f31266189.png)  |\n|  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_42c936ed0e31.png)  |    ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_3dc0db6d2e39.png)    |   
![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_45a34db47c0b.png)  |  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_82c0ee5ee512.png)  |\n\n\n#### 文生图演示图\n| 文本1 | 图片1 | 文本2 | 图片2 |\n|-----|-----|-----|-----|\n|   a highly detailed anime landscape,big tree on the water, epic sky,golden grass,detailed.  |  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_b2b9f5812bbc.png)   |   3d art of a golden tree in the river，with intricate flora and flowing water，detailed.  |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_9f0a822ca21d.png)  |\n|   a vibrant anime mountain lands  |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_f0971a3a6c98.png)  |  a dark warrior in epic armor stands among glowing crimson leaves in a mystical forest.   |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cac21b1c9ae3.png)  |\n|   cute fluffy panda, anime, ghibli style, pastel colors, soft shadows, detailed fur, vibrant eyes, fantasy setting, digital art  | ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_f6160dcfd3ee.png)    | a epic city,3d,detailed._[a epic city,3d,detailed.    
|  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_6adb089090e4.png)   |\n\n#### 训练过程与模型参数\n##### 1.下载数据集：open-image-preferences-v1-more-results\n\n##### 2.训练VQ-VAE [训练脚本](#VQVAE)\n```java\n\u002F\u002FVQ-VAE模型参数 \nz_dims=128 \u002F\u002F编码层输出通道数与解码层输入通道数\nlatendDim=4 \u002F\u002F隐空间通道数\nnum_vq_embeddings=512 \u002F\u002Fvq码表嵌入向量维度\nnum_res_blocks=2 \u002F\u002F每个resblock层数\nch_mult=1,2,2,4 \u002F\u002F通道递增倍数\nch=128  \u002F\u002F通道数基数,每个编码或解码模型通道数=ch_mult[i] * ch\n```\n##### 3.加载CLIP模型：clip-vit-base-patch32\n```java\n\u002F\u002Fclip-vit-base-patch32模型参数 \nmaxContextLen=77 \u002F\u002F最大支持文本token长度\nvocabSize=49408  \u002F\u002F词表总数据\nheadNum=8  \u002F\u002F多头注意力头数\nn_layers=12  \u002F\u002FCLIPEncoder编码层层数\ntextEmbedDim=512  \u002F\u002F文本嵌入向量维度\n```\n##### 4.训练Unet [训练脚本](#StableDiffusion文生图)\n```java\n\u002F\u002FDiffusionUNetCond2模型参数\nunetHeadNum=8 \u002F\u002F多头注意力头数\ndownChannels=128,256,512,768  \u002F\u002F网络通道数\nnumLayer=2 \u002F\u002F每个resblock层数\ntimeSteps=1000 \u002F\u002F时间序列总数\ntEmbDim=512  \u002F\u002F时间序列嵌入向量维度\nlatendSize=32  \u002F\u002F隐空间维度\ngroupNum=32  \u002F\u002Fgroup_norm分组数\n```\n\n\n##  功能介绍\n#### 支持的网络层类型：\n\nFullylayer 全连接层\n\nConvolutionLayer 卷积层\n\nConvolutionTransposeLayer 反卷积层\n\nPoolingLayer 池化层(maxpooling,meanpooling)\n\nAVGPooingLayer 全局平均池化层\n\nEmbeddingLayer 向量映射层(将高维度词向量映射成低维度向量)该层的输入数据为one-hot编码后的数据\n\nEmbeddingIDLayer 向量映射层(将高维度词向量映射成低维度向量)\n\nRNNLayer 循环神经网络层\n\nLSTMLayer 长短记忆网络层\n\nRouteLayer 路由层\n\nUPSampleLayer 上采样层\n\nYoloLayer yolo层\n\nFastCausalSelfAttentionLayer 多层自注意力层\n\nMLPLayer gpt2-mlp层\n\nTransformerBlock transformer基础块\n\n#### 激活函数层\n\nSoftmaxLayer (softmax激活函)\n\nReluLayer\n\nLeakyReluLayer\n\nTanhLayer\n\nSigmodLayer\n\nSiLULayer\n\nGeLULayer\n\n#### 归一化层\n\nBNLayer (Batch Normalization)批归一化\n\nLNLayer (Layer Normalization)层归一化\n\n#### 正则化\n\nDropoutLayer\n#### 优化器\n\nMomentum\n\nAdam\n\nAdamw\n\nSgd (sgd with momentum)\n\nRMSProp\n\n#### 训练器\n\nBGDOptimizer 
(批量梯度下降法)\n\nMBSGDOptimizer (小批量随机梯度下降)\n\nSGDOptimizer（随机梯度下降算法）\n\n#### 损失函数(loss function)\n\nMSELoss (平方差损失函数)\n\nCrossEntropyLoss (交叉熵损失函数)\n\nCrossEntropyLossWithSoftmax (交叉熵损失 + softmax)\n\nMultiLabelSoftMargin (多标签损失函数)\n\n#### 学习率更新器（LearnRateUpdate）\n\nNONE (固定学习率)\n\nLR_DECAY (decay)\n\nGD_GECAY (gd_decay)\n\nCONSTANT(gd_decay)\n\nRANDOM [Math.pow(RandomUtils.getInstance().nextFloat(), power) * this.lr]\n\nPOLY [this.lr * Math.pow((1.0f - (batchIndex * 1.0f \u002F trainTime \u002F dataSize * batchSize)), power)]\n\nSTEP [this.lr * Math.pow(this.scale, batchIndex \u002F step)]\n\nEXP [this.lr * Math.pow(this.gama, batchIndex)]\n\nSIG [this.lr \u002F (1 + Math.pow(Math.E, this.gama * (batchIndex - step)))]\n\n#### 数据加载器\n\n.bin (二进制数据文件)\n\n.idx3-ubyte\n\n.txt\n\n## 使用说明\n\n### 自带的数据集\n\niris（鸢尾花数据集）\n\nmnist（手写数字数据集）\n\ncifar_10 （cifar_10数据集）\n\n## 附加数据集\n[cifar-10](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1EC_h_iGKUfDu6Ld-8ltdsA?pwd=23g1)\n\n[banana-detection](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1mUr12FJm9OGbsObqfjZ81Q?pwd=jish)\n\n[vailCode](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F11wZY9gQQ9OuoViw11IW6BQ?pwd=2rdt)\n\n[helmet](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pbTaDHoRzhV-kuWoXOCPqw?pwd=y8ij)\n\n[mask](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1D3zTYTiNYmtU6x7Ui9ej_A?pwd=r4o3)\n\n[自动售货机数据集sm](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10o8IZwD-WmChKtmzzg9q7w?pwd=gt8p )\n\n[大语言模型训练数据集](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1FKkg9h4awphRtQ8yZaeH3A?pwd=ywax)\n\n## 数据集成绩\n\niris epoch:5 bp神经网络[3层全连接层]  测试数据集准确率100%\n\nmnist epoch:10 alexnet 测试数据集准确率98.6% \n\ncifar_10 epoch:50 alexnet 测试数据集准确率76.6%\n\ncifar_10 epoch:50 vgg16 测试数据集准确率86.45%\n\ncifar_10 epoch:300 resnet18 [batchSize:128,初始learningRate:0.1,learnRateUpdate:GD_GECAY,optimizer:adamw] 数据预处理[randomCrop,randomHorizontalFilp,cutout,normalize] 测试数据集准确率91.23% \n\n## 事例代码\n\n#### bp iris demo\n\n```java\npublic void bpNetwork_iris() {\n\t\t\u002F\u002F TODO 
Auto-generated method stub\n\n\t\t\u002F**\n\t\t * 读取训练数据集\n\t\t *\u002F\n\t\tString iris_train = \"\u002Fdataset\u002Firis\u002Firis.txt\";\n\t\t\n\t\tString iris_test = \"\u002Fdataset\u002Firis\u002Firis_test.txt\";\n\t\t\n\t\tString[] labelSet = new String[] {\"1\",\"-1\"};\n\t\t\n\t\tDataSet trainData = DataLoader.loalDataByTxt(iris_train, \",\", 1, 1, 4, 2,labelSet);\n\t\tDataSet testData = DataLoader.loalDataByTxt(iris_test, \",\", 1, 1, 4, 2,labelSet);\n\t\t\n\t\tSystem.out.println(\"train_data:\"+JsonUtils.toJson(trainData));\n\t\n\t\tBPNetwork netWork = new BPNetwork(new SoftmaxWithCrossEntropyLoss());\n\t\t\n\t\tInputLayer inputLayer = new InputLayer(1,1,4);\n\t\t\n\t\tFullyLayer hidden1 = new FullyLayer(4, 40);\n\t\t\n\t\tReluLayer active1 = new ReluLayer();\n\t\t\n\t\tFullyLayer hidden2 = new FullyLayer(40, 20);\n\t\t\n\t\tReluLayer active2 = new ReluLayer();\n\t\t\n\t\tFullyLayer hidden3 = new FullyLayer(20, 2);\n\n\t\tSoftmaxWithCrossEntropyLayer hidden4 = new SoftmaxWithCrossEntropyLayer(2);\n\t\t\n\t\tnetWork.addLayer(inputLayer);\n\t\tnetWork.addLayer(hidden1);\n\t\tnetWork.addLayer(active1);\n\t\tnetWork.addLayer(hidden2);\n\t\tnetWork.addLayer(active2);\n\t\tnetWork.addLayer(hidden3);\n\t\tnetWork.addLayer(hidden4);\n\n\t\ttry {\n\t\t\t\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 8, 0.00001d, 10, LearnRateUpdate.NONE);\n\t\t\n\t\t\toptimizer.train(trainData);\n\t\t\t\n\t\t\toptimizer.test(testData);\n\t\t\t\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO Auto-generated catch block\n\t\t\te.printStackTrace();\n\t\t}\n\n\t}\n````\n\n#### cnn mnist demo\n\n```java\npublic void cnnNetwork_mnist() {\n\t\t\u002F\u002F TODO Auto-generated method stub\n\t\t\n\t\ttry {\n\n\t\t\t\u002F**\n\t\t\t * 读取训练数据集\n\t\t\t *\u002F\n\t\t\tString mnist_train_data = \"\u002Fdataset\u002Fmnist\u002Ftrain-images.idx3-ubyte\";\n\t\t\t\n\t\t\tString mnist_train_label = \"\u002Fdataset\u002Fmnist\u002Ftrain-labels.idx1-ubyte\";\n\t\t\t\n\t\t\tString 
mnist_test_data = \"\u002Fdataset\u002Fmnist\u002Ft10k-images.idx3-ubyte\";\n\t\t\t\n\t\t\tString mnist_test_label = \"\u002Fdataset\u002Fmnist\u002Ft10k-labels.idx1-ubyte\";\n\t\t\t\n\t\t\tString[] labelSet = new String[] {\"0\",\"1\",\"2\",\"3\",\"4\",\"5\",\"6\",\"7\",\"8\",\"9\"};\n\t\t\t\n\t\t\tResource trainDataRes = new ClassPathResource(mnist_train_data);\n\n\t\t\tResource trainLabelRes = new ClassPathResource(mnist_train_label);\n\t\t\t\n\t\t\tResource testDataRes = new ClassPathResource(mnist_test_data);\n\t\t\t\n\t\t\tResource testLabelRes = new ClassPathResource(mnist_test_label);\n\t\t\t\n\t\t\tDataSet trainData = DataLoader.loadDataByUByte(trainDataRes.getFile(), trainLabelRes.getFile(), labelSet, 1, 1 , 784, true);\n\t\t\t\n\t\t\tDataSet testData = DataLoader.loadDataByUByte(testDataRes.getFile(), testLabelRes.getFile(), labelSet, 1, 1 , 784, true);\n\n\t\t\tint channel = 1;\n\t\t\t\n\t\t\tint height = 28;\n\t\t\t\n\t\t\tint width = 28;\n\t\t\t\n\t\t\tCNN netWork = new CNN(new SoftmaxWithCrossEntropyLoss(), UpdaterType.momentum);\n\t\t\t\n\t\t\tnetWork.learnRate = 0.001d;\n\t\t\t\n\t\t\tInputLayer inputLayer = new InputLayer(channel, 1, 784);\n\t\t\t\n\t\t\tConvolutionLayer conv1 = new ConvolutionLayer(channel, 6, width, height, 5, 5, 2, 1, false);\n\t\t\t\n\t\t\tBNLayer bn1 = new BNLayer();\n\t\t\t\n\t\t\tLeakyReluLayer active1 = new LeakyReluLayer();\n\t\t\t\n\t\t\tPoolingLayer pool1 = new PoolingLayer(conv1.oChannel, conv1.oWidth, conv1.oHeight, 2, 2, 2, PoolingType.MAX_POOLING);\n\t\t\t\n\t\t\tConvolutionLayer conv2 = new ConvolutionLayer(pool1.oChannel, 12, pool1.oWidth, pool1.oHeight, 5, 5, 0, 1, false);\n\t\t\t\n\t\t\tBNLayer bn2 = new BNLayer();\n\t\t\t\n\t\t\tLeakyReluLayer active2 = new LeakyReluLayer();\n\t\t\t\n\t\t\tDropoutLayer drop1 = new DropoutLayer(0.5d);\n\t\t\t\n\t\t\t\n\t\t\tPoolingLayer pool2 = new PoolingLayer(conv2.oChannel, conv2.oWidth, conv2.oHeight, 2, 2, 2, PoolingType.MAX_POOLING);\n\n\t\t\tint fInputCount = 
pool2.oChannel * pool2.oWidth * pool2.oHeight;\n\t\t\t\n\t\t\tint inputCount = (int) (Math.sqrt((fInputCount) + 10) + 10);\n\t\t\t\n\t\t\tFullyLayer full1 = new FullyLayer(fInputCount, inputCount, false);\n\n\t\t\tBNLayer bn3 = new BNLayer();\n\t\t\t\n\t\t\tLeakyReluLayer active3 = new LeakyReluLayer();\n\t\t\t\n\t\t\tFullyLayer full2 = new FullyLayer(inputCount, 10);\n\t\t\t\n\t\t\tSoftmaxWithCrossEntropyLayer softmax = new SoftmaxWithCrossEntropyLayer(10);\n\n\t\t\tnetWork.addLayer(inputLayer);\n\t\t\tnetWork.addLayer(conv1);\n\t\t\tnetWork.addLayer(bn1);\n\t\t\tnetWork.addLayer(active1);\n\t\t\tnetWork.addLayer(pool1);\n\t\t\tnetWork.addLayer(conv2);\n\t\t\tnetWork.addLayer(bn2);\n\t\t\tnetWork.addLayer(active2);\n\t\t\tnetWork.addLayer(drop1);\n\t\t\tnetWork.addLayer(pool2);\n\t\t\tnetWork.addLayer(full1);\n\t\t\tnetWork.addLayer(bn3);\n\t\t\tnetWork.addLayer(active3);\n\t\t\tnetWork.addLayer(full2);\n\t\t\tnetWork.addLayer(softmax);\n\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 10, 0.0001d, 96, LearnRateUpdate.NONE);\n\n\t\t\tlong start = System.currentTimeMillis();\n\t\t\t\n\t\t\toptimizer.train(trainData);\n\t\t\t\n\t\t\toptimizer.test(testData);\n\t\t\t\n\t\t\tSystem.out.println(((System.currentTimeMillis() - start) \u002F 1000) + \"s.\");\n\n\t\t\t\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t\t\n\t}\n````\n#### resnet cifar10 demo\n\n```java\n\tpublic void resnet18_cifar10() {\n\t\t\u002F\u002F TODO Auto-generated method stub\n\n\t\ttry {\n\n\t\t\tString[] labelSet = new String[] {\"airplane\",\"automobile\",\"bird\",\"cat\",\"deer\",\"dog\",\"frog\",\"horse\",\"ship\",\"truck\"};\n\t    \t\n\t\t\tString[] train_data_filenames = new String[] 
{\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_1.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_2.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_3.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_4.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_5.bin\"\n\t\t\t};\n\t\t\t\n\t\t\tString test_data_filename = \"H:\u002Fdataset\u002Fcifar-10\u002Ftest_batch.bin\";\n\t\t\t\n\t\t\tfloat[] mean = new float[] {0.491f, 0.482f, 0.446f};\n\t\t\tfloat[] std = new float[] {0.247f, 0.243f, 0.261f};\n\t\t\t\n\t\t\tDataSet trainData = DataLoader.getImagesToDataSetByBin(train_data_filenames, 10000, 3, 32, 32, 10, labelSet, true);\n\n\t\t\tDataSet testData = DataLoader.getImagesToDataSetByBin(test_data_filename, 10000, 3, 32, 32, 10, labelSet, true, mean, std);\n\t\t\t\n\t\t\tSystem.out.println(\"data is ready.\");\n\n\t\t\tint channel = 3;\n\t\t\t\n\t\t\tint height = 32;\n\t\t\t\n\t\t\tint width = 32;\n\t\t\t\n\t\t\tCNN netWork = new CNN(LossType.softmax_with_cross_entropy, UpdaterType.adamw);\n\t\t\t\n\t\t\tnetWork.CUDNN = true;\n\t\t\t\n\t\t\tnetWork.learnRate = 0.1f;\n\t\t\t\n\t\t\tInputLayer inputLayer = new InputLayer(channel, height, width);\n\t\t\t\n\t\t\tConvolutionLayer conv1 = new ConvolutionLayer(channel, 64, width, height, 3, 3, 1, 1, false);\n\t\t\t\n\t\t\tBNLayer bn1 = new BNLayer();\n\t\t\t\n\t\t\tReluLayer active1 = new ReluLayer();\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block1  64 * 32 * 32\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl1 = new BasicBlockLayer(conv1.oChannel, 64, conv1.oHeight, conv1.oWidth, 1, netWork);\n\t\t\tReluLayer active2 = new ReluLayer();\n\n\t\t\t\u002F**\n\t\t\t * block2  64 * 32 * 32\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl2 = new BasicBlockLayer(bl1.oChannel, 64, bl1.oHeight, bl1.oWidth, 1, netWork);\n\t\t\tReluLayer active3 = new ReluLayer();\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block3  128 * 16 * 16\n\t\t\t * downSample 32 \u002F 2 = 16\n\t\t\t 
             */
            BasicBlockLayer bl3 = new BasicBlockLayer(bl2.oChannel, 128, bl2.oHeight, bl2.oWidth, 2, netWork);
            ReluLayer active4 = new ReluLayer();

            /**
             * block4  128 * 16 * 16
             */
            BasicBlockLayer bl4 = new BasicBlockLayer(bl3.oChannel, 128, bl3.oHeight, bl3.oWidth, 1, netWork);
            ReluLayer active5 = new ReluLayer();

            /**
             * block5  256 * 8 * 8
             * downSample 16 / 2 = 8
             */
            BasicBlockLayer bl5 = new BasicBlockLayer(bl4.oChannel, 256, bl4.oHeight, bl4.oWidth, 2, netWork);
            ReluLayer active6 = new ReluLayer();

            /**
             * block6  256 * 8 * 8
             */
            BasicBlockLayer bl6 = new BasicBlockLayer(bl5.oChannel, 256, bl5.oHeight, bl5.oWidth, 1, netWork);
            ReluLayer active7 = new ReluLayer();

            /**
             * block7  512 * 4 * 4
             * downSample 8 / 2 = 4
             */
            BasicBlockLayer bl7 = new BasicBlockLayer(bl6.oChannel, 512, bl6.oHeight, bl6.oWidth, 2, netWork);
            ReluLayer active8 = new ReluLayer();

            /**
             * block8  512 * 4 * 4
             */
            BasicBlockLayer bl8 = new BasicBlockLayer(bl7.oChannel, 512, bl7.oHeight, bl7.oWidth, 1, netWork);
            ReluLayer active9 = new ReluLayer();

            AVGPoolingLayer pool2 = new AVGPoolingLayer(bl8.oChannel, bl8.oWidth, bl8.oHeight);

            /**
             * fully  512 * 1 * 1
             */
            int fInputCount = pool2.oChannel * pool2.oWidth * pool2.oHeight;
            FullyLayer full1 = new FullyLayer(fInputCount, 10);

            netWork.addLayer(inputLayer);
            netWork.addLayer(conv1);
            netWork.addLayer(bn1);
            netWork.addLayer(active1);

            /**
             * block1  64
             */
            netWork.addLayer(bl1);
            netWork.addLayer(active2);
            netWork.addLayer(bl2);
            netWork.addLayer(active3);

            /**
             * block2
             * 128
             */
            netWork.addLayer(bl3);
            netWork.addLayer(active4);
            netWork.addLayer(bl4);
            netWork.addLayer(active5);

            /**
             * block3  256
             */
            netWork.addLayer(bl5);
            netWork.addLayer(active6);
            netWork.addLayer(bl6);
            netWork.addLayer(active7);

            /**
             * block4  512
             */
            netWork.addLayer(bl7);
            netWork.addLayer(active8);
            netWork.addLayer(bl8);
            netWork.addLayer(active9);

            netWork.addLayer(pool2);
            netWork.addLayer(full1);

            MBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 250, 0.001f, 128, LearnRateUpdate.GD_GECAY, false);

            long start = System.currentTimeMillis();
            optimizer.train(trainData, testData, mean, std);
            optimizer.test(testData);
            System.out.println(((System.currentTimeMillis() - start) / 1000) + "s.");

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                CUDAMemoryManager.freeAll();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
```
#### yolo banana-detection demo
```java
    public void yolov1_tiny() {

        try {

            String cfg_path = "H:/voc/train/yolov1-tiny.cfg";

            String trainPath = "H:\\voc\\banana-detection\\bananas_train\\images";
            String trainLabelPath = "H:\\voc\\banana-detection\\bananas_train\\label.csv";

            String testPath = "H:\\voc\\banana-detection\\bananas_val\\images";
            String testLabelPath = "H:\\voc\\banana-detection\\bananas_val\\label.csv";

            YoloDataLoader trainData = new YoloDataLoader(trainPath, trainLabelPath, 1000, 3, 256, 256, 5, LabelType.csv,
            true);

            YoloDataLoader vailData = new YoloDataLoader(testPath, testLabelPath, 100, 3, 256, 256, 5, LabelType.csv, true);

            DataSet trainSet = formatToYolo(trainData.getDataSet());
            DataSet vailSet = formatToYolo(vailData.getDataSet());

            System.out.println("load data finish.");

            CNN netWork = new CNN(LossType.yolo3, UpdaterType.adamw);
            netWork.CUDNN = true;
            netWork.learnRate = 0.001f;

            ModelLoader.loadConfigToModel(netWork, cfg_path);

            MBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 1000, 0.001f, 64, LearnRateUpdate.CONSTANT, false);

            long start = System.currentTimeMillis();
            optimizer.trainObjectRecognition(trainSet, vailSet);

            /**
             * Render the test predictions.
             */
            float[][][] draw_bbox = optimizer.showObjectRecognition(vailSet, 64);
            YoloDataLoader testData = new YoloDataLoader(testPath, testLabelPath, 1000, 3, 256, 256, 5, LabelType.csv, false);
            String outputPath = "H:\\voc\\banana-detection\\test\\";
            showImg(outputPath, testData.getDataSet(), 1, draw_bbox, false);

            System.out.println(((System.currentTimeMillis() - start) / 1000) + "s.");

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                CUDAMemoryManager.freeAll();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
```

#### yolov3 mask demo (mask-wearing detection)
```java
    public void yolov3_tiny_mask() {

        int im_w = 416;
        int im_h = 416;
        int batchSize = 24;
        int class_num = 2;
        String[] labelset = new String[] {"unmask","mask"};
        try {
            String cfg_path = "H:\\voc\\mask\\data\\dataset\\yolov3-tiny-mask.cfg";
            String trainPath
= \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\train\";\n\t\t\tString trainLabelPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\train_label.txt\";\n\t\t\tString testPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\vail\";\n\t\t\tString testLabelPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\vail_label.txt\";\n\t\t\tString weightPath = \"H:\\\\voc\\\\yolo-weights\\\\yolov3-tiny.conv.15\";\n\t\t\t\u002F**\n\t\t\t * 数据加载器\n\t\t\t *\u002F\n\t\t\tDetectionDataLoader trainData = new DetectionDataLoader(trainPath, trainLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);\n\t\t\tDetectionDataLoader vailData = new DetectionDataLoader(testPath, testLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);\n                        \u002F**\n\t\t\t * 创建yolo模型\n\t\t\t *\u002F\n\t\t\tYolo netWork = new Yolo(LossType.yolo3, UpdaterType.adamw);\n\t\t\tnetWork.CUDNN = true;\n\t\t\tnetWork.learnRate = 0.001f;\n                        \u002F**\n\t\t\t * 加载模型结构\n\t\t\t *\u002F\n\t\t\tModelLoader.loadConfigToModel(netWork, cfg_path);\n                        \u002F**\n\t\t\t * 加载预训练权重\n\t\t\t *\u002F\n\t\t\tDarknetLoader.loadWeight(netWork, weightPath, 14, true);\n                        \u002F**\n\t\t\t * 创建优化器\n\t\t\t *\u002F\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 1000, 0.001f, batchSize, LearnRateUpdate.SMART_HALF, false);\n\t\t\toptimizer.trainObjectRecognitionOutputs(trainData, vailData);\n\t\t\t\u002F**\n\t\t\t * 处理测试预测结果\n\t\t\t *\u002F\n\t\t\tList\u003CYoloBox> draw_bbox = optimizer.showObjectRecognitionYoloV3(vailData, batchSize);\n\t\t\tString outputPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\test_yolov3\\\\\";\n\t\t\tshowImg(outputPath, vailData, class_num, draw_bbox, batchSize, false, im_w, im_h, labelset);\n\n\t\t}catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}finally {\n\t\t\ttry {\n\t\t\t\tCUDAMemoryManager.freeAll();\n\t\t\t} catch 
            (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                CUDAMemoryManager.freeAll();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
```

#### yolov3 helmet demo (helmet-wearing detection)
```java
    public void yolov3_tiny_helmet() {

        int im_w = 416;
        int im_h = 416;
        int batchSize = 24;
        int class_num = 5;
        String[] labelset = new String[] {"none","white","yellow","blue","red"};
        try {
            String cfg_path = "H:\\voc\\helmet_dataset\\yolov3-tiny-helmet.cfg";
            String trainPath = "H:\\voc\\helmet\\resized\\train";
            String trainLabelPath = "H:\\voc\\helmet\\resized\\train_label.txt";
            String testPath = "H:\\voc\\helmet\\resized\\vail";
            String testLabelPath = "H:\\voc\\helmet\\resized\\vail_label.txt";
            String weightPath = "H:\\voc\\yolo-weights\\yolov3-tiny.conv.15";
            /**
             * Data loaders.
             */
            DetectionDataLoader trainData = new DetectionDataLoader(trainPath, trainLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);
            DetectionDataLoader vailData = new DetectionDataLoader(testPath, testLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);
            /**
             * Create the yolo model.
             */
            Yolo netWork = new Yolo(LossType.yolo3, UpdaterType.adamw);
            netWork.CUDNN = true;
            netWork.learnRate = 0.001f;
            /**
             * Load the model structure.
             */
            ModelLoader.loadConfigToModel(netWork, cfg_path);
            /**
             * Load pretrained weights.
             */
            DarknetLoader.loadWeight(netWork, weightPath, 14, true);
            /**
             * Create the optimizer.
             */
            MBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 300, 0.001f, batchSize, LearnRateUpdate.SMART_HALF, false);
            optimizer.trainObjectRecognitionOutputs(trainData, vailData);
            /**
             * Render the test predictions.
             */
            List<YoloBox> draw_bbox = optimizer.showObjectRecognitionYoloV3(vailData, batchSize);
            String outputPath = "H:\\voc\\helmet\\test_yolov3\\";
            showImg(outputPath, vailData, class_num, draw_bbox, batchSize, false, im_w, im_h, labelset);

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                CUDAMemoryManager.freeAll();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
```

#### yolov7-sm demo (smart freezer product recognition)
```java
    public void yolov7_tiny_sm() {
        int im_w = 416;
        int im_h = 416;
        int batchSize = 12;
        int class_num = 113;
        String[] labelset = new String[113];
        try {
            String cfg_path = "H:\\voc\\sm\\resized\\yolov7-tiny-sm.cfg";
            String labelPath = "H:\\voc\\sm\\VOC\\labels.txt";
            String trainPath = "H:\\voc\\sm\\resized\\train";
            String trainLabelPath = "H:\\voc\\sm\\resized\\train_label.txt";
            String testPath = "H:\\voc\\sm\\resized\\vail";
            String testLabelPath = "H:\\voc\\sm\\resized\\vail_label.txt";
            String weightPath = "H:\\voc\\darknet_yolov7\\yolov7-tiny.conv.87";
            // Read the label set, one label per line.
            try (FileInputStream fin = new FileInputStream(labelPath);
                InputStreamReader reader = new InputStreamReader(fin);
                BufferedReader buffReader = new BufferedReader(reader);) {
                String strTmp = "";
                int idx = 0;
                while((strTmp = buffReader.readLine()) != null){
                    labelset[idx] = strTmp;
                    idx++;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            DetectionDataLoader trainData = new DetectionDataLoader(trainPath, trainLabelPath, LabelFileType.txt, im_w, im_h,
            class_num, batchSize, DataType.yolov3);
            DetectionDataLoader vailData = new DetectionDataLoader(testPath, testLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);
            Yolo netWork = new Yolo(LossType.yolov7, UpdaterType.adamw);
            netWork.CUDNN = true;
            netWork.learnRate = 0.001f;
            ModelLoader.loadConfigToModel(netWork, cfg_path);
            DarknetLoader.loadWeight(netWork, weightPath, 86, true);
            MBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 1000, 0.001f, batchSize, LearnRateUpdate.SMART_HALF, false);
            optimizer.trainObjectRecognitionOutputs(trainData, vailData);
            /**
             * Render the test predictions.
             */
            List<YoloBox> draw_bbox = optimizer.showObjectRecognitionYoloV3(vailData, batchSize);
            String outputPath = "H:\\voc\\sm\\test_yolov7\\";
            showImg(outputPath, vailData, class_num, draw_bbox, batchSize, false, im_w, im_h, labelset);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                CUDAMemoryManager.freeAll();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
```

#### gan mnist demo (handwritten digit generation)
```java
    public static void gan_anime() {

        int imgSize = 784;
        int ngf = 784; // number of generator feature maps
        int nz = 100;  // noise dimension
        int batchSize = 2048;

        int d_every = 1;
        int g_every = 1;

        float[] mean = new float[] {0.5f};
        float[] std = new float[] {0.5f};

        try {

            String mnist_train_data = "/dataset/mnist/train-images.idx3-ubyte";
            String mnist_train_label = "/dataset/mnist/train-labels.idx1-ubyte";

            String[] labelSet = new String[] {"0","1","2","3","4","5","6","7","8","9"};

            Resource trainDataRes = new
            ClassPathResource(mnist_train_data);
            Resource trainLabelRes = new ClassPathResource(mnist_train_label);

            DataSet trainData = DataLoader.loadDataByUByte(trainDataRes.getFile(), trainLabelRes.getFile(), labelSet, 1, 1, 784, true, mean, std);

            BPNetwork netG = NetG(ngf, nz);
            BPNetwork netD = NetD(imgSize);

            GANOptimizer optimizer = new GANOptimizer(netG, netD, batchSize, 3500, d_every, g_every, 0.001f, LearnRateUpdate.CONSTANT, false);

            optimizer.train(trainData);

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### dcgan anime demo (anime avatar generation)
```java
    public static void dcgan_anime() {

        int imw = 64;
        int imh = 64;
        int ngf = 64; // number of generator feature maps
        int ndf = 64; // number of discriminator feature maps
        int nz = 100; // noise dimension
        int batchSize = 64;

        int d_every = 1;
        int g_every = 5;

        float[] mean = new float[] {0.5f,0.5f,0.5f};
        float[] std = new float[] {0.5f,0.5f,0.5f};

        try {

            String imgDirPath = "H:\\voc\\gan_anime\\ml2021spring-hw6\\faces\\";

            CNN netG = NetG(ngf, nz);
            CNN netD = NetD(ndf, imw, imh);

            ImageDataLoader dataLoader = new ImageDataLoader(imgDirPath, imw, imh, batchSize, true, mean, std);

            GANOptimizer optimizer = new GANOptimizer(netG, netD, batchSize, 2000, d_every, g_every, 0.001f, LearnRateUpdate.POLY, false);

            optimizer.train(dataLoader);

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### RNN Chinese novel generator
```java
    public void charRNN() {
        try {
            int time = 256;
            int batchSize = 64;
            int embedding_dim = 256;
            int hiddenSize = 512;

            String trainPath = "H:\\rnn_dataset\\dpcc.txt";
            OneHotDataLoader
            trainData = new OneHotDataLoader(trainPath, time, batchSize);

            RNN netWork = new RNN(LossType.softmax_with_cross_entropy, UpdaterType.adamw, time);

            InputLayer inputLayer = new InputLayer(1, 1, trainData.characters);
            EmbeddingLayer em = new EmbeddingLayer(trainData.characters, embedding_dim);
            RNNLayer l1 = new RNNLayer(embedding_dim, hiddenSize, time, ActiveType.tanh, false, netWork);
            RNNLayer l2 = new RNNLayer(hiddenSize, hiddenSize, time, ActiveType.tanh, false, netWork);
            RNNLayer l3 = new RNNLayer(hiddenSize, hiddenSize, time, ActiveType.tanh, false, netWork);
            FullyLayer f1 = new FullyLayer(hiddenSize, hiddenSize, false);
            BNLayer bn = new BNLayer();
            LeakyReluLayer a1 = new LeakyReluLayer();
            FullyLayer f2 = new FullyLayer(hiddenSize, trainData.characters, true);
            netWork.addLayer(inputLayer);
            netWork.addLayer(em);
            netWork.addLayer(l1);
            netWork.addLayer(l2);
            netWork.addLayer(l3);
            netWork.addLayer(f1);
            netWork.addLayer(bn);
            netWork.addLayer(a1);
            netWork.addLayer(f2);

            netWork.CUDNN = true;
            netWork.learnRate = 0.01f;

            MBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 2, 0.001f, batchSize, LearnRateUpdate.POLY, false);
            optimizer.trainRNN(trainData);

            int gen_len = 1000;
            int max_len = 256;
            String pre_txt = "这个故事所造成的后果，便是造就了大批每天";
            Tensor input = null;
            Tensor output = null;
            input = createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);
            netWork.RUN_MODEL = RunModel.TEST;
            for(int i = 0;i<gen_len;i++) {
                netWork.time = input.number;
                String txt = genTxt(input, output, netWork, trainData, max_len);
                if(netWork.time > 1) {
                    pre_txt += txt.substring(input.number - 1, input.number);
                }else {
                    pre_txt += txt;
                }
                input =
                createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);
            }
            System.out.println(pre_txt);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### SEQ2SEQ English-Chinese translator
```java
    public void seq2seq() {
        try {
            int batchSize = 128;
            int en_em = 64;
            int de_em = 128;
            int en_hidden = 256;
            int de_hidden = 256;

            String trainPath = "H:\\rnn_dataset\\translate1000.csv";
            IndexDataLoader trainData = new IndexDataLoader(trainPath, batchSize);

            Seq2Seq network = new Seq2Seq(LossType.softmax_with_cross_entropy, UpdaterType.adamw,
                    trainData.max_en, trainData.max_ch - 1, en_em, en_hidden, trainData.en_characters, de_em, de_hidden, trainData.ch_characters);
            network.CUDNN = true;
            network.learnRate = 0.01f;

            EDOptimizer optimizer = new EDOptimizer(network, batchSize, 100, 0.001f, LearnRateUpdate.SMART_HALF, false);
            optimizer.lr_step = new int[] {100,200};
            optimizer.trainRNN(trainData);

            Scanner scanner = new Scanner(System.in);
            while (true) {

                System.out.println("请输入英文:");
                String input_txt = scanner.nextLine();
                if(input_txt.equals("exit")){
                    break;
                }
                input_txt = input_txt.toLowerCase();
                System.out.println(input_txt);
                optimizer.predict(trainData, input_txt);
            }
            scanner.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### GPT Chinese novel generator
```java
    public static void gpt_dp() {
        try {
            boolean bias = false;
            boolean dropout = true;
            int batchSize = 32;
            int max_len = 64;
            int embedDim = 512;
            int headNum = 8;
            int decoderNum = 6;
            String trainPath =
\"H:\\\\transformer_dataset\\\\gpt\\\\dpcc50.txt\";\n\t\t\tCNTokenizer trainData = new CNTokenizer(trainPath, max_len, batchSize);\n\t\t\tNanoGPT network = new NanoGPT(LossType.softmax_with_cross_entropy, UpdaterType.adamw, headNum, decoderNum, trainData.characters, max_len, embedDim, bias, dropout);\n\t\t\tnetwork.learnRate = 0.001f;\n\t\t\tEDOptimizer optimizer = new EDOptimizer(network, batchSize, 3, 0.001f, LearnRateUpdate.GD_GECAY, false);\n\t\t\toptimizer.trainNanoGPT_GEN(trainData);\n\t\t\tint gen_len = 1000;\n\t\t\tnetwork.RUN_MODEL = RunModel.TEST;\n\t\t\tTensor input = null;\n\t\t\tTensor output = null;\n\t\t\tString pre_txt = \"萧炎\";\n\t\t\tTensor positions = CNChatTokenizer.getPositions(1, pre_txt.length());\n\t\t\tTensor mask = CNChatTokenizer.triu(1, network.headNum, pre_txt.length(), pre_txt.length(), 1);\n\t\t\tinput = createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);\n\t\t\tfor(int i = 0;i\u003Cgen_len;i++) {\n\t\t\t\tnetwork.time = input.number;\n\t\t\t\tString txt = genTxt(input, output, network, trainData, pre_txt.length(), mask, positions);\n\t\t\t\tif(network.time > 1) {\n\t\t\t\t\tpre_txt += txt.substring(input.number - 1, input.number);\n\t\t\t\t}else {\n\t\t\t\t\tpre_txt += txt;\n\t\t\t\t}\n\t\t\t\tinput = createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);\n\t\t\t}\n\t\t\tSystem.out.println(pre_txt);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n```\n\n#### gpt-中文聊天机器人\n```java\n    public static void ch_chat_gpt2() {\n\t\ttry {\n\t\t\tboolean bias = false;\n\t\t\tboolean dropout = true;\n\t\t\tint batchSize = 32;\n\t\t\tint max_len = 128;\n\t\t\tint embedDim = 768;\n\t\t\tint head_num = 12;\n\t\t\tint decoderNum = 12;\n\t\t\tString trainPath = \"H:\\\\transformer_dataset\\\\gpt\\\\chatdata\\\\train-format20w.txt\";\n\t\t\tCNChatTokenizer trainData = new CNChatTokenizer(trainPath, max_len, 
            batchSize);
            NanoGPT network = new NanoGPT(LossType.softmax_with_cross_entropy, UpdaterType.adamw, head_num, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, false);
            network.learnRate = 0.0001f;
            EDOptimizer optimizer = new EDOptimizer(network, batchSize, 3, 0.0001f, LearnRateUpdate.SMART_HALF, false);
            optimizer.lr_step = new int[] {1, 2};
            optimizer.trainNanoGPT(trainData);
            Scanner scanner = new Scanner(System.in);
            String context = "";
            while (true) {
                System.out.println("请输入中文:");
                String input_txt = scanner.nextLine();
                if(input_txt.equals("clean")){
                    context = "";
                    continue;
                }
                if(input_txt.equals("exit")){
                    break;
                }
                input_txt = input_txt.toLowerCase() + " ";
                System.out.println("user:"+input_txt);
                input_txt = context + input_txt;
                Tensor input = trainData.loadByTxtToIdx(input_txt);
                Tensor positions = CNChatTokenizer.getPositions(1, input.number);
                for(int t = 0;t<max_len;t++) {
                    network.time = input.number;
                    Tensor output = network.forward(input, positions);
                    output.syncHost();
                    String txts = output2TXT(output, trainData, true);
                    String nextWord = txts.substring(txts.length() - 1, input_txt.length());
                    if(trainData.sd.get(nextWord)!=null && (trainData.sd.get(nextWord).equals("<sep>") || trainData.sd.get(nextWord).equals("<eos>"))) {
                        input_txt += nextWord;
                        break;
                    }else {
                        input_txt += nextWord;
                    }
                    input = trainData.loadByTxtToIdx(input_txt);
                    CNChatTokenizer.getPositions(1, input.number, positions);
                }
                String[] chatList = input_txt.split(" ");
                String current = chatList[chatList.length - 1];
                System.out.println("chatbot:"+current);
                context +=
                input_txt + current;
            }
            scanner.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### GPT medical Q&A system
```java
    public static void gpt2_yl_qa() {
        try {
            boolean bias = false;
            boolean dropout = true;
            int batchSize = 16;
            int max_len = 256;
            int embedDim = 1024;
            int head_num = 16;
            int decoderNum = 24;
            String trainPath = "H:\\transformer_dataset\\gpt\\cMedQA2\\qaData.txt";
            CNChatTokenizer trainData = new CNChatTokenizer(trainPath, max_len, batchSize);
            NanoGPT network = new NanoGPT(LossType.softmax_with_cross_entropy, UpdaterType.adamw, head_num, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, false);
            network.learnRate = 0.001f;
            EDOptimizer optimizer = new EDOptimizer(network, batchSize, 5, 0.0001f, LearnRateUpdate.SMART_HALF, false);
            optimizer.lr_step = new int[] {1, 2};
            optimizer.trainNanoGPT(trainData);
            network.RUN_MODEL = RunModel.TEST;
            Scanner scanner = new Scanner(System.in);
            while (true) {
                System.out.println("请输入中文:");
                String input_txt = scanner.nextLine();
                if(input_txt.equals("exit")){
                    break;
                }
                input_txt = input_txt.toLowerCase() + " ";
                System.out.println("user:"+input_txt);
                Tensor input = trainData.loadByTxtToIdx(input_txt);
                Tensor positions = CNChatTokenizer.getPositions(1, input.number);
                for(int t = 0;t<max_len;t++) {
                    network.time = input.number;
                    Tensor output = network.forward(input, positions);
                    output.syncHost();
                    String txts = output2TXT(output, trainData, true);
                    String nextWord = txts.substring(txts.length() - 1, input_txt.length());
                    if(trainData.sd.get(nextWord)!=null && (trainData.sd.get(nextWord).equals("<sep>") ||
                    trainData.sd.get(nextWord).equals("<eos>"))) {
                        input_txt += trainData.sd.get(nextWord);
                        break;
                    }else {
                        input_txt += nextWord;
                    }
                    input = trainData.loadByTxtToIdx(input_txt);
                    CNChatTokenizer.getPositions(1, input.number, positions);
                }
                System.out.println("chatbot:"+input_txt.split(" ")[1]);
            }
            scanner.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### llama2 medical Q&A system
```java
    public static void llama2_chinese_chatglm_vocab() {
        try {
            boolean bias = false;
            boolean dropout = false;
            boolean flashAttention = false;
            int batchSize = 8;
            int max_len = 512;
            int embedDim = 512;
            int head_num = 8;
            int decoderNum = 8;
            String trainPath = "H:\\transformer_dataset\\wbm_idx_chatglm_vocab.txt";
            String tokenizer_path = "H:\\transformer_dataset\\tokenizer.model";
            SentencePieceTokenizer tokenizer = new SentencePieceTokenizer(tokenizer_path, 64793);
            CNWikiTokenizer4 trainData = new CNWikiTokenizer4(trainPath, max_len, batchSize, 6250865, tokenizer);
            Llama2 network = new Llama2(LossType.softmax_with_cross_entropy_idx, UpdaterType.adamw, head_num, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, flashAttention);
            network.learnRate = 3e-4f;
            EDOptimizer optimizer = new EDOptimizer(network, batchSize, 1, 0.0001f, LearnRateUpdate.COSINE, false);
            optimizer.lr_step = new int[] {1, 2};
            optimizer.lr = 3e-4f;
            optimizer.min_lr = 1e-5f;
            optimizer.setWarmUp(true);
            optimizer.warmUpTime = 1000;
            optimizer.lrDecayIters = (int) (trainData.count_it * 0.96);
            optimizer.trainLlama2_chinese(trainData);
            String model_path = "H:\\model\\llama2-92m-chinese.model";
            ModelUtils.saveModel(network,
            model_path);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### llama3.1 chatbot
```java
    public static void llama3_monkey() {
        try {
            boolean bias = false;
            boolean dropout = false;
            boolean flashAttention = false;
            int batchSize = 2;
            int max_len = 512;
            int embedDim = 512;
            int head_num = 16;
            int nKVHeadNum = 8;
            int decoderNum = 8;

            String trainPath = "H:\\model\\pretrain_data_6400.bin";
            String vocabPath = "H:\\transformer_dataset\\6400\\vocab.json";
            String mergesPath = "H:\\transformer_dataset\\6400\\merges.txt";

            BPETokenizer3 tokenizer = new BPETokenizer3(vocabPath, mergesPath);
            CNBpeTokenizer trainData = new CNBpeTokenizer(trainPath, max_len, batchSize, tokenizer, BinDataType.unint16);
            Llama3 network = new Llama3(LossType.softmax_with_cross_entropy_idx, UpdaterType.adamw, head_num, nKVHeadNum, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, flashAttention);
            network.learnRate = 1e-4f;
            network.CLIP_GRAD_NORM = true;
            initWeight(network, decoderNum);
            EDOptimizer optimizer = new EDOptimizer(network, batchSize, 2, 0.0001f, LearnRateUpdate.CONSTANT, false);
            optimizer.trainLlama3_chinese(trainData, 8, true);
            String save_model_path = "H:\\model\\llama3-26m-chinese.model";
            ModelUtils.saveModel(network, save_model_path);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### diffusion anime avatar generation
```java
    public static void duffsion_anime() {
        try {
            boolean bias = false;
            int batchSize = 8;
            int imw = 96;
            int imh = 96;
            int mChannel = 64;
            int resBlockNum = 2;
            int T = 1000;
            int[] channelMult = new int[] {1, 2, 3, 4};
            String imgDirPath =
\"H:\\\\voc\\\\gan_anime\\\\ml2021spring-hw6\\\\faces\\\\\";\n\t\t\tDiffusionImageDataLoader dataLoader = new DiffusionImageDataLoader(imgDirPath, imw, imh, batchSize, false);\n\t\t\tDiffusionUNet network = new DiffusionUNet(LossType.MSE, UpdaterType.adamw, T, 3, mChannel, channelMult, resBlockNum, imw, imh, bias);\n\t\t\tnetwork.CUDNN = true;\n\t\t\tnetwork.learnRate = 0.0002f;\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(network, 50, 0.00001f, batchSize, LearnRateUpdate.GD_GECAY, false);\n\t\t\toptimizer.trainGaussianDiffusion(dataLoader);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n``` \n\n#### VQVAE\n```java\npublic static void anime_vqvae2_lpips_gandisc_32_nogan2() {\n\n\t\ttry {\n\t\t\tint batchSize = 4;\n\t\t\tint imageSize = 256;\n\t\t\tint z_dims = 128;\n\t\t\tint latendDim = 4;\n\t\t\tint num_vq_embeddings = 512;\n\t\t\tint num_res_blocks = 2;\n\t\t\tint[] ch_mult = new int[] {1, 2, 2, 4};\n\t\t\tint ch = 128;\n\t\t\t\n\t\t\tfloat[] mean = new float[] {0.5f, 0.5f, 0.5f};\n\t\t\tfloat[] std = new float[] {0.5f, 0.5f, 0.5f};\n\t\t\tString imgDirPath = \"I:\\\\dataset\\\\sd-anime\\\\anime_op\\\\256\\\\\";\n\t\t\tDiffusionImageDataLoader dataLoader = new DiffusionImageDataLoader(imgDirPath, imageSize, imageSize, batchSize, true, false, mean, std);\n\t\t\t\n\t\t\tVQVAE2 network = new VQVAE2(LossType.MSE, UpdaterType.adamw, z_dims, latendDim, num_vq_embeddings, imageSize, ch_mult, ch, num_res_blocks);\n\t\t\tnetwork.CUDNN = true;\n\t\t\tnetwork.learnRate = 0.001f;\n\t\t\t\n\t\t\tLPIPS lpips = new LPIPS(LossType.MSE, UpdaterType.adamw, imageSize);\n\t\t\tString lpipsWeight = \"H:\\\\model\\\\lpips.json\";\n\t\t\tLPIPSTest.loadLPIPSWeight(LagJsonReader.readJsonFileSmallWeight(lpipsWeight), lpips, false);\n\t\t\tlpips.CUDNN = true;\n\t\t\t\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(network, 200, 0.00001f, batchSize, LearnRateUpdate.CONSTANT, 
            false);
            optimizer.trainVQVAE2_lpips_nogan(dataLoader, lpips);

            String save_model_path = "/omega/models/anime_vqvae2_256.model";
            ModelUtils.saveModel(network, save_model_path);

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
```

#### Stable Diffusion text-to-image
```java
    public static void tiny_sd_train_anime_32() throws Exception {
        String labelPath = "I:\\dataset\\sd-anime\\anime_op\\data.json";
        String imgDirPath = "I:\\dataset\\sd-anime\\anime_op\\256\\";
        boolean horizontalFilp = true;
        int imgSize = 256;
        int maxContextLen = 77;
        int batchSize = 8;

        float[] mean = new float[] {0.5f, 0.5f, 0.5f};
        float[] std = new float[] {0.5f, 0.5f, 0.5f};

        String vocabPath = "H:\\model\\bpe_tokenizer\\vocab.json";
        String mergesPath = "H:\\model\\bpe_tokenizer\\merges.txt";
        BPETokenizerEN bpe = new BPETokenizerEN(vocabPath, mergesPath, 49406, 49407);
        SDImageDataLoaderEN dataLoader = new SDImageDataLoaderEN(bpe, labelPath, imgDirPath, imgSize, imgSize, maxContextLen, batchSize, horizontalFilp, mean, std);

        int time = maxContextLen;
        int maxPositionEmbeddingsSize = 77;
        int vocabSize = 49408;
        int headNum = 8;
        int n_layers = 12;
        int textEmbedDim = 512;
        ClipTextModel clip = new ClipTextModel(LossType.MSE, UpdaterType.adamw, headNum, time, vocabSize, textEmbedDim, maxPositionEmbeddingsSize, n_layers);
        clip.CUDNN = true;
        clip.time = time;
        clip.RUN_MODEL = RunModel.EVAL;
        String clipWeight = "H:\\model\\clip-vit-base-patch32.json";
        ClipModelUtils.loadWeight(LagJsonReader.readJsonFileSmallWeight(clipWeight), clip, true);

        int z_dims = 128;
        int latendDim = 4;
        int num_vq_embeddings = 512;
        int num_res_blocks = 2;
        int[] ch_mult = new int[] {1, 2, 2, 4};
        int ch = 128;
        VQVAE2 vae = new
VQVAE2(LossType.MSE, UpdaterType.adamw, z_dims, latendDim, num_vq_embeddings, imgSize, ch_mult, ch, num_res_blocks);\n\t\tvae.CUDNN = true;\n\t\tvae.learnRate = 0.001f;\n\t\tvae.RUN_MODEL = RunModel.EVAL;\n\t\tString vaeModel = \"anime_vqvae2_256.model\";\n\t\tModelUtils.loadModel(vae, vaeModel);\n\t\t\n\t\tint unetHeadNum = 8;\n\t\tint[] downChannels = new int[] {128, 256, 512, 768};\n\t\tint numLayer = 2;\n\t\tint timeSteps = 1000;\n\t\tint tEmbDim = 512;\n\t\tint latendSize = 32;\n\t\tint groupNum = 32;\n\t\tDiffusionUNetCond2 unet = new DiffusionUNetCond2(LossType.MSE, UpdaterType.adamw, latendDim, latendSize, latendSize, downChannels, unetHeadNum, numLayer, timeSteps, tEmbDim, maxContextLen, textEmbedDim, groupNum);\n\t\tunet.CUDNN = true;\n\t\tunet.learnRate = 0.0001f;\n\t\t\n\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(unet, 500, 0.00001f, batchSize, LearnRateUpdate.CONSTANT, false);\n\t\toptimizer.trainTinySD_Anime(dataLoader, vae, clip);\n\t\t\n\t\tString save_model_path = \"\u002Fomega\u002Fmodels\u002Fsd_anime256.model\";\n\t\tModelUtils.saveModel(unet, save_model_path);\n\t}\n```\n\n\n## 版本依赖包\n```xml\n\u003C!-- windows cuda 11.7 -->\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu11.7-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n\u003C!-- windows cuda 11.8 -->\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu11.8-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n\u003C!-- windows cuda 12.x -->\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu12.x-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n\n## 未来可期\n\n实现llama2，unet，diffusion model等模型\n\n### 
训练情况可视化\n\n支持动态调参，可视化训练\n\n\n## 彩蛋\n\n### 基于神经网络+遗传算法实现AI赛车游戏\n\nhttp:\u002F\u002F119.3.123.193:8011\u002FAICar\n\n## 版本更新\n### omega-engine-v3\n#### 2022-06-20\n1.添加gpu支持，使用jcuda调用cuda的cublasSgemm矩阵乘法，并参考caffe的实现将卷积操作优化成im2col+gemm，计算效率得到大大提高\n\n2.添加vgg16 demo，该模型在cifar10数据集上表现为测试数据集准确率86.45%\n\n3.利用jdk ForkJoin框架实现任务拆分，充分利用cpu多线程，提高对数组操作与计算速度\n\n4.参考darknet对学习率更新机制进行升级，目前已支持RANDOM、POLY、STEP、EXP、SIG等多种学习率更新方法，并且实现学习率warmup功能\n\n5.添加basicblock模块，新增resnet模型支持，目前该模型在cifar10数据集上的表现，epoch:300，测试数据集准确率为91.23%\n\n### omega-engine-v3-gpu\n#### 2022-07-02\n1.开启omega-engine-v3-gpu版本开发，该版本将实现对omega-engine的gpu全面支持\n\n2.全面优化卷积层计算，包括前向传播与反向传播.\n\n#### 2022-08-17\n1.初步完成卷积层的gpu改造，使得卷积神经网络计算速度整体提升，增加im2col与col2im两个经典的核函数（Im2colKernel.cu，Col2imKernel.cu）\n\n2.添加cuda内存管理器，用于管理整体显存的生命周期，减少频繁申请显存的操作，减少主机与显卡之间的数据传输.\n\n#### 2022-09-02\n1.修改bn层计算dmean公式,减少计算量\n\n2.更换数据存储方式，以便使用gpu计算，减少4维数组与1维数组之间的转换，获得成倍的计算效率提升\n\n3.全面优化gpu计算，更新cuda核函数实现，使得训练与预测计算效率获得大大提升\n\n4.后续版本将进一步优化gpu版本，预计将整个计算过程搬迁入gpu计算，从而减少主机与设备(显卡)之间传输，希望进一步获得更快的计算速度\n\n### omega-engine-v4-gpu\n\n#### 2023-01-10\n1.开启omega-engine-v4-gpu版本开发，该版本将实现对omega-engine的CUDNN全面支持\n\n2.新增全局平均池化层实现\n\n3.将softmax与cross_entropy结合成softmax_with_cross_entropy作为损失函数使用(注意:使用softmax_with_cross_entropy损失函数,将不需要额外添加SoftmaxLayer)\n\n4.新增BN层对CUDNN支持(实现源码请移步BNCudnnKernel.java)\n\n5.后续版本将逐渐实现引擎对CUDNN支持\n\n#### 2023-04-13\n1.omega-engine-v4-gpu版本添加cudnn支持，整体推理与训练效率提升4倍\n\n2.优化bn层，激活函数层内存使用，整体内存显存占用减少30%~40%\n\n3.新增yolo目标识别实现，当前实现的yolo版本为yolov1版本(实现源码请移步YoloV1Test.java)\n\n4.新增图片绘制工具，帮助绘制预测框与回显图片\n\n5.后续版本将逐渐实现引擎对yolov3,yolov5等模型\n\n#### 2023-08-02 \n1.新增自动求导功能(包含cpu，gpu版本). \n\n2.新增multiLabel_soft_margin loss损失函数，yolo loss（Yolov3Loss）.\n\n3.新增yolov3目标识别实现，当前实现的yolo版本为yolov3版本(实现源码请移步YoloV3Test.java) . \n\n4.新增目标识别数据增强功能(随机裁剪边缘，随机上下反转，hsv变换等).\n\n5.使用自动求导功能实现MSE损失函数，代替原有的MSE loss. \n\n6.后续版本将逐渐实现引擎对yolov5,GAN,transformer等模型支持.\n\n#### 2023-12-01\n1.新增yolov4版本实现，具体结构请查看yolov4-tiny.cfg文件.\n\n2.新增yolov7版本实现，添加yolov7 loss实现,具体理论解析请查看readme.md文件. 
\n\n4.新增基于yolov7-tiny实现智能冰柜商品识别demo. \n\n5.SiLU激活函数实现. \n\n6.修改yoloLayer(yolo层)，根据yolov4版本实现scale缩放公式从原来exp(xy)+b修改成sigmoid(xy) * scale - 0.5 * (scale - 1)，该操作可一定程度减缓由于exp()函数带来的数值不稳定和无穷大NaN的现象. \n\n7.新增GAN实现，详情源码请查看com.omega.gan包，里面实现了手写体数字生成与动漫头像生成的示例.\n\n8.新增RNN循环神经网络模型实现，添加RNNBlockLayer层，该层实现了RNN,LSTM,GRU三种循环神经网络基础模块.\n\n9.后续版本将逐渐实现引擎对CycleGAN风格迁移,LSTM,GRU,transformer等模型支持. \n\n#### 2024-05-20\n1.新增循环神经网络LSTM模型实现（小说生成器demo）.\n\n2.新增循环神经网络seq2seq模型实现（中英文翻译器demo）.\n\n3.新增transformer家族GPT模型支持，新增MultHeadSelfAttention（多头自注意力机制）实现FastCausalSelfAttentionLayer、MultiHeadAttentionLayer，新增MLP层实现MLPLayer，新增EmbeddingIDLayer（输入数据为id），新增Layer Normalization层等transformer系列基础层.\n\n4.新增大语言nano GPT2模型实现（莎士比亚剧本生成demo）.\n\n5.新增大语言GPT2模型实现（中文聊天机器人demo）.\n\n6.新增大语言GPT2模型实现（中文医疗问答系统demo）.\n\n7.新增BPE（byte pair encode）tokenizer编码器实现.\n\n\n## 欢迎打扰\n\n### QQ：465973119\n### 技术交流QQ群：119593195\n### 电子邮箱：465973119@qq.com","![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cbeb2b0d6b2e.png)\n\n# 自己打造一个深度学习框架 for Java\n\n## 前言\n从 2016 年开始利用空余时间研究深度学习领域，由于工作的原因，最熟悉的编程语言就是 Java，所以框架的编程语言自然而然就使用了 Java。自己打造框架的初衷就是为了更加深入了解各个算法、模型、实现的原理和思路，同时让 Java 开发者更加容易接触 AI 领域。\n\n## 框架介绍\nOmega-AI：基于 Java 打造的深度学习框架，帮助你快速搭建神经网络，实现训练或测试模型，支持多 GPU 训练。框架目前支持 BP 神经网络（Back Propagation Neural Network）、卷积神经网络（Convolutional Neural Network）、循环神经网络（Recurrent Neural Network）、VGG16、ResNet、YOLO、LSTM、Transformer、GPT、Llama、Diffusion、Stable Diffusion 等模型的构建。目前引擎最新版本支持 CUDA 和 CUDNN 两种 GPU 加速方式，GPU 加速的环境配置与 jcuda 版本 jar 包的对应关系见下文；引擎中所实现的模型和算法，除 CUDA 和 CUDNN 相关依赖包之外，均不使用任何第三方依赖包。欢迎添加 QQ 群（119593195）进行技术讨论和交流，别忘了给 Omega-AI 项目点个 Star，项目需要你们的支持。\n\n## 官方网站：\n\n[https:\u002F\u002Fomega-ai.dromara.org](https:\u002F\u002Fomega-ai.dromara.org)\n\n## 
源码地址：\n\n[https:\u002F\u002Fgitee.com\u002Fdromara\u002Fomega-ai](https:\u002F\u002Fgitee.com\u002Fdromara\u002Fomega-ai)\n\n[https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI](https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI)\n\n[https:\u002F\u002Fgitcode.com\u002Fdromara\u002Fomega-ai](https:\u002F\u002Fgitcode.com\u002Fdromara\u002Fomega-ai)\n\n## 依赖\n由于 omega-engine-v4-gpu 加入了 jcuda 支持，所以 omega-engine-v4-gpu 需要安装与 jcuda 版本对应的 CUDA。如果您的机器安装的 CUDA 版本是 11.7.x，那么对应 omega-engine 需要引入的 jcuda 11.7.0 版本。\n\n## 快速开始\n##### 1. 检查当前 CUDA 版本\n```txt\nnvcc --version\n```\n##### 2. 安装 CUDA 与 CUDNN\nhttps:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-toolkit-archive\n##### 3. 引入或下载与当前 CUDA 版本对应的 omega-engine 包\n[win-cu-x.x 版本包列表](#版本依赖包)\n```xml\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu11.7-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n##### 4. 初始化 GPU 环境与释放显存\n```java\npublic static void main(String[] args) {\n    try {\n        \u002F\u002F初始化 GPU 环境获取 Context 对象\n        CUDAModules.initContext();\n        CNNTest cnn = new CNNTest();\n        cnn.cnnNetwork_cifar10();\n    } finally {\n        \u002F\u002F释放所有显存\n        CUDAMemoryManager.free();\n    }\n}\n```\n\n## 系统参数\n由于训练 VGG16 模型的参数比较庞大，所以在部署项目的时候需要对 JVM 内存进行调整。\n调整示例如下：-Xmx20480m -Xms20480m -Xmn10240m\n\n## Demo 展示\n\n### 卷积神经网络系列\n#### [基于卷积神经网络 MNIST 手写数字识别](http:\u002F\u002F120.237.148.121:8011\u002Fmnist)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_8eaa953c1c90.png)\n\n### YOLO 目标识别算法系列\n#### [基于 YOLO 
算法目标识别](#yolo-banana-detection-demo)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cad1064b1fa4.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_a76e2ee05e9f.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_0b355e2640d1.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_1d31d0d0ec81.png)\n\n#### [基于 YOLOv3 口罩佩戴识别](#yolov3-mask-demo 口罩佩戴识别)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_ed149eef8ad0.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_764cc62f2369.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_6dd24bdac5ee.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_556dfe9717d4.png)\n\n#### [基于 YOLOv3 安全帽佩戴识别](#yolov3-helmet-demo 安全帽佩戴识别)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_62beb03796b4.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_82c2bae9a3a5.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_1fa21febae89.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_7e0bc5ed88a4.png)\n\n#### [基于 YOLOv7 智能冰柜商品识别](#yolov7-sm-demo 智能冰柜商品识别)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_925e30dc7ee1.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_2a8f03b23938.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_464627ddc245.png)![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_c24865f07fd0.png)\n\n### GAN 对抗生成神经网络系列\n#### [基于 GAN 
生成对抗神经网络实现生成手写体数字图片](#gan-mnist-demo-生成手写数字)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_65b14e89d1e8.gif)\n\n#### [基于 DCGAN 生成对抗神经网络实现生成动漫头像图片](#dcgan-anime-demo-生成动漫头像)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_9ed314924911.gif)\n\n### 时序模型系列\n#### [基于 RNN 循环神经网络实现小说生成器](#rnn-中文小说生成器)\n##### 斗破苍穹前 50 章原文\n```txt\n    月如银盘，漫天繁星。山崖之巅，萧炎斜躺在草地之上，嘴中叼着一根青草，微微嚼动，任由那淡淡的苦涩在嘴中弥漫开来举起有些白皙的手掌，挡在眼前，目光透过手指缝隙，遥望着天空上那轮巨大的银月。唉想起下午的测试，萧炎轻叹了一口气，懒懒的抽回手掌，双手枕着脑袋，眼神有些恍惚十五年了呢低低的自喃声，忽然毫无边际的从少年嘴中轻吐了出来。在萧炎的心中，有一个仅有他自己知道的秘密：他并不是这个世界的人，或者说，萧炎的灵魂，并不属于这个世界，他来自一个名叫地球的蔚蓝星球，至于为什么会来到这里，这种离奇经过，他也无法解释，不过在生活了一段时间之后，他还是后知后觉地明白了过来：他穿越了！随着年龄的增长，对这块大陆，萧炎也是有了些模糊的了解大陆名为斗气大陆，大陆上并没有小说中常见的各系魔法，而斗气，才是大陆的唯一主调！在这片大陆上，斗气的修炼，几乎已经在无数代人的努力之下，发展到了巅峰地步，而且由于斗气的不断繁衍，最后甚至扩散到了民间之中，这也导致，斗气，与人类的日常生活，变得息息相关，如此，斗气在大陆中的重要性，更是变得无可替代！因为斗气的极端繁衍，同时也导致从这条主线中分化出了无数条斗气修炼之法，所谓手有长短，分化出来的斗气修炼之法，自然也是有强有弱。经过归纳统计，斗气大陆将斗气功法的等级，由高到低分为四阶十二级：天。地。玄。黄！而每一阶，又分初，中，高三级....................\n```\n##### 生成器效果 (pickTopN:N=3, 狗屁不通)\n```txt\n    
这个故事所造成的后果，便是造就了大批每天东在这样年，前，萧仅仅是自己的萧的摇了摇头，道，就等因为炼了，才造就出三的天修炼天，的同样非也是有些有些异的儿一直在倒是，废，的分了，然便想要不定斗气大月月月月的定。透明的，方价脸有多中为不可是。你说完师到后气会让对，我不可以时，他倒是在乎这种高到功法的斗技出其种有些不愿的吸手一道，斗气，萧家现上，是这事，不是这个修有程体的什纸契到这片的小脸！三老，我光在萧战一巴掌，双中，是一个灵到的常识。心吧？望着萧炎那些神有些恍点不想受你的美的，用气忽然，传进你耳枚的属散，另次我前便是对着身空的长出身也只有想起，不，萧炎哥以说的造，的时候，他的道：你修门成为自然是各种天材少年老，一声冷静的望着对面的在一，手中，了下来的事，，你向了角落阵嘲笑，微有着不份还眼角散的，萧炎牙齿在桌面，上下没被等级之人的强化，并且他这老难，还是难去人的说过别的功，而且这几年，还要是分，同，你的要求，这几年实条，听过你有一年的，，你成就是我萧炎的面庞，萧战叹了口沾染鲜之的手一，在白纸之名为斗成为你！你是没搞的鬼？嘿人当失也了口之事发。萧动那小娃冷的的老头，笑眯眯凝重的道，这是这事所的事，，你当还在一年时知，三年之前，你成年自然宛如疯天阶十属，所以，有云岚宗宗，更强有的么还年轻指的戒路，萧炎愕然了转。萧叔之时，萧炎却才有一星大者，在真真切切的。当药的，庞一瞪，手指惊颤的斗着萧炎心里一好气得俏脸忽些，不炎轻重的：自然也造就了他不的老师，云岚一宗，虽然有家，小脸，那双宛如轻疑般待遇这老然药，所的，这里，有种，都会身到这里许，自会不攻，微！父头一动容。丹有一种条件。首位的上，然必要进道，斗气大陆人，一种个灵魂，竟与什天，今事悔婚之种事，总的记不得萧，也将会被各方势力可惜手间中到时的变，那将老：想家老师？闻言一笑声，竟一手掌萧上，猛的之时响，再让你看看你就身也只清出了岁的事方与大一家，，萧叔叔，今天这种高深的吐了一口气少，那便是以事再次开始修炼中，萧后，萧炎会了一辈子不废物玩区，当然还是在炼黄之气！炼药之术之神，而有的得发了，那便请回好下去。药时，需前说自身属性的灵魂重却，火焰属于他便是一种发愣到斗者更让修！一老人手中有聚药老成年自己这几年，看来到，你以为了天他在云月上片刻下，也并不少在纳低嫣道对明公你，纳兰上然了起着些白老的魔此，你这本我还年轻间，还今的你我已经知为，至九品的先是，萧炎那些回成，无奈的身视了可\n```\n\n#### [基于 SEQ2SEQ 模型实现英文翻译器](#seq2seq-英文翻译器)\n\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_d1d66e1d9a2b.png)\n\n### GPT 系列\n#### [基于微型 GPT2 架构实现小说生成器](#gpt-中文小说生成器)\n##### 斗破苍穹前 50 章原文\n```txt\n    月如银盘，漫天繁星。山崖之颠，萧炎斜躺在草地之上，嘴中叼中一根青草，微微嚼动，任由那淡淡的苦涩在嘴中弥漫开来举起有些白皙的手掌，挡在眼前，目光透过手指缝隙，遥望着天空上那轮巨大的银月。唉想起下午的测试，萧炎轻叹了一口气，懒懒的抽回手掌，双手枕着脑袋，眼神有些恍惚十五年了呢低低的自喃声，忽然毫无边际的从少年嘴中轻吐了出来。在萧炎的心中，有一个仅有他自己知道的秘密：他并不是这个世界的人，或者说，萧炎的灵魂，并不属于这个世界，他来自一个名叫地球的蔚蓝星球，至于为什么会来到这里，这种离奇经过，他也无法解释，不过在生活了一段时间之后，他还是后知后觉的明白了过来：他穿越了！随着年龄的增长，对这块大陆，萧炎也是有了些模糊的了解大陆名为斗气大陆，大陆上并没有小说中常见的各系魔法，而斗气，才是大陆的唯一主调！在这片大陆上，斗气的修炼，几乎已经在无数代人的努力之下，发展到了巅峰地步，而且由于斗气的不断繁衍，最后甚至扩散到了民间之中，这也导致，斗气，与人类的日常生活，变得息息相关，如此，斗气在大陆中的重要性，更是变得无可替代！因为斗气的极端繁衍，同时也导致从这条主线中分化出了无数条斗气修炼之法，所谓手有长短，分化出来的斗气修炼之法，自然也是有强有弱。经过归纳统计，斗气大陆将斗气功法的等级，由高到低分为四阶十二级：天。地。玄。黄！而每一阶，又分初，中，高三级....................\n```\n##### 生成器效果 (embedDim:128,max_len:64,headNum:8,decoderNum:8,pickTopN:N=1,颇为接近原文)\n```txt\n    
萧炎的目光，依然是若无其事的，而即使是这样的话，也是一位炼药师再好气的确是。萧炎嘴角一裂，却是忽然话音有心急促的抽动，让得少年呼吸微微急促。少年缓缓抬起头，目光淡然的转过身来，眼瞳中浮现许些阴冷，让得无奈的摊贩前，毫不留下无视。你这即将衣心中的淡然，不过，以萧炎此刻比看来，对方的男子接受尽了重伤，看得体内那些通气的一些奇异的能量气足够。斗气的带动在体内之后，萧炎的温养我的骨骼。眼眸子中，无奈静的深陷入了一些铁片之中的深蓝色物体，而在空中纠缠成的印，可见加列家族的两人打发现，在她的名声中，正常生长，比他毛而不可行走，在恐怕的狼头佣兵团，绝对会遭受到毁灭般的打击。只要一想到日后那铺天盖地的报复，穆力心头便是杀意狂涌。听得穆力的喝声，萧炎嘴角挑起一抹嘲讽与森然，嘴唇微动：爆！嘭！又是一声闷响乍然响起，不过这记闷响，竟然是从穆力的身体之内传出。噗嗤！忽然在体内爆炸的劲气，让得穆力脸色瞬间惨白，原来，脚步骤然一顿，身在半空划起一道抛物线。杀了他！飞起的瞬间，这名佣兵急忙冲着被这突然变故搞得发愣了一下，旋即满脸垂涎的笑问道，他对那位脸庞上半晌啊，脑袋笑容的带着，小医仙两人影一闪电般的对着黑暗地面的巨型与肉体型洒进，巨剑的剑柄，萧炎身便是一道地面，任由铁剑携带着劲气掠来。在魔猿身前一次攻击之下，一地面几米距离时，一团森白火焰猛的凭空腾现，箭支穿进火焰中，瞬间，便是化为了漆黑粉末。望着这一幕，加列怒脸色微变，心头泛起一股不安，看来这位黑袍人，也是一位不弱于大斗师的强者。缓缓吐了一口气，加列怒从身后的侍从手中拿起一把深蓝色的长枪，身体之上，淡淡的蓝色斗气渗发而出，顿时，附近的空气都为之湿润了不少，显然，他的斗气功法是偏向略微阴寒的水属性。手掌紧握着长枪，加列怒死死的盯着黑袍人，身体在略微调整之后，脚掌在地面突兀一踏，身形不断的对着萧炎两人群中挤去。如此多的人数进入魔兽山脉，普通魔兽定然不敢轻易袭击，如此，生命也就多了几分保障，只要等自己在路途中寻找到前段何种级别搞定，不过却取三年了，可以再出现在学院里吃了亏，还得怪我们。那名叫做戈剌的青年，上前一步，对着萧炎不怀好意的笑道。缓缓的吐了一口气，在众人的注视下，萧炎无奈的耸了耸肩，上前两步，在行至萧玉身旁时，忽然手臂一伸，狠狠的揽住 那柔软的纤腰，将之勒进怀中。被萧炎骤然偷袭，萧玉先是一愣，紧接着俏脸布满晕红，考虑到罗布在一旁，她只得停止挣扎\n```\n\n#### [基于 GPT2 架构实现聊天机器人](#gpt-中文聊天机器人)\n##### 训练数据：50W 日常聊天语料\n###### 备注：以下是训练数据事例，每一个回复以\" \"空格分隔，每一段对话以换行\u002Fn 分隔，以一段对话为一条训练数据\n```txt\n少侠好眼力\t少侠啥时候来北京\t遥遥无期你又没时间\t\n哥怎么这么帅\t是吗？谢谢嘞\t和小鲜肉一样。嫩嫩的\t\n你不怕掉下去啊\t这是海拔米我觉得不够高\t注意安全\t\n你这文案写的我有点感动是怎么回事\t哭没得\t没有咧\t\n都考上\t小仙女决定满足你这个愿望\t因为我有魔法棒\t\n啥时候看演唱会\t上海站好像延期了，不知延到啥时候本来是五月中旬\t靠你了\t\n大哥难道是求婚啦！\t不不不大哥还没有这么速度呢随便拼着玩儿的\t嘻嘻好看\t\n中午老大爷遛弯去了么\t对呀，哈哈。\t转发这条咸鱼，今年必有好事儿发生。\n我的爱情独白就是清空我的购物车\t沉迷于一夜暴富不可自拔的身家过百元的贵妇\t只想发财只想发财只想发财，对脱单好无兴趣\t\n自己用啊\t我有\t可是那张不用钱的嘢\t那要是里面没钱呢\t无钱再刷自己的卡\t哈哈哈哈哈哈哈哈这样就很不道德了\t没有没有\t\t\n第一张是藤椒鸡吗！\t嘻嘻嘻对一家好次川菜的椒麻鸡！\t这几天牙疼但是一直在想这种辣辣的鸡\t嘤嘤嘤就是这种时候会想吃辣\t\n```\n###### 模型参数\n```java\n\u002F\u002F gpt 124M 参数量\nmaxLen = 128  \u002F\u002F最大 token 数\nembedDim = 768 \u002F\u002Fembeding 编码维度\nheadNum = 12  \u002F\u002F多头注意力头数\ndecoderNum = 12  \u002F\u002F解码器层数\nlearnRate = 0.0001f  \u002F\u002F学习率\nepoch = 3 \u002F\u002F循环训练次数\ndropoutRate = 0.1f\ntrain_data = 450000 \u002F\u002F训练集数量\nvail_data = 50000  \u002F\u002F验证集数量\ntrain_loss 
= 1.08f \u002F\u002F最终训练集损失在 1.0 左右\nvail_loss = 1.2f  \u002F\u002F最终验证集损失在 1.2 左右\n````\n###### 推理效果图\n![GPT2 聊天机器人](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_e2f41e7240aa.png)\n\n#### [基于 gpt2-medium 实现医疗问答系统](#gpt-医疗问答系统)\n##### 训练数据：20W 医疗问答语料\n###### 模型参数\n```java\n\u002F\u002F gpt2-medium 350M 参数量\nmaxLen = 256  \u002F\u002F最大 token 数\nembedDim = 1024 \u002F\u002Fembeding 编码维度\nheadNum = 16  \u002F\u002F多头注意力头数\ndecoderNum = 24  \u002F\u002F解码器层数\nlearnRate = 0.001f  \u002F\u002F初始学习率\nepoch = 5 \u002F\u002F循环训练次数\ndropoutRate = 0.1f\ntrain_loss = 1.56f \u002F\u002F最终训练集损失在 1.5 左右\nvail_loss = 1.8f  \u002F\u002F最终验证集损失在 1.8 左右\n````\n###### 推理效果图\n![GPT2 医疗问答系统](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cc07586a5361.png)\n\n#### [基于 llama2-medium 实现医疗问答系统](#llama2-医疗问答系统)\n##### 预训练数据：Wiki 中文百科，BaiduBaiKe,shibing624\u002Fmedica\n##### 预训练权重文件：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1DobIvoYH_Yr8cv60VjCRng?pwd=euvp\n##### 微调训练数据（SFT）：shibing624\u002Fmedical,HuatuoGPT-sft-data-v1,DISC-Med-SFT,ChatMed\n##### 微调权重文件：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1dve8XEk2o0lcoL36MdPhQg?pwd=wptj\n##### tokenizer（SentencePiece）：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Wx6Bcchd2UodU3YtEzWaYw?pwd=ehew \n##### 预处理后数据集：[数据集下载](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m-Of8dwOQ5kYuLZVYpw3qA?pwd=7c92)\n###### 模型参数\n```java\n\u002F\u002F llama2-chatglm 92M 参数量\nmaxLen = 512  \u002F\u002F最大 token 数\nembedDim = 512 \u002F\u002Fembeding 编码维度\nheadNum = 8  \u002F\u002F多头注意力头数\ndecoderNum = 8  \u002F\u002F解码器层数\nmaxLearnRate = 0.0003f  \u002F\u002F最大学习率\nminLearnRate = 0.0001f  \u002F\u002F最小学习率\nepoch = 1 \u002F\u002F循环训练次数\ndropoutRate = 0.0f\ntrain_loss = 2.0f \u002F\u002F最终训练集损失在 2.0 左右\n````\n###### 推理效果图\n![Llama2 医疗问答系统](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_7288c9623957.png)\n\n#### [基于 llama3.1 实现对话机器人](#llama3.1-对话机器人)\n##### tokenizer（BPE）\n###### 
模型参数\n```java\n\u002F\u002F llama3.1 26M 参数量\nmaxLen = 512\u002F\u002F最大 token 数\nembedDim = 512\u002F\u002Fembeding 编码维度\nheadNum = 16  \u002F\u002F多头注意力头数\nnKVHeadNum = 8 \u002F\u002Fkv 注意力头数\ndecoderNum = 8  \u002F\u002F解码器层数\nmaxLearnRate = 1e-4f  \u002F\u002F最大学习率\nminLearnRate = 1e-5f  \u002F\u002F最小学习率\nepoch = 5 \u002F\u002F循环训练次数\ndropoutRate = 0.0f\npre_train_loss = 2.3f \u002F\u002F预训练最终训练集损失在 2.3 左右\nsft_train_loss = 1.6f \u002F\u002F微调训练最终训练集损失在 1.6 左右\n````\n###### 推理效果图\n![基于 llama3.1 实现对话机器人](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_44545b9ba08e.png)\n\n### Diffusion model 扩散模型系列\n#### [基于 diffusion 扩散模型实现生成动漫头像图片](#diffusion-动漫头像生成)\n#### 训练过程演示图\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_68c244f59ab0.gif)\n#### 50 次循环训练后反向去噪生成过程图\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_f8347534e8d5.gif)\n![输入图片说明](images\u002Fdiffusion_11(2)_anime.gif)\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_08dd6350ad86.gif)\n![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_50efa98d6529.gif)\n\n#### [基于 stable diffusion 模型实现文生图](#StableDiffusion 文生图)\n#### VQ-VAE 演示图\n| 原图 | VQ-VAE | 原图 | VQ-VAE |\n|----|--------|----|--------|\n|  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_353bccd7e8bc.png)  |    ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_fd1166c9609e.png)    |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_3de259a5feb4.png)  |  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_613f31266189.png)  |\n|  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_42c936ed0e31.png)  |    
![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_3dc0db6d2e39.png)    |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_45a34db47c0b.png)  |  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_82c0ee5ee512.png)  |\n\n\n#### 文生图演示图\n| 文本 1 | 图片 1 | 文本 2 | 图片 2 |\n|-----|-----|-----|-----|\n|   a highly detailed anime landscape,big tree on the water, epic sky,golden grass,detailed.  |  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_b2b9f5812bbc.png)   |   3d art of a golden tree in the river，with intricate flora and flowing water，detailed.  |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_9f0a822ca21d.png)  |\n|   a vibrant anime mountain lands  |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_f0971a3a6c98.png)  |  a dark warrior in epic armor stands among glowing crimson leaves in a mystical forest.   |   ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_cac21b1c9ae3.png)  |\n|   cute fluffy panda, anime, ghibli style, pastel colors, soft shadows, detailed fur, vibrant eyes, fantasy setting, digital art  | ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_f6160dcfd3ee.png)    | a epic city,3d,detailed.    
|  ![输入图片说明](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_readme_6adb089090e4.png)   |\n\n#### 训练过程与模型参数\n##### 1.下载数据集：open-image-preferences-v1-more-results\n\n##### 2.训练 VQ-VAE [训练脚本](#VQVAE)\n```java\n\u002F\u002FVQ-VAE 模型参数 \nz_dims=128 \u002F\u002F编码层输出通道数与解码层输入通道数\nlatendDim=4 \u002F\u002F隐空间通道数\nnum_vq_embeddings=512 \u002F\u002Fvq 码表嵌入向量个数\nnum_res_blocks=2 \u002F\u002F每个 resblock 层数\nch_mult=1,2,2,4 \u002F\u002F通道递增倍数\nch=128  \u002F\u002F通道数基数，每个编码或解码模型通道数=ch_mult[i] * ch\n```\n##### 3.加载 CLIP 模型：clip-vit-base-patch32\n```java\n\u002F\u002Fclip-vit-base-patch32 模型参数 \nmaxContextLen=77 \u002F\u002F最大支持文本 token 长度\nvocabSize=49408  \u002F\u002F词表大小\nheadNum=8  \u002F\u002F多头注意力头数\nn_layers=12  \u002F\u002FCLIPEncoder 编码层层数\ntextEmbedDim=512  \u002F\u002F文本嵌入向量维度\n```\n##### 4.训练 Unet [训练脚本](#StableDiffusion 文生图)\n```java\n\u002F\u002FDiffusionUNetCond2 模型参数\nunetHeadNum=8 \u002F\u002F多头注意力头数\ndownChannels=128,256,512,768  \u002F\u002F网络通道数\nnumLayer=2 \u002F\u002F每个 resblock 层数\ntimeSteps=1000 \u002F\u002F时间序列总数\ntEmbDim=512  \u002F\u002F时间序列嵌入向量维度\nlatendSize=32  \u002F\u002F隐空间维度\ngroupNum=32  \u002F\u002Fgroup_norm 分组数\n```\n\n\n## 功能介绍\n#### 支持的网络层类型：\n\nFullylayer 全连接层\n\nConvolutionLayer 卷积层\n\nConvolutionTransposeLayer 反卷积层\n\nPoolingLayer 池化层(maxpooling,meanpooling)\n\nAVGPooingLayer 全局平均池化层\n\nEmbeddingLayer 向量映射层 (将高维度词向量映射成低维度向量) 该层的输入数据为 one-hot 编码后的数据\n\nEmbeddingIDLayer 向量映射层 (将高维度词向量映射成低维度向量)\n\nRNNLayer 循环神经网络层\n\nLSTMLayer 长短记忆网络层\n\nRouteLayer 路由层\n\nUPSampleLayer 上采样层\n\nYoloLayer yolo 层\n\nFastCausalSelfAttentionLayer 多头自注意力层\n\nMLPLayer gpt2-mlp 层\n\nTransformerBlock transformer 基础块\n\n#### 激活函数层\n\nSoftmaxLayer (softmax 激活函数)\n\nReluLayer\n\nLeakyReluLayer\n\nTanhLayer\n\nSigmodLayer\n\nSiLULayer\n\nGeLULayer\n\n#### 归一化层\n\nBNLayer (Batch Normalization) 批归一化\n\nLNLayer (Layer Normalization) 层归一化\n\n#### 正则化\n\nDropoutLayer\n#### 优化器\n\nMomentum\n\nAdam\n\nAdamw\n\nSgd (sgd with momentum)\n\nRMSProp\n\n#### 
训练器\n\nBGDOptimizer (批量梯度下降法)\n\nMBSGDOptimizer (小批量随机梯度下降)\n\nSGDOptimizer（随机梯度下降算法）\n\n#### 损失函数 (loss function)\n\nMSELoss (平方差损失函数)\n\nCrossEntropyLoss (交叉熵损失函数)\n\nCrossEntropyLossWithSoftmax (交叉熵损失 + softmax)\n\nMultiLabelSoftMargin (多标签损失函数)\n\n#### 学习率更新器（LearnRateUpdate）\n\nNONE (固定学习率)\n\nLR_DECAY (decay)\n\nGD_GECAY (gd_decay)\n\nCONSTANT(gd_decay)\n\nRANDOM [Math.pow(RandomUtils.getInstance().nextFloat(), power) * this.lr]\n\nPOLY [this.lr * Math.pow((1.0f - (batchIndex * 1.0f \u002F trainTime \u002F dataSize * batchSize)), power)]\n\nSTEP [this.lr * Math.pow(this.scale, batchIndex \u002F step)]\n\nEXP [this.lr * Math.pow(this.gama, batchIndex)]\n\nSIG [this.lr \u002F (1 + Math.pow(Math.E, this.gama * (batchIndex - step)))]\n\n#### 数据加载器\n\n.bin (二进制数据文件)\n\n.idx3-ubyte\n\n.txt\n\n## 使用说明\n\n### 自带的数据集\n\niris（鸢尾花数据集）\n\nmnist（手写数字数据集）\n\ncifar_10 （cifar_10 数据集）\n\n## 附加数据集\n[cifar-10](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1EC_h_iGKUfDu6Ld-8ltdsA?pwd=23g1)\n\n[banana-detection](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1mUr12FJm9OGbsObqfjZ81Q?pwd=jish)\n\n[vailCode](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F11wZY9gQQ9OuoViw11IW6BQ?pwd=2rdt)\n\n[helmet](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pbTaDHoRzhV-kuWoXOCPqw?pwd=y8ij)\n\n[mask](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1D3zTYTiNYmtU6x7Ui9ej_A?pwd=r4o3)\n\n[自动售货机数据集 sm](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10o8IZwD-WmChKtmzzg9q7w?pwd=gt8p )\n\n[大语言模型训练数据集](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1FKkg9h4awphRtQ8yZaeH3A?pwd=ywax)\n\n## 数据集成绩\n\niris epoch:5 bp 神经网络 [3 层全连接层]  测试数据集准确率 100%\n\nmnist epoch:10 alexnet 测试数据集准确率 98.6% \n\ncifar_10 epoch:50 alexnet 测试数据集准确率 76.6%\n\ncifar_10 epoch:50 vgg16 测试数据集准确率 86.45%\n\ncifar_10 epoch:300 resnet18 [batchSize:128,初始 learningRate:0.1,learnRateUpdate:GD_GECAY,optimizer:adamw] 数据预处理 [randomCrop,randomHorizontalFlip,cutout,normalize] 测试数据集准确率 91.23% \n\n## 示例代码\n\n#### bp iris 示例\n\n```java\npublic void bpNetwork_iris() 
{\n\t\t\u002F\u002F TODO Auto-generated method stub\n\n\t\t\u002F**\n\t\t * 读取训练数据集\n\t\t *\u002F\n\t\tString iris_train = \"\u002Fdataset\u002Firis\u002Firis.txt\";\n\t\t\n\t\tString iris_test = \"\u002Fdataset\u002Firis\u002Firis_test.txt\";\n\t\t\n\t\tString[] labelSet = new String[] {\"1\",\"-1\"};\n\t\t\n\t\tDataSet trainData = DataLoader.loalDataByTxt(iris_train, \",\", 1, 1, 4, 2,labelSet);\n\t\tDataSet testData = DataLoader.loalDataByTxt(iris_test, \",\", 1, 1, 4, 2,labelSet);\n\t\t\n\t\tSystem.out.println(\"train_data:\"+JsonUtils.toJson(trainData));\n\t\n\t\tBPNetwork netWork = new BPNetwork(new SoftmaxWithCrossEntropyLoss());\n\t\t\n\t\tInputLayer inputLayer = new InputLayer(1,1,4);\n\t\t\n\t\tFullyLayer hidden1 = new FullyLayer(4, 40);\n\t\t\n\t\tReluLayer active1 = new ReluLayer();\n\t\t\n\t\tFullyLayer hidden2 = new FullyLayer(40, 20);\n\t\t\n\t\tReluLayer active2 = new ReluLayer();\n\t\t\n\t\tFullyLayer hidden3 = new FullyLayer(20, 2);\n\n\t\tSoftmaxWithCrossEntropyLayer hidden4 = new SoftmaxWithCrossEntropyLayer(2);\n\t\t\n\t\tnetWork.addLayer(inputLayer);\n\t\tnetWork.addLayer(hidden1);\n\t\tnetWork.addLayer(active1);\n\t\tnetWork.addLayer(hidden2);\n\t\tnetWork.addLayer(active2);\n\t\tnetWork.addLayer(hidden3);\n\t\tnetWork.addLayer(hidden4);\n\n\t\ttry {\n\t\t\t\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 8, 0.00001d, 10, LearnRateUpdate.NONE);\n\t\t\n\t\t\toptimizer.train(trainData);\n\t\t\t\n\t\t\toptimizer.test(testData);\n\t\t\t\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO Auto-generated catch block\n\t\t\te.printStackTrace();\n\t\t}\n\n\t}\n```\n\n#### CNN (卷积神经网络) MNIST 示例\n\n```java\npublic void cnnNetwork_mnist() {\n\t\t\u002F\u002F TODO Auto-generated method stub\n\t\t\n\t\ttry {\n\n\t\t\t\u002F**\n\t\t\t * 读取训练数据集\n\t\t\t *\u002F\n\t\t\tString mnist_train_data = \"\u002Fdataset\u002Fmnist\u002Ftrain-images.idx3-ubyte\";\n\t\t\t\n\t\t\tString mnist_train_label = 
\"\u002Fdataset\u002Fmnist\u002Ftrain-labels.idx1-ubyte\";\n\t\t\t\n\t\t\tString mnist_test_data = \"\u002Fdataset\u002Fmnist\u002Ft10k-images.idx3-ubyte\";\n\t\t\t\n\t\t\tString mnist_test_label = \"\u002Fdataset\u002Fmnist\u002Ft10k-labels.idx1-ubyte\";\n\t\t\t\n\t\t\tString[] labelSet = new String[] {\"0\",\"1\",\"2\",\"3\",\"4\",\"5\",\"6\",\"7\",\"8\",\"9\"};\n\t\t\t\n\t\t\tResource trainDataRes = new ClassPathResource(mnist_train_data);\n\n\t\t\tResource trainLabelRes = new ClassPathResource(mnist_train_label);\n\t\t\t\n\t\t\tResource testDataRes = new ClassPathResource(mnist_test_data);\n\t\t\t\n\t\t\tResource testLabelRes = new ClassPathResource(mnist_test_label);\n\t\t\t\n\t\t\tDataSet trainData = DataLoader.loadDataByUByte(trainDataRes.getFile(), trainLabelRes.getFile(), labelSet, 1, 1 , 784, true);\n\t\t\t\n\t\t\tDataSet testData = DataLoader.loadDataByUByte(testDataRes.getFile(), testLabelRes.getFile(), labelSet, 1, 1 , 784, true);\n\n\t\t\tint channel = 1;\n\t\t\t\n\t\t\tint height = 28;\n\t\t\t\n\t\t\tint width = 28;\n\t\t\t\n\t\t\tCNN netWork = new CNN(new SoftmaxWithCrossEntropyLoss(), UpdaterType.momentum);\n\t\t\t\n\t\t\tnetWork.learnRate = 0.001d;\n\t\t\t\n\t\t\tInputLayer inputLayer = new InputLayer(channel, 1, 784);\n\t\t\t\n\t\t\tConvolutionLayer conv1 = new ConvolutionLayer(channel, 6, width, height, 5, 5, 2, 1, false);\n\t\t\t\n\t\t\tBNLayer bn1 = new BNLayer();\n\t\t\t\n\t\t\tLeakyReluLayer active1 = new LeakyReluLayer();\n\t\t\t\n\t\t\tPoolingLayer pool1 = new PoolingLayer(conv1.oChannel, conv1.oWidth, conv1.oHeight, 2, 2, 2, PoolingType.MAX_POOLING);\n\t\t\t\n\t\t\tConvolutionLayer conv2 = new ConvolutionLayer(pool1.oChannel, 12, pool1.oWidth, pool1.oHeight, 5, 5, 0, 1, false);\n\t\t\t\n\t\t\tBNLayer bn2 = new BNLayer();\n\t\t\t\n\t\t\tLeakyReluLayer active2 = new LeakyReluLayer();\n\t\t\t\n\t\t\tDropoutLayer drop1 = new DropoutLayer(0.5d);\n\t\t\t\n\t\t\t\n\t\t\tPoolingLayer pool2 = new PoolingLayer(conv2.oChannel, conv2.oWidth, 
conv2.oHeight, 2, 2, 2, PoolingType.MAX_POOLING);\n\n\t\t\tint fInputCount = pool2.oChannel * pool2.oWidth * pool2.oHeight;\n\t\t\t\n\t\t\tint inputCount = (int) (Math.sqrt((fInputCount) + 10) + 10);\n\t\t\t\n\t\t\tFullyLayer full1 = new FullyLayer(fInputCount, inputCount, false);\n\n\t\t\tBNLayer bn3 = new BNLayer();\n\t\t\t\n\t\t\tLeakyReluLayer active3 = new LeakyReluLayer();\n\t\t\t\n\t\t\tFullyLayer full2 = new FullyLayer(inputCount, 10);\n\t\t\t\n\t\t\tSoftmaxWithCrossEntropyLayer softmax = new SoftmaxWithCrossEntropyLayer(10);\n\n\t\t\tnetWork.addLayer(inputLayer);\n\t\t\tnetWork.addLayer(conv1);\n\t\t\tnetWork.addLayer(bn1);\n\t\t\tnetWork.addLayer(active1);\n\t\t\tnetWork.addLayer(pool1);\n\t\t\tnetWork.addLayer(conv2);\n\t\t\tnetWork.addLayer(bn2);\n\t\t\tnetWork.addLayer(active2);\n\t\t\tnetWork.addLayer(drop1);\n\t\t\tnetWork.addLayer(pool2);\n\t\t\tnetWork.addLayer(full1);\n\t\t\tnetWork.addLayer(bn3);\n\t\t\tnetWork.addLayer(active3);\n\t\t\tnetWork.addLayer(full2);\n\t\t\tnetWork.addLayer(softmax);\n\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 10, 0.0001d, 96, LearnRateUpdate.NONE);\n\n\t\t\tlong start = System.currentTimeMillis();\n\t\t\t\n\t\t\toptimizer.train(trainData);\n\t\t\t\n\t\t\toptimizer.test(testData);\n\t\t\t\n\t\t\tSystem.out.println(((System.currentTimeMillis() - start) \u002F 1000) + \"s.\");\n\n\t\t\t\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t\t\n\t}\n````\n#### ResNet (残差网络) CIFAR-10 示例\n\n```java\n\tpublic void resnet18_cifar10() {\n\t\t\u002F\u002F TODO Auto-generated method stub\n\n\t\ttry {\n\n\t\t\tString[] labelSet = new String[] {\"airplane\",\"automobile\",\"bird\",\"cat\",\"deer\",\"dog\",\"frog\",\"horse\",\"ship\",\"truck\"};\n\t    \t\n\t\t\tString[] train_data_filenames = new String[] 
{\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_1.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_2.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_3.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_4.bin\",\n\t\t\t\t\t\"H:\u002Fdataset\u002Fcifar-10\u002Fdata_batch_5.bin\"\n\t\t\t};\n\t\t\t\n\t\t\tString test_data_filename = \"H:\u002Fdataset\u002Fcifar-10\u002Ftest_batch.bin\";\n\t\t\t\n\t\t\tfloat[] mean = new float[] {0.491f, 0.482f, 0.446f};\n\t\t\tfloat[] std = new float[] {0.247f, 0.243f, 0.261f};\n\t\t\t\n\t\t\tDataSet trainData = DataLoader.getImagesToDataSetByBin(train_data_filenames, 10000, 3, 32, 32, 10, labelSet, true);\n\n\t\t\tDataSet testData = DataLoader.getImagesToDataSetByBin(test_data_filename, 10000, 3, 32, 32, 10, labelSet, true, mean, std);\n\t\t\t\n\t\t\tSystem.out.println(\"data is ready.\");\n\n\t\t\tint channel = 3;\n\t\t\t\n\t\t\tint height = 32;\n\t\t\t\n\t\t\tint width = 32;\n\t\t\t\n\t\t\tCNN netWork = new CNN(LossType.softmax_with_cross_entropy, UpdaterType.adamw);\n\t\t\t\n\t\t\tnetWork.CUDNN = true;\n\t\t\t\n\t\t\tnetWork.learnRate = 0.1f;\n\t\t\t\n\t\t\tInputLayer inputLayer = new InputLayer(channel, height, width);\n\t\t\t\n\t\t\tConvolutionLayer conv1 = new ConvolutionLayer(channel, 64, width, height, 3, 3, 1, 1, false);\n\t\t\t\n\t\t\tBNLayer bn1 = new BNLayer();\n\t\t\t\n\t\t\tReluLayer active1 = new ReluLayer();\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block1  64 * 32 * 32\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl1 = new BasicBlockLayer(conv1.oChannel, 64, conv1.oHeight, conv1.oWidth, 1, netWork);\n\t\t\tReluLayer active2 = new ReluLayer();\n\n\t\t\t\u002F**\n\t\t\t * block2  64 * 32 * 32\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl2 = new BasicBlockLayer(bl1.oChannel, 64, bl1.oHeight, bl1.oWidth, 1, netWork);\n\t\t\tReluLayer active3 = new ReluLayer();\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block3  128 * 16 * 16\n\t\t\t * downSample 32 \u002F 2 = 16\n\t\t\t 
*\u002F\n\t\t\tBasicBlockLayer bl3 = new BasicBlockLayer(bl2.oChannel, 128, bl2.oHeight, bl2.oWidth, 2, netWork);\n\t\t\tReluLayer active4 = new ReluLayer();\n\n\t\t\t\u002F**\n\t\t\t * block4  128 * 16 * 16\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl4 = new BasicBlockLayer(bl3.oChannel, 128, bl3.oHeight, bl3.oWidth, 1, netWork);\n\t\t\tReluLayer active5 = new ReluLayer();\n\n\t\t\t\u002F**\n\t\t\t * block5  256 * 8 * 8\n\t\t\t * downSample 16 \u002F 2 = 8\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl5 = new BasicBlockLayer(bl4.oChannel, 256, bl4.oHeight, bl4.oWidth, 2, netWork);\n\t\t\tReluLayer active6 = new ReluLayer();\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block6  256 * 8 * 8\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl6 = new BasicBlockLayer(bl5.oChannel, 256, bl5.oHeight, bl5.oWidth, 1, netWork);\n\t\t\tReluLayer active7 = new ReluLayer();\n\n\t\t\t\u002F**\n\t\t\t * block7  512 * 4 * 4\n\t\t\t * downSample 8 \u002F 2 = 4\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl7 = new BasicBlockLayer(bl6.oChannel, 512, bl6.oHeight, bl6.oWidth, 2, netWork);\n\t\t\tReluLayer active8 = new ReluLayer();\n\t\t\t\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block8  512 * 4 * 4\n\t\t\t *\u002F\n\t\t\tBasicBlockLayer bl8 = new BasicBlockLayer(bl7.oChannel, 512, bl7.oHeight, bl7.oWidth, 1, netWork);\n\t\t\tReluLayer active9 = new ReluLayer();\n\t\t\t\n\t\t\tAVGPoolingLayer pool2 = new AVGPoolingLayer(bl8.oChannel, bl8.oWidth, bl8.oHeight);\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * fully  512 * 1 * 1\n\t\t\t *\u002F\n\t\t\tint fInputCount = pool2.oChannel * pool2.oWidth * pool2.oHeight;\n\t\t\t\n\t\t\tFullyLayer full1 = new FullyLayer(fInputCount, 10);\n\n\t\t\tnetWork.addLayer(inputLayer);\n\t\t\tnetWork.addLayer(conv1);\n\t\t\tnetWork.addLayer(bn1);\n\t\t\tnetWork.addLayer(active1);\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block1  64\n\t\t\t *\u002F\n\t\t\tnetWork.addLayer(bl1);\n\t\t\tnetWork.addLayer(active2);\n\t\t\tnetWork.addLayer(bl2);\n\t\t\tnetWork.addLayer(active3);\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block2  
128\n\t\t\t *\u002F\n\t\t\tnetWork.addLayer(bl3);\n\t\t\tnetWork.addLayer(active4);\n\t\t\tnetWork.addLayer(bl4);\n\t\t\tnetWork.addLayer(active5);\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block3  256\n\t\t\t *\u002F\n\t\t\tnetWork.addLayer(bl5);\n\t\t\tnetWork.addLayer(active6);\n\t\t\tnetWork.addLayer(bl6);\n\t\t\tnetWork.addLayer(active7);\n\t\t\t\n\t\t\t\u002F**\n\t\t\t * block4  512\n\t\t\t *\u002F\n\t\t\tnetWork.addLayer(bl7);\n\t\t\tnetWork.addLayer(active8);\n\t\t\tnetWork.addLayer(bl8);\n\t\t\tnetWork.addLayer(active9);\n\t\t\t\n\t\t\tnetWork.addLayer(pool2);\n\t\t\tnetWork.addLayer(full1);\n\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 250, 0.001f, 128, LearnRateUpdate.GD_GECAY, false);\n\n\t\t\tlong start = System.currentTimeMillis();\n\t\t\t\n\t\t\toptimizer.train(trainData, testData, mean, std);\n\n\t\t\toptimizer.test(testData);\n\t\t\t\n\t\t\tSystem.out.println(((System.currentTimeMillis() - start) \u002F 1000) + \"s.\");\n\t\t\t\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}finally {\n\n\t\t\ttry {\n\t\t\t\tCUDAMemoryManager.freeAll();\n\t\t\t} catch (Exception e) {\n\t\t\t\t\u002F\u002F TODO Auto-generated catch block\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t\t\n\t\t}\n\t\t\n\t}\n````\n#### YOLO 香蕉检测演示\n``` java\npublic void yolov1_tiny() {\n\t\t\n\t\ttry {\n\t\t\t\n\t\t\tString cfg_path = \"H:\u002Fvoc\u002Ftrain\u002Fyolov1-tiny.cfg\";\n\t\t\t\n\t\t\tString trainPath = \"H:\\\\voc\\\\banana-detection\\\\bananas_train\\\\images\";\n\t\t\tString trainLabelPath = \"H:\\\\voc\\\\banana-detection\\\\bananas_train\\\\label.csv\";\n\t\t\t\n\t\t\tString testPath = \"H:\\\\voc\\\\banana-detection\\\\bananas_val\\\\images\";\n\t\t\tString testLabelPath = \"H:\\\\voc\\\\banana-detection\\\\bananas_val\\\\label.csv\";\n\t\t\t\n\t\t\tYoloDataLoader trainData = new YoloDataLoader(trainPath, trainLabelPath, 1000, 3, 256, 256, 5, LabelType.csv, true);\n\t\t\t\n\t\t\tYoloDataLoader vailData 
= new YoloDataLoader(testPath, testLabelPath, 100, 3, 256, 256, 5, LabelType.csv, true);\n\t\t\t\n\t\t\tDataSet trainSet = formatToYolo(trainData.getDataSet());\n\t\t\t\n\t\t\tDataSet vailSet = formatToYolo(vailData.getDataSet());\n\t\t\t\n\t\t\tSystem.out.println(\"load data finish.\");\n\t\t\t\n\t\t\tCNN netWork = new CNN(LossType.yolo3, UpdaterType.adamw);\n\t\t\t\n\t\t\tnetWork.CUDNN = true;\n\t\t\t\n\t\t\tnetWork.learnRate = 0.001f;\n\n\t\t\tModelLoader.loadConfigToModel(netWork, cfg_path);\n\t\t\t\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 1000, 0.001f, 64, LearnRateUpdate.CONSTANT, false);\n\n\t\t\tlong start = System.currentTimeMillis();\n\t\t\t\n\t\t\toptimizer.trainObjectRecognition(trainSet, vailSet);\n\t\t\t\n\n\t\t\t\u002F**\n\t\t\t * 处理测试预测结果\n\t\t\t *\u002F\n\t\t\tfloat[][][] draw_bbox = optimizer.showObjectRecognition(vailSet, 64);\n\t\t\t\n\t\t\tYoloDataLoader testData = new YoloDataLoader(testPath, testLabelPath, 1000, 3, 256, 256, 5, LabelType.csv, false);\n\t\t\t\n\t\t\tString outputPath = \"H:\\\\voc\\\\banana-detection\\\\test\\\\\";\n\t\t\t\n\t\t\tshowImg(outputPath, testData.getDataSet(), 1, draw_bbox, false);\n\t\t\t\n\t\t\tSystem.out.println(((System.currentTimeMillis() - start) \u002F 1000) + \"s.\");\n\t\t\t\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}finally {\n\t\t\ttry {\n\t\t\t\tCUDAMemoryManager.freeAll();\n\t\t\t} catch (Exception e) {\n\t\t\t\t\u002F\u002F TODO Auto-generated catch block\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\n\t\t\n\t}\n```\n\n#### YOLOv3 口罩检测演示（口罩佩戴识别）\n``` java\npublic void yolov3_tiny_mask() {\n\t\t\n\t\tint im_w = 416;\n\t\tint im_h = 416;\n\t\tint batchSize = 24;\n\t\tint class_num = 2;\n\t\tString[] labelset = new String[] {\"unmask\",\"mask\"};\n\t\ttry {\n\t\t\tString cfg_path = \"H:\\\\voc\\\\mask\\\\data\\\\\\\\dataset\\\\yolov3-tiny-mask.cfg\";\n\t\t\tString trainPath = 
\"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\train\";\n\t\t\tString trainLabelPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\train_label.txt\";\n\t\t\tString testPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\vail\";\n\t\t\tString testLabelPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\vail_label.txt\";\n\t\t\tString weightPath = \"H:\\\\voc\\\\yolo-weights\\\\yolov3-tiny.conv.15\";\n\t\t\t\u002F**\n\t\t\t * 数据加载器\n\t\t\t *\u002F\n\t\t\tDetectionDataLoader trainData = new DetectionDataLoader(trainPath, trainLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);\n\t\t\tDetectionDataLoader vailData = new DetectionDataLoader(testPath, testLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);\n                        \u002F**\n\t\t\t * 创建 yolo 模型\n\t\t\t *\u002F\n\t\t\tYolo netWork = new Yolo(LossType.yolo3, UpdaterType.adamw);\n\t\t\tnetWork.CUDNN = true;\n\t\t\tnetWork.learnRate = 0.001f;\n                        \u002F**\n\t\t\t * 加载模型结构\n\t\t\t *\u002F\n\t\t\tModelLoader.loadConfigToModel(netWork, cfg_path);\n                        \u002F**\n\t\t\t * 加载预训练权重\n\t\t\t *\u002F\n\t\t\tDarknetLoader.loadWeight(netWork, weightPath, 14, true);\n                        \u002F**\n\t\t\t * 创建优化器\n\t\t\t *\u002F\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 1000, 0.001f, batchSize, LearnRateUpdate.SMART_HALF, false);\n\t\t\toptimizer.trainObjectRecognitionOutputs(trainData, vailData);\n\t\t\t\u002F**\n\t\t\t * 处理测试预测结果\n\t\t\t *\u002F\n\t\t\tList\u003CYoloBox> draw_bbox = optimizer.showObjectRecognitionYoloV3(vailData, batchSize);\n\t\t\tString outputPath = \"H:\\\\voc\\\\mask\\\\data\\\\resized\\\\test_yolov3\\\\\";\n\t\t\tshowImg(outputPath, vailData, class_num, draw_bbox, batchSize, false, im_w, im_h, labelset);\n\n\t\t}catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}finally {\n\t\t\ttry {\n\t\t\t\tCUDAMemoryManager.freeAll();\n\t\t\t} catch 
(Exception e) {\n\t\t\t\t\u002F\u002F TODO Auto-generated catch block\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\t\n\t}\n```\n\n#### YOLOv3 头盔演示（安全帽佩戴识别）\n``` java\npublic void yolov3_tiny_helmet() {\n\t\t\n\t\tint im_w = 416;\n\t\tint im_h = 416;\n\t\tint batchSize = 24;\n\t\tint class_num = 5;\n\t\tString[] labelset = new String[] {\"none\",\"white\",\"yellow\",\"blue\",\"red\"};\n\t\ttry {\n\t\t\tString cfg_path = \"H:\\\\voc\\\\helmet_dataset\\\\yolov3-tiny-helmet.cfg\";\n\t\t\tString trainPath = \"H:\\\\voc\\\\helmet\\\\resized\\\\train\";\n\t\t\tString trainLabelPath = \"H:\\\\voc\\\\helmet\\\\resized\\\\train_label.txt\";\n\t\t\tString testPath = \"H:\\\\voc\\\\helmet\\\\resized\\\\vail\";\n\t\t\tString testLabelPath = \"H:\\\\voc\\\\helmet\\\\resized\\\\vail_label.txt\";\n\t\t\tString weightPath = \"H:\\\\voc\\\\yolo-weights\\\\yolov3-tiny.conv.15\";\n\t\t\t\u002F**\n\t\t\t * 数据加载器\n\t\t\t *\u002F\n\t\t\tDetectionDataLoader trainData = new DetectionDataLoader(trainPath, trainLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);\n\t\t\tDetectionDataLoader vailData = new DetectionDataLoader(testPath, testLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);\n                        \u002F**\n\t\t\t * 创建 yolo 模型\n\t\t\t *\u002F\n\t\t\tYolo netWork = new Yolo(LossType.yolo3, UpdaterType.adamw);\n\t\t\tnetWork.CUDNN = true;\n\t\t\tnetWork.learnRate = 0.001f;\n                        \u002F**\n\t\t\t * 加载模型结构\n\t\t\t *\u002F\n\t\t\tModelLoader.loadConfigToModel(netWork, cfg_path);\n                        \u002F**\n\t\t\t * 加载预训练权重\n\t\t\t *\u002F\n\t\t\tDarknetLoader.loadWeight(netWork, weightPath, 14, true);\n                        \u002F**\n\t\t\t * 创建优化器\n\t\t\t *\u002F\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 300, 0.001f, batchSize, LearnRateUpdate.SMART_HALF, false);\n\t\t\toptimizer.trainObjectRecognitionOutputs(trainData, vailData);\n\t\t\t\u002F**\n\t\t\t * 处理测试预测结果\n\t\t\t 
*\u002F\n\t\t\tList\u003CYoloBox> draw_bbox = optimizer.showObjectRecognitionYoloV3(vailData, batchSize);\n\t\t\tString outputPath = \"H:\\\\voc\\\\helmet\\\\test_yolov3\\\\\";\n\t\t\tshowImg(outputPath, vailData, class_num, draw_bbox, batchSize, false, im_w, im_h, labelset);\n\t\t\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}finally {\n\t\t\ttry {\n\t\t\t\tCUDAMemoryManager.freeAll();\n\t\t\t} catch (Exception e) {\n\t\t\t\t\u002F\u002F TODO Auto-generated catch block\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\n\t\t\t\n\t}\n```\n\n#### YOLOv7-SM 智能冰柜商品识别演示\n``` java\n    public void yolov7_tiny_sm() {\n\t\tint im_w = 416;\n\t\tint im_h = 416;\n\t\tint batchSize = 12;\n\t\tint class_num = 113;\n\t\tString[] labelset = new String[113];\n\t\ttry {\n\t\t\tString cfg_path = \"H:\\\\voc\\\\sm\\\\resized\\\\yolov7-tiny-sm.cfg\";\n\t\t\tString labelPath = \"H:\\\\voc\\\\\\\\sm\\\\VOC\\\\labels.txt\";\n\t\t\tString trainPath = \"H:\\\\voc\\\\sm\\\\resized\\\\train\";\n\t\t\tString trainLabelPath = \"H:\\\\voc\\\\sm\\\\resized\\\\train_label.txt\";\n\t\t\tString testPath = \"H:\\\\voc\\\\sm\\\\resized\\\\vail\";\n\t\t\tString testLabelPath = \"H:\\\\voc\\\\sm\\\\resized\\\\vail_label.txt\";\n\t\t\tString weightPath = \"H:\\\\voc\\\\darknet_yolov7\\\\yolov7-tiny.conv.87\";\n\t\t\ttry (FileInputStream fin = new FileInputStream(labelPath);\n\t\t\t\tInputStreamReader reader = new InputStreamReader(fin);\t\n\t\t\t    BufferedReader buffReader = new BufferedReader(reader);){\n\t\t\t\tString strTmp = \"\";\n\t\t\t\tint idx = 0;\n\t\t        while((strTmp = buffReader.readLine())!=null){\n\t\t        \tlabelset[idx] = strTmp;\n\t\t        \tidx++;\n\t\t        }\t\n\t\t\t} catch (Exception e) {\n\t\t\t\t\u002F\u002F TODO: handle exception\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t\tDetectionDataLoader trainData = new DetectionDataLoader(trainPath, trainLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, 
DataType.yolov3);\n\t\t\tDetectionDataLoader vailData = new DetectionDataLoader(testPath, testLabelPath, LabelFileType.txt, im_w, im_h, class_num, batchSize, DataType.yolov3);\n\t\t\tYolo netWork = new Yolo(LossType.yolov7, UpdaterType.adamw);\n\t\t\tnetWork.CUDNN = true;\n\t\t\tnetWork.learnRate = 0.001f;\n\t\t\tModelLoader.loadConfigToModel(netWork, cfg_path);\n\t\t\tDarknetLoader.loadWeight(netWork, weightPath, 86, true);\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 1000, 0.001f, batchSize, LearnRateUpdate.SMART_HALF, false);\n\t\t\toptimizer.trainObjectRecognitionOutputs(trainData, vailData);\n\t\t\t\u002F**\n\t\t\t * 处理测试预测结果\n\t\t\t *\u002F\n\t\t\tList\u003CYoloBox> draw_bbox = optimizer.showObjectRecognitionYoloV3(vailData, batchSize);\n\t\t\tString outputPath = \"H:\\\\voc\\\\sm\\\\test_yolov7\\\\\";\n\t\t\tshowImg(outputPath, vailData, class_num, draw_bbox, batchSize, false, im_w, im_h, labelset);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}finally {\n\t\t\ttry {\n\t\t\t\tCUDAMemoryManager.freeAll();\n\t\t\t} catch (Exception e) {\n\t\t\t\t\u002F\u002F TODO Auto-generated catch block\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\t\n\t}\n```\n\n#### GAN MNIST 手写数字生成演示\n``` java\npublic static void gan_anime() {\n\t\t\n\t\tint imgSize = 784;\n\t\tint ngf = 784; \u002F\u002F生成器 featrue map 数\n\t\tint nz = 100; \u002F\u002F噪声维度\n\t\tint batchSize = 2048;\n\t\t\n\t\tint d_every = 1;\n\t\tint g_every = 1;\n\t\t\n\t\tfloat[] mean = new float[] {0.5f};\n\t\tfloat[] std = new float[] {0.5f};\n\t\t\n\t\ttry {\n\t\t\t\n\t\t\tString mnist_train_data = \"\u002Fdataset\u002Fmnist\u002Ftrain-images.idx3-ubyte\";\n\t\t\t\n\t\t\tString mnist_train_label = \"\u002Fdataset\u002Fmnist\u002Ftrain-labels.idx1-ubyte\";\n\t\t\t\n\t\t\tString[] labelSet = new String[] {\"0\",\"1\",\"2\",\"3\",\"4\",\"5\",\"6\",\"7\",\"8\",\"9\"};\n\t\t\t\n\t\t\tResource trainDataRes = new 
ClassPathResource(mnist_train_data);\n\n\t\t\tResource trainLabelRes = new ClassPathResource(mnist_train_label);\n\t\t\t\n\t\t\tDataSet trainData = DataLoader.loadDataByUByte(trainDataRes.getFile(), trainLabelRes.getFile(), labelSet, 1, 1 , 784, true, mean, std);\n\t\t\t\n\t\t\tBPNetwork netG = NetG(ngf, nz);\n\t\t\t\n\t\t\tBPNetwork netD = NetD(imgSize);\n\t\t\t\n\t\t\tGANOptimizer optimizer = new GANOptimizer(netG, netD, batchSize, 3500, d_every, g_every, 0.001f, LearnRateUpdate.CONSTANT, false);\n\t\t\t\n\t\t\toptimizer.train(trainData);\n\t\t\t\n\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\n\t}\n```\n\n#### DCGAN Anime 动漫头像生成演示\n``` java\n\tpublic static void dcgan_anime() {\n\t\t\n\t\tint imw = 64;\n\t\tint imh = 64;\n\t\tint ngf = 64; \u002F\u002F生成器 featrue map 数\n\t\tint ndf = 64; \u002F\u002F判别器 feature map 数\n\t\tint nz = 100; \u002F\u002F噪声维度\n\t\tint batchSize = 64;\n\t\t\n\t\tint d_every = 1;\n\t\tint g_every = 5;\n\t\t\n\t\tfloat[] mean = new float[] {0.5f,0.5f,0.5f};\n\t\tfloat[] std = new float[] {0.5f,0.5f,0.5f};\n\t\t\n\t\ttry {\n\t\t\t\n\t\t\tString imgDirPath = \"H:\\\\voc\\\\gan_anime\\\\ml2021spring-hw6\\\\faces\\\\\";\n\t\t\t\n\t\t\tCNN netG = NetG(ngf, nz);\n\t\t\t\n\t\t\tCNN netD = NetD(ndf, imw, imh);\n\t\t\t\n\t\t\tImageDataLoader dataLoader = new ImageDataLoader(imgDirPath, imw, imh, batchSize, true, mean, std);\n\t\t\t\n\t\t\tGANOptimizer optimizer = new GANOptimizer(netG, netD, batchSize, 2000, d_every, g_every, 0.001f, LearnRateUpdate.POLY, false);\n\t\t\t\n\t\t\toptimizer.train(dataLoader);\n\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\n\t}\n```\n\n#### RNN 中文小说生成器\n```java\n    public void charRNN() {\n\t\ttry {\n\t\t\tint time = 256;\n\t\t\tint batchSize = 64;\n\t\t\tint embedding_dim = 256;\n\t\t\tint hiddenSize = 512;\n\n\t\t\tString trainPath = 
\"H:\\\\rnn_dataset\\\\dpcc.txt\";\n\t\t\tOneHotDataLoader trainData = new OneHotDataLoader(trainPath, time, batchSize);\n\t\t\t\n\t\t\tRNN netWork = new RNN(LossType.softmax_with_cross_entropy, UpdaterType.adamw, time);\n```\n\n```java\nInputLayer inputLayer = new InputLayer(1, 1, trainData.characters);\n\t\t\tEmbeddingLayer em = new EmbeddingLayer(trainData.characters, embedding_dim);\n\t\t\tRNNLayer l1 = new RNNLayer(embedding_dim, hiddenSize, time, ActiveType.tanh, false, netWork);\n\t\t\tRNNLayer l2 = new RNNLayer(hiddenSize, hiddenSize, time, ActiveType.tanh, false, netWork);\n\t\t\tRNNLayer l3 = new RNNLayer(hiddenSize, hiddenSize, time, ActiveType.tanh, false, netWork);\n\t\t\tFullyLayer f1 = new FullyLayer(hiddenSize, hiddenSize, false);\n\t\t\tBNLayer bn = new BNLayer();\n\t\t\tLeakyReluLayer a1 = new LeakyReluLayer();\n\t\t\tFullyLayer f2 = new FullyLayer(hiddenSize, trainData.characters, true);\n\t\t\tnetWork.addLayer(inputLayer);\n\t\t\tnetWork.addLayer(em);\n\t\t\tnetWork.addLayer(l1);\n\t\t\tnetWork.addLayer(l2);\n\t\t\tnetWork.addLayer(l3);\n\t\t\tnetWork.addLayer(f1);\n\t\t\tnetWork.addLayer(bn);\n\t\t\tnetWork.addLayer(a1);\n\t\t\tnetWork.addLayer(f2);\n\t\t\t\n\t\t\tnetWork.CUDNN = true;\n\t\t\tnetWork.learnRate = 0.01f;\n\t\t\t\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 2, 0.001f, batchSize, LearnRateUpdate.POLY, false);\n\t\t\toptimizer.trainRNN(trainData);\n\t\t\t\n\t\t\tint gen_len = 1000;\n\t\t\tint max_len = 256;\n\t\t\tString pre_txt = \"这个故事所造成的后果，便是造就了大批每天\";\n\t\t\tTensor input = null;\n\t\t\tTensor output = null;\n\t\t\tinput = createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);\n\t\t\tnetWork.RUN_MODEL = RunModel.TEST;\n\t\t\tfor(int i = 0;i\u003Cgen_len;i++) {\n\t\t\t\tnetWork.time = input.number;\n\t\t\t\tString txt = genTxt(input, output, netWork, trainData, max_len);\n\t\t\t\tif(netWork.time > 1) {\n\t\t\t\t\tpre_txt += txt.substring(input.number - 1, input.number);\n\t\t\t\t}else 
{\n\t\t\t\t\tpre_txt += txt;\n\t\t\t\t}\n\t\t\t\tinput = createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);\n\t\t\t}\n\t\t\tSystem.out.println(pre_txt);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n```\n\n#### SEQ2SEQ 英文翻译器\n```java\n    public void seq2seq() {\n\t\ttry {\n\t\t\tint batchSize = 128;\n\t\t\tint en_em = 64;\n\t\t\tint de_em = 128;\n\t\t\tint en_hidden = 256;\n\t\t\tint de_hidden = 256;\n\t\t\t\n\t\t\tString trainPath = \"H:\\\\rnn_dataset\\\\translate1000.csv\";\n\t\t\tIndexDataLoader trainData = new IndexDataLoader(trainPath, batchSize);\n\t\t\t\n\t\t\tSeq2Seq network = new Seq2Seq(LossType.softmax_with_cross_entropy, UpdaterType.adamw,\n\t\t\t\t\ttrainData.max_en, trainData.max_ch - 1, en_em, en_hidden, trainData.en_characters, de_em, de_hidden, trainData.ch_characters);\n\t\t\tnetwork.CUDNN = true;\n\t\t\tnetwork.learnRate = 0.01f;\n\t\t\t\n\t\t\tEDOptimizer optimizer = new EDOptimizer(network, batchSize, 100, 0.001f, LearnRateUpdate.SMART_HALF, false);\n\t\t\toptimizer.lr_step = new int[] {100,200};\n\t\t\toptimizer.trainRNN(trainData);\n\n\t\t\tScanner scanner = new Scanner(System.in);\n\t\t\twhile (true) {\n\t\t\t\t\n\t\t\t\tSystem.out.println(\"请输入英文:\");\n\t\t\t\tString input_txt = scanner.nextLine();\n\t\t\t\tif(input_txt.equals(\"exit\")){\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tinput_txt = input_txt.toLowerCase();\n\t\t\t\tSystem.out.println(input_txt);\n\t\t\t\toptimizer.predict(trainData, input_txt);\t\n\t\t\t}\n\t\t\tscanner.close();\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n```\n\n#### gpt-中文小说生成器\n```java\n    public static void gpt_dp() {\n\t\ttry {\n\t\t\tboolean bias = false;\n\t\t\tboolean dropout = true;\n\t\t\tint batchSize = 32;\n\t\t\tint max_len = 64;\n\t\t\tint embedDim = 512;\n\t\t\tint headNum = 8;\n\t\t\tint decoderNum = 6;\n\t\t\tString trainPath = 
\"H:\\\\transformer_dataset\\\\gpt\\\\dpcc50.txt\";\n\t\t\tCNTokenizer trainData = new CNTokenizer(trainPath, max_len, batchSize);\n\t\t\tNanoGPT network = new NanoGPT(LossType.softmax_with_cross_entropy, UpdaterType.adamw, headNum, decoderNum, trainData.characters, max_len, embedDim, bias, dropout);\n\t\t\tnetwork.learnRate = 0.001f;\n\t\t\tEDOptimizer optimizer = new EDOptimizer(network, batchSize, 3, 0.001f, LearnRateUpdate.GD_GECAY, false);\n\t\t\toptimizer.trainNanoGPT_GEN(trainData);\n\t\t\tint gen_len = 1000;\n\t\t\tnetwork.RUN_MODEL = RunModel.TEST;\n\t\t\tTensor input = null;\n\t\t\tTensor output = null;\n\t\t\tString pre_txt = \"萧炎\";\n\t\t\tTensor positions = CNChatTokenizer.getPositions(1, pre_txt.length());\n\t\t\tTensor mask = CNChatTokenizer.triu(1, network.headNum, pre_txt.length(), pre_txt.length(), 1);\n\t\t\tinput = createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);\n\t\t\tfor(int i = 0;i\u003Cgen_len;i++) {\n\t\t\t\tnetwork.time = input.number;\n\t\t\t\tString txt = genTxt(input, output, network, trainData, pre_txt.length(), mask, positions);\n\t\t\t\tif(network.time > 1) {\n\t\t\t\t\tpre_txt += txt.substring(input.number - 1, input.number);\n\t\t\t\t}else {\n\t\t\t\t\tpre_txt += txt;\n\t\t\t\t}\n\t\t\t\tinput = createTxtData(input, pre_txt, trainData.characters, trainData.dictionary, max_len);\n\t\t\t}\n\t\t\tSystem.out.println(pre_txt);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n```\n\n#### gpt-中文聊天机器人\n```java\n    public static void ch_chat_gpt2() {\n\t\ttry {\n\t\t\tboolean bias = false;\n\t\t\tboolean dropout = true;\n\t\t\tint batchSize = 32;\n\t\t\tint max_len = 128;\n\t\t\tint embedDim = 768;\n\t\t\tint head_num = 12;\n\t\t\tint decoderNum = 12;\n\t\t\tString trainPath = \"H:\\\\transformer_dataset\\\\gpt\\\\chatdata\\\\train-format20w.txt\";\n\t\t\tCNChatTokenizer trainData = new CNChatTokenizer(trainPath, max_len, 
batchSize);\n\t\t\tNanoGPT network = new NanoGPT(LossType.softmax_with_cross_entropy, UpdaterType.adamw, head_num, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, false);\n\t\t\tnetwork.learnRate = 0.0001f;\n\t\t\tEDOptimizer optimizer = new EDOptimizer(network, batchSize, 3, 0.0001f, LearnRateUpdate.SMART_HALF, false);\n\t\t\toptimizer.lr_step = new int[] {1, 2};\n\t\t\toptimizer.trainNanoGPT(trainData);\n\t\t\tScanner scanner = new Scanner(System.in);\n\t\t\tString context = \"\";\n\t\t\twhile (true) {\n\t\t\t\tSystem.out.println(\"请输入中文:\");\n\t\t\t\tString input_txt = scanner.nextLine();\n\t\t\t\tif(input_txt.equals(\"clean\")){\n\t\t\t\t\tcontext = \"\";\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif(input_txt.equals(\"exit\")){\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tinput_txt = input_txt.toLowerCase() + \" \";\n\t\t\t\tSystem.out.println(\"user:\"+input_txt);\n\t\t\t\tinput_txt = context + input_txt;\n\t\t\t\tTensor input = trainData.loadByTxtToIdx(input_txt);\n\t\t\t\tTensor positions = CNChatTokenizer.getPositions(1, input.number);\n\t\t\t\tfor(int t = 0;t\u003Cmax_len;t++) {\n\t\t\t\t\tnetwork.time = input.number;\n\t\t\t\t\tTensor output = network.forward(input, positions);\n\t\t\t\t\toutput.syncHost();\n\t\t\t\t\tString txts = output2TXT(output, trainData, true);\n\t\t\t\t\tString nextWord = txts.substring(txts.length() - 1, input_txt.length());\n\t\t\t\t\tif(trainData.sd.get(nextWord)!=null && (trainData.sd.get(nextWord).equals(\"\u003Csep>\") || trainData.sd.get(nextWord).equals(\"\u003Ceos>\"))) {\n\t\t\t\t\t\tinput_txt += nextWord;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}else {\n\t\t\t\t\t\tinput_txt += nextWord;\n\t\t\t\t\t}\n\t\t\t\t\tinput = trainData.loadByTxtToIdx(input_txt);\n\t\t\t\t\tCNChatTokenizer.getPositions(1, input.number, positions);\n\t\t\t\t}\n\t\t\t\tString[] chatList = input_txt.split(\" \");\n\t\t\t\tString current = chatList[chatList.length - 1];\n\t\t\t\tSystem.out.println(\"chatbot:\"+current);\n\t\t\t\tcontext += 
input_txt + current;\n\t\t\t}\n\t\t\tscanner.close();\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n    }\n```\n\n#### gpt-医疗问答系统\n```java\n    public static void gpt2_yl_qa() {\n\t\ttry {\n\t\t\tboolean bias = false;\n\t\t\tboolean dropout = true;\n\t\t\tint batchSize = 16;\n\t\t\tint max_len = 256;\n\t\t\tint embedDim = 1024;\n\t\t\tint head_num = 16;\n\t\t\tint decoderNum = 24;\n\t\t\tString trainPath = \"H:\\\\transformer_dataset\\\\gpt\\\\cMedQA2\\\\qaData.txt\";\n\t\t\tCNChatTokenizer trainData = new CNChatTokenizer(trainPath, max_len, batchSize);\n\t\t\tNanoGPT network = new NanoGPT(LossType.softmax_with_cross_entropy, UpdaterType.adamw, head_num, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, false);\n\t\t\tnetwork.learnRate = 0.001f;\n\t\t\tEDOptimizer optimizer = new EDOptimizer(network, batchSize, 5, 0.0001f, LearnRateUpdate.SMART_HALF, false);\n\t\t\toptimizer.lr_step = new int[] {1, 2};\n\t\t\toptimizer.trainNanoGPT(trainData);\n\t\t\tnetwork.RUN_MODEL = RunModel.TEST;\n\t\t\tScanner scanner = new Scanner(System.in);\n\t\t\twhile (true) {\n\t\t\t\tSystem.out.println(\"请输入中文:\");\n\t\t\t\tString input_txt = scanner.nextLine();\n\t\t\t\tif(input_txt.equals(\"exit\")){\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tinput_txt = input_txt.toLowerCase() + \" \";\n\t\t\t\tSystem.out.println(\"user:\"+input_txt);\n\t\t\t\tTensor input = trainData.loadByTxtToIdx(input_txt);\n\t\t\t\tTensor positions = CNChatTokenizer.getPositions(1, input.number);\n\t\t\t\tfor(int t = 0;t\u003Cmax_len;t++) {\n\t\t\t\t\tnetwork.time = input.number;\n\t\t\t\t\tTensor output = network.forward(input, positions);\n\t\t\t\t\toutput.syncHost();\n\t\t\t\t\tString txts = output2TXT(output, trainData, true);\n\t\t\t\t\tString nextWord = txts.substring(txts.length() - 1, input_txt.length());\n\t\t\t\t\tif(trainData.sd.get(nextWord)!=null && (trainData.sd.get(nextWord).equals(\"\u003Csep>\") || 
trainData.sd.get(nextWord).equals(\"\u003Ceos>\"))) {\n\t\t\t\t\t\tinput_txt += trainData.sd.get(nextWord);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}else {\n\t\t\t\t\t\tinput_txt += nextWord;\n\t\t\t\t\t}\n\t\t\t\t\tinput = trainData.loadByTxtToIdx(input_txt);\n\t\t\t\t\tCNChatTokenizer.getPositions(1, input.number, positions);\n\t\t\t\t}\n\t\t\t\tSystem.out.println(\"chatbot:\"+input_txt.split(\" \")[1]);\n\t\t\t}\n\t\t\tscanner.close();\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n```\n\n#### llama2-医疗问答系统\n```java\n    public static void llama2_chinese_chatglm_vocab() {\n\t\ttry {\n\t\t\tboolean bias = false;\n\t\t\tboolean dropout = false;\n\t\t\tboolean flashAttention = false;\n\t\t\tint batchSize = 8;\t\t\t\n\t\t\tint max_len = 512;\n\t\t\tint embedDim = 512;\n\t\t\tint head_num = 8;\n\t\t\tint decoderNum = 8;\n\t\t\tString trainPath = \"H:\\\\transformer_dataset\\\\wbm_idx_chatglm_vocab.txt\";\n\t\t\tString tokenizer_path = \"H:\\\\transformer_dataset\\\\tokenizer.model\";\n\t\t\tSentencePieceTokenizer tokenizer = new SentencePieceTokenizer(tokenizer_path, 64793);\n\t\t\tCNWikiTokenizer4 trainData = new CNWikiTokenizer4(trainPath, max_len, batchSize, 6250865, tokenizer);\n\t\t\tLlama2 network = new Llama2(LossType.softmax_with_cross_entropy_idx, UpdaterType.adamw, head_num, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, flashAttention);\n\t\t\tnetwork.learnRate = 3e-4f;\n\t\t\tEDOptimizer optimizer = new EDOptimizer(network, batchSize, 1, 0.0001f, LearnRateUpdate.COSINE, false);\n\t\t\toptimizer.lr_step = new int[] {1, 2};\n\t\t\toptimizer.lr = 3e-4f;\n\t\t\toptimizer.min_lr = 1e-5f;\n\t\t\toptimizer.setWarmUp(true);\n\t\t\toptimizer.warmUpTime = 1000;\n\t\t\toptimizer.lrDecayIters = (int) (trainData.count_it * 0.96);\n\t\t\toptimizer.trainLlama2_chinese(trainData);\n\t\t\tString model_path = \"H:\\\\model\\\\llama2-92m-chinese.model\";\n\t\t\tModelUtils.saveModel(network, 
model_path);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n```\n\n#### llama3.1-对话机器人\n```java\n        public static void llama3_monkey() {\n\t\ttry {\n\t\t\tboolean bias = false;\n\t\t\tboolean dropout = false;\n\t\t\tboolean flashAttention = false;\n\t\t\tint batchSize = 2;\n\t\t\tint max_len = 512;\n\t\t\tint embedDim = 512;\n\t\t\tint head_num = 16;\n\t\t\tint nKVHeadNum = 8;\n\t\t\tint decoderNum = 8;\n\t\t\t\n\t\t\tString trainPath = \"H:\\\\model\\\\pretrain_data_6400.bin\";\n\t\t\tString vocabPath = \"H:\\\\transformer_dataset\\\\6400\\\\vocab.json\";\n\t\t\tString mergesPath = \"H:\\\\transformer_dataset\\\\6400\\\\merges.txt\";\n\t\t\t\n\t\t\tBPETokenizer3 tokenizer = new BPETokenizer3(vocabPath, mergesPath);\n\t\t\tCNBpeTokenizer trainData = new CNBpeTokenizer(trainPath, max_len, batchSize, tokenizer, BinDataType.unint16);\n\t\t\tLlama3 network = new Llama3(LossType.softmax_with_cross_entropy_idx, UpdaterType.adamw, head_num, nKVHeadNum, decoderNum, trainData.vocab_size, max_len, embedDim, bias, dropout, flashAttention);\n\t\t\tnetwork.learnRate = 1e-4f;\n\t\t\tnetwork.CLIP_GRAD_NORM = true;\n\t\t\tinitWeight(network, decoderNum);\n\t\t\tEDOptimizer optimizer = new EDOptimizer(network, batchSize, 2, 0.0001f, LearnRateUpdate.CONSTANT, false);\n\t\t\toptimizer.trainLlama3_chinese(trainData, 8, true);\n\t\t\tString save_model_path = \"H:\\\\model\\\\llama3-26m-chinese.model\";\n\t\t\tModelUtils.saveModel(network, save_model_path);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n```\n\n#### diffusion-动漫头像生成\n```java\npublic static void duffsion_anime() {\n\t\ttry {\n\t\t\tboolean bias = false;\n\t\t\tint batchSize = 8;\n\t\t\tint imw = 96;\n\t\t\tint imh = 96;\n\t\t\tint mChannel = 64;\n\t\t\tint resBlockNum = 2;\n\t\t\tint T = 1000;\n\t\t\tint[] channelMult = new int[] {1, 2, 3, 4};\n\t\t\tString imgDirPath = 
\"H:\\\\voc\\\\gan_anime\\\\ml2021spring-hw6\\\\faces\\\\\";\n\t\t\tDiffusionImageDataLoader dataLoader = new DiffusionImageDataLoader(imgDirPath, imw, imh, batchSize, false);\n\t\t\tDiffusionUNet network = new DiffusionUNet(LossType.MSE, UpdaterType.adamw, T, 3, mChannel, channelMult, resBlockNum, imw, imh, bias);\n\t\t\tnetwork.CUDNN = true;\n\t\t\tnetwork.learnRate = 0.0002f;\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(network, 50, 0.00001f, batchSize, LearnRateUpdate.GD_GECAY, false);\n\t\t\toptimizer.trainGaussianDiffusion(dataLoader);\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n``` \n\n#### VQVAE\n```java\npublic static void anime_vqvae2_lpips_gandisc_32_nogan2() {\n\n\t\ttry {\n\t\t\tint batchSize = 4;\n\t\t\tint imageSize = 256;\n\t\t\tint z_dims = 128;\n\t\t\tint latendDim = 4;\n\t\t\tint num_vq_embeddings = 512;\n\t\t\tint num_res_blocks = 2;\n\t\t\tint[] ch_mult = new int[] {1, 2, 2, 4};\n\t\t\tint ch = 128;\n\t\t\t\n\t\t\tfloat[] mean = new float[] {0.5f, 0.5f, 0.5f};\n\t\t\tfloat[] std = new float[] {0.5f, 0.5f, 0.5f};\n\t\t\tString imgDirPath = \"I:\\\\dataset\\\\sd-anime\\\\anime_op\\\\256\\\\\";\n\t\t\tDiffusionImageDataLoader dataLoader = new DiffusionImageDataLoader(imgDirPath, imageSize, imageSize, batchSize, true, false, mean, std);\n\t\t\t\n\t\t\tVQVAE2 network = new VQVAE2(LossType.MSE, UpdaterType.adamw, z_dims, latendDim, num_vq_embeddings, imageSize, ch_mult, ch, num_res_blocks);\n\t\t\tnetwork.CUDNN = true;\n\t\t\tnetwork.learnRate = 0.001f;\n\t\t\t\n\t\t\tLPIPS lpips = new LPIPS(LossType.MSE, UpdaterType.adamw, imageSize);\n\t\t\tString lpipsWeight = \"H:\\\\model\\\\lpips.json\";\n\t\t\tLPIPSTest.loadLPIPSWeight(LagJsonReader.readJsonFileSmallWeight(lpipsWeight), lpips, false);\n\t\t\tlpips.CUDNN = true;\n\t\t\t\n\t\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(network, 200, 0.00001f, batchSize, LearnRateUpdate.CONSTANT, 
false);\n\t\t\toptimizer.trainVQVAE2_lpips_nogan(dataLoader, lpips);\n\n\t\t\tString save_model_path = \"\u002Fomega\u002Fmodels\u002Fanime_vqvae2_256.model\";\n\t\t\tModelUtils.saveModel(network, save_model_path);\n\n\t\t} catch (Exception e) {\n\t\t\t\u002F\u002F TODO: handle exception\n\t\t\te.printStackTrace();\n\t\t}\n\t\n\t}\n```\n\n#### StableDiffusion 文生图\n```java\npublic static void tiny_sd_train_anime_32() throws Exception {\n\t\tString labelPath = \"I:\\\\dataset\\\\sd-anime\\\\anime_op\\\\data.json\";\n\t\tString imgDirPath = \"I:\\\\dataset\\\\sd-anime\\\\anime_op\\\\256\\\\\";\n\t\tboolean horizontalFilp = true;\n\t\tint imgSize = 256;\n\t\tint maxContextLen = 77;\n\t\tint batchSize = 8;\n\n\t\tfloat[] mean = new float[] {0.5f, 0.5f,0.5f};\n\t\tfloat[] std = new float[] {0.5f, 0.5f,0.5f};\n\t\t\n\t\tString vocabPath = \"H:\\\\model\\\\bpe_tokenizer\\\\vocab.json\";\n\t\tString mergesPath = \"H:\\\\model\\\\bpe_tokenizer\\\\merges.txt\";\n\t\tBPETokenizerEN bpe = new BPETokenizerEN(vocabPath, mergesPath, 49406, 49407);\n\t\tSDImageDataLoaderEN dataLoader = new SDImageDataLoaderEN(bpe, labelPath, imgDirPath, imgSize, imgSize, maxContextLen, batchSize, horizontalFilp, mean, std);\n\t\t\n\t\tint time = maxContextLen;\n\t\tint maxPositionEmbeddingsSize = 77;\n\t\tint vocabSize = 49408;\n\t\tint headNum = 8;\n\t\tint n_layers = 12;\n\t\tint textEmbedDim = 512;\n\t\tClipTextModel clip = new ClipTextModel(LossType.MSE, UpdaterType.adamw, headNum, time, vocabSize, textEmbedDim, maxPositionEmbeddingsSize, n_layers);\n\t\tclip.CUDNN = true;\n\t\tclip.time = time;\n\t\tclip.RUN_MODEL = RunModel.EVAL;\n\t\tString clipWeight = \"H:\\\\model\\\\clip-vit-base-patch32.json\";\n\t\tClipModelUtils.loadWeight(LagJsonReader.readJsonFileSmallWeight(clipWeight), clip, true);\n\t\t\n\t\tint z_dims = 128;\n\t\tint latendDim = 4;\n\t\tint num_vq_embeddings = 512;\n\t\tint num_res_blocks = 2;\n\t\tint[] ch_mult = new int[] {1, 2, 2, 4};\n\t\tint ch = 
128;\n\t\tVQVAE2 vae = new VQVAE2(LossType.MSE, UpdaterType.adamw, z_dims, latendDim, num_vq_embeddings, imgSize, ch_mult, ch, num_res_blocks);\n\t\tvae.CUDNN = true;\n\t\tvae.learnRate = 0.001f;\n\t\tvae.RUN_MODEL = RunModel.EVAL;\n\t\tString vaeModel = \"anime_vqvae2_256.model\";\n\t\tModelUtils.loadModel(vae, vaeModel);\n\t\t\n\t\tint unetHeadNum = 8;\n\t\tint[] downChannels = new int[] {128, 256, 512, 768};\n\t\tint numLayer = 2;\n\t\tint timeSteps = 1000;\n\t\tint tEmbDim = 512;\n\t\tint latendSize = 32;\n\t\tint groupNum = 32;\n\t\tDiffusionUNetCond2 unet = new DiffusionUNetCond2(LossType.MSE, UpdaterType.adamw, latendDim, latendSize, latendSize, downChannels, unetHeadNum, numLayer, timeSteps, tEmbDim, maxContextLen, textEmbedDim, groupNum);\n\t\tunet.CUDNN = true;\n\t\tunet.learnRate = 0.0001f;\n\t\t\n\t\tMBSGDOptimizer optimizer = new MBSGDOptimizer(unet, 500, 0.00001f, batchSize, LearnRateUpdate.CONSTANT, false);\n\t\toptimizer.trainTinySD_Anime(dataLoader, vae, clip);\n\t\t\n\t\tString save_model_path = \"\u002Fomega\u002Fmodels\u002Fsd_anime256.model\";\n\t\tModelUtils.saveModel(unet, save_model_path);\n\t}\n```\n\n\n\n\n## 版本依赖包\n```xml\n\u003C!-- windows cuda 11.7 -->\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu11.7-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n\u003C!-- windows cuda 11.8 -->\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu11.8-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n\u003C!-- windows cuda 12.x -->\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu12.x-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n\n## 未来可期\n\n实现 
LLaMA2 (large language models), UNet (the convolutional backbone architecture), Diffusion Models, and more\n\n### Training Visualization\n\nSupport for dynamic hyperparameter tuning and visualized training\n\n## Easter Egg\n\n### An AI Racing Game Built with Neural Networks + Genetic Algorithms\n\nhttp:\u002F\u002F119.3.123.193:8011\u002FAICar\n\n## Changelog\n### omega-engine-v3\n#### 2022-06-20\n1. Added GPU support: jcuda is used to call CUDA's cublasSgemm matrix multiplication and, following Caffe's convolution implementation, convolution has been optimized into im2col+gemm, greatly improving computational efficiency\n\n2. Added a VGG16 demo, which reaches 86.45% test accuracy on the CIFAR-10 dataset\n\n3. Used the JDK ForkJoin framework to split tasks across CPU threads, speeding up array operations and computation\n\n4. Upgraded the learning-rate schedule with reference to Darknet; RANDOM, POLY, STEP, EXP, SIG, and other schedules are supported, along with learning-rate warmup\n\n5. Added a basicblock module and ResNet support; on CIFAR-10 the model reaches 91.23% test accuracy at epoch 300\n\n### omega-engine-v3-gpu\n#### 2022-07-02\n1. Started development of omega-engine-v3-gpu, which will bring full GPU support to omega-engine\n\n2. Fully optimized convolution-layer computation, covering both forward and backward propagation\n\n#### 2022-08-17\n1. Completed the initial GPU port of the convolution layer, speeding up CNN computation overall, and added the two classic im2col and col2im kernels (Im2colKernel.cu, Col2imKernel.cu)\n\n2. Added a CUDA memory manager that owns the lifecycle of device memory, reducing repeated allocations and host-device transfers\n\n#### 2022-09-02\n1. Revised the BN layer's dmean formula to reduce computation\n\n2. Changed the data storage layout for GPU computation, avoiding conversions between 4-D and 1-D arrays and yielding a severalfold efficiency gain\n\n3. Fully optimized GPU computation and updated the CUDA kernels, greatly improving training and inference speed\n\n4. Future versions will move the whole computation onto the GPU to further reduce host-device transfers and gain additional speed\n\n### omega-engine-v4-gpu\n\n#### 2023-01-10\n1. Started development of omega-engine-v4-gpu, which will bring full cuDNN support to omega-engine\n\n2. Added a global average pooling layer\n\n3. Combined softmax and cross_entropy into a single softmax_with_cross_entropy loss (note: with softmax_with_cross_entropy there is no need for a separate SoftmaxLayer)\n\n4. Added cuDNN support for the BN layer (see BNCudnnKernel.java)\n\n5. Future versions will gradually extend cuDNN support across the engine\n\n#### 2023-04-13\n1. Added cuDNN support to omega-engine-v4-gpu, improving overall training and inference efficiency by about 4x\n\n2. Optimized memory usage of the BN and activation layers, reducing overall RAM and VRAM usage by 30%~40%\n\n3. Added YOLO object detection, currently the YOLOv1 version (see YoloV1Test.java)\n\n4. Added an image drawing utility for rendering predicted boxes and displaying images\n\n5. Future versions will gradually add YOLOv3, YOLOv5, and other models\n\n#### 2023-08-02\n1. Added automatic differentiation (CPU and GPU versions)\n\n2. Added the multiLabel_soft_margin loss and a YOLO loss (Yolov3Loss)\n\n3. Added YOLOv3 object detection (see YoloV3Test.java)\n\n4. Added data augmentation for object detection (random edge cropping, random vertical flips, HSV transforms, etc.)\n\n5. Re-implemented the MSE loss with the autograd feature, replacing the original MSE loss\n\n6. Future versions will gradually add YOLOv5, GAN, Transformer, and other models\n\n#### 2023-12-01\n1. Added a YOLOv4 implementation; see yolov4-tiny.cfg for the exact structure\n\n2. Added a YOLOv7 implementation with the YOLOv7 loss; see readme.md for the theoretical background\n\n3. Added a smart-freezer product recognition demo based on yolov7-tiny\n\n4. Implemented the SiLU activation function\n\n5. Changed yoloLayer: following YOLOv4, the box scaling formula is changed from exp(xy)+b to sigmoid(xy) * scale - 0.5 * (scale - 1), which mitigates the numerical instability and infinity\u002FNaN issues caused by exp()\n\n6. Added GANs; see the com.omega.gan package for handwritten-digit and anime-avatar generation examples\n\n7. Added RNN models via a new RNNBlockLayer, which implements the RNN, LSTM, and GRU building blocks\n\n8. Future versions will gradually add CycleGAN style transfer, LSTM, GRU, Transformer, and other models\n\n#### 2024-05-20\n1. Added an LSTM recurrent model (novel-generator demo)\n\n2. Added a seq2seq model (Chinese-English translation demo)\n\n3. Added GPT support from the Transformer family: multi-head self-attention (FastCausalSelfAttentionLayer, MultiHeadAttentionLayer), an MLP layer (MLPLayer), an EmbeddingIDLayer (takes token ids as input), Layer Normalization, and other Transformer building blocks\n\n4. Added a nano GPT-2 language model (Shakespeare playscript generation demo)\n\n5. Added a GPT-2 language model (Chinese chatbot demo)\n\n6. Added a GPT-2 language model (Chinese medical Q&A demo)\n\n7. Added a BPE (byte pair encoding) tokenizer\n\n## Contact\n\n### QQ: 465973119\n### Tech exchange QQ group: 119593195\n### Email: 465973119@qq.com","# Omega-AI Quick Start Guide\n\nOmega-AI is a deep learning framework built in Java. It helps developers quickly assemble neural networks, train and test models, and supports multi-GPU acceleration (CUDA\u002FCUDNN). This guide walks you through environment setup and a first run.\n\n## Prerequisites\n\nBefore starting, make sure your development environment meets the following requirements:\n\n- **JDK**: JDK 8 or later is recommended.\n- **GPU driver**: Install the NVIDIA graphics driver.\n- **CUDA Toolkit**: Install the CUDA version matching your chosen engine build.\n- **CUDNN**: Install the cuDNN build matching your CUDA version.\n- **Memory settings**: For large models (such as VGG16), raise the JVM heap size.\n  ```bash\n  -Xmx20480m -Xms20480m -Xmn10240m\n  ```\n\n### Matching Dependency Versions\nBecause `omega-engine-v4-gpu` bundles jcuda, **the CUDA version must exactly match the jcuda version**. For example, if the machine has CUDA 11.7.x installed, use the jcuda 11.7.0 package.\n\n## Installation\n\n### 1. Check the CUDA version\nConfirm which CUDA version is installed on your system:\n```bash\nnvcc --version\n```\n\n### 2. 
Download CUDA and cuDNN\nDownload the matching toolkits from the NVIDIA site:\n[https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-toolkit-archive](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-toolkit-archive)\n\n### 3. Add the project dependency\nAdd the `omega-engine-v4-gpu` dependency to your Maven project. **Developers in mainland China may prefer the Gitee or GitCode mirrors** for the source code.\n\n```xml\n\u003Cdependency>\n    \u003CgroupId>io.gitee.iangellove\u003C\u002FgroupId>\n    \u003CartifactId>omega-engine-v4-gpu\u003C\u002FartifactId>\n    \u003Cversion>win-cu11.7-v1.0-beta\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n*(Replace the version with the latest stable build matching your CUDA version.)*\n\n### 4. Source repositories\n- [Gitee](https:\u002F\u002Fgitee.com\u002Fdromara\u002Fomega-ai)\n- [GitHub](https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI)\n- [GitCode](https:\u002F\u002Fgitcode.com\u002Fdromara\u002Fomega-ai)\n\n## Basic Usage\n\nAfter initializing the GPU environment, you can call the framework to build and run models. Below is the most basic CNN network test:\n\n```java\npublic static void main(String[] args) throws Exception {\n    try {\n        \u002F\u002F Initialize the GPU environment and obtain the Context object\n        CUDAModules.initContext();\n\n        \u002F\u002F Run the network test\n        CNNTest cnn = new CNNTest();\n        cnn.cnnNetwork_cifar10();\n    } finally {\n        \u002F\u002F Release all GPU memory\n        CUDAMemoryManager.free();\n    }\n}\n```\n\n### Going Further\nThe framework supports many mainstream architectures, including but not limited to:\n- **Vision**: CNN, VGG16, ResNet, YOLO (object detection)\n- **Sequence**: RNN, LSTM, Transformer, GPT, Llama\n- **Generative**: GAN, Diffusion, Stable Diffusion\n\nFor more demos (novel generator, medical Q&A, text-to-image, etc.), see the official documentation or the `Demo` directory in the source repository.","The backend team of a smart construction-site project needs to add hard-hat detection to an existing Java monitoring system so that violations can be flagged in real time.\n\n### Without Omega-AI\n- A separate Python microservice must be built for image analysis, adding architectural complexity and operational cost.\n- The Java application has to call the external service over HTTP or gRPC, and the extra network hop adds noticeable latency to video-stream processing.\n- The team is fluent in Java but has little PyTorch experience, so debugging inference errors and GPU out-of-memory issues is difficult.\n- Public-cloud APIs are costly, and uploading on-site video raises privacy concerns.\n\n### With Omega-AI\n- The engine is pulled in as a Maven dependency, embedding AI inference directly in the existing Java code with no separate service to maintain.\n- Built-in CUDA and cuDNN acceleration delivers the low latency required for real-time monitoring.\n- Ready-made model interfaces such as YOLO let Java developers build, train, and test networks without switching languages.\n- GPU memory is managed within the JVM process, avoiding cross-language resource waste and noticeably improving overall stability.\n\nOmega-AI let the Java team ship high-performance, on-premises AI inference with essentially no learning curve.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdromara_Omega-AI_cbeb2b0d.png","dromara","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fdromara_4a226ffe.png","poems & dreams",null,"xiaoyu@dromara.org","https:\u002F\u002Fdromara.org","https:\u002F\u002Fgithub.com\u002Fdromara",[83,87],{"name":84,"color":85,"percentage":86},"Java","#b07219",90.1,{"name":88,"color":89,"percentage":90},"Cuda","#3A4E3A",9.9,502,66,"2026-04-02T05:59:34","Apache-2.0","Windows, Linux","Requires an NVIDIA GPU, CUDA 11.7+, and the matching cuDNN version","20GB+ recommended (VGG16 training example)",{"notes":99,"python":100,"dependencies":101},"This is a Java-based deep learning framework; no Python environment is required. Make sure the locally installed CUDA version exactly matches the bundled jcuda version (e.g., CUDA 11.7 pairs with jcuda 11.7.0). When training large models (such as VGG16), raise the JVM memory settings (e.g., -Xmx20480m).","Not applicable (built in Java)",[102,103,104],"omega-engine-v4-gpu","jcuda","JDK",[13,15,26,14],[107,108,109,110,111,112],"llm","yolo","deeplearning","diffusion","neural-network","ai",4,"2026-03-27T02:49:30.150509","2026-04-06T05:17:33.036521",[117,122,127],{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},2413,"Can Omega-AI be used together with Spark?","Omega-AI is a low-dependency deep learning engine, so in principle it can be combined with any software, including Spark.","https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI\u002Fissues\u002F17",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},2414,"Does the project have official documentation?","Documentation is being written. In the meantime, please reach the maintainers via QQ or the QQ group for support.","https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI\u002Fissues\u002F16",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},2412,"How do I run the project on macOS (especially Apple Silicon)?","omega-ai does not currently support macOS; it only runs in CUDA environments. If you see the error `This CUDA version is not available on MacOS`, your environment is incompatible. Support for Huawei Ascend hardware is planned.","https:\u002F\u002Fgithub.com\u002Fdromara\u002FOmega-AI\u002Fissues\u002F18",[]]