[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-deepjavalibrary--djl":3,"tool-deepjavalibrary--djl":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":81,"stars":118,"forks":119,"last_commit_at":120,"license":121,"difficulty_score":23,"env_os":122,"env_gpu":123,"env_ram":124,"env_deps":125,"category_tags":131,"github_topics":132,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":145,"updated_at":146,"faqs":147,"releases":173},3694,"deepjavalibrary\u002Fdjl","djl","An Engine-Agnostic Deep Learning Framework in Java","Deep Java Library（DJL）是一款专为 Java 开发者打造的开源深度学习框架。它致力于降低人工智能的应用门槛，让熟悉 Java 生态的工程师无需成为机器学习专家，也能利用现有技能轻松构建、训练和部署深度学习模型。\n\n传统深度学习开发往往依赖 Python 环境或特定的底层引擎，导致 Java 应用集成困难且技术栈割裂。DJL 完美解决了这一痛点，它提供原生的 Java 开发体验，像使用普通 Java 库一样自然流畅。其最大的技术亮点在于“引擎无关性”：开发者无需在项目初期绑定特定的深度学习后端（如 PyTorch、TensorFlow 或 MXNet），可随时灵活切换，甚至根据硬件配置自动选择最优的 CPU 或 GPU 加速方案。\n\n通过简洁直观的 API 设计，DJL 引导用户以最佳实践完成从数据加载、模型构建到推理预测的全流程。无论是希望将 AI 能力融入企业级应用的软件工程师，还是倾向于使用 Java 工具链进行算法探索的研究人员，都能借助 DJL 在熟悉的 IDE 环境中高效工作，真正实现用 Java 驾驭深度学习。","\n![DeepJavaLibrary](website\u002Fimg\u002Fdeepjavalibrary.png?raw=true \"Deep Java 
Library\")\n\n[![Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fdeepjavalibrary\u002Fdjl.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases)\n[![Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-up-green)](https:\u002F\u002Fdocs.djl.ai\u002Fmaster\u002Findex.html)\n[![Continuous](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fworkflows\u002FContinuous\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fcontinuous.yml)\n[![Nightly Publish](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fworkflows\u002FNightly%20Publish\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fnightly_publish.yml)\n[![CodeQL](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fcodeql-analysis-java.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fcodeql-analysis-java.yml)\n\n# Deep Java Library (DJL)\n\n## Overview\n\n[Deep Java Library (DJL)](https:\u002F\u002Fdocs.djl.ai\u002Fmaster\u002Findex.html) is an open-source, high-level, engine-agnostic Java framework for deep learning. DJL is designed to be easy to get started with and simple to\nuse for Java developers. DJL provides a native Java development experience and functions like any other regular Java library.\n\nYou don't have to be machine learning\u002Fdeep learning expert to get started. You can use your existing Java expertise as an on-ramp to learn and use machine learning and deep learning. You can\nuse your favorite IDE to build, train, and deploy your models. DJL makes it easy to integrate these models with your\nJava applications.\n\nBecause DJL is deep learning engine agnostic, you don't have to make a choice\nbetween engines when creating your projects. You can switch engines at any\npoint. 
To ensure the best performance, DJL also provides automatic CPU\u002FGPU choice based on hardware configuration.\n\nDJL's ergonomic API interface is designed to guide you with best practices to accomplish\ndeep learning tasks.\nThe following pseudocode demonstrates running inference:\n\n```java\n    \u002F\u002F Assume user uses a pre-trained model from model zoo, they just need to load it\n    Criteria\u003CImage, Classifications> criteria =\n            Criteria.builder()\n                    .optApplication(Application.CV.OBJECT_DETECTION) \u002F\u002F find object detection model\n                    .setTypes(Image.class, Classifications.class)    \u002F\u002F define input and output\n                    .optFilter(\"backbone\", \"resnet50\")               \u002F\u002F choose network architecture\n                    .build();\n\n    Image img = ImageFactory.getInstance().fromUrl(\"http:\u002F\u002F...\");    \u002F\u002F read image\n    try (ZooModel\u003CImage, Classifications> model = criteria.loadModel();\n         Predictor\u003CImage, Classifications> predictor = model.newPredictor()) {\n        Classifications result = predictor.predict(img);\n\n        \u002F\u002F get the classification and probability\n        ...\n    }\n```\n\nThe following pseudocode demonstrates running training:\n\n```java\n    \u002F\u002F Construct your neural network with built-in blocks\n    Block block = new Mlp(28 * 28, 10, new int[] {128, 64});\n\n    Model model = Model.newInstance(\"mlp\"); \u002F\u002F Create an empty model\n    model.setBlock(block);                  \u002F\u002F set neural network to model\n\n    \u002F\u002F Get training and validation dataset (MNIST dataset)\n    Dataset trainingSet = new Mnist.Builder().setUsage(Usage.TRAIN) ... .build();\n    Dataset validateSet = new Mnist.Builder().setUsage(Usage.TEST) ... 
.build();\n\n    \u002F\u002F Setup training configurations, such as Initializer, Optimizer, Loss ...\n    TrainingConfig config = setupTrainingConfig();\n    Trainer trainer = model.newTrainer(config);\n    \u002F*\n     * Configure input shape based on dataset to initialize the trainer.\n     * 1st axis is batch axis, we can use 1 for initialization.\n     * MNIST is 28x28 grayscale image and pre processed into 28 * 28 NDArray.\n     *\u002F\n    trainer.initialize(new Shape(1, 28 * 28));\n    EasyTrain.fit(trainer, epoch, trainingSet, validateSet);\n\n    \u002F\u002F Save the model\n    model.save(modelDir, \"mlp\");\n\n    \u002F\u002F Close the resources\n    trainer.close();\n    model.close();\n```\n\n## [Getting Started](docs\u002Fquick_start.md)\n\n## Resources\n\n- [Documentation](docs\u002FREADME.md#documentation)\n- [DJL's D2L Book](https:\u002F\u002Fd2l.djl.ai\u002F)\n- [JavaDoc API Reference](https:\u002F\u002Fdjl.ai\u002Fwebsite\u002Fjavadoc.html)\n\n## Release Notes\n\n* [0.36.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.36.0) ([Code](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.36.0))\n* [0.35.1](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.35.1) ([Code](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.35.1))\n* [0.33.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.33.0) ([Code](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.33.0))\n* [0.32.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.32.0) ([Code](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.32.0))\n* [0.31.1](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.31.1) 
([Code](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.31.1))\n* [0.30.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.30.0) ([Code](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.30.0))\n* [+29 releases](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases)\n\n## Building From Source\n\nTo build from source, begin by checking out the code.\nOnce you have checked out the code locally, you can build it as follows using Gradle:\n\n```sh\n# for Linux\u002FmacOS:\n.\u002Fgradlew build\n\n# for Windows:\ngradlew build\n```\n\nTo increase build speed, you can use the following command to skip unit tests:\n\n```sh\n# for Linux\u002FmacOS:\n.\u002Fgradlew build -x test\n\n# for Windows:\ngradlew build -x test\n```\n\n### Importing into Eclipse\n\nTo import the source project into Eclipse, first generate the Eclipse project files:\n\n```sh\n# for Linux\u002FmacOS:\n.\u002Fgradlew eclipse\n\n# for Windows:\ngradlew eclipse\n```\n\nThen, in Eclipse:\n\nFile -> Import -> Gradle -> Existing Gradle Project\n\n**Note:** Please set your workspace text encoding to UTF-8.\n\n## Community\n\nYou can read our guide to [community forums, following DJL, issues, discussions, and RFCs](docs\u002Fforums.md) to figure out the best way to share and find content from the DJL community.\n\nJoin our [\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepjavalibrary_djl_readme_067d5695ad14.png' width='20px' \u002F> Slack channel](http:\u002F\u002Ftiny.cc\u002Fdjl_slack) to get in touch with the development team for questions and discussions.\n\nFollow our [\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepjavalibrary_djl_readme_f9fe62a6aaa0.png' width='20px' \u002F> X (formerly Twitter)](https:\u002F\u002Fx.com\u002Fdeepjavalibrary) to see updates about new content, features, and releases.\n\n关注我们 [\u003Cimg 
src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepjavalibrary_djl_readme_d52f38c14cc1.png' width='20px' \u002F> 知乎专栏](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fc_1255493231133417472) 获取DJL最新的内容！\n\n## Useful Links\n\n* [DJL Website](https:\u002F\u002Fdjl.ai\u002F)\n* [Documentation](https:\u002F\u002Fdocs.djl.ai\u002F)\n* [DJL Demos](https:\u002F\u002Fdocs.djl.ai\u002Fmaster\u002Fdocs\u002Fdemos\u002Findex.html)\n* [Dive into Deep Learning Book Java version](https:\u002F\u002Fd2l.djl.ai\u002F)\n\n## License\n\nThis project is licensed under the [Apache-2.0 License](LICENSE).\n","![DeepJavaLibrary](website\u002Fimg\u002Fdeepjavalibrary.png?raw=true \"Deep Java Library\")\n\n[![Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fdeepjavalibrary\u002Fdjl.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases)\n[![Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-up-green)](https:\u002F\u002Fdocs.djl.ai\u002Fmaster\u002Findex.html)\n[![Continuous](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fworkflows\u002FContinuous\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fcontinuous.yml)\n[![Nightly Publish](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fworkflows\u002FNightly%20Publish\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fnightly_publish.yml)\n[![CodeQL](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fcodeql-analysis-java.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Factions\u002Fworkflows\u002Fcodeql-analysis-java.yml)\n\n# Deep Java Library (DJL)\n\n## 概述\n\n[Deep Java Library (DJL)](https:\u002F\u002Fdocs.djl.ai\u002Fmaster\u002Findex.html) 是一个开源的、高层次的、与深度学习引擎无关的 Java 框架。DJL 旨在让 Java 开发者能够轻松上手并简单地使用深度学习技术。DJL 提供了原生的 Java 开发体验，其功能与其他常规 Java 
库无异。\n\n你无需成为机器学习或深度学习专家即可开始使用 DJL。你可以利用现有的 Java 技能作为入门途径，逐步学习和应用机器学习及深度学习技术。你可以使用自己喜欢的 IDE 来构建、训练和部署模型。DJL 还使得将这些模型轻松集成到你的 Java 应用程序中成为可能。\n\n由于 DJL 不依赖于特定的深度学习引擎，因此在创建项目时无需在不同引擎之间做出选择。你可以在任何时候切换使用的引擎。为了确保最佳性能，DJL 还会根据硬件配置自动选择 CPU 或 GPU。\n\nDJL 的人体工学 API 接口旨在通过最佳实践引导你完成深度学习任务。\n以下伪代码展示了如何进行推理：\n\n```java\n    \u002F\u002F 假设用户使用来自模型仓库的预训练模型，只需加载即可\n    Criteria\u003CImage, Classifications> criteria =\n            Criteria.builder()\n                    .optApplication(Application.CV.OBJECT_DETECTION) \u002F\u002F 查找目标检测模型\n                    .setTypes(Image.class, Classifications.class)    \u002F\u002F 定义输入和输出\n                    .optFilter(\"backbone\", \"resnet50\")               \u002F\u002F 选择网络架构\n                    .build();\n\n    Image img = ImageFactory.getInstance().fromUrl(\"http:\u002F\u002F...\");    \u002F\u002F 读取图像\n    try (ZooModel\u003CImage, Classifications> model = criteria.loadModel();\n         Predictor\u003CImage, Classifications> predictor = model.newPredictor()) {\n        Classifications result = predictor.predict(img);\n\n        \u002F\u002F 获取分类结果和置信度\n        ...\n    }\n```\n\n以下伪代码展示了如何进行训练：\n\n```java\n    \u002F\u002F 使用内置模块构建神经网络\n    Block block = new Mlp(28 * 28, 10, new int[] {128, 64});\n\n    Model model = Model.newInstance(\"mlp\"); \u002F\u002F 创建一个空模型\n    model.setBlock(block);                  \u002F\u002F 将神经网络设置到模型中\n\n    \u002F\u002F 获取训练和验证数据集（MNIST 数据集）\n    Dataset trainingSet = new Mnist.Builder().setUsage(Usage.TRAIN) ... .build();\n    Dataset validateSet = new Mnist.Builder().setUsage(Usage.TEST) ... 
.build();\n\n    \u002F\u002F 设置训练配置，例如初始化器、优化器、损失函数等\n    TrainingConfig config = setupTrainingConfig();\n    Trainer trainer = model.newTrainer(config);\n    \u002F*\n     * 根据数据集配置输入形状以初始化训练器。\n     * 第一轴是批次轴，我们可以用 1 来初始化。\n     * MNIST 是 28x28 的灰度图像，并被预处理为 28 * 28 的 NDArray。\n     *\u002F\n    trainer.initialize(new Shape(1, 28 * 28));\n    EasyTrain.fit(trainer, epoch, trainingSet, validateSet);\n\n    \u002F\u002F 保存模型\n    model.save(modelDir, \"mlp\");\n\n    \u002F\u002F 关闭资源\n    trainer.close();\n    model.close();\n```\n\n## [快速入门](docs\u002Fquick_start.md)\n\n## 资源\n\n- [文档](docs\u002FREADME.md#documentation)\n- [DJL 的 D2L 书籍](https:\u002F\u002Fd2l.djl.ai\u002F)\n- [JavaDoc API 参考](https:\u002F\u002Fdjl.ai\u002Fwebsite\u002Fjavadoc.html)\n\n## 发布说明\n\n* [0.36.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.36.0) ([代码](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.36.0))\n* [0.35.1](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.35.1) ([代码](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.35.1))\n* [0.33.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.33.0) ([代码](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.33.0))\n* [0.32.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.32.0) ([代码](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.32.0))\n* [0.31.1](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.31.1) ([代码](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.31.1))\n* [0.30.0](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases\u002Ftag\u002Fv0.30.0) ([代码](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Ftree\u002Fv0.30.0))\n* [+29 
个版本](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Freleases)\n\n## 从源码构建\n\n要从源码构建，首先需要检出代码。\n本地检出代码后，可以使用 Gradle 按照以下步骤进行构建：\n\n```sh\n# 对于 Linux\u002FmacOS:\n.\u002Fgradlew build\n\n# 对于 Windows:\ngradlew build\n```\n\n为了提高构建速度，可以使用以下命令跳过单元测试：\n\n```sh\n# 对于 Linux\u002FmacOS:\n.\u002Fgradlew build -x test\n\n# 对于 Windows:\ngradlew build -x test\n```\n\n### 导入到 Eclipse\n\n将源代码项目导入 Eclipse：\n\n```sh\n# 对于 Linux\u002FmacOS:\n.\u002Fgradlew eclipse\n\n# 对于 Windows:\ngradlew eclipse\n\n```\n\n在 Eclipse 中：\n\n文件 -> 导入 -> Gradle -> 现有 Gradle 项目\n\n**注意：** 请将工作区的文本编码设置为 UTF-8。\n\n## 社区\n\n你可以阅读我们的指南，了解如何参与 [社区论坛、关注 DJL、提交问题、讨论以及 RFC](docs\u002Fforums.md)，从而找到与 DJL 社区分享和获取内容的最佳方式。\n\n加入我们的 [\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepjavalibrary_djl_readme_067d5695ad14.png' width='20px' \u002F> Slack 频道](http:\u002F\u002Ftiny.cc\u002Fdjl_slack)，与开发团队联系，提出问题或参与讨论。\n\n关注我们的 [\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepjavalibrary_djl_readme_f9fe62a6aaa0.png' width='20px' \u002F> X（原 Twitter）](https:\u002F\u002Fx.com\u002Fdeepjavalibrary)，以获取有关新内容、功能和发布的最新动态。\n\n关注我们 [\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepjavalibrary_djl_readme_d52f38c14cc1.png' width='20px' \u002F> 知乎专栏](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fc_1255493231133417472) 获取DJL最新的内容！\n\n## 有用链接\n\n* [DJL 官网](https:\u002F\u002Fdjl.ai\u002F)\n* [文档](https:\u002F\u002Fdocs.djl.ai\u002F)\n* [DJL 示例](https:\u002F\u002Fdocs.djl.ai\u002Fmaster\u002Fdocs\u002Fdemos\u002Findex.html)\n* [深入浅出深度学习书 Java 版](https:\u002F\u002Fd2l.djl.ai\u002F)\n\n## 许可证\n\n本项目采用 [Apache-2.0 许可证](LICENSE)授权。","# Deep Java Library (DJL) 快速上手指南\n\nDeep Java Library (DJL) 是一个开源的、高级的、与引擎无关的 Java 深度学习框架。它专为 Java 开发者设计，无需成为机器学习专家即可使用现有的 Java 技能构建、训练和部署模型。\n\n## 1. 
环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS, 或 Windows\n*   **JDK**: JDK 8 或更高版本（推荐 JDK 11+）\n*   **构建工具**: Gradle (项目内置 Gradle Wrapper，无需单独安装) 或 Maven\n*   **IDE**: IntelliJ IDEA, Eclipse 或其他支持 Java 的编辑器\n*   **硬件加速 (可选)**: 如需 GPU 加速，请确保已安装对应的 NVIDIA 驱动和 CUDA 工具包。DJL 会根据硬件配置自动选择 CPU 或 GPU。\n\n## 2. 安装步骤\n\nDJL 作为一个标准的 Java 库，可以通过 Maven 或 Gradle 轻松集成到您的项目中。\n\n### 方式一：Maven 项目\n\n在 `pom.xml` 中添加以下依赖。DJL 会自动根据运行环境下载对应的深度学习引擎（如 PyTorch, TensorFlow, MXNet 等）。\n\n```xml\n\u003Cdependencies>\n    \u003C!-- DJL 核心 API -->\n    \u003Cdependency>\n        \u003CgroupId>ai.djl\u003C\u002FgroupId>\n        \u003CartifactId>api\u003C\u002FartifactId>\n        \u003Cversion>0.36.0\u003C\u002Fversion>\n    \u003C\u002Fdependency>\n    \n    \u003C!-- 模型 zoo (包含预训练模型) -->\n    \u003Cdependency>\n        \u003CgroupId>ai.djl\u003C\u002FgroupId>\n        \u003CartifactId>model-zoo\u003C\u002FartifactId>\n        \u003Cversion>0.36.0\u003C\u002Fversion>\n    \u003C\u002Fdependency>\n\n    \u003C!-- 示例：添加 PyTorch 引擎支持 (根据需要选择引擎) -->\n    \u003Cdependency>\n        \u003CgroupId>ai.djl.pytorch\u003C\u002FgroupId>\n        \u003CartifactId>pytorch-engine\u003C\u002FartifactId>\n        \u003Cversion>0.36.0\u003C\u002Fversion>\n    \u003C\u002Fdependency>\n    \n    \u003C!-- 示例：基础数据集库 (含 MNIST 等，训练示例需要) -->\n    \u003Cdependency>\n        \u003CgroupId>ai.djl\u003C\u002FgroupId>\n        \u003CartifactId>basic-dataset\u003C\u002FartifactId>\n        \u003Cversion>0.36.0\u003C\u002Fversion>\n    \u003C\u002Fdependency>\n\u003C\u002Fdependencies>\n```\n\n> **提示**：国内开发者若遇到依赖下载缓慢，可在 `pom.xml` 或 `settings.xml` 中配置阿里云 Maven 镜像：\n> ```xml\n> \u003Cmirror>\n>   \u003Cid>aliyunmaven\u003C\u002Fid>\n>   \u003CmirrorOf>*\u003C\u002FmirrorOf>\n>   \u003Cname>Aliyun Public Maven\u003C\u002Fname>\n>   \u003Curl>https:\u002F\u002Fmaven.aliyun.com\u002Frepository\u002Fpublic\u003C\u002Furl>\n> \u003C\u002Fmirror>\n> ```\n\n### 方式二：Gradle 项目\n\n在 `build.gradle` 中添加：\n\n```groovy\nrepositories {\n    
mavenCentral()\n    \u002F\u002F 国内加速可添加阿里云镜像\n    \u002F\u002F maven { url 'https:\u002F\u002Fmaven.aliyun.com\u002Frepository\u002Fpublic' }\n}\n\ndependencies {\n    implementation 'ai.djl:api:0.36.0'\n    implementation 'ai.djl:model-zoo:0.36.0'\n    implementation 'ai.djl.pytorch:pytorch-engine:0.36.0' \u002F\u002F 以 PyTorch 为例\n    implementation 'ai.djl:basic-dataset:0.36.0' \u002F\u002F 基础数据集 (含 MNIST 等，训练示例需要)\n}\n```\n\n### 从源码构建 (可选)\n\n如果您希望贡献代码或使用最新特性，可以克隆源码并构建：\n\n```sh\n# 克隆代码\ngit clone https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl.git\ncd djl\n\n# Linux\u002FmacOS 构建 (跳过测试以加快速度)\n.\u002Fgradlew build -x test\n\n# Windows 构建\ngradlew build -x test\n```\n\n## 3. 基本使用\n\nDJL 提供了直观的 API 来处理推理（Inference）和训练（Training）。\n\n### 场景一：快速推理 (使用预训练模型)\n\n以下示例演示如何从 Model Zoo 加载一个预训练的 ResNet50 目标检测模型，并对网络图片进行预测。\n\n```java\nimport ai.djl.Application;\nimport ai.djl.inference.Predictor;\nimport ai.djl.modality.Classifications;\nimport ai.djl.modality.cv.Image;\nimport ai.djl.modality.cv.ImageFactory;\nimport ai.djl.repository.zoo.Criteria;\nimport ai.djl.repository.zoo.ZooModel;\n\npublic class QuickStartInference {\n    public static void main(String[] args) throws Exception {\n        \u002F\u002F 定义模型加载标准：应用类型、输入输出类型、网络架构\n        Criteria\u003CImage, Classifications> criteria =\n                Criteria.builder()\n                        .optApplication(Application.CV.OBJECT_DETECTION) \u002F\u002F 查找目标检测模型\n                        .setTypes(Image.class, Classifications.class)    \u002F\u002F 定义输入和输出类型\n                        .optFilter(\"backbone\", \"resnet50\")               \u002F\u002F 选择网络架构\n                        .build();\n\n        \u002F\u002F 读取图片\n        Image img = ImageFactory.getInstance().fromUrl(\"https:\u002F\u002Fresources.djl.ai\u002Fimages\u002Fdog.jpg\");\n\n        \u002F\u002F 加载模型并进行预测\n        try (ZooModel\u003CImage, Classifications> model = criteria.loadModel();\n             Predictor\u003CImage, Classifications> predictor = model.newPredictor()) {\n    
        \n            Classifications result = predictor.predict(img);\n\n            \u002F\u002F 输出结果\n            System.out.println(result);\n        }\n    }\n}\n```\n\n### 场景二：模型训练 (从零构建)\n\n以下示例演示如何构建一个简单的多层感知机 (MLP)，并使用 MNIST 数据集进行训练。\n\n```java\nimport ai.djl.Model;\nimport ai.djl.basicdataset.cv.classification.Mnist;\nimport ai.djl.basicmodelzoo.basic.Mlp;\nimport ai.djl.ndarray.types.Shape;\nimport ai.djl.nn.Block;\nimport ai.djl.training.DefaultTrainingConfig;\nimport ai.djl.training.EasyTrain;\nimport ai.djl.training.Trainer;\nimport ai.djl.training.TrainingConfig;\nimport ai.djl.training.dataset.Dataset;\nimport ai.djl.training.loss.Loss;\nimport java.nio.file.Paths;\n\npublic class QuickStartTraining {\n    public static void main(String[] args) throws Exception {\n        \u002F\u002F 1. 构建神经网络 (输入 28*28, 输出 10, 隐藏层 [128, 64])\n        Block block = new Mlp(28 * 28, 10, new int[] {128, 64});\n\n        \u002F\u002F 2. 创建模型并设置网络结构\n        Model model = Model.newInstance(\"mlp\");\n        model.setBlock(block);\n\n        \u002F\u002F 3. 准备数据集 (MNIST，batch size 为 32)\n        Dataset trainingSet = Mnist.builder().optUsage(Dataset.Usage.TRAIN).setSampling(32, true).build();\n        Dataset validateSet = Mnist.builder().optUsage(Dataset.Usage.TEST).setSampling(32, false).build();\n        trainingSet.prepare(); \u002F\u002F 下载并准备数据\n        validateSet.prepare();\n\n        \u002F\u002F 4. 配置训练参数 (初始化器、优化器、损失函数等)\n        TrainingConfig config = setupTrainingConfig();\n        \n        \u002F\u002F 5. 创建 Trainer 并初始化\n        Trainer trainer = model.newTrainer(config);\n        \u002F\u002F 根据数据集形状初始化 (Batch size=1, 输入维度=28*28)\n        trainer.initialize(new Shape(1, 28 * 28));\n\n        \u002F\u002F 6. 开始训练 (假设 epoch=5)\n        int epoch = 5;\n        EasyTrain.fit(trainer, epoch, trainingSet, validateSet);\n\n        \u002F\u002F 7. 保存模型 (Model.save 接收 Path 类型参数)\n        model.save(Paths.get(\"build\u002Fmodels\"), \"mlp\");\n\n        \u002F\u002F 8. 释放资源\n        trainer.close();\n        model.close();\n    }\n    \n    \u002F\u002F 最小训练配置：以 softmax 交叉熵为损失函数，实际使用中可再配置 Optimizer、Initializer 及评估指标等\n    private static TrainingConfig setupTrainingConfig() {\n        return new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss());\n    }\n}\n```\n\n更多详细教程、API 文档及示例代码，请访问 [DJL 官方文档](https:\u002F\u002Fdocs.djl.ai\u002F) 或参考中文版《动手学深度学习》([D2L Java 版](https:\u002F\u002Fd2l.djl.ai\u002F))。","某大型电商平台的 Java 后端团队需要在订单系统中集成实时商品图像分类功能，以自动识别用户上传的违规图片。\n\n### 没有 djl 时\n- **技术栈割裂**：团队必须维护独立的 Python 微服务来运行深度学习模型，导致 Java 主业务与 AI 服务间通过网络通信，增加了系统复杂度和延迟。\n- **招聘与协作困难**：后端 Java 工程师不懂 Python 和 TensorFlow\u002FPyTorch 细节，需依赖算法团队部署模型，沟通成本高且迭代缓慢。\n- **环境部署繁琐**：生产环境需同时配置 JVM 和复杂的 Python 依赖库（如 CUDA、特定版本的深度学习框架），容器镜像体积大且容易冲突。\n- **引擎锁定风险**：一旦选定某种深度学习引擎（如 MXNet），后续若想切换至 PyTorch 以获得更好性能，几乎需要重写整个推理模块。\n\n### 使用 djl 后\n- **原生 Java 集成**：开发人员直接使用熟悉的 Java 代码加载预训练模型并进行推理，无需跨语言调用，显著降低延迟并简化架构。\n- **降低学习门槛**：Java 工程师无需成为 AI 专家，利用 djl 提供的高级 API 即可像调用普通 Jar 包一样完成模型加载与预测，实现自主开发。\n- **部署轻量化**：仅需标准的 Java 运行环境，djl 自动管理底层引擎依赖，支持根据硬件自动切换 CPU\u002FGPU，大幅简化运维流程。\n- **引擎无关性**：项目代码不绑定特定引擎，团队可随时在 PyTorch、TensorFlow 或 MXNet 之间无缝切换，灵活适配最新算法成果。\n\ndjl 让 Java 开发者能够以原生体验无缝融合深度学习能力，彻底打破了业务系统与 AI 
模型之间的技术壁垒。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdeepjavalibrary_djl_067d5695.png","deepjavalibrary","DeepJavaLibrary","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fdeepjavalibrary_b17cacf7.png","",null,"https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary",[82,86,90,93,96,100,104,107,111,114],{"name":83,"color":84,"percentage":85},"Java","#b07219",87.1,{"name":87,"color":88,"percentage":89},"Rust","#dea584",3.1,{"name":91,"color":92,"percentage":10},"C++","#f34b7d",{"name":94,"color":95,"percentage":23},"C","#555555",{"name":97,"color":98,"percentage":99},"Python","#3572A5",1.8,{"name":101,"color":102,"percentage":103},"HTML","#e34c26",0.8,{"name":105,"color":106,"percentage":103},"Scala","#c22d40",{"name":108,"color":109,"percentage":110},"CMake","#DA3434",0.4,{"name":112,"color":113,"percentage":110},"Shell","#89e051",{"name":115,"color":116,"percentage":117},"JavaScript","#f1e05a",0.3,4796,744,"2026-04-04T09:24:41","Apache-2.0","Linux, macOS, Windows","非必需。支持自动选择 CPU\u002FGPU，具体显卡型号、显存及 CUDA 版本取决于所选的后端深度学习引擎（如 PyTorch, TensorFlow 等），README 中未指定统一标准。","未说明",{"notes":126,"python":127,"dependencies":128},"这是一个纯 Java 框架，无需 Python 环境。它是引擎无关的，用户需根据项目需求单独引入具体的深度学习引擎依赖（如 DJL-PyTorch, DJL-TensorFlow 等）。构建源码需安装 Gradle，IDE 导入时建议将文本编码设置为 UTF-8。","不需要 (基于 Java)",[129,130],"Java Development Kit (JDK) 8+","Gradle (用于构建)",[15,14,13],[133,134,135,136,137,138,139,140,141,67,142,143,144],"deep-learning","neural-network","ai","java","mxnet","machine-learning","deep-neural-networks","ml","autograd","pytorch","tensorflow","onnxruntime","2026-03-27T02:49:30.150509","2026-04-06T11:32:03.347197",[148,153,158,163,168],{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},16928,"DJL 是否支持 CUDA 12.x 版本？为什么检测不到 GPU？","DJL 0.26.0-SNAPSHOT 及以上版本支持 PyTorch 2.1.1 和 CUDA 12。如果您使用的是旧版本，请升级到快照版本或等待正式发行版。此外，加载模型时尝试将 'mapLocation' 设置为 true。确保您的环境变量配置正确，以便 DJL 能找到相关的 CUDA 
库文件。","https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fissues\u002F2902",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},16929,"在 Google Colab 上使用 MXNet 引擎时报错 'Compile with USE_CUDA=1' 或找不到 GPU 怎么办？","这通常是 Colab 环境配置问题。首先，MXNet 需要 CUDA 10.1 或 10.2 才能工作。其次，DJL 需要在 $LD_LIBRARY_PATH 环境变量中找到 libcudart 文件。如果该路径未指向正确的 NVIDIA 库目录（例如 \u002Fusr\u002Flib64-nvidia），您需要创建一个符号链接，使 DJL 能够定位到 libcudart 文件。建议参考 D2L 书籍中的最新 Colab 安装指南以自动化此过程。","https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fissues\u002F824",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},16930,"如何在 GraalVM Native Image 模式下解决 'No deep learning engine found' 错误？","在 Native 模式下，需要正确配置 ServiceLoader 机制。您需要在资源配置文件（如 resources-config.json）中包含引擎的服务提供者配置。目前 DJL 与 TensorFlow 引擎已在 Quarkus 项目中验证可用，您可以参考官方 demo 项目 (aws-samples\u002Fdjl-demo\u002Ftree\u002Fmaster\u002Fquarkus) 中的 reflection-config.json 和 resources-config.json 配置示例，确保所有必要的类和资源都被包含在原生镜像中。","https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fissues\u002F103",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},16931,"使用 TensorFlow 进行多线程推理时出现内存泄漏（Java OOM）如何解决？","该问题与 JavaCPP 的垃圾回收机制有关。当设置 `-Dorg.bytedeco.javacpp.nopointergc=true` 时，可能会阻止 JavaCPP Deallocator 线程正常工作从而导致内存泄漏。解决方案是升级 JavaCPP 到 1.5.6 或更高版本，该版本已修复此问题。同时，建议在 DJL 的下一个发布版本中应用此修复。","https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fissues\u002F690",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},16932,"模型训练准确率很低，如何改进 BinaryImageTranslator 或整体模型效果？","准确率低通常是因为数据量不足。深度学习通常需要大规模数据集。建议采取以下措施：1. 将约 20% 的数据划分为测试集以验证模型准确性；2. 
采用迁移学习（Transfer Learning），使用在大型数据集上预训练的模型，然后在您的新数据集上进行微调。这通常能利用预模型学到的图像特征，显著提升小数据集上的表现。","https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fissues\u002F1665",[174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254,259,264,269],{"id":175,"version":176,"summary_zh":177,"released_at":178},99182,"v0.36.0","## 变更内容\n* @access2rohit 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3817 中将构建版本号提升至 0.36.0\n* @access2rohit 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3819 中更新文档（通过 DocString），提示使用 Utils.openUrl() 处理不受信任输入的风险\n* @fracpete 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3818 中改进了 NDManager.create(Number) 不支持类型错误信息\n\n## 新贡献者\n* @access2rohit 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3816 中完成了首次贡献\n* @fracpete 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3818 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fcompare\u002Fv0.35.1...v0.36.0","2025-12-16T22:27:47",{"id":180,"version":181,"summary_zh":182,"released_at":183},99183,"v0.35.1","## 变更内容\n* 修复：将 jQuery 从 2.1.1 升级至 3.7.1，以修复安全漏洞，由 @Lokiiiiii 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3812 中完成\n* 修复 GitHub Actions 工作流中的脚本注入漏洞，由 @Lokiiiiii 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3813 中完成\n* 修复 QuestionAnsweringTranslator 中的安全漏洞（CVSS 评分 8.2），由 @Lokiiiiii 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3814 中完成\n\n## 新贡献者\n* @Lokiiiiii 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3812 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fcompare\u002Fv0.35.0...v0.35.1","2025-12-03T23:00:40",{"id":185,"version":186,"summary_zh":187,"released_at":188},99184,"v0.35.0","## 变更内容\n* 引擎更新\n  * XGBoost 3.0.4 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3788\n\n## 功能增强\n* 将构建版本提升至 0.35.0，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3766 中完成\n* 为 OpenAI 添加默认请求头，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3772 中完成\n* 将 Gradle 更新至 9.0.0，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3777 中完成\n* [XGBoost] 支持 XGBoost 的 JSON 模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3786 中完成\n* [API] 接受张量输入的 JSON 格式输入\u002F输出，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3787 中完成\n* 将 XGBoost 升级至 3.0.4，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3788 中完成\n* [PyTorch] 为 IValue 添加字典元组支持，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3797 中完成\n* [XGBoost] 恢复 XGBoost 的 GPU 支持，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3796 中完成\n* 更新 Gradle 构建脚本以支持配置缓存，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3782 中完成\n* 为 PyTorch 引擎添加百分位数实现，由 @tipame 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3800 中完成\n* 实现 CSV 转换器，支持 XGBoost 的 CSV 输入\u002F输出，由 @smouaa 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3801 中完成\n* 修正 topK 的文档说明，由 @alanocallaghan 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3808 中完成\n* [API] 重构 csvTranslator，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3805 中完成\n\n## Bug 修复\n* 修复 PyTorch 原生 2.5.1 aarch64 构建问题，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3767 中完成\n* [修复] 解决加载分词器时 GPU 共享库的问题，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3773 中完成\n* 修复 YoloSegmentationTranslator 
的 #3774 问题，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3775 中完成\n* 修复 #3761 问题，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3776 中完成\n* 修复 `BaseNDManager.debugDump` 方法，由 @petebankhead 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3780 中完成\n* 修正 tokenizer_config.json 的优先级，并从 TokenizerConfig 中移除 doLowerCase 参数，由 @Soha-Agarwal 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3785 中完成\n* 修复 #3795 问题，为分词任务添加 aggregation_strategy 参数，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3798 中完成\n* 修复 Windows 平台上的 PyTorch 原生构建问题，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3802 中完成\n\n## CI\u002FCD\n* 构建 PyTorch JNI CPU 预编译版本，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3770 中完成\n* 移除夜间构建中的 PyTorch 2.1.2 和 2.3.1 发布版本，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3771 中完成\n* 修复 SentencePiece 原生工作流，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3768 中完成\n* 修复 FastText 原生工作流，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3769 中完成\n* 修复持续集成构建问题，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3804 中完成\n\n\n## 新 C","2025-10-21T16:53:58",{"id":190,"version":191,"summary_zh":192,"released_at":193},99191,"v0.24.0","## 主要特性\n\n- 引擎升级\n  - 将 PyTorch 2.0.1 设置为默认版本 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2710\n  - 将 OnnxRuntime 升级至 1.16.0 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2784\n- 支持 [SafeTensors](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fsafetensors) https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2763\n- 支持 YoloV8 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2776\n\n## 功能增强\n\n* [spark] 由 @sindhuvahinis 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2712 中更新 Dockerfile 中的 DJL 版本\n* [pytorch] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2710 中将 PyTorch 2.0.1 设为 DJL 0.24.0 的默认版本\n* 由 @jiyuanq 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2706 中实现 PyTorch 在独立 CUDA 流上的推理支持\n* [spark] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2713 中将 Javacv 版本更新至 1.5.9，用于 Spark Docker 镜像\n* [pytorch] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2717 中将 PyTorch Android 版本升级至 2.0.1\n* [api] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2721 中将 getNeuronDevices() 方法设为公共方法\n* [api] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2724 中添加当无法加载指定类时记录警告信息的功能\n* [api] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2729 中针对 SageMaker 上检测 Neuron 问题提供临时解决方案\n* 由 @rohithkrn 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2732 中为 Llama 支持设置自定义 FT 构建\n* [api] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2735 中修复 NeuronUtils 在非 root 用户下运行时的问题\n* 由 @zachgk 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2742 中添加带有默认值的 Utils.getEnvOrSystemProperty 方法\n* 由 @juliangamble 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2715 中实现 Issue #2693，即 PtNDArrayEx.multiBoxPrior 并进行验证\n* [api] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2746 中为 NDArrayAdapter 实现 NDArray.toType() 方法\n* [onnxruntime] 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2743 中将 OnnxRuntime 版本升级至 
1.15.1\n* 由 @KexinFeng 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2730 中添加因到达 EOS 标记而产生的 endPosition 输出\n* 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2763 中添加 Safetensors 支持\n* 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2765 中将 SpProcessor 设置为公共方法\n* 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2759 中在 model_zoo_importer 中打印出警告信息\n* 由 @SidneyLann 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2776 中支持 Yolov8\n* 由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2784 中将 OnnxRuntime 升级至 1.16.0\n* 由 @rohithkrn 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2785 中为 sm90 构建 FT\n* 由 @juliangamble 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2769 中实现 PtndArrayEx.multiboxDetection() 方法\n\n## 错误修复\n\n* [api] 修复 ChunkedB","2023-10-16T20:40:34",{"id":195,"version":196,"summary_zh":197,"released_at":198},99192,"v0.9.0","DJL 0.9.0 brings MXNet inference optimization, abundant PyTorch new feature support, TensorFlow windows GPU support and experimental DLR engine that support TVM models.\r\n\r\n## Key Features\r\n\r\n* Add experimental DLR engine support. 
Now you can run TVM models with DJL\r\n#### MXNet \r\n* Improve MXNet JNA layer by reusing String, String[] and PointerArray with an object pool, which reduces GC time significantly\r\n####  PyTorch  \r\n* You can easily create a COO sparse tensor with the following code snippet\r\n```\r\nlong[][] indices = {{0, 1, 1}, {2, 0, 2}};\r\nfloat[] values = {3, 4, 5};\r\nFloatBuffer buf = FloatBuffer.wrap(values);\r\nmanager.createCoo(buf, indices, new Shape(2, 4));\r\n```\r\n* If the input of your TorchScript model needs a List or Dict type, we now add simple one-dimensional support for you.\r\n```\r\n\u002F\u002F assume your TorchScript model takes model({'input': input_tensor})\r\n\u002F\u002F you tell us this kind of information by setting the name\r\nNDArray array = manager.ones(new Shape(2, 2));\r\narray.setName(\"input1.input\");\r\n```\r\n* We support loading an ExtraFilesMap\r\n```\r\n\u002F\u002F saving ExtraFilesMap\r\nCriteria\u003CImage, Classifications> criteria = Criteria.builder()\r\n  ...\r\n  .optOption(\"extraFiles.dataOpts\", \"your value\")  \u002F\u002F \u003C- pass in here \r\n  ... 
\r\n```\r\n#### TensorFlow\r\n* Windows GPU is now supported\r\n\r\n### Several Engines upgrade \r\n\r\n| Engine        | version |\r\n| ----------- | ------- |\r\n| PyTorch      | 1.7.0      |\r\n| TensorFlow | 2.3.1     |\r\n| fastText       | 0.9.2     |\r\n\r\n\r\n## Enhancement\r\n\r\n* Add Dockerfile for serving\r\n* Add Deconvolution support for MXNet engine\r\n* Support PyTorch COO sparse tensor\r\n* Add CSVDataset; you can find a sample usage [here](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdjl\u002Fblob\u002Fadb52e7e8cf5bd3f45e62641b817b8d83412e0e0\u002Fexamples\u002Fsrc\u002Fmain\u002Fjava\u002Fai\u002Fdjl\u002Fexamples\u002Ftraining\u002Ftransferlearning\u002FTrainAmazonReviewRanking.java#L113-L133)\r\n* Upgrade TensorFlow to 2.3.1\r\n* Upgrade PyTorch to 1.7.0\r\n* Add randomInteger operator support for MXNet and PyTorch engines\r\n* Add PyTorch Profiler\r\n* Add TensorFlow Windows GPU support\r\n* Support loading the model from a jar file\r\n* Support 1-D list and dict input for TorchScript\r\n* Remove the Pointer class used for JNI to relieve Garbage Collector pressure\r\n* Combine several BertVocabulary classes into one Vocabulary\r\n* Add loading the model from the Path class\r\n* Support ExtraFilesMap for PyTorch model inference\r\n* Allow both int32 & int64 for predictions & labels in TopKAccuracy\r\n* Refactor MXNet JNA binding to reduce GC time\r\n* Improve PtNDArray set method to use ByteBuffer directly and avoid a copy during tensor creation\r\n* Support experimental MXNet optimizeFor method for accelerator plugins.\r\n\r\n## Documentation and examples\r\n\r\n* Add Amazon Review Ranking Classification\r\n* Add Scala Spark example code on Jupyter Notebook\r\n* Add Amazon SageMaker Notebook and EMR 6.2.0 examples\r\n* Add DJL benchmark instructions\r\n\r\n## Bug Fixes\r\n\r\n* Fix PyTorch Android NDIndex issue\r\n* Fix Apache NiFi issue when loading multiple native libraries in the same Java process\r\n* Fix TrainTicTacToe not training issue\r\n* Fix Sentiment 
Analysis training example and FixedBucketSampler\r\n* Fix NDArray from DataIterable not being attached to NDManager properly\r\n* Fix WordPieceTokenizer infinite loop\r\n* Fix randomSplit dataset bug\r\n* Fix convolution and deconvolution output shape calculations\r\n\r\n## Contributors\r\n\r\nThank you to the following community members for contributing to this release:\r\n\r\nFrank Liu(@frankfliu)\r\nLanking(@lanking520)\r\nKimi MA(@kimim)\r\nLai Wei(@roywei)\r\nJake Lee(@stu1130)\r\nZach Kimberg(@zachgk)\r\n0xflotus(@0xflotus)\r\nJoshua(@euromutt)\r\nmpskowron(@mpskowron)\r\nThomas(@thhart)\r\nDocRozza(@docrozza)\r\nWai Wang(@waicool20)\r\nTrijeet Modak(@uniquetrij)\r\n","2020-12-18T22:06:56",{"id":200,"version":201,"summary_zh":202,"released_at":203},99193,"v0.7.0","DJL 0.7.0 brings SentencePiece for tokenization, GraalVM support for the PyTorch engine, a new set of Neural Network operators, a BOM module, a Reinforcement Learning interface and an experimental DJL Serving module.\r\n## Key Features\r\n* Now you can leverage the powerful [SentencePiece](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fsentencepiece) to do text processing including tokenization, de-tokenization, encoding and decoding. You can find more details on [extension\u002Fsentencepiece](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdjl\u002Ftree\u002Fmaster\u002Fextensions\u002Fsentencepiece).\r\n* Engine upgrade:\r\n  * MXNet engine: 1.7.0-backport\r\n  * PyTorch engine: 1.6.0\r\n  * TensorFlow: 2.3.0\r\n* MXNet multi-GPU training is now boosted by MXNet KVStore by default, which saves a lot of GPU memory-copy overhead.\r\n* [GraalVM](https:\u002F\u002Fwww.graalvm.org\u002F) is fully supported for both regular execution and native image with the PyTorch engine. 
You can find more details on the [GraalVM example](https:\u002F\u002Fgithub.com\u002Faws-samples\u002Fdjl-demo\u002Ftree\u002Fmaster\u002Fgraalvm).\r\n* Add a new set of Neural Network operators that offer full control over parameters for the CV domain, similar to the PyTorch [nn.functional module](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnn.functional.html). You can find each operator method in its Block class.\r\n```java\r\nConv2d.conv2d(NDArray input, NDArray weight, NDArray bias, Shape stride, Shape padding, Shape dilation, int groups);\r\n```\r\n* A Bill of Materials (BOM) is introduced to manage dependency versions for you. In DJL, the engine you are using is usually tied to a specific version of its native package. By adding the BOM dependency like this, you no longer need to worry about versions.\r\n```xml\r\n\u003Cdependency>\r\n    \u003CgroupId>ai.djl\u003C\u002FgroupId>\r\n    \u003CartifactId>bom\u003C\u002FartifactId>\r\n    \u003Cversion>0.7.0\u003C\u002Fversion>\r\n    \u003Ctype>pom\u003C\u002Ftype>\r\n    \u003Cscope>import\u003C\u002Fscope>\r\n\u003C\u002Fdependency>\r\n```\r\n```\r\nimplementation platform(\"ai.djl:bom:0.7.0\")\r\n```\r\n* JDK 14 is now supported\r\n* New Reinforcement Learning interface including [RlAgent](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdjl\u002Fblob\u002Fmaster\u002Fapi\u002Fsrc\u002Fmain\u002Fjava\u002Fai\u002Fdjl\u002Fmodality\u002Frl\u002Fagent\u002FRlAgent.java), [RlEnv](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdjl\u002Fblob\u002Fmaster\u002Fapi\u002Fsrc\u002Fmain\u002Fjava\u002Fai\u002Fdjl\u002Fmodality\u002Frl\u002Fenv\u002FRlEnv.java), etc. You can see a comprehensive [TicTacToe example](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdjl\u002Fblob\u002Fmaster\u002Fexamples\u002Fsrc\u002Fmain\u002Fjava\u002Fai\u002Fdjl\u002Fexamples\u002Ftraining\u002FTrainTicTacToe.java).\r\n* Support [DJL Serving 
module](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fdjl\u002Ftree\u002Fmaster\u002Fserving). With only a single command, you can now deploy your model without having to write server code or configuration such as a server proxy.\r\n```sh\r\ncd serving && .\u002Fgradlew run --args=\"-m https:\u002F\u002Fdjl-ai.s3.amazonaws.com\u002Fresources\u002Ftest-models\u002Fmlp.tar.gz\"\r\n```\r\n\r\n## Documentation and examples\r\n* We wrote the [D2L book](https:\u002F\u002Fd2l.djl.ai\u002F) chapters 1 through 7 with DJL. You can learn basic deep learning concepts and classic CV model architectures with DJL. [Repo](https:\u002F\u002Fgithub.com\u002Faws-samples\u002Fd2l-java)\r\n* We launched [a new doc website](https:\u002F\u002Fdocs.djl.ai\u002F) that hosts abundant documents and tutorials for quick search and copy-paste.\r\n* [New Online Sentiment Analysis with Apache Flink](https:\u002F\u002Fgithub.com\u002Faws-samples\u002Fdjl-demo\u002Ftree\u002Fmaster\u002Fflink\u002Fsentiment-analysis).\r\n* [New CTR prediction using Apache Beam and Deep Java Library(DJL)](https:\u002F\u002Fgithub.com\u002Faws-samples\u002Fdjl-demo\u002Ftree\u002Fmaster\u002Fapache-beam\u002Fctr-prediction).\r\n* New DJL logging configuration document, which covers how to enable slf4j, switch to other logging libraries and adjust the log level to debug DJL.\r\n* New Dependency Management document that lists DJL internal and external dependencies along with their versions.\r\n* New CV Utilities document as a tutorial for the Image API.\r\n* New Cache Management document updated with more detail on the different cache categories.\r\n* Update Model Loading document to describe loading models from various sources like S3 and HDFS.\r\n\r\n## Enhancement\r\n* Add archive file support to SimpleRepository\r\n* ImageFolder supports nested folders\r\n* Add singleton method for LambdaBlock to avoid redundant function references\r\n* Add Constant Initializer\r\n* Add RMSProp, Adagrad, Adadelta 
Optimizer for MXNet engine\r\n* Add new tabular dataset: Airfoil Dataset\r\n* Add new basic dataset: CookingExchange, BananaDetection\r\n* Add new NumPy like operators: full, sign\r\n* Make prepare() method in Dataset optional\r\n* Add new Image augmentation APIs where you can add to Pipeline to enrich your image dataset\r\n* Add new handy fromNDArray  to Image API for converting NDArray to Image object quickly\r\n* Add interpolation option for Image Resize operator\r\n* Support archive file for s3 repository\r\n* Import new SSD model from TensorFlow Hub into DJL model zoo\r\n* Import new Sentiment Analysis model from HuggingFace into DJL model zoo\r\n\r\n## Breaking changes\r\n* Drop CUDA 9.2 support for all the platforms including linux, windows\r\n* The arguments of several blocks are changed to align with the signature of other widely used Deep Learning frameworks, please ","2020-09-04T01:14:22",{"id":205,"version":206,"summary_zh":207,"released_at":208},99185,"v0.34.0","## 主要变更\n* 引擎更新\n  * PyTorch 2.7.1 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3733\n  * 移除 TensorRT 引擎 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3765\n\n## 功能增强\n* [api] 增加基于 FUSE 的仓库支持，由 @raymondkhliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3695 中实现\n* 将模型服务器视为远程模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3704 中实现\n* 使 ZeroShotClassificationTranslator 行为与 Hugging Face 一致，由 @raphaeldelio 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3712 中实现\n* 为提升性能，移除 UUID.randomUUID() 的使用，由 @aakashb-kayzen 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3719 中实现\n* 使用 NDManager.nextUid() 替代 UUID，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3720 中实现\n* 增加 NDArray 对角线功能，由 @dev-jonghoonpark 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3724 
中实现\n* [api] 实现将远程 REST API 调用作为模型，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3725 中实现\n* 为 RpcEngine 增加 jsonlines 流式支持，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3727 中实现\n* 增加 genai 扩展，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3726 中实现\n* [genai] 向 ChatInput 添加工具调用功能，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3728 中实现\n* 增加用于 genai 函数调用的工具类，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3729 中实现\n* 改进 genai 扩展中的函数调用功能，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3731 中实现\n* 支持将 ChatInput 转换为 GeminiInput，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3739 中实现\n* 部分支持 HuggingFace Tokenizer，使其能够使用 tokenizer_config.json 中的参数，由 @Soha-Agarwal 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3738 中实现\n* 增加 genai 对 Anthropic 的支持，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3759 中实现\n\n## 错误修复\n* 修正 CaptchaDataset 的选项数量，以防止 torch.gather 抛出 IndexError，由 @xinhuagu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3694 中实现\n* 修复 SeqBatcher 在重塑偏移量时的维度检查问题，由 @xinhuagu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3702 中实现\n* 修正文档中错误的函数名及轻微拼写错误，由 @xinhuagu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3703 中实现\n* 修复 getImageHeight 方法返回值错误的问题，由 @xinhuagu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3706 中实现\n* [djl_converter]: 修复 djl_converter 的 bug，由 @raymondkhliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3705 中实现\n* 修复使用初始图像时出现的越界异常，由 @luke-zhou 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3717 中实现\n* 修复 
earlystopping 指标问题 #3722，由 @SamBSalgado 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3736 中实现\n* [tokenizers] 修复 tokenizer 的 CPU 和 CUDA 构建问题，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3756 中实现\n* [examples] 更新 tfhub 的 URL，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3752 中实现\n* 更新","2025-08-12T14:50:50",{"id":210,"version":211,"summary_zh":212,"released_at":213},99186,"v0.33.0","## 主要变更\n* 引擎更新\n  * OnnxRuntime 更新至 1.21.0\n\n## 功能增强\n* [tokenizers] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3607 中添加了 lasttoken 池化功能\n* [api] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3622 中提供了 TranslatorContext 的具体实现\n* [api] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3626 中增加了零样本目标检测支持\n* [api] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3628 中新增了零样本图像分类支持\n* 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3637 中增加了 yolov8s-world2 模型支持\n* [tensorflow] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3638 中允许获取 TensorFlow 模型的可用签名\n* [api] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3641 中提升了 listModel 的性能\n* [tokenizers] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3642 中添加了 SparseRetrievalTranslator\n* [tokenizers] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3652 中修复了测试中的分词器名称问题\n* [tokenizer] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3654 中将 tokenizers 更新至 0.21.1\n* [onnxruntime] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3678 中将 OnnxRuntime 更新至 1.21.0\n* [examples] 由 @xyang16 在 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3681 中添加了 WhisperJet 模型演示\n* [pytorch] 由 @saedmanaf 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3663 中增加了 diff 支持\n\n\n## 错误修复\n* [examples] 由 @sindhuvahinis 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3605 中修复了一些依赖和测试要求\n* [onnxruntime] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3630 中修复了 intraOpNumThreads 的 bug\n* [ci] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3635 中修复了用于集成的 build.gradle 文件\n* [ci] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3636 中修复了 Gradle 构建脚本中的系统属性\n* 修复：由 @leleZeng 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3646 中修正了 16 位 PCM 归一化，以避免溢出问题\n* 修复：由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3645 和 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3649 中通过更新 candle-core 至 0.8.4 版本来修复 Rust 构建问题\n* [fix] 由 @dwctic 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3660 中修复了 LRUReplayBuffer 的 \"stepToReplace\" 索引问题\n\n\n## 文档\n* docs：由 @operagxoksana 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3668 中添加了持续集成徽章链接\n* [doc] 由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3669 中更新了 onnxruntime 的 README 文件\n* chore：由 @operagxoksana 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3670 中编辑了徽章\n* chore：由 @operagxoksana 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3671 中为发布添加了一个图标\n* 修正“加载模型”文档：方法 ImageCl","2025-05-09T21:50:23",{"id":215,"version":216,"summary_zh":217,"released_at":218},99187,"v0.32.0","## 主要变更\n- 引擎更新\n  - Tokenizers 更新至 0.21.0\n  - OnnxRuntime 更新至 1.20.0\n\n## 功能增强\n* [PyTorch] 支持 PyTorch 2.1.2 的 Neuron 模型，由 @siddvenk 在 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3542 中实现\n* [ONNX Runtime] 将 ONNX Runtime 引擎升级至 1.20.0，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3549 中完成\n* [Tokenizers] 为 QA 推理返回详细信息，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3555 中实现\n* [API] 允许配置自定义批处理器，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3559 中添加\n* [Tokenizer] 将 Hugging Face Tokenizer 更新至 0.21.0，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3572 中完成\n* [Tokenizers] 为编码添加 int32 选项，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3571 中实现\n* [API] 添加 ZeroShotClassification 支持，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3579 中实现\n* [Tokenizers] 支持将 zero-shot-classification 导入模型库，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3580 中完成\n* [FastText] 添加 Linux-aarch64 支持，由 @iamshubhambhola 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3584 中实现\n* 在 DJL API 中暴露 OnnxRuntime 的 getMetadata() 方法，由 @VanjaRadulovic 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3596 中完成\n* [API] 重构图像处理功能，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3588 中完成\n* 将 zero-shot-classification 添加到模型库，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3589 中实现\n* [API] 为计算机视觉添加 Pad 图像变换功能，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3593 中实现\n\n## Bug 修复\n* [API] 修复 Tar\u002FZip 工具中导致工件不正确的问题，由 @siddvenk 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3544 中完成\n* [Tokenizer] 修复 token_classification 导入问题，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3570 中完成\n* [Tokenizer] 修复 
DJL 转换器的 trust_remote_code 问题，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3569 中完成\n* [修复] 修复 YoloV8Translator，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3575 中完成\n* [Tokenizers] 修复 ZeroShotClassificationTranslator 的 bug，由 @bryanktliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3581 中完成\n* [XGBoost] 修复 XGBoost 内部关闭已替换数组的问题，由 @ewan0x79 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3558 中完成\n\n## 文档更新\n* [PyTorch] 更新 0.32.0 版本支持的 PyTorch 版本，由 @siddvenk 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3541 中完成\n* 更新 README.md（修正错别字），由 @paulk-asert 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3576 中完成\n* 文档：（Readme）由 @Lubov66 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3577 中完成\n\n## CI\u002FCD\n* 将构建版本提升至 0.32.0，由 @siddvenk 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3540 中完成\n* [CI] 修复 CI 以正确测试 Windows 环境，由 @siddvenk 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3543 中完成\n* [CI] 将 serving.publish 功能重新添加到 DJL 中，由 @sindhuvahinis 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3551 中完成\n* [","2025-03-06T20:24:04",{"id":220,"version":221,"summary_zh":222,"released_at":223},99188,"v0.31.1","## 主要变更\n* 引擎更新：\n  * PyTorch 2.5.1 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3517\n  * HuggingFace Tokenizers 0.20.3 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3514\n* 为 HuggingFace Tokenizers 添加了 Android 支持 @naveen521kk 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3531 中实现\n* 修复了跨平台归档解压问题 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3544\n\n## 功能增强\n* [api] 使用编码器\u002F解码器为 Segment anython2 翻译器服务 @frankfliu 在 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3487 中实现\n* [api] NDScope 中的备用 NDArray 不应被关闭 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3490 中实现\n* [api] 将 sam2 模型添加到 onnxruntime 模型库 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3492 中实现\n* [api] 统一 CV 输出格式 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3493 中实现\n* [api] 为 Sam2ServingTranslator 可视化 sam2 输出 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3494 中实现\n* [api] 改进适用于 PyTorch 追踪模型的 Sam2Translator @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3495 中实现\n* [android] 将 PyTorch 版本更新至 2.4.0 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3474 中实现\n* [tokenizers] 使用来自 rust.io 的分词器 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3476 中实现\n* [rust] 移除 cublaslt 中不必要的克隆操作 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3482 中实现\n* [api] 使 Sam2 输入与其他 CV 模型保持一致 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3498 中实现\n* [api] 为部分 CV 模型添加推理服务支持 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3499 中实现\n* HuggingFaceTokenizer：增加对 Android 的支持 @naveen521kk 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3531 中实现\n* [tokenizer] 将分词器更新至 0.20.3 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3514 中实现\n* [tokenizer] 在 libs.versions.toml 中将分词器版本更新至 0.20.3 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3515 中实现\n* [pytorch] 将 Yolo11 模型添加到模型库 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3516 中实现\n* [pytorch] 将 PyTorch 更新至 2.5.1 @xyang16 在 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3517 中实现\n* [converter] 剪裁 jit 输出中的 token_str @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3527 中实现\n\n## 错误修复\n* [PyTorch] 修复 sam2 模型版本问题 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3496 中实现\n* [api] 修复 QaServingTranslator 输出格式 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3500 中实现\n* [djl-convert] 修复 HuggingFace 转换器 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3505 中实现\n\n## 文档更新\n* [docs] 为 2.4.0 版本更新 PyTorch 引擎 README @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3472 中实现\n* 文档：添加了一个链接 @operagxoksana 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3510 中实现\n* [doc] 将博客文章索引添加到文档中 @xyang16 i","2024-11-18T23:14:54",{"id":225,"version":226,"summary_zh":227,"released_at":228},99189,"v0.30.0","## 主要变更\n* 引擎更新：\n  * OnnxRuntime 1.19.0 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3446\n  * Huggingface Tokenizers 0.20.0 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3452\n* 为 SAM2 模型新增了掩码生成任务 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3450\n* 文本嵌入推理：\n  * 新增对 Mistral、Qwen2、GTE、Camembert 嵌入模型的支持\n  * 新增重排序模型支持\n\n## 功能增强\n* [api] 避免使用非 ASCII 字符，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3395 中实现\n* [djl-converter] 如果模型转换失败，则以错误退出，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3399 中实现\n* [api] 支持将 TEI 输入格式用于重排序模型，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3400 中实现\n* [rust] 为 Rust 引擎添加 sigmoid 和 softmax 运算符，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3407 中实现\n* [test] 检测具有指定引擎的 GPU，由 @frankfliu 在 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3409 中实现\n* [api] 添加 Criteria.isDownload() API，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3403 中实现\n* [rust] 为每个 CUDA 架构构建 .so 文件，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3410 中实现\n* [rust] 添加 Mistral 嵌入模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3412 中实现\n* [tokenizers] 在 djl-convert 中添加支持的架构，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3416 中实现\n* [tokenizers] 将 pt 文件名替换为 safetensors，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3417 中实现\n* [rust] 在指定设备上加载模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3419 中实现\n* [rust] 添加 Qwen2 模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3420 中实现\n* [rust] 支持预下载的 Rust 共享库，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3421 中实现\n* [pytorch] 添加 pad 运算符，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3423 中实现\n* [rust] 为不支持的操作提供更友好的错误信息，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3424 中实现\n* [api] 为 Yolo 添加居中缩放图像操作，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3425 中实现\n* [rust] 添加 GTE 和 Gemma2 模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3422 中实现\n* [djl-convert] 设置导入时的默认最大模型大小限制，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3428 中实现\n* [djl-import] 导入模型时包含所需版本信息，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3431 中实现\n* [android] 将 DJL 版本升级至 0.30.0，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3432 中实现\n* [rust] 将 
cublaslt 包装改为非静态，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3434 中实现\n* [djl-convert] 排除 includeTokenTypes 中的模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3435 中实现\n* [rust] 在旋转嵌入中使张量连续，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalib","2024-09-13T19:52:49",{"id":230,"version":231,"summary_zh":232,"released_at":233},99190,"v0.28.0","## 主要变更\n* 引擎升级\n  * PyTorch 2.2.2 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3155\n  * Sentencepiece 0.2.0 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3163\n* 引擎与 API 增强\n  * 添加实验性 Rust 引擎 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3078\n\n## 功能增强\n* [api] 根据任务自动检测 TranslatorFactory，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3136 中实现\n* [api] 添加 OnesBlockFactory，便于测试，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3140 中实现\n* 确保备用 NDManager 可以使用 GPU，由 @david-sitsky 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3138 中实现\n* [api] 尝试为备用 NDManager 使用同一设备，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3146 中实现\n* [api] 支持在 JSON 中序列化 NaN，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3156 中实现\n* [rust] 添加 Rust 引擎实现，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3078 中实现\n* [rust] 添加 Rust 模型库，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3132 中实现\n* [rust] 支持 RsModel 加载 DJL 模型，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3147 中实现\n* [rust] RsModel 在关闭时删除模型，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3170 中实现\n* [tokenizers] 将分词器更新至 0.19.1，由 @frankfliu 在 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3143 中实现\n* [tokenizer] 允许使用 HF_TOKEN 访问 gated 模型，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3150 中实现\n* [tokenizers] 创建 djl_converter 包，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3172 中实现\n* [tokenizer] 重构 djl_convert Python 代码，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3179 中实现\n* 对 djl_converter 的更新，由 @xyang16 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3187 中实现\n* [pytorch] 将 PyTorch 更新至 2.2.2，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3155 中实现\n* [pytorch] 更新 PyTorch 引擎的 README 文件以适配 2.2.2 版本，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3165 中实现\n* [pytorch] 优化 PyTorch NDArray 的内存拷贝开销，由 @ewan0x79 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3137 中实现\n* [pytorch] 将 PyTorch 更新至 2.3.0，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3192 中实现\n* [sentencepiece] 将 Sentencepiece 更新至 0.2.0，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3163 中实现\n* [huggingface] 增加更多 ONNX 模型转换选项，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3180 中实现\n\n## 错误修复\n* [gitignore] 避免检查二进制文件，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3134 中实现\n* [api] 关闭文件流，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3130 中实现\n* [api] 修复日志调用规范，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3148 中实现\n* [api] 修复 Criteria.toString() 的 bug，由 @frankfliu 在 https:\u002F\u002Fgithub.com\u002Fdeepjavali","2024-05-16T03:46:20",{"id":235,"version":236,"summary_zh":237,"released_at":238},99194,"v0.29.0","## Key 
Changes\r\n\r\n* Upgrades for engines\r\n  * Upgrades PyTorch engine to 2.3.1\r\n  * Upgrades TensorFlow engine to 2.16.1\r\n  * Introduces Rust engine CUDA support\r\n  * Upgrades OnnxRuntime version to 1.18.0 and added CUDA 12.4 support\r\n  * Upgrades javacpp version to 1.5.10\r\n  * Upgrades HuggingFace tokenizer to 0.19.1\r\n  * Fixes several issues for LightGBM engine\r\n  * Deprecated llamacpp engine\r\n\r\n* Enhancements for engines and API\r\n  * Adds Yolov8 segmentation and pose detection support \r\n  * Adds metric type to Metric class\r\n  * Improves drawJoints and drawMask behavior for CV model\r\n  * Improves HuggingFace model importing and conversion tool\r\n  * Improves HuggingFace NLP model batch inference performance\r\n  * Adds built-in ONNX extension support\r\n  * Adds several NDArray operators in PyTorch engine\r\n  * Adds fp16 and bf16 support for OnnxRuntime engine\r\n  * Adds CrossEncoder support for NLP models\r\n\r\n## Enhancements\r\n* Adds metric type to Metric class by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3244\r\n* Improves drawJoints behavior by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3305\r\n* [api] Allows to control json pretty print with env var by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3288\r\n* [api] Avoid null dimensions for Metric by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3246\r\n* [api] Improve NDArray.toDebugString() output by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3290\r\n* [api] Loads native engine in deterministic order by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3300\r\n* [api] Refactor drawMask() for instance segmentation by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3304\r\n* [api] Refactor 
nms for yolo translator by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3297\r\n* add close method to all nd manager by @lanking520 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3225\r\n* ported tools\u002Fstats.gradle by @elect86 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3219\r\n* use standard GSON output by @lanking520 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3284\r\n* [enhancement] Optimize memory copy overhead to enhance performance. by @ewan0x79 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3289\r\n* Gradle Kotlin script plus other stuff by @elect86 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3167\r\n* Improved incremental build by @benjie332 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3231\r\n* Refactored Identifiers by @congyuluo in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3276\r\n* Refactored Identifiers by @congyuluo in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3282\r\n* [gradle] Remove unused gradle files by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3280\r\n* [jacoco] exclude spark extension since it does not contain test by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3230\r\n* [Lgbm] support multi classification by @ewan0x79 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3234\r\n* [Lgbm] support multi type prediction by @ewan0x79 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3237\r\n* [llamacpp] Removing llamacpp support in DJL by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3312\r\n* [mxnet-model-zoo] Adds missing translatorFactory in metadata by @frankfliu in 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3279\r\n* [onnx] Adds fp16 and bf16 support for OnnxRuntime by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3281\r\n* [onnxruntime] Add debug message for OnnxRuntime by @xyang16 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3217\r\n* [onnxruntime] Adds yolov8n pose model for OnnxRuntime by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3309\r\n* [onnxruntime] Adds yolov8n-seg model to onnxruntime model zoo by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3310\r\n* [onnxruntime] Load onnx extension if available by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3333\r\n* [pytorch] Adds Yolov8n-seg model to model zoo by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3308\r\n* [pytorch] Adds back PyTorch 2.1.2 support by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3285\r\n* [pytorch] Adds yolov8n pose estimation model by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3298\r\n* [pytorch] Implements gammaln operator for PyTorch by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3262\r\n* [pytorch] Split maven publish into two parts by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3273\r\n* [rust] Add tokenizer cuda build workflow by @xyang16 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3322\r\n* [rust] Allows -2 as dims for sum() by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3221\r\n* [rust] Change logging level to debug by @xyang16 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3336\r\n* 
[rust] Download cu124 jni library for cuda by @xyang16 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3327\r\n* [rust] Remove 0-dimension tensor compare in NDArrayTests by @xyan","2024-07-19T01:45:46",{"id":240,"version":241,"summary_zh":242,"released_at":243},99195,"v0.27.0","## Key Changes\r\n* Upgrades for engines\r\n  * OnnxRuntime 1.17.1 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3019\r\n* Enhancements for engines and API\r\n  * Supports PyTorch stream imperative model load https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2981\r\n  * Support encode\u002Fdecode String tensor https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3034\r\n\r\n## Enhancement\r\n* Suppress serial warning for JDK21 by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2935\r\n* [api] Moves commons-compress dependency to standalone class. by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2951\r\n* [api] Allows to load .pt or .onnx file from jar url by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2955\r\n* [tokenizer] Return if exceed max token length by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2957\r\n* [tokenizer] Adds getters for HuggingfaceTokenizer by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2958\r\n* [pytorch] Upgrade android build to 0.26.0 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2975\r\n* [pytorch] Avoid loading .lib file from PYTORCH_LIBRARY_PATH by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2987\r\n* [api] Adds utility method to Model for accessing properties by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3007\r\n* [api] Adds 
suffix to percentile metric name by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3011\r\n* [api] Adds dimension for prediction metric by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3013\r\n* Thread-safe FaceDetectionTranslator by @StefanOltmann in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3016\r\n* [api] Upgrades commons compress to 1.26.0 for CVE by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3018\r\n* Avoid duplicated loading native library by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3020\r\n* [api] Allows to use relative jar uri for cache folder name by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3026\r\n* support includeTokenTypes in TextEmbeddingBatchTranslator by @morokosi in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3032\r\n* [tokenizer] Adds includeTokenTypes for all translators by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3035\r\n* Updates dependencies version to latest by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3040\r\n* [pytorch] Allows to exclude certain DLL from pytorch directory by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3043\r\n* Update checkstyle tool version to 10.14.2 by @xyang16 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3047\r\n* Upgrade dependency version by @xyang16 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3049\r\n\r\n## Bug Fixes\r\n* [fix][ci] fix typo in publish metric workflow by @siddvenk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2976\r\n* [fix][ci] avoid early exit of script for failure case by @siddvenk in 
https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2979\r\n* [ci][fix] update path to android sdk manager cli by @siddvenk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2980\r\n* [dataset] Fixes broken link for mnist dataset by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2984\r\n* [database] Fixes mnist URL for local unit test by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2988\r\n* fix #2968 by @SidneyLann in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2986\r\n* [dataset] Fixes wikitext-2 by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2996\r\n* [spark] Fixes python tarslip security concern by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2995\r\n* Fixes failing CI by @ydm-amazon in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3001\r\n* Fixes cases where the getEngine method in the EngineProvider class returns null when called concurrently. 
by @onaple in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3005\r\n* [api] Fixes typo in CudaUtils by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3008\r\n* [model-zoo] Fixes typo in README by @fensch in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3009\r\n* [ci] Fixes nightly build for onnx 1.17.1 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3021\r\n* [pytorch] Fixes detecting wrong flavor on macOS issue by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3027\r\n* [bom] Fixes djl-serving packages in BOM by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F3039\r\n\r\n## Documentation\r\n* Bump DJL version to 0.27.0 by @siddvenk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2933\r\n* [doc] include trtllm convert manual by @sindhuvahinis in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2941\r\n* [docs] Updates README by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2954\r\n* [doc] Make LMI a separate tab and include I\u002FO schema by @sindhuvahinis in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2960\r\n* [docs] Fixes cuda version for pytorch native library by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2963\r\n* docs: add AWS Graviton3 PyTorch inference tuning details","2024-03-28T21:19:05",{"id":245,"version":246,"summary_zh":247,"released_at":248},99196,"v0.26.0","## Key Changes\r\n* LlamaCPP Support. You can use DJL to run supported LLMs using the [LlamaCPP engine](https:\u002F\u002Fgithub.com\u002Fggerganov\u002Fllama.cpp). 
See the Chatbot example [here](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl-demo\u002Fblob\u002Fmaster\u002Fhuggingface\u002Fnlp\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fexamples\u002FChatbot.java) to learn more.\r\n* Manual Engine Initialization. You can configure DJL to not load any engines at startup, and query\u002Fregister engines programmatically at runtime\r\n* Engine Updates:\r\n  * PyTorch 2.1.1\r\n  * Huggingface Tokenizers 0.15.0\r\n  * OnnxRuntime 1.16.3\r\n  * XGBoost 2.0.3\r\n\r\n## Enhancement\r\n* Add erf and atan2 by @TalGrbr in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2842\r\n* Add FFT2 and FFT2 inverse by @TalGrbr in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2845\r\n* [tokenizer] Update import script for huggingface_hub api change by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2850\r\n* [tokenizer] Not returns overflow tokens by default by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2857\r\n* [pytorch] Updates PyTorch engine to 2.1.1 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2864\r\n* Adds Device.getDevices() for all Device by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2820\r\n* Creates DJL manual engine initialization by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2885\r\n* [pytorch] Allows to load libstdc++.so.6 from different location by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2929\r\n* Add Evaluator support to update multiple accumulators by @petebankhead in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2894\r\n* Adds llama.cpp engine by @bryanktliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2904\r\n* Yolov8 Translator optimization by 
@gevant in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2908\r\n* [pytorch] Adds Yolov8n model to pytorch model zoo. by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2910\r\n* [onnx] Adds yolov8n to model zoo by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2909\r\n* [llama.cpp] Adds unit-test and standardize input parameters by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2905\r\n* [llama.cpp] Adds llama.cpp huggingface model zoo by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2911\r\n* [XGBoost] Updates XGBoost to 2.0.3 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2915\r\n* [pytorch] Upgrade pytorch android to 2.1.1 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2914\r\n* add awscurl release by @lanking520 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2917\r\n* [awscurl] change build to jar by @lanking520 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2918\r\n* [bom] Adds llama engine to BOM by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2916\r\n* [api] Adds ModelZooResolver interface by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2922\r\n* [api] Use fork java process to avoid jvm consume GPU memory by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2882\r\n* [onnxruntime] Updates OnnxRuntime to 1.16.3 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2888\r\n* Tokenizers: Updated huggingface_models.py to support Safetensors models as well as pytorch by @dameikle in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2880\r\n* 
[tokenizer] Uses fp32 for TextembeddingTranslator clip() by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2881\r\n* [tokenizer] Updates huggingface tokenizer to 0.15.0 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2867\r\n\r\n## Bug Fixes\r\n* [tokenizer] Fixes tokenizer bug by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2843\r\n* Fixes archiveBaseName in native builds by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2859\r\n* [pytorch] Ensure shared library loading order for aarch64 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2892\r\n* [api] Handles both JNA conflict and missing case by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2896\r\n* Minor fixes to improve Apple Silicon MPS support by @petebankhead in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2873\r\n* [tokenizer] Handles import huggingface model zoo exception case by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2872\r\n* [api] Update offline property name to avoid conflict with other app. 
by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2877\r\n* [tensorflow] Revert InstanceHolder for TensorFlow engine by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2884\r\n* [pytorch] Revert InstanceHolder for PyTorch engine by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2876\r\n* [pytorch] Fixes windows load nvfuser_codegen bug by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2868\r\n\r\n\r\n## Documentation\r\n* [docs] Update serving configuration nav by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2853\r\n* Updates DJL version to 0.25.0 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2860\r\n* Bump up DJL version to 0.26.0 by","2024-01-16T19:09:03",{"id":250,"version":251,"summary_zh":252,"released_at":253},99197,"v0.25.0","## Key Changes\r\n* Engine Upgrades\r\n  * [XGB] support for .xgb file extension https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2810\r\n  * [Tokenizers] Upgrade tokenizers to 1.14.1 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2818\r\n  * [XGB] Updates XGBoost to 2.0.1 https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2833\r\n* Early Stopping support for Training by @jagodevreede https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2806\r\n\r\n## Enhancement\r\n* [tokenizer] Allows import non-english model by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2797\r\n* [api] Allows cancel Input by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2805\r\n* [huggingface] Adds CrossEncoderTranslator by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2817\r\n* Creates MultiDevice by 
@zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2819\r\n* [api] Refactor PublisherBytesSupplier.java by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2831\r\n* [api] Replace double-check singleton with lazy initialization by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2826\r\n\r\n## Bug fixes\r\n* [api] Fixed NDList decode numpy file bug by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2804\r\n\r\n## Documentation and Examples\r\n* Updates doc versions to 0.24.0 by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2829\r\n* [docs] Fixes markdown headers by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2812\r\n* Bump up DJL version to 0.25.0 by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2809\r\n* Update README with release update by @zachgk in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2823\r\n\r\n## CI\r\n* [FT Deps] allow to just build for 1 flow by @lanking520 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2798\r\n* [ci] Fixes out of diskspace issue by @frankfliu in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2808\r\n* Add Triton gpu flag build on by @lanking520 in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2815\r\n\r\n## New Contributors\r\n* @jagodevreede made their first contribution in https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2806\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fcompare\u002Fv0.24.0...v0.25.0","2023-12-09T00:13:14",{"id":255,"version":256,"summary_zh":257,"released_at":258},99198,"v0.23.0","## Key Features\r\n\r\n* Upgrades for engines\r\n  * Upgrades PyTorch engine to 
2.0.1\r\n  * Upgrades javacpp version to 1.5.9 ([#2636](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2636))\r\n  * Upgrades HuggingFace tokenizer to 0.13.3 ([#2697](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2697))\r\n  * Upgrades OnnxRuntime version to 1.15.0 and other dependencies version ([#2658](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2658))\r\n\r\n* Enhancements for engines and API\r\n  * Adds XGBoost aarch64 support ([#2659](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2659))\r\n  * Adds fastText macOS M1 supports ([#2639](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2639))\r\n  * Creates asynchronous predictStreaming ([#2615](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2615))\r\n\r\n* Introduces text-generation search algorithm\r\n  * Implements text-generation search algorithm ([#2637](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2637))\r\n  * Enhancement features for LMSearch ([#2642](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2642))\r\n \r\n## Enhancement\r\n* DJL API improvements:\r\n  * Adds uint16, uint32, uint64, int16, bf16 data type ([#2570](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2570))\r\n  * Adds NDArray topK operator for PyTorch ([#2634](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2634))\r\n  * Adds support for unsigned datatype ([#2574](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2574))\r\n  * Allows subclass access member variable of Predictor ([#2582](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2582))\r\n  * Makes PredictorContext constructor public ([#2586](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2586))\r\n  * Refactor ChunkedBytesSupplier to avoid 
unnecessary conversion ([#2587](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2587))\r\n  * Move compileJava() into ClassLoaderUtils ([#2600](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2600))\r\n  * Enable boolean input on ort ([#2644](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2644))\r\n  * Adds more logs for platform detection ([#2646](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2646))\r\n  * Improves DJL URL error message ([#2678](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2678))\r\n  * Avoid exception if SecurityManager is applied ([#2665](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2665))\r\n  * Masks sensitive env vars in debug print out ([#2657](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2657))\r\n  * open isImage() method to package children for reuse-enabling custom datasets ([#2662](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2662))\r\n  * Migrate google analytics ([#2654](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2654))\r\n \r\n* PyTorch engine improvements\r\n  * Load dependencies with specific order ([#2599](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2599))\r\n  * Improves IValue tuple of tuple support ([#2651](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2651))\r\n  * Add basic median support ([#2701](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2701))\r\n\r\n* Spark extension enhancements\r\n  * Support requirements.txt in model tar file ([#2528](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2528))\r\n  * Upgrade dependency version in Dockerfile ([#2569](https:\u002F\u002Fgithub.com\u002Fdeepjavalibrary\u002Fdjl\u002Fpull\u002F2569))\r\n  * Use batch predict in spark 
([#2545](https://github.com/deepjavalibrary/djl/pull/2545))
  * Changes implicit conversions to explicit ([#2595](https://github.com/deepjavalibrary/djl/pull/2595))

* Huggingface tokenizer enhancements
  * Allows creating BPE huggingface tokenizers ([#2550](https://github.com/deepjavalibrary/djl/pull/2550))

* Tensorflow engine enhancements
  * Reloads javacpp properties ([#2668](https://github.com/deepjavalibrary/djl/pull/2668))

## Breaking change

## Bug fixes

  * Avoids exception for CUDA versions lower than 10.x ([#2583](https://github.com/deepjavalibrary/djl/pull/2583))
  * Reverts "[bom] Simplify BOM build script (#2438)" ([#2598](https://github.com/deepjavalibrary/djl/pull/2598))
  * CI fails looking for v3, reverting to v2 ([#2604](https://github.com/deepjavalibrary/djl/pull/2604))
  * Fixes the dependencies issue ([#2609](https://github.com/deepjavalibrary/djl/pull/2609))
  * Fixes the usage of the repeat function for embedding ([#2590](https://github.com/deepjavalibrary/djl/pull/2590))
  * Adds missing djl-zero to bom ([#2625](https://github.com/deepjavalibrary/djl/pull/2625))
  * Fixes tabnet predictor ([#2643](https://github.com/deepjavalibrary/djl/pull/2643))
  * Fixes error message in X.dot(w) ([#2688](https://github.com/deepjavalibrary/djl/pull/2688))
  * Fixes liquid parsing issues in pytorch ndarray cheatsheet ([#2690](https://github.com/deepjavalibrary/djl/pull/2690))
  * Fixes getIOU bug ([#2674](https://github.com/deepjavalibrary/djl/pull/2674))
  * Fixes setup formatting ([#2653](https://github.com/deepjavalibrary/djl/pull/2653))
  * Fixes broken link ([#2622](https://github.com/deepjavalibrary/djl/pull/2622))
  * Fixes LocalRepository detection ([#2593](https://github.com/deepjavalibrary/djl/pull/2593))
  * Fixes jupyter notebook links ([#2704](h

_Released: 2023-07-13_

# v0.22.1

## Key Features

* Upgrades and enhancements for Engines
    * Upgrades PyTorch to 1.13.1 ([#2245](https://github.com/deepjavalibrary/djl/pull/2245))
    * Upgrades TensorFlow engine to 2.10.1 ([#2440](https://github.com/deepjavalibrary/djl/pull/2440))
    * Upgrades XGBoost to 1.7.5 ([#2522](https://github.com/deepjavalibrary/djl/pull/2522))
    * DJLServing release [0.22.1](https://github.com/deepjavalibrary/djl-serving/releases/tag/v0.22.1)

## Enhancement

* Introduces several enhancements for the HuggingFace tokenizer:
    * Allows the tokenizer native library to load from a different classloader ([#2465](https://github.com/deepjavalibrary/djl/pull/2465))
    * Makes Huggingface model zoo lazy load ([#2469](https://github.com/deepjavalibrary/djl/pull/2469))
    * Makes Huggingface tokenizers translator factory serializable ([#2442](https://github.com/deepjavalibrary/djl/pull/2442))
* Introduces several enhancements for the Spark extension:
    * Adds audio predictors ([#2466](https://github.com/deepjavalibrary/djl/pull/2466))
    * Adds more image predictors and changes some APIs ([#2456](https://github.com/deepjavalibrary/djl/pull/2456))
    * Adds more text predictors ([#2443](https://github.com/deepjavalibrary/djl/pull/2443))
    * Adds np_util ([#2419](https://github.com/deepjavalibrary/djl/pull/2419))
    * Adds pyspark TextEmbedder and updates ImageClassifier ([#2414](https://github.com/deepjavalibrary/djl/pull/2414))
    * Adds text generation in pyspark ([#2477](https://github.com/deepjavalibrary/djl/pull/2477))
    * Adds text2text generation ([#2506](https://github.com/deepjavalibrary/djl/pull/2506))
    * Adds whisper python code ([#2513](https://github.com/deepjavalibrary/djl/pull/2513))
    * Upgrades spark version to 3.3.2 ([#2523](https://github.com/deepjavalibrary/djl/pull/2523))
* DJL API improvements:
    * Adds support for unique, bmm, xlogy ([#2415](https://github.com/deepjavalibrary/djl/pull/2415))
    * Fixes NDArray.toByteArray() bug ([#2436](https://github.com/deepjavalibrary/djl/pull/2436))
    * Adds NDArray.copyTo() support for NDArrayAdapter ([#2437](https://github.com/deepjavalibrary/djl/pull/2437))
    * Improves Classifications.toString() print out ([#2439](https://github.com/deepjavalibrary/djl/pull/2439))
    * Makes Batchifier serializable ([#2441](https://github.com/deepjavalibrary/djl/pull/2441))
    * Loads inputShapes in the loadMetadata method of the Linear block ([#2448](https://github.com/deepjavalibrary/djl/pull/2448))
    * Adds chunked output support ([#2453](https://github.com/deepjavalibrary/djl/pull/2453))
    * Makes audio and cv translator factory serializable ([#2455](https://github.com/deepjavalibrary/djl/pull/2455))
    * Adds NamedEntity.toString() function ([#2468](https://github.com/deepjavalibrary/djl/pull/2468))
    * Streaming Predict and streamable BytesSupplier ([#2470](https://github.com/deepjavalibrary/djl/pull/2470))
    * Mitigates ZipInputStream CVE ([#2473](https://github.com/deepjavalibrary/djl/pull/2473))
    * Adds getProperties() to Model interface ([#2476](https://github.com/deepjavalibrary/djl/pull/2476))
    * Adds non-blocking poll() for BytesSupplier ([#2478](https://github.com/deepjavalibrary/djl/pull/2478))
    * Makes PassthroughNDManager aware of engine and device ([#2484](https://github.com/deepjavalibrary/djl/pull/2484))
    * Fixes telemetry opt out ([#2490](https://github.com/deepjavalibrary/djl/pull/2490))
    * Uses SHA-256 to avoid security warning ([#2495](https://github.com/deepjavalibrary/djl/pull/2495))
    * Moves NeuronUtils to api package ([#2496](https://github.com/deepjavalibrary/djl/pull/2496))
    * Adds encode and decode to Input and Output ([#2502](https://github.com/deepjavalibrary/djl/pull/2502))
    * Fails model loading if the specified translator is not found ([#2515](https://github.com/deepjavalibrary/djl/pull/2515))
    * Adds a way to check if streaming is supported ([#2518](https://github.com/deepjavalibrary/djl/pull/2518))
    * Fixes platform detection for different CUDA versions ([#2527](https://github.com/deepjavalibrary/djl/pull/2527))
    * Fixes neuron core detection in docker container ([#2536](https://github.com/deepjavalibrary/djl/pull/2536))
* PyTorch engine improvements:
    * Upgrades PyTorch engine to 2.0.0 ([#2525](https://github.com/deepjavalibrary/djl/pull/2525))
    * Implements unique operator for PyTorch engine ([#2417](https://github.com/deepjavalibrary/djl/pull/2417))
    * Adds yolov5s to pytorch model zoo ([#2433](https://github.com/deepjavalibrary/djl/pull/2433))
    * Respects PYTORCH_FLAVOR override to download libtorch ([#2486](https://github.com/deepjavalibrary/djl/pull/2486))
    * Prints log if graph optimizer is enabled ([#2501](https://github.com/deepjavalibrary/djl/pull/2501))
* OnnxRuntime engine improvements:
    * Adds support for OnnxRuntime Profiler ([#2472](https://github.com/deepjavalibrary/djl/pull/2472))
* MXNet engine improvements:

_Released: 2023-04-27_

# v0.21.0

## Key Features

* Upgrades and enhancements for Engines
    * Upgrades PyTorch to 1.13.1 ([#2245](https://github.com/deepjavalibrary/djl/pull/2245))
    * Upgrades ONNXRuntime to 1.14.0 ([#2393](https://github.com/deepjavalibrary/djl/pull/2393))
    * Upgrades HuggingFace tokenizer version to 0.13.2 ([#2369](https://github.com/deepjavalibrary/djl/pull/2369))
    * Upgrades XGBoost to 1.7.3 ([#2371](https://github.com/deepjavalibrary/djl/pull/2371))
    * Removes Neo-DLR engine from DJL ([#2373](https://github.com/deepjavalibrary/djl/pull/2373))
* Introduces several improvements for extensions:
    * Adds batch support for the huggingface tokenizer
    * Adds API improvements for Spark extensions
    * Adds a few image processing methods in the OpenCV extension ([#2320](https://github.com/deepjavalibrary/djl/pull/2320))
    * Adds stft and fft Fourier transforms for the audio extension ([#2259](https://github.com/deepjavalibrary/djl/pull/2259))
* Implements NDScope to automatically close NDArrays in the scope ([#2321](https://github.com/deepjavalibrary/djl/pull/2321))
* Allows MXNet to run on Ampere GPUs ([#2313](https://github.com/deepjavalibrary/djl/pull/2313))
* [DJLServing release](https://github.com/deepjavalibrary/djl-serving/releases/tag/v0.21.0)
    * Adds faster transformer support ([#424](https://github.com/deepjavalibrary/djl-serving/pull/424))
    * Adds Deepspeed ahead-of-time partition script in DLC ([#466](https://github.com/deepjavalibrary/djl-serving/pull/466))
    * Adds SageMaker MME support ([#479](https://github.com/deepjavalibrary/djl-serving/pull/479))
    * Adds support for stable-diffusion-2-1-base model ([#484](https://github.com/deepjavalibrary/djl-serving/pull/484))
    * Adds support for stable diffusion depth model ([#488](https://github.com/deepjavalibrary/djl-serving/pull/488))
    * Adds out-of-memory protection for model loading ([#496](https://github.com/deepjavalibrary/djl-serving/pull/496))
    * Makes load_on_devices a per-model setting ([#493](https://github.com/deepjavalibrary/djl-serving/pull/493))
    * Adds several per-model settings
    * Improves management console model loading and inference UI ([#431](https://github.com/deepjavalibrary/djl-serving/pull/431), [#432](https://github.com/deepjavalibrary/djl-serving/pull/432))
    * Updates deepspeed to 0.8.0 ([#465](https://github.com/deepjavalibrary/djl-serving/pull/465))

## Enhancement

* Introduces several enhancements for the timeseries extension:
    * Adds probability distribution support for timeseries ([#2025](https://github.com/deepjavalibrary/djl/pull/2025))
    * Adds time series dataset support for the timeseries package ([#2026](https://github.com/deepjavalibrary/djl/pull/2026))
    * Adds some basic blocks and a deepAR model ([#2027](https://github.com/deepjavalibrary/djl/pull/2027))
    * Enables pytorch deepar model inference in the time series package ([#2149](https://github.com/deepjavalibrary/djl/pull/2149))
* Introduces several enhancements for the HuggingFace tokenizer:
    * Adds batch encoding support ([#2342](https://github.com/deepjavalibrary/djl/pull/2342), [#2343](https://github.com/deepjavalibrary/djl/pull/2343), [#2337](https://github.com/deepjavalibrary/djl/pull/2337), [#2338](https://github.com/deepjavalibrary/djl/pull/2338))
    * Adds batchEncode for text pair ([#2339](https://github.com/deepjavalibrary/djl/pull/2339))
    * Adds mean_sqrt_len and weightedmean pooling for TextEmbedding ([#2272](https://github.com/deepjavalibrary/djl/pull/2272))
    * Adds more pooling modes for TextEmbedding ([#2261](https://github.com/deepjavalibrary/djl/pull/2261))
    * Allows Huggingface model zoo to list models in offline mode ([#2322](https://github.com/deepjavalibrary/djl/pull/2322))
    * Updates TextEmbedding pooling model name ([#2314](https://github.com/deepjavalibrary/djl/pull/2314))
* Introduces a few new examples:
    * Adds clip model to examples ([#2239](https://github.com/deepjavalibrary/djl/pull/2239))
    * Adds openai whisper model to examples ([#2293](https://github.com/deepjavalibrary/djl/pull/2293))
    * Adds stable diffusion examples ([#2246](https://github.com/deepjavalibrary/djl/pull/2246))
* Introduces several enhancements for the Spark extension:
    * Adds pyspark support ([#2301](https://github.com/deepjavalibrary/djl/pull/2301))
    * Adds spark extension docker image ([#2243](https://github.com/deepjavalibrary/djl/pull/2243))
    * Adds Numpy binary translator ([#2399](https://github.com/deepjavalibrary/djl/pull/2399))
    * Adds huggingface tokenizer support for Spark ([#2311](https://github.com/deepjavalibrary/djl/pull/2311))
    * Refactors Spark extension API ([#2370](https://github.com/deepjavalibrary/djl/pull/2370))
* DJL API improvements:
    * Adds limit and callback for Metrics API ([#2362](https://github.com/deepjavalibrary/djl/pull/2362))
    * Adds newBaseManager(String engineName) api ([#2275](https://github.com/deepjavalibrary/djl/pull/2275))
    * Falls back t

_Released: 2023-02-25_

# v0.20.0

## Key Features

* Upgrades and enhancements for Engines
    * Upgrades PyTorch to 1.13.0 ([#2157](https://github.com/deepjavalibrary/djl/pull/2157))
    * Adds support for Apple's Metal Performance Shaders (MPS) in PyTorch ([#2037](https://github.com/deepjavalibrary/djl/pull/2037))
    * Adds a system property to config GraphExecutorOptimize ([#2156](https://github.com/deepjavalibrary/djl/pull/2156))
    * Upgrades ONNXRuntime to 1.13.1 ([#2115](https://github.com/deepjavalibrary/djl/pull/2115))
    * Upgrades Paddle to 2.3.2 ([#2116](https://github.com/deepjavalibrary/djl/pull/2116))
    * Upgrades TensorFlow to 2.7.4 ([#2121](https://github.com/deepjavalibrary/djl/pull/2121))
    * Upgrades HuggingFace tokenizer version to 0.13.1 ([#2127](https://github.com/deepjavalibrary/djl/pull/2127))
    * Upgrades XGBoost to 1.7.1 ([#2143](https://github.com/deepjavalibrary/djl/pull/2143))
* DJLServing
    * Adds large model inference support with MPI mode ([#291](https://github.com/deepjavalibrary/djl-serving/pull/291))
    * Adds built-in DeepSpeed handler ([#292](https://github.com/deepjavalibrary/djl-serving/pull/292))
    * Publishes PaddlePaddle docker image ([#342](https://github.com/deepjavalibrary/djl-serving/pull/342))
* Adds TabNet training ([#2057](https://github.com/deepjavalibrary/djl/pull/2057))
* Publishes DJL Zero ([#2091](https://github.com/deepjavalibrary/djl/pull/2091))
* Adds Spark extension ([#2162](https://github.com/deepjavalibrary/djl/pull/2162))
* Introduces several improvements for the timeseries extension
* Adds ImageFeatureExtractor example and resnet base model to model zoo

## Enhancement

* Introduces several enhancements for the timeseries extension:
    * Adds probability distribution support for timeseries ([#2025](https://github.com/deepjavalibrary/djl/pull/2025))
    * Adds time series dataset support for the timeseries package ([#2026](https://github.com/deepjavalibrary/djl/pull/2026))
    * Updates M5Forecast dataset and its unittest ([#2105](https://github.com/deepjavalibrary/djl/pull/2105))
    * Adds some basic blocks and a deepAR model ([#2027](https://github.com/deepjavalibrary/djl/pull/2027))
    * Enables pytorch deepar model inference in the time series package ([#2149](https://github.com/deepjavalibrary/djl/pull/2149))
* Introduces several enhancements for the HuggingFace tokenizer:
    * Enhances the huggingface text embedding translator to support max length padding ([#2049](https://github.com/deepjavalibrary/djl/pull/2049))
    * Adds cli options to only validate a jit model on CPU ([#2052](https://github.com/deepjavalibrary/djl/pull/2052))
    * Adds batch decoding methods for tokenizers ([#2154](https://github.com/deepjavalibrary/djl/pull/2154))
* Adds new models to DJL model zoo:
    * Adds TabNet model for tabular datasets in the model zoo ([#2036](https://github.com/deepjavalibrary/djl/pull/2036))
    * Adds yolo5s to OnnxRuntime model zoo ([#2046](https://github.com/deepjavalibrary/djl/pull/2046))
    * Object Detection ([#1930](https://github.com/deepjavalibrary/djl/pull/1930))
    * Adds image classification resnet18 base model to model zoo ([#2079](https://github.com/deepjavalibrary/djl/pull/2079))
* DJL API improvements:
    * Adds Sparsemax block ([#2028](https://github.com/deepjavalibrary/djl/pull/2028))
    * Updates the SemanticSegmentationTranslator ([#2032](https://github.com/deepjavalibrary/djl/pull/2032))
    * Creates Ensembleable ([#2043](https://github.com/deepjavalibrary/djl/pull/2043))
    * Handles the error when a child block is not initialized ([#2045](https://github.com/deepjavalibrary/djl/pull/2045))
    * Adds draw mask for BitMapWrapper ([#2071](https://github.com/deepjavalibrary/djl/pull/2071))
    * Allows showing NDArray content in the debugger ([#2078](https://github.com/deepjavalibrary/djl/pull/2078))
    * Renames transparency to opacity in CategoryMask ([#2081](https://github.com/deepjavalibrary/djl/pull/2081))
    * Allows showing NDArray content in the debugger 2 ([#2080](https://github.com/deepjavalibrary/djl/pull/2080))
    * Transfer learning with pytorch engine on fresh fruit dataset ([#2070](https://github.com/deepjavalibrary/djl/pull/2070))
    * Ensures GradientCollector can clear gradients ([#2101](https://github.com/deepjavalibrary/djl/pull/2101))
    * Handles conflicting JNA package issue ([#2118](https://github.com/deepjavalibrary/djl/pull/2118))
    * Adds Multiplication block ([#2110](https://github.com/deepjavalibrary/djl/pull/2110))
    * Allows non-ServingTranslatorFactory for DJLServing ([#2148](https://github.com/deepjavalibrary/djl/pull/2148))
    * Adds cumprod operator ([#2152](https://github.com/deepjavalibrary/djl/pull/2152))
    * Adds Randperm on PyTorch and MXNet ([#2084](https://github.com/deepjavalibrary/djl/pull/2084))
    * Creates translator options ([#2145](https://github.com/deepjavalibrary/djl/pull/2145))
* CI improvements:
    * Adds Mac M1 build ([#2039](https://github.com/deepjavalibrary/djl/pull/203

_Released: 2022-12-01_
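The v0.22.1 notes above mention mitigating a ZipInputStream CVE (#2473). The standard defense against that class of bug (zip-slip) is to normalize each entry's resolved path and reject anything that escapes the extraction directory. A minimal stdlib sketch of that check; the class and method names here are illustrative, not DJL's internals:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of a zip-slip guard: a malicious archive entry like "../../etc/passwd"
// must not be allowed to write outside the extraction directory.
public class ZipGuard {

    /** Returns true if the zip entry name stays inside the target directory. */
    public static boolean isSafe(Path targetDir, String entryName) {
        // Resolve the entry against the target, then normalize away ".." segments.
        Path resolved = targetDir.resolve(entryName).normalize();
        return resolved.startsWith(targetDir.normalize());
    }

    public static void main(String[] args) {
        Path out = Paths.get("/tmp/extract");
        System.out.println(isSafe(out, "model/weights.bin")); // true: stays inside
        System.out.println(isSafe(out, "../../etc/passwd"));  // false: escapes
    }
}
```

An extractor loop would call this check on every `ZipEntry.getName()` before writing, and abort on the first unsafe entry.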
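v0.22.1 also switches hashing to SHA-256 to avoid security warnings (#2495). The JDK-only pattern for that swap is `MessageDigest.getInstance("SHA-256")`; a self-contained sketch with an illustrative helper name:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal SHA-256 hex digest using only the JDK, the kind of replacement
// (e.g. away from MD5/SHA-1) that silences static security scanners.
public class Sha256Example {

    /** Hex-encodes the SHA-256 digest of the input string. */
    public static String sha256Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // Every JDK is required to ship SHA-256, so this should not happen.
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sha256Hex("djl")); // 64 hex characters (256 bits)
    }
}
```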
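The chunked output support (#2453) and non-blocking poll() for BytesSupplier (#2478) in v0.22.1 follow a familiar producer/consumer shape: the inference thread appends byte chunks while the caller polls without blocking. A hedged stdlib sketch of that idea, assuming a queue-backed supplier; this is not DJL's BytesSupplier API:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a chunked-output supplier: a producer appends byte chunks and a
// consumer polls non-blockingly; poll() returns null when no chunk is ready.
public class ChunkedSupplier {
    private final BlockingQueue<byte[]> chunks = new LinkedBlockingQueue<>();

    /** Producer side: append the next chunk of the streamed response. */
    public void append(byte[] chunk) {
        chunks.add(chunk);
    }

    /** Consumer side: non-blocking poll; returns null if nothing is available yet. */
    public byte[] poll() {
        return chunks.poll();
    }

    public static void main(String[] args) {
        ChunkedSupplier supplier = new ChunkedSupplier();
        supplier.append("hello ".getBytes(StandardCharsets.UTF_8));
        supplier.append("world".getBytes(StandardCharsets.UTF_8));

        StringBuilder out = new StringBuilder();
        byte[] chunk;
        while ((chunk = supplier.poll()) != null) {
            out.append(new String(chunk, StandardCharsets.UTF_8));
        }
        System.out.println(out); // prints "hello world"
    }
}
```

A real streaming consumer would loop until an end-of-stream signal rather than until the queue is momentarily empty; the non-blocking poll is what lets it interleave other work between chunks.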