[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jolibrain--deepdetect":3,"tool-jolibrain--deepdetect":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":68,"readme_en":69,"readme_zh":70,"quickstart_zh":71,"use_case_zh":72,"hero_image_url":73,"owner_login":74,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":80,"owner_url":81,"languages":82,"stars":109,"forks":110,"last_commit_at":111,"license":112,"difficulty_score":10,"env_os":113,"env_gpu":114,"env_ram":115,"env_deps":116,"category_tags":130,"github_topics":131,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":150,"updated_at":151,"faqs":152,"releases":182},8822,"jolibrain\u002Fdeepdetect","deepdetect","Deep Learning API and Server in C++14 support for PyTorch,TensorRT, Dlib, NCNN, Tensorflow, XGBoost and TSNE","DeepDetect 是一款开源的深度学习服务器与 API，旨在让尖端机器学习技术轻松集成到现有应用中。它基于 C++14 构建，核心解决了开发者在部署复杂模型时面临的框架依赖繁琐、环境配置困难以及从训练到嵌入式设备部署流程割裂等痛点。\n\n无论是进行图像分类、目标检测、文本分析，还是处理时间序列数据，DeepDetect 都能提供统一的接口支持监督与非监督学习。它特别适合后端工程师、AI 开发者以及需要将算法快速落地的研究团队使用。用户无需深入底层代码，即可通过简洁的 API 调用多种主流框架的能力。\n\n其独特的技术亮点在于强大的兼容性与自动化转换能力。DeepDetect 不仅原生支持 PyTorch、TensorFlow、Caffe、Dlib 
等多种深度学习库，还集成了 XGBoost 用于梯度提升决策树，以及 T-SNE、FAISS 用于聚类和相似性搜索。更值得一提的是，它能自动将训练好的模型转换为针对 NVIDIA GPU 优化的 TensorRT 格式或适用于 ARM  CPU 的 NCNN 格式，极大简化了模型在边缘设备上的部署流程。配合丰富的 Docker 镜像支持，De","DeepDetect 是一款开源的深度学习服务器与 API，旨在让尖端机器学习技术轻松集成到现有应用中。它基于 C++14 构建，核心解决了开发者在部署复杂模型时面临的框架依赖繁琐、环境配置困难以及从训练到嵌入式设备部署流程割裂等痛点。\n\n无论是进行图像分类、目标检测、文本分析，还是处理时间序列数据，DeepDetect 都能提供统一的接口支持监督与非监督学习。它特别适合后端工程师、AI 开发者以及需要将算法快速落地的研究团队使用。用户无需深入底层代码，即可通过简洁的 API 调用多种主流框架的能力。\n\n其独特的技术亮点在于强大的兼容性与自动化转换能力。DeepDetect 不仅原生支持 PyTorch、TensorFlow、Caffe、Dlib 等多种深度学习库，还集成了 XGBoost 用于梯度提升决策树，以及 T-SNE、FAISS 用于聚类和相似性搜索。更值得一提的是，它能自动将训练好的模型转换为针对 NVIDIA GPU 优化的 TensorRT 格式或适用于 ARM  CPU 的 NCNN 格式，极大简化了模型在边缘设备上的部署流程。配合丰富的 Docker 镜像支持，DeepDetect 让机器学习服务的搭建与扩展变得高效而灵活。","\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Fwww.deepdetect.com\u002Fimg\u002Ficons\u002Fmenu\u002Fsidebar\u002Fdeepdetect.svg\" alt=\"DeepDetect Logo\" width=\"45%\" \u002F>\u003C\u002Fp>\n\n\u003Ch1 align=\"center\"> Open Source Deep Learning Server & API\u003C\u002Fh1>\n\n[![Join the chat at https:\u002F\u002Fgitter.im\u002Fbeniz\u002Fdeepdetect](https:\u002F\u002Fbadges.gitter.im\u002FJoin%20Chat.svg)](https:\u002F\u002Fgitter.im\u002Fbeniz\u002Fdeepdetect?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n![GitHub release (latest SemVer)](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fjolibrain\u002Fdeepdetect?color=success&sort=semver)\n![GitHub Release Date](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease-date\u002Fjolibrain\u002Fdeepdetect)\n![GitHub commits since latest release (by date)](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommits-since\u002Fjolibrain\u002Fdeepdetect\u002Flatest\u002Fmaster)\n\n\nDeepDetect (https:\u002F\u002Fwww.deepdetect.com\u002F) is a machine learning API and server written in C++11. It makes state of the art machine learning easy to work with and integrate into existing applications. 
It has support for both training and inference, with automatic conversion to embedded platforms with TensorRT (NVidia GPU) and NCNN (ARM CPU).\n\nIt implements support for supervised and unsupervised deep learning of images, text, time series and other data, with focus on simplicity and ease of use, test and connection into existing applications. It supports classification, object detection, segmentation, regression, autoencoders, ...\n\nAnd it relies on external machine learning libraries through a very generic and flexible API. At the moment it has support for:\n\n- the deep learning libraries [Caffe](https:\u002F\u002Fgithub.com\u002FBVLC\u002Fcaffe), [Tensorflow](https:\u002F\u002Ftensorflow.org), [Caffe2](https:\u002F\u002Fcaffe2.ai\u002F), [Torch](https:\u002F\u002Fpytorch.org\u002F), [NCNN](https:\u002F\u002Fgithub.com\u002FTencent\u002Fncnn) [Tensorrt](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT) and [Dlib](http:\u002F\u002Fdlib.net\u002Fml.html)\n- distributed gradient boosting library [XGBoost](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fxgboost)\n- clustering with [T-SNE](https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002FMulticore-TSNE)\n- similarity search with [Annoy](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fannoy\u002F) and [FAISS](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss)\n\nPlease join the community on [Gitter](https:\u002F\u002Fgitter.im\u002Fbeniz\u002Fdeepdetect), where we help users get through with installation, API, neural nets and connection to external applications.\n\n---\n\n| Build type | STABLE | DEVEL |\n|------|--------|-------|\n| SOURCE | \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fjolibrain\u002Fdeepdetect?color=success&sort=semver\"> | \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommits-since\u002Fjolibrain\u002Fdeepdetect\u002Flatest\u002Fmaster\"> |\n\nAll DeepDetect Docker images available from 
https:\u002F\u002Fdocker.jolibrain.com\u002F.\n\n- To list all available images:\n```\ncurl -X GET https:\u002F\u002Fdocker.jolibrain.com\u002Fv2\u002F_catalog\n```\n\n- To list an image available tags, e.g. for the `deepdetect_cpu` image:\n```\ncurl -X GET https:\u002F\u002Fdocker.jolibrain.com\u002Fv2\u002Fdeepdetect_cpu\u002Ftags\u002Flist\n```\n\n---\n\n* [Main features](#main-features)\n* [Machine Learning functionalities per library](#machine-learning-functionalities-per-library)\n* [Installation](https:\u002F\u002Fwww.deepdetect.com\u002Fquickstart-server\u002F)\n  * [From docker](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdocs\u002Fdocker.md)\n  * [From source](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdocs\u002Fsource.md)\n  * From Amazon AMI: [GPU](https:\u002F\u002Faws.amazon.com\u002Fmarketplace\u002Fpp\u002FB01N4D483M) and [CPU](https:\u002F\u002Faws.amazon.com\u002Fmarketplace\u002Fpp\u002FB01N1RGWQZ)\n  * [Mimic Continuous Integration testing](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdocs\u002Fci.md)\n\n* [Models ready to use](#models)\n* Ecosystem\n  * [Platform presentation](https:\u002F\u002Fwww.deepdetect.com\u002Fplatform\u002F)\n  * [Platform installation with docker-compose](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdd_platform_docker)\n  * [Platform installation with helm (Kubernetes)](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fhelm_chart)\n  * [Tools and Clients](#tools-and-clients)\n* Documentation:\n  * [Introduction](https:\u002F\u002Fwww.deepdetect.com\u002Foverview\u002Fintroduction\u002F)\n  * [API Quickstart](https:\u002F\u002Fwww.deepdetect.com\u002Fserver\u002Fdocs\u002Fimagenet-classifier\u002F): setup an image classifier API service in a few minutes\n  * [API Tutorials](https:\u002F\u002Fwww.deepdetect.com\u002Fserver\u002Fdocs\u002Fserver_docs\u002F): training from text, data and 
images, setup of prediction services, and export to external software (e.g. ElasticSearch)\n  * [API Reference](https:\u002F\u002Fwww.deepdetect.com\u002Fapi\u002F)\n  * [Examples](https:\u002F\u002Fwww.deepdetect.com\u002Fserver\u002Fdocs\u002Fexamples\u002F): MLP for data, text, multi-target regression to CNN and GoogleNet, finetuning, etc...)\n  * [FAQ](https:\u002F\u002Fwww.deepdetect.com\u002Foverview\u002Ffaq\u002F)\n* Demos:\n  * [Image classification Web application](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fimgdetect) using HTML and javascript\n  * [Image similarity search](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fimgsearch) using python client\n  * [Image object detection](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fobjdetect) using python client\n  * [Image segmentation](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fsegmentation) using python client\n* [Performance tools and report](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdd_performances) done on NVidia Desktop and embedded GPUs, along with Raspberry Pi 3.\n* [References](#references)\n* [Authors](#authors)\n\n## Main features\n\n- high-level API for machine learning and deep learning\n- support for Caffe, Tensorflow, XGBoost, T-SNE, Caffe2, NCNN, TensorRT, Pytorch\n- classification, regression, autoencoders, object detection, segmentation, time-series\n- JSON communication format\n- remote Python and Javacript clients\n- dedicated server with support for asynchronous training calls\n- high performances, benefit from multicore CPU and GPU\n- built-in similarity search via neural embeddings\n- connector to handle large collections of images with on-the-fly data augmentation (e.g. 
rotations, mirroring)\n- connector to handle CSV files with preprocessing capabilities\n- connector to handle text files, sentences, and character-based models\n- connector to handle SVM file format for sparse data\n- range of built-in model assessment measures (e.g. F1, multiclass log loss, ...)\n- range of special losses (e.g Dice, contour, ...)\n- no database dependency and sync, all information and model parameters organized and available from the filesystem\n- flexible template output format to simplify connection to external applications\n- templates for the most useful neural architectures (e.g. Googlenet, Alexnet, ResNet, convnet, character-based convnet, mlp, logistic regression, SSD, DeepLab, PSPNet, U-Net, CRNN, ShuffleNet, SqueezeNet, MobileNet, RefineDet, VOVNet, ...)\n- support for sparse features and computations on both GPU and CPU\n- built-in similarity indexing and search of predicted features, images, objects and probability distributions\n- auto-generated documentation based on [Swagger](https:\u002F\u002Fswagger.io\u002F)\n\n\n## Machine Learning functionalities per library\n\n|                   | Caffe | Caffe2 | XGBoost | TensorRT | NCNN | Libtorch | Tensorflow | T\\-SNE | Dlib |\n|------------------:|:-----:|:------:|:-------:|:--------:|:----:|:--------:|:----------:|:------:|:----:|\n| **Serving**       |       |        |         |          |      |          |            |        |      |\n| Training \\(CPU\\)  | Y     | Y      | Y       | N\u002FA      | N\u002FA  | Y        | N          | Y      | N    |\n| Training \\(GPU\\)  | Y     | Y      | Y       | N\u002FA      | N\u002FA  | Y        | N          | Y      | N    |\n| Inference \\(CPU\\) | Y     | Y      | Y       | N        | Y    | Y        | Y          | N\u002FA    | Y    |\n| Inference \\(GPU\\) | Y     | Y      | Y       | Y        | N    | Y        | Y          | N\u002FA    | Y    |\n|                   |       |        |         |          |      |          |            
|        |      |\n| **Models**        |       |        |         |          |      |          |            |        |      |\n| Classification    | Y     | Y      | Y       | Y        | Y    | Y        | Y          | N\u002FA    | Y    |\n| Object Detection  | Y     | Y      | N       | Y        | Y    | N        | N          | N\u002FA    | Y    |\n| Segmentation      | Y     | N      | N       | N        | N    | N        | N          | N\u002FA    | N    |\n| Regression        | Y     | N      | Y       | N        | N    | Y        | N          | N\u002FA    | N    |\n| Autoencoder       | Y     | N      | N\u002FA     | N        | N    | N        | N          | N\u002FA    | N    |\n| NLP               | Y     | N      | Y       | N        | N    | Y        | N          | Y      | N    |\n| OCR \u002F Seq2Seq     | Y     | N      | N       | N        | Y    | N        | N          | N      | N    |\n| Time\\-Series      | Y     | N      | N       | N        | Y    | Y        | N          | N      | N    |\n|                   |       |        |         |          |      |          |            |        |      |\n| **Data**          |       |        |         |          |      |          |            |        |      |\n| CSV               | Y     | N      | Y       | N        |  N   | N        | N          | Y      | N    |\n| SVM               | Y     | N      | Y       | N        |  N   | N        | N          | N      | N    |\n| Text words        | Y     | N      | Y       | N        |  N   | N        | N          | N      | N    |\n| Text characters   | Y     | N      | N       | N        |  N   | N        | N          | Y      | N    |\n| Images            | Y     | Y      | N       | Y        |  Y   | Y        | Y          | Y      | Y    |\n| Time\\-Series      | Y     | N      | N       | N        |  Y   | N        | N          | N      | N    |\n\n## Tools and Clients\n\n* Python client:\n  * REST client: 
https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fclients\u002Fpython\n  * 'a la scikit' bindings: https:\u002F\u002Fgithub.com\u002FArdalanM\u002FpyDD\n* Javacript client: https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect-js\n* Java client: https:\u002F\u002Fgithub.com\u002Fkfadhel\u002Fdeepdetect-api-java\n* Early C# client: https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fpull\u002F98\n* Log DeepDetect training metrics via Tensorboard: https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdd_board\n\n## Models\n\n|                          | Caffe | Tensorflow | Source        | Top-1 Accuracy (ImageNet) |\n|--------------------------|-------|------------|---------------|---------------------------|\n| AlexNet                  | Y     | N          | BVLC          |          57.1%                 |\n| SqueezeNet               | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsqueezenet\u002Fsqueezenet_v1.1.caffemodel)     | N          | DeepScale              |       59.5%                    |\n| Inception v1 \u002F GoogleNet | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fggnet\u002Fbvlc_googlenet.caffemodel)     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v1.pb)          | BVLC \u002F Google |             67.9%              |\n| Inception v2             | N     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v2.pb)          | Google        |     72.2%                      |\n| Inception v3             | N     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v3.pb)          | Google        |         76.9%                  |\n| Inception v4             | N     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v4.pb)          | Google        |         80.2%                  |\n| ResNet 50                | 
[Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnet\u002FResNet-50-model.caffemodel)     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fresnet_v1_50\u002Fresnet_v1_50.pb)          | MSR           |      75.3%                     |\n| ResNet 101               | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnet\u002FResNet-101-model.caffemodel)     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fresnet_v1_101\u002Fresnet_v1_101.pb)          | MSR           |        76.4%                   |\n| ResNet 152               | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnet\u002FResNet-152-model.caffemodel)     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fresnet_v1_152\u002Fresnet_v1_152.pb)         | MSR           |               77%            |\n| Inception-ResNet-v2      | N     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_resnet_v2.pb)          | Google        |       79.79%                    |\n| VGG-16                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fvgg_16\u002FVGG_ILSVRC_16_layers.caffemodel)     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fvgg_16\u002Fvgg_16.pb)          | Oxford        |               70.5%            |\n| VGG-19                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fvgg_19\u002FVGG_ILSVRC_19_layers.caffemodel)     | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fvgg_19\u002Fvgg_19.pb)          | Oxford        |               71.3%            |\n| ResNext 50                | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnext\u002Fresnext_50)     | N          | https:\u002F\u002Fgithub.com\u002Fterrychenism\u002FResNeXt           |      76.9%                     |\n| ResNext 101                | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnext\u002Fresnext_101)     | N          | 
https:\u002F\u002Fgithub.com\u002Fterrychenism\u002FResNeXt           |      77.9%                     |\n| ResNext 152               | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnext\u002Fresnext_152)     | N          | https:\u002F\u002Fgithub.com\u002Fterrychenism\u002FResNeXt           |      78.7%                     |\n| DenseNet-121                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_121_32\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               74.9%            |\n| DenseNet-161                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_161_48\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               77.6%            |\n| DenseNet-169                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_169_32\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               76.1%            |\n| DenseNet-201                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_201_32\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               77.3%            |\n| SE-BN-Inception                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_bn_inception\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               76.38%            |\n| SE-ResNet-50                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnet_50\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               77.63%            |\n| SE-ResNet-101                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnet_101\u002F)     | N          | 
https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               78.25%            |\n| SE-ResNet-152                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnet_152\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               78.66%            |\n| SE-ResNext-50                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnext_50\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               79.03%            |\n| SE-ResNext-101                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnext_101\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               80.19%            |\n| SENet                   | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_net\u002F)     | N          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               81.32%            |\n| VOC0712 (object detection) | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fvoc0712_dd.tar.gz) | N | https:\u002F\u002Fgithub.com\u002Fweiliu89\u002Fcaffe\u002Ftree\u002Fssd | 71.2 mAP |\n| InceptionBN-21k | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Finception\u002Finception_bn_21k) | N | https:\u002F\u002Fgithub.com\u002Fpertusa\u002FInceptionBN-21K-for-Caffe | 41.9% |\n| Inception v3 5K | N | [Y](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fopenimages_inception_v3) | https:\u002F\u002Fgithub.com\u002Fopenimages\u002Fdataset |  |\n| [5-point Face Landmarking Model (face detection)](http:\u002F\u002Fdlib.net\u002Ffiles\u002Fmmod_human_face_detector.dat.bz2) | N | N | http:\u002F\u002Fblog.dlib.net\u002F2017\u002F09\u002Ffast-multiclass-object-detection-in.html |  |\n| [Front\u002FRear vehicle detection (object 
detection)](http:\u002F\u002Fdlib.net\u002Ffiles\u002Fmmod_front_and_rear_end_vehicle_detector.dat.bz2) | N | N | http:\u002F\u002Fblog.dlib.net\u002F2017\u002F09\u002Ffast-multiclass-object-detection-in.html |  |\n\nMore models:\n\n- List of free, even for commercial use, deep neural nets for image classification, and character-based convolutional nets for text classification: https:\u002F\u002Fwww.deepdetect.com\u002Fapplications\u002Flist_models\u002F\n\n\u003C!---\n#FIXME(sileht): it's a feature detail, should be moved somewhere in deepdetect.com\u002Fserver\u002Fdocs\u002F\n## Templates\n\nDeepDetect comes with a built-in system of neural network templates (Caffe backend only at the moment). This allows the creation of custom networks based on recognized architectures, for images, text and data, and with much simplicity.\n\nUsage:\n- specify `template` to use, from `mlp`, `convnet` and `resnet`\n- specify the architecture with the `layers` parameter:\n  - for `mlp`, e.g. `[300,100,10]`\n  - for `convnet`, e.g. `[\"1CR64\",\"1CR128\",\"2CR256\",\"1024\",\"512\"], where the main pattern is `xCRy` where `y` is the number of outputs (feature maps), `CR` stands for Convolution + Activation (with `relu` as default), and `x` specifies the number of chained `CR` blocks without pooling. Pooling is applied between all `xCRy`\n- for `resnets`:\n   - with images, e.g. `[\"Res50\"]` where the main pattern is `ResX` with X the depth of the Resnet\n   - with character-based models (text), use the `xCRy` pattern of convnets instead, with the main difference that `x` now specifies the number of chained `CR` blocks within a resnet block\n   - for Resnets applied to CSV or SVM (sparse data), use the `mlp` pattern. In this latter case, at the moment, the `resnet` is built with blocks made of two layers for each specified layer after the first one. 
Here is an example: `[300,100,10]` means that a first hidden layer of size `300` is applied followed by a `resnet` block made of two `100` fully connected layer, and another block of two `10` fully connected layers. This is subjected to future changes and more control.\n-->\n\n## References\n\n- DeepDetect (https:\u002F\u002Fwww.deepdetect.com\u002F)\n- Caffe (https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fcaffe)\n- XGBoost (https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fxgboost)\n- T-SNE (https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002FMulticore-TSNE)\n\n## Authors\nDeepDetect is designed, implemented and supported by [Jolibrain](https:\u002F\u002Fjolibrain.com\u002F) with the help of other contributors.\n","\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Fwww.deepdetect.com\u002Fimg\u002Ficons\u002Fmenu\u002Fsidebar\u002Fdeepdetect.svg\" alt=\"DeepDetect Logo\" width=\"45%\" \u002F>\u003C\u002Fp>\n\n\u003Ch1 align=\"center\"> 开源深度学习服务器与API\u003C\u002Fh1>\n\n[![加入Gitter聊天室 https:\u002F\u002Fgitter.im\u002Fbeniz\u002Fdeepdetect](https:\u002F\u002Fbadges.gitter.im\u002FJoin%20Chat.svg)](https:\u002F\u002Fgitter.im\u002Fbeniz\u002Fdeepdetect?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n![GitHub发布版本（最新SemVer）](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fjolibrain\u002Fdeepdetect?color=success&sort=semver)\n![GitHub发布日期](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease-date\u002Fjolibrain\u002Fdeepdetect)\n![自最新发布以来的GitHub提交数（按日期排序）](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommits-since\u002Fjolibrain\u002Fdeepdetect\u002Flatest\u002Fmaster)\n\n\nDeepDetect (https:\u002F\u002Fwww.deepdetect.com\u002F) 是一款用C++11编写的机器学习API和服务器。它使最先进的机器学习技术易于使用，并能轻松集成到现有应用中。该平台同时支持训练和推理，能够自动转换为搭载TensorRT（NVIDIA GPU）和NCNN（ARM CPU）的嵌入式平台。\n\nDeepDetect实现了对图像、文本、时间序列及其他数据的有监督和无监督深度学习支持，注重简洁性和易用性，便于测试及与现有应用程序的对接。它支持分类、目标检测、分割、回归、自编码器等任务。\n\n此外，DeepDetect通过一个高度通用且灵活的API依赖于外部机器学习库。目前支持以下库：\n\n- 
深度学习库：[Caffe](https:\u002F\u002Fgithub.com\u002FBVLC\u002Fcaffe)、[Tensorflow](https:\u002F\u002Ftensorflow.org)、[Caffe2](https:\u002F\u002Fcaffe2.ai\u002F)、[Torch](https:\u002F\u002Fpytorch.org\u002F)、[NCNN](https:\u002F\u002Fgithub.com\u002FTencent\u002Fncnn)、[Tensorrt](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT)以及[Dlib](http:\u002F\u002Fdlib.net\u002Fml.html)\n- 分布式梯度提升库：[XGBoost](https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fxgboost)\n- 聚类算法：[T-SNE](https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002FMulticore-TSNE)\n- 相似性搜索：[Annoy](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fannoy\u002F) 和 [FAISS](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss)\n\n欢迎加入[Gitter](https:\u002F\u002Fgitter.im\u002Fbeniz\u002Fdeepdetect)社区，在那里我们将帮助用户解决安装、API使用、神经网络搭建以及与外部应用集成等方面的问题。\n\n---\n\n| 构建类型 | 稳定版 | 开发版 |\n|------|--------|-------|\n| 源代码 | \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fjolibrain\u002Fdeepdetect?color=success&sort=semver\"> | \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommits-since\u002Fjolibrain\u002Fdeepdetect\u002Flatest\u002Fmaster\"> |\n\n所有DeepDetect Docker镜像均可从https:\u002F\u002Fdocker.jolibrain.com\u002F获取。\n\n- 列出所有可用镜像：\n```\ncurl -X GET https:\u002F\u002Fdocker.jolibrain.com\u002Fv2\u002F_catalog\n```\n\n- 列出特定镜像的标签，例如`deepdetect_cpu`镜像：\n```\ncurl -X GET https:\u002F\u002Fdocker.jolibrain.com\u002Fv2\u002Fdeepdetect_cpu\u002Ftags\u002Flist\n```\n\n---\n\n* [主要特性](#main-features)\n* [各机器学习库的功能](#machine-learning-functionalities-per-library)\n* [安装指南](https:\u002F\u002Fwww.deepdetect.com\u002Fquickstart-server\u002F)\n  * [Docker方式](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdocs\u002Fdocker.md)\n  * [源码方式](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdocs\u002Fsource.md)\n  * 亚马逊AMI：[GPU](https:\u002F\u002Faws.amazon.com\u002Fmarketplace\u002Fpp\u002FB01N4D483M) 和 
[CPU](https:\u002F\u002Faws.amazon.com\u002Fmarketplace\u002Fpp\u002FB01N1RGWQZ)\n  * [模拟持续集成测试](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdocs\u002Fci.md)\n\n* [预训练模型](#models)\n* 生态系统\n  * [平台介绍](https:\u002F\u002Fwww.deepdetect.com\u002Fplatform\u002F)\n  * [使用docker-compose部署平台](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdd_platform_docker)\n  * [使用Helm（Kubernetes）部署平台](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fhelm_chart)\n  * [工具与客户端](#tools-and-clients)\n* 文档：\n  * [简介](https:\u002F\u002Fwww.deepdetect.com\u002Foverview\u002Fintroduction\u002F)\n  * [API快速入门](https:\u002F\u002Fwww.deepdetect.com\u002Fserver\u002Fdocs\u002Fimagenet-classifier\u002F)：几分钟内即可搭建一个图像分类API服务\n  * [API教程](https:\u002F\u002Fwww.deepdetect.com\u002Fserver\u002Fdocs\u002Fserver_docs\u002F)：从文本、数据和图像中进行训练，设置预测服务，并导出至外部软件（如ElasticSearch）\n  * [API参考](https:\u002F\u002Fwww.deepdetect.com\u002Fapi\u002F)\n  * [示例](https:\u002F\u002Fwww.deepdetect.com\u002Fserver\u002Fdocs\u002Fexamples\u002F)：包括用于数据和文本的MLP、多目标回归到CNN和GoogleNet、微调等\n  * [常见问题解答](https:\u002F\u002Fwww.deepdetect.com\u002Foverview\u002Ffaq\u002F)\n* 演示：\n  * [图像分类Web应用](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fimgdetect)，采用HTML和JavaScript实现\n  * [图像相似性搜索](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fimgsearch)，使用Python客户端\n  * [图像目标检测](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fobjdetect)，使用Python客户端\n  * [图像分割](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdemo\u002Fsegmentation)，使用Python客户端\n* [性能工具与报告](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdd_performances)，分别在NVIDIA桌面级和嵌入式GPU上，以及Raspberry Pi 3上完成\n* [参考文献](#references)\n* [作者](#authors)\n\n## 主要特性\n\n- 高层次的机器学习和深度学习API\n- 支持Caffe、Tensorflow、XGBoost、T-SNE、Caffe2、NCNN、TensorRT、Pytorch\n- 
支持分类、回归、自编码器、目标检测、分割、时间序列分析等任务\n- 使用JSON通信格式\n- 提供远程Python和JavaScript客户端\n- 专用服务器支持异步训练请求\n- 性能优异，充分利用多核CPU和GPU资源\n- 内置基于神经网络嵌入的相似性搜索功能\n- 可处理大规模图像数据集，并支持实时数据增强（如旋转、翻转等）\n- 可处理CSV文件，并具备预处理能力\n- 可处理文本文件、句子及基于字符的模型\n- 可处理SVM格式的稀疏数据\n- 内置多种模型评估指标（如F1分数、多分类对数损失等）\n- 提供多种特殊损失函数（如Dice损失、轮廓损失等）\n- 不依赖数据库同步，所有信息和模型参数均组织在文件系统中并可直接访问\n- 灵活的模板输出格式，简化与外部应用的对接\n- 提供常用神经网络架构模板（如Googlenet、Alexnet、ResNet、卷积神经网络、基于字符的卷积网络、MLP、逻辑回归、SSD、DeepLab、PSPNet、U-Net、CRNN、ShuffleNet、SqueezeNet、MobileNet、RefineDet、VOVNet等）\n- 同时支持GPU和CPU上的稀疏特征和计算\n- 内置预测特征、图像、对象及概率分布的相似性索引与搜索功能\n- 基于[Swagger](https:\u002F\u002Fswagger.io\u002F)自动生成文档\n\n## 各库的机器学习功能\n\n|                   | Caffe | Caffe2 | XGBoost | TensorRT | NCNN | Libtorch | Tensorflow | T\\-SNE | Dlib |\n|------------------:|:-----:|:------:|:-------:|:--------:|:----:|:--------:|:----------:|:------:|:----:|\n| **服务**       |       |        |         |          |      |          |            |        |      |\n| 训练 \\(CPU\\)  | Y     | Y      | Y       | N\u002FA      | N\u002FA  | Y        | N          | Y      | N    |\n| 训练 \\(GPU\\)  | Y     | Y      | Y       | N\u002FA      | N\u002FA  | Y        | N          | Y      | N    |\n| 推理 \\(CPU\\) | Y     | Y      | Y       | N        | Y    | Y        | Y          | N\u002FA    | Y    |\n| 推理 \\(GPU\\) | Y     | Y      | Y       | Y        | N    | Y        | Y          | N\u002FA    | Y    |\n|                   |       |        |         |          |      |          |            |        |      |\n| **模型**        |       |        |         |          |      |          |            |        |      |\n| 分类    | Y     | Y      | Y       | Y        | Y    | Y        | Y          | N\u002FA    | Y    |\n| 目标检测  | Y     | Y      | N       | Y        | Y    | N        | N          | N\u002FA    | Y    |\n| 分割      | Y     | N      | N       | N        | N    | N        | N          | N\u002FA    | N    |\n| 回归        | Y     | N      | Y       | N        | N    | Y        | N          | 
N\u002FA    | N    |\n| 自编码器   | Y     | N      | N\u002FA     | N        | N    | N        | N          | N\u002FA    | N    |\n| 自然语言处理 | Y     | N      | Y       | N        | N    | Y        | N          | Y      | N    |\n| OCR \u002F Seq2Seq | Y     | N      | N       | N        | Y    | N        | N          | N      | N    |\n| 时间序列    | Y     | N      | N       | N        | Y    | Y        | N          | N      | N    |\n|                   |       |        |         |          |      |          |            |        |      |\n| **数据**          |       |        |         |          |      |          |            |        |      |\n| CSV               | Y     | N      | Y       | N        |  N   | N        | N          | Y      | N    |\n| SVM               | Y     | N      | Y       | N        |  N   | N        | N          | N      | N    |\n| 文本单词        | Y     | N      | Y       | N        |  N   | N        | N          | N      | N    |\n| 文本字符   | Y     | N      | N       | N        |  N   | N        | N          | Y      | N    |\n| 图像            | Y     | Y      | N       | Y        |  Y   | Y        | Y          | Y      | Y    |\n| 时间序列      | Y     | N      | N       | N        |  Y   | N        | N          | N      | N    |\n\n## 工具和客户端\n\n* Python 客户端：\n  * REST 客户端：https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fclients\u002Fpython\n  * 类似 scikit-learn 的绑定：https:\u002F\u002Fgithub.com\u002FArdalanM\u002FpyDD\n* JavaScript 客户端：https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect-js\n* Java 客户端：https:\u002F\u002Fgithub.com\u002Fkfadhel\u002Fdeepdetect-api-java\n* 早期的 C# 客户端：https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fpull\u002F98\n* 通过 TensorBoard 记录 DeepDetect 训练指标：https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdd_board\n\n## 模型\n\n|                          | Caffe | Tensorflow | 来源        | Top-1 准确率（ImageNet） 
|\n|--------------------------|-------|------------|---------------|---------------------------|\n| AlexNet                  | 是     | 否          | BVLC          |          57.1%                 |\n| SqueezeNet               | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsqueezenet\u002Fsqueezenet_v1.1.caffemodel)     | 否          | DeepScale              |       59.5%                    |\n| Inception v1 \u002F GoogleNet | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fggnet\u002Fbvlc_googlenet.caffemodel)     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v1.pb)          | BVLC \u002F Google |             67.9%              |\n| Inception v2             | 否     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v2.pb)          | Google        |     72.2%                      |\n| Inception v3             | 否     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v3.pb)          | Google        |         76.9%                  |\n| Inception v4             | 否     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_v4.pb)          | Google        |         80.2%                  |\n| ResNet 50                | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnet\u002FResNet-50-model.caffemodel)     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fresnet_v1_50\u002Fresnet_v1_50.pb)          | MSR           |      75.3%                     |\n| ResNet 101               | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnet\u002FResNet-101-model.caffemodel)     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fresnet_v1_101\u002Fresnet_v1_101.pb)          | MSR           |        76.4%                   |\n| ResNet 152               | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnet\u002FResNet-152-model.caffemodel)     | 
[是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fresnet_v1_152\u002Fresnet_v1_152.pb)         | MSR           |               77%            |\n| Inception-ResNet-v2      | 否     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Finception_resnet_v2.pb)          | Google        |       79.79%                    |\n| VGG-16                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fvgg_16\u002FVGG_ILSVRC_16_layers.caffemodel)     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fvgg_16\u002Fvgg_16.pb)          | Oxford        |               70.5%            |\n| VGG-19                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fvgg_19\u002FVGG_ILSVRC_19_layers.caffemodel)     | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fvgg_19\u002Fvgg_19.pb)          | Oxford        |               71.3%            |\n| ResNext 50                | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnext\u002Fresnext_50)     | 否          | https:\u002F\u002Fgithub.com\u002Fterrychenism\u002FResNeXt           |      76.9%                     |\n| ResNext 101                | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnext\u002Fresnext_101)     | 否          | https:\u002F\u002Fgithub.com\u002Fterrychenism\u002FResNeXt           |      77.9%                     |\n| ResNext 152               | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fresnext\u002Fresnext_152)     | 否          | https:\u002F\u002Fgithub.com\u002Fterrychenism\u002FResNeXt           |      78.7%                     |\n| DenseNet-121                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_121_32\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               74.9%            |\n| DenseNet-161                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_161_48\u002F)     | 否          
| https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               77.6%            |\n| DenseNet-169                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_169_32\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               76.1%            |\n| DenseNet-201                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fdensenet\u002Fdensenet_201_32\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fshicai\u002FDenseNet-Caffe        |               77.3%            |\n| SE-BN-Inception                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_bn_inception\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               76.38%            |\n| SE-ResNet-50                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnet_50\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               77.63%            |\n| SE-ResNet-101                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnet_101\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               78.25%            |\n| SE-ResNet-152                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnet_152\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               78.66%            |\n| SE-ResNext-50                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnext_50\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               79.03%            |\n| SE-ResNext-101                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_resnext_101\u002F)     | 否          | 
https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               80.19%            |\n| SENet                   | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fsenets\u002Fse_net\u002F)     | 否          | https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet        |               81.32%            |\n| VOC0712 (目标检测) | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Fvoc0712_dd.tar.gz) | 否 | https:\u002F\u002Fgithub.com\u002Fweiliu89\u002Fcaffe\u002Ftree\u002Fssd | 71.2 mAP |\n| InceptionBN-21k | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Finception\u002Finception_bn_21k) | 否 | https:\u002F\u002Fgithub.com\u002Fpertusa\u002FInceptionBN-21K-for-Caffe | 41.9% |\n| Inception v3 5K | 否 | [是](https:\u002F\u002Fdeepdetect.com\u002Fmodels\u002Ftf\u002Fopenimages_inception_v3) | https:\u002F\u002Fgithub.com\u002Fopenimages\u002Fdataset |  |\n| [5点人脸关键点检测模型（人脸检测）](http:\u002F\u002Fdlib.net\u002Ffiles\u002Fmmod_human_face_detector.dat.bz2) | 否 | 否 | http:\u002F\u002Fblog.dlib.net\u002F2017\u002F09\u002Ffast-multiclass-object-detection-in.html |  |\n| [前后车辆检测（目标检测）](http:\u002F\u002Fdlib.net\u002Ffiles\u002Fmmod_front_and_rear_end_vehicle_detector.dat.bz2) | 否 | 否 | http:\u002F\u002Fblog.dlib.net\u002F2017\u002F09\u002Ffast-multiclass-object-detection-in.html |  |\n\n更多模型：\n\n- 免费且可用于商业用途的深度神经网络列表，包括用于图像分类的网络以及基于字符的卷积网络用于文本分类：https:\u002F\u002Fwww.deepdetect.com\u002Fapplications\u002Flist_models\u002F\n\n\u003C!---\n#FIXME(sileht): 这是一个功能细节，应该移到 deepdetect.com\u002Fserver\u002Fdocs\u002F 的某个地方\n\n## 模板\n\nDeepDetect 自带一个内置的神经网络模板系统（目前仅支持 Caffe 后端）。这使得用户能够基于已知的架构，为图像、文本和数据创建自定义网络，且操作非常简便。\n\n使用方法：\n- 通过 `template` 参数指定要使用的模板，可选 `mlp`、`convnet` 和 `resnet`。\n- 使用 `layers` 参数指定网络架构：\n  - 对于 `mlp`，例如 `[300,100,10]`。\n  - 对于 `convnet`，例如 `[\"1CR64\",\"1CR128\",\"2CR256\",\"1024\",\"512\"]`，其中主要模式是 `xCRy`，`y` 表示输出通道数（特征图数量），`CR` 表示卷积层与激活函数（默认为 ReLU），而 `x` 则表示在不进行池化的情况下连续堆叠的 `CR` 块的数量。所有 `xCRy` 块之间都会应用池化操作。\n- 对于 `resnets`：\n  - 处理图像时，例如 
`[\"Res50\"]`，主要模式是 `ResX`，其中 `X` 表示 ResNet 的深度。\n  - 对于基于字符的模型（文本），则采用与 `convnet` 相同的 `xCRy` 模式，主要区别在于此时 `x` 表示 ResNet 块内连续堆叠的 `CR` 块数量。\n  - 当 ResNet 应用于 CSV 或 SVM（稀疏数据）时，则使用 `mlp` 模式。在这种情况下，目前每个指定的隐藏层之后都会构建由两层组成的 ResNet 块。例如，`[300,100,10]` 表示先应用一个大小为 300 的隐藏层，随后是一个由两个 100 维全连接层组成的 ResNet 块，再接一个由两个 10 维全连接层组成的 ResNet 块。此规则未来可能会调整，并提供更多的控制选项。\n-->\n\n## 参考文献\n\n- DeepDetect（https:\u002F\u002Fwww.deepdetect.com\u002F）\n- Caffe（https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fcaffe）\n- XGBoost（https:\u002F\u002Fgithub.com\u002Fdmlc\u002Fxgboost）\n- T-SNE（https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002FMulticore-TSNE）\n\n## 作者\nDeepDetect 由 [Jolibrain](https:\u002F\u002Fjolibrain.com\u002F) 设计、实现并维护，同时得到了其他贡献者的支持。","# DeepDetect 快速上手指南\n\nDeepDetect 是一个用 C++11 编写的高性能机器学习 API 服务器，支持训练与推理，并可通过 JSON 接口轻松集成到现有应用中。它底层支持 Caffe、TensorFlow、PyTorch (Libtorch)、XGBoost、NCNN 和 TensorRT 等多种引擎。\n\n## 环境准备\n\n### 系统要求\n- **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04\u002F22.04)\n- **硬件**:\n  - **CPU 模式**: 多核 CPU，内存建议 8GB 以上\n  - **GPU 模式**: NVIDIA GPU (需安装 CUDA 和 cuDNN)，用于加速训练和推理\n- **架构支持**: x86_64, ARM (如 Raspberry Pi, Jetson Nano)\n\n### 前置依赖\n最推荐的安装方式是使用 **Docker**，可避免复杂的底层库依赖配置。\n- Docker Engine (19.03+)\n- Docker Compose (可选，用于部署完整平台)\n- `curl` (用于测试 API)\n\n> **注意**：若需从源码编译，需自行安装 CMake, Boost, OpenCV, Protobuf 及各深度学习后端库（如 Caffe, TensorFlow 等），过程较为繁琐，生产环境强烈建议使用 Docker 镜像。\n\n## 安装步骤\n\n### 方式一：使用 Docker（推荐）\n\nDeepDetect 官方提供了预构建的 Docker 镜像，涵盖 CPU 和 GPU 版本。\n\n1. **拉取镜像**\n   \n   访问官方镜像仓库查看可用标签，或直接拉取最新稳定版。\n   \n   ```bash\n   # 查看 available images\n   curl -X GET https:\u002F\u002Fdocker.jolibrain.com\u002Fv2\u002F_catalog\n   \n   # 拉取 CPU 版本镜像\n   docker pull jolibrain\u002Fdeepdetect_cpu\n   \n   # 拉取 GPU 版本镜像 (需宿主机安装 NVIDIA Container Toolkit)\n   docker pull jolibrain\u002Fdeepdetect_gpu\n   ```\n\n2. 
**启动容器**\n\n   ```bash\n   # 启动 CPU 版本，映射端口 8080\n   docker run -p 8080:8080 -it jolibrain\u002Fdeepdetect_cpu\n   \n   # 启动 GPU 版本 (需要 --gpus 参数)\n   docker run --gpus all -p 8080:8080 -it jolibrain\u002Fdeepdetect_gpu\n   ```\n\n### 方式二：从源码编译（高级用户）\n\n如需自定义后端或优化性能，可从源码编译。请参考官方文档 `docs\u002Fsource.md`。基本流程如下：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect.git\ncd deepdetect\nmkdir build && cd build\ncmake .. -DUSE_CAFFE=ON -DUSE_TF=ON -DUSE_DLIB=ON\nmake -j$(nproc)\nsudo make install\n```\n\n## 基本使用\n\nDeepDetect 通过 RESTful JSON API 进行交互。以下示例演示如何创建一个图像分类服务并进行预测。\n\n### 1. 创建模型服务 (Create Service)\n\n假设我们使用预训练的 ResNet 50 模型进行图像分类。发送 `PUT` 请求创建服务：\n\n```bash\ncurl -X PUT http:\u002F\u002Flocalhost:8080\u002Fservices\u002Fimageservice \\\n-d '{\n  \"service\": \"imageservice\",\n  \"async\": false,\n  \"src\": \"\u002Fopt\u002Fdeepdetect\u002Fmodels\u002Fresnet\",\n  \"mllib\": \"caffe\",\n  \"description\": \"Image classification service\",\n  \"type\": \"supervised\",\n  \"parameters\": {\n    \"input\": {\n      \"width\": 224,\n      \"height\": 224\n    },\n    \"mllib\": {\n      \"nclasses\": 1000\n    }\n  }\n}'\n```\n\n*注：`\u002Fopt\u002Fdeepdetect\u002Fmodels\u002Fresnet` 为容器内预置模型路径，若本地运行需替换为实际路径。*\n\n### 2. 
执行预测 (Predict)\n\n上传一张图片进行预测（支持 Base64 编码或图片 URL）：\n\n```bash\ncurl -X POST http:\u002F\u002Flocalhost:8080\u002Fpredict \\\n-H \"Content-Type: application\u002Fjson\" \\\n-d '{\n  \"service\": \"imageservice\",\n  \"parameters\": {\n    \"output\": {\n      \"best\": 3\n    }\n  },\n  \"data\": [\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjolibrain\u002Fdeepdetect\u002Fmaster\u002Fdemo\u002Fimgdetect\u002Fdata\u002Fcat.jpg\"]\n}'\n```\n\n**响应示例：**\n```json\n{\n  \"status\": 200,\n  \"predictions\": [\n    {\n      \"classes\": [\n        {\"cat\": \"tabby cat\", \"prob\": 0.85},\n        {\"cat\": \"Egyptian cat\", \"prob\": 0.10},\n        {\"cat\": \"tiger cat\", \"prob\": 0.03}\n      ]\n    }\n  ]\n}\n```\n\n### 3. 客户端工具\n\n除了直接使用 `curl`，DeepDetect 还提供了多种语言的客户端库：\n\n- **Python**: \n  ```bash\n  pip install pyDD\n  ```\n  或使用官方 REST 客户端：\n  ```python\n  from dd_client import DD\n  dd = DD('http:\u002F\u002Flocalhost:8080')\n  # 后续调用训练或预测接口\n  ```\n\n- **JavaScript**: 参考 `deepdetect-js` 项目。\n- **Java\u002FC#**: 社区贡献的非官方客户端可用。\n\n通过以上步骤，您已成功部署并调用了 DeepDetect 服务。更多高级功能（如训练新模型、对象检测、文本分析等）请参考官方 API 文档。","某中型电商团队急需在现有的 C++ 订单系统中集成实时商品图像分类功能，以自动识别用户上传的晒图并打标。\n\n### 没有 deepdetect 时\n- **开发门槛高**：算法工程师需分别熟悉 PyTorch、TensorFlow 等不同框架的底层 API，且难以将其无缝嵌入现有的 C++ 后端服务。\n- **部署流程繁琐**：从模型训练到生产环境部署，需要手动编写大量胶水代码进行格式转换，且缺乏统一的推理接口标准。\n- **硬件适配困难**：若要利用 NVIDIA GPU 加速或迁移至 ARM 边缘设备，需单独配置 TensorRT 或 NCNN，重复工作量巨大且容易出错。\n- **维护成本高昂**：多种深度学习库并存导致依赖冲突频发，系统升级或更换模型架构时往往牵一发而动全身。\n\n### 使用 deepdetect 后\n- **统一接入标准**：通过 deepdetect 提供的通用 REST API，团队可直接调用支持 PyTorch、TensorFlow 等主流框架的模型，无需关心底层框架差异。\n- **自动化部署加速**：deepdetect 自动处理模型加载与推理逻辑，支持一键将训练好的模型转换为 TensorRT（GPU）或 NCNN（ARM）格式，大幅缩短上线周期。\n- **灵活硬件支持**：同一套代码即可在服务端高性能 GPU 集群运行，也能轻松部署到移动端 ARM 芯片，实现“一次构建，多处运行”。\n- **系统集成简便**：作为独立的 C++ 服务器进程，deepdetect 通过 HTTP 接口与现有订单系统解耦，降低了耦合度与维护难度。\n\ndeepdetect 
通过屏蔽底层框架差异并提供统一的推理服务，让企业能以最低成本将前沿深度学习能力快速融入现有生产流。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjolibrain_deepdetect_9049c5e8.png","jolibrain","JoliBrain","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjolibrain_ffb726e8.png","Pretty AI for solving real world problems",null,"contact@jolibrain.com","https:\u002F\u002Fwww.jolibrain.com","https:\u002F\u002Fgithub.com\u002Fjolibrain",[83,87,91,95,99,103,107],{"name":84,"color":85,"percentage":86},"C++","#f34b7d",86.1,{"name":88,"color":89,"percentage":90},"Shell","#89e051",7.6,{"name":92,"color":93,"percentage":94},"Python","#3572A5",2.8,{"name":96,"color":97,"percentage":98},"CMake","#DA3434",2.3,{"name":100,"color":101,"percentage":102},"Dockerfile","#384d54",0.9,{"name":104,"color":105,"percentage":106},"C","#555555",0.1,{"name":108,"color":78,"percentage":106},"M4",2549,551,"2026-04-09T19:23:27","NOASSERTION","Linux","非必需（支持 CPU 和 GPU）。若使用 GPU，需 NVIDIA GPU 以支持 TensorRT 加速；也支持 ARM CPU (NCNN)。具体显存和 CUDA 版本未在文中说明。","未说明",{"notes":117,"python":118,"dependencies":119},"该工具核心是使用 C++11 编写的服务器和 API。支持多种后端引擎（如 Caffe, Tensorflow, PyTorch, TensorRT, NCNN 等），可根据需求选择编译支持的后端。提供 Docker 镜像简化部署。支持将模型自动转换为嵌入式平台格式（TensorRT 用于 NVIDIA GPU，NCNN 用于 ARM CPU）。无数据库依赖，所有数据和模型参数均通过文件系统管理。","未说明（核心服务为 C++11 编写，提供 Python 客户端）",[120,121,122,123,124,125,126,127,128,129],"Caffe","Tensorflow","Caffe2","Libtorch (PyTorch)","NCNN","TensorRT","XGBoost","Dlib","FAISS","Annoy",[45,15,14],[132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149],"deep-learning","machine-learning","caffe","xgboost","rest-api","tsne","object-detection","image-segmentation","image-classification","neural-nets","gpu","ncnn","tensorrt","time-series","pytorch","tensorrt-conversion","tensorrt-inference","image-search","2026-03-27T02:49:30.150509","2026-04-18T09:19:17.243717",[153,158,163,168,173,178],{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},39552,"如何在 CentOS 上安装 deepdetect？","在 CentOS 上直接通过 yum 安装依赖包通常会失败，因为许多包（如 
libgoogle-glog-dev 等）不可用。官方推荐的解决方案是使用 Docker 镜像，避免手动编译所有源码。您可以参考项目中的 docker 目录获取相关镜像和配置：https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Ftree\u002Fmaster\u002Fdocker。如果必须原生编译，需确保所有依赖项已正确安装并排查具体报错日志。","https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fissues\u002F71",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},39553,"创建服务时遇到 'Service Bad Request Error' (错误代码 1006) 怎么办？","该错误通常由请求参数不正确引起。请检查以下几点：1. 确保模板路径（templates）和模型仓库路径（repository）在服务器文件系统中真实存在且可读；2. 验证 JSON 格式是否正确，特别是转义字符；3. 确认后端类型（mllib，如 caffe）和任务类型（type，如 supervised）是否匹配。如果是预测时报错且无响应，请检查图片 URL 是否可公开访问（返回 Forbidden 会导致无法读取图片），建议先在本地测试或使用可访问的图片链接。","https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fissues\u002F99",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},39554,"训练网络时出现内存溢出错误或 GPU 训练失败怎么办？","这通常是显存不足导致的。解决方法包括：1. 减小批处理大小（batch size）；2. 尝试使用更小的网络模型；3. 切换到 CPU 模式（设置 gpu:false）进行训练以验证是否为 GPU 显存问题。此外，某些情况下 CUDA 8 与新版的 Nvidia 驱动（如 Maxwell 架构）可能存在兼容性问题，建议使用 Nvidia 提供的测试设备程序（test device exe）来排查驱动和 CUDA 环境是否正常。","https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fissues\u002F159",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},39555,"如何获取支持 Caffe、TensorFlow 和 XGBoost 的预配置环境？","项目官方提供了适用于 AWS EC2 的 Amazon Machine Image (AMI)，支持 GPU 和 CPU 版本，预装了 Caffe、XGBoost 和 TensorFlow 后端。您可以直接在 AWS Marketplace 中搜索并订阅：GPU 版本 AMI ID 为 B01N4D483M，CPU 版本为 B01N1RGWQZ。详细文档和使用说明请访问：https:\u002F\u002Fdeepdetect.com\u002Fproducts\u002Fami\u002F。","https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fissues\u002F5",{"id":174,"question_zh":175,"answer_zh":176,"source_url":177},39556,"DeepDetect 是否支持 TensorFlow 后端？","是的，DeepDetect 支持 TensorFlow 后端。该功能允许将计算图模型保存为 protobuf 文件并在运行时读取，类似于 Caffe 的工作方式。API 设计旨在吸收更多库，支持包括 seq2seq 在内的复杂模型。构建时需要注意：需要 cmake 版本大于等于 3，且不支持 
ccache。","https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fissues\u002F30",{"id":179,"question_zh":180,"answer_zh":181,"source_url":162},39557,"为什么使用有效的图片 URL 进行预测时仍然报错 'no image could be found'？","即使 URL 看起来有效，如果服务器无法访问该资源（例如返回 HTTP 403 Forbidden 或需要认证），DeepDetect 也会报此错误。服务器日志通常会显示 'ERROR - no data for image...'。请确保图片 URL 是公开的、无需登录即可访问，并且服务器所在的网络环境可以连接到该图片主机。如果图片受保护，请先下载到本地或通过代理提供公开链接。",[183,188,193,197,202,207,212,217,222,227,232,237,242,247,252,257,262,267,272,277],{"id":184,"version":185,"summary_zh":186,"released_at":187},315482,"v0.26.2","### Docker 镜像：\n\n* CPU 版本：`docker pull docker.jolibrain.com\u002Fdeepdetect_cpu:v0.26.2`\n* GPU（仅 CUDA）：`docker pull docker.jolibrain.com\u002Fdeepdetect_gpu:v0.26.2`\n* GPU（CUDA 和 TensorRT）：`docker pull docker.jolibrain.com\u002Fdeepdetect_cpu_tensorrt:v0.26.2`\n* 带 PyTorch 后端的 GPU：`docker pull docker.jolibrain.com\u002Fdeepdetect_gpu_torch:v0.26.2`\n* 所有镜像均可从 https:\u002F\u002Fdocker.jolibrain.com\u002F 获取，镜像列表如下：{\"repositories\":[\"colette_gpu\",\"colette_gpu_server\",\"colette_ui\",\"cuda12.5.1-devel-ubuntu22.04-preinst-devel\",\"deepdetect_cpu\",\"deepdetect_cpu_torch\",\"deepdetect_gpu\",\"deepdetect_gpu_tensorrt\",\"deepdetect_gpu_torch\",\"filebrowser\",\"gpustat_server\",\"joligen_server\",\"joligen_ui\",\"jupyter_dd_notebook\",\"platform_annotations_backend\",\"platform_annotations_frontend\",\"platform_data\",\"platform_ui\"]}","2025-07-19T07:45:09",{"id":189,"version":190,"summary_zh":191,"released_at":192},315483,"v0.26.1","# DeepDetect：开源深度学习服务器与 API（更新日志）\n\n### [0.26.1](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcompare\u002Fv0.26.0...v0.26.1) (2025-07-09)\n\n\n### 功能特性\n\n* 支持 DETR 和 RT-DETRv2 的训练 ([83a1ffd](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F83a1ffd13b54ae29164755383a7eb154f028b212))\n* 新增用于将训练指标绘制到 Visdom 的脚本 
([5a50d3a](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5a50d3a478d50c4a94c0d82d72099034cc3809aa))\n\n### 错误修复\n\n* **torch:** 使用 test_batch_size 1 进行分割时的问题 ([a42902f](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fa42902f7098a5ac0ef17512012529f0479312c7c))\n* trace_rtdetrv2 中配置路径错误 ([e9e5d0a](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fe9e5d0a18e93b2460ef71d7dd1ef3d0f49410619))\n* DETR 和 RT-DETRv2 的设备设置错误 ([6736b09](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F6736b09db83aa988dcab45f383dca8556f97fecf))","2025-07-14T08:21:13",{"id":194,"version":195,"summary_zh":186,"released_at":196},315484,"v0.27.0","2025-11-12T12:57:32",{"id":198,"version":199,"summary_zh":200,"released_at":201},315490,"v0.22.1","# DeepDetect：开源深度学习服务器及 API（变更日志）\n\n### [0.22.1](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcompare\u002Fv0.22.0...v0.22.1)（2022-05-28）\n\n### 错误修复\n\n* Caffe 构建可以使用自定义 OpenCV 版本（[fde90cd](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Ffde90cdeca1b1f911c64f6cc91b2c2d055429dad)）\n* Docker 镜像中 CUDA 运行时版本错误（[8ca5acf](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F8ca5acf5f9bd0cf46de859fd3100d969eb892bdc)）\n\n### Docker 镜像：\n\n* CPU 版本：`docker pull jolibrain\u002Fdeepdetect_cpu:v0.22.1`\n* GPU（仅 CUDA）：`docker pull jolibrain\u002Fdeepdetect_gpu:v0.22.1`\n* GPU（CUDA 和 TensorRT）：`docker pull jolibrain\u002Fdeepdetect_cpu_tensorrt:v0.22.1`\n* 带 PyTorch 后端的 GPU：`docker pull jolibrain\u002Fdeepdetect_gpu_torch:v0.22.1`\n* 所有镜像均可在 https:\u002F\u002Fhub.docker.com\u002Fu\u002Fjolibrain 上获取。","2022-05-28T09:01:11",{"id":203,"version":204,"summary_zh":205,"released_at":206},315496,"v0.17.0","\r\n\r\n### Features\r\n\r\n* **ml:** data augmentation for object detection with torch backend 
([95942b9](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F95942b9174170099ca1e2a4b87a5e5f758943a37))\r\n* **ml:** Visformer architecture with torch backend ([40ec03f](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F40ec03f77d0107a4b758b1103d265fffc904812a))\r\n* **torch:** add batch size > 1 for detection models ([91bde66](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F91bde66ba01e0a0f2e75d32062583c4fe018022b))\r\n* **torch:** image data augmentation with random geometric perspectives ([d163fd8](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd163fd88bbc966aedc1a213f25e1d9d16664f822))\r\n* **api:** introduce predict output parameter ([c9ee71a](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fc9ee71af5167ee24fa285fb0812bbc267a72970b))\r\n* **api:** use DTO for NCNN init parameters ([2ee11f0](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F2ee11f07b7f8c60ca3baefd9b09f20df30a45863))\r\n\r\n### Bug Fixes\r\n\r\n* **build:** docker builds with tcmalloc ([6b8411a](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F6b8411a3989f8131c6745c921fe96629246570d3))\r\n* **doc:** api traced models list ([342b909](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F342b909b74d6e24f1b0c440086e8ce8057e6fd83))\r\n* **graph:** loading weights from previous model does not fail ([5e7c8f6](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5e7c8f6c8a0ddcd1cc2e2bd93278e2262a6d80ff))\r\n* **torch:** fix faster rcnn model export for training ([cbbbd99](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fcbbbd99cb1fffe5fda5ff7aeeef47a853c35e615))\r\n* **torch:** retinanet now trains correctly 
([351d6c6](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F351d6c6aafde52d821ab50853a37595438778556))\r\n### Docker images:\r\n\r\n* CPU version: `docker pull jolibrain\u002Fdeepdetect_cpu:v0.17.0`\r\n* GPU (CUDA only): `docker pull jolibrain\u002Fdeepdetect_gpu:v0.17.0`\r\n* GPU (CUDA and Tensorrt) :`docker pull jolibrain\u002Fdeepdetect_cpu_tensorrt:v0.17.0`\r\n* GPU with torch backend: `docker pull jolibrain\u002Fdeepdetect_gpu_torch:v0.17.0`\r\n* All images available on https:\u002F\u002Fhub.docker.com\u002Fu\u002Fjolibrain\r\n","2021-05-10T09:49:55",{"id":208,"version":209,"summary_zh":210,"released_at":211},315485,"v0.26.0","\r\n### 功能特性\r\n\r\n* 为 TensorRT 中的扩散模型生成掩码 ([5c16ad0](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5c16ad0b4afec2a294769f883d201ace4786552c))\r\n* 可选构建 OpenCV 支持 ([ce9b9a7](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fce9b9a74a02cecf90a8a711c5fd70f65d7f2e7f9))\r\n* **输出：** 为检测任务添加假阳性指标 ([bec49c4](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fbec49c43210360f8118e6e80418711f85f602a0c))\r\n* **PyTorch：** 添加 JIT FusionStrategy 选择选项 ([d2331be](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd2331be5fff65cff8eafc3f379b3fd5279f188d8))\r\n* **PyTorch：** 基于 https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336，新增用于追踪 HuggingFace Transformers CLIP 模型的脚本 ([c2373ea](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fc2373ea60144560ded246354cf2a5adf16e49366))\r\n\r\n\r\n### Bug 修复\r\n\r\n* **PyTorch：** 正确捕获输入连接器中的错误 ([ac09c52](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fac09c52af467a2863119b555f8a40dd244f65a6e))\r\n* **PyTorch：** 修复 test_batch_size > 1 时的分段问题 ([0d8d3da](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F0d8d3da83fb673ddbdb6c8bbf32b98f1114d8b30))\r\n### Docker 镜像：\r\n\r\n* CPU 
### Docker images (v0.26.0, released 2024-11-09):

* CPU version: `docker pull docker.jolibrain.com/deepdetect_cpu:v0.26.0`
* GPU (CUDA only): `docker pull docker.jolibrain.com/deepdetect_gpu:v0.26.0`
* GPU (CUDA and TensorRT): `docker pull docker.jolibrain.com/deepdetect_cpu_tensorrt:v0.26.0`
* GPU with PyTorch backend: `docker pull docker.jolibrain.com/deepdetect_gpu_torch:v0.26.0`
* All images are available at https://docker.jolibrain.com/, with the image list: `{"repositories":["cuda12.5.1-devel-ubuntu22.04-preinst-devel","deepdetect_cpu","deepdetect_cpu_torch","deepdetect_gpu","deepdetect_gpu_tensorrt","deepdetect_gpu_torch","filebrowser","gpustat_server","joligen_server","joligen_ui","jupyter_dd_notebook","platform_annotations_backend","platform_annotations_frontend","platform_data","platform_ui"]}`

## v0.25.0 (2024-01-10)

### ⚠ BREAKING CHANGES

* **trt:** drop support for Caffe Refinedet

### Features

* allow returning images as base64 in JSON ([05096fd](https://github.com/jolibrain/deepdetect/commit/05096fdabf19f23b296484d06c7b0a94a2c22112))
* build DeepDetect + PyTorch MPS on Apple platforms ([aa8822d](https://github.com/jolibrain/deepdetect/commit/aa8822d671f8badc188a55a67ef1fd5f4e97bd55))
* recompose action to rebuild images from GAN outputs and crops ([e1118b1](https://github.com/jolibrain/deepdetect/commit/e1118b147d6395a8d8343d3ea98c3171b6f63c08))
* **torch:** mAP metric at arbitrary IoU thresholds ([20d8ebe](https://github.com/jolibrain/deepdetect/commit/20d8ebea3ee37748101994986aeaffc553467cd9))
* **torch:** new `disable_concurrent_predict` parameter ([71cb66a](https://github.com/jolibrain/deepdetect/commit/71cb66ab9bb00ca01fba4d03f4ea4d44ebe9a1b2))

### Bug Fixes

* add more explicit error messages ([ca2703c](https://github.com/jolibrain/deepdetect/commit/ca2703c02b2644a98e6d127514b9cd48d6d92187))
* allow two chains with the same name to run simultaneously ([b26b5b9](https://github.com/jolibrain/deepdetect/commit/b26b5b98a991457730747697891ac9a4ef9a45c6))
* **chain:** empty prediction results were overly empty ([57bed0b](https://github.com/jolibrain/deepdetect/commit/57bed0b1360bdd1fd5fc2ae162cd7630653bd398))
* **docker:** build of the CPU Docker image ([9e56aba](https://github.com/jolibrain/deepdetect/commit/9e56aba46b248618341ac3798aea2f2209a4a184))
* no rescaling when training with images ([e84c616](https://github.com/jolibrain/deepdetect/commit/e84c6161aa75a7157b60c5bb51b144768481996e))
* prevent crash when a service is deleted before a predict call completes ([0ef1f46](https://github.com/jolibrain/deepdetect/commit/0ef1f469a539e722a722ff693c91f0088087ca35))
* support boolean values in service info parameters ([737724d](https://github.com/jolibrain/deepdetect/commit/737724de18af18a3da29dc79d98a650228622f4d))
* correct PyTorch architectures selected in Docker builds ([5eb7890](https://github.com/jolibrain/deepdetect/commit/5eb7890c15bc4dffcbc430f3ec4b5379d3052340))
* **torch:** black-and-white images now work with CRNN and data augmentation ([2b07002](https://github.com/jolibrain/deepdetect/commit/2b070027944affedc753b9a88c7148a4f9fa71e3))
* **torch:** `concurrent_predict` parameter was always true ([edb28c1](https://github.com/jolibrain/deepdetect/commit/edb28c11fb42e06f62411766b1dc027f56d009c7))

### Docker images:

* CPU version: `docker pull docker.jolibrain.com/deepdetect_cpu:v0.25.0`
* GPU (CUDA only): `docker pull docker.jolibrain.com/deepdetect_gpu:v0.25.0`
* GPU (CUDA and TensorRT): `docker pull docker.jolibrain.com/deepdetect_cpu_tensorrt:v0.25.0`
* GPU with PyTorch backend: `docker pull docker.jolibrain.com/deepdetect_gpu_torch:v0.25.0`
* All images are available at https://docker.jolibrain.com/, with the image list `{"repositories":["deepdetect_cpu","deepdetect_cpu_torch","deepdetect_gpu","deepdetect_gpu_tensorrt","deepdetect_gpu_torch","filebro`

## v0.24.0 (2023-03-31)

### Features

* add custom API path for Swagger ([4fe0df7](https://github.com/jolibrain/deepdetect/commit/4fe0df721d1cc7a1cd03870e473e744c4924bb58))
* add percentage-error metric display ([1cc15d6](https://github.com/jolibrain/deepdetect/commit/1cc15d6a50a00218a47f43387ca88b94ac665801))
* **api:** add `model_stats` field with the model's number of parameters ([b562fee](https://github.com/jolibrain/deepdetect/commit/b562fee5834402a720ca54a25ef7de4c6026f036))
* **api:** add labels in service info ([66cbff5](https://github.com/jolibrain/deepdetect/commit/66cbff59ae8e071ba84f0c84edda7375c7a0d0cb))
* **api:** increase the accepted header size limit ([07f6ff3](https://github.com/jolibrain/deepdetect/commit/07f6ff32a834ae20e79a4e9933d66e89392b2385))
* log model parameter count and size at service start ([041b649](https://github.com/jolibrain/deepdetect/commit/041b6493aec803a2e4e76fe91e80b17b94f94c4e))
* **regression:** add L1 loss metric for regression tasks ([c82f08d](https://github.com/jolibrain/deepdetect/commit/c82f08d82763ef20719ca7b36d02f67ec69d0d78))
* **torch:** add RAdam optimizer ([5bba045](https://github.com/jolibrain/deepdetect/commit/5bba045ccd75ff13f85ad88160558e38c4410cba))
* **torch:** translation and bbox-duplication operations in data augmentation ([8752e1f](https://github.com/jolibrain/deepdetect/commit/8752e1f2f723a43194049cf570b42deca8ed8b5d))
* **torch:** allow noise-only or distortion-only data augmentation ([5a02234](https://github.com/jolibrain/deepdetect/commit/5a02234ce4a1f571759d3af1f31f818c36809798))
* **torch:** allow data augmentation without a database ([f5b16b3](https://github.com/jolibrain/deepdetect/commit/f5b16b3f111dbd8c1555a15e6afe78dac61354b2))
* **torch:** database-free support for bbox data augmentation ([a99ca7b](https://github.com/jolibrain/deepdetect/commit/a99ca7b14a0d7e0eff5c816a8c959fff31b12ff1))
* **torch:** set data augmentation factors from the request ([e26a775](https://github.com/jolibrain/deepdetect/commit/e26a7751c5bd1f5cdd8db6bc727776e38f05e8da))
* **torch:** update PyTorch to 1.13 ([9c5da36](https://github.com/jolibrain/deepdetect/commit/9c5da3605c8cb751dd92f8d659887bdc19214877))
* **trt:** add INT8 inference ([a212a8e](https://github.com/jolibrain/deepdetect/commit/a212a8e0088bb7965df1d4af70830ec082b8e8a9))
* **trt:** recompile the engine if a version mismatch is detected ([0f0bb62](https://github.com/jolibrain/deepdetect/commit/0f0bb624afdf1b4bab6538e487bb02cce3b46801))
* upgrade to TensorRT 8.4.3 ([1132760](https://github.com/jolibrain/deepdetect/commit/113276006ae0a5ea28c0274f635ab2cbea3e2d9c))

### Bug Fixes

* **api:** re-add parameters in the info call ([df318cb](https://github.com/jolibrain/deepdetect/commit/df318cb77c9760292b56ce8e38c3c8498f54152b))
* throw an exception when a bbox file contains an invalid class ([3a82a9d](https://github.com/jolibrain/deepdetect/commit/3a82a9d8263a333998bc58ebf83a02cb933752e8))
* **README:** correct ci-master Docker tag ([49dde89](https://github.com/jolibrain/deepdetect/commit/49dde89be309981532a2fb22e43))

## v0.23.1 (2022-10-14)

### Features

* **chain:** crop to the minimum dimension, forced to a square ([a41ca51](https://github.com/jolibrain/deepdetect/commit/a41ca51a24779d97d8d9ddd8fcf923365edfaa73))

### Bug Fixes

* **torch:** class weights on multiple GPUs ([9c1ed4c](https://github.com/jolibrain/deepdetect/commit/9c1ed4c11b844922b83a9b2d70abc719c58d3438))
* **torch:** metric naming with multiple test sets ([17b8cbb](https://github.com/jolibrain/deepdetect/commit/17b8cbbfe77f974cb28a29740813902d68ef33cf))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.23.1`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.23.1`
* GPU (CUDA and TensorRT): `docker pull jolibrain/deepdetect_cpu_tensorrt:v0.23.1`
* GPU with PyTorch backend: `docker pull jolibrain/deepdetect_gpu_torch:v0.23.1`
* All images available on https://hub.docker.com/u/jolibrain

## v0.23.0 (2022-09-29)

### Features

* add CRNN ResNet native template ([ec1f8ad](https://github.com/jolibrain/deepdetect/commit/ec1f8ad4640ab4ef9c0109b101b4d1ba1e10f869))
* add the DeepDetect version to config variables for external projects ([be79e54](https://github.com/jolibrain/deepdetect/commit/be79e543a5f7c73949e1d5fbe97a4d2890548c3c))
* **dlib:** update the dlib backend ([12d181f](https://github.com/jolibrain/deepdetect/commit/12d181f5bccbbea9473853475086781f439f29e6))
* **torch:** add multi-label classification ([90d536e](https://github.com/jolibrain/deepdetect/commit/90d536e60bd5a2b748da6f51305df4332d984977))
* **torch:** allow multiple GPUs for traced models ([6b3b9c0](https://github.com/jolibrain/deepdetect/commit/6b3b9c08b2590456cfa19f6344f8569291950bea))
* **torch:** best model is computed over all test sets ([fbedf80](https://github.com/jolibrain/deepdetect/commit/fbedf80605a8228424a39b7ce99ed2635572e20f))
* **torch:** update PyTorch to 1.12 ([7172314](https://github.com/jolibrain/deepdetect/commit/717231409f341ee871a4b3baa53a4bfb74e7c7d6))
* **yolox:** export to ONNX directly from a trained DeepDetect repository ([a612539](https://github.com/jolibrain/deepdetect/commit/a612539cee8d49a2e5a68351caa958013a7163b4))
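Features such as multi-label classification or best-over-all-test-sets selection are driven through DeepDetect's JSON REST API rather than code changes. As a minimal sketch (not from these release notes): the top-level `service`/`parameters`/`data` layout follows the public predict API, while the individual keys shown inside `parameters` are illustrative assumptions — check the API docs for the options valid for a given backend and version.

```python
import json

def make_predict_body(service, data):
    """Build a DeepDetect POST /predict JSON body (sketch, assumed parameter names)."""
    return {
        "service": service,           # name given at service creation (PUT /services/<name>)
        "parameters": {
            "input": {},              # input-connector options (e.g. image dimensions)
            "mllib": {},              # backend-specific options (torch, caffe, trt, ...)
            "output": {"best": 1},    # assumed: return only the top prediction
        },
        "data": data,                 # list of URIs or inline payloads
    }

body = make_predict_body("imgserv", ["https://example.com/cat.jpg"])
print(json.dumps(body, sort_keys=True))
```

The body is then posted to the server's `/predict` endpoint with any HTTP client; only the JSON structure is sketched here.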
### Bug Fixes

* AdamW default weight decay with the torch backend ([eb0cf83](https://github.com/jolibrain/deepdetect/commit/eb0cf83d8eabb6481b57a90a7db4313d0a5fc399))
* add missing header in predict_out.hpp ([b23298f](https://github.com/jolibrain/deepdetect/commit/b23298f6b8ebc888eba28e0f2333f6a59ddeff1c))
* **docker:** add the libcupti library to the gpu_torch Docker image ([1a5cd09](https://github.com/jolibrain/deepdetect/commit/1a5cd090d75f2fa4a0626a292f5d5f2a4de878c6))
* enable Caffe chains with DTOs and custom actions ([d3e722e](https://github.com/jolibrain/deepdetect/commit/d3e722ed0f3d7cbccdd645c4c147b824e8063020))
* exported YOLOX models have the correct number of classes ([4dac269](https://github.com/jolibrain/deepdetect/commit/4dac269a0496d52026c4d82dc9514e3790237e02))
* missing ifdef ([e8a70cf](https://github.com/jolibrain/deepdetect/commit/e8a70cf5f9a39cdf9275f0874f9ff716913e3872))
* missing cub header path in the TensorRT-OSS build for Jetson Nano ([00df9fd](https://github.com/jolibrain/deepdetect/commit/00df9fdfce78af7a87ce6d515d80a653d47a9ded))
* **oatpp:** oatpp-zlib memory leak ([fccd9a6](https://github.com/jolibrain/deepdetect/commit/fccd9a622dea9bd3bbbf6e40a12ba05dd9f57e80))
* prevent a faulty optimization in traced Faster R-CNN ([dab88ca](https://github.com/jolibrain/deepdetect/commit/dab88cae82f76b65012ddc23c5546f79c719de08))
* correctly reload best metrics when resuming training ([c15c502](https://github.com/jolibrain/deepdetect/commit/c15c502319085f062b018ac26263d7b0790ffed0))
* **torch:** OCR prediction with native models ([24aa37c](https://github.com/jolibrain/deepdetect/commit/24aa37c79448738f753bb22721fd75b29a5b6563))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.23.0`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.23.0`

## v0.22.0 (2022-05-23)

### Features

* **cpp:** convert PyTorch prediction results to DTOs ([b88f22a](https://github.com/jolibrain/deepdetect/commit/b88f22a214cf8a59c4df70cbedbc8854d7a189bc))
* sliding-window object detection script ([0e3df67](https://github.com/jolibrain/deepdetect/commit/0e3df679941d50c3c79d2a9b4604c26999f3e9a3))
* top_k control for TensorRT object detectors ([655aa48](https://github.com/jolibrain/deepdetect/commit/655aa483c0f129a0a07c0da9be1f1ab8a465f1be))
* **torch:** upgrade to PyTorch 1.11 and torchvision 0.12 ([5d312d0](https://github.com/jolibrain/deepdetect/commit/5d312d02c12ad8d9a0a1b0d6605e1e17ec1e53d4))
* **torch:** OCR model training and inference ([3fc2e27](https://github.com/jolibrain/deepdetect/commit/3fc2e278974a168bac1d1fba87913a75fa8a931e))
* **trt:** update TensorRT to 22.03 ([c03aa9d](https://github.com/jolibrain/deepdetect/commit/c03aa9d515a3fa4a058174e24b917318ec91cd8f))

### Bug Fixes

* crop model input dimensions when publishing PyTorch models, with tests ([2dabd89](https://github.com/jolibrain/deepdetect/commit/2dabd8923c8123d07534b2cb35d424b39869f439))
* cutout and crop in PyTorch model data augmentation ([1ef2796](https://github.com/jolibrain/deepdetect/commit/1ef2796220a76a64bb68263443e6161d18c28f62))
* **docker:** library not found in the TensorRT Docker image ([86f3924](https://github.com/jolibrain/deepdetect/commit/86f3924bb67482f8c5bcc3ae7da41c9007009754))
* remove semantic commit checks ([5d0f0c7](https://github.com/jolibrain/deepdetect/commit/5d0f0c774600b026b68661c1d540cd468326d3a4))
* seed random crops at test time ([92feae3](https://github.com/jolibrain/deepdetect/commit/92feae33bb759e486ab86f606aeb41466c6e62a4))
* torch best model now updates on better-or-equal results ([4d50c8e](https://github.com/jolibrain/deepdetect/commit/4d50c8ed8e1422c5db3a583196dfa67bdabc7615))
* fix torch model publishing crash and repository issues ([6a89b83](https://github.com/jolibrain/deepdetect/commit/6a89b8332b3b117845f3f4baf54420af716674f6))
* **torch:** update metrics and solver options when resuming training ([9b0019f](https://github.com/jolibrain/deepdetect/commit/9b0019f54614ed909dedf63bcfb7fe1316bbb900))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.22.0`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.22.0`
* GPU (CUDA and TensorRT): `docker pull jolibrain/deepdetect_cpu_tensorrt:v0.22.0`
* GPU with PyTorch backend: `docker pull jolibrain/deepdetect_gpu_torch:v0.22.0`
* All images available on https://hub.docker.com/u/jolibrain

## v0.21.0 (2022-02-22)

### Features

* add predict from video ([02872eb](https://github.com/jolibrain/deepdetect/commit/02872eb2139e20843b1fdcb16aa8cb22f4339cbc))
* add video input connector and streaming endpoints ([07644b4](https://github.com/jolibrain/deepdetect/commit/07644b43e4e443bb3662a7fd7229a8706a3229b5))
* allow pure negative samples for training object detectors with torch ([cd23bad](https://github.com/jolibrain/deepdetect/commit/cd23bad4b890404b42ae3362e6e848f2cec585e8))
* **bench:** add monitoring of transform time ([3f77d42](https://github.com/jolibrain/deepdetect/commit/3f77d42b96aa98a6e784b3225a0668542fafa55e))
* **chain:** add action to draw bboxes as trailing action ([ae0a05f](https://github.com/jolibrain/deepdetect/commit/ae0a05f32591cec5bec7ba5f768d3971943f0b3f))
* **chain:** allow user to add their own custom actions ([a470c7b](https://github.com/jolibrain/deepdetect/commit/a470c7baf5ae4f00b4ae75646a29645e529df2b7))
* **ml:** added support for segformer with torch backend ([ab03d1d](https://github.com/jolibrain/deepdetect/commit/ab03d1dd7412ff5d2aa7e02abb60a340e8b1727e))
* **ml:** random cropping for training segmentation models with torch ([ac7ce0f](https://github.com/jolibrain/deepdetect/commit/ac7ce0ffaef57f9b8a1d20107037dce27332acf4))
* random crops for object detector training with torch backend ([385122d](https://github.com/jolibrain/deepdetect/commit/385122d4eace490ab95fa7a7b9ed92121af1414e))
* segmentation of large images with sliding window, example Python script ([8528e9a](https://github.com/jolibrain/deepdetect/commit/8528e9a689f9f68e436da91b6e59b6117f6470ae))

### Bug Fixes

* bbox clamping in torch inference ([2d6efd3](https://github.com/jolibrain/deepdetect/commit/2d6efd3eacbadc0f71aa3adf35017ae080bbc9ea))
* caffe object detector training requires test set ([2e4db7e](https://github.com/jolibrain/deepdetect/commit/2e4db7ea7daade86d6e138f75b867ee662166367))
* dataset output dimension after crop augmentation ([636d455](https://github.com/jolibrain/deepdetect/commit/636d4555ff87bd5df433503a0362d621e7d38657))
* **detection/torch:** correctly normalize MAP wrt torchlib outputs ([b12d188](https://github.com/jolibrain/deepdetect/commit/b12d188e46df4511d1294311319b4b6b8ff53a53))
* model.json file saving ([809f00a](https://github.com/jolibrain/deepdetect/commit/809f00a9e22878ca1c75aa3b02aeb80b5d6b9e05))
* segmentation with torch backend + full cropping support ([e14c3f2](https://github.com/jolibrain/deepdetect/commit/e14c3f2fed8a593640f963791d2209d0308ffdb5))
* torch MaP with bboxes ([9bc840f](https://github.com/jolibrain/deepdetect/commit/9bc840f0b1055426670d64b5285701d6faceabb9))
* torch model published config file ([b0d4e04](https://github.com/jolibrain/deepdetect/commit/b0d4e0485443fb9c069bde4d2b323e13e8733d93))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.21.0`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.21.0`
* GPU (CUDA and TensorRT): `docker pull jolibrain/deepdetect_cpu_tensorrt:v0.21.0`
* GPU with torch backend: `docker pull jolibrain/deepdetect_gpu_torch:v0.21.0`
* All images available on https://hub.docker.com/u/jolibrain

## v0.20.0 (2021-12-17)

### Features

* **feat:** add elapsed time to training metrics ([fe5fc41](https://github.com/jolibrain/deepdetect/commit/fe5fc41e7090d5756f99488ceb02708a58d95b7d))
* **feat:** add onnx export for torchvision models ([07f69b1](https://github.com/jolibrain/deepdetect/commit/07f69b1f01af46088a00019d480b653b4b0350aa))
* **feat:** add yolox export script for training and inference ([0b2f20b](https://github.com/jolibrain/deepdetect/commit/0b2f20be8211a95b1fea3a600f0d5ba17b8d339f))
* **feat:** add yolox onnx export and trt support ([80b7e6a](https://github.com/jolibrain/deepdetect/commit/80b7e6a658a05046d840b0f2d0591ee865d75168))
* **api:** chain uses dto end to end ([5efbf28](https://github.com/jolibrain/deepdetect/commit/5efbf283f8056fef09512db7a11277b0f15ecd2d))
* **ml:** data augmentation for training segmentation models with torch backend ([b55c218](https://github.com/jolibrain/deepdetect/commit/b55c218f3a31e7877039cd027f010dfcace56bd7))
* **ml:** DETR export and inference with torch backend ([1e4ea4e](https://github.com/jolibrain/deepdetect/commit/1e4ea4e8e21759682c0355974f8da4bedfd890bd))
* **feat:** full cuda pipeline for tensorrt ([93815d7](https://github.com/jolibrain/deepdetect/commit/93815d7c607560890435b6bbe2f32be8306c8380))
* **ml:** noise image data augmentation for training with torch backend ([2d9757d](https://github.com/jolibrain/deepdetect/commit/2d9757d40463194db403ff6d675e3570603edecb))
* **ml:** training segmentation models with torch backend ([1e3ff16](https://github.com/jolibrain/deepdetect/commit/1e3ff160b2b0796ea8dc1bd7252689c4bf7482ff))
* **ml:** activate cutout for object detector training with torch backend ([8a34aa1](https://github.com/jolibrain/deepdetect/commit/8a34aa17213ffeeea003c5223b8f4e85647fbbda))
* **ml:** distortion noise for image training with torch backend ([35a16df](https://github.com/jolibrain/deepdetect/commit/35a16dfabc4ae1148b854d81324812460d90f98a))
* **ml:** dice loss https://arxiv.org/abs/1707.03237 ([542bcb4](https://github.com/jolibrain/deepdetect/commit/542bcb49870c82d2bccfd1bf68ac2eaa76e30846))
* **ml:** manage models with multiple losses ([bea7cb4](https://github.com/jolibrain/deepdetect/commit/bea7cb46c0bfda50526b7af262b7e0ccf3d0b181))

### Bug Fixes

* **cpu:** cudnn is now on by default, auto switch it to off in case of cpu_only ([3770baf](https://github.com/jolibrain/deepdetect/commit/3770baf63c06746aaee3aa681333492a61ecde8b))
* **tensorrt:** read onnx model to find topk ([5cce134](https://github.com/jolibrain/deepdetect/commit/5cce1348b865d90a920559b8246a7129bb9e1c09))
* simsearch ivf index craft after reload, disabling mmap ([8a2e665](https://github.com/jolibrain/deepdetect/commit/8a2e665569887f040bbec624e8aa0266802c9c32))
* **tensorrt:** yolox postprocessing in C++ ([1d781d2](https://github.com/jolibrain/deepdetect/commit/1d781d25b4ad3246be46e6df52685a2197c4977c))
* **torch:** add include sometimes needed ([74487dc](https://github.com/jolibrain/deepdetect/commit/74487dc0069df0ef43dc06fbdd825b3c123c66e2))
* add mltype in metrics.json even if training is not over ([9bda7f7](https://github.com/jolibrain/deepdetect/commit/9bda7f70382279724c2d00967150e4a01f5b85fa))
* clang formatting of mlmodel ([130626b](https://github.com/jolibrain/deepdetect/commit/130626b0040f414cc70f41741d08d0005db854fa))
* **torch:** avoid crashes caused by an exception in the training loop ([667b264](https://github.com/jolibrain/deepdetect/commit/667b26416c8a2011b327108a8744a35d25d2c60b))
* **torch:** bad bbox rescaling on multiple uris ([05451ed](https://github.com/jolibrain/deepdetect/commit/05451ed1aa3827c6a51aec6e592d18be29b222ac))
* **torch:** correct output name for onnx classification model ([a03eb87](https://github.com/jolibrain/deepdetect/commit/a03eb87fcd60267deac403e33850fd38c6a7760e))
* **torch:** prevent crash during training if an exception is thrown ([4ce7802](https://github.com/jolibrain/deepdetect/commit/4ce78020982f29b62c4d04f189711abe3b3d8c65))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.20.0`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.20.0`
* GPU (CUDA and TensorRT): `docker pull jolibrain/deepdetect_cpu_tensorrt:v0.20.0`
* GPU with torch backend: `docker pull jolibrain/deepdetect_gpu_torch:v0.20.0`
* All images available on https://hub.docker.com/u/jolibrain

## v0.19.0 (2021-09-06)

### Features

* add DTO schemas to swagger automatic doc ([9180ff4](https://github.com/jolibrain/deepdetect/commit/9180ff4b8f0d71995bffff58cd497121ae3ea98a))
* add z-normalisation option ([82d7cc5](https://github.com/jolibrain/deepdetect/commit/82d7cc57011180d2836efffed919f68200d1ff24))
* **dto:** add custom dto vector type ([01222db](https://github.com/jolibrain/deepdetect/commit/01222db2bc8663a959de57e8c27a715d97add163))
* **torch:** add ADAMP variant of adam in RANGER (2006.08217) ([e26ed77](https://github.com/jolibrain/deepdetect/commit/e26ed77744e302c8fbae597f51864c78a411a903))
* **trt:** add return cv::Mat instead of vector for GAN output ([4990e7b](https://github.com/jolibrain/deepdetect/commit/4990e7bc39e663ed1a96af2391d1d9e4e3b21f55))
* torch segmentation model prediction ([d72a138](https://github.com/jolibrain/deepdetect/commit/d72a138b7f39aa300f273e252d20fd0afb473369))

### Bug Fixes

* always depend on oatpp ([f262114](https://github.com/jolibrain/deepdetect/commit/f262114381d7a06ba99d5c7fc679a2188d7133b6))
* **test:** tar archive was decompressed at each cmake call ([910a0ee](https://github.com/jolibrain/deepdetect/commit/910a0ee5080260f2dbda8f78698e3db14fa5fe5c))
* **torch:** predictions handled correctly when data count > 1
([5a95c29](https://github.com/jolibrain/deepdetect/commit/5a95c29a8a100f1a6dec4427a041a98185a19d2c))
* **trt:** detect architecture and rebuild model if necessary ([5c9ff89](https://github.com/jolibrain/deepdetect/commit/5c9ff896b3bc868f4ba493af7db7d432ff587722))
* **TRT:** fix build wrt new external build script ([7121dfe](https://github.com/jolibrain/deepdetect/commit/7121dfed3fdcce3672342a62ee38770c011cb709))
* **TRT:** make refinedet great again, also upgrades to TRT8.0.0/TRT-OSS21.08 ([bdff2ae](https://github.com/jolibrain/deepdetect/commit/bdff2aedc2e0f2cb5e4110bda928f53e1c4cbdb4))
* CI on Jetson nano with lighter classification model ([1673a99](https://github.com/jolibrain/deepdetect/commit/1673a99ecc922e01dd7cc8845098291ef46a8902))
* don't rebuild torchvision every time ([4f17897](https://github.com/jolibrain/deepdetect/commit/4f178973aac93e9616fe7d9449c1326c402b2ef8))
* remove linking errors on oatpp access_log ([ed276b3](https://github.com/jolibrain/deepdetect/commit/ed276b30385be690923404f4052a30fbde94e5f1))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.19.0`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.19.0`
* GPU (CUDA and TensorRT): `docker pull jolibrain/deepdetect_cpu_tensorrt:v0.19.0`
* GPU with torch backend: `docker pull jolibrain/deepdetect_gpu_torch:v0.19.0`
* All images available on https://hub.docker.com/u/jolibrain

## v0.18.0 (2021-06-11)

### Features

* **build:** CMake config file to link with dede ([dd71a35](https://github.com/jolibrain/deepdetect/commit/dd71a35df831bab5382e4ee5885b425d5364a3b9))
* **ml:** add multigpu support for external native models ([90dcadd](https://github.com/jolibrain/deepdetect/commit/90dcaddc064a17275ddb709a4fe26ee690c7fc58))
* **ml:** inference for GAN generators with TensorRT backend ([c93188c](https://github.com/jolibrain/deepdetect/commit/c93188c7a89d7efbea0269345e32c90df29ef74a))
* **ml:** python script to trace timm vision models ([055fdfe](https://github.com/jolibrain/deepdetect/commit/055fdfe49d08a99b6b9379d3e2863dfff9ff8c1c))
* **predict:** add best_bbox for torch, trt, caffe, ncnn backend ([7890401](https://github.com/jolibrain/deepdetect/commit/7890401e1751d3ca855a48a1a5badd48fcac833f))
* **torch:** add dataloader_threads in API ([74a036d](https://github.com/jolibrain/deepdetect/commit/74a036d58b98059f4592102b7e54d90490773258))
* **torch:** add multigpu for torch models ([447dd53](https://github.com/jolibrain/deepdetect/commit/447dd532c8e8d996675a091c7f3875fecd793aed))
* **torch:** support detection models in chains ([7bb9705](https://github.com/jolibrain/deepdetect/commit/7bb9705fa4eeac3af34e0dd8bc94eab0224fc120))
* **TRT:** port to TensorRT 21.04/7.2.3 ([4377451](https://github.com/jolibrain/deepdetect/commit/4377451dcbad488d3ee30a6083a3f82fdee2b196))

### Bug Fixes

* moving back to FAISS master ([916338b](https://github.com/jolibrain/deepdetect/commit/916338b9611d7285dea6dec92cfdd6d3699d37dc))
* **build:** add required definitions and include directory for building external dd api ([a059428](https://github.com/jolibrain/deepdetect/commit/a059428357b01836f9efa0c83be0e79549d9774c))
* **build:** do not patch/rebuild tensorrt if not needed ([bfd29ec](https://github.com/jolibrain/deepdetect/commit/bfd29ec071207cb9d528c462046889f8a6cdcd3c))
* **build:** torch 1.8 with cuda 11.3 string_view patch ([5002308](https://github.com/jolibrain/deepdetect/commit/50023087bda036118b18c2fd8733a991be3ab39b))
* **chain:** fixed_size crops now work at the edges of images ([8e38e35](https://github.com/jolibrain/deepdetect/commit/8e38e35fc242db0459664ba13e90b0c16f18b5b5))
* **dto:** allow scale input param to be either bool for csv/csvts or float for img ([168fc7c](https://github.com/jolibrain/deepdetect/commit/168fc7cb0c1b018c7408cba01184543e89b64c58))
* **log:** typo in ncnn model log ([0163b02](https://github.com/jolibrain/deepdetect/commit/0163b02de3639fce9e9746335f7a96188b99ffa2))
* **ncnn:** fix ncnnapi deserialization error ([089aacd](https://github.com/jolibrain/deepdetect/commit/089aacde7693435b2c90952d4961b8d41d9668ea))
* **ncnn:** fix typo in ut ([893217b](https://github.com/jolibrain/deepdetect/commit/893217b1de938d2465ee72d14dd168ffed1a7800))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.18.0`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.18.0`
* GPU (CUDA and TensorRT): `docker pull jolibrain/deepdetect_cpu_tensorrt:v0.18.0`
* GPU with torch backend: `docker pull jolibrain/deepdetect_gpu_torch:v0.18.0`
* All images available on https://hub.docker.com/u/jolibrain

## v0.16.0 (2021-04-23)
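Each release above lists its images as `docker pull <prefix>/<name>:<tag>`, where the prefix is either the `jolibrain` Docker Hub namespace or, in later releases, the `docker.jolibrain.com` registry host. A small standalone helper (not DeepDetect code) to split such a reference:

```python
def parse_pull(cmd: str):
    """Split a `docker pull <prefix>/<name>:<tag>` line into its parts."""
    ref = cmd.split()[-1]                 # the image reference is the last token
    prefix, _, rest = ref.partition("/")  # namespace (jolibrain) or registry host
    name, _, tag = rest.partition(":")
    return prefix, name, tag

print(parse_pull("docker pull jolibrain/deepdetect_gpu:v0.18.0"))
# → ('jolibrain', 'deepdetect_gpu', 'v0.18.0')
```

This assumes a single `/` and a single `:` in the reference, which holds for every pull command in these notes.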
### Features

* **torch:** add confidence threshold for classification ([0e75d88](https://github.com/jolibrain/deepdetect/commit/0e75d88fb949fc2e0e23ed744b5752df8b581d5a))
* **torch:** add more backbones to traced detection models ([f4d05e1](https://github.com/jolibrain/deepdetect/commit/f4d05e1ea9f419832cc4c63c5367f7615ef22b2f))
* **torch:** allow FP16 inference on GPU ([705d3d7](https://github.com/jolibrain/deepdetect/commit/705d3d77c8f325d7707cafb422e948b9cc3ac7f7))
* **torch:** madgrad optimizer ([0657d82](https://github.com/jolibrain/deepdetect/commit/0657d82cd05d575cb6d45c2f122946626d7457a8))
* **torch:** training of detection models on backend torch ([b920999](https://github.com/jolibrain/deepdetect/commit/b9209991a4e44a45d9bacaed182fd7ecacaed369))

### Bug Fixes

* **torch:** default gradient clipping to true when using madgrad ([5979019](https://github.com/jolibrain/deepdetect/commit/5979019c27cb5e84ddcb38f40bbd962c32d7003f))
* remove dirty git flag on builds ([6daa4f5](https://github.com/jolibrain/deepdetect/commit/6daa4f5343fb31afbf0efd7330da7513b652e539))
* service names were not always case-insensitive ([bee3183](https://github.com/jolibrain/deepdetect/commit/bee318356c2bf056247073c73f580016970f379f))
* **chains:** cloning of image crops in chains ([2e62b7e](https://github.com/jolibrain/deepdetect/commit/2e62b7e6f3f75d2de08e8c6088c5a2da7b320d39))
* **ml:** refinedet image dimensions configuration via API ([20d56e4](https://github.com/jolibrain/deepdetect/commit/20d56e4ac6ab4691c32187137b520996160c8d59))
* **TensorRT:** fix some memory allocation weirdness in trt backend ([4f952c3](https://github.com/jolibrain/deepdetect/commit/4f952c3fbc2f8da03ebc66644e125576d9b12fee))
* **timeseries:** throw if no data found ([a95e7f9](https://github.com/jolibrain/deepdetect/commit/a95e7f936c35cbe5cf24779fe5e899667b7f6e6c))
* **torch:** allow partial or mismatching weights loading only if finetuning ([23666ea](https://github.com/jolibrain/deepdetect/commit/23666ea49ece302477e1f2d8f88edc41366ff213))
* **torch:** fix underflow in CSVTS::serialize_bounds ([c8b11b6](https://github.com/jolibrain/deepdetect/commit/c8b11b66b4b264ac16b3b2357fbd66293c01f99d))
* **torch:** fix very long ETA with iter_size != 1 ([0c716a6](https://github.com/jolibrain/deepdetect/commit/0c716a60b2742c70ad715ad4e9b23a3f4d035a77))
* **torch:** parameters are added only once to solver during traced model training ([86cbcf5](https://github.com/jolibrain/deepdetect/commit/86cbcf5f41f868a6472bb3df46015db34b61f1a2))

### Docker images:

* CPU version: `docker pull jolibrain/deepdetect_cpu:v0.16.0`
* GPU (CUDA only): `docker pull jolibrain/deepdetect_gpu:v0.16.0`
* GPU (CUDA and TensorRT): `docker pull jolibrain/deepdetect_cpu_tensorrt:v0.16.0`
* GPU with torch backend: `docker pull jolibrain/deepdetect_gpu_torch:v0.16.0`
* All images available on https://hub.docker.com/u/jolibrain

## v0.15.0 (2021-03-26)

### Features

* **nbeats:** default backcast loss coeff to zero, allows very short forecast length to learn smoothly ([db17a41](https://github.com/jolibrain/deepdetect/commit/db17a41401b037187b5ccf2e54464e3f6647e40d))
* **timeseries:** add MAE and MSE metrics
([847830d](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F847830d8f6a011be05763b36fbf7240dd6d867e6))\r\n* **timeseries:** do not output per serie metrics as a default, add prefix _all for displaying all metrics ([5b6bc4e](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5b6bc4e19274595741e8fd11cbfd326b0497b79f))\r\n* **torch:** model publishing with the platform ([da14d33](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fda14d33affb362aa869367fb748d5dbac1d73a10))\r\n* **torch:** save last model at training service interruption ([b346923](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fb34692395ee6c0d03b6a378d2b454a1479e52e76))\r\n* **torch:** SWA for RANGER\u002Ftorch (https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05407) ([74cf54c](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F74cf54cce30b791def7712eabd0c93c31eebf91b))\r\n* **torch\u002Fcsvts:** create db incrementally ([4336e89](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F4336e893efe3c41d97b0199d300c5461fde55776))\r\n\r\n\r\n### Bug Fixes\r\n\r\n* **caffe\u002Fdetection:** fix rare spurious detection decoding, see bug 1190 ([94935b5](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F94935b5a6c9a4ab9321cca52d5050f3b520e9ff7))\r\n* **chore:** add opencv imgcodecs explicit link ([8ff5851](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F8ff585140f8784e2a91a955c53d10fcb0917369d))\r\n* compile flags typo ([8f0c947](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F8f0c947eefad3bde0defae52f4b85317a0e98f50))\r\n* docker cpu link in readme ([1541dcc](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F1541dccfdbede08ffd0ce466f7f27a171d6647a9))\r\n* tensorrt tests on Jetson nano 
([25b12f5](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F25b12f573d6894a24d233a30ee85092327e0d96f))\r\n* **nbeats:** make seasonality block work ([d035c79](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd035c794822f57be5d2aad57403cc2d7ba06738f))\r\n* **torch:** display msg if resume fails; also fail if no best_model.txt file ([d8c5418](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd8c541838713ee2922e3d262c24fcf0bf058ce1a))\r\n\r\n### Docker images:\r\n\r\n* CPU version: `docker pull jolibrain\u002Fdeepdetect_cpu:v0.15.0`\r\n* GPU (CUDA only): `docker pull jolibrain\u002Fdeepdetect_gpu:v0.15.0`\r\n* GPU (CUDA and TensorRT): `docker pull jolibrain\u002Fdeepdetect_cpu_tensorrt:v0.15.0`\r\n* GPU with torch backend: `docker pull jolibrain\u002Fdeepdetect_gpu_torch:v0.15.0`\r\n* All images available on https:\u002F\u002Fhub.docker.com\u002Fu\u002Fjolibrain\r\n","2021-03-26T10:13:50",{"id":268,"version":269,"summary_zh":270,"released_at":271},315499,"v0.14.0","\r\n\r\n### Features\r\n\r\n* **bench:** Add parameters for torch image backend ([5d24f3d](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5d24f3d4665c0c7cd21bc2ba84643c6f7830735f))\r\n* **ml:** ViT support for Realformer from https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.11747v2 ([5312de7](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5312de770eb16408d7bea8ffbc4a6b24f35a95c9))\r\n* **nbeats:** add parameter coefficient to backcast loss ([35b3c31](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F35b3c313fd0f122c788969e93e5ffa476150e8ea))\r\n* **torch:** add inference for torch detection models ([516eeb6](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F516eeb6a56ac36aefbb3f24624344236cbb20f39))\r\n* **torch:** Sharpness Aware Minimization (2010.01412) 
([45a8408](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F45a84087b5321a5c3eebcb8a6d53975d1b544478))\r\n* **torch:** support for multiple test sets ([c0dcec9](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fc0dcec9a51f86cf904809c492fc175b5951dae5b))\r\n* **torch:** temporal transformers (encoder only) (non autoreg) ([3538eb7](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F3538eb78a721b477f377d6798f707796be8319e0))\r\n* CSV parser support for quotes and string labels ([efa4c79](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fefa4c79e9fe21e9074f17ca20b020f97bd2112cb))\r\n* new cropping action parameters in chains ([6597b53](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F6597b53671b19022b2b7b32f4e2a6e0a29136f21))\r\n* running custom methods from jit models ([73d1eef](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F73d1eef00b0b41083237e7061d11ff8d4156f612))\r\n* **torch\u002Ftxt:** display msg if vocab not found ([31837ec](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F31837eca5907c4cac7aa3d42f0d12b474ad673f9))\r\n* SSD MAP-x threshold control ([acd252a](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Facd252a2a448f0c1f8c497ae665cc5be7649f35d))\r\n* use oatpp::DTO to parse img-input-connector APIData ([33aee72](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F33aee72ad4450a1080dd53f40a5c2cea14a304b8))\r\n\r\n\r\n### Bug Fixes\r\n\r\n* **build:** pytorch with custom spdlog ([1fb19a0](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F1fb19a02c700698a4dc262a6c81ef83f8c3623a6))\r\n* **caffe\u002Fcudnn:** force default engine option in case of cudnn not compiled in 
([b6dec4e](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fb6dec4e30dc166e9f37246a69bb15a0d9efc6c3e))\r\n* **chore:** typo when trying to use syslog ([374e6c4](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F374e6c4b48a18f4e41eacef3cb5e13bcf325b0f7))\r\n* **client:** Change python package name to dd_client ([b96b0fa](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fb96b0fade15a46b6a89bc830f00f650b4ca7242b))\r\n* **csvts:** read from memory ([6d1dba8](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F6d1dba85fc584d7bf75d30f63c506c6e00aaa07e))\r\n* **csvts:** throw proper error when a csv file is passed at training time ([90aab20](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F90aab201df316dea23f1bc47c5ca5d300d95f12c))\r\n* **docker:** ensure pip3 is working on all images ([a374a58](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fa374a5898e73cddb0b76b4309ad59c4329359571))\r\n* **ncnn:** update innerproduct so that it does not pack data ([9d88187](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F9d88187381982c0b49170aa749caf8581532128c))\r\n* **torch:** add error message when repository contains multiple models ([a08285f](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fa08285f51b2f614d24fea08d6c62edf3c9a47e74))\r\n* -Werror=deprecated-copy gcc 9.3 ([0371cfa](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F0371cfa03bf0c42ce3a643c198e7154d426c7892))\r\n* action cv macros with opencv >= 3 ([37d2926](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F37d292683a7a8039ec77cd66ab16f21342b5f28c))\r\n* caffe build spdlog dependency ([62e781a](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F62e781a4a2f97d420d3a34cbb16da40d27d6199c))\r\n* docker \u002Fopt\u002Fmodels 
permissions ([82e2695](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F82e269589a9a8160eb1c63fbde2f8b372f0838d6))\r\n* prevent softmax after layer extraction ([cbee659](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fcbee65945d46ee5f304519bd760d92be3b00eb2f))\r\n* tag syntax for github releases ([4de3807](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F4de3807adfb13957358e41b95a09bd9ee0533a09))\r\n* torch backend CPU build and tests ([44343f6](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F44343f6236d9afc70f931b7a762d4df591325abf))\r\n* typo in oatpp chain HTTP endpoint ([955b178](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F955b178b09a015b1f147449f277c0e4945c48d3a))\r\n* **torch:** gather torchscript model parameters correctly ([99e4dbe](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F99e4dbe34e8845331a95dec3b4dd7bad3d11b03b))\r\n* **torch:** set seed of torchdataset during training ([d02404a](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd02404a6120ef6ec599accc63e8bc25c27072e7e))\r\n* **torch\u002Franger:** allow not to use lookahead ([d428d08](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd428d08e5bd40166f89","2021-03-05T10:16:39",{"id":273,"version":274,"summary_zh":275,"released_at":276},315500,"v0.13.0","\r\n\r\n### Features\r\n\r\n* support for batches for NCNN image models ([b85d79e](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fb85d79e673ac28d0f1bcf65a1773990bff6cd0b4))\r\n* **ml:** retain_graph control via API for torch autograd ([d109558](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd109558f20c7aba2eda6ef16eb884845bf816638))\r\n* **ml:** torch image basic data augmentation 
([b9f8525](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fb9f85251380c0483268488c07990db3a68f5a4ee))\r\n* **ncnn:** use master from tencent\u002Fncnn ([044e181](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F044e1811223c1c6843a35363cf4b29aed5394003))\r\n* upgrade oatpp to pre-1.2.5 ([596f6f4](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F596f6f4c4098468ff3b2f3276bf5e6f8043edb27))\r\n\r\n\r\n### Bug Fixes\r\n\r\n* **torch:** csvts forecast mode needs sequence of length backcast during predict ([4c89a1c](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F4c89a1c0e22a34afaf2433e675fb7fb9a32add43))\r\n* add missing spdlog patch ([4d0a4fa](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F4d0a4fa1f281a1ab2067a3801ed49c0122c604a7))\r\n* caffe linkage with our spdlog ([967fdef](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F967fdefb4497b71b3274ade8d64f50c019f3f034))\r\n* copy .git in docker image builder ([570323d](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F570323d91518bb2f62e7555da80549e6521598cf))\r\n* deactivate the csvts NCNN test when caffe is not built ([5a5c8f1](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5a5c8f12227a629f656ca016e28d2a9f35d1e3d1))\r\n* missing support for parent_id in chains with Python client ([a5fad50](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fa5fad50b2c698418f81cab9d714e4ddd13081839))\r\n* NCNN chain with images and actions ([38b1d07](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F38b1d076ddddac0ef126e0eb63f08e019b62f15c))\r\n* throw if hard image read error in caffe classification input ([f1c0d09](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Ff1c0d09d511ea63f65def260cce5577455fb897e))\r\n* **doc:** similarity 
search_nn number of results API ([5eaf343](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5eaf3435d39db77ee0a4fed725eb257718805a45))\r\n* **torch:** remove potential segfault in csvts connector ([ba96b4e](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fba96b4ea4ff6315194e0d9150934a380bbac265e))\r\n\r\n### Docker images:\r\n\r\n* CPU version: `docker pull jolibrain\u002Fdeepdetect_cpu:v0.13.0`\r\n* GPU (CUDA only): `docker pull jolibrain\u002Fdeepdetect_gpu:v0.13.0`\r\n* GPU (CUDA and TensorRT): `docker pull jolibrain\u002Fdeepdetect_cpu_tensorrt:v0.13.0`\r\n* GPU with torch backend: `docker pull jolibrain\u002Fdeepdetect_gpu_torch:v0.13.0`\r\n* All images available on https:\u002F\u002Fhub.docker.com\u002Fu\u002Fjolibrain\r\n","2021-01-22T13:32:36",{"id":278,"version":279,"summary_zh":280,"released_at":281},315501,"v0.12.0","### Highlights\r\n\r\n- Support for Vision Transformer (ViT) image classification models with libtorch\r\n- Support for native Torch vision classification models\r\n- Improved N-BEATS for multivariate time-series\r\n- New OATPP webserver interface\r\n\r\n### Features\r\n\r\n* switch back to cppnet-lib by default ([ebe3b15](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Febe3b15e7e3065727b4649b605c83caf8a53dc70))\r\n* **torch:** native models can load weights from any jit file ([69af7f4](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F69af7f4b7279dc192b3772271fef2682ca9df36f))\r\n* **torch:** update libtorch to 1.7.1 ([41d5375](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F41d5375961a4f7d62c716f7d010a480e566e1ca6))\r\n* add access log for oat++ ([4291bf8](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F4291bf8db2030aa5f07ef0a87ef6207b39004845))\r\n* add cudnn cmake find package 
([5983ffd](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F5983ffdc7551605118e7b5cd17e7faaa05e7f45a))\r\n* add some more error messages to log ([e4ec772](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fe4ec7728424b1d593141923a093079f7d84b3460))\r\n* enable backtrace on segfault for oatpp ([96b2184](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F96b2184656eb54d2e14d7277a44080a649f7fe40))\r\n* enhance cppnetlib req timing logs ([6fc3e76](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F6fc3e76f5b156161c1f695cfe688c7b215cecda4))\r\n* also gives per-target error, not only global eucl, when selecting measure: eucll ([dd2fc79](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fdd2fc794be77aefc0690d1cf772c24f8f688daa1))\r\n* introduce oatpp interface for deepdetect ([04b79f4](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F04b79f47bf3932ab25a33aaab2bce11212b1c3d4))\r\n* print stacktrace on segfault ([11ab359](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F11ab359a35c9faf1264a6763ac19e61bcd269a4a))\r\n* provide predict\u002Ftransform duration in ms ([0197991](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F0197991b6d5044a9ad41f18647de2483260fc96b))\r\n* service stats provide predict and transform duration total ([9a24125](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F9a24125e5916a29f7f3f3842134552a0e83c1be7))\r\n* track oatpp request timing ([68749d3](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F68749d398177bbde66f08740ec9e38568f557c25))\r\n* **ml:** image regression model training with libtorch ([968c551](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F968c5512bb72ba49e91fe25bbb38879b4eb5e90f))\r\n* **tools:** trace_torchvision can trace models for 
loading weights with dd native models ([c11b551](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fc11b551e0a6ac4507aa894ac76c9410852e97fd6))\r\n* **torch:** Add multigpu support for native models training ([33cd1df](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F33cd1df023585917acb7e1fd0719859c52bb2c22))\r\n* **torch:** Add native resnet support ([0a01e57](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F0a01e57a204fc8af815eafe42372515565431111))\r\n* **torch:** add wide resnet and resnext to the vision models ([aba6efb](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Faba6efbf20a3af4f54315ca5764931acf39e9b71))\r\n* use jolibrain fork of faiss ([8eb6e53](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F8eb6e537d24f7da74bca111d726f15b2f5f99a30))\r\n* use oatpp by default ([c1d6620](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fc1d66202768a9b72ee6a30664ac423015c8f19f3))\r\n* vision transformer (ViT) multi-gpu with torch ([88b65c2](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F88b65c293295b3c56319914806bebeef922f0162))\r\n* **graph:** correct data size computation if different outputs of an op have different sizes ([288dd5b](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F288dd5be22b5de83719043dd6ecedc095c75e16c))\r\n* **ml:** added vision transformer (ViT) as torch native template ([72c0269](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F72c02696a64d7309ecd2b94b1142fe55fe2dd641))\r\n* **ml:** torch db stores encoded images instead of tensors ([e7f3c19](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fe7f3c19047dde87c5ec0ccb06d22395b5707fca2))\r\n* **ml:** torch regression and classification models training from list of files without db 
([e049caa](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fe049caacc6f3cae7f1d286b224a295a95ab801e3))\r\n* **torch:** clip gradient options for all optimizers ([c2ddee5](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fc2ddee510a6ce68d5615449744ad0e3c7ae79e48))\r\n* **torch:** implement resume mllib option for torchlib: if true, reuse previous solver state ([02e3177](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002F02e3177ef77ef1cec506ab683b26b16963339d32))\r\n* **torch\u002Fnbeats:** allow different sizes for backcast and forecast; also implements a minimal change in csvtstorchinputconn to forecast signals instead of predicting labels ([d4e27f3](https:\u002F\u002Fgithub.com\u002Fjolibrain\u002Fdeepdetect\u002Fcommit\u002Fd4e27f37b721f9ac5e9fd8932ddde2622ce5c3d0))\r\n* **torch\u002Ftimeseries:** add (p)ReLU layers in recurrent template, allowing mlp-like embeddings to be computed before LSTM layers ([930be","2021-01-08T16:26:08"]