[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-oarriaga--paz":3,"tool-oarriaga--paz":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 
既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":96,"env_ram":95,"env_deps":97,"category_tags":103,"github_topics":104,"view_count":112,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":113,"updated_at":114,"faqs":115,"releases":146},453,"oarriaga\u002Fpaz","paz","Hierarchical perception library in Python for pose estimation, object detection, instance segmentation, keypoint estimation, face recognition, etc.","paz 是一个基于 Python 的层级感知库，专为自主系统设计，旨在解决计算机视觉任务中模型分散、集成困难的问题。它集成了姿态估计、目标检测、实例分割、关键点检测及人脸识别等多种核心功能，为开发者提供了一站式的视觉感知解决方案。\n\n在实际应用中，paz 能够处理从简单的物体定位到复杂的人体动作捕捉等多种任务。它支持 2D\u002F3D 关键点估计、6D 姿态估计以及实时语义分割等高级功能，甚至能用于分析人脸情绪或手部精细动作。通过提供现成的实时演示和训练脚本，paz 大大降低了视觉算法落地的门槛，让用户无需从零开始构建复杂的处理流程。\n\npaz 特别适合计算机视觉开发者和研究人员使用。对于开发者而言，丰富的预训练模型和模块化设计能加速应用开发；对于研究人员，其层级化结构便于进行算法实验与创新，例如探索潜在关键点或改进空间变换网络。总体而言，paz 以其全面的功能覆盖和层级化的架构设计，成为了一个高效且灵活的视觉开发工具箱。","# (PAZ) Perception for Autonomous Systems\n[![Publish Website](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Factions\u002Fworkflows\u002Fpublish-website.yml\u002Fbadge.svg?branch=master)](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Factions\u002Fworkflows\u002Fpublish-website.yml)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_785e6f7367aa.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpypaz)\n\nHierarchical perception library in Python.\n\n## Selected examples:\nPAZ is used in the following examples (links to **real-time demos** and training scripts):\n\n| [Probabilistic 2D keypoints](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fprobabilistic_keypoint_estimation)| [6D head-pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fpose_estimation)  | [Object detection](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fobject_detection)|\n|---------------------------|--------------------------| ------------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_0a8742f122f3.png\" width=\"425\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_d116ef1d68fe.png\" width=\"440\">| \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_110c9a68ff53.png\" width=\"430\">|\n\n| [Emotion classifier](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fface_classification) | [2D keypoint estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fkeypoint_estimation)   | [Mask-RCNN (in-progress)](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmask_rcnn\u002Fexamples\u002Fmask_rcnn)  |\n|---------------------------|--------------------------| -----------------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_f08b4ed82d40.gif\" width=\"250\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_37925f5fc77d.png\" width=\"410\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_00fa8a434574.png\" width=\"400\">|\n\n|[Semantic segmentation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fsemantic_segmentation) | [Hand pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhand_pose_estimation) |  [2D Human pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhuman_pose_estimation_2D) |\n|---------------------------|-----------------------|-----------------|\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_0a6f7e78f228.png\" width=\"325\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_d1029a8a41d2.jpg\" width=\"330\"> |\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_52da48b68285.gif\" width=\"250\"> | \n\n| [3D keypoint discovery](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fdiscovery_of_latent_keypoints)     | [Hand closure detection](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhand_pose_estimation)  | [6D pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fpix2pose) |\n|---------------------------|-----------------------| --------------------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_0fda458e7903.png\" width=\"335\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_d4d5acef0bde.gif\" width=\"250\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_a61e900e564d.jpg\" width=\"330\"> |\n\n| [Implicit orientation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fimplicit_orientation_learning)  | [Attention (STNs)](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fspatial_transfomer_networks) | [Haar Cascade detector](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhaar_cascade_detectors) |\n|---------------------------|-----------------------|-----------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_b7eddd572cf8.png\" width=\"335\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_f802f81547f0.png\" width=\"340\"> | \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_53dbb74d12eb.png\" width=\"330\"> |\n\n| [Eigenfaces](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Feigenfaces) |[Prototypical Networks](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fprototypical_networks) | [3D Human pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fhuman_pose_estimation_3D) |\n|---------------------------|-----------------------|-----------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_556ff39e4e80.png\" width=\"325\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_257d10399e33.png\" width=\"330\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_a7cfb5c8663b.gif\" width=\"250\"> |\n\n|[MAML](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fmaml)| | |\n|---------------------------|-----------------------|-----------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_526d69fab05f.png\" width=\"325\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_89bb17c15d89.png\" width=\"330\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_4c6066d66e4e.png\" width=\"330\"> |\n\n\nAll models can be re-trained with your own data (except for Mask-RCNN, we are working on it [here](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmask_rcnn)).\n\n## Table of Contents\n\u003C!--ts-->\n* [Examples](#selected-examples)\n* [Installation](#installation)\n* [Documentation](#documentation)\n* [Hierarchical APIs](#hierarchical-apis)\n    * [High-level](#high-level) | [Mid-level](#mid-level) | [Low-level](#low-level)\n* [Additional functionality](#additional-functionality)\n    * [Implemented models](#models)\n* [Motivation](#motivation)\n* [Citation](#citation)\n* [Funding](#funding)\n\u003C!--te-->\n\n## Installation\nPAZ has only **three** dependencies: [Tensorflow2.0](https:\u002F\u002Fwww.tensorflow.org\u002F), [OpenCV](https:\u002F\u002Fopencv.org\u002F) and [NumPy](https:\u002F\u002Fnumpy.org\u002F).\n\nTo install PAZ with pypi run:\n```\npip install pypaz --user\n```\n\n## Documentation\nFull documentation can be found [https:\u002F\u002Foarriaga.github.io\u002Fpaz\u002F](https:\u002F\u002Foarriaga.github.io\u002Fpaz\u002F).\n\n## Hierarchical APIs\nPAZ can be used with three different API levels which are there to be helpful for the user's specific application.\n\n## High-level\nEasy out-of-the-box prediction. For example, for detecting objects we can call the following pipeline:\n\n``` python\nfrom paz.applications import SSD512COCO\n\ndetect = SSD512COCO()\n\n# apply directly to an image (numpy-array)\ninferences = detect(image)\n```\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_d3d4c751e812.png\" width=\"1000\">\n\u003C\u002Fp>\n\n\n\nThere are multiple high-level functions a.k.a. ``pipelines`` already implemented in PAZ [here](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fpipelines). 
\n\n## Mid-level\nWhile the high-level API is useful for quick applications, it might not be flexible enough for your specific purpose. Therefore, in PAZ we can build high-level functions using our mid-level API.\n\n### Mid-level: Sequential\nIf your function is sequential you can construct it using ``SequentialProcessor``. In the example below we create a data-augmentation pipeline:\n\n``` python\nfrom paz.abstract import SequentialProcessor\nfrom paz import processors as pr\n\naugment = SequentialProcessor()\naugment.add(pr.RandomContrast())\naugment.add(pr.RandomBrightness())\naugment.add(pr.RandomSaturation())\naugment.add(pr.RandomHue())\n\n# you can now use this as a normal function\nimage = augment(image)\n```\n\u003Cp align=\"center\">\n   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_a3e1c65a3da1.png\" width=\"800\">\n\u003C\u002Fp>\n\nYou can also add **any function**, not only those found in ``processors``. For example, we can pass a numpy function to our original data-augmentation pipeline:\n\n``` python\nimport numpy as np\n\naugment.add(np.mean)\n```\nThere are multiple functions a.k.a. ``Processors`` already implemented in PAZ [here](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fprocessors).\n\nUsing these processors we can build more complex pipelines, e.g. **data augmentation for object detection**: [``pr.AugmentDetection``](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fpipelines\u002Fdetection.py#L46)\n\n\u003Cp align=\"center\">\n   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_19d39f158607.png\" width=\"800\">\n\u003C\u002Fp>
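Because a ``SequentialProcessor`` is itself a plain callable, pipelines like these can also be nested to build larger ones. A minimal sketch under that assumption (the ``np.mean`` example above suggests any callable is accepted; all names here are chosen for illustration):

``` python
from paz.abstract import SequentialProcessor
from paz import processors as pr

# a small color-augmentation pipeline
color_augment = SequentialProcessor()
color_augment.add(pr.RandomContrast())
color_augment.add(pr.RandomBrightness())

# nest it inside a larger pipeline like any other callable
full_augment = SequentialProcessor()
full_augment.add(color_augment)
full_augment.add(pr.RandomHue())

image = full_augment(image)
```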
\n\n### Mid-level: Explicit\nNon-sequential pipelines can also be built by inheriting from ``Processor``. In the example below we build an emotion classifier from **scratch** using our high-level and mid-level functions.\n\n``` python\nfrom paz.applications import HaarCascadeFrontalFace, MiniXceptionFER\nimport paz.processors as pr\n\nclass EmotionDetector(pr.Processor):\n    def __init__(self):\n        super(EmotionDetector, self).__init__()\n        self.detect = HaarCascadeFrontalFace(draw=False)\n        self.crop = pr.CropBoxes2D()\n        self.classify = MiniXceptionFER()\n        self.draw = pr.DrawBoxes2D(self.classify.class_names)\n\n    def call(self, image):\n        boxes2D = self.detect(image)['boxes2D']\n        cropped_images = self.crop(image, boxes2D)\n        for cropped_image, box2D in zip(cropped_images, boxes2D):\n            box2D.class_name = self.classify(cropped_image)['class_name']\n        return self.draw(image, boxes2D)\n\ndetect = EmotionDetector()\n# you can now apply it to an image (numpy array)\npredictions = detect(image)\n```\n\u003Cp align=\"center\">\n   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_130ce4999f53.png\" width=\"800\">\n\u003C\u002Fp>\n\n``Processors`` allow us to easily compose, compress and extract away parameters of functions. However, most processors are built using our low-level API (backend), shown next.\n\n## Low-level\n\nMid-level processors are mostly built from small backend functions found in: [boxes](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fboxes.py), [cameras](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fcamera.py), [images](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fbackend\u002Fimage), [keypoints](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fkeypoints.py) and [quaternions](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fquaternion.py).\n\nThese functions can be found in ``paz.backend``:\n\n``` python\nfrom paz.backend import boxes, camera, image, keypoints, quaternion\n```\nFor example, you can use them in your scripts to load or show images:\n\n``` python\nfrom paz.backend.image import load_image, show_image\n\nimage = load_image('my_image.png')\nshow_image(image)\n```\n\n## Additional functionality\n\n* PAZ has [built-in messages](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fabstract\u002Fmessages.py) e.g. ``Pose6D`` for easier data exchange with other frameworks such as [ROS](https:\u002F\u002Fwww.ros.org\u002F).\n\n* There are custom [callbacks](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Foptimization\u002Fcallbacks.py) e.g. mAP evaluation for object detectors while training.\n\n* PAZ comes with [data loaders](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fdatasets) for multiple datasets:\n    [OpenImages](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fopen_images.py), [VOC](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fvoc.py), [YCB-Video](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fobject_detection\u002Fdatasets\u002Fycb_video.py), [FAT](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Ffat.py), [FERPlus](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fferplus.py), [FER2013](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Ffer.py), [CityScapes](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fcityscapes.py).\n\n* We have automatic [batch creation and dispatching wrappers](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fabstract\u002Fsequence.py) for an easy connection between your ``pipelines`` and tensorflow generators; a rough sketch of this glue follows below. Please look at the [tutorials](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Ftutorials) for more information.
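The sketch below illustrates, by hand, the kind of glue those wrappers automate: running a PAZ pipeline inside a generator and handing it to ``tf.data``. It is only an illustration under stated assumptions: the dataset is synthetic, the ``float32`` cast assumes the augmentation returns a numeric array, and ``output_signature`` requires a reasonably recent TensorFlow 2 release. The built-in wrappers in ``paz.abstract`` remove this boilerplate:

``` python
import numpy as np
import tensorflow as tf
from paz.abstract import SequentialProcessor
from paz import processors as pr

augment = SequentialProcessor()
augment.add(pr.RandomContrast())
augment.add(pr.RandomBrightness())

# hypothetical in-memory dataset of HxWx3 uint8 images
images = [np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
          for _ in range(32)]

def generate():
    for image in images:
        # run the PAZ pipeline on each sample before batching
        yield np.asarray(augment(image), dtype=np.float32)

dataset = tf.data.Dataset.from_generator(
    generate,
    output_signature=tf.TensorSpec(shape=(128, 128, 3), dtype=tf.float32))
dataset = dataset.batch(8)  # ready to be consumed e.g. by model.fit
```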
### Models\n\nThe following models are implemented in PAZ and they can be trained with your own data:\n\n| Task (link to implementation)    |Model (link to paper)  |\n|---------------------------:|-----------------------| \n|[Object detection](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fssd300.py)|[SSD-300](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.02325)|\n|[Object detection](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fssd512.py)|[SSD-512](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.02325)|\n|[Probabilistic keypoint est.](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fprobabilistic_keypoint_estimation\u002Fmodel.py) |[Gaussian Mixture CNN](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fpublications\u002F2018\u002FNeumann18a\u002F)   |\n|[Detection and Segmentation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmask_rcnn\u002Fexamples\u002Fmask_rcnn)  |[MaskRCNN (in progress)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06870) |\n|[Keypoint estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fhrnet.py)|[HRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.07919)|\n|[Semantic segmentation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fsegmentation\u002Funet.py)|[U-NET](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597)|\n|[6D Pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fkeypointnet.py)          |[Pix2Pose](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.07433)          |\n|[Implicit orientation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fimplicit_orientation_learning\u002Fmodel.py)        |[AutoEncoder](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.01275)            |\n|[Emotion classification](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fclassification\u002Fxception.py)       |[MiniXception](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.07557)           |\n|[Discovery of Keypoints](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fkeypointnet.py)      |[KeypointNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03146)            |\n|[Keypoint estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fkeypointnet.py)  |[KeypointNet2D](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03146)|\n|[Attention](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fspatial_transfomer_networks\u002FSTN.py)                   |[Spatial Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02025)   |\n|[Object detection](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fhaar_cascade.py)            |[HaarCascades](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1023\u002FB:VISI.0000013087.49260.fb)  |
\n|[2D Human pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fpose_estimation\u002Fhigher_hrnet.py)            |[HigherHRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.10357)  |\n|[3D Human pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fsimplebaselines.py) | [Simple Baseline](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.03098) |\n|[Hand pose estimation](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fdetnet.py)            |[DetNet](https:\u002F\u002Fvcai.mpi-inf.mpg.de\u002Fprojects\u002F2020-cvpr-hands\u002F)  |\n|[Hand closure classification](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fiknet.py)            |[IKNet](https:\u002F\u002Fvcai.mpi-inf.mpg.de\u002Fprojects\u002F2020-cvpr-hands\u002F)  |\n|[Hand detection](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fssd512.py)            |[SSD512](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.02325)|\n|[Few-shot classification](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fclassification\u002Fprotonet.py)| [Prototypical Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.05175)|\n|[Few-shot classification](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fmaml\u002Fmaml.py)| [Model Agnostic Meta Learning (MAML)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.03400)|\n\n\n## Motivation\nEven though there are multiple high-level computer vision libraries in different deep learning frameworks, I felt there was not a consolidated deep learning library for robot perception in my framework of choice (Keras).\n\nAs a final remark, I would like to mention that I feel we might tend to forget the great effort and emotional investment behind every (open-source) project.\nI feel it is easy to blur a company name with the individuals behind its work, and to forget that there is someone feeling our criticism and our praise.\nTherefore, whatever good code you find here is dedicated to the software engineers and contributors of open-source projects like PyTorch, TensorFlow and Keras.\nYou put your craft out there for all of us to use and appreciate, and we ought first to give you our thankful consideration.\n\n\n## Why the name **PAZ**?\n* The name PAZ matches its theoretical definition as an acronym for **Perception for Autonomous Systems**, where the letter S is replaced by Z to indicate that by \"System\" we mean almost anything, i.e. Z being a classical algebraic variable for an unknown element.\n\n\n## Tests and coverage\nContinuous integration is managed through [GitHub Actions](https:\u002F\u002Fgithub.com\u002Ffeatures\u002Factions) using [pytest](https:\u002F\u002Fdocs.pytest.org\u002Fen\u002Fstable\u002F).\nYou can run the tests with:\n```\npytest tests\n```\nTest coverage can be checked using [coverage](https:\u002F\u002Fcoverage.readthedocs.io\u002Fen\u002Fcoverage-5.2.1\u002F).\nYou can install coverage by calling: `pip install coverage --user`\nYou can then check the test coverage by running:\n```\ncoverage run -m pytest tests\u002F\ncoverage report -m\n```\n\n## Citation\nIf you use PAZ please consider citing it.
You can also find our paper here [https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14541](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14541).\n\n```BibTeX\n@misc{arriaga2020perception,\n      title={Perception for Autonomous Systems (PAZ)}, \n      author={Octavio Arriaga and Matias Valdenegro-Toro and Mohandass Muthuraja and Sushma Devaramani and Frank Kirchner},\n      year={2020},\n      eprint={2010.14541},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n```\n\n## Funding\nPAZ is currently developed in the [Robotics Group](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fde\u002Fueber-uns\u002Funiversitaet-bremen-arbeitsgruppe-robotik.html) of the [University of Bremen](https:\u002F\u002Fwww.uni-bremen.de\u002F), together with the [Robotics Innovation Center](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fen\u002Fstartpage.html) of the **German Research Center for Artificial Intelligence** (DFKI) in **Bremen**.\nPAZ has been funded by the German Federal Ministry for Economic Affairs and Energy and the [German Aerospace Center](https:\u002F\u002Fwww.dlr.de\u002FDE\u002FHome\u002Fhome_node.html) (DLR).\nPAZ has been used and\u002For developed in the projects [TransFIT](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fen\u002Fresearch\u002Fprojects\u002Ftransfit.html) and [KiMMI-SF](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fen\u002Fresearch\u002Fprojects\u002Fkimmi-sf\u002F).\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_2a0f0c0f9587.png\" width=\"1200\">\n\u003C\u002Fp>\n","# (PAZ) 自主系统感知\n[![Publish Website](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Factions\u002Fworkflows\u002Fpublish-website.yml\u002Fbadge.svg?branch=master)](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Factions\u002Fworkflows\u002Fpublish-website.yml)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_785e6f7367aa.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpypaz)\n\nPython 分层感知库。\n\n## 精选示例：\nPAZ 被用于以下示例中（链接指向 **实时演示** 和训练脚本）：\n\n| [概率 2D 关键点](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fprobabilistic_keypoint_estimation)| [6D 头部姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fpose_estimation)  | [目标检测](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fobject_detection)|\n|---------------------------|--------------------------| ------------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_0a8742f122f3.png\" width=\"425\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_d116ef1d68fe.png\" width=\"440\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_110c9a68ff53.png\" width=\"430\">|\n\n| [情感分类器](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fface_classification) | [2D 关键点估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fkeypoint_estimation)   | [Mask-RCNN (开发中)](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmask_rcnn\u002Fexamples\u002Fmask_rcnn)  |\n|---------------------------|--------------------------| -----------------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_f08b4ed82d40.gif\" width=\"250\">| \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_37925f5fc77d.png\" width=\"410\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_00fa8a434574.png\" width=\"400\">|\n\n|[语义分割](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fsemantic_segmentation) | [手部姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhand_pose_estimation) |  [2D 人体姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhuman_pose_estimation_2D) |\n|---------------------------|-----------------------|-----------------|\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_0a6f7e78f228.png\" width=\"325\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_d1029a8a41d2.jpg\" width=\"330\"> |\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_52da48b68285.gif\" width=\"250\"> | \n\n| [3D 关键点发现](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fdiscovery_of_latent_keypoints)     | [手部闭合检测](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhand_pose_estimation)  | [6D 姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fpix2pose) |\n|---------------------------|-----------------------| --------------------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_0fda458e7903.png\" width=\"335\"> | \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Foarriaga\u002Faltamira-data\u002Fmaster\u002Fimages\u002Fhand_closure_detection.gif\" width=\"250\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_a61e900e564d.jpg\" width=\"330\"> |\n\n| [隐式方向](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fimplicit_orientation_learning)  | [注意力机制 (STNs)](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fspatial_transfomer_networks) | [Haar 级联检测器](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fhaar_cascade_detectors) |\n|---------------------------|-----------------------|-----------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_b7eddd572cf8.png\" width=\"335\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_f802f81547f0.png\" width=\"340\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_53dbb74d12eb.png\" width=\"330\"> |\n\n| [特征脸](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Feigenfaces) |[原型网络](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Fprototypical_networks) | [3D 人体姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fhuman_pose_estimation_3D) |\n|---------------------------|-----------------------|-----------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_556ff39e4e80.png\" width=\"325\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_257d10399e33.png\" width=\"330\">  | \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_a7cfb5c8663b.gif\" width=\"250\"> |\n\n|[MAML](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fmaml)| | |\n|---------------------------|-----------------------|-----------------|\n|\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_526d69fab05f.png\" width=\"325\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_89bb17c15d89.png\" width=\"330\">| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_4c6066d66e4e.png\" width=\"330\"> |\n\n\n所有模型均可使用您自己的数据重新训练（Mask-RCNN 除外，我们正在[此处](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmask_rcnn)进行开发）。\n\n## 目录\n\u003C!--ts-->\n* [示例](#精选示例)\n* [安装](#安装)\n* [文档](#文档)\n* [分层 API](#分层-api)\n    * [高级](#高级) | [中级](#中级) | [低级](#低级)\n* [附加功能](#附加功能)\n    * [已实现的模型](#已实现的模型)\n* [动机](#动机)\n* [引用](#引用)\n* [资助](#资助)\n\u003C!--te-->\n\n## 安装\nPAZ 仅包含 **三个** 依赖项：[Tensorflow2.0](https:\u002F\u002Fwww.tensorflow.org\u002F)、[OpenCV](https:\u002F\u002Fopencv.org\u002F) 和 [NumPy](https:\u002F\u002Fnumpy.org\u002F)。\n\n要通过 pypi 安装 PAZ，请运行：\n```\npip install pypaz --user\n```\n\n## 文档\n完整文档可在 [https:\u002F\u002Foarriaga.github.io\u002Fpaz\u002F](https:\u002F\u002Foarriaga.github.io\u002Fpaz\u002F) 查看。\n\n## 分层 API\nPAZ 提供三个不同级别的 API，旨在满足用户特定的应用需求。\n\n## 高级\n开箱即用的简易预测功能。例如，为了检测目标，我们可以调用以下流水线：\n\n``` python\nfrom paz.applications import SSD512COCO\n\ndetect = SSD512COCO()\n\n# apply directly to an image (numpy-array)\ninferences = detect(image)\n```\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_d3d4c751e812.png\" width=\"1000\">\n\u003C\u002Fp>\n\n\n\nPAZ 中已经实现了多个高级函数，也称为 ``pipelines``（流水线），可以在[这里](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fpipelines)查看。这些函数是使用我们接下来要描述的中级 API 构建的。\n\n## Mid-level (中级 API)\n虽然高级 API 对于快速应用非常有用，但可能不足以满足您的特定需求。因此，在 PAZ 中，我们可以使用中级 API 来构建高级函数。\n\n### Mid-level: Sequential (中级：顺序式)\n如果您的函数是顺序执行的，可以使用 ``SequentialProcessor`` 构建顺序函数。在下面的示例中，我们创建了一个数据增强（data-augmentation）流水线：\n\n``` python\nfrom paz.abstract import SequentialProcessor\nfrom paz import processors as pr\n\naugment = SequentialProcessor()\naugment.add(pr.RandomContrast())\naugment.add(pr.RandomBrightness())\naugment.add(pr.RandomSaturation())\naugment.add(pr.RandomHue())\n\n# you can now use this now as a normal function\nimage = augment(image)\n```\n\u003Cp align=\"center\">\n   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_a3e1c65a3da1.png\" width=\"800\">\n\u003C\u002Fp>\n\n您也可以添加 **任何函数**，而不仅仅是 ``processors`` 中的函数。例如，我们可以将一个 numpy 函数传递给我们原来的数据增强流水线：\n\n``` python\naugment.add(np.mean)\n```\nPAZ 中已经实现了多个函数，也称为 ``Processors``（处理器），可以在[这里](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fprocessors)查看。\n\n使用这些处理器，我们可以构建更复杂的流水线，例如 **用于目标检测的数据增强**：[``pr.AugmentDetection``](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fpipelines\u002Fdetection.py#L46)\n\n\u003Cp align=\"center\">\n   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_19d39f158607.png\" width=\"800\">\n\u003C\u002Fp>\n\n\n### Mid-level: Explicit (中级：显式)\n非顺序流水线也可以通过继承 ``Processor`` 来构建。在下面的示例中，我们使用高级和中级函数从头构建了一个情感分类器。\n\n``` python\nfrom paz.applications import 
HaarCascadeFrontalFace, MiniXceptionFER\nimport paz.processors as pr\n\nclass EmotionDetector(pr.Processor):\n    def __init__(self):\n        super(EmotionDetector, self).__init__()\n        self.detect = HaarCascadeFrontalFace(draw=False)\n        self.crop = pr.CropBoxes2D()\n        self.classify = MiniXceptionFER()\n        self.draw = pr.DrawBoxes2D(self.classify.class_names)\n\n    def call(self, image):\n        boxes2D = self.detect(image)['boxes2D']\n        cropped_images = self.crop(image, boxes2D)\n        for cropped_image, box2D in zip(cropped_images, boxes2D):\n            box2D.class_name = self.classify(cropped_image)['class_name']\n        return self.draw(image, boxes2D)\n        \ndetect = EmotionDetector()\n# you can now apply it to an image (numpy array)\npredictions = detect(image)\n```\n\u003Cp align=\"center\">\n   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_130ce4999f53.png\" width=\"800\">\n\u003C\u002Fp>\n\n\n``Processors`` 允许我们轻松地组合、压缩和提取函数的参数。然而，大多数处理器是使用我们接下来展示的低级 API（后端）构建的。\n\n## Low-level (低级 API)\n\n中级处理器主要由以下文件中的小型后端函数构建：[boxes](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fboxes.py)、[cameras](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fcamera.py)、[images](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fbackend\u002Fimage)、[keypoints](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fkeypoints.py) 和 [quaternions](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fbackend\u002Fquaternion.py)。\n\n这些函数可以在 ``paz.backend`` 中找到：\n\n``` python\nfrom paz.backend import boxes, camera, image, keypoints, quaternion\n```\n例如，您可以在脚本中使用它们来加载或显示图像：\n\n``` python\nfrom paz.backend.image import load_image, show_image\n\nimage = load_image('my_image.png')\nshow_image(image)\n```\n\n## Additional functionality (附加功能)\n\n* PAZ 拥有[内置消息](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fabstract\u002Fmessages.py)，例如 ``Pose6D``，以便于与其他框架（如 [ROS](https:\u002F\u002Fwww.ros.org\u002F)（机器人操作系统））进行数据交换。\n\n* 有自定义[回调](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Foptimization\u002Fcallbacks.py)，例如在训练期间对目标检测器进行 MAP（平均精度均值）评估。\n    \n* PAZ 附带了多个数据集的[数据加载器](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fpaz\u002Fdatasets)：\n    [OpenImages](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fopen_images.py)、[VOC](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fvoc.py)、[YCB-Video](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fobject_detection\u002Fdatasets\u002Fycb_video.py)、[FAT](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Ffat.py)、[FERPlus](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fferplus.py)、[FER2013](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Ffer.py)、[CityScapes](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fdatasets\u002Fcityscapes.py)。\n\n* 
我们提供了自动的[批次创建和调度包装器](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fabstract\u002Fsequence.py)，以便在您的 ``pipelines`` 和 TensorFlow 生成器之间建立轻松连接。请查看[教程](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmaster\u002Fexamples\u002Ftutorials)了解更多信息。\n\n### 模型\n\n以下模型已在 PAZ 中实现，您可以使用自己的数据对其进行训练：\n\n| 任务（链接到实现）    |模型（链接到论文）  |\n|---------------------------:|-----------------------| \n|[目标检测](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fssd300.py)|[SSD-300](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.02325)|\n|[目标检测](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fssd512.py)|[SSD-512](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.02325)|\n|[概率关键点估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fprobabilistic_keypoint_estimation\u002Fmodel.py) |[高斯混合 CNN](https:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fpublications\u002F2018\u002FNeumann18a\u002F)   |\n|[检测与分割](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Ftree\u002Fmask_rcnn\u002Fexamples\u002Fmask_rcnn)  |[MaskRCNN（开发中）](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06870) |\n|[关键点估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fhrnet.py)|[HRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.07919)|\n|[语义分割](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fsegmentation\u002Funet.py)|[U-NET](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597)|\n|[6D 姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fkeypointnet.py)          |[Pix2Pose](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.07433)          |\n|[隐式方向](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fimplicit_orientation_learning\u002Fmodel.py)        |[自编码器](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.01275)            |\n|[情绪分类](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fclassification\u002Fxception.py)       |[MiniXception](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.07557)           |\n|[关键点发现](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fkeypointnet.py)      |[KeypointNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03146)            |\n|[关键点估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fkeypointnet.py)  |[KeypointNet2D](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03146)|\n|[注意力机制](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fspatial_transfomer_networks\u002FSTN.py)                   |[空间变换器](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02025)   |\n|[目标检测](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fhaar_cascade.py)            |[HaarCascades](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1023\u002FB:VISI.0000013087.49260.fb)  |\n|[2D 人体姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fpose_estimation\u002Fhigher_hrnet.py)            |[HigherHRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.10357)  |\n|[3D 
人体姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fsimplebaselines.py) | [Simple Baseline](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.03098) |\n|[手部姿态估计](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fdetnet.py)            |[DetNet](https:\u002F\u002Fvcai.mpi-inf.mpg.de\u002Fprojects\u002F2020-cvpr-hands\u002F)  |\n|[手部闭合分类](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fkeypoint\u002Fiknet.py)            |[IKNet](https:\u002F\u002Fvcai.mpi-inf.mpg.de\u002Fprojects\u002F2020-cvpr-hands\u002F)  |\n|[手部检测](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fdetection\u002Fssd512.py)            |[SSD512](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.02325)|\n|[小样本分类](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fpaz\u002Fmodels\u002Fclassification\u002Fprotonet.py)| [原型网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.05175)|\n|[小样本分类](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fblob\u002Fmaster\u002Fexamples\u002Fmaml\u002Fmaml.py)| [模型无关元学习 (MAML)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.03400)|\n\n\n## 动机\n尽管在不同的深度学习框架中已有多个高级计算机视觉库，但我感觉在我选择的框架中，仍然缺乏一个用于机器人感知的综合性深度学习库。\n\n作为最后的说明，我想提到的是，我们往往容易忘记每个（开源）项目背后所付出的巨大努力和情感投入。我觉得我们很容易将公司名称与其背后的个人工作混淆，从而忘记了正是某些人在感受着我们的批评和赞扬。因此，无论您在这里发现了什么优秀的代码，都归功于 Pytorch、Tensorflow 和 Keras 等开源项目的软件工程师和贡献者。你们将心血公之于众供我们所有人使用和欣赏，我们首先应当向你们表达由衷的感谢。\n\n\n## 为什么叫 **PAZ**？\n* PAZ 这个名字符合其理论定义，它是 **Perception for Autonomous Systems**（自主系统感知）的首字母缩写，其中字母 S 被 Z 替换，以表明“System”（系统）指的是几乎任何事物，即 Z 作为一个经典的代数变量来表示未知元素。\n\n\n## 测试与覆盖率\n持续集成通过 [github actions](https:\u002F\u002Fgithub.com\u002Ffeatures\u002Factions) 使用 [pytest](https:\u002F\u002Fdocs.pytest.org\u002Fen\u002Fstable\u002F) 进行管理。\n您可以通过运行以下命令来检查测试：\n```\npytest tests\n``` \n测试覆盖率可以使用 [coverage](https:\u002F\u002Fcoverage.readthedocs.io\u002Fen\u002Fcoverage-5.2.1\u002F) 进行检查。\n您可以通过调用以下命令安装 coverage：`pip install coverage --user`\n然后您可以通过运行以下命令来检查测试覆盖率：\n```\ncoverage run -m pytest tests\u002F\ncoverage report -m\n```\n\n## 引用\n如果您使用 PAZ，请考虑引用它。您也可以在 [https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14541](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14541) 找到我们的论文。\n\n```BibTeX\n@misc{arriaga2020perception,\n      title={Perception for Autonomous Systems (PAZ)}, \n      author={Octavio Arriaga and Matias Valdenegro-Toro and Mohandass Muthuraja and Sushma Devaramani and Frank Kirchner},\n      year={2020},\n      eprint={2010.14541},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n```\n\n## 资助\nPAZ 目前在 [不莱梅大学](https:\u002F\u002Fwww.uni-bremen.de\u002F) 的 [机器人小组](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fde\u002Fueber-uns\u002Funiversitaet-bremen-arbeitsgruppe-robotik.html) 开发，并与位于 **不莱梅** 的 **德国人工智能研究中心** (DFKI) 的 [机器人创新中心](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fen\u002Fstartpage.html) 合作进行。\nPAZ 由德国联邦经济事务和能源部以及 [德国航空航天中心](https:\u002F\u002Fwww.dlr.de\u002FDE\u002FHome\u002Fhome_node.html) (DLR) 资助。\nPAZ 已在 [TransFIT](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fen\u002Fresearch\u002Fprojects\u002Ftransfit.html) 和 [KiMMI-SF](https:\u002F\u002Frobotik.dfki-bremen.de\u002Fen\u002Fresearch\u002Fprojects\u002Fkimmi-sf\u002F) 项目中被使用和\u002F或开发。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_readme_2a0f0c0f9587.png\" 
width=\"1200\">\n\u003C\u002Fp>","# PAZ 快速上手指南\n\n## 环境准备\n- **系统要求**：支持 Python 3.6+ 的操作系统（Windows\u002FLinux\u002FmacOS）\n- **前置依赖**：\n  - [TensorFlow 2.0](https:\u002F\u002Fwww.tensorflow.org\u002F)\n  - [OpenCV](https:\u002F\u002Fopencv.org\u002F)\n  - [NumPy](https:\u002F\u002Fnumpy.org\u002F)\n\n## 安装步骤\n1. **推荐使用国内镜像源安装**（如清华大学镜像）：\n   ```bash\n   pip install pypaz -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple --user\n   ```\n2. **验证安装**：\n   ```bash\n   python -c \"import paz; print(paz.__version__)\"\n   ```\n\n## 基本使用（目标检测示例）\n```python\nfrom paz.applications import SSD512COCO\nfrom paz.backend.image import load_image, show_image\n\n# 初始化预训练模型\ndetector = SSD512COCO()\n\n# 加载测试图像（需替换为实际路径）\nimage = load_image(\"your_image.jpg\")\n\n# 执行推理\ninferences = detector(image)\n\n# 可视化结果\nshow_image(inferences)\n```\n\n### 运行说明\n1. 将 `your_image.jpg` 替换为实际图像路径\n2. 模型默认使用 COCO 数据集权重\n3. 支持直接对 NumPy 数组格式的图像进行推理\n\n> 示例效果：  \n> ![目标检测示例](https:\u002F\u002Fraw.githubusercontent.com\u002Foarriaga\u002Faltamira-data\u002Fmaster\u002Fimages\u002Fobject_detections_in_the_street.png)","某仓储机器人开发团队正在为自动化分拣系统构建视觉感知模块，需要同时处理货物识别、抓取姿态估计和避障检测三类任务。\n\n### 没有 paz 时\n- **代码冗余严重**：需分别实现YOLOv5目标检测、OpenPose关键点检测和Mask R-CNN分割模型，各模块数据预处理逻辑不一致\n- **实时性不足**：三路视频流并行处理时帧率仅12FPS，无法满足每秒30次的决策频率要求\n- **部署成本高**：需维护3套独立的模型权重文件和预处理脚本，跨平台移植时出现图像通道顺序不一致问题\n- **调试效率低**：姿态估计结果与目标检测框存在坐标系偏移，需手动添加坐标转换逻辑\n\n### 使用 paz 后\n- **模块化架构**：通过`paz.models`直接调用预训练的YOLOv8、HRNet和DeepLabV3模型，代码量减少40%\n- **统一数据流**：`paz.datasets`提供标准化的图像预处理管道，三路数据共享相同的归一化参数和坐标系定义\n- **性能提升**：启用`paz.streams`的多线程处理后，整体帧率提升至28FPS，满足实时决策需求\n- **简化部署**：使用`paz.utilities`的模型打包工具，将多模型组合封装为单一ONNX文件，部署时只需加载一次会话\n- **可视化增强**：通过`paz.visualization`的叠加绘图功能，可同时显示检测框、关键点连线和语义分割掩码，调试效率提升3倍\n\n核心价值：paz通过模块化架构和标准化接口，将多模态视觉任务的开发周期缩短60%，使开发者能专注于业务逻辑而非底层实现细节。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foarriaga_paz_d116ef1d.png","oarriaga","Octavio Arriaga","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Foarriaga_c99a2648.jpg","B.Sc. Physics,\r\nM.Sc. Robotics (Computer Vision, Deep Learning, NLP)","University of Bremen, DFKI","Bremen, Germany","octavio.arriaga@dfki.de",null,"https:\u002F\u002Ftwitter.com\u002FOctavio_Arriaga","https:\u002F\u002Fgithub.com\u002Foarriaga",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,701,111,"2026-04-05T05:14:47","MIT",1,"未说明","需要 NVIDIA GPU，CUDA 支持（具体版本未说明），显存建议 8GB+（根据模型需求）",{"notes":98,"python":95,"dependencies":99},"基于 TensorFlow 2.0，部分功能需 CUDA 支持；Mask-RCNN 模块仍在开发中，需额外分支；示例模型可能需要下载额外数据集",[100,101,102],"tensorflow>=2.0","opencv-python","numpy",[35,14],[105,106,107,108,109,110,111],"pose-estimation","object-detection","keypoint-estimation","emotion-recognition","instance-segmentation","face-recognition","semantic-segmentation",8,"2026-03-27T02:49:30.150509","2026-04-06T07:12:50.684002",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},1751,"如何统一项目中的版本号定义？","在 `setup.py` 和 `paz\u002F__init__.py` 中重复定义版本号可能导致混淆。建议参考类似项目（如 pytransform3d）的做法，通过 `import` 方式统一版本号。具体实现可参考 PR #97 的修改。","https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fissues\u002F84",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},1752,"训练得到的模型文件为何比预训练模型大很多？","可能原因包括：1. 模型结构不同；2. 
TensorFlow HDF5 格式更新导致存储体积增加。建议通过 `model.summary()` 对比模型结构，并检查是否保存了完整的优化器状态。若仅需权重数据，可设置 `save_weights_only=True`。","https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fissues\u002F155",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},1753,"如何构建项目文档？","文档构建指南位于 `docs\u002FREADME.md`。维护者建议通过 GitHub Pages 自动托管文档，而非手动构建。用户可直接访问官方文档链接（如 https:\u002F\u002Foarriaga.github.io\u002Fpaz\u002F），无需本地生成。","https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fissues\u002F82",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},1754,"应选择哪个 TensorFlow 版本？","维护者当前使用 TensorFlow 2.11.0。建议参考官方文档中的兼容性说明，并确保安装版本与示例代码匹配。若遇到版本问题，可尝试升级至最新稳定版。","https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fissues\u002F234",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},1755,"如何避免数据处理函数修改原始输入？","部分数据增强函数（如 `flip_left_right`）可能通过引用修改原始数据。已通过 PR #90、#92 等修复，建议更新至最新版本。若仍存在问题，请检查函数是否返回新对象而非原地修改。","https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fissues\u002F89",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},1756,"运行示例时报错 'No such file: image.jpg'？","示例代码默认依赖当前目录下的 `image.jpg` 文件。请将示例文件放入与脚本相同的目录，或修改代码中的文件路径。若无测试图像，可从官方示例数据集下载替换。","https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fissues\u002F81",[147,152,157,162,166,171,176,181,186,190,195,200,205,210],{"id":148,"version":149,"summary_zh":150,"released_at":151},101247,"0.2.6","## What's Changed\r\n* layer.py file clean up by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F192\r\n* Refined layers.py file by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F196\r\n* Removed errors from layers_test and refined layers_utils file by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F197\r\n* Added layers_utils_test file by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F198\r\n* Mask rcnn added layers_utils_test files and removed all errors by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F202\r\n* added paz from master and modified train.py file by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F211\r\n* addition of endpoint layer losses and train file modified by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F217\r\n* simplied data generator by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F223\r\n* first training with modified input masks by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F227\r\n* normalise anchor boxes by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F244\r\n* added loss functions during compile by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F253\r\n* refracted detection target layer functions by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F264\r\n* training with fixed bboxes by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F274\r\n* modified folders as in paz by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F275\r\n* Mask rcnn by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F277\r\n* refractored backend and added backend_test file by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F283\r\n* layers documentation 
by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F289\r\n* refactored model and rpn_model files by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F290\r\n* Add image synthesis examples by @Jieying-Li in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F300\r\n* refactored generator file by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F296\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F301\r\n* SSD512 by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F302\r\n* `num_classes` now passed as argument to `AugmentDetection` by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F306\r\n* MAML implementation for classification and regression by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F310\r\n* Human pose 3D by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F308\r\n* Human pose 3 d by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F311\r\n* Added mask_rcnn by @poornima2605 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F314\r\n* Refactoring Mask RCNN by @SushmaDG in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F105\r\n* Add MAML to examples by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F319\r\n* Efficientpose by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F322\r\n* Efficientpose by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F328\r\n* Efficientpose by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F329\r\n* Camera calibration by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F333\r\n* Fixing broken control-map link in the docs by @robintema in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F334\r\n* Efficientpose by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F332\r\n* Visual Voice Activity Detection (VVAD) integration including live camera and prerecorded demo by @cedric-cfk in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F330\r\n* Efficientpose by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F336\r\n* Hand detection by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F338\r\n* Updated unit tests by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F339\r\n* Resize the background images by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F340\r\n\r\n## New Contributors\r\n* @poornima2605 made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F192\r\n* @Jieying-Li made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F300\r\n* @robintema made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F334\r\n* @cedric-cfk made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F330\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fcompare\u002F0.2.5...0.2.6","2025-02-03T15:31:24",{"id":153,"version":154,"summary_zh":155,"released_at":156},101248,"0.2.5","## What's Changed\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F294\r\n* Numpy alias fix by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F295\r\n* Patch for Keras function `count_params` by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F298\r\n* Pix2pose by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F297\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fcompare\u002F0.2.4...0.2.5","2023-07-13T09:32:12",{"id":158,"version":159,"summary_zh":160,"released_at":161},101249,"0.2.4","## What's Changed\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F200\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F203\r\n* Efficientdet (working pipeline) by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F224\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F225\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F229\r\n* Efficientdet by @DeepanChakravarthiPadmanabhan in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F143\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F230\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F232\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F235\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F236\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F240\r\n* Prototypical networks by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F247\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F248\r\n* Prototypical networks and Omniglot integration by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F249\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F250\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F251\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F254\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F255\r\n* Human pose estimation 3d  by @amine789 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F252\r\n* Add Evaluation Metrics for Pose estimation by @amine789 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F216\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F256\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F257\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F259\r\n* 
Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F261\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F262\r\n* Human pose estimation 3d by @amine789 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F263\r\n* Extend setup.py (e.g., by long description) by @AlexanderFabisch in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F265\r\n* Add classifiers to setup.py by @AlexanderFabisch in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F266\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F269\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F271\r\n* Efficientdet by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F272\r\n* PEP8 fixes and few-shot README documentation by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F273\r\n* Human pose estimation 3d (create processors) by @amine789 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F270\r\n* NonMaximumSuppression processor changed by @Manojkumarmuru in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F282\r\n* Structure from motion by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F284\r\n* Human pose 3D by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F285\r\n* Human pose estimation 3d by @amine789 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F276\r\n\r\n## New Contributors\r\n* @Manojkumarmuru made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F200\r\n* @amine789 made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F252\r\n* @AlexanderFabisch made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F265\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fcompare\u002F0.1.10...0.2.0","2023-06-10T13:39:22",{"id":163,"version":164,"summary_zh":82,"released_at":165},101250,"0.2.2","2023-06-10T13:17:42",{"id":167,"version":168,"summary_zh":169,"released_at":170},101251,"0.1.10","## What's Changed\r\n* Hand tracking translation 3D from box dimensions by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F218\r\n* Fix processor Predict and hand pose and tracking estimation by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F219\r\n* Application for hand detection and keypoint estimation by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F220\r\n* Refactor hand estimation by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F221\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fcompare\u002F0.1.9...0.1.10","2022-09-07T08:33:29",{"id":172,"version":173,"summary_zh":174,"released_at":175},101252,"0.1.9","## What's Changed\r\n* Test for minimal hand by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F189\r\n* Minimal hand pose estimation by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F194\r\n* Add recording from video file by @proneetsharma in 
https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F199\r\n* Detect hand closure by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F201\r\n* Add examples in readme by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F204\r\n* Add Shapes dataset by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F205\r\n* 3D keypoint visualization by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F206\r\n* Fix README gif sizes by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F212\r\n* Minimal hand detection by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F213\r\n* Update SSD512 by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F214\r\n* Add new release version by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F215\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fcompare\u002F0.1.8...0.1.9","2022-08-05T09:44:19",{"id":177,"version":178,"summary_zh":179,"released_at":180},101253,"0.1.8","## What's Changed\r\n* HumanPose2D  by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F175\r\n* General refactor by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F178\r\n* Pix2Pose refactor by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F179\r\n* Pix2pose refactor by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F180\r\n* Minimal Hand by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F181\r\n* Minimal_Hand with relative joints by @dema-software-solutions in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F176\r\n* PIX2POSE refactor by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F185\r\n* Merge minimal hand with paz library by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F186\r\n* Change name and location of HigherHRNet prediction pipeline by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F187\r\n* Refactor README by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F188\r\n\r\n## New Contributors\r\n* @dema-software-solutions made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F176\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fcompare\u002F0.1.7...0.1.8","2022-04-25T09:34:22",{"id":182,"version":183,"summary_zh":184,"released_at":185},101254,"0.1.7","## What's Changed\r\n* add test case to build ssd300 with pretrained weights by @DeepanChakravarthiPadmanabhan in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F151\r\n* Fine tuning object detection by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F152\r\n* Raise type error for the unsupported object type by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F149\r\n* Eigenface demo by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F153\r\n* Pix2Pose by @praxidike97 in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F161\r\n* PR to track the development of Hand Pose Estimation model by @jaswanthbjk in 
https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F140\r\n* HandPoseEstimation - Modification to download weights from online repository by @jaswanthbjk in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F165\r\n* HigherHRNet: Human pose estimation by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F166\r\n* Improvements over first merge by @jaswanthbjk in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F168\r\n* Eigenfaces database construction by @proneetsharma in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F169\r\n* Fixed calculation of the Box2D center. by @ManuelMeder in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F170\r\n* PIX2POSE refactor by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F171\r\n* PIX2POSE merge by @oarriaga in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F172\r\n\r\n## New Contributors\r\n* @DeepanChakravarthiPadmanabhan made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F151\r\n* @praxidike97 made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F161\r\n* @jaswanthbjk made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F140\r\n* @ManuelMeder made their first contribution in https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fpull\u002F170\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fpaz\u002Fcompare\u002F0.1.6...0.1.7","2022-02-11T10:44:56",{"id":187,"version":188,"summary_zh":82,"released_at":189},101255,"0.1.6","2021-05-10T19:22:42",{"id":191,"version":192,"summary_zh":193,"released_at":194},101256,"0.1.5","Refactor boxes backend","2021-05-10T19:13:46",{"id":196,"version":197,"summary_zh":198,"released_at":199},101257,"0.1.4","Add UNET models:\r\n\r\n- Basic `UNET` construction kit.\r\n- Default models for training: `UNET_VGG16`, `UNET_VGG19`, `UNET_RESNET50`\r\n\r\nAdd ``CityScapes`` data loader.\r\n\r\nAdd `StochasticProcessor` abstract class","2021-02-11T10:30:38",{"id":201,"version":202,"summary_zh":203,"released_at":204},101258,"0.1.3","Refactor examples with rendering to use pyrender","2020-11-06T16:51:08",{"id":206,"version":207,"summary_zh":208,"released_at":209},101259,"0.1.2","Add ``paz.backend.render`` utils with unit-test","2020-11-05T21:04:16",{"id":211,"version":212,"summary_zh":213,"released_at":214},101260,"0.1.1","First open-source release.","2020-10-29T15:38:43"]
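The checkpoint-size FAQ above (issue #155) recommends `save_weights_only=True`. A minimal sketch of the two weights-only options in standard tf.keras; the toy model and file names are illustrative and not part of paz:

```python
import tensorflow as tf

# Toy model; any tf.keras model behaves the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()  # compare architectures when two saved files differ in size

# Option 1: persist the weights only, skipping architecture and optimizer state.
model.save_weights("weights.h5")

# Option 2: write weights-only checkpoints during training.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoint.h5",
    save_weights_only=True,  # drops the optimizer state that inflates HDF5 files
)
```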
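For the TensorFlow-version FAQ (issue #234): a small guard that warns early when the installed version drifts from the 2.11.0 the maintainer reports using. The expected-version constant is an illustrative pin, not a project requirement:

```python
import warnings

import tensorflow as tf

EXPECTED = "2.11"  # version line the maintainer reports using (2.11.0)

# Warn up front instead of failing deep inside an example script.
if not tf.__version__.startswith(EXPECTED):
    warnings.warn(
        f"This FAQ answer references TensorFlow {EXPECTED}.x; found "
        f"{tf.__version__}. Consider `pip install tensorflow=={EXPECTED}.0` "
        "or check the compatibility notes in the documentation.")
```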
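For the in-place mutation FAQ (issue #89): until you can update, passing a copy guarantees a `flip_left_right`-style transform can neither mutate nor alias the caller's array. `apply_safely` is a hypothetical helper sketched here, not paz API:

```python
import numpy as np

def apply_safely(augment, image):
    """Apply an augmentation without letting it mutate or alias the input."""
    out = augment(image.copy())  # the transform only ever sees a copy
    return np.array(out)         # also detach any view the transform returns

# A flip that returns a view of its argument, like the pre-fix behaviour:
original = np.arange(12).reshape(3, 4)
flipped = apply_safely(lambda img: img[:, ::-1], original)
assert (original == np.arange(12).reshape(3, 4)).all()  # input untouched
```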
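For the 'No such file: image.jpg' FAQ (issue #81): resolving the test image relative to the script makes the examples independent of the shell's working directory. The `load_image` import mirrors the paz examples but should be checked against your installed version:

```python
from pathlib import Path

from paz.backend.image import load_image  # import path as used in paz examples

# Resolve next to this script instead of the current working directory.
image_path = Path(__file__).resolve().parent / "image.jpg"
if not image_path.is_file():
    raise FileNotFoundError(
        f"Expected a test image at {image_path}; place one there "
        "or pass an explicit path.")
image = load_image(str(image_path))
```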
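Release 0.1.4 above introduces the UNET construction kit. A minimal instantiation sketch, assuming `UNET_VGG16` is importable from `paz.models` and accepts `num_classes` and `input_shape` arguments (verify both against the 0.1.4+ API):

```python
from paz.models import UNET_VGG16  # import path and signature assumed

# Binary-segmentation UNET with a VGG16 encoder on 128x128 RGB inputs.
model = UNET_VGG16(num_classes=1, input_shape=(128, 128, 3))
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

`UNET_VGG19` and `UNET_RESNET50` from the same release should swap in the same way, differing only in the pretrained encoder backbone.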