[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-clementchadebec--benchmark_VAE":3,"tool-clementchadebec--benchmark_VAE":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160411,2,"2026-04-18T23:33:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 
clementchadebec/benchmark_VAE: Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022).

Pythae is an open-source PyTorch library that unifies the implementation of Variational Autoencoders (VAEs) and their derivatives. In deep learning research, comparing different VAE models is often complicated and hard to reproduce because of differences in code architecture. By providing a standardized, unified framework, Pythae lets researchers train and compare a wide range of mainstream VAE models under exactly the same encoder-decoder architecture, addressing the pain points of experimental fairness and reproducibility.

The library is well suited to AI researchers, algorithm engineers, and students. It supports quickly reproducing the benchmark experiments of the NeurIPS 2022 paper, and it is highly flexible: users can rely on the built-in models or easily plug in custom network architectures and datasets for training. On the technical side, Pythae natively supports distributed training (DDP), which significantly speeds up training on large datasets; it integrates experiment-monitoring tools such as WandB and MLflow, and connects to the HuggingFace Hub so that models can be shared and loaded in a few lines of code. Whether you want to investigate the principles of generative models or run comparison experiments efficiently, Pythae provides concise, professional, and powerful support.

<p align="center">
	<a href="https://pypi.org/project/pythae/">
	    <img src='https://badge.fury.io/py/pythae.svg' alt='Python' />
	</a>
	<a>
	    <img src='https://img.shields.io/badge/python-3.7%7C3.8%7C3.9%2B-blueviolet' alt='Python' />
	</a>
	<a href='https://pythae.readthedocs.io/en/latest/?badge=latest'>
	    <img src='https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_13d664e1afd7.png' alt='Documentation Status' />
	</a>
	<a href='https://opensource.org/licenses/Apache-2.0'>
	    <img src='https://img.shields.io/github/license/clementchadebec/benchmark_VAE?color=blue' />
	</a><br>
	<a>
	    <img src='https://img.shields.io/badge/code%20style-black-black' />
	</a>
	<a href="https://codecov.io/gh/clementchadebec/benchmark_VAE">
	    <img src="https://codecov.io/gh/clementchadebec/benchmark_VAE/branch/main/graph/badge.svg?token=KEM7KKISXJ"/>
	</a>
	<a href="https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/overview_notebook.ipynb">
	    <img src="https://colab.research.google.com/assets/colab-badge.svg"/>
	</a>
</p>
<p align="center">
  <a href="https://pythae.readthedocs.io/en/latest/">Documentation</a>
</p>

# pythae

This library implements some of the most common (Variational) Autoencoder models under a unified implementation. In particular, it provides the possibility to perform benchmark experiments and comparisons by training the models with the same autoencoding neural network architecture. The feature *make your own autoencoder* allows you to train any of these models with your own data and your own Encoder and Decoder neural networks. It integrates experiment monitoring tools such as [wandb](https://wandb.ai/), [mlflow](https://mlflow.org/) or [comet-ml](https://www.comet.com/signup?utm_source=pythae&utm_medium=partner&utm_campaign=AMS_US_EN_SNUP_Pythae_Comet_Integration) 🧪 and allows model sharing and loading from the [HuggingFace Hub](https://huggingface.co/models) 🤗 in a few lines of code.

**News** 📢

As of v0.1.0, `Pythae` now supports distributed training using PyTorch's [DDP](https://pytorch.org/docs/stable/notes/ddp.html). You can now train your favorite VAE faster and on larger datasets, still with a few lines of code. See our speed-up [benchmark](#benchmark).

## Quick access:
- [Installation](#installation)
- [Implemented models](#available-models) / [Implemented samplers](#available-samplers)
- [Reproducibility statement](#reproducibility) / [Results flavor](#results)
- [Model training](#launching-a-model-training) / [Data generation](#launching-data-generation) / [Custom network architectures](#define-your-own-autoencoder-architecture) / [Distributed training](#distributed-training-with-pythae)
- [Model sharing with 🤗 Hub](#sharing-your-models-with-the-huggingface-hub-) / [Experiment tracking with `wandb`](#monitoring-your-experiments-with-wandb-) / [Experiment tracking with `mlflow`](#monitoring-your-experiments-with-mlflow-) / [Experiment tracking with `comet_ml`](#monitoring-your-experiments-with-comet_ml-)
- [Tutorials](#getting-your-hands-on-the-code) / [Documentation](https://pythae.readthedocs.io/en/latest/)
- [Contributing 🚀](#contributing-) / [Issues 🛠️](#dealing-with-issues-%EF%B8%8F)
- [Citing this repository](#citation)

# Installation

To install the latest stable release of this library, run the following using ``pip``

```bash
$ pip install pythae
```

To install the latest github version of this library, run the following using ``pip``

```bash
$ pip install git+https://github.com/clementchadebec/benchmark_VAE.git
```

Alternatively, you can clone the github repo to access the tests, tutorials and scripts

```bash
$ git clone https://github.com/clementchadebec/benchmark_VAE.git
```

and install the library

```bash
$ cd benchmark_VAE
$ pip install -e .
```
## Available Models

Below is the list of the models currently implemented in the library.

| Models | Training example | Paper | Official Implementation |
|:---:|:---:|:---:|:---:|
| Autoencoder (AE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/ae_training.ipynb) | | |
| Variational Autoencoder (VAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_training.ipynb) | [link](https://arxiv.org/abs/1312.6114) | |
| Beta Variational Autoencoder (BetaVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/beta_vae_training.ipynb) | [link](https://openreview.net/pdf?id=Sy2fzU9gl) | |
| VAE with Linear Normalizing Flows (VAE_LinNF) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_lin_nf_training.ipynb) | [link](https://arxiv.org/abs/1505.05770) | |
| VAE with Inverse Autoregressive Flows (VAE_IAF) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_iaf_training.ipynb) | [link](https://arxiv.org/abs/1606.04934) | [link](https://github.com/openai/iaf) |
| Disentangled Beta Variational Autoencoder (DisentangledBetaVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/disentangled_beta_vae_training.ipynb) | [link](https://arxiv.org/abs/1804.03599) | |
| Disentangling by Factorising (FactorVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/factor_vae_training.ipynb) | [link](https://arxiv.org/abs/1802.05983) | |
| Beta-TC-VAE (BetaTCVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/beta_tc_vae_training.ipynb) | [link](https://arxiv.org/abs/1802.04942) | [link](https://github.com/rtqichen/beta-tcvae) |
| Importance Weighted Autoencoder (IWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/iwae_training.ipynb) | [link](https://arxiv.org/abs/1509.00519v4) | [link](https://github.com/yburda/iwae) |
| Multiply Importance Weighted Autoencoder (MIWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/miwae_training.ipynb) | [link](https://arxiv.org/abs/1802.04537) | |
| Partially Importance Weighted Autoencoder (PIWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/piwae_training.ipynb) | [link](https://arxiv.org/abs/1802.04537) | |
| Combination Importance Weighted Autoencoder (CIWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/ciwae_training.ipynb) | [link](https://arxiv.org/abs/1802.04537) | |
| VAE with perceptual metric similarity (MSSSIM_VAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/ms_ssim_vae_training.ipynb) | [link](https://arxiv.org/abs/1511.06409) | |
| Wasserstein Autoencoder (WAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/wae_training.ipynb) | [link](https://arxiv.org/abs/1711.01558) | [link](https://github.com/tolstikhin/wae) |
| Info Variational Autoencoder (INFOVAE_MMD) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/info_vae_training.ipynb) | [link](https://arxiv.org/abs/1706.02262) | |
| VAMP Autoencoder (VAMP) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vamp_training.ipynb) | [link](https://arxiv.org/abs/1705.07120) | [link](https://github.com/jmtomczak/vae_vampprior) |
| Hyperspherical VAE (SVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/svae_training.ipynb) | [link](https://arxiv.org/abs/1804.00891) | [link](https://github.com/nicola-decao/s-vae-pytorch) |
| Poincaré Disk VAE (PoincareVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/pvae_training.ipynb) | [link](https://arxiv.org/abs/1901.06033) | [link](https://github.com/emilemathieu/pvae) |
| Adversarial Autoencoder (Adversarial_AE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/adversarial_ae_training.ipynb) | [link](https://arxiv.org/abs/1511.05644) | |
| Variational Autoencoder GAN (VAEGAN) 🥗 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vaegan_training.ipynb) | [link](https://arxiv.org/abs/1512.09300) | [link](https://github.com/andersbll/autoencoding_beyond_pixels) |
| Vector Quantized VAE (VQVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vqvae_training.ipynb) | [link](https://arxiv.org/abs/1711.00937) | [link](https://github.com/deepmind/sonnet/blob/v2/sonnet/) |
| Hamiltonian VAE (HVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/hvae_training.ipynb) | [link](https://arxiv.org/abs/1805.11328) | [link](https://github.com/anthonycaterini/hvae-nips) |
| Regularized AE with L2 decoder param (RAE_L2) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/rae_l2_training.ipynb) | [link](https://arxiv.org/abs/1903.12436) | [link](https://github.com/ParthaEth/Regularized_autoencoders-RAE-/tree/master/) |
| Regularized AE with gradient penalty (RAE_GP) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/rae_gp_training.ipynb) | [link](https://arxiv.org/abs/1903.12436) | [link](https://github.com/ParthaEth/Regularized_autoencoders-RAE-/tree/master/) |
| Riemannian Hamiltonian VAE (RHVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/rhvae_training.ipynb) | [link](https://arxiv.org/abs/2105.00026) | [link](https://github.com/clementchadebec/pyraug) |
| Hierarchical Residual Quantization (HRQVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/hrqvae_training.ipynb) | [link](https://aclanthology.org/2022.acl-long.178/) | [link](https://github.com/tomhosking/hrq-vae) |

**See [reconstruction](#reconstruction) and [generation](#generation) results for all aforementioned models**

## Available Samplers

Below is the list of the samplers currently implemented in the library.

| Samplers | Models | Paper | Official Implementation |
|:---:|:---:|:---:|:---:|
| Normal prior (NormalSampler) | all models | [link](https://arxiv.org/abs/1312.6114) | |
| Gaussian mixture (GaussianMixtureSampler) | all models | [link](https://arxiv.org/abs/1903.12436) | [link](https://github.com/ParthaEth/Regularized_autoencoders-RAE-/tree/master/models/rae) |
| Two stage VAE sampler (TwoStageVAESampler) | all VAE based models | [link](https://openreview.net/pdf?id=B1e0X3C9tQ) | [link](https://github.com/daib13/TwoStageVAE/) |
| Unit sphere uniform sampler (HypersphereUniformSampler) | SVAE | [link](https://arxiv.org/abs/1804.00891) | [link](https://github.com/nicola-decao/s-vae-pytorch) |
| Poincaré Disk sampler (PoincareDiskSampler) | PoincareVAE | [link](https://arxiv.org/abs/1901.06033) | [link](https://github.com/emilemathieu/pvae) |
| VAMP prior sampler (VAMPSampler) | VAMP | [link](https://arxiv.org/abs/1705.07120) | [link](https://github.com/jmtomczak/vae_vampprior) |
| Manifold sampler (RHVAESampler) | RHVAE | [link](https://arxiv.org/abs/2105.00026) | [link](https://github.com/clementchadebec/pyraug) |
| Masked Autoregressive Flow Sampler (MAFSampler) | all models | [link](https://arxiv.org/abs/1705.07057v4) | [link](https://github.com/gpapamak/maf) |
| Inverse Autoregressive Flow Sampler (IAFSampler) | all models | [link](https://arxiv.org/abs/1606.04934) | [link](https://github.com/openai/iaf) |
| PixelCNN (PixelCNNSampler) | VQVAE | [link](https://arxiv.org/abs/1606.05328) | |

## Reproducibility

We validate the implementations by reproducing some results presented in the original publications when the official code has been released or when enough details about the experimental section of the papers were available. See [reproducibility](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts/reproducibility) for more details.

## Launching a model training

To launch a model training, you only need to call a `TrainingPipeline` instance.

```python
>>> from pythae.pipelines import TrainingPipeline
>>> from pythae.models import VAE, VAEConfig
>>> from pythae.trainers import BaseTrainerConfig

>>> # Set up the training configuration
>>> my_training_config = BaseTrainerConfig(
...     output_dir='my_model',
...     num_epochs=50,
...     learning_rate=1e-3,
...     per_device_train_batch_size=200,
...     per_device_eval_batch_size=200,
...     train_dataloader_num_workers=2,
...     eval_dataloader_num_workers=2,
...     steps_saving=20,
...     optimizer_cls="AdamW",
...     optimizer_params={"weight_decay": 0.05, "betas": (0.91, 0.995)},
...     scheduler_cls="ReduceLROnPlateau",
...     scheduler_params={"patience": 5, "factor": 0.5}
... )
>>> # Set up the model configuration
>>> my_vae_config = VAEConfig(
...     input_dim=(1, 28, 28),
...     latent_dim=10
... )
>>> # Build the model
>>> my_vae_model = VAE(
...     model_config=my_vae_config
... )
>>> # Build the Pipeline
>>> pipeline = TrainingPipeline(
...     training_config=my_training_config,
...     model=my_vae_model
... )
>>> # Launch the Pipeline
>>> pipeline(
...     train_data=your_train_data,  # must be torch.Tensor, np.array or torch datasets
...     eval_data=your_eval_data     # must be torch.Tensor, np.array or torch datasets
... )
```

At the end of training, the best model weights, model configuration and training configuration are stored in a `final_model` folder available in `my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss` (with `my_model` being the `output_dir` argument of the `BaseTrainerConfig`). If you further set the `steps_saving` argument to a certain value, folders named `checkpoint_epoch_k` containing the best model weights, optimizer, scheduler, configuration and training configuration at epoch *k* will also appear in `my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss`.
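For instance, here is a minimal sketch of reloading the best weights after the run above; the timestamped folder name is a placeholder for the one actually created under `my_model`:

```python
>>> from pythae.models import AutoModel
>>> # Reload the best model saved by the pipeline; the run folder name below
>>> # stands in for the actual VAE_training_YYYY-MM-DD_hh-mm-ss folder
>>> my_trained_vae = AutoModel.load_from_folder(
...     'my_model/VAE_training_YYYY-MM-DD_hh-mm-ss/final_model'
... )
```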
## Launching a training on benchmark datasets

We also provide a training script example [here](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts/training.py) that can be used to train the models on benchmark datasets (mnist, cifar10, celeba ...). The script can be launched with the following command line

```bash
python training.py --dataset mnist --model_name ae --model_config 'configs/ae_config.json' --training_config 'configs/base_training_config.json'
```

See [README.md](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts/README.md) for further details on this script.

## Launching data generation

### Using the `GenerationPipeline`

The easiest way to launch a data generation from a trained model consists in using the built-in `GenerationPipeline` provided in Pythae. Say you want to generate 100 samples using a `MAFSampler`. All you have to do is 1) reload the trained model, 2) define the sampler's configuration and 3) create and launch the `GenerationPipeline` as follows

```python
>>> from pythae.models import AutoModel
>>> from pythae.samplers import MAFSamplerConfig
>>> from pythae.pipelines import GenerationPipeline
>>> # Retrieve the trained model
>>> my_trained_vae = AutoModel.load_from_folder(
...     'path/to/your/trained/model'
... )
>>> my_sampler_config = MAFSamplerConfig(
...     n_made_blocks=2,
...     n_hidden_in_made=3,
...     hidden_size=128
... )
>>> # Build the pipeline
>>> pipe = GenerationPipeline(
...     model=my_trained_vae,
...     sampler_config=my_sampler_config
... )
>>> # Launch data generation
>>> generated_samples = pipe(
...     num_samples=100,        # number of samples to generate
...     return_gen=True,        # if False, returns nothing
...     train_data=train_data,  # needed to fit the sampler
...     eval_data=eval_data,    # needed to fit the sampler
...     training_config=BaseTrainerConfig(num_epochs=200)  # TrainingConfig to use to fit the sampler
... )
```

### Using the Samplers

Alternatively, you can launch the data generation process from a trained model directly with the sampler. For instance, to generate new data with your sampler, run the following.

```python
>>> from pythae.models import AutoModel
>>> from pythae.samplers import NormalSampler
>>> # Retrieve the trained model
>>> my_trained_vae = AutoModel.load_from_folder(
...     'path/to/your/trained/model'
... )
>>> # Define your sampler
>>> my_sampler = NormalSampler(
...     model=my_trained_vae
... )
>>> # Generate samples
>>> gen_data = my_sampler.sample(
...     num_samples=50,
...     batch_size=10,
...     output_dir=None,
...     return_gen=True
... )
```

If you set `output_dir` to a specific path, the generated images will be saved as `.png` files named `00000000.png`, `00000001.png` ...
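As a sketch of that option, using the same sampler as above and a hypothetical output folder:

```python
>>> # Write the 50 generated images to disk instead of returning them;
>>> # they are saved as 00000000.png, 00000001.png, ... ('my_generated_samples' is a placeholder path)
>>> my_sampler.sample(
...     num_samples=50,
...     batch_size=10,
...     output_dir='my_generated_samples',
...     return_gen=False
... )
```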
The samplers can be used with any model, as long as the sampler suits the model. For instance, a `GaussianMixtureSampler` instance can be used to generate from any model but a `VAMPSampler` will only be usable with a `VAMP` model. Check [here](#available-samplers) to see which ones apply to your model. Be careful: some samplers, such as the `GaussianMixtureSampler`, may need to be fitted by calling the `fit` method before use. Below is an example for the `GaussianMixtureSampler`.

```python
>>> from pythae.models import AutoModel
>>> from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig
>>> # Retrieve the trained model
>>> my_trained_vae = AutoModel.load_from_folder(
...     'path/to/your/trained/model'
... )
>>> # Define your sampler
>>> gmm_sampler_config = GaussianMixtureSamplerConfig(
...     n_components=10
... )
>>> gmm_sampler = GaussianMixtureSampler(
...     sampler_config=gmm_sampler_config,
...     model=my_trained_vae
... )
>>> # Fit the sampler
>>> gmm_sampler.fit(train_dataset)
>>> # Generate samples
>>> gen_data = gmm_sampler.sample(
...     num_samples=50,
...     batch_size=10,
...     output_dir=None,
...     return_gen=True
... )
```

## Define your own Autoencoder architecture

Pythae provides you the possibility to define your own neural networks within the VAE models. For instance, say you want to train a Wasserstein AE with a specific encoder and decoder. You can do the following:

```python
>>> import torch
>>> from pythae.models.nn import BaseEncoder, BaseDecoder
>>> from pythae.models.base.base_utils import ModelOutput
>>> class My_Encoder(BaseEncoder):
...     def __init__(self, args=None):  # args is a ModelConfig instance
...         BaseEncoder.__init__(self)
...         self.layers = my_nn_layers()  # your own network layers
...
...     def forward(self, x: torch.Tensor) -> ModelOutput:
...         out = self.layers(x)
...         output = ModelOutput(
...             embedding=out  # Set the output from the encoder in a ModelOutput instance
...         )
...         return output
...
>>> class My_Decoder(BaseDecoder):
...     def __init__(self, args=None):
...         BaseDecoder.__init__(self)
...         self.layers = my_nn_layers()  # your own network layers
...
...     def forward(self, x: torch.Tensor) -> ModelOutput:
...         out = self.layers(x)
...         output = ModelOutput(
...             reconstruction=out  # Set the output from the decoder in a ModelOutput instance
...         )
...         return output
...
>>> my_encoder = My_Encoder()
>>> my_decoder = My_Decoder()
```

And now build the model

```python
>>> from pythae.models import WAE_MMD, WAE_MMD_Config
>>> # Set up the model configuration
>>> my_wae_config = WAE_MMD_Config(
...     input_dim=(1, 28, 28),
...     latent_dim=10
... )
>>> # Build the model
>>> my_wae_model = WAE_MMD(
...     model_config=my_wae_config,
...     encoder=my_encoder,  # pass your encoder as argument when building the model
...     decoder=my_decoder   # pass your decoder as argument when building the model
... )
```

**important note 1**: For all AE-based models (AE, WAE, RAE_L2, RAE_GP), both the encoder and decoder must return a `ModelOutput` instance. For the encoder, the `ModelOutput` instance must contain the embeddings under the key `embedding`. For the decoder, the `ModelOutput` instance must contain the reconstructions under the key `reconstruction`.

**important note 2**: For all VAE-based models (VAE, BetaVAE, IWAE, HVAE, VAMP, RHVAE), both the encoder and decoder must return a `ModelOutput` instance. For the encoder, the `ModelOutput` instance must contain the embeddings and **log**-covariance matrices (of shape batch_size x latent_space_dim) under the keys `embedding` and `log_covariance` respectively. For the decoder, the `ModelOutput` instance must contain the reconstructions under the key `reconstruction`.
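To make important note 2 concrete, here is a minimal sketch of a VAE-style encoder returning both required keys; the network body and layer sizes are illustrative assumptions matching the `input_dim=(1, 28, 28)` and `latent_dim=10` used above:

```python
>>> import torch
>>> import torch.nn as nn
>>> from pythae.models.nn import BaseEncoder
>>> from pythae.models.base.base_utils import ModelOutput
>>> class My_VAE_Encoder(BaseEncoder):
...     def __init__(self, args=None):
...         BaseEncoder.__init__(self)
...         # hypothetical body: flatten 28x28 images and embed them
...         self.body = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
...         self.embedding_layer = nn.Linear(256, 10)  # produces `embedding`
...         self.log_var_layer = nn.Linear(256, 10)    # produces `log_covariance`
...
...     def forward(self, x: torch.Tensor) -> ModelOutput:
...         h = self.body(x)
...         return ModelOutput(
...             embedding=self.embedding_layer(h),    # shape (batch_size, latent_dim)
...             log_covariance=self.log_var_layer(h)  # shape (batch_size, latent_dim)
...         )
```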
## Using benchmark neural nets

You can also find predefined neural network architectures for the most common data sets (*i.e.* MNIST, CIFAR, CELEBA ...) that can be loaded as follows

```python
>>> from pythae.models.nn.benchmark.mnist import (
...     Encoder_Conv_AE_MNIST,   # for AE-based models (only returns embeddings)
...     Encoder_Conv_VAE_MNIST,  # for VAE-based models (returns embeddings and log_covariances)
...     Decoder_Conv_AE_MNIST
... )
```

Replace *mnist* with *cifar* or *celeba* to access the other neural nets.

## Distributed Training with `Pythae`

As of `v0.1.0`, Pythae now supports distributed training using PyTorch's [DDP](https://pytorch.org/docs/stable/notes/ddp.html). This allows you to train your favorite VAE faster and on larger datasets using multi-gpu and/or multi-node training.

To do so, you can build a python script that will then be launched by a launcher (such as `srun` on a cluster). The only thing that is needed in the script is to specify some elements relative to the distributed environment (such as the number of nodes/gpus) directly in the training configuration as follows

```python
>>> training_config = BaseTrainerConfig(
...     num_epochs=10,
...     learning_rate=1e-3,
...     per_device_train_batch_size=64,
...     per_device_eval_batch_size=64,
...     train_dataloader_num_workers=8,
...     eval_dataloader_num_workers=8,
...     dist_backend="nccl",      # distributed backend
...     world_size=8,             # number of gpus to use (n_nodes x n_gpus_per_node)
...     rank=5,                   # global gpu id
...     local_rank=1,             # gpu id within a node
...     master_addr="localhost",  # master address
...     master_port="12345"       # master port
... )
```

See this [example script](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/scripts/distributed_training_imagenet.py) that defines a multi-gpu VQVAE training on the ImageNet dataset. Please note that the way the distributed environment variables (`world_size`, `rank` ...) are recovered may be specific to the cluster and launcher you use.
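As an illustration, here is a sketch of recovering those values from the environment, assuming a torchrun-style launcher that exports `WORLD_SIZE`, `RANK`, `LOCAL_RANK`, `MASTER_ADDR` and `MASTER_PORT`; adapt the variable names to your own cluster:

```python
>>> import os
>>> from pythae.trainers import BaseTrainerConfig
>>> # The environment variable names below are an assumption tied to the launcher,
>>> # not part of Pythae itself
>>> training_config = BaseTrainerConfig(
...     num_epochs=10,
...     learning_rate=1e-3,
...     per_device_train_batch_size=64,
...     per_device_eval_batch_size=64,
...     dist_backend="nccl",
...     world_size=int(os.environ.get("WORLD_SIZE", 1)),
...     rank=int(os.environ.get("RANK", 0)),
...     local_rank=int(os.environ.get("LOCAL_RANK", 0)),
...     master_addr=os.environ.get("MASTER_ADDR", "localhost"),
...     master_port=os.environ.get("MASTER_PORT", "12345")
... )
```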
### Benchmark

Below are the training times obtained with `Pythae` for a Vector Quantized VAE (VQVAE) trained for 100 epochs on MNIST on V100 16GB GPU(s), for 50 epochs on [FFHQ](https://github.com/NVlabs/ffhq-dataset) (1024x1024 images) and for 20 epochs on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) on V100 32GB GPU(s).

| | Train Data | 1 GPU | 4 GPUs | 2x4 GPUs |
|:---:|:---:|:---:|:---:|:---:|
| MNIST (VQVAE) | 28x28 images (50k) | 235.18 s | 62.00 s | 35.86 s |
| FFHQ 1024x1024 (VQVAE) | 1024x1024 RGB images (60k) | 19h 1min | 5h 6min | 2h 37min |
| ImageNet-1k 128x128 (VQVAE) | 128x128 RGB images (~ 1.2M) | 6h 25min | 1h 41min | 51min 26s |

For each dataset, we provide the benchmarking scripts [here](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts).

## Sharing your models with the HuggingFace Hub 🤗

Pythae also allows you to share your models on the [HuggingFace Hub](https://huggingface.co/models). To do so you need:
- a valid HuggingFace account
- the package `huggingface_hub` installed in your virtual env. If not, you can install it with

```
$ python -m pip install huggingface_hub
```

- to be logged in to your HuggingFace account using

```
$ huggingface-cli login
```

### Uploading a model to the Hub

Any pythae model can be easily uploaded using the method `push_to_hf_hub`

```python
>>> my_vae_model.push_to_hf_hub(hf_hub_path="your_hf_username/your_hf_hub_repo")
```

**Note:** If `your_hf_hub_repo` already exists and is not empty, files will be overridden. If the repo `your_hf_hub_repo` does not exist, a repo with the same name will be created.

### Downloading models from the Hub

Equivalently, you can download or reload any Pythae model directly from the Hub using the method `load_from_hf_hub`

```python
>>> from pythae.models import AutoModel
>>> my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")
```

## Monitoring your experiments with `wandb` 🧪

Pythae also integrates the experiment tracking tool [wandb](https://wandb.ai/), allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- a valid wandb account
- the package `wandb` installed in your virtual env. If not, you can install it with

```
$ pip install wandb
```

- to be logged in to your wandb account using

```
$ wandb login
```

### Creating a `WandbCallback`

Launching an experiment monitoring with `wandb` in pythae is pretty simple. The only thing a user needs to do is create a `WandbCallback` instance...

```python
>>> # Create your callback
>>> from pythae.trainers.training_callbacks import WandbCallback
>>> callbacks = []  # the TrainingPipeline expects a list of callbacks
>>> wandb_cb = WandbCallback()  # Build the callback
>>> # Set up the callback
>>> wandb_cb.setup(
...     training_config=your_training_config,  # training config
...     model_config=your_model_config,        # model config
...     project_name="your_wandb_project",     # specify your wandb project
...     entity_name="your_wandb_entity"        # specify your wandb entity
... )
>>> callbacks.append(wandb_cb)  # Add it to the callbacks list
```

...and then pass it to the `TrainingPipeline`.

```python
>>> pipeline = TrainingPipeline(
...     training_config=config,
...     model=model
... )
>>> pipeline(
...     train_data=train_dataset,
...     eval_data=eval_dataset,
...     callbacks=callbacks  # pass the callbacks to the TrainingPipeline and you are done!
... )
>>> # You can log in to https://wandb.ai/your_wandb_entity/your_wandb_project to monitor your training
```

See the detailed tutorial in the [tutorials](#getting-your-hands-on-the-code) section below.

## Monitoring your experiments with `mlflow` 🧪

Pythae also integrates the experiment tracking tool [mlflow](https://mlflow.org/), allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- the package `mlflow` installed in your virtual env. If not, you can install it with

```
$ pip install mlflow
```

### Creating a `MLFlowCallback`

Launching an experiment monitoring with `mlflow` in pythae is pretty simple. The only thing a user needs to do is create a `MLFlowCallback` instance...

```python
>>> # Create your callback
>>> from pythae.trainers.training_callbacks import MLFlowCallback
>>> callbacks = []  # the TrainingPipeline expects a list of callbacks
>>> mlflow_cb = MLFlowCallback()  # Build the callback
>>> # Set up the callback
>>> mlflow_cb.setup(
...     training_config=your_training_config,  # training config
...     model_config=your_model_config,        # model config
...     run_name="mlflow_cb_example"           # specify your mlflow run
... )
>>> callbacks.append(mlflow_cb)  # Add it to the callbacks list
```

...and then pass it to the `TrainingPipeline`.

```python
>>> pipeline = TrainingPipeline(
...     training_config=config,
...     model=model
... )
>>> pipeline(
...     train_data=train_dataset,
...     eval_data=eval_dataset,
...     callbacks=callbacks  # pass the callbacks to the TrainingPipeline and you are done!
... )
```

You can visualize your metrics by running the following in the directory containing the `./mlruns` folder

```bash
$ mlflow ui
```

See the detailed tutorial in the [tutorials](#getting-your-hands-on-the-code) section below.

## Monitoring your experiments with `comet_ml` 🧪

Pythae also integrates the experiment tracking tool [comet_ml](https://www.comet.com/signup?utm_source=pythae&utm_medium=partner&utm_campaign=AMS_US_EN_SNUP_Pythae_Comet_Integration), allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- the package `comet_ml` installed in your virtual env. If not, you can install it with

```
$ pip install comet_ml
```

### Creating a `CometCallback`

Launching an experiment monitoring with `comet_ml` in pythae is pretty simple. The only thing a user needs to do is create a `CometCallback` instance...

```python
>>> # Create your callback
>>> from pythae.trainers.training_callbacks import CometCallback
>>> callbacks = []  # the TrainingPipeline expects a list of callbacks
>>> comet_cb = CometCallback()  # Build the callback
>>> # Set up the callback
>>> comet_cb.setup(
...     training_config=training_config,   # training config
...     model_config=model_config,         # model config
...     api_key="your_comet_api_key",      # specify your comet api-key
...     project_name="your_comet_project"  # specify your comet project
...     # offline_run=True,                # run in offline mode
...     # offline_directory='my_offline_runs'  # set the directory to store the offline runs
... )
>>> callbacks.append(comet_cb)  # Add it to the callbacks list
```

...and then pass it to the `TrainingPipeline`.

```python
>>> pipeline = TrainingPipeline(
...     training_config=config,
...     model=model
... )
>>> pipeline(
...     train_data=train_dataset,
...     eval_data=eval_dataset,
...     callbacks=callbacks  # pass the callbacks to the TrainingPipeline and you are done!
... )
>>> # You can log in to https://comet.com/your_comet_username/your_comet_project to monitor your training
```

See the detailed tutorial in the [tutorials](#getting-your-hands-on-the-code) section below.

## Getting your hands on the code

To help you understand the way pythae works and how you can train your models with this library we also provide tutorials:

- [making_your_own_autoencoder.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to pass your own networks to the models implemented in pythae [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/making_your_own_autoencoder.ipynb)

- [custom_dataset.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to use custom datasets with any of the models implemented in pythae [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/custom_dataset.ipynb)

- [hf_hub_models_sharing.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to upload and download models to and from the HuggingFace Hub [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/hf_hub_models_sharing.ipynb)

- [wandb_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to monitor your experiments using `wandb` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/wandb_experiment_monitoring.ipynb)

- [mlflow_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to monitor your experiments using `mlflow` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/mlflow_experiment_monitoring.ipynb)

- [comet_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to monitor your experiments using `comet_ml` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/comet_experiment_monitoring.ipynb)
- the [models_training](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks/models_training) folder provides notebooks showing how to train each implemented model and how to sample from it using `pythae.samplers`.

- the [scripts](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts) folder provides in particular an example of a training script to train the models on benchmark data sets (mnist, cifar10, celeba ...)

## Dealing with issues 🛠️

If you are experiencing any issues while running the code or want to request new features/models to be implemented, please [open an issue on github](https://github.com/clementchadebec/benchmark_VAE/issues).

## Contributing 🚀

You want to contribute to this library by adding a model, a sampler or simply fixing a bug? That's awesome! Thank you! Please see [CONTRIBUTING.md](https://github.com/clementchadebec/benchmark_VAE/tree/main/CONTRIBUTING.md) to follow the main contributing guidelines.

## Results

### Reconstruction

First, let's have a look at the reconstructed samples taken from the evaluation set.

| Models | MNIST | CELEBA |
|:---:|:---:|:---:|
| Eval data | ![Eval](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_813aed68332f.png) | ![Eval](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_6ab51dabb88b.png) |
| AE | ![AE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_93f0902f5cc6.png) | ![AE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_a835b12e1ddf.png) |
| VAE | ![VAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_02e97d4b1789.png) | ![VAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_29ef5e77b1bf.png) |
| Beta-VAE | ![Beta](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_d35ab47df4e1.png) | ![Beta](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_f8b782bd270c.png) |
| VAE Lin NF | ![VAE_LinNF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_1f96a6546469.png) | ![VAE_LinNF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_6f2840e396f4.png) |
| VAE IAF | ![VAE_IAF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_9f3e5efebaef.png) | ![VAE_IAF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_b703ac95798d.png) |
| Disentangled Beta-VAE | ![Disentangled Beta](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_315d08d2bd1f.png) | ![Disentangled Beta](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_cbd33c51c90a.png) |
| FactorVAE | ![FactorVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_d1b747a78a2c.png) | ![FactorVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_3a461928bfe1.png) |
| BetaTCVAE | ![BetaTCVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_c1ea3b7cb131.png) | ![BetaTCVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_a88f9743c95d.png) |
| IWAE | ![IWAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_31ef7c11e1ac.png) | ![IWAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_3cad25762b15.png) |
| MSSSIM_VAE | ![MSSSIM VAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_b63cb20596da.png) | ![MSSSIM VAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_32f1bca23e21.png) |
| WAE | ![WAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_850c5b78ae60.png) | ![WAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_f81e7df461b9.png) |
| INFO VAE | ![INFO](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_a9ad0818e721.png) | ![INFO](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_8231b27c8f5b.png) |
| VAMP | ![VAMP](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_0a3b630303bd.png) | ![VAMP](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_70999ec4d268.png) |
| SVAE | ![SVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_8ba701bc5227.png) | ![SVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_daca350df894.png) |
| Adversarial_AE | ![AAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_182f088add5d.png) | ![AAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_c5c602ef439b.png) |
| VAE_GAN | ![VAEGAN](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_0da2c22eaba1.png) | ![VAEGAN](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_9df2e52c7e57.png) |
| VQVAE | ![VQVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_ad90048621b2.png) | ![VQVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_729eedc6c35a.png) |
| HVAE | ![HVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_642e9c0a7058.png) | ![HVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_a22e0aa98c11.png) |
| RAE_L2 | ![RAE L2](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_215c7ee2ea82.png) | ![RAE L2](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_18c78253bc6d.png) |
| RAE_GP | ![RAE GP](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_016dc844c94f.png) | ![RAE GP](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_4261db3581c0.png) |
| Riemannian Hamiltonian VAE (RHVAE) | ![RHVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_0ee1e99d4d6b.png) | ![RHVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_65fb4622559d.png) |

----------------------------
### Generation

Here, we show the generated samples using each model implemented in the library and different samplers.

| Models | MNIST | CELEBA |
|:---:|:---:|:---:|
| AE + GaussianMixtureSampler | ![AE GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_af2580fa2cb0.png) | ![AE GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_81b5c9c13017.png) |
| VAE + NormalSampler | ![VAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_41352754f8ed.png) | ![VAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_4ccfdc1312f4.png) |
| VAE + GaussianMixtureSampler | ![VAE GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_a7eec46a82b8.png) | ![VAE GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_25cee0dcec78.png) |
| VAE + TwoStageVAESampler | ![VAE 2 stage](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_398aec9d2ec5.png) | ![VAE 2 stage](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_405f45e1d88a.png) |
| VAE + MAFSampler | ![VAE MAF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_0cdc50a2e4df.png) | ![VAE MAF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_b66f1a25c284.png) |
| Beta-VAE + NormalSampler | ![Beta Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_021aa84b8e4d.png) | ![Beta Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_76eac2e91260.png) |
| VAE Lin NF + NormalSampler | ![VAE_LinNF Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_499a7db998fd.png) | ![VAE_LinNF Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_865cfb79fbcd.png) |
| VAE IAF + NormalSampler | ![VAE_IAF Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_5764a109b6c9.png) | ![VAE IAF Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_18a839d1baa2.png) |
| Disentangled Beta-VAE + NormalSampler | ![Disentangled Beta Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_6f5477838561.png) | ![Disentangled Beta Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_c8b58ac4606b.png) |
| FactorVAE + NormalSampler | ![FactorVAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_1fffb84ad15b.png) | ![FactorVAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_32a44c688cee.png) |
| BetaTCVAE + NormalSampler | ![BetaTCVAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_92882934d1df.png) | ![BetaTCVAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_899fd14fbb76.png) |
| IWAE + NormalSampler | ![IWAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_d107e5a143ee.png) | ![IWAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_24bc22b4dad2.png) |
| MSSSIM_VAE + NormalSampler | ![MSSSIM_VAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_09a075241a29.png) | ![MSSSIM_VAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_1b2bf83297e3.png) |
| WAE + NormalSampler | ![WAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_62a755afa275.png) | ![WAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_320db004dceb.png) |
| INFO VAE + NormalSampler | ![INFO Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_8dbe88ca5ef4.png) | ![INFO Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_292197808ae9.png) |
| SVAE + HypersphereUniformSampler | ![SVAE Sphere](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_a39447c5b3e5.png) | ![SVAE Sphere](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_0891e0c75e67.png) |
| VAMP + VAMPSampler | ![VAMP Vamp](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_151a77eebda7.png) | ![VAMP Vamp](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_127c0dbc0445.png) |
| Adversarial_AE + NormalSampler | ![AAE_Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_99bde3659956.png) | ![AAE_Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_1d905caab24c.png) |
| VAEGAN + NormalSampler | ![VAEGAN_Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_ad51a3696329.png) | ![VAEGAN_Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_88bdf5a769a3.png) |
| VQVAE + MAFSampler | ![VQVAE_MAF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_dbe66fe5f825.png) | ![VQVAE_MAF](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_e98970114e26.png) |
| HVAE + NormalSampler | ![HVAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_dbd584c4e7e0.png) | ![HVAE Normal](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_89375d2fdfc7.png) |
| RAE_L2 + GaussianMixtureSampler | ![RAE L2 GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_2715968eed16.png) | ![RAE L2 GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_78b1645d267b.png) |
| RAE_GP + GaussianMixtureSampler | ![RAE GP GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_73bedd224c69.png) | ![RAE GP GMM](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_cd767c6c1f50.png) |
| Riemannian Hamiltonian VAE (RHVAE) + RHVAESampler | ![RHVAE RHVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_5464505e2249.png) | ![RHVAE RHVAE](https://oss.gittoolsai.com/images/clementchadebec_benchmark_VAE_readme_277a2bd638b4.png) |

# Citation

If you find this work useful or use it in your research, please consider citing us

```bibtex
@inproceedings{chadebec2022pythae,
 author = {Chadebec, Cl\'{e}ment and Vincent, Louis and Allassonniere, Stephanie},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
 pages = {21575--21589},
 publisher = {Curran Associates, Inc.},
 title = {Pythae: Unifying Generative Autoencoders in Python - A Benchmarking Use Case},
 volume = {35},
 year = {2022}
}
```
href=\"https:\u002F\u002Fpythae.readthedocs.io\u002Fen\u002Flatest\u002F\">文档\u003C\u002Fa>\n\u003C\u002Fp>\n\t\n    \n# pythae \n\n该库以统一的实现方式实现了几种最常见的（变分）自编码器模型。特别地，它提供了通过使用相同的自动编码神经网络架构来训练这些模型，从而进行基准实验和比较的可能性。其“自定义自编码器”功能允许您使用自己的数据以及自定义的编码器和解码器神经网络来训练这些模型。该库集成了诸如 [wandb](https:\u002F\u002Fwandb.ai\u002F)、[mlflow](https:\u002F\u002Fmlflow.org\u002F) 或 [comet-ml](https:\u002F\u002Fwww.comet.com\u002Fsignup?utm_source=pythae&utm_medium=partner&utm_campaign=AMS_US_EN_SNUP_Pythae_Comet_Integration) 🧪 等实验监控工具，并支持在几行代码内从 [HuggingFace Hub](https:\u002F\u002Fhuggingface.co\u002Fmodels) 🤗 上共享和加载模型。\n\n**新闻** 📢\n\n自 v0.1.0 版本起，`Pythae` 现已支持使用 PyTorch 的 [DDP](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnotes\u002Fddp.html) 进行分布式训练。现在您可以更快地在更大的数据集上训练您喜爱的 VAE，而且仍然只需几行代码。\n请参阅我们的加速 [基准测试](#benchmark)。\n\n## 快速访问：\n- [安装](#installation)\n- [已实现的模型](#available-models) \u002F [已实现的采样器](#available-samplers)\n- [可重复性声明](#reproducibility) \u002F [结果呈现方式](#results)\n- [模型训练](#launching-a-model-training) \u002F [数据生成](#launching-data-generation) \u002F [自定义网络架构](#define-you-own-autoencoder-architecture) \u002F [分布式训练](#distributed-training-with-pythae)\n- [与 🤗 Hub 共享模型](#sharing-your-models-with-the-huggingface-hub-) \u002F [使用 `wandb` 监控实验](#monitoring-your-experiments-with-wandb-) \u002F [使用 `mlflow` 监控实验](#monitoring-your-experiments-with-mlflow-) \u002F [使用 `comet_ml` 监控实验](#monitoring-your-experiments-with-comet_ml-)\n- [教程](#getting-your-hands-on-the-code) \u002F [文档](https:\u002F\u002Fpythae.readthedocs.io\u002Fen\u002Flatest\u002F)\n- [贡献 🚀](#contributing-) \u002F [问题 🛠️](#dealing-with-issues-%EF%B8%8F)\n- [引用本仓库](#citation)\n\n# 安装\n\n要安装该库的最新稳定版，请使用 ``pip`` 运行以下命令：\n\n```bash\n$ pip install pythae\n``` \n\n要安装该库的最新 GitHub 版本，请使用 ``pip`` 运行以下命令：\n\n```bash\n$ pip install git+https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE.git\n``` \n\n或者，您也可以克隆 GitHub 仓库以访问测试、教程和脚本。\n```bash\n$ git clone https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE.git\n``` \n然后进入目录并安装库：\n```bash\n$ cd benchmark_VAE\n$ pip install -e .\n``` \n\n## 可用模型\n\n以下是当前库中已实现的模型列表。\n\n|               模型               |                                                                                    训练示例                                                                                    |                     论文                    |                           官方实现                          |\n|:----------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------:|:--------------------------------------------------------------------------:|\n| 自编码器 (AE)                   | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fae_training.ipynb) |                                              |                                                                            |\n| 变分自编码器 (VAE)      | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fvae_training.ipynb) | 
[链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6114)  |\n| Beta 变分自编码器 (BetaVAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fbeta_vae_training.ipynb) | [链接](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Sy2fzU9gl)  |   \n变分自编码器结合线性归一化流 (VAE_LinNF) |  [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fvae_lin_nf_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.05770) |         \n变分自编码器结合逆向自回归流 (VAE_IAF) |  [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fvae_iaf_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.04934) |  [链接](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fiaf)                                  |\n| 解耦合 Beta 变分自编码器 (DisentangledBetaVAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fdisentangled_beta_vae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03599)  |   \n| 因子分解解耦 (FactorVAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Ffactor_vae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.05983)  |                                                                            |\n| Beta-TC-VAE (BetaTCVAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fbeta_tc_vae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04942)  |  [链接](https:\u002F\u002Fgithub.com\u002Frtqichen\u002Fbeta-tcvae)\n| 重要性加权自编码器 (IWAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fiwae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1509.00519v4)  | [链接](https:\u002F\u002Fgithub.com\u002Fyburda\u002Fiwae)  \n| 多重重要性加权自编码器 (MIWAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fmiwae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04537)  |       \n| 部分重要性加权自编码器 (PIWAE) | [![在 Colab 
中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fpiwae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04537)  |       \n| 组合重要性加权自编码器 (CIWAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fciwae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04537)  |                                                                             |\n| 基于感知度量相似性 (MSSSIM) 的变分自编码器      | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fms_ssim_vae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06409)  |\n| Wasserstein 自编码器 (WAE)      | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fwae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.01558) | [链接](https:\u002F\u002Fgithub.com\u002Ftolstikhin\u002Fwae)                                  |\n| 信息变分自编码器 (INFOVAE_MMD)      | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Finfo_vae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.02262) |                                   |\n| VAMP 自编码器 (VAMP)            | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fvamp_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07120) | [链接](https:\u002F\u002Fgithub.com\u002Fjmtomczak\u002Fvae_vampprior)                         |\n| 超球面变分自编码器 (SVAE)            | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fsvae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00891) | [链接](https:\u002F\u002Fgithub.com\u002Fnicola-decao\u002Fs-vae-pytorch)\n| 庞加莱圆盘变分自编码器 (PoincareVAE)            | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fpvae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.06033) | [链接](https:\u002F\u002Fgithub.com\u002Femilemathieu\u002Fpvae)                         |\n| 对抗自编码器 (Adversarial_AE)                 
  | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fadversarial_ae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644)\n| 变分自编码器 GAN (VAEGAN) 🥗 | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fvaegan_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.09300) | [链接](https:\u002F\u002Fgithub.com\u002Fandersbll\u002Fautoencoding_beyond_pixels)\n| 向量量化变分自编码器 (VQVAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fvqvae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00937) | [链接](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fsonnet\u002Fblob\u002Fv2\u002Fsonnet\u002F)\n| 哈密顿变分自编码器 (HVAE)             | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fhvae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.11328) | [链接](https:\u002F\u002Fgithub.com\u002Fanthonycaterini\u002Fhvae-nips)                       |\n| 使用 L2 解码器参数正则化的自编码器 (RAE_L2) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Frae_l2_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.12436) | [链接](https:\u002F\u002Fgithub.com\u002FParthaEth\u002FRegularized_autoencoders-RAE-\u002Ftree\u002Fmaster\u002F) |\n| 使用梯度惩罚正则化的自编码器 (RAE_GP) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Frae_gp_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.12436) | [链接](https:\u002F\u002Fgithub.com\u002FParthaEth\u002FRegularized_autoencoders-RAE-\u002Ftree\u002Fmaster\u002F) |\n| 黎曼哈密顿变分自编码器 (RHVAE) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Frhvae_training.ipynb) | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.00026) | [链接](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fpyraug)|\n| 层次残差量化 (HRQVAE) | [![在 Colab 
中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training\u002Fhrqvae_training.ipynb) | [链接](https:\u002F\u002Faclanthology.org\u002F2022.acl-long.178\u002F) | [链接](https:\u002F\u002Fgithub.com\u002Ftomhosking\u002Fhrq-vae)|\n\n**请参阅[重建](#Reconstruction)和[生成](#Generation)结果，了解所有上述模型的表现**\n\n\n\n## 可用的采样器\n\n以下是当前库中已实现的采样器列表。\n\n|                采样器               |   模型  \t\t  | 论文 \t\t\t\t\t\t\t\t\t\t\t  | 官方实现 \t\t\t\t  |\n|:-------------------------------------:|:-------------------:|:-------------------------------------------------:|:-----------------------------------------:|\n| 正态先验 (NormalSampler)                         | 所有模型\t\t  | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6114)\t\t  |\n| 高斯混合 (GaussianMixtureSampler) | 所有模型\t\t  | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.12436) \t  | [链接](https:\u002F\u002Fgithub.com\u002FParthaEth\u002FRegularized_autoencoders-RAE-\u002Ftree\u002Fmaster\u002Fmodels\u002Frae) |\n| 两阶段VAE采样器 (TwoStageVAESampler)\t\t\t\t\t| 所有基于VAE的模型| [链接](https:\u002F\u002Fopenreview.net\u002Fpdf?id=B1e0X3C9tQ)  | [链接](https:\u002F\u002Fgithub.com\u002Fdaib13\u002FTwoStageVAE\u002F) |\n| 单位球面均匀采样器 (HypersphereUniformSampler)                     |    SVAE  \t\t  | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00891)      |\t\t[链接](https:\u002F\u002Fgithub.com\u002Fnicola-decao\u002Fs-vae-pytorch)\n| 庞加莱圆盘采样器 (PoincareDiskSampler)                     |    PoincareVAE  \t\t  | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.06033)      |\t\t[链接](https:\u002F\u002Fgithub.com\u002Femilemathieu\u002Fpvae)\n| VAMP先验采样器 (VAMPSampler)                   |    VAMP   \t\t  | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07120) \t  | [链接](https:\u002F\u002Fgithub.com\u002Fjmtomczak\u002Fvae_vampprior) |\n| 流形采样器 (RHVAESampler)                     |    RHVAE  \t\t  | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.00026)      |\t[链接](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fpyraug)|\n| 掩码自回归流采样器 (MAFSampler) | 所有模型 | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07057v4)      |\t[链接](https:\u002F\u002Fgithub.com\u002Fgpapamak\u002Fmaf) |\n| 逆向自回归流采样器 (IAFSampler) | 所有模型 | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.04934) |  [链接](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fiaf)             |   \n| PixelCNN (PixelCNNSampler) | VQVAE | [链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.05328) |             |                     \n\n## 可复现性\n\n我们通过复现原始论文中的一些结果来验证实现的正确性，前提是官方代码已发布，或者论文的实验部分提供了足够详细的信息。更多详情请参阅[可复现性](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fscripts\u002Freproducibility)。\n\n## 启动模型训练\n\n要启动模型训练，只需调用一个 `TrainingPipeline` 实例即可。\n\n```python\n>>> from pythae.pipelines import TrainingPipeline\n>>> from pythae.models import VAE, VAEConfig\n>>> from pythae.trainers import BaseTrainerConfig\n\n>>> # 设置训练配置\n>>> my_training_config = BaseTrainerConfig(\n...\toutput_dir='my_model',\n...\tnum_epochs=50,\n...\tlearning_rate=1e-3,\n...\tper_device_train_batch_size=200,\n...\tper_device_eval_batch_size=200,\n...\ttrain_dataloader_num_workers=2,\n...\teval_dataloader_num_workers=2,\n...\tsteps_saving=20,\n...\toptimizer_cls="AdamW",\n...\toptimizer_params={"weight_decay": 0.05, "betas": (0.91, 0.995)},\n...\tscheduler_cls="ReduceLROnPlateau",\n...\tscheduler_params={"patience": 5, "factor": 0.5}\n... )\n>>> # 设置模型配置\n>>> my_vae_config = VAEConfig(\n...\tinput_dim=(1, 28, 28),\n...\tlatent_dim=10\n... )\n>>> # 构建模型\n>>> my_vae_model = VAE(\n...\tmodel_config=my_vae_config\n... )\n>>> # 构建流水线\n>>> pipeline = TrainingPipeline(\n... \ttraining_config=my_training_config,\n... \tmodel=my_vae_model\n... )\n>>> # 启动流水线\n>>> pipeline(\n...\ttrain_data=your_train_data, # 必须是 torch.Tensor、np.array 或 torch 数据集\n...\teval_data=your_eval_data # 必须是 torch.Tensor、np.array 或 torch 数据集\n... )\n```\n\n训练结束后，最佳模型权重、模型配置和训练配置将存储在 `my_model\u002FMODEL_NAME_training_YYYY-MM-DD_hh-mm-ss` 文件夹中的 `final_model` 目录下（其中 `my_model` 是 `BaseTrainerConfig` 的 `output_dir` 参数）。如果进一步设置了 `steps_saving` 参数，则还会在 `my_model\u002FMODEL_NAME_training_YYYY-MM-DD_hh-mm-ss` 中出现名为 `checkpoint_epoch_k` 的文件夹，其中包含第 *k* 个 epoch 时的最佳模型权重、优化器、调度器、配置和训练配置。\n\n
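这些产物可以直接用 `AutoModel` 重新加载。下面是一个最小示意（假设 `output_dir` 为 `my_model`，且其中只包含训练生成的带时间戳文件夹）：\n\n```python\n>>> import os\n>>> from pythae.models import AutoModel\n>>> # 按名称排序取最近一次训练的输出文件夹（文件夹名含时间戳）\n>>> last_training = sorted(os.listdir('my_model'))[-1]\n>>> # 从 final_model 子目录重新加载最佳权重与配置\n>>> my_trained_vae = AutoModel.load_from_folder(\n...\tos.path.join('my_model', last_training, 'final_model')\n... )\n```\n\n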
## 在基准数据集上启动训练\n我们还在[此处](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fscripts\u002Ftraining.py)提供了一个训练脚本示例，可用于在基准数据集（mnist、cifar10、celeba 等）上训练模型。该脚本可以通过以下命令行启动：\n\n```bash\npython training.py --dataset mnist --model_name ae --model_config 'configs\u002Fae_config.json' --training_config 'configs\u002Fbase_training_config.json'\n```\n\n有关此脚本的更多详细信息，请参阅 [README.md](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fscripts\u002FREADME.md)。\n\n## 启动数据生成\n\n### 使用 `GenerationPipeline`\n\n从已训练模型中启动数据生成的最简单方法是使用 Pythae 内置的 `GenerationPipeline`。假设您想使用 `MAFSampler` 生成 100 个样本，您只需执行以下步骤：1) 重新加载已训练的模型，2) 定义采样器的配置，3) 创建并启动 `GenerationPipeline`，如下所示：\n\n```python\n>>> from pythae.models import AutoModel\n>>> from pythae.samplers import MAFSamplerConfig\n>>> from pythae.pipelines import GenerationPipeline\n>>> # 恢复已训练的模型\n>>> my_trained_vae = AutoModel.load_from_folder(\n...\t'path\u002Fto\u002Fyour\u002Ftrained\u002Fmodel'\n... )\n>>> my_sampler_config = MAFSamplerConfig(\n...\tn_made_blocks=2,\n...\tn_hidden_in_made=3,\n...\thidden_size=128\n... )\n>>> # 构建流水线\n>>> pipe = GenerationPipeline(\n...\tmodel=my_trained_vae,\n...\tsampler_config=my_sampler_config\n... )\n>>> # 启动数据生成\n>>> generated_samples = pipe(\n...\tnum_samples=100, # 生成 100 个样本\n...\treturn_gen=True, # 如果为假则不返回任何内容\n...\ttrain_data=train_data, # 用于拟合采样器\n...\teval_data=eval_data, # 用于拟合采样器\n...\ttraining_config=BaseTrainerConfig(num_epochs=200) # 用于拟合采样器的训练配置\n... )\n```\n\n
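`return_gen=True` 时返回的是生成样本本身，可直接做后续处理。下面是一个用 `torchvision` 将生成结果保存为网格图的小示意（假设样本是形如 (N, C, H, W) 的图像张量）：\n\n```python\n>>> from torchvision.utils import save_image\n>>> # 将 100 个生成样本排成 10×10 的网格并保存为 PNG\n>>> save_image(generated_samples, 'generated_grid.png', nrow=10)\n```\n\n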
### 使用采样器\n\n或者，你也可以直接通过采样器从训练好的模型中启动数据生成过程。例如，要使用你的采样器生成新数据，可以运行以下代码：\n\n```python\n>>> from pythae.models import AutoModel\n>>> from pythae.samplers import NormalSampler\n>>> # 获取训练好的模型\n>>> my_trained_vae = AutoModel.load_from_folder(\n...\t'path\u002Fto\u002Fyour\u002Ftrained\u002Fmodel'\n... )\n>>> # 定义你的采样器\n>>> my_sampler = NormalSampler(\n...\tmodel=my_trained_vae\n... )\n>>> # 生成样本\n>>> gen_data = my_sampler.sample(\n...\tnum_samples=50,\n...\tbatch_size=10,\n...\toutput_dir=None,\n...\treturn_gen=True\n... )\n```\n如果你将 `output_dir` 设置为一个特定路径，生成的图像将会以 `.png` 文件的形式保存，文件名分别为 `00000000.png`, `00000001.png` 等等。\n\n只要模型适配，采样器就可以用于任何模型。例如，`GaussianMixtureSampler` 实例可以用于任何模型，而 `VAMPSampler` 只能与 `VAMP` 模型一起使用。请查看 [此处](#available-samplers) 以了解哪些采样器适用于你的模型。请注意，某些采样器，比如 `GaussianMixtureSampler`，在使用前可能需要调用 `fit` 方法进行拟合。以下是 `GaussianMixtureSampler` 的示例：\n\n```python\n>>> from pythae.models import AutoModel\n>>> from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig\n>>> # 获取训练好的模型\n>>> my_trained_vae = AutoModel.load_from_folder(\n...\t'path\u002Fto\u002Fyour\u002Ftrained\u002Fmodel'\n... )\n>>> # 定义你的采样器\n>>> gmm_sampler_config = GaussianMixtureSamplerConfig(\n...\tn_components=10\n... )\n>>> my_sampler = GaussianMixtureSampler(\n...\tsampler_config=gmm_sampler_config,\n...\tmodel=my_trained_vae\n... )\n>>> # 拟合采样器\n>>> my_sampler.fit(train_dataset)\n>>> # 生成样本\n>>> gen_data = my_sampler.sample(\n...\tnum_samples=50,\n...\tbatch_size=10,\n...\toutput_dir=None,\n...\treturn_gen=True\n... )\n```\n\n\n## 自定义自编码器架构\n\nPythae 提供了在 VAE 模型中定义自定义神经网络的可能性。例如，假设你想训练一个带有特定编码器和解码器的 Wasserstein AE，你可以这样做：\n\n```python\n>>> import torch\n>>> from pythae.models.nn import BaseEncoder, BaseDecoder\n>>> from pythae.models.base.base_utils import ModelOutput\n>>> class My_Encoder(BaseEncoder):\n...\tdef __init__(self, args=None): # Args 是一个 ModelConfig 实例\n...\t\tBaseEncoder.__init__(self)\n...\t\tself.layers = my_nn_layers()\n...\t\t\n...\tdef forward(self, x:torch.Tensor) -> ModelOutput:\n...\t\tout = self.layers(x)\n...\t\toutput = ModelOutput(\n...\t\t\tembedding=out # 将编码器的输出放入 ModelOutput 实例中\n...\t\t)\n...\t\treturn output\n\n>>> class My_Decoder(BaseDecoder):\n...\tdef __init__(self, args=None):\n...\t\tBaseDecoder.__init__(self)\n...\t\tself.layers = my_nn_layers()\n...\t\t\n...\tdef forward(self, x:torch.Tensor) -> ModelOutput:\n...\t\tout = self.layers(x)\n...\t\toutput = ModelOutput(\n...\t\t\treconstruction=out # 将解码器的输出放入 ModelOutput 实例中\n...\t\t)\n...\t\treturn output\n\n>>> my_encoder = My_Encoder()\n>>> my_decoder = My_Decoder()\n```\n\n然后构建模型：\n\n```python\n>>> from pythae.models import WAE_MMD, WAE_MMD_Config\n>>> # 设置模型配置\n>>> my_wae_config = WAE_MMD_Config(\n...\tinput_dim=(1, 28, 28),\n...\tlatent_dim=10\n... )\n\n>>> # 构建模型\n>>> my_wae_model = WAE_MMD(\n...\tmodel_config=my_wae_config,\n...\tencoder=my_encoder, # 在构建模型时传入你的编码器\n...\tdecoder=my_decoder # 在构建模型时传入你的解码器\n... )\n```\n\n**重要提示 1**：对于所有基于 AE 的模型（AE、WAE、RAE_L2、RAE_GP），编码器和解码器都必须返回一个 `ModelOutput` 实例。对于编码器，`ModelOutput` 实例必须在 `embedding` 键下包含嵌入向量。对于解码器，`ModelOutput` 实例必须在 `reconstruction` 键下包含重建结果。\n\n\n**重要提示 2**：对于所有基于 VAE 的模型（VAE、BetaVAE、IWAE、HVAE、VAMP、RHVAE），编码器和解码器都必须返回一个 `ModelOutput` 实例。对于编码器，`ModelOutput` 实例必须分别在 `embedding` 和 `log_covariance` 键下包含嵌入向量和对数协方差矩阵（形状为 batch_size × latent_space_dim）。对于解码器，`ModelOutput` 实例必须在 `reconstruction` 键下包含重建结果。\n\n
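针对重要提示 2，下面给出一个可直接运行的最小示意（网络结构是假设的，仅用于演示 `embedding` 与 `log_covariance` 两个键的返回方式，两者形状均为 batch_size × latent_space_dim）：\n\n```python\n>>> import torch\n>>> import torch.nn as nn\n>>> from pythae.models.nn import BaseEncoder\n>>> from pythae.models.base.base_utils import ModelOutput\n>>> class My_VAE_Encoder(BaseEncoder):\n...\tdef __init__(self, args=None):\n...\t\tBaseEncoder.__init__(self)\n...\t\t# 假设输入展平后为 784 维（1×28×28），潜在空间为 16 维\n...\t\tself.fc = nn.Linear(784, 256)\n...\t\tself.embedding_layer = nn.Linear(256, 16)\n...\t\tself.log_var_layer = nn.Linear(256, 16)\n...\t\t\n...\tdef forward(self, x: torch.Tensor) -> ModelOutput:\n...\t\th = torch.relu(self.fc(x.reshape(x.shape[0], -1)))\n...\t\treturn ModelOutput(\n...\t\t\tembedding=self.embedding_layer(h), # 均值嵌入\n...\t\t\tlog_covariance=self.log_var_layer(h) # 对角对数协方差\n...\t\t)\n```\n\n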
## 使用基准神经网络\n\n你还可以找到针对最常见数据集（如 MNIST、CIFAR、CELEBA 等）的预定义神经网络架构，可以通过以下方式加载：\n\n```python\n>>> from pythae.models.nn.benchmark.mnist import (\n...\tEncoder_Conv_AE_MNIST, # 用于基于 AE 的模型（仅返回嵌入）\n...\tEncoder_Conv_VAE_MNIST, # 用于基于 VAE 的模型（返回嵌入和对数协方差）\n...\tDecoder_Conv_AE_MNIST\n... )\n```\n将 *mnist* 替换为 *cifar* 或 *celeba*，即可访问其他神经网络。\n\n## 使用 Pythae 进行分布式训练\n\n自 `v0.1.0` 起，Pythae 现已支持使用 PyTorch 的 [DDP](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnotes\u002Fddp.html) 进行分布式训练。这使你能够利用多 GPU 和\u002F或多节点训练，更快地在更大的数据集上训练你喜欢的 VAE。\n\n为此，你可以编写一个 Python 脚本，然后由启动程序（如集群上的 `srun`）来运行该脚本。脚本中唯一需要做的就是在训练配置中直接指定与分布式环境相关的参数，如下所示：\n\n```python\n>>> training_config = BaseTrainerConfig(\n...     num_epochs=10,\n...     learning_rate=1e-3,\n...     per_device_train_batch_size=64,\n...     per_device_eval_batch_size=64,\n...     train_dataloader_num_workers=8,\n...     eval_dataloader_num_workers=8,\n...     dist_backend="nccl", # 分布式后端\n...     world_size=8, # 使用的 GPU 数量（节点数 × 每个节点的 GPU 数量）\n...     rank=5, # 全局 GPU ID\n...     local_rank=1, # 节点内的 GPU ID\n...     master_addr="localhost", # 主节点地址\n...     master_port="12345" # 主节点端口\n... )\n```\n\n请参阅此 [示例脚本](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fscripts\u002Fdistributed_training_imagenet.py)，其中定义了一个在 ImageNet 数据集上进行多 GPU VQVAE 训练的脚本。请注意，分布式环境变量（`world_size`、`rank` 等）的获取方式可能因你使用的集群和启动程序而异。\n\n
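例如，在使用 `torchrun` 这类启动器时，可以从它注入的环境变量中读取这些参数再填入配置。下面是一个示意（环境变量名沿用 PyTorch 启动器的约定，未设置时退化为单卡）：\n\n```python\n>>> import os\n>>> from pythae.trainers import BaseTrainerConfig\n>>> # 从启动器注入的环境变量中读取分布式参数\n>>> training_config = BaseTrainerConfig(\n...     num_epochs=10,\n...     learning_rate=1e-3,\n...     per_device_train_batch_size=64,\n...     per_device_eval_batch_size=64,\n...     dist_backend="nccl",\n...     world_size=int(os.environ.get("WORLD_SIZE", 1)),\n...     rank=int(os.environ.get("RANK", 0)),\n...     local_rank=int(os.environ.get("LOCAL_RANK", 0)),\n...     master_addr=os.environ.get("MASTER_ADDR", "localhost"),\n...     master_port=os.environ.get("MASTER_PORT", "12345")\n... )\n```\n\n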
### 基准测试\n\n以下是使用 `Pythae` 在 V100 16GB GPU 上对 MNIST 数据集训练 100 个 epoch 的向量量化变分自编码器 (VQ-VAE)、在 [FFHQ](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fffhq-dataset)（1024×1024 图像）上训练 50 个 epoch，以及在 V100 32GB GPU 上对 [ImageNet-1k](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fimagenet-1k) 训练 20 个 epoch 的训练时间。\n\n|  | 训练数据 | 1 张 GPU | 4 张 GPU | 2×4 张 GPU |\n|:---:|:---:|:---:|:---:|---|\n| MNIST (VQ-VAE) | 28×28 图像（5 万张） | 235.18 秒 | 62.00 秒 | 35.86 秒 |\n| FFHQ 1024×1024 (VQVAE) | 1024×1024 RGB 图像（6 万张） | 19 小时 1 分钟 | 5 小时 6 分钟 | 2 小时 37 分钟 |\n| ImageNet-1k 128×128 (VQVAE) | 128×128 RGB 图像（约 120 万张） | 6 小时 25 分钟 | 1 小时 41 分钟 | 51 分钟 26 秒 |\n\n\n对于每个数据集，我们提供了基准测试脚本，可在[这里](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fscripts)找到。\n\n\n## 使用 HuggingFace Hub 分享你的模型 🤗\nPythae 还允许你将模型分享到 [HuggingFace Hub](https:\u002F\u002Fhuggingface.co\u002Fmodels)。为此你需要：\n- 一个有效的 HuggingFace 账号\n- 在你的虚拟环境中安装了 `huggingface_hub` 包。如果没有，可以通过以下命令安装：\n```\n$ python -m pip install huggingface_hub\n```\n- 使用以下命令登录你的 HuggingFace 账号：\n```\n$ huggingface-cli login\n```\n\n### 将模型上传到 Hub\n任何 Pythae 模型都可以通过 `push_to_hf_hub` 方法轻松上传：\n```python\n>>> my_vae_model.push_to_hf_hub(hf_hub_path="your_hf_username\u002Fyour_hf_hub_repo")\n```\n**注意：** 如果 `your_hf_hub_repo` 已经存在且不为空，文件将会被覆盖。如果该仓库不存在，系统会创建一个同名的文件夹。\n\n### 从 Hub 下载模型\n同样地，你可以直接从 Hub 下载或重新加载任何 Pythae 模型，只需使用 `load_from_hf_hub` 方法：\n```python\n>>> from pythae.models import AutoModel\n>>> my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")\n```\n\n## 使用 `wandb` 监控你的实验 🧪\nPythae 还集成了实验跟踪工具 [wandb](https:\u002F\u002Fwandb.ai\u002F)，允许用户存储配置、监控训练过程，并通过图形界面比较不同运行的结果。要使用此功能，你需要：\n- 一个有效的 wandb 账号\n- 在你的虚拟环境中安装了 `wandb` 包。如果没有，可以通过以下命令安装：\n```\n$ pip install wandb\n```\n- 使用以下命令登录你的 wandb 账号：\n```\n$ wandb login\n```\n\n### 创建 `WandbCallback`\n在 Pythae 中使用 `wandb` 启动实验监控非常简单。用户只需创建一个 `WandbCallback` 实例……\n\n```python\n>>> # 创建回调\n>>> from pythae.trainers.training_callbacks import WandbCallback\n>>> callbacks = [] # TrainingPipeline 需要一个回调列表\n>>> wandb_cb = WandbCallback() # 构建回调\n>>> # 设置回调\n>>> wandb_cb.setup(\n...\ttraining_config=your_training_config, # 训练配置\n...\tmodel_config=your_model_config, # 模型配置\n...\tproject_name="your_wandb_project", # 指定你的 wandb 项目\n...\tentity_name="your_wandb_entity", # 指定你的 wandb 实体\n... )\n>>> callbacks.append(wandb_cb) # 添加到回调列表\n```\n……然后将其传递给 `TrainingPipeline`。\n```python\n>>> pipeline = TrainingPipeline(\n...\ttraining_config=config,\n...\tmodel=model\n... )\n>>> pipeline(\n...\ttrain_data=train_dataset,\n...\teval_data=eval_dataset,\n...\tcallbacks=callbacks # 将回调传递给 TrainingPipeline，大功告成！\n... )\n>>> # 你可以登录 https:\u002F\u002Fwandb.ai\u002Fyour_wandb_entity\u002Fyour_wandb_project 来监控你的训练\n```\n请参阅详细教程\n\n## 使用 `mlflow` 监控你的实验 🧪\nPythae 还集成了实验跟踪工具 [mlflow](https:\u002F\u002Fmlflow.org\u002F)，允许用户存储配置、监控训练过程，并通过图形界面比较不同运行的结果。要使用此功能，你需要：\n- 在你的虚拟环境中安装了 `mlflow` 包。如果没有，可以通过以下命令安装：\n```\n$ pip install mlflow\n```\n\n### 创建 `MLFlowCallback`\n在 Pythae 中使用 `mlflow` 启动实验监控非常简单。用户只需创建一个 `MLFlowCallback` 实例……\n\n```python\n>>> # 创建回调\n>>> from pythae.trainers.training_callbacks import MLFlowCallback\n>>> callbacks = [] # TrainingPipeline 需要一个回调列表\n>>> mlflow_cb = MLFlowCallback() # 构建回调\n>>> # 设置回调\n>>> mlflow_cb.setup(\n...\ttraining_config=your_training_config, # 训练配置\n...\tmodel_config=your_model_config, # 模型配置\n...\trun_name="mlflow_cb_example", # 指定你的 mlflow 运行名称\n... )\n>>> callbacks.append(mlflow_cb) # 添加到回调列表\n```\n……然后将其传递给 `TrainingPipeline`。\n```python\n>>> pipeline = TrainingPipeline(\n...\ttraining_config=config,\n...\tmodel=model\n... )\n>>> pipeline(\n...\ttrain_data=train_dataset,\n...\teval_data=eval_dataset,\n...\tcallbacks=callbacks # 将回调传递给 TrainingPipeline，大功告成！\n... )\n```\n你可以在包含 `.\u002Fmlruns` 的目录中运行以下命令来可视化指标：\n```bash\n$ mlflow ui\n```\n请参阅详细教程\n\n## 使用 `comet_ml` 监控你的实验 🧪\nPythae 还集成了实验跟踪工具 [comet_ml](https:\u002F\u002Fwww.comet.com\u002Fsignup?utm_source=pythae&utm_medium=partner&utm_campaign=AMS_US_EN_SNUP_Pythae_Comet_Integration)，允许用户存储配置、监控训练过程，并通过图形界面比较不同运行的结果。要使用此功能，你需要：\n- 在你的虚拟环境中安装了 `comet_ml` 包。如果没有，可以通过以下命令安装：\n```\n$ pip install comet_ml\n```\n\n### 创建 `CometCallback`\n在 Pythae 中使用 `comet_ml` 启动实验监控非常简单。用户只需创建一个 `CometCallback` 实例……\n\n```python\n>>> # 创建回调\n>>> from pythae.trainers.training_callbacks import CometCallback\n>>> callbacks = [] # TrainingPipeline 需要一个回调列表\n>>> comet_cb = CometCallback() # 构建回调\n>>> # 设置回调\n>>> comet_cb.setup(\n...\ttraining_config=training_config, # 训练配置\n...\tmodel_config=model_config, # 模型配置\n...\tapi_key="your_comet_api_key", # 指定你的 comet API 密钥\n...\tproject_name="your_comet_project", # 指定你的 comet 项目\n...\t#offline_run=True, # 以离线模式运行\n...\t#offline_directory='my_offline_runs' # 设置用于存储离线运行的目录\n... )\n>>> callbacks.append(comet_cb) # 添加到回调列表\n```\n……然后将其传递给 `TrainingPipeline`。\n```python\n>>> pipeline = TrainingPipeline(\n...\ttraining_config=config,\n...\tmodel=model\n... )\n>>> pipeline(\n...\ttrain_data=train_dataset,\n...\teval_data=eval_dataset,\n...\tcallbacks=callbacks # 将回调传递给 TrainingPipeline，大功告成！\n... 
)\n>>> # 你可以登录 https:\u002F\u002Fcomet.com\u002Fyour_comet_username\u002Fyour_comet_project 来监控你的训练\n```\n请参阅详细教程\n\n## 获取代码 \n\n为了帮助您理解 pythae 的工作原理以及如何使用该库训练您的模型，我们还提供了以下教程：\n\n- [making_your_own_autoencoder.ipynb](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fnotebooks) 向您展示如何将您自己的网络传递给 pythae 中实现的模型 [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmaking_your_own_autoencoder.ipynb)\n\n- [custom_dataset.ipynb](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fnotebooks) 向您展示如何在 pythae 中实现的任何模型中使用自定义数据集 [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fcustom_dataset.ipynb)\n\n- [hf_hub_models_sharing.ipynb](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fnotebooks) 向您展示如何上传和下载 HuggingFace Hub 上的模型 [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fhf_hub_models_sharing.ipynb)\n\n- [wandb_experiment_monitoring.ipynb](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fnotebooks) 向您展示如何使用 `wandb` 监控实验 [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fwandb_experiment_monitoring.ipynb)\n\n- [mlflow_experiment_monitoring.ipynb](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fnotebooks) 向您展示如何使用 `mlflow` 监控实验 [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmlflow_experiment_monitoring.ipynb)\n\n- [comet_experiment_monitoring.ipynb](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fnotebooks) 向您展示如何使用 `comet_ml` 监控实验 [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fblob\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fcomet_experiment_monitoring.ipynb)\n\n- [models_training](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fnotebooks\u002Fmodels_training) 文件夹提供了笔记本，展示了如何训练每个已实现的模型，以及如何使用 `pythae.samplers` 从中采样。\n\n- [scripts](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002Fexamples\u002Fscripts) 文件夹特别提供了一个训练脚本示例，用于在基准数据集（mnist、cifar10、celeba 等）上训练模型。\n\n## 处理问题 🛠️\n\n如果您在运行代码时遇到任何问题，或希望添加新的功能\u002F模型，请在 [github 上提交一个问题](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fissues)。\n\n## 贡献 🚀\n\n您想通过添加一个模型、采样器，或者只是修复一个 bug 
来为这个库做出贡献吗？太棒了！非常感谢！请参阅 [CONTRIBUTING.md](https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Ftree\u002Fmain\u002FCONTRIBUTING.md)，以了解主要的贡献指南。\n\n## 结果\n\n### 重建效果\n首先让我们来看看从评估集抽取的重建样本。\n\n\n|               模型               |                                                                                    MNIST                                                                     |                     CELEBA             \n|:----------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------:|\n| 评估数据                  | ![Eval](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_813aed68332f.png) | ![AE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_6ab51dabb88b.png)  \n| 自编码器                  | ![AE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_93f0902f5cc6.png) | ![AE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_a835b12e1ddf.png)                                                                            |\n| 变分自编码器 | ![VAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_02e97d4b1789.png) |  ![VAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_29ef5e77b1bf.png)\n| Beta-变分自编码器| ![Beta](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_d35ab47df4e1.png) | ![Beta Normal](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_f8b782bd270c.png)\n| 线性流变分自编码器| ![VAE_LinNF](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_1f96a6546469.png) | ![VAE_IAF Normal](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_6f2840e396f4.png)\n| IAF变分自编码器| ![VAE_IAF](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_9f3e5efebaef.png) | ![VAE_IAF Normal](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_b703ac95798d.png)\n| 解耦合Beta-变分自编码器| ![Disentangled Beta](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_315d08d2bd1f.png) | ![Disentangled Beta](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_cbd33c51c90a.png)\n| FactorVAE| ![FactorVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_d1b747a78a2c.png) | ![FactorVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_3a461928bfe1.png)\n| BetaTCVAE| ![BetaTCVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_c1ea3b7cb131.png) | ![BetaTCVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_a88f9743c95d.png)\n| IWAE | ![IWAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_31ef7c11e1ac.png) | ![IWAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_3cad25762b15.png)\n| MSSSIM_VAE | ![MSSSIM VAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_b63cb20596da.png) |  ![MSSSIM 
VAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_32f1bca23e21.png)\n| WAE| ![WAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_850c5b78ae60.png) | ![WAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_f81e7df461b9.png)\n| INFO VAE| ![INFO](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_a9ad0818e721.png) | ![INFO](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_8231b27c8f5b.png)\n| VAMP | ![VAMP](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_0a3b630303bd.png) | ![VAMP](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_70999ec4d268.png) |\n| SVAE | ![SVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_8ba701bc5227.png) | ![SVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_daca350df894.png) |\n| 对抗自编码器          | ![AAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_182f088add5d.png) | ![AAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_c5c602ef439b.png) |\n| VAE_GAN          | ![VAEGAN](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_0da2c22eaba1.png) | ![VAEGAN](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_9df2e52c7e57.png) |\n| VQVAE          | ![VQVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_ad90048621b2.png) | ![VQVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_729eedc6c35a.png) |\n| HVAE             | ![HVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_642e9c0a7058.png) | ![HVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_a22e0aa98c11.png)\n| RAE_L2 | ![RAE L2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_215c7ee2ea82.png)  |  ![RAE L2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_18c78253bc6d.png)\n| RAE_GP | ![RAE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_016dc844c94f.png)  |  ![RAE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_4261db3581c0.png)\n| 黎曼哈密顿变分自编码器 (RHVAE)| ![RHVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_0ee1e99d4d6b.png) | ![RHVAE RHVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_65fb4622559d.png)\n\n----------------------------\n### 生成效果\n\n在这里，我们展示了使用库中实现的各个模型以及不同采样器生成的样本。\n\n|               模型               |                                                                                    MNIST                                                                     |                     CELEBA             \n|:----------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------:|\n| AE  + 高斯混合采样器                  | ![AE 
GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_af2580fa2cb0.png) | ![AE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_81b5c9c13017.png)                                                                            |\n| VAE  + 正态采样器    | ![VAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_41352754f8ed.png) |  ![VAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_4ccfdc1312f4.png)\n| VAE  + 高斯混合采样器    | ![VAE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_a7eec46a82b8.png) |  ![VAE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_25cee0dcec78.png)\n| VAE  + 两阶段VAE采样器    | ![VAE 2阶段](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_398aec9d2ec5.png) |  ![VAE 2阶段](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_405f45e1d88a.png)\n| VAE  + MAF采样器    | ![VAE MAF](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_0cdc50a2e4df.png) |  ![VAE MAF](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_b66f1a25c284.png)\n| Beta-VAE + 正态采样器 | ![Beta 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_021aa84b8e4d.png) | ![Beta 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_76eac2e91260.png)\n| VAE Lin NF + 正态采样器 | ![VAE_LinNF 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_499a7db998fd.png) | ![VAE_LinNF 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_865cfb79fbcd.png)\n| VAE IAF + 正态采样器 | ![VAE_IAF 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_5764a109b6c9.png) | ![VAE IAF 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_18a839d1baa2.png)\n| 解耦合Beta-VAE + 正态采样器 | ![解耦合Beta 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_6f5477838561.png) | ![解耦合Beta 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_c8b58ac4606b.png)\n| FactorVAE + 正态采样器 | ![FactorVAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_1fffb84ad15b.png) | ![FactorVAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_32a44c688cee.png)\n| BetaTCVAE + 正态采样器 | ![BetaTCVAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_92882934d1df.png) | ![BetaTCVAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_899fd14fbb76.png)\n| IWAE + 正态采样器 | ![IWAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_d107e5a143ee.png) | ![IWAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_24bc22b4dad2.png)\n| MSSSIM_VAE  + 正态采样器    | ![MSSSIM_VAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_09a075241a29.png) |  ![MSSSIM_VAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_1b2bf83297e3.png)\n| WAE 
+ 正态采样器| ![WAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_62a755afa275.png) | ![WAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_320db004dceb.png)\n| INFO VAE + 正态采样器| ![INFO 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_8dbe88ca5ef4.png) | ![INFO 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_292197808ae9.png)\n| SVAE + 超球面均匀采样器          | ![SVAE 球体](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_a39447c5b3e5.png) | ![SVAE 球体](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_0891e0c75e67.png) |\n| VAMP + VAMP采样器          | ![VAMP Vamp](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_151a77eebda7.png) | ![VAMP Vamp](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_127c0dbc0445.png) |\n| 对抗自编码器 + 正态采样器          | ![AAE_正态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_99bde3659956.png) | ![AAE_正态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_1d905caab24c.png) |\n| VAEGAN + 正态采样器          | ![VAEGAN_正态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_ad51a3696329.png) | ![VAEGAN_正态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_88bdf5a769a3.png) |\n| VQVAE + MAF采样器          | ![VQVAE_MAF](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_dbe66fe5f825.png) | ![VQVAE_MAF](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_e98970114e26.png) |\n| HVAE + 正态采样器             | ![HVAE 正常](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_dbd584c4e7e0.png) | ![HVAE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_89375d2fdfc7.png)\n| RAE_L2 + 高斯混合采样器 | ![RAE L2 GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_2715968eed16.png)  |  ![RAE L2 GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_78b1645d267b.png)\n| RAE_GP + 高斯混合采样器| ![RAE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_73bedd224c69.png)  |  ![RAE GMM](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_cd767c6c1f50.png)\n| 黎曼哈密顿VAE (RHVAE) + RHVAE采样器| ![RHVAE RHVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_5464505e2249.png) | ![RHVAE RHVAE](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_readme_277a2bd638b4.png)\n\n# 引用\n\n如果您觉得这项工作有用，或在您的研究中使用了它，请考虑引用我们。引用条目保留英文原文，以便文献管理工具正确识别：\n\n```bibtex\n@inproceedings{chadebec2022pythae,\n author = {Chadebec, Cl\\'{e}ment and Vincent, Louis and Allassonniere, Stephanie},\n booktitle = {Advances in Neural Information Processing Systems},\n editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},\n pages = {21575--21589},\n publisher = {Curran Associates, Inc.},\n title = {Pythae: Unifying Generative Autoencoders in Python - A Benchmarking Use Case},\n volume = {35},\n year = {2022}\n}\n```","# Pythae (benchmark_VAE) 快速上手指南\n\nPythae 是一个统一的变分自编码器（VAE）库，集成了多种主流 VAE 模型。它支持在相同的网络架构下进行基准测试和对比实验，允许用户自定义编码器和解码器，并原生支持分布式训练、实验监控（WandB, MLflow, Comet）以及 HuggingFace Hub 模型共享。\n\n## 环境准备\n\n*   **操作系统**：Linux, macOS, Windows\n*   **Python 版本**：3.7, 3.8, 3.9 或更高版本\n*   **核心依赖**：PyTorch (库会自动处理相关依赖)\n*   **可选依赖**：\n    *   分布式训练：需配置支持 NCCL 的 GPU 环境\n    *   实验监控：`wandb`, `mlflow`, 或 `comet_ml`\n\n## 安装步骤\n\n### 方式一：通过 Pip 安装（推荐）\n\n安装最新稳定版：\n```bash\npip install pythae\n```\n\n若需使用国内镜像加速安装：\n```bash\npip install pythae -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：从 GitHub 安装（获取最新特性）\n\n如需体验最新功能（如最新的分布式训练优化），可直接从源码安装：\n\n```bash\npip install git+https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE.git\n```\n\n或者克隆仓库以便运行示例和测试：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE.git\ncd benchmark_VAE\npip install -e .\n```\n\n
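安装完成后，可以先确认包能够正常导入再继续：\n\n```bash\npip show pythae\npython -c "import pythae"\n```\n\n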
## 基本使用\n\nPythae 的核心优势在于统一的接口。以下是最简单的训练流程示例，以标准的 **VAE** 模型为例。\n\n### 1. 导入模块与配置\n首先导入所需的模型类、训练流水线和训练配置类，并设置参数。\n\n```python\nfrom pythae.models import VAE, VAEConfig\nfrom pythae.pipelines import TrainingPipeline\nfrom pythae.trainers import BaseTrainerConfig\n\n# 定义模型配置（MNIST 为单通道 28×28 图像）\nmodel_config = VAEConfig(\n    input_dim=(1, 28, 28),  # 输入数据维度 (Channels, Height, Width)\n    latent_dim=10           # 潜在空间维度\n)\n\n# 初始化模型\nmodel = VAE(model_config)\n\n# 定义训练器配置\ntrainer_config = BaseTrainerConfig(\n    output_dir="my_vae_experiment",\n    per_device_train_batch_size=64,\n    per_device_eval_batch_size=64,\n    learning_rate=1e-3,\n    num_epochs=10\n)\n```\n\n### 2. 准备数据\n训练数据可以是 `torch.Tensor`、`np.array` 或 torch 数据集。这里直接把 MNIST 转换为取值在 [0, 1] 的张量。\n\n```python\nfrom torchvision.datasets import MNIST\n\n# 加载数据集 (以 MNIST 为例)，并归一化为 [0, 1] 范围的浮点张量\ntrain_dataset = MNIST(root=".\u002Fdata", train=True, download=True)\ntrain_data = train_dataset.data.reshape(-1, 1, 28, 28) \u002F 255.\n\neval_dataset = MNIST(root=".\u002Fdata", train=False, download=True)\neval_data = eval_dataset.data.reshape(-1, 1, 28, 28) \u002F 255.\n```\n\n### 3. 启动训练\n创建训练流水线并开始训练。\n\n```python\n# 构建训练流水线\npipeline = TrainingPipeline(\n    training_config=trainer_config,\n    model=model\n)\n\n# 开始训练\npipeline(\n    train_data=train_data,\n    eval_data=eval_data\n)\n```\n\n
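训练完成后，在生成数据之前，可以先用 `predict` 方法快速检查重建质量。该方法在 v0.1.1 版本引入（见下文版本记录），此处为一个最小示意（假设沿用上文的 `model` 与 `eval_data`）：\n\n```python\n# 对前 3 个评估样本进行编码-解码（不计算损失）\nout = model.predict(eval_data[:3])\n\n# 潜在表示与重建结果的形状，潜在维度应与上文配置的 latent_dim=10 一致\nprint(out.embedding.shape)   # torch.Size([3, 10])\nprint(out.recon_x.shape)     # torch.Size([3, 1, 28, 28])\n```\n\n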
生成数据与保存\n训练完成后，可以借助采样器从潜在空间生成样本，或将模型保存到本地与 HuggingFace Hub。\n\n```python\nfrom pythae.samplers import NormalSampler\n\n# 生成样本（用正态采样器从潜在空间采样后解码）\nsampler = NormalSampler(model=model)\ngenerated_data = sampler.sample(num_samples=10)\n\n# 保存到本地\nmodel.save(\"my_saved_vae\")\n\n# 推送到 HuggingFace Hub (需先登录 huggingface-cli login)\n# model.push_to_hub(\"your_username\u002Fmy-vae-model\")\n```\n\n> **提示**：库中已内置了包括 BetaVAE, IWAE, VQVAE, WAE 等在内的 20+ 种模型，只需将上述代码中的 `VAE` 和 `VAEConfig` 替换为对应的模型类（如 `BetaVAE`, `VQVAEConfig` 等）即可无缝切换。","某医疗影像实验室的研究团队正试图通过对比多种变分自编码器（VAE）模型，从有限的肺部 CT 扫描数据中学习更鲁棒的潜在特征，以辅助早期病灶检测。\n\n### 没有 benchmark_VAE 时\n- **代码重复劳动繁重**：团队需为 VAE、β-VAE、VQ-VAE 等不同模型分别寻找并适配独立的开源代码库，每切换一个模型就要重写一遍数据加载和训练循环。\n- **公平对比难以保证**：由于各源码的编码器\u002F解码器架构、超参数设置及随机种子管理不一致，导致实验结果差异可能源于实现细节而非模型本身的优劣，结论缺乏说服力。\n- **实验监控分散**：缺乏统一的接口对接 WandB 或 MLflow，研究人员需手动整理不同脚本产生的日志，难以实时追踪和可视化多组实验的损失曲线与重建效果。\n- **复现与协作成本高**：新成员加入时需花费数天理解杂乱的代码结构，且在不同机器上复现论文结果时，常因环境依赖或缺失模块而失败。\n\n### 使用 benchmark_VAE 后\n- **统一接口快速切换**：借助 benchmark_VAE 标准化的 API，团队仅需修改几行配置即可在同一套自定义的 Encoder-Decoder 架构下训练十几种主流 VAE 模型，开发效率提升数倍。\n- **确保控制变量严谨**：该工具强制所有模型共享相同的网络骨架和训练流程，消除了实现偏差，使团队能确信性能提升真正源自算法改进，显著增强了论文的可信度。\n- **原生集成监控生态**：通过内置插件一键连接 WandB 或 HuggingFace Hub，实验指标自动同步云端，团队成员可实时协作分析生成样本质量，并轻松分享训练好的模型权重。\n- **分布式训练加速迭代**：利用其支持的 PyTorch DDP 功能，团队直接在多卡服务器上并行跑通大规模数据集训练，将原本需要数周的对比实验压缩至几天内完成。\n\nbenchmark_VAE 通过统一实现标准与自动化流程，将研究人员从繁琐的工程泥潭中解放出来，使其能专注于算法创新与业务价值挖掘。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fclementchadebec_benchmark_VAE_aa763c89.png","clementchadebec","Clément Chadebec","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fclementchadebec_bf44edac.jpg",null,"Jasper AI","CChadebec","https:\u002F\u002Fclementchadebec.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fclementchadebec",[81],{"name":82,"color":83,"percentage":84},"Python","#3572A5",100,1989,178,"2026-04-14T07:24:27","Apache-2.0","未说明","未说明（支持分布式训练 DDP，暗示可使用多 GPU 加速）",{"notes":92,"python":93,"dependencies":94},"该库名为 pythae，专注于统一实现多种变分自编码器（VAE）模型以进行基准测试。支持使用 PyTorch DDP 进行分布式训练。允许用户自定义编码器和解码器架构。集成 wandb、mlflow 和 comet-ml 用于实验监控，并支持通过 HuggingFace Hub 分享和加载模型。","3.7, 3.8, 3.9+",[95,96,97,98,99],"pytorch","wandb","mlflow","comet-ml","huggingface_hub",[15,14,101],"其他",[103,104,105,106,107,108,95,109,110,111,112,113,114,115,116],"vae","benchmarking","beta-vae","comparison","normalizing-flows","pixel-cnn","reproducibility","reproducible-research","vae-gan","vae-implementation","vae-pytorch","variational-autoencoder","vq-vae","wasserstein-autoencoder","2026-03-27T02:49:30.150509","2026-04-19T15:38:51.985362",[120,125,130,135,140,145],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},42438,"RHVAE 模型中的哈密顿量计算公式与论文描述不一致，代码中多了联合概率项和 G 逆矩阵项，这两者等价吗？","是的，这些方程在理论上是等价的。代码中更新的度量标准（metric）是在验证阶段需要的。虽然形式上看起来不同，但这是为了适应具体的计算实现。如果在处理 3D MRI 数据时遇到内存问题，可以在每个维度上将输入数据下采样 2 倍，但理论上 VAE 可以处理任意维度的数据。","https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fissues\u002F10",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},42439,"重构损失（Reconstruction Loss）应该使用求和（Sum）还是均值（Mean）？使用求和会导致损失值过大掩盖其他损失项怎么办？","这通常是一个设计选择。对于大多数情况，使用均值（Mean）重构损失往往表现更好，可以避免损失值过大导致训练不平衡。维护者确认用户可以提交 PR 来让用户在平均和求和之间进行选择。特别注意：对于 VQVAE 模型的承诺损失（commitment loss），根据论文回顾，确实应该使用求和（Sum）而不是均值。","https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fissues\u002F124",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},42440,"在使用 PixelCNNSampler 进行采样时遇到 'Tensor' object is not callable 错误，如何解决？","该错误通常发生在使用 `QuantizerEMA`（即在 VQVAE 中使用 EMA）时，因为此时 embeddings 是 Tensor 对象而不是可调用的 `torch.nn.Embedding` 实例。解决方法是根据 embeddings 的类型进行判断：如果是 Tensor，则使用索引操作 `embeddings[...]`；如果是 `torch.nn.Embedding` 
实例，则使用调用操作 `embeddings(...)`。参考代码如下：\nif isinstance(self.model.quantizer.embeddings, torch.Tensor):\n    z_quant = self.model.quantizer.embeddings[z.reshape(z.shape[0], -1).long()]\nelse:\n    z_quant = self.model.quantizer.embeddings(z.reshape(z.shape[0], -1).long())","https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fissues\u002F147",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},42441,"运行 RHVAE 教程时在训练过程中出现 NaN 错误（ArithmeticError: NaN detected in train loss），是什么原因导致的？","这不是代码本身的 Bug，而是由于学习率过大或批次大小（batch size）不合适导致的数值不稳定问题。梯度更新使参数偏离过多从而产生不稳定性。解决方案是降低学习率（例如改为 1e-6）或减小批次大小（例如改为 16）。调整这两个参数中的任意一个通常都能消除 NaN 错误并顺利完成训练。","https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fissues\u002F132",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},42442,"如何在 benchmark_VAE 框架中进行多模态数据训练？如何自定义重构损失函数以结合不同模态的损失？","虽然框架主要针对单模态数据集设计，但用户可以通过自定义重构损失函数来实现多模态训练。基本思路是分别计算每个模态的损失，然后将它们组合成最终的总损失。具体实现需要继承或修改现有的损失计算类，针对每个模态定义特定的重建误差计算逻辑，并在训练循环中将各模态损失加权求和。建议参考框架中现有的损失函数实现方式进行扩展。","https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fissues\u002F64",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},42443,"项目是否支持将训练好的模型直接推送到 Hugging Face Hub 进行分享和版本管理？","是的，项目已经集成了 Hugging Face Hub 功能。用户现在可以使用 `model.push_to_hub(\"username\u002Fmy_vae\")` 命令直接将模型推送到 Hub。该集成支持版本控制、提交历史记录、差异比较，以及自动添加任务、语言、指标等元数据以提高模型的可发现性。此外，还支持 TensorBoard 可视化、排行榜等功能。相关功能已通过 PR 合并到主分支。","https:\u002F\u002Fgithub.com\u002Fclementchadebec\u002Fbenchmark_VAE\u002Fissues\u002F24",[151,156,161,166,171,176,181,186,191,196,201],{"id":152,"version":153,"summary_zh":154,"released_at":155},334510,"v0.1.2","**新功能**\n- 迁移到 `pydantic=2.*` (#105)\n- 感谢 @fbosshard 的贡献，支持自定义 collate 函数 (#83)\n- 感谢 @liamchalcroft 的贡献，在 `BaseTrainer` 中添加自动混合精度功能 (#90)\n\n**次要改动**\n- 统一所有基于 VAE 的模型实现的高斯似然函数 (#104)\n- 感谢 @soumickmj 的贡献，更新了 `RHVAE` 中的 `predict` 方法 (#80)\n- 感谢 @soumickmj 的贡献，在 `SVAE` 模型中添加裁剪以提高稳定性 (#79)","2023-09-06T15:43:22",{"id":157,"version":158,"summary_zh":159,"released_at":160},334511,"v0.1.1","**新功能**\n- 添加了训练回调 `TrainHistoryCallback`，用于在训练过程中存储训练指标，详见 #71，由 @VolodyaCO 提供。\n```python \nfrom pythae.trainers.training_callbacks import TrainHistoryCallback\n\n>>> train_history = TrainHistoryCallback()\n>>> callbacks = [train_history]\n>>> pipeline(\n...    train_data=train_dataset,\n...    eval_data=eval_dataset,\n...    callbacks=callbacks\n... )\n>>> train_history.history\n... {\n...    'train_loss': [58.51896972363562, 42.15931177749049, 40.583426756017346],\n...    'eval_loss': [43.39408182034827, 41.45351771943888, 39.77221281209569]\n... }\n```\n- 添加了 `predict` 方法，可在不进行损失计算的情况下对输入数据进行编码和解码，详见 #75，由 @soumickmj 和 @ravih18 提供。\n```python\n>>> out = model.predict(eval_dataset[:3])\n>>> out.embedding.shape, out.recon_x.shape\n... (torch.Size([3, 16]), torch.Size([3, 1, 28, 28]))\n```\n- 添加了 `embed` 方法，用于返回输入数据的潜在表示，详见 #76，由 @tbouchik 提供。\n```python\n>>> out = model.embed(eval_dataset[:3].to(device))\n>>> out.shape\n... 
torch.Size([3, 16])\n```","2023-02-23T16:06:52",{"id":162,"version":163,"summary_zh":164,"released_at":165},334512,"v0.1.0","**新功能** :rocket:  \n- `Pythae` 现在支持分布式训练（基于 PyTorch 的 [DDP](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fnotes\u002Fddp.html) 构建）。启动分布式训练可以通过一个训练脚本实现，其中所有分布式环境变量将按如下方式传递给 `BaseTrainerConfig` 实例：\n\n```python\ntraining_config = BaseTrainerConfig(\n     num_epochs=10,\n     learning_rate=1e-3,\n     per_device_train_batch_size=64,\n     per_device_eval_batch_size=64,\n     dist_backend=\"nccl\",  # 分布式后端\n     world_size=8,  # 使用的 GPU 数量（节点数 × 每节点 GPU 数）\n     rank=0,  # 进程\u002FGPU ID\n     local_rank=1,  # 节点 ID\n     master_addr=\"localhost\",  # 主节点地址\n     master_port=\"12345\"  # 主节点端口\n )\n```\n\n随后可以使用如 `srun` 之类的启动器来运行该脚本。此模块已在单节点多 GPU 和多节点多 GPU 环境中进行了测试。\n\n- 感谢 @ravih18，`MSSSIM_VAE` 现在支持 3D 图像 :rocket:  \n\n**重大变更**\n- 自定义 `optimizers` 和 `schedulers` 的选择与定义方式已更改。不再需要先构建 `optimizer`（或 `scheduler`），再将其传递给 `Trainer`。自 v0.1.0 起，`optimizers` 和 `schedulers` 的选择及参数可以直接传递给 `TrainerConfig`。具体变更如下：\n\n*自 v0.1.0 起*\n```python\nmy_model = VAE(model_config=model_config)\n# 直接在 Trainer 配置中指定实例和参数\ntraining_config = BaseTrainerConfig(\n    ...,\n    optimizer_cls=\"AdamW\",\n    optimizer_params={\"betas\": (0.91, 0.995)},\n    scheduler_cls=\"MultiStepLR\",\n    scheduler_params={\"milestones\": [10, 20, 30], \"gamma\": 10**(-1\u002F5)}\n)\ntrainer = BaseTrainer(\n    model=model,\n    train_dataset=train_dataset,\n    eval_dataset=eval_dataset,\n    training_config=training_config\n)\n# 启动训练\ntrainer.train()\n```\n\n*v0.1.0 之前*\n```python\nmy_model = VAE(model_config=model_config)\ntraining_config = BaseTrainerConfig(...)\n### Optimizer\noptimizer = torch.optim.AdamW(model.parameters(), lr=training_config.learning_rate, betas=(0.91, 0.995))\n### Scheduler\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20, 30], gamma=10**(-1\u002F5))\n# 将实例传递给 Trainer\ntrainer = BaseTrainer(\n    model=model,\n    train_dataset=train_dataset,\n    eval_dataset=eval_dataset,\n    training_config=training_config,\n    optimizer=optimizer,\n    scheduler=scheduler\n)\n# 启动训练\ntrainer.train()\n```\n\n- `batch_size` 键已从 `Trainer` 配置中移除。取而代之的是 `per_device_train_batch_size` 和 `per_device_eval_batch_size` 两个键，用于指定每个设备上的批次大小。请注意，如果您处于分布式环境中，例如使用 4 张 GPU，并设置 `per_device_eval_batch_size=64`，这等同于在单张 GPU 上以 4×64 的批次大小进行训练。\n\n**小改动**\n- 增加了指定…的功能","2023-02-06T16:58:35",{"id":167,"version":168,"summary_zh":169,"released_at":170},334513,"v0.0.9","**新功能**\n- 通过 `CometCallback` 训练回调函数集成 `comet_ml`，进一步实现 #55 的功能\n\n**已修复的 bug :bug:**\n- 修复 `pickle5` 与 `python>=3.8` 的兼容性问题\n- 更新 [`conda-forge` feedstock](https:\u002F\u002Fgithub.com\u002Fconda-forge\u002Fpythae-feedstock)，添加正确的依赖项（https:\u002F\u002Fgithub.com\u002Fconda-forge\u002Fpythae-feedstock\u002Fpull\u002F11）","2022-10-19T10:25:57",{"id":172,"version":173,"summary_zh":174,"released_at":175},334514,"v.0.0.8","**新功能**：\n- 在 `TrainingCallbacks` 中新增了 `MLFlowCallback`，进一步支持 #44 问题。\n- 允许将继承自 `torch.utils.data.Dataset` 的自定义 `Dataset` 作为输入传递到 `training_pipeline` 中，进一步支持 #35 问题。\n```python\ndef __call__(\n        self,\n        train_data: Union[np.ndarray, torch.Tensor, torch.utils.data.Dataset],\n        eval_data: Union[np.ndarray, torch.Tensor, torch.utils.data.Dataset] = None,\n        callbacks: List[TrainingCallback] = None,\n    ):\n```\n- 实现了 Multiply\u002FPartially\u002FCombination IWAE，即 `MIWAE`、`PIWAE` 和 `CIWAE`（https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04537）。\n\n**次要改动**：\n- 统一 `FactorVAE` 
中的数据处理方式与其他模型一致。（一半的批次用于重建，另一半用于因子表示）\n- 修改了 `trainers` 中的模型合理性检查方法（在检查时使用数据加载器而非数据集）。\n- 在 `CoupledOptimizerTrainer` 中添加了编码器和解码器所需的损失函数，并更新了相关测试。","2022-09-07T17:03:02",{"id":177,"version":178,"summary_zh":179,"released_at":180},334515,"v.0.0.7","**新功能**\n- 根据 https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.06033 的方法，新增了 `PoincareVAE` 模型和 `PoincareDiskSampler` 实现。\n\n**次要改动**\n- 新增 VAE LSTM 示例\n- 新增可复现性报告","2022-09-03T09:18:20",{"id":182,"version":183,"summary_zh":184,"released_at":185},334516,"v.0.0.6","**新功能**\n- 添加了 `interpolate` 方法，可在任意 `pythae.models` 的潜在空间中基于给定输入进行线性插值（进一步支持 #34）。\n- 添加了 `reconstruct` 方法，可使用任意 `pythae.models` 轻松对给定输入数据进行重建。","2022-07-22T09:04:51",{"id":187,"version":188,"summary_zh":189,"released_at":190},334517,"v0.0.5","**Bug :bug:**\n修复 Hugging Face Hub 模型卡片","2022-07-07T17:58:29",{"id":192,"version":193,"summary_zh":194,"released_at":195},334518,"v.0.0.3","**变更**\n- 将库的最低 Python 版本提升至 `python3.7+`\n- 不再支持 `python3.6`","2022-07-05T08:08:19",{"id":197,"version":198,"summary_zh":199,"released_at":200},334519,"v.0.0.2","**新功能**\n- 添加 `push_to_hf_hub` 方法，允许将 `pythae.models` 实例上传到 Hugging Face Hub\n- 添加 `load_from_hf_hub` 方法，允许从 Hub 下载预训练模型\n- 添加教程（Hugging Face Hub 的保存与加载以及 `wandb` 回调）","2022-07-04T17:51:13",{"id":202,"version":203,"summary_zh":204,"released_at":205},334520,"v.0.0.1","First release on pypi","2022-06-14T10:05:13"]