[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-janosh--awesome-normalizing-flows":3,"tool-janosh--awesome-normalizing-flows":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",146793,2,"2026-04-08T23:32:35",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":89,"difficulty_score":90,"env_os":91,"env_gpu":92,"env_ram":92,"env_deps":93,"category_tags":96,"github_topics":97,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":107,"updated_at":108,"faqs":109,"releases":145},5790,"janosh\u002Fawesome-normalizing-flows","awesome-normalizing-flows","Awesome resources on normalizing flows.","awesome-normalizing-flows 是一个专注于“归一化流”（Normalizing Flows）技术的精选资源合集。归一化流是一种强大的统计建模工具，它通过一系列可学习的平滑变换，将简单的概率分布转化为复杂的真实数据分布，从而在保持计算高效的同时实现高精度的密度估计与样本生成。\n\n该资源库主要解决了研究人员和开发者在面对这一前沿领域时，难以系统性获取高质量论文、代码实现及教程的痛点。它精心整理了包括 60 多篇核心学术论文、多个主流框架（如 PyTorch、TensorFlow、JAX）的代码包、开源项目复现以及视频讲解和博客文章，覆盖了从理论基础到分子动力学模拟等实际应用场景的全方位内容。\n\n无论是希望深入理解生成模型原理的 AI 
研究人员，还是急需寻找可靠代码库进行项目开发的工程师，awesome-normalizing-flows 都是极佳的入门指南与案头参考。其独特亮点在于不仅提供了丰富的文献列表，还特别关注了如 Boltzmann Generators 等最新前沿进展，并持续维护链接的有效性，帮助用户高效追踪该领域的技术演进，避免在碎片化信息中迷失方向。","\u003Ch1 align=\"center\">\n  Awesome Normalizing Flows\n\u003C\u002Fh1>\n\n\u003Ch4 align=\"center\">\n\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n[![Pull Requests Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPull%20Requests-welcome-brightgreen.svg?logo=github)](#-contributing)\n[![Link Check](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Factions\u002Fworkflows\u002Flink-check.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Factions\u002Fworkflows\u002Flink-check.yml)\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002F227366838.svg)](https:\u002F\u002Fzenodo.org\u002Fbadge\u002Flatestdoi\u002F227366838)\n\n\u003C\u002Fh4>\n\nA list of awesome resources for understanding and applying normalizing flows (NF): a relatively simple yet powerful new tool in statistics for constructing expressive probability distributions from simple base distributions using a chain (flow) of trainable smooth bijective transformations (diffeomorphisms).\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Fnormalizing-flow\">\n   \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fnormalizing-flow\u002Fnormalizing-flow-white.svg\">\n      \u003Cimg alt=\"Diagram of the slow (sequential) forward pass of a Masked Autoregressive Flow (MAF) layer\" 
src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fnormalizing-flow\u002Fnormalizing-flow.svg\">\n   \u003C\u002Fpicture>\n\u003C\u002Fa>\n\n\u003Csup>_Figure inspired by [Lilian Weng](https:\u002F\u002Flilianweng.github.io\u002Flil-log\u002F2018\u002F10\u002F13\u002Fflow-based-deep-generative-models). Created in [CeTZ](https:\u002F\u002Fcetz-package.github.io). [View source](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Fblob\u002Fmain\u002Fassets\u002Fnormalizing-flow\u002Fnormalizing-flow.typ)._\u003C\u002Fsup>\n\n\u003Cbr>\n\n## \u003Cimg src=\"assets\u002Ftoc.svg\" alt=\"Contents\" height=\"18px\"> &nbsp;Table of Contents\n\n1. [Table of Contents](#-table-of-contents)\n1. [📝 Publications (60)](#-publications-60)\n1. [🛠️ Applications (8)](#️-applications-8)\n1. [📺 Videos (8)](#-videos-8)\n1. [📦 Packages (14)](#-packages-14)\n   1. [PyTorch Packages](#-pytorch-packages)\n   1. [TensorFlow Packages](#-tensorflow-packages)\n   1. [JAX Packages](#-jax-packages)\n   1. [Julia Packages](#-julia-packages)\n1. [🧑‍💻 Repos (18)](#-repos-18)\n   1. [PyTorch Repos](#-pytorch-repos)\n   1. [TensorFlow Repos](#-tensorflow-repos)\n   1. [JAX Repos](#-jax-repos)\n   1. [Other Repos](#-other-repos)\n1. [🌐 Blog Posts (5)](#-blog-posts-5)\n1. [🚧 Contributing](#-contributing)\n\n\u003Cbr>\n\n## 📝 Publications \u003Csmall>(60)\u003C\u002Fsmall>\n\n1. 2024-06-20 - [Transferable Boltzmann Generators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14426) by Klein, Noé\u003Cbr>\n   Boltzmann Generators, a machine learning method, generate equilibrium samples of molecular systems by learning a transformation from a simple prior distribution to the target Boltzmann distribution via normalizing flows. Recently, flow matching has been used to train Boltzmann Generators for small systems in Cartesian coordinates. 
This work extends this approach by proposing a framework for transferable Boltzmann Generators that can predict Boltzmann distributions for unseen molecules without retraining. This allows for approximate sampling and efficient reweighting to the target distribution. The framework is tested on dipeptides, demonstrating efficient generalization to new systems and improved efficiency compared to single-system training. [[Code](https:\u002F\u002Fosf.io\u002Fn8vz3\u002F?view_only=1052300a21bd43c08f700016728aa96e)]\n\n1. 2023-01-03 - [FInC Flow: Fast and Invertible k×k Convolutions for Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09266) by Kallapa, Nagar et al.\u003Cbr>\n   Proposes a k×k convolutional layer and Deep Normalizing Flow architecture which (i) has a fast parallel inversion algorithm with running time O(nk^2) (n is the height and width of the input image and k is the kernel size), (ii) masks the minimal amount of learnable parameters in a layer, and (iii) gives forward pass and sampling times comparable to other k×k convolution-based models on real-world benchmarks. They provide a GPU implementation of the proposed parallel sampling algorithm using their invertible convolutions. [[Code](https:\u002F\u002Fgithub.com\u002Faditya-v-kallappa\u002FFInCFlow)]\n\n1. 2022-10-15 - [Invertible Monotone Operators for Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.08176) by Ahn, Kim et al.\u003Cbr>\n   This work proposes the monotone formulation to overcome the issue of the Lipschitz constants in previous ResNet-based normalizing flows using monotone operators and provides an in-depth theoretical analysis. Furthermore, this work constructs an activation function called Concatenated Pila (CPila) to improve gradient flow. The resulting model, Monotone Flows, exhibits excellent performance on multiple density estimation benchmarks (MNIST, CIFAR-10, ImageNet32, ImageNet64). 
[[Code](https:\u002F\u002Fgithub.com\u002Fmlvlab\u002FMonotoneFlows)]\n\n1. 2022-08-18 - [ManiFlow: Implicitly Representing Manifolds with Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.08932) by Postels, Danelljan et al.\u003Cbr>\n   The invertibility constraint of NFs imposes limitations on data distributions that reside on lower dimensional manifolds embedded in higher dimensional space. This is often bypassed by adding noise to the data, which impacts generated sample quality. This work generates samples from the original data distribution given full knowledge of the perturbed distribution and noise model. They establish that NFs trained on perturbed data implicitly represent the manifold in regions of maximum likelihood, then propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.\n\n1. 2022-06-03 - [Graphical Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02548) by Wehenkel, Louppe\u003Cbr>\n   This work revisits coupling and autoregressive transformations as probabilistic graphical models, showing they reduce to Bayesian networks with a pre-defined topology. From this new perspective, the authors propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure. This model provides a promising way to inject domain knowledge into normalizing flows while preserving both the interpretability of Bayesian networks and the representation capacity of normalizing flows. [[Code](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FGraphical-Normalizing-Flows)]\n\n1. 
2022-05-16 - [Multi-scale Attention Flow for Probabilistic Time Series Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.07493) by Feng, Xu et al.\u003Cbr>\n   Proposes a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF), which integrates multi-scale attention and relative position information; the multivariate data distribution is represented by a conditioned normalizing flow.\n\n1. 2022-03-02 - [Adaptive Monte Carlo augmented with normalizing flows](https:\u002F\u002Fdoi.org\u002F10.1073\u002Fpnas.2109420119) by Gabrié, Rotskoff et al.\u003Cbr>\n   Markov Chain Monte Carlo (MCMC) algorithms struggle with sampling from high-dimensional, multimodal distributions, requiring extensive computational effort or specialized importance sampling strategies. To address this, an adaptive MCMC approach is proposed, combining local updates with nonlocal transitions via normalizing flows. This method blends standard transition kernels with generative model moves, adapting the generative model using generated data to improve sampling efficiency. Theoretical analysis and numerical experiments demonstrate the algorithm's ability to equilibrate quickly between metastable modes, sampling effectively across large free energy barriers and achieving significant accelerations over traditional MCMC methods. [[Code](https:\u002F\u002Fzenodo.org\u002Frecords\u002F4783701#.Yfv53urMJD8)]\n\n1. 2022-01-14 - [E(n) Equivariant Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.09016) by Satorras, Hoogeboom et al.\u003Cbr>\n   Introduces equivariant graph neural networks into the normalizing flow framework, which combine to give invertible equivariant functions. Demonstrates their flow beats prior equivariant models and allows sampling of molecular configurations with positions, atom types and charges.\n\n1. 
2021-07-16 - [Efficient Bayesian Sampling Using Normalizing Flows to Assist Markov Chain Monte Carlo Methods](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.08001) by Gabrié, Rotskoff et al.\u003Cbr>\n   Normalizing flows have potential in Bayesian statistics as a complementary or alternative method to MCMC for sampling posteriors. However, their training via reverse KL divergence may be inadequate for complex posteriors. This research proposes a new training approach utilizing direct KL divergence, which involves augmenting a local MCMC algorithm with a normalizing flow to enhance mixing rate and utilizing the resulting samples to train the flow. This method requires minimal prior knowledge of the posterior and can be applied for model validation and evidence estimation, offering a promising strategy for efficient posterior sampling.\n\n1. 2021-07-03 - [CInC Flow: Characterizable Invertible 3x3 Convolution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01358) by Nagar, Dufraisse et al.\u003Cbr>\n   Seeks to improve expensive convolutions. They investigate under which conditions (e.g. padding) 3x3 convolutions are invertible and saw successful speedups. Furthermore, they developed a more expressive, invertible _Quad coupling_ layer. [[Code](https:\u002F\u002Fgithub.com\u002FNaagar\u002FNormalizing_Flow_3x3_inv)]\n\n1. 2021-04-14 - [Orthogonalizing Convolutional Layers with the Cayley Transform](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.07167) by Trockman, Kolter\u003Cbr>\n   Parametrizes the multichannel convolution to be orthogonal via the Cayley transform (skew-symmetric convolutions in the Fourier domain). This enables the inverse to be computed efficiently. [[Code](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Forthogonal-convolutions)]\n\n1. 
2021-04-14 - [Improving Normalizing Flows via Better Orthogonal Parameterizations](https:\u002F\u002Finvertibleworkshop.github.io\u002FINNF_2019\u002Faccepted_papers\u002Fpdfs\u002FINNF_2019_paper_30.pdf) by Goliński, Lezcano-Casado et al.\u003Cbr>\n   Parametrizes the 1x1 convolution via the exponential map and the Cayley map. They demonstrate an improved optimization for the Sylvester normalizing flows.\n\n1. 2020-09-28 - [Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.06103) by Rasul, Sheikh et al.\u003Cbr>\n   Models the multi-variate temporal dynamics of time series via an autoregressive deep learning model, where the data distribution is represented by a conditioned normalizing flow. [[OpenReview.net](https:\u002F\u002Fopenreview.net\u002Fforum?id=WiGQBFuVRv)] [[Code](https:\u002F\u002Fgithub.com\u002Fzalandoresearch\u002Fpytorch-ts)]\n\n1. 2020-09-21 - [Haar Wavelet based Block Autoregressive Flows for Trajectories](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.09878) by Bhattacharyya, Straehle et al.\u003Cbr>\n   Introduces a Haar wavelet-based block autoregressive model.\n\n1. 2020-07-15 - [AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.07435) by Dolatabadi, Erfani et al.\u003Cbr>\n   An adversarial attack method on image classifiers that uses normalizing flows. [[Code](https:\u002F\u002Fgithub.com\u002Fhmdolatabadi\u002FAdvFlow)]\n\n1. 2020-07-06 - [SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.02731) by Nielsen, Jaini et al.\u003Cbr>\n   They present a generalized framework that encompasses both Flows (deterministic maps) and VAEs (stochastic maps). By seeing deterministic maps `x = f(z)` as limiting cases of stochastic maps `x ~ p(x|z)`, the ELBO is reinterpreted as a change of variables formula for the stochastic maps. 
Moreover, they present a few examples of surjective layers using stochastic maps, which can be composed together with flow layers. [[Video](https:\u002F\u002Fyoutu.be\u002FbXp8fk4MRXQ)] [[Code](https:\u002F\u002Fgithub.com\u002Fdidriknielsen\u002Fsurvae_flows)]\n\n1. 2020-06-15 - [Why Normalizing Flows Fail to Detect Out-of-Distribution Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08545) by Kirichenko, Izmailov et al.\u003Cbr>\n   This work studies how traditional normalizing flow models can fail to detect out-of-distribution data. They offer a solution to combat this issue by modifying the coupling layers. [[Tweet](https:\u002F\u002Ftwitter.com\u002Fpolkirichenko\u002Fstatus\u002F1272715634544119809)] [[Code](https:\u002F\u002Fgithub.com\u002FPolinaKirichenko\u002Fflows_ood)]\n\n1. 2020-06-03 - [Equivariant Flows: exact likelihood generative learning for symmetric densities](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02425) by Köhler, Klein et al.\u003Cbr>\n   Shows that distributions generated by equivariant NFs faithfully reproduce symmetries in the underlying density. Proposes building blocks for flows which preserve typical symmetries in physical\u002Fchemical many-body systems. Shows that symmetry-preserving flows can provide better generalization and sampling efficiency.\n\n1. 2020-06-02 - [The Convolution Exponential and Generalized Sylvester Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.01910) by Hoogeboom, Satorras et al.\u003Cbr>\n   Introduces exponential convolution to add the spatial dependencies in linear layers as an improvement of the 1x1 convolutions. It uses matrix exponentials to create cheap and invertible layers. They also use this new architecture to create _convolutional Sylvester flows_ and _graph convolutional exponentials_. [[Code](https:\u002F\u002Fgithub.com\u002Fehoogeboom\u002Fconvolution_exponential_and_sylvester)]\n\n1. 
2020-05-11 - [iUNets: Fully invertible U-Nets with Learnable Up- and Downsampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.05220) by Etmann, Ke et al.\u003Cbr>\n   Extends the classical UNet to be fully invertible by enabling invertible, orthogonal upsampling and downsampling layers. It is rather efficient, which should enable stable training of deeper and larger networks.\n\n1. 2020-04-08 - [Normalizing Flows with Multi-Scale Autoregressive Priors](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.03891) by Mahajan, Bhattacharyya et al.\u003Cbr>\n   Improves the representational power of flow-based models by introducing channel-wise dependencies in their latent space through multi-scale autoregressive priors (mAR). [[Code](https:\u002F\u002Fgithub.com\u002Fvisinf\u002Fmar-scf)]\n\n1. 2020-03-31 - [Flows for simultaneous manifold learning and density estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.13913) by Brehmer, Cranmer\u003Cbr>\n   Normalizing flows that learn the data manifold and probability density function on that manifold. [[Tweet](https:\u002F\u002Ftwitter.com\u002Fkylecranmer\u002Fstatus\u002F1250129080395223040)] [[Code](https:\u002F\u002Fgithub.com\u002Fjohannbrehmer\u002Fmanifold-flow)]\n\n1. 2020-03-04 - [Gaussianization Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.01941) by Meng, Song et al.\u003Cbr>\n   Uses a repeated composition of trainable kernel layers and orthogonal transformations. Very competitive versus some of the SOTA like Real-NVP, Glow and FFJORD. [[Code](https:\u002F\u002Fgithub.com\u002Fchenlin9\u002FGaussianization_Flows)]\n\n1. 2020-02-27 - [Gradient Boosted Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.11896) by Giaquinto, Banerjee\u003Cbr>\n   Augment traditional normalizing flows with gradient boosting. They show that training multiple models can achieve good results and it's not necessary to have more complex distributions. 
[[Code](https:\u002F\u002Fgithub.com\u002Frobert-giaquinto\u002Fgradient-boosted-normalizing-flows)]\n\n1. 2020-02-24 - [Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.10516) by Deng, Chang et al.\u003Cbr>\n   They propose a normalizing flow using differential deformation of the Wiener process. Applied to time series. [[Tweet](https:\u002F\u002Ftwitter.com\u002Fr_giaquinto\u002Fstatus\u002F1309648804824723464)]\n\n1. 2020-02-21 - [Stochastic Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.09547) by Hodgkinson, Heide et al.\u003Cbr>\n   Name clash for a very different technique from the SNF below: an extension of continuous normalizing flows using stochastic differential equations (SDE). Treats Brownian motion in the SDE as a latent variable and approximates it by a flow. Aims to enable efficient training of neural SDEs which can be used for constructing efficient Markov chains.\n\n1. 2020-02-16 - [Stochastic Normalizing Flows (SNF)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.06707) by Wu, Köhler et al.\u003Cbr>\n   Introduces SNF, an arbitrary sequence of deterministic invertible functions (the flow) and stochastic processes such as MCMC or Langevin Dynamics. The aim is to increase expressiveness of the chosen deterministic invertible function, while the trainable flow improves sampling efficiency over pure MCMC. [[Tweet](https:\u002F\u002Ftwitter.com\u002FFrankNoeBerlin\u002Fstatus\u002F1229734899034329103)]\n\n1. 2020-01-17 - [Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.06448) by Ardizzone, Mackowiak et al.\u003Cbr>\n   They introduce a class of conditional normalizing flows with an information bottleneck objective. [[Code](https:\u002F\u002Fgithub.com\u002Fvislearn\u002FIB-INN)]\n\n1. 
2020-01-15 - [Invertible Generative Modeling using Linear Rational Splines](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.05168) by Dolatabadi, Erfani et al.\u003Cbr>\n   A successor to the Neural spline flows which features an easy-to-compute inverse.\n\n1. 2019-12-05 - [Normalizing Flows for Probabilistic Modeling and Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.02762) by Papamakarios, Nalisnick et al.\u003Cbr>\n   A thorough and very readable review article by some of the guys at DeepMind involved in the development of flows. Highly recommended.\n\n1. 2019-09-14 - [Unconstrained Monotonic Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.05164) by Wehenkel, Louppe\u003Cbr>\n   UMNN relaxes the constraints on weights and activation functions of monotonic neural networks by setting the derivative of the transformation as the output of an unconstrained neural network. The transformation itself is computed by numerical integration (Clenshaw-Curtis quadrature) of the derivative. [[Code](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FUMNN)]\n\n1. 2019-08-25 - [Normalizing Flows: An Introduction and Review of Current Methods](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.09257) by Kobyzev, Prince et al.\u003Cbr>\n   Another very thorough and very readable review article going through the basics of NFs as well as some of the state-of-the-art. Also highly recommended.\n\n1. 2019-07-21 - [Noise Regularization for Conditional Density Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.08982) by Rothfuss, Ferreira et al.\u003Cbr>\n   Normalizing flows for conditional density estimation. This paper proposes noise regularization to reduce overfitting. [[Blog](https:\u002F\u002Fsiboehm.com\u002Farticles\u002F19\u002Fnormalizing-flow-network)]\n\n1. 
2019-07-18 - [MintNet: Building Invertible Neural Networks with Masked Convolutions](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.07945) by Song, Meng et al.\u003Cbr>\n   Creates an autoregressive-like coupling layer via masked convolutions which is fast and efficient to evaluate. [[Code](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fmintnet)]\n\n1. 2019-07-18 - [Densely connected normalizing flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04627) by Grcić, Grubišić et al.\u003Cbr>\n   Creates a nested coupling structure to add more expressivity to standard coupling layers. They also utilize slicing\u002Ffactorization for dimensionality reduction and Nystromer for the coupling layer conditioning network. They achieved SOTA results for normalizing flow models. [[Code](https:\u002F\u002Fgithub.com\u002Fmatejgrcic\u002FDenseFlow)]\n\n1. 2019-06-15 - [Invertible Convolutional Flow](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fb1f62fa99de9f27a048344d55c5ef7a6-Abstract.html) by Karami, Schuurmans et al.\u003Cbr>\n   Introduces convolutional layers that are circular and symmetric. The layer is invertible and cheap to evaluate. They also showcase how one can design non-linear elementwise bijectors that induce special properties via constraining the loss function. [[Code](https:\u002F\u002Fgithub.com\u002FKarami-m\u002FInvertible-Convolutional-Flow)]\n\n1. 2019-06-15 - [Invertible Convolutional Networks](https:\u002F\u002Finvertibleworkshop.github.io\u002FINNF_2019\u002Faccepted_papers\u002Fpdfs\u002FINNF_2019_paper_26.pdf) by Finzi, Izmailov et al.\u003Cbr>\n   Showcases how standard convolutional layers can be made invertible via Fourier transformations. They also introduce new activations which might be better suited to normalizing flows, e.g. SneakyReLU.\n\n1. 
2019-06-10 - [Neural Spline Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.04032) by Durkan, Bekasov et al.\u003Cbr>\n   Uses monotonic rational splines as a coupling layer. This is currently among the state of the art.\n\n1. 2019-05-30 - [Graph Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.13177) by Liu, Kumar et al.\u003Cbr>\n   A new, reversible graph network for prediction and generation. They perform similarly to message passing neural networks on supervised tasks, but at significantly reduced memory use, allowing them to scale to larger graphs. Combined with a novel graph auto-encoder for unsupervised learning, graph normalizing flows are a generative model for graph structures.\n\n1. 2019-05-24 - [Fast Flow Reconstruction via Robust Invertible n x n Convolution](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.10170) by Truong, Luu et al.\u003Cbr>\n   Seeks to overcome the limitation of 1x1 convolutions and proposes invertible nxn convolutions via a clever convolutional _affine_ function.\n\n1. 2019-05-17 - [Integer Discrete Flows and Lossless Compression](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.07376) by Hoogeboom, Peters et al.\u003Cbr>\n   A normalizing flow to be used for ordinal discrete data. They introduce a flexible transformation layer called integer discrete coupling.\n\n1. 2019-04-09 - [Block Neural Autoregressive Flow](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04676) by Cao, Titov et al.\u003Cbr>\n   As an alternative to hand-crafted bijections, Huang et al. (2018) proposed NAF, a universal approximator for density functions. 
Their flow is a neural net whose parameters are predicted by another NN. The latter grows quadratically with the size of the former which is inefficient. We propose block neural autoregressive flow (B-NAF), a much more compact universal approximator of density functions, where we model a bijection directly using a single feed-forward network. Invertibility is ensured by carefully designing affine transformations with block matrices that make the flow autoregressive and monotone. We compare B-NAF to NAF and show our flow is competitive across datasets while using orders of magnitude fewer parameters. [[Code](https:\u002F\u002Fgithub.com\u002Fnicola-decao\u002FBNAF)]\n\n1. 2019-02-19 - [MaCow: Masked Convolutional Generative Flow](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04208) by Ma, Kong et al.\u003Cbr>\n   Introduces a masked convolutional generative flow (MaCow) layer using a small kernel to capture local connectivity. They showed some improvement over the GLOW model while being fast and stable.\n\n1. 2019-01-30 - [Emerging Convolutions for Generative Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.11137) by Hoogeboom, Berg et al.\u003Cbr>\n   Introduces autoregressive-like convolutional layers that operate on the channel **and** spatial axes. This improved performance on image datasets compared to standard 1x1 convolutions. The trade-off is that the inverse operator is quite expensive; however, the authors provide a fast C++ implementation. [[Code](https:\u002F\u002Fgithub.com\u002Fehoogeboom\u002Femerging)]\n\n1. 2018-11-06 - [FloWaveNet : A Generative Flow for Raw Audio](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.02155) by Kim, Lee et al.\u003Cbr>\n   A flow-based generative model for raw audio synthesis. [[Code](https:\u002F\u002Fgithub.com\u002Fksw0306\u002FFloWaveNet)]\n\n1. 
2018-10-02 - [FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.01367) by Grathwohl, Chen et al.\u003Cbr>\n   Uses Neural ODEs as a solver to produce continuous-time normalizing flows (CNF).\n\n1. 2018-07-09 - [Glow: Generative Flow with Invertible 1x1 Convolutions](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03039) by Kingma, Dhariwal\u003Cbr>\n   They show that flows using invertible 1x1 convolution achieve high likelihood on standard generative benchmarks and can efficiently synthesize realistic-looking, large images.\n\n1. 2018-07-03 - [Deep Density Destructors](https:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Finouye18a.html) by Inouye, Ravikumar\u003Cbr>\n   Normalizing flows but from an iterative perspective. Features a tree-based density estimator.\n\n1. 2018-04-03 - [Neural Autoregressive Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00779) by Huang, Krueger et al.\u003Cbr>\n   Unifies and generalizes autoregressive and normalizing flow approaches, replacing the (conditionally) affine univariate transformations of MAF\u002FIAF with a more general class of invertible univariate transformations expressed as monotonic neural networks. Also demonstrates that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions. [[Code](https:\u002F\u002Fgithub.com\u002FCW-Huang\u002FNAF)]\n\n1. 2018-03-15 - [Sylvester Normalizing Flow for Variational Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05649) by Berg, Hasenclever et al.\u003Cbr>\n   Introduces Sylvester normalizing flows which remove the single-unit bottleneck from planar flows for increased flexibility in the variational posterior.\n\n1. 
2017-11-17 - [Convolutional Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.02255) by Zheng, Yang et al.\u003Cbr>\n   Introduces normalizing flows that take advantage of convolutions (based on convolution over the dimensions of the random input vector) to improve the posterior in the variational inference framework. The convolutions also reduce the number of parameters.\n\n1. 2017-05-19 - [Masked Autoregressive Flow for Density Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07057) by Papamakarios, Pavlakou et al.\u003Cbr>\n   Introduces MAF, a stack of autoregressive models forming a normalizing flow suitable for fast density estimation but slow at sampling. Analogous to Inverse Autoregressive Flow (IAF) except the forward and inverse passes are exchanged. A generalization of RNVP.\n\n   \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Fmasked-autoregressive-flow\">\n     \u003Cpicture>\n       \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmasked-autoregressive-flow\u002Fmasked-autoregressive-flow-white.svg\">\n       \u003Cimg alt=\"Diagram of the slow (sequential) forward pass of a Masked Autoregressive Flow (MAF) layer\" src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmasked-autoregressive-flow\u002Fmasked-autoregressive-flow.svg\">\n     \u003C\u002Fpicture>\n   \u003C\u002Fa>\n\n1. 
2017-03-06 - [Multiplicative Normalizing Flows for Variational Bayesian Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01961) by Louizos, Welling\u003Cbr>\n   They introduce a new type of variational Bayesian neural network that uses flows to generate auxiliary random variables which boost the flexibility of the variational family by multiplying the means of a fully-factorized Gaussian posterior over network parameters. This turns the usual diagonal covariance Gaussian into something that allows for multimodality and non-linear dependencies between network parameters.\n\n1. 2016-06-15 - [Improving Variational Inference with Inverse Autoregressive Flow](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.04934) by Kingma, Salimans et al.\u003Cbr>\n   Introduces inverse autoregressive flow (IAF), a new type of flow which scales well to high-dimensional latent spaces. [[Code](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fiaf)]\n\n1. 2016-05-27 - [Density estimation using Real NVP](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.08803) by Dinh, Sohl-Dickstein et al.\u003Cbr>\n   They introduce the affine coupling layer (RNVP), a major improvement in terms of flexibility over the additive coupling layer (NICE) with unit Jacobian while keeping a single-pass forward and inverse transformation for fast sampling and density estimation, respectively.\n\n   \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Frnvp-affine-coupling-layer\">\n     \u003Cpicture>\n       \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Frnvp-affine-coupling-layer\u002Frnvp-affine-coupling-layer-white.svg\">\n       \u003Cimg alt=\"Diagram of real-valued non-volume preserving (RNVP) coupling layer\" 
src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Frnvp-affine-coupling-layer\u002Frnvp-affine-coupling-layer.svg\">\n     \u003C\u002Fpicture>\n   \u003C\u002Fa>\n\n1. 2015-05-21 - [Variational Inference with Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.05770) by Rezende, Mohamed\u003Cbr>\n   They show how to go beyond mean-field variational inference by using flows to increase the flexibility of the variational family.\n\n1. 2015-02-12 - [Masked Autoencoder for Distribution Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03509) by Germain, Gregor et al.\u003Cbr>\n   Introduces MADE, a feed-forward network that uses carefully constructed binary masks on its weights to control the precise flow of information through the network. The masks ensure that each output unit receives signals only from input units that come before it in some arbitrary order. Yet all outputs can be computed in a single pass.\n\n   A popular and efficient way to make flows autoregressive is to construct them from MADE nets.\n\n   \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Fmade\">\n     \u003Cpicture>\n       \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmade\u002Fmade-white.svg\">\n       \u003Cimg alt=\"Masked Autoencoder for Distribution Estimation\" src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmade\u002Fmade.svg\">\n     \u003C\u002Fpicture>\n   \u003C\u002Fa>\n\n1. 2014-10-30 - [Non-linear Independent Components Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1410.8516) by Dinh, Krueger et al.\u003Cbr>\n   Introduces the additive coupling layer (NICE) and shows how to use it for image generation and inpainting.\n\n1. 
2011-04-01 - [Iterative Gaussianization: from ICA to Random Rotations](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.00229) by Laparra, Camps-Valls et al.\u003Cbr>\n   Normalizing flows in the form of iterative Gaussianization. Also shows connections to information theory.\n\n\u003Cbr>\n\n## 🛠️ Applications \u003Csmall>(8)\u003C\u002Fsmall>\n\n1. 2020-12-06 - [Normalizing Kalman Filters for Multivariate Time Series Analysis](https:\u002F\u002Fassets.amazon.science\u002Fea\u002F0c\u002F88b7bdd54eae8c08983fa4cc3e06\u002Fnormalizing-kalman-filters-for-multivariate-time-series-analysis.pdf) by Bézenac, Rangapuram et al.\u003Cbr>\n   Augments state space models with normalizing flows and thereby mitigates imprecisions stemming from idealized assumptions. Aimed at forecasting real-world data and handling varying levels of missing data. (Also available at [Amazon Science](https:\u002F\u002Famazon.science\u002Fpublications\u002Fnormalizing-kalman-filters-for-multivariate-time-series-analysis).)\n\n1. 2020-11-02 - [On the Sentence Embeddings from Pre-trained Language Models](https:\u002F\u002Faclweb.org\u002Fanthology\u002F2020.emnlp-main.733) by Li, Zhou et al.\u003Cbr>\n   Proposes to use flows to transform anisotropic sentence embedding distributions from BERT into a smooth and isotropic Gaussian, learned through an unsupervised objective. Demonstrates performance gains over SOTA sentence embeddings on semantic textual similarity tasks. Code available at \u003Chttps:\u002F\u002Fgithub.com\u002Fbohanli\u002FBERT-flow>.\n\n1. 2020-10-13 - [Targeted free energy estimation via learned mappings](https:\u002F\u002Faip.scitation.org\u002Fdoi\u002F10.1063\u002F5.0018903) by Wirnsberger, Ballard et al.\u003Cbr>\n   Normalizing flows used to estimate free energy differences.\n\n1. 
2020-07-15 - [Faster Uncertainty Quantification for Inverse Problems with Conditional Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.07985) by Siahkoohi, Rizzuti et al.\u003Cbr>\n   Uses conditional normalizing flows for inverse problems. [[Video](https:\u002F\u002Fyoutu.be\u002FnPvZIKaRBkI)]\n\n1. 2020-06-25 - [SRFlow: Learning the Super-Resolution Space with Normalizing Flow](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.14200) by Lugmayr, Danelljan et al.\u003Cbr>\n   Uses normalizing flows for super-resolution.\n\n1. 2019-03-09 - [NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03704) by Hoffman, Sountsov et al.\u003Cbr>\n   Uses normalizing flows in conjunction with Monte Carlo estimation to have more expressive distributions and better posterior estimation.\n\n1. 2018-08-14 - [Analyzing Inverse Problems with Invertible Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04730) by Ardizzone, Kruse et al.\u003Cbr>\n   Normalizing flows for inverse problems.\n\n1. 2018-04-09 - [Latent Space Policies for Hierarchical Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.02808) by Haarnoja, Hartikainen et al.\u003Cbr>\n   Uses normalizing flows, specifically RealNVPs, as policies for reinforcement learning and also applies them for the hierarchical reinforcement learning setting.\n\n\u003Cbr>\n\n## 📺 Videos \u003Csmall>(8)\u003C\u002Fsmall>\n\n1. 2021-01-16 - [Normalizing Flows - Motivations, The Big Idea & Essential Foundations](https:\u002F\u002Fyoutu.be\u002FIuXU2dBOJyw) by [Kapil Sachdeva](https:\u002F\u002Fgithub.com\u002Fksachdeva)\u003Cbr>\n   A comprehensive tutorial on flows explaining the challenges addressed by this class of algorithm. Provides intuition on how to address those challenges, and explains the underlying mathematics using a simple step by step approach.\n\n1. 
2020-12-07 - [Normalizing Flows](https:\u002F\u002Fyoutu.be\u002F7TOvhz93G9o) by [Marc Deisenroth](https:\u002F\u002Fmml-book.github.io\u002Fslopes-expectations.html)\u003Cbr>\n   Part of a NeurIPS 2020 tutorial series titled \"There and Back Again: A Tale of Slopes and Expectations\". Link to [full series](https:\u002F\u002Fyoutube.com\u002Fplaylist?list=PL93aLKqThq4h7UpgeNhkOtEeCnX3DMseS).\n\n1. 2020-11-23 - [Introduction to Normalizing Flows](https:\u002F\u002Fyoutu.be\u002Fu3vVyFVU_lI) by [Marcus Brubaker](https:\u002F\u002Fmbrubake.github.io)\u003Cbr>\n   A great introduction to normalizing flows by one of the creators of [Stan](https:\u002F\u002Fmc-stan.org) presented at ECCV 2020. The tutorial also provides an excellent review of various practical implementations.\n\n1. 2020-02-06 - [Flow Models](https:\u002F\u002Fyoutu.be\u002FJBb5sSC0JoY) by [Pieter Abbeel](https:\u002F\u002Fsites.google.com\u002Fview\u002Fberkeley-cs294-158-sp20\u002Fhome)\u003Cbr>\n   A really thorough explanation of normalizing flows. Also includes some sample code.\n\n1. 2019-12-06 - [What are normalizing flows?](https:\u002F\u002Fyoutu.be\u002Fi7LjDvsLWCg) by [Ari Seff](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=IxBGctYAAAAJ)\u003Cbr>\n   A great 3blue1brown-style video explaining the basics of normalizing flows.\n\n1. 2019-10-09 - [A primer on normalizing flows](https:\u002F\u002Fyoutu.be\u002FP4Ta-TZPVi0) by [Laurent Dinh](https:\u002F\u002Flaurent-dinh.github.io)\u003Cbr>\n   The first author on both the NICE and RNVP papers and one of the first in this field gives an introductory talk at \"Machine Learning for Physics and the Physics of Learning 2019\".\n\n1. 2019-09-24 - [Graph Normalizing Flows](https:\u002F\u002Fyoutu.be\u002FfrMPP30QQgY) by Jenny Liu\u003Cbr>\n   Introduces a new graph-generating model for use e.g. in drug discovery, where training on molecules that are known to bind\u002Fdissolve\u002Fetc. 
may help to generate novel, similarly effective molecules.\n\n1. 2018-10-04 - [Sylvester Normalizing Flow for Variational Inference](https:\u002F\u002Fyoutu.be\u002FVeYyUcIDVHI) by Rianne van den Berg\u003Cbr>\n   Introduces Sylvester normalizing flows which remove the single-unit bottleneck from planar flows for increased flexibility in the variational posterior.\n\n\u003Cbr>\n\n## 📦 Packages \u003Csmall>(14)\u003C\u002Fsmall>\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fpytorch.svg\" alt=\"PyTorch\" height=\"20px\"> &nbsp;PyTorch Packages\n\n1. 2022-05-21 - [Zuko](https:\u002F\u002Fgithub.com\u002Ffrancois-rozet\u002Fzuko) by [François Rozet](https:\u002F\u002Ffrancois-rozet.github.io)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ffrancois-rozet\u002Fzuko\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Zuko is a Python package that implements normalizing flows in PyTorch. It relies heavily on PyTorch's built-in distributions and transformations, which makes the implementation concise, easy to understand and extend. The API is fully documented with references to the original papers.\n\n   Zuko is used in [LAMPE](https:\u002F\u002Fgithub.com\u002Ffrancois-rozet\u002Flampe) to enable Likelihood-free AMortized Posterior Estimation with PyTorch.\n\n1. 2021-01-25 - [Jammy Flows](https:\u002F\u002Fgithub.com\u002Fthoglu\u002Fjammy_flows) by [Thorsten Glüsenkamp](https:\u002F\u002Fgithub.com\u002Fthoglu)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fthoglu\u002Fjammy_flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A package that models joint (conditional) PDFs on tensor products of manifolds (Euclidean, sphere, interval, simplex) - like inverse autoregressive flows, but connects manifolds, models conditional PDFs, and allows for arbitrary couplings instead of affine ones. Includes a few SOTA flows like Gaussianization flows.\n\n1. 
2020-12-07 - [flowtorch](https:\u002F\u002Fgithub.com\u002Ffacebookincubator\u002Fflowtorch) by [Facebook \u002F Meta](https:\u002F\u002Fopensource.fb.com)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ffacebookincubator\u002Fflowtorch\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using Normalizing Flows.\n\n1. 2020-02-09 - [nflows](https:\u002F\u002Fgithub.com\u002Fbayesiains\u002Fnflows) by [Bayesiains](https:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fimurray2\u002Fgroup)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbayesiains\u002Fnflows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A suite of most of the SOTA methods using PyTorch. From an ML group in Edinburgh. They created the current SOTA spline flows. Almost as complete as you'll find from a single repo.\n\n1. 2020-01-28 - [normflows](https:\u002F\u002Fgithub.com\u002FVincentStimper\u002Fnormalizing-flows) by [Vincent Stimper](https:\u002F\u002Fgithub.com\u002FVincentStimper)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FVincentStimper\u002Fnormalizing-flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   The library provides most of the common normalizing flow architectures. It also includes stochastic layers, flows on tori and spheres, and other tools that are particularly useful for applications to the physical sciences.\n\n1. 2018-09-07 - [FrEIA](https:\u002F\u002Fgithub.com\u002Fvislearn\u002FFrEIA) by [VLL Heidelberg](https:\u002F\u002Fgithub.com\u002Fvislearn)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fvislearn\u002FFrEIA\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   The Framework for Easily Invertible Architectures (FrEIA) is based on RNVP flows. 
Easy to set up, it lets you define complex Invertible Neural Networks (INNs) from simple invertible building blocks.\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Ftensorflow.svg\" alt=\"TensorFlow\" height=\"20px\"> &nbsp;TensorFlow Packages\n\n1. 2018-06-22 - [TensorFlow Probability](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability) by [Google](https:\u002F\u002Ftensorflow.org\u002Fprobability)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ftensorflow\u002Fprobability\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Large first-party library that offers RNVP and MAF, among other autoregressive models, plus a collection of composable bijectors.\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fjax.svg\" alt=\"JAX\" height=\"20px\"> &nbsp;JAX Packages\n\n1. 2024-07-05 - [GWKokab](https:\u002F\u002Fgithub.com\u002Fgwkokab\u002Fgwkokab) by [Meesum Qazalbash](https:\u002F\u002Fgithub.com\u002FQazalbash), [Muhammad Zeeshan](https:\u002F\u002Fccrg.rit.edu\u002Fuser\u002Fmuhammad.zeeshan) et al.\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgwkokab\u002Fgwkokab\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A JAX-based gravitational-wave population inference toolkit for parametric models. [[Docs](https:\u002F\u002Fgwkokab.readthedocs.io)]\n\n1. 2022-06-17 - [flowMC](https:\u002F\u002Fgithub.com\u002Fkazewong\u002FflowMC) by [Kaze Wong](https:\u002F\u002Fwww.kaze-wong.com\u002F)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkazewong\u002FflowMC\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Normalizing-flow enhanced sampling package for probabilistic inference. [[Docs](https:\u002F\u002Fpypi.org\u002Fproject\u002FflowMC)]\n\n1. 
2021-06-17 - [pzflow](https:\u002F\u002Fgithub.com\u002Fjfcrenshaw\u002Fpzflow) by [John Franklin Crenshaw](https:\u002F\u002Fjfcrenshaw.github.io)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fjfcrenshaw\u002Fpzflow\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A package for probabilistic modeling of tabular data, with a focus on sampling and posterior calculation.\n\n1. 2021-04-12 - [Distrax](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdistrax) by [DeepMind](https:\u002F\u002Fdeepmind.com)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fdeepmind\u002Fdistrax\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Distrax is a lightweight library of probability distributions and bijectors. It acts as a JAX-native re-implementation of a subset of TensorFlow Probability (TFP), with some new features and emphasis on extensibility.\n\n1. 2020-03-09 - [NuX](https:\u002F\u002Fgithub.com\u002FInformation-Fusion-Lab-Umass\u002FNuX) by Information Fusion Labs (UMass)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FInformation-Fusion-Lab-Umass\u002FNuX\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A library that offers normalizing flows using JAX as the backend. Has some SOTA methods. They also feature a surjective flow via quantization.\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fjulia.svg\" alt=\"Julia\" height=\"20px\"> &nbsp;Julia Packages\n\n1. 
2021-11-07 - [ContinuousNormalizingFlows.jl](https:\u002F\u002Fgithub.com\u002FimpICNF\u002FContinuousNormalizingFlows.jl) by [Hossein Pourbozorg](https:\u002F\u002Fgithub.com\u002Fprbzrg)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FimpICNF\u002FContinuousNormalizingFlows.jl\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Implementations of Infinitesimal Continuous Normalizing Flows Algorithms in Julia. [[Docs](https:\u002F\u002Fimpicnf.github.io\u002FContinuousNormalizingFlows.jl)]\n\n1. 2020-02-07 - [InvertibleNetworks.jl](https:\u002F\u002Fgithub.com\u002Fslimgroup\u002FInvertibleNetworks.jl) by [SLIM](https:\u002F\u002Fslim.gatech.edu)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fslimgroup\u002FInvertibleNetworks.jl\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A Flux-compatible library implementing invertible neural networks and normalizing flows using memory-efficient backpropagation. Uses manually implemented gradients to take advantage of the invertibility of building blocks, which allows for scaling to large-scale problem sizes.\n\n\u003Cbr>\n\n## 🧑‍💻 Repos \u003Csmall>(18)\u003C\u002Fsmall>\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fpytorch.svg\" alt=\"PyTorch\" height=\"20px\"> &nbsp;PyTorch Repos\n\n1. 2021-09-27 - [DeeProb-kit](https:\u002F\u002Fgithub.com\u002Fdeeprob-org\u002Fdeeprob-kit) by [Lorenzo Loconte](https:\u002F\u002Fgithub.com\u002Floreloc)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fdeeprob-org\u002Fdeeprob-kit\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A general-purpose Python library providing a collection of deep probabilistic models (DPMs) which are easy to use and extend. Implements flows such as MAF, RealNVP and NICE.\n\n1. 
2021-08-21 - [NICE: Non-linear Independent Components Estimation](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FNICE_Non_linear_Independent_Components_Estimation) by Maxime Vandegar\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   PyTorch implementation that reproduces results from the paper NICE in about 100 lines of code.\n\n1. 2020-07-19 - [Normalizing Flows - Introduction (Part 1)](https:\u002F\u002Fpyro.ai\u002Fexamples\u002Fnormalizing_flows_i) by [pyro.ai](https:\u002F\u002Fpyro.ai)\u003Cbr>\n   A tutorial on how to use the `pyro-ppl` library (based on PyTorch) to build normalizing flows. They provide some SOTA methods including NSF and MAF. [Parts 2 and 3 coming later](https:\u002F\u002Fgithub.com\u002Fpyro-ppl\u002Fpyro\u002Fissues\u002F1992).\n\n1. 2020-07-03 - [Density Estimation with Neural ODEs and Density Estimation with FFJORDs](https:\u002F\u002Fgit.io\u002FJiWaG) by [torchdyn](https:\u002F\u002Ftorchdyn.readthedocs.io)\u003Cbr>\n   Example of how to use FFJORD as a continuous normalizing flow (CNF). Based on the PyTorch suite `torchdyn`, which offers continuous neural architectures.\n\n1. 2020-05-26 - [StyleFlow](https:\u002F\u002Fgithub.com\u002FRameenAbdal\u002FStyleFlow) by [Rameen Abdal](https:\u002F\u002Ftwitter.com\u002FAbdalRameen)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FRameenAbdal\u002FStyleFlow\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Attribute-conditioned Exploration of StyleGAN-generated Images using Conditional Continuous Normalizing Flows. [[Docs](https:\u002F\u002Frameenabdal.github.io\u002FStyleFlow)]\n\n1. 
2020-02-04 - [Graphical Normalizing Flows](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FGraphical-Normalizing-Flows) by [Antoine Wehenkel](https:\u002F\u002Fawehenkel.github.io)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FAWehenkel\u002FGraphical-Normalizing-Flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Official implementation of \"Graphical Normalizing Flows\" and the experiments presented in the paper.\n\n1. 2019-12-09 - [pytorch-normalizing-flows](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fpytorch-normalizing-flows) by Andrej Karpathy\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkarpathy\u002Fpytorch-normalizing-flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A Jupyter notebook with PyTorch implementations of the most commonly used flows: NICE, RNVP, MAF, Glow, NSF.\n\n1. 2019-09-19 - [Unconstrained Monotonic Neural Networks (UMNN)](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FUMNN) by Antoine Wehenkel\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FAWehenkel\u002FUMNN\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Official implementation of \"Unconstrained Monotonic Neural Networks\" and the experiments presented in the paper.\n\n1. 2019-02-06 - [pytorch_flows](https:\u002F\u002Fgithub.com\u002Facids-ircam\u002Fpytorch_flows) by [acids-ircam](https:\u002F\u002Fgithub.com\u002Facids-ircam)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Facids-ircam\u002Fpytorch_flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   A great repo with some basic PyTorch implementations of normalizing flows from scratch.\n\n1. 
2018-12-30 - [normalizing_flows](https:\u002F\u002Fgithub.com\u002Fkamenbliznashki\u002Fnormalizing_flows) by Kamen Bliznashki\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkamenbliznashki\u002Fnormalizing_flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   PyTorch implementations of density estimation algorithms: BNAF, Glow, MAF, RealNVP, planar flows.\n\n1. 2018-09-01 - [pytorch-flows](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-flows) by Ilya Kostrikov\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fikostrikov\u002Fpytorch-flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   PyTorch implementations of density estimation algorithms: MAF, RNVP, Glow.\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Ftensorflow.svg\" alt=\"TensorFlow\" height=\"20px\"> &nbsp;TensorFlow Repos\n\n1. 2020-11-02 - [Variational Inference using Normalizing Flows (VINF)](https:\u002F\u002Fgithub.com\u002Fpierresegonne\u002FVINF) by Pierre Segonne\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fpierresegonne\u002FVINF\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   This repository provides a hands-on TensorFlow implementation of Normalizing Flows as presented in the [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1505.05770.pdf) introducing the concept (D. Rezende & S. Mohamed).\n\n1. 
2020-01-29 - [Normalizing Flows](https:\u002F\u002Fgithub.com\u002FLukasRinder\u002Fnormalizing-flows) by [Lukas Rinder](https:\u002F\u002Fgithub.com\u002FLukasRinder)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FLukasRinder\u002Fnormalizing-flows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Implementation of normalizing flows (Planar Flow, Radial Flow, Real NVP, Masked Autoregressive Flow (MAF), Inverse Autoregressive Flow (IAF), Neural Spline Flow) in TensorFlow 2 including a small tutorial.\n\n1. 2019-07-19 - [BERT-flow](https:\u002F\u002Fgithub.com\u002Fbohanli\u002FBERT-flow) by Bohan Li\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbohanli\u002FBERT-flow\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   TensorFlow implementation of \"On the Sentence Embeddings from Pre-trained Language Models\" (EMNLP 2020).\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fjax.svg\" alt=\"JAX\" height=\"20px\"> &nbsp;JAX Repos\n\n1. 2020-06-12 - [Neural Transport](https:\u002F\u002Fpyro.ai\u002Fnumpyro\u002Fexamples\u002Fneutra) by [numpyro](https:\u002F\u002Fnum.pyro.ai)\u003Cbr>\n   Features an example of how Normalizing flows can be used to get more robust posteriors from Monte Carlo methods. Uses the `numpyro` library which is a PPL with JAX as the backend. The NF implementations include the basic ones like IAF and BNAF.\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fother.svg\" alt=\"Other\" height=\"20px\"> &nbsp;Other Repos\n\n1. 
2018-06-11 - [Destructive Deep Learning (ddl)](https:\u002F\u002Fgithub.com\u002Fdavidinouye\u002Fdestructive-deep-learning) by [David Inouye](https:\u002F\u002Fdavidinouye.com)\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fdavidinouye\u002Fdestructive-deep-learning\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Code base for the paper [Deep Density Destructors](https:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Finouye18a.html) by Inouye & Ravikumar (2018). An entire suite of iterative methods, including tree-based as well as Gaussianization methods, which are similar to normalizing flows except that they converge iteratively instead of being fully parametrized. That is, they still use bijective transforms, compute the Jacobian and check the likelihood, and you can still sample and get probability density estimates. The only difference is that you repeat the following two steps until convergence:\n\n   1. compute one layer or block layer (e.g. Marginal Gaussianization + PCA rotation)\n   1. check for convergence (e.g. log-likelihood using the change-of-variables formula)\n\n   Table 1 in the paper has a good comparison with traditional NFs.\n\n1. 2017-07-11 - [Normalizing Flows Overview](https:\u002F\u002Fwww.pymc.io\u002Fprojects\u002Fexamples\u002Fen\u002F2022.12.0\u002Fvariational_inference\u002Fnormalizing_flows_overview.html) by PyMC3\u003Cbr>\n   A very helpful notebook showcasing how to work with flows in practice and comparing it to PyMC3's NUTS-based HMC kernel. Based on [Theano](https:\u002F\u002Fgithub.com\u002FTheano\u002FTheano).\n\n1. 
2017-03-21 - [NormFlows](https:\u002F\u002Fgithub.com\u002Fandymiller\u002FNormFlows) by Andy Miller\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fandymiller\u002FNormFlows\" alt=\"GitHub repo stars\" valign=\"middle\" \u002F>\u003Cbr>\n   Simple didactic example using [`autograd`](https:\u002F\u002Fgithub.com\u002FHIPS\u002Fautograd), so pretty low-level.\n\n\u003Cbr>\n\n## 🌐 Blog Posts \u003Csmall>(5)\u003C\u002Fsmall>\n\n1. 2020-08-19 - [Chapter on flows from the book 'Deep Learning for Molecules and Materials'](https:\u002F\u002Fdmol.pub\u002Fdl\u002Fflows) by [Andrew White](https:\u002F\u002Fthewhitelab.org)\u003Cbr>\n   A nice introduction starting with the change of variables formula (aka flow equation), going on to cover some common bijectors and finishing with a code example showing how to fit the double-moon distribution with TensorFlow Probability.\n\n1. 2018-10-21 - [Change of Variables for Normalizing Flows](https:\u002F\u002Fnealjean.com\u002Fml\u002Fchange-of-variables) by Neal Jean\u003Cbr>\n   A short and simple explanation of the change-of-variables theorem in terms of probability mass conservation.\n\n1. 2018-10-13 - [Flow-based Deep Generative Models](https:\u002F\u002Flilianweng.github.io\u002Flil-log\u002F2018\u002F10\u002F13\u002Fflow-based-deep-generative-models) by Lilian Weng\u003Cbr>\n   Covers change of variables, NICE, RNVP, MADE, Glow, MAF, IAF, WaveNet, PixelRNN.\n\n1. 2018-04-03 - [Normalizing Flows](https:\u002F\u002Fakosiorek.github.io\u002Fnorm_flows) by Adam Kosiorek\u003Cbr>\n   Introduction to flows covering change of variables, planar flow, radial flow, RNVP and autoregressive flows like MAF, IAF and Parallel WaveNet.\n\n1. 2018-01-17 - [Normalizing Flows Tutorial](https:\u002F\u002Fblog.evjang.com\u002F2018\u002F01\u002Fnf1.html) by Eric Jang\u003Cbr>\n   [Part 1](https:\u002F\u002Fblog.evjang.com\u002F2018\u002F01\u002Fnf1.html): Distributions and Determinants. 
[Part 2](https:\u002F\u002Fblog.evjang.com\u002F2018\u002F01\u002Fnf2.html): Modern Normalizing Flows. Lots of great graphics.\n\n\u003Cbr>\n\n## 🚧 Contributing\n\nSee something that's missing from this list? PRs welcome! A good place to find new items for the Repos section is the [Normalizing Flows topic on GitHub](https:\u002F\u002Fgithub.com\u002Ftopics\u002Fnormalizing-flows).\n\nNote: Don't edit the readme directly (it's auto-generated). Add your submission\nto the appropriate [`data\u002F*.yml`](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fedit\u002Fmain\u002Fdata) file.\n\nPapers should be peer-reviewed and published in a journal. If you're unsure if a paper or resource belongs in this list, feel free to [open an issue](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002Fnew) or [start a discussion](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fdiscussions). This repo is meant to be a community effort. 
Don't hesitate to voice an opinion.\n\n\u003Cbr>\n\n\u003Ch1 align=\"center\">\n  Awesome Normalizing Flows\n\u003C\u002Fh1>\n\n\u003Ch4 align=\"center\">\n\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n[![Pull Requests Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPull%20Requests-welcome-brightgreen.svg?logo=github)](#-contributing)\n[![Link Check](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Factions\u002Fworkflows\u002Flink-check.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Factions\u002Fworkflows\u002Flink-check.yml)\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002F227366838.svg)](https:\u002F\u002Fzenodo.org\u002Fbadge\u002Flatestdoi\u002F227366838)\n\n\u003C\u002Fh4>\n\nA list of awesome resources for understanding and applying normalizing flows (NF): a relatively simple yet powerful new tool in statistics for constructing expressive probability distributions from simple base distributions via a chain of trainable smooth bijective transformations (diffeomorphisms).\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Fnormalizing-flow\">\n   \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fnormalizing-flow\u002Fnormalizing-flow-white.svg\">\n      \u003Cimg alt=\"Diagram of a normalizing flow transforming a simple base distribution into an expressive one through a chain of bijections\" src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fnormalizing-flow\u002Fnormalizing-flow.svg\">\n   \u003C\u002Fpicture>\n\u003C\u002Fa>\n\n\u003Csup>_Diagram inspired by [Lilian Weng](https:\u002F\u002Flilianweng.github.io\u002Flil-log\u002F2018\u002F10\u002F13\u002Fflow-based-deep-generative-models). Created with [CeTZ](https:\u002F\u002Fcetz-package.github.io). [View source](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Fblob\u002Fmain\u002Fassets\u002Fnormalizing-flow\u002Fnormalizing-flow.typ)._\u003C\u002Fsup>\n\n\u003Cbr>\n\n## 
\u003Cimg src=\"assets\u002Ftoc.svg\" alt=\"Contents\" height=\"18px\"> &nbsp;目录\n\n1. [目录](#-table-of-contents)\n1. [📝 出版物（60篇）](#-publications-60)\n1. [🛠️ 应用（8个）](#️-applications-8)\n1. [📺 视频（8个）](#-videos-8)\n1. [📦 软件包（14个）](#-packages-14)\n   1. [PyTorch 软件包](#-pytorch-packages)\n   1. [TensorFlow 软件包](#-tensorflow-packages)\n   1. [JAX 软件包](#-jax-packages)\n   1. [Julia 软件包](#-julia-packages)\n1. [🧑‍💻 仓库（18个）](#-repos-18)\n   1. [PyTorch 仓库](#-pytorch-repos)\n   1. [TensorFlow 仓库](#-tensorflow-repos)\n   1. [JAX 仓库](#-jax-repos)\n   1. [其他仓库](#-other-repos)\n1. [🌐 博客文章（5篇）](#-blog-posts-5)\n1. [🚧 贡献](#-contributing)\n\n\u003Cbr>\n\n## 📝 出版物 \u003Csmall>(60篇)\u003C\u002Fsmall>\n\n1. 2024年6月20日 - [可迁移的玻尔兹曼生成器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14426) 作者：Klein、Noé\u003Cbr>\n   玻尔兹曼生成器是一种机器学习方法，它通过归一化流学习从简单先验分布到目标玻尔兹曼分布的变换，从而生成分子系统的平衡样本。最近，流匹配技术已被用于在笛卡尔坐标系中训练小型系统的玻尔兹曼生成器。本研究提出了一种可迁移的玻尔兹曼生成器框架，能够在无需重新训练的情况下预测未见过分子的玻尔兹曼分布。这使得近似采样和高效重加权到目标分布成为可能。该框架在二肽上进行了测试，结果表明其对新系统具有良好的泛化能力，并且与单系统训练相比效率更高。[[代码](https:\u002F\u002Fosf.io\u002Fn8vz3\u002F?view_only=1052300a21bd43c08f700016728aa96e)]\n\n1. 2023年1月3日 - [FInC Flow：用于归一化流的快速且可逆的k×k卷积](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09266) 作者：Kallapa、Nagar等。\u003Cbr>\n   提出了一种k×k卷积层及深度归一化流架构，该架构具备：i) 快速并行反演算法，运行时间为O(nk^2)（n为输入图像的高度和宽度，k为卷积核大小）；ii) 每层仅掩码最少数量的可学习参数；iii) 前向传播和采样时间优于其他基于k×k卷积的模型，在真实世界基准测试中表现相当。我们还提供了使用GPU实现的所提出的并行采样算法，以供参考。[[代码](https:\u002F\u002Fgithub.com\u002Faditya-v-kallappa\u002FFInCFlow)]\n\n1. 2022年10月15日 - [用于归一化流的可逆单调算子](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.08176) 作者：Ahn、Kim等。\u003Cbr>\n   本研究提出采用单调形式来解决先前基于ResNet的归一化流中 Lipschitz 常数的问题，并利用单调算子进行了深入的理论分析。此外，研究还构建了一种名为Concatenated Pila (CPila)的激活函数，以改善梯度流动。由此产生的模型——单调流——在多项密度估计基准测试中表现出色（MNIST、CIFAR-10、ImageNet32、ImageNet64）。[[代码](https:\u002F\u002Fgithub.com\u002Fmlvlab\u002FMonotoneFlows)]\n\n1. 
2022年8月18日 - [ManiFlow：用归一化流隐式表示流形](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.08932) 作者：Postels、Danelljan等。\u003Cbr>\n   归一化流的可逆性约束限制了那些嵌入高维空间中的低维流形上的数据分布。通常人们会通过向数据添加噪声来绕过这一限制，但这会影响生成样本的质量。本研究在完全了解扰动分布和噪声模型的情况下，直接从原始数据分布中生成样本。他们发现，在最大似然区域，经过扰动数据训练的归一化流能够隐式地表示流形，进而提出了一种优化目标，即根据来自扰动分布的样本恢复流形上最有可能的点。\n\n1. 2022年6月3日 - [图形化归一化流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02548) 作者：Wehenkel、Louppe。\u003Cbr>\n   本研究将耦合和自回归变换重新视为概率图模型，指出它们可以简化为具有预定义拓扑结构的贝叶斯网络。基于这一新视角，作者提出了图形化归一化流，这是一种新的可逆变换，其图形结构既可以是预先指定的，也可以是可学习的。该模型为将领域知识注入归一化流提供了一条有前景的途径，同时保留了贝叶斯网络的可解释性和归一化流的表征能力。[[代码](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FGraphical-Normalizing-Flows)]\n\n1. 2022年5月16日 - [用于概率时间序列预测的多尺度注意力流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.07493) 作者：Feng、Xu等。\u003Cbr>\n   提出了一种新型的非自回归深度学习模型——多尺度注意力归一化流（MANF），该模型融合了多尺度注意力和相对位置信息，并利用条件归一化流来表示多变量数据分布。\n\n1. 2022-03-02 - [基于归一化流的自适应蒙特卡洛方法](https:\u002F\u002Fdoi.org\u002F10.1073\u002Fpnas.2109420119) 由Gabrié、Rotskoff等人提出。\u003Cbr>\n   马尔可夫链蒙特卡洛（MCMC）算法在从高维、多模态分布中采样时面临困难，通常需要大量的计算资源或专门的重要性采样策略。为解决这一问题，研究者提出了一种自适应MCMC方法，该方法结合局部更新与通过归一化流实现的非局部转移。此方法将标准转移核与生成模型的移动相结合，并利用生成的数据自适应地调整生成模型，从而提高采样效率。理论分析和数值实验表明，该算法能够在亚稳态模式之间快速达到平衡，在跨越巨大自由能势垒时仍能有效采样，并且相比传统MCMC方法实现了显著加速。[[代码](https:\u002F\u002Fzenodo.org\u002Frecords\u002F4783701#.Yfv53urMJD8)]\n\n1. 2022-01-14 - [E(n) 等变归一化流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.09016) 由Satorras、Hoogeboom等人提出。\u003Cbr>\n   将等变图神经网络引入归一化流框架，二者结合形成可逆的等变函数。研究表明，这种流模型优于先前的等变模型，并能够对包含原子位置、类型和电荷的分子构型进行采样。\n\n1. 2021-07-16 - [利用归一化流辅助马尔可夫链蒙特卡洛方法的高效贝叶斯采样](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.08001) 由Gabrié、Rotskoff等人提出。\u003Cbr>\n   归一化流在贝叶斯统计中具有潜力，可作为MCMC的补充或替代方法用于后验分布的采样。然而，通过反向KL散度进行训练可能不足以处理复杂的后验分布。本研究提出了一种新的训练方法，采用直接KL散度，即通过在局部MCMC算法中加入归一化流来提高混合速率，并利用生成的样本进一步训练归一化流。该方法对后验分布的先验知识要求极低，可用于模型验证和证据估计，为高效后验采样提供了一条有前景的途径。\n\n1. 
2021-07-03 - [CInC流：可表征的可逆3×3卷积](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01358) 由Nagar、Dufraisse等人提出。\u003Cbr>\n   旨在改进计算成本高昂的卷积操作。研究探讨了3×3卷积在何种条件下（如填充方式）是可逆的，并实现了显著的速度提升。此外，他们还开发了一种更具表达能力的可逆_Quad耦合_层。[[代码](https:\u002F\u002Fgithub.com\u002FNaagar\u002FNormalizing_Flow_3x3_inv)]\n\n1. 2021-04-14 - [利用凯莱变换正交化卷积层](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.07167) 由Trockman、Kolter提出。\u003Cbr>\n   通过凯莱变换将多通道卷积参数化为正交形式（在傅里叶域中使用斜对称卷积）。这使得逆运算可以高效计算。[[代码](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Forthogonal-convolutions)]\n\n1. 2021-04-14 - [通过更好的正交参数化改进归一化流](https:\u002F\u002Finvertibleworkshop.github.io\u002FINNF_2019\u002Faccepted_papers\u002Fpdfs\u002FINNF_2019_paper_30.pdf) 由Goliński、Lezcano-Casado等人提出。\u003Cbr>\n   他们通过指数映射和凯莱映射对1×1卷积进行参数化，并展示了对Sylvester归一化流优化效果的提升。\n\n1. 2020-09-28 - [基于条件归一化流的多变量概率时间序列预测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.06103) 由Rasul、Sheikh等人提出。\u003Cbr>\n   使用自回归深度学习模型对时间序列的多变量动态进行建模，其中数据分布由条件归一化流表示。[[OpenReview.net](https:\u002F\u002Fopenreview.net\u002Fforum?id=WiGQBFuVRv)] [[代码](https:\u002F\u002Fgithub.com\u002Fzalandoresearch\u002Fpytorch-ts)]\n\n1. 2020-09-21 - [基于哈尔小波的分块自回归流用于轨迹建模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.09878) 由Bhattacharyya、Straehle等人提出。\u003Cbr>\n   引入了一种基于哈尔小波的分块自回归模型。\n\n1. 2020-07-15 - [AdvFlow：利用归一化流的隐蔽黑盒对抗攻击](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.07435) 由Dolatabadi、Erfani等人提出。\u003Cbr>\n   一种针对图像分类器的对抗攻击方法，使用归一化流实现。[[代码](https:\u002F\u002Fgithub.com\u002Fhmdolatabadi\u002FAdvFlow)]\n\n1. 2020-07-06 - [SurVAE流：通过满射弥合VAE与流之间的差距](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.02731) 由Nielsen、Jaini等人提出。\u003Cbr>\n   他们提出了一种通用框架，同时涵盖流（确定性映射）和VAE（随机映射）。通过将确定性映射`x = f(z)`视为随机映射`x ~ p(x|z)`的极限情况，ELBO被重新解释为随机映射下的变量变换公式。此外，他们还展示了几种使用随机映射的满射层，并说明这些层可以与流层组合使用。[[视频](https:\u002F\u002Fyoutu.be\u002FbXp8fk4MRXQ)] [[代码](https:\u002F\u002Fgithub.com\u002Fdidriknielsen\u002Fsurvae_flows)]\n\n1. 
2020-06-15 - [为什么归一化流无法检测分布外数据](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08545) 由Kirichenko、Izmailov等人提出。\u003Cbr>\n   该研究探讨了传统归一化流模型在面对分布外数据时可能出现的问题，并提出通过修改耦合层来应对这一挑战。[[推文](https:\u002F\u002Ftwitter.com\u002Fpolkirichenko\u002Fstatus\u002F1272715634544119809)] [[代码](https:\u002F\u002Fgithub.com\u002FPolinaKirichenko\u002Fflows_ood)]\n\n1. 2020-06-03 - [等变流：针对对称密度的精确似然生成式学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02425) 由Köhler、Klein等人提出。\u003Cbr>\n   研究表明，由等变归一化流生成的分布能够忠实地再现底层密度中的对称性。他们提出了用于构建流的模块，这些模块能够保持物理\u002F化学多体系统中的典型对称性。结果表明，保持对称性的流可以提供更好的泛化能力和采样效率。\n\n1. 2020-06-02 - [卷积指数与广义Sylvester流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.01910) 由Hoogeboom、Satorras等人提出。\u003Cbr>\n   引入卷积指数，以在线性层中加入空间依赖关系，作为对1×1卷积的改进。该方法利用矩阵指数创建廉价且可逆的层。他们还利用这一新架构构建了_卷积Sylvester流_和_图卷积指数_。[[代码](https:\u002F\u002Fgithub.com\u002Fehoogeboom\u002Fconvolution_exponential_and_sylvester)]\n\n1. 2020-05-11 - [iUNets：具有可学习上采样和下采样的全可逆U-Net](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.05220) 由Etmann、Ke等人提出。\u003Cbr>\n   通过引入可逆、正交的上采样和下采样层，将经典的U-Net扩展为完全可逆结构。该方法效率较高，有望支持更深、更大的网络的稳定训练。\n\n1. 2020年4月8日 - Mahajan、Bhattacharyya等人发表的《具有多尺度自回归先验的归一化流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.03891）\u003Cbr>\n   通过引入多尺度自回归先验（mAR），在潜空间中建立通道间的依赖关系，从而提升基于流模型的表征能力。[[代码](https:\u002F\u002Fgithub.com\u002Fvisinf\u002Fmar-scf)]\n\n1. 2020年3月31日 - Brehmer、Cranmer发表的《用于同时进行流形学习和密度估计的流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.13913）\u003Cbr>\n   这种归一化流能够学习数据流形及其上的概率密度函数。[[推文](https:\u002F\u002Ftwitter.com\u002Fkylecranmer\u002Fstatus\u002F1250129080395223040)] [[代码](https:\u002F\u002Fgithub.com\u002Fjohannbrehmer\u002Fmanifold-flow)]\n\n1. 2020年3月4日 - Meng、Song等人发表的《高斯化流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.01941）\u003Cbr>\n   该方法采用可训练核层与正交变换的反复组合。与Real-NVP、Glow和FFJORD等当前最先进模型相比，性能非常有竞争力。[[代码](https:\u002F\u002Fgithub.com\u002Fchenlin9\u002FGaussianization_Flows)]\n\n1. 
2020年2月27日 - Giaquinto、Banerjee发表的《梯度提升归一化流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.11896）\u003Cbr>\n   将传统的归一化流与梯度提升技术相结合。研究结果表明，训练多个模型即可取得良好效果，而无需使用更复杂的分布。[[代码](https:\u002F\u002Fgithub.com\u002Frobert-giaquinto\u002Fgradient-boosted-normalizing-flows)]\n\n1. 2020年2月24日 - Deng、Chang等人发表的《利用动态归一化流建模连续随机过程》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.10516）\u003Cbr>\n   提出了一种基于维纳过程微分变形的归一化流，并将其应用于时间序列分析。[[推文](https:\u002F\u002Ftwitter.com\u002Fr_giaquinto\u002Fstatus\u002F1309648804824723464)]\n\n1. 2020年2月21日 - Hodgkinson、Heide等人发表的《随机归一化流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.09547）\u003Cbr>\n   名称虽与下文的SNF相同，但技术完全不同：这是一种基于随机微分方程（SDE）的连续归一化流动态扩展。该方法将SDE中的布朗运动视为潜在变量，并用流对其进行近似。其目标是实现神经网络SDE的高效训练，进而构建高效的马尔可夫链。\n\n1. 2020年2月16日 - Wu、Köhler等人发表的《随机归一化流（SNF）》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.06707）\u003Cbr>\n   提出了SNF的概念，即由任意顺序的确定性可逆函数（流）与MCMC或朗之万动力学等随机过程相结合。其目的是增强所选确定性可逆函数的表达能力，同时通过可训练的流提高采样效率，使其优于纯MCMC方法。[[推文](https:\u002F\u002Ftwitter.com\u002FFrankNoeBerlin\u002Fstatus\u002F1229734899034329103)]\n\n1. 2020年1月17日 - Ardizzone、Mackowiak等人发表的《利用信息瓶颈训练归一化流以实现竞争性的生成分类》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.06448）\u003Cbr>\n   他们提出了一类带有信息瓶颈目标的条件归一化流。[[代码](https:\u002F\u002Fgithub.com\u002Fvislearn\u002FIB-INN)]\n\n1. 2020年1月15日 - Dolatabadi、Erfani等人发表的《利用线性有理样条进行可逆生成建模》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.05168）\u003Cbr>\n   这是神经样条流的后续工作，其特点是逆运算易于计算。\n\n1. 2019年12月5日 - Papamakarios、Nalisnick等人发表的《用于概率建模与推理的归一化流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.02762）\u003Cbr>\n   这是一篇由 DeepMind 多位参与流模型研发的研究人员撰写的全面且易读的综述文章，强烈推荐。\n\n1. 2019年9月14日 - Wehenkel、Louppe发表的《无约束单调神经网络》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.05164）\u003Cbr>\n   UMNN通过将变换的导数作为无约束神经网络的输出，放宽了对单调神经网络权重和激活函数的限制。变换本身则通过对导数进行数值积分（Clenshaw-Curtis求积法）来计算。[[代码](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FUMNN)]\n\n1. 
2019年8月25日 - Kobyzev、Prince等人发表的《归一化流：简介及当前方法综述》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.09257）\u003Cbr>\n   另一篇内容详尽且通俗易懂的综述文章，既介绍了归一化流的基础知识，也涵盖了部分最先进的方法，同样值得推荐。\n\n1. 2019年7月21日 - Rothfuss、Ferreira等人发表的《用于条件密度估计的噪声正则化》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.08982）\u003Cbr>\n   该论文提出了用于条件密度估计的归一化流，并建议采用噪声正则化来减少过拟合。[[博客](https:\u002F\u002Fsiboehm.com\u002Farticles\u002F19\u002Fnormalizing-flow-network)]\n\n1. 2019年7月18日 - Song、Meng等人发表的《MintNet：利用掩码卷积构建可逆神经网络》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.07945）\u003Cbr>\n   通过掩码卷积创建类似自回归的耦合层，评估速度快且高效。[[代码](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fmintnet)]\n\n1. 2019年7月18日 - Grcić、Grubišić等人发表的《密集连接的归一化流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04627）\u003Cbr>\n   他们构建了嵌套式耦合结构，以增强标准耦合层的表达能力。此外，还利用切片\u002F因式分解进行降维，并采用Nystrom方法为耦合层的条件网络提供支持。该方法在归一化流模型中取得了SOTA水平的成绩。[[代码](https:\u002F\u002Fgithub.com\u002Fmatejgrcic\u002FDenseFlow)]\n\n1. 2019年6月15日 - Karami、Schuurmans等人发表的《可逆卷积流》（https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fb1f62fa99de9f27a048344d55c5ef7a6-Abstract.html）\u003Cbr>\n   提出了循环对称的可逆卷积层，该层不仅可逆，而且计算成本低廉。他们还展示了如何通过约束损失函数设计具有特殊性质的非线性逐元素双射。[[代码](https:\u002F\u002Fgithub.com\u002FKarami-m\u002FInvertible-Convolutional-Flow)]\n\n1. 2019年6月15日 - Finzi、Izmailov等人发表的《可逆卷积网络》（https:\u002F\u002Finvertibleworkshop.github.io\u002FINNF_2019\u002Faccepted_papers\u002Fpdfs\u002FINNF_2019_paper_26.pdf）\u003Cbr>\n   展示了如何通过傅里叶变换使标准卷积层具备可逆性。他们还引入了更适合归一化流的激活函数，例如SneakyRELU。\n\n1. 2019年6月10日 - Durkan、Bekasov等人发表的《神经样条流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.04032）\u003Cbr>\n   该方法使用单调有理样条作为耦合层，目前属于最先进的技术之一。\n\n1. 2019年5月30日 - Liu、Kumar等人发表的《图归一化流》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.13177）\u003Cbr>\n   这是一种用于预测和生成的新式可逆图网络。在监督任务上，其表现与消息传递神经网络相当，但内存占用显著降低，因此可以扩展到更大的图。结合新型图自编码器用于无监督学习，图归一化流成为一种用于图结构的生成模型。\n\n1. 
2019-05-24 - [通过鲁棒可逆 n×n 卷积实现快速流重建](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.10170) 由 Truong、Luu 等人撰写。\u003Cbr>\n   旨在克服 1×1 卷积的局限性，并提出了一种基于巧妙卷积 _仿射_ 函数的可逆 n×n 卷积。\n\n1. 2019-05-17 - [整数离散流与无损压缩](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.07376) 由 Hoogeboom、Peters 等人撰写。\u003Cbr>\n   一种用于序数型离散数据的归一化流。他们引入了一种灵活的变换层，称为整数离散耦合层。\n\n1. 2019-04-09 - [块神经自回归流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04676) 由 De Cao、Titov 等人撰写。\u003Cbr>\n   提出了块神经自回归流 (B-NAF)。有别于 Huang 等人（2018）的 NAF——其双射参数由另一个规模随之二次增长、效率较低的网络预测——B-NAF 直接用单个前馈网络建模双射，并通过精心设计的块矩阵仿射变换保证流的自回归性与单调性，从而保证可逆。实验表明其在多个数据集上与 NAF 相比具有竞争力，而参数数量少了一个数量级。[[代码](https:\u002F\u002Fgithub.com\u002Fnicola-decao\u002FBNAF)]\n\n1. 2019-02-19 - [MaCow：掩码卷积生成流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04208) 由 Ma、Kong 等人撰写。\u003Cbr>\n   引入了一种使用小卷积核捕捉局部连接性的掩码卷积生成流 (MaCow) 层。他们展示了该方法在速度和稳定性方面较 GLOW 模型有所提升。\n\n1. 2019-01-30 - [用于生成式归一化流的新兴卷积](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.11137) 由 Hoogeboom、Berg 等人撰写。\u003Cbr>\n   提出了类似自回归的卷积层，可在通道 **和** 空间两个维度上操作。与标准的 1×1 卷积相比，这种方法显著提升了图像数据集上的性能。不过，其逆运算较为昂贵，但作者提供了一个快速的 C++ 实现。[[代码](https:\u002F\u002Fgithub.com\u002Fehoogeboom\u002Femerging)]\n\n1. 2018-11-06 - [FloWaveNet：用于原始音频的生成流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.02155) 由 Kim、Lee 等人撰写。\u003Cbr>\n   一种基于流的原始音频合成模型。[[代码](https:\u002F\u002Fgithub.com\u002Fksw0306\u002FFloWaveNet)]\n\n1. 2018-10-02 - [FFJORD：用于可扩展可逆生成模型的自由形式连续动力学](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.01367) 由 Grathwohl、Chen 等人撰写。\u003Cbr>\n   使用神经 ODE 作为求解器，以生成连续时间归一化流 (CNF)。\n\n1. 
2018-07-09 - [Glow：带有可逆 1×1 卷积的生成流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03039) 由 Kingma、Dhariwal 撰写。\u003Cbr>\n   他们证明，使用可逆 1×1 卷积的流在标准生成基准测试中能够达到很高的似然值，并且可以高效地合成逼真的大型图像。\n\n1. 2018-07-03 - [深度密度破坏者](https:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Finouye18a.html) 由 Inouye、Ravikumar 撰写。\u003Cbr>\n   从迭代的角度探讨归一化流，特色是一个基于树结构的密度估计器。\n\n1. 2018-04-03 - [神经自回归流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00779) 由 Huang、Krueger 等人撰写。\u003Cbr>\n   统一并推广了自回归和归一化流的方法，用更一般的可逆一维变换类——即单调递增的神经网络——取代了 MAF\u002FIAF 中的（条件）仿射一维变换。同时也证明了所提出的神经自回归流 (NAF) 是连续概率分布的通用近似器。[[代码](https:\u002F\u002Fgithub.com\u002FCW-Huang\u002FNAF)]\n\n1. 2018-03-15 - [用于变分推断的 Sylvester 归一化流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05649) 由 Berg、Hasenclever 等人撰写。\u003Cbr>\n   引入了 Sylvester 归一化流，它消除了平面流中的单单元瓶颈，从而提高了变分后验分布的灵活性。\n\n1. 2017-11-17 - [卷积归一化流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.02255) 由 Zheng、Yang 等人撰写。\u003Cbr>\n   提出了利用卷积优势的归一化流（基于随机输入向量各维度上的卷积），以改善变分推断框架中的后验分布。此外，由于卷积的作用，还减少了参数数量。\n\n1. 2017-05-19 - [用于密度估计的掩码自回归流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07057) 由 Papamakarios、Pavlakou 等人撰写。\u003Cbr>\n   引入了 MAF，即一系列自回归模型组成的归一化流，适用于快速密度估计，但在采样方面较慢。与逆向自回归流 (IAF) 类似，只是前向和逆向传递被互换。它是 RNVP 的一种推广。\n\n   \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Fmasked-autoregressive-flow\">\n     \u003Cpicture>\n       \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmasked-autoregressive-flow\u002Fmasked-autoregressive-flow-white.svg\">\n       \u003Cimg alt=\"掩码自回归流 (MAF) 层缓慢（顺序）前向传递示意图\" src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmasked-autoregressive-flow\u002Fmasked-autoregressive-flow.svg\">\n     \u003C\u002Fpicture>\n   \u003C\u002Fa>\n\n1. 
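上面 MAF 条目所说的“密度估计快、采样慢”，以及它与 IAF 只是前向/逆向传递互换，可以用一个极简的数值草图说明。这是一个假设性的示意实现（纯 Python）：条件器 `mu_alpha` 是虚构的玩具函数，真实 MAF 中 μ_i、α_i 由 MADE 网络输出。

```python
import math

def mu_alpha(x_prev):
    # 玩具条件器（假设的固定函数）：真实 MAF 中由 MADE 网络给出 μ_i、α_i
    s = sum(x_prev)
    return 0.5 * s, math.tanh(s)

def maf_forward(x):
    """密度方向 x -> z：每个 z_i 只依赖给定的 x_{<i}，原则上可并行计算。"""
    z, log_det = [], 0.0
    for i in range(len(x)):
        m, a = mu_alpha(x[:i])
        z.append((x[i] - m) * math.exp(-a))
        log_det += -a  # 雅可比为三角阵：dz_i/dx_i = exp(-α_i)
    return z, log_det

def maf_sample(z):
    """采样方向 z -> x：x_i 依赖刚生成的 x_{<i}，只能逐维顺序计算。"""
    x = []
    for i in range(len(z)):
        m, a = mu_alpha(x)  # 此处只能用已生成的前缀
        x.append(z[i] * math.exp(a) + m)
    return x
```

`maf_forward` 中每个 z_i 只依赖已知的 x，可一次并行算出，因此密度估计快；`maf_sample` 必须等 x_{<i} 生成后才能算 x_i，所以采样慢。IAF 把两个方向对调，于是采样快、密度估计慢，这正是条目中“前向和逆向传递被互换”的含义。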
2017-03-06 - [用于变分贝叶斯神经网络的乘法归一化流](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01961) 由 Louizos、Welling 撰写。\u003Cbr>\n   他们提出了一种新型的变分贝叶斯神经网络，利用流生成辅助随机变量，通过将网络参数的完全分解高斯后验均值相乘，从而提高变分族的灵活性。这使得原本对角线协方差的高斯分布能够支持多模态分布以及网络参数之间的非线性依赖关系。\n\n1. 2016-06-15 - [利用逆向自回归流改进变分推断](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.04934) 由 Kingma、Salimans 等人撰写。\u003Cbr>\n   引入了逆向自回归流 (IAF)，这是一种非常适合高维潜在空间的新型流。[[代码](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fiaf)]\n\n1. 2016年5月27日 - Dinh、Sohl-Dickstein 等人发表的《使用 Real NVP 进行密度估计》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.08803）\u003Cbr>\n   他们提出了仿射耦合层（RNVP），这是在灵活性方面对具有单位雅可比行列式的加法耦合层（NICE）的重大改进，同时保持了单次前向和逆变换，从而分别实现快速采样和密度估计。\n\n   \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Frnvp-affine-coupling-layer\">\n     \u003Cpicture>\n       \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Frnvp-affine-coupling-layer\u002Frnvp-affine-coupling-layer-white.svg\">\n       \u003Cimg alt=\"实值非保体积（RNVP）耦合层示意图\" src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Frnvp-affine-coupling-layer\u002Frnvp-affine-coupling-layer.svg\">\n     \u003C\u002Fpicture>\n   \u003C\u002Fa>\n\n1. 2015年5月21日 - Rezende、Mohamed 发表的《基于归一化流的变分推断》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.05770）\u003Cbr>\n   他们展示了如何通过使用流模型来提高变分分布族的灵活性，从而超越均场变分推断。\n\n1. 
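上文 RNVP 条目提到的仿射耦合层“单次前向与单次逆变换”可以用几行代码勾勒。以下是示意性草图（纯 Python）：`s_t` 是假设的玩具尺度/平移函数，真实模型中 s、t 为任意神经网络。

```python
import math

def s_t(x1):
    # 假设的玩具尺度/平移函数；真实 RNVP 中 s、t 可以是任意神经网络
    return [math.sin(v) for v in x1], [v * v for v in x1]

def coupling_forward(x):
    """仿射耦合层：前一半输入恒等通过，后一半做逐元素仿射变换。"""
    d = len(x) // 2
    x1, x2 = x[:d], x[d:]
    s, t = s_t(x1)
    z2 = [b * math.exp(a) + c for a, b, c in zip(s, x2, t)]
    return x1 + z2, sum(s)  # 雅可比为三角阵，log|det J| = Σ s_i

def coupling_inverse(z):
    """逆变换同样只需一次传递：s、t 仍由恒等通过的 z1 计算。"""
    d = len(z) // 2
    z1, z2 = z[:d], z[d:]
    s, t = s_t(z1)
    return z1 + [(b - c) * math.exp(-a) for a, b, c in zip(s, z2, t)]
```

由于 s、t 只作用于恒等通过的那一半输入，正、逆两个方向都无需求解 s、t 的逆，这就是耦合层能同时做到快速采样和快速密度估计的原因；NICE 的加法耦合相当于取 s ≡ 0 的特例。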
2015年2月12日 - Germain、Gregor 等人发表的《用于分布估计的掩码自编码器》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03509）\u003Cbr>\n   提出了 MADE，这是一种前馈神经网络，通过对权重施加精心设计的二值掩码来控制信息在网络中的精确流动。这些掩码确保每个输出单元仅接收来自按某种任意顺序排列在其之前的输入单元的信号。然而，所有输出仍可在一次前向传播中计算出来。\n\n   将流模型构建为自回归形式的一种流行且高效的方法，就是使用 MADE 网络来实现。\n\n   \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fdiagrams\u002Ftree\u002Fmain\u002Fassets\u002Fmade\">\n     \u003Cpicture>\n       \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmade\u002Fmade-white.svg\">\n       \u003Cimg alt=\"用于分布估计的掩码自编码器\" src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fjanosh\u002Fdiagrams\u002Fmain\u002Fassets\u002Fmade\u002Fmade.svg\">\n     \u003C\u002Fpicture>\n   \u003C\u002Fa>\n\n1. 2014年10月30日 - Dinh、Krueger 等人发表的《非线性独立成分估计》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1410.8516）\u003Cbr>\n   提出了加法耦合层（NICE），并展示了如何将其用于图像生成和修复。\n\n1. 2011年4月1日 - Laparra、Camps-Valls 等人发表的《迭代式高斯化：从 ICA 到随机旋转》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.00229）\u003Cbr>\n   以迭代形式进行高斯化的归一化流。同时也探讨了其与信息理论的联系。\n\n\u003Cbr>\n\n\n\n## 🛠️ 应用领域 \u003Csmall>(8)\u003C\u002Fsmall>\n\n1. 2020年12月6日 - Bézenac、Rangapuram 等人发表的《用于多变量时间序列分析的归一化卡尔曼滤波器》（https:\u002F\u002Fassets.amazon.science\u002Fea\u002F0c\u002F88b7bdd54eae8c08983fa4cc3e06\u002Fnormalizing-kalman-filters-for-multivariate-time-series-analysis.pdf）\u003Cbr>\n   通过引入归一化流来增强状态空间模型，从而缓解由理想化假设引起的不准确性。旨在预测真实世界数据，并处理不同程度的缺失数据。（也可在 [Amazon Science](https:\u002F\u002Famazon.science\u002Fpublications\u002Fnormalizing-kalman-filters-for-multivariate-time-series-analysis) 上查阅。）\n\n1. 2020年11月2日 - Li、Zhou 等人发表的《关于预训练语言模型的句子嵌入》（https:\u002F\u002Faclweb.org\u002Fanthology\u002F2020.emnlp-main.733）\u003Cbr>\n   提议使用流模型将 BERT 的各向异性句子嵌入分布转换为平滑且各向同性的高斯分布，该分布通过无监督目标学习得到。实验表明，在语义文本相似度任务上，性能优于当前最优的句子嵌入方法。代码可在 \u003Chttps:\u002F\u002Fgithub.com\u002Fbohanli\u002FBERT-flow> 获取。\n\n1. 
2020年10月13日 - Wirnsberger、Ballard 等人发表的《通过学习映射进行靶向自由能估计》（https:\u002F\u002Faip.scitation.org\u002Fdoi\u002F10.1063\u002F5.0018903）\u003Cbr>\n   使用归一化流来估计自由能差。\n\n1. 2020年7月15日 - Siahkoohi、Rizzuti 等人发表的《利用条件归一化流加速反问题的不确定性量化》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.07985）\u003Cbr>\n   将条件归一化流应用于反问题。[[视频](https:\u002F\u002Fyoutu.be\u002FnPvZIKaRBkI)]\n\n1. 2020年6月25日 - Lugmayr、Danelljan 等人发表的《SRFlow：利用归一化流学习超分辨率空间》（https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.14200）\u003Cbr>\n   将归一化流用于超分辨率任务。\n\n1. 2019年3月9日 - Hoffman、Sountsov 等人发表的《使用神经传输消除哈密顿蒙特卡洛中的不良几何结构》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03704）\u003Cbr>\n   将归一化流与蒙特卡洛估计结合使用，以获得更具表达力的分布和更准确的后验估计。\n\n1. 2018年8月14日 - Ardizzone、Kruse 等人发表的《利用可逆神经网络分析反问题》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04730）\u003Cbr>\n   将归一化流应用于反问题。\n\n1. 2018年4月9日 - Haarnoja、Hartikainen 等人发表的《面向层次强化学习的潜在空间策略》（https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.02808）\u003Cbr>\n   使用归一化流，特别是 RealNVPs，作为强化学习的策略，并将其应用于层次强化学习场景。\n\n\u003Cbr>\n\n## 📺 视频 \u003Csmall>(8)\u003C\u002Fsmall>\n\n1. 2021-01-16 - [归一化流 - 动机、核心思想及基础概念](https:\u002F\u002Fyoutu.be\u002FIuXU2dBOJyw) 由 [Kapil Sachdeva](https:\u002F\u002Fgithub.com\u002Fksachdeva) 演讲\u003Cbr>\n   一篇关于归一化流的全面教程，解释了这一类算法所解决的挑战。提供了应对这些挑战的直观理解，并以简单易懂的逐步方式阐述了背后的数学原理。\n\n1. 2020-12-07 - [归一化流](https:\u002F\u002Fyoutu.be\u002F7TOvhz93G9o) 由 [Marc Deisenroth](https:\u002F\u002Fmml-book.github.io\u002Fslopes-expectations.html) 演讲\u003Cbr>\n   属于 NeurIPS 2020 系列教程“来回之旅：斜率与期望的故事”的一部分。完整系列链接：[这里](https:\u002F\u002Fyoutube.com\u002Fplaylist?list=PL93aLKqThq4h7UpgeNhkOtEeCnX3DMseS)。\n\n1. 2020-11-23 - [归一化流简介](https:\u002F\u002Fyoutu.be\u002Fu3vVyFVU_lI) 由 [Marcus Brubaker](https:\u002F\u002Fmbrubake.github.io) 演讲\u003Cbr>\n   这是由 [Stan](https:\u002F\u002Fmc-stan.org) 的创建者之一在 ECCV 2020 上所做的关于归一化流的精彩介绍。该教程还对各种实际实现进行了出色的回顾。\n\n1. 
2020-02-06 - [流模型](https:\u002F\u002Fyoutu.be\u002FJBb5sSC0JoY) 由 [Pieter Abbeel](https:\u002F\u002Fsites.google.com\u002Fview\u002Fberkeley-cs294-158-sp20\u002Fhome) 演讲\u003Cbr>\n   对归一化流进行了非常详尽的讲解。还包含一些示例代码。\n\n1. 2019-12-06 - [什么是归一化流？](https:\u002F\u002Fyoutu.be\u002Fi7LjDvsLWCg) 由 [Ari Seff](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=IxBGctYAAAAJ) 演讲\u003Cbr>\n   一段类似 3blue1brown 风格的优秀视频，解释了归一化流的基础知识。\n\n1. 2019-10-09 - [归一化流入门](https:\u002F\u002Fyoutu.be\u002FP4Ta-TZPVi0) 由 [Laurent Dinh](https:\u002F\u002Flaurent-dinh.github.io) 演讲\u003Cbr>\n   NICE 和 RNVP 论文的第一作者，也是该领域的早期研究者之一，在“2019 年物理学机器学习与物理学习”会议上发表了介绍性演讲。\n\n1. 2019-09-24 - [图归一化流](https:\u002F\u002Fyoutu.be\u002FfrMPP30QQgY) 由 Jenny Liu 演讲\u003Cbr>\n   介绍了一种用于药物发现等领域的新型图生成模型，通过对已知具有结合、溶解等特性的分子进行训练，有助于生成同样有效的新型分子。\n\n1. 2018-10-04 - [西尔维斯特归一化流用于变分推断](https:\u002F\u002Fyoutu.be\u002FVeYyUcIDVHI) 由 Rianne van den Berg 演讲\u003Cbr>\n   介绍了西尔维斯特归一化流，它消除了平面流中的单单元瓶颈，从而提高了变分后验分布的灵活性。\n\n\u003Cbr>\n\n## 📦 软件包 \u003Csmall>(14)\u003C\u002Fsmall>\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fpytorch.svg\" alt=\"PyTorch\" height=\"20px\"> &nbsp;PyTorch 软件包\n\n1. 2022-05-21 - [Zuko](https:\u002F\u002Fgithub.com\u002Ffrancois-rozet\u002Fzuko) 由 [François Rozet](https:\u002F\u002Ffrancois-rozet.github.io) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ffrancois-rozet\u002Fzuko\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   Zuko 是一个在 PyTorch 中实现归一化流的 Python 库。它大量依赖 PyTorch 内置的概率分布和变换功能，使得实现简洁、易于理解和扩展。API 文档齐全，并附有原始论文的参考文献。\n\nZuko 被用于 [LAMPE](https:\u002F\u002Fgithub.com\u002Ffrancois-rozet\u002Flampe)，以支持使用 PyTorch 的似然无须计算的近似后验估计。\n\n1. 
2021-01-25 - [Jammy Flows](https:\u002F\u002Fgithub.com\u002Fthoglu\u002Fjammy_flows) 由 [Thorsten Glüsenkamp](https:\u002F\u002Fgithub.com\u002Fthoglu) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fthoglu\u002Fjammy_flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   该软件包用于在流形的张量积上建模联合（条件）概率密度函数——类似于逆自回归流，但能够连接不同的流形、建模条件概率密度函数，并允许使用任意耦合而非仅限于仿射耦合。包含了若干 SOTA 流，如高斯化流。\n\n1. 2020-12-07 - [flowtorch](https:\u002F\u002Fgithub.com\u002Ffacebookincubator\u002Fflowtorch) 由 [Facebook \u002F Meta](https:\u002F\u002Fopensource.fb.com) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ffacebookincubator\u002Fflowtorch\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   FlowTorch 是一个基于 PyTorch 的库，用于通过归一化流学习和采样复杂概率分布。\n\n1. 2020-02-09 - [nflows](https:\u002F\u002Fgithub.com\u002Fbayesiains\u002Fnflows) 由 [Bayesiains](https:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fimurray2\u002Fgroup) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbayesiains\u002Fnflows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   这是一套使用 PyTorch 实现的大多数 SOTA 方法的工具集。来自爱丁堡的一个机器学习小组。他们开发了当前 SOTA 的样条归一化流。几乎可以称得上是单一仓库中最为完整的实现。\n\n1. 2020-01-28 - [normflows](https:\u002F\u002Fgithub.com\u002FVincentStimper\u002Fnormalizing-flows) 由 [Vincent Stimper](https:\u002F\u002Fgithub.com\u002FVincentStimper) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FVincentStimper\u002Fnormalizing-flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   该库提供了大多数常见的归一化流架构。还包括随机层、环面和球面上的流以及其他特别适用于物理科学应用的工具。\n\n1. 
2018-09-07 - [FrEIA](https:\u002F\u002Fgithub.com\u002Fvislearn\u002FFrEIA) 由 [VLL Heidelberg](https:\u002F\u002Fgithub.com\u002Fvislearn) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fvislearn\u002FFrEIA\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   易于逆向的架构框架 (FrEIA) 基于 RNVP 流。设置简单，允许从简单的可逆构建模块定义复杂的可逆神经网络 (INN)。\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Ftensorflow.svg\" alt=\"TensorFlow\" height=\"20px\"> &nbsp;TensorFlow 软件包\n\n1. 2018-06-22 - [TensorFlow Probability](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprobability) 由 [Google](https:\u002F\u002Ftensorflow.org\u002Fprobability) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ftensorflow\u002Fprobability\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   这是一个大型的第一方库，提供 RNVP、MAF 等自回归模型，以及一系列可组合的双射变换。\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fjax.svg\" alt=\"JAX\" height=\"20px\"> &nbsp;JAX 生态包\n\n1. 2024-07-05 - [GWKokab](https:\u002F\u002Fgithub.com\u002Fgwkokab\u002Fgwkokab) 由 [Meesum Qazalbash](https:\u002F\u002Fgithub.com\u002FQazalbash)、[Muhammad Zeeshan](https:\u002F\u002Fccrg.rit.edu\u002Fuser\u002Fmuhammad.zeeshan) 等人开发。\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgwkokab\u002Fgwkokab\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   基于 JAX 的引力波种群推断工具包，适用于参数化模型 [[文档](https:\u002F\u002Fgwkokab.readthedocs.io)]\n\n1. 2022-06-17 - [flowMC](https:\u002F\u002Fgithub.com\u002Fkazewong\u002FflowMC) 由 [Kaze Wong](https:\u002F\u002Fwww.kaze-wong.com\u002F) 开发。\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkazewong\u002FflowMC\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   基于归一化流的采样包，用于概率推断 [[文档](https:\u002F\u002Fpypi.org\u002Fproject\u002FflowMC)]\n\n1. 
2021-06-17 - [pzflow](https:\u002F\u002Fgithub.com\u002Fjfcrenshaw\u002Fpzflow) 由 [John Franklin Crenshaw](https:\u002F\u002Fjfcrenshaw.github.io) 开发。\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fjfcrenshaw\u002Fpzflow\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   一个专注于表格数据概率建模的软件包，重点在于采样和后验计算。\n\n1. 2021-04-12 - [Distrax](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdistrax) 由 [DeepMind](https:\u002F\u002Fdeepmind.com) 开发。\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fdeepmind\u002Fdistrax\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   Distrax 是一个轻量级的概率分布与双射变换库。它作为 TensorFlow Probability (TFP) 某些子集的 JAX 原生重实现，同时引入了一些新特性，并强调可扩展性。\n\n1. 2020-03-09 - [NuX](https:\u002F\u002Fgithub.com\u002FInformation-Fusion-Lab-Umass\u002FNuX) 由 马萨诸塞大学信息融合实验室 开发。\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FInformation-Fusion-Lab-Umass\u002FNuX\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   一个以 JAX 为后端的归一化流库，包含一些当前最优的方法。此外，还提供了一种基于量化技术的满射流。\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fjulia.svg\" alt=\"Julia\" height=\"20px\"> &nbsp;Julia 生态包\n\n1. 2021-11-07 - [ContinuousNormalizingFlows.jl](https:\u002F\u002Fgithub.com\u002FimpICNF\u002FContinuousNormalizingFlows.jl) 由 [Hossein Pourbozorg](https:\u002F\u002Fgithub.com\u002Fprbzrg) 开发。\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FimpICNF\u002FContinuousNormalizingFlows.jl\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   在 Julia 中实现了无穷小连续归一化流算法。[[文档](https:\u002F\u002Fimpicnf.github.io\u002FContinuousNormalizingFlows.jl)]\n\n1. 
2020-02-07 - [InvertibleNetworks.jl](https:\u002F\u002Fgithub.com\u002Fslimgroup\u002FInvertibleNetworks.jl) 由 [SLIM](https:\u002F\u002Fslim.gatech.edu) 开发。\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fslimgroup\u002FInvertibleNetworks.jl\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   一个与 Flux 兼容的库，实现了可逆神经网络和归一化流，并采用内存高效的反向传播技术。通过手动实现梯度来利用构建模块的可逆性，从而能够扩展到大规模问题。\n\n\u003Cbr>\n\n## 🧑‍💻 代码仓库 \u003Csmall>(18)\u003C\u002Fsmall>\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fpytorch.svg\" alt=\"PyTorch\" height=\"20px\"> &nbsp;PyTorch 仓库\n\n1. 2021-09-27 - [DeeProb-kit](https:\u002F\u002Fgithub.com\u002Fdeeprob-org\u002Fdeeprob-kit) 由 [Lorenzo Loconte](https:\u002F\u002Fgithub.com\u002Floreloc) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fdeeprob-org\u002Fdeeprob-kit\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   一个通用的 Python 库，提供了一系列易于使用和扩展的深度概率模型（DPM）。实现了 MAF、RealNVP 和 NICE 等流模型。\n\n1. 2021-08-21 - [NICE：非线性独立成分估计](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FNICE_Non_linear_Independent_Components_Estimation) 由 Maxime Vandegar 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   一个 PyTorch 实现，用大约 100 行代码复现了论文 NICE 中的结果。\n\n1. 2020-07-19 - [归一化流——简介（第 1 部分）](https:\u002F\u002Fpyro.ai\u002Fexamples\u002Fnormalizing_flows_i) 由 [pyro.ai](https:\u002F\u002Fpyro.ai) 开发\u003Cbr>\n   一个关于如何使用基于 PyTorch 的 `pyro-ppl` 库来应用归一化流的教程。他们提供了包括 NSF 和 MAF 在内的若干 SOTA 方法。[第 2 和第 3 部分后续推出](https:\u002F\u002Fgithub.com\u002Fpyro-ppl\u002Fpyro\u002Fissues\u002F1992)。\n\n1. 
2020-07-03 - [基于神经 ODE 的密度估计与基于 FFJORD 的密度估计](https:\u002F\u002Fgit.io\u002FJiWaG) 由 [torchdyn](https:\u002F\u002Ftorchdyn.readthedocs.io) 开发\u003Cbr>\n   展示如何将 FFJORD 用作连续归一化流（CNF）的示例。基于 PyTorch 工具包 `torchdyn`，该工具包提供连续神经网络架构。\n\n1. 2020-05-26 - [StyleFlow](https:\u002F\u002Fgithub.com\u002FRameenAbdal\u002FStyleFlow) 由 [Rameen Abdal](https:\u002F\u002Ftwitter.com\u002FAbdalRameen) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FRameenAbdal\u002FStyleFlow\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   使用条件连续归一化流对 StyleGAN 生成的图像进行属性条件探索。[[文档](https:\u002F\u002Frameenabdal.github.io\u002FStyleFlow)]\n\n1. 2020-02-04 - [图模型归一化流](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FGraphical-Normalizing-Flows) 由 [Antoine Wehenkel](https:\u002F\u002Fawehenkel.github.io) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FAWehenkel\u002FGraphical-Normalizing-Flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   “图模型归一化流”的官方实现及论文中展示的实验。\n\n1. 2019-12-09 - [pytorch-normalizing-flows](https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fpytorch-normalizing-flows) 由 Andrej Karpathy 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkarpathy\u002Fpytorch-normalizing-flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   一个 Jupyter 笔记本，包含最常用流模型的 PyTorch 实现：NICE、RNVP、MAF、Glow、NSF。\n\n1. 2019-09-19 - [无约束单调神经网络（UMNN）](https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FUMNN) 由 Antoine Wehenkel 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FAWehenkel\u002FUMNN\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   “无约束单调神经网络”的官方实现及论文中展示的实验。\n\n1. 
2019-02-06 - [pytorch_flows](https:\u002F\u002Fgithub.com\u002Facids-ircam\u002Fpytorch_flows) 由 [acids-ircam](https:\u002F\u002Fgithub.com\u002Facids-ircam) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Facids-ircam\u002Fpytorch_flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   一个优秀的仓库，包含从头开始实现的一些基础归一化流的 PyTorch 代码。\n\n1. 2018-12-30 - [normalizing_flows](https:\u002F\u002Fgithub.com\u002Fkamenbliznashki\u002Fnormalizing_flows) 由 Kamen Bliznashki 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkamenbliznashki\u002Fnormalizing_flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   PyTorch 实现的密度估计算法：BNAF、Glow、MAF、RealNVP、平面流。\n\n1. 2018-09-01 - [pytorch-flows](https:\u002F\u002Fgithub.com\u002Fikostrikov\u002Fpytorch-flows) 由 Ilya Kostrikov 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fikostrikov\u002Fpytorch-flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   PyTorch 实现的密度估计算法：MAF、RNVP、Glow。\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Ftensorflow.svg\" alt=\"TensorFlow\" height=\"20px\"> &nbsp;TensorFlow 仓库\n\n1. 2020-11-02 - [使用归一化流的变分推断（VINF）](https:\u002F\u002Fgithub.com\u002Fpierresegonne\u002FVINF) 由 Pierre Segonne 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fpierresegonne\u002FVINF\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   该仓库提供了 TensorFlow 的实践实现，用于实现归一化流，正如介绍这一概念的论文（D. Rezende & S. Mohamed）中所述。\n\n1. 2020-01-29 - [归一化流](https:\u002F\u002Fgithub.com\u002FLukasRinder\u002Fnormalizing-flows) 由 [Lukas Rinder](https:\u002F\u002Fgithub.com\u002FLukasRinder) 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FLukasRinder\u002Fnormalizing-flows\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   在 TensorFlow 2 中实现了归一化流（平面流、径向流、Real NVP、掩码自回归流（MAF）、逆自回归流（IAF）、神经样条流），并附带一个小教程。\n\n1. 
2019-07-19 - [BERT-flow](https:\u002F\u002Fgithub.com\u002Fbohanli\u002FBERT-flow) 由 Bohan Li 开发\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbohanli\u002FBERT-flow\" alt=\"GitHub 仓库星标数\" valign=\"middle\" \u002F>\u003Cbr>\n   “预训练语言模型的句子嵌入”（EMNLP 2020）的 TensorFlow 实现。\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fjax.svg\" alt=\"JAX\" height=\"20px\"> &nbsp;JAX 仓库\n\n1. 2020-06-12 - [神经传输](https:\u002F\u002Fpyro.ai\u002Fnumpyro\u002Fexamples\u002Fneutra) 由 [numpyro](https:\u002F\u002Fnum.pyro.ai) 开发\u003Cbr>\n   展示了如何利用归一化流从蒙特卡洛方法中获得更稳健的后验分布。使用了 `numpyro` 库，这是一个以 JAX 为后端的概率编程库。归一化流的实现包括 IAF 和 BNAF 等基础模型。\n\n\u003Cbr>\n\n### \u003Cimg src=\"assets\u002Fother.svg\" alt=\"其他\" height=\"20px\"> &nbsp;其他仓库\n\n1. 2018-06-11 - [破坏性深度学习 (ddl)](https:\u002F\u002Fgithub.com\u002Fdavidinouye\u002Fdestructive-deep-learning) 由 [David Inouye](https:\u002F\u002Fdavidinouye.com) 提供\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fdavidinouye\u002Fdestructive-deep-learning\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   这是 Inouye 和 Ravikumar（2018）论文 [Deep Density Destructors](https:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Finouye18a.html) 的代码库。它包含一整套迭代方法，既有基于树的方法，也有高斯化方法，这些方法与归一化流类似，不同之处在于它们是通过迭代收敛的，而不是完全参数化的。也就是说，它们仍然使用双射变换、计算雅可比行列式、检查似然值，并且可以进行采样和概率密度估计。唯一的区别就是你需要重复以下两个步骤直到收敛：\n\n   1. 计算一层或一个块层（例如，边际高斯化 + PCA 旋转）\n   1. 检查是否收敛（例如，使用变量变换公式计算对数似然）\n\n   论文中的表 1 对传统归一化流进行了很好的比较。\n\n1. 2017-07-11 - [归一化流概述](https:\u002F\u002Fwww.pymc.io\u002Fprojects\u002Fexamples\u002Fen\u002F2022.12.0\u002Fvariational_inference\u002Fnormalizing_flows_overview.html) 由 PyMC3 提供\u003Cbr>\n   这是一个非常有用的笔记本，展示了如何在实践中使用流，并将其与 PyMC3 基于 NUTS 的 HMC 核心进行了比较。基于 [Theano](https:\u002F\u002Fgithub.com\u002FTheano\u002FTheano)。\n\n1. 
2017-03-21 - [NormFlows](https:\u002F\u002Fgithub.com\u002Fandymiller\u002FNormFlows) 由 Andy Miller 提供\n&ensp;\n\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fandymiller\u002FNormFlows\" alt=\"GitHub 仓库星级\" valign=\"middle\" \u002F>\u003Cbr>\n   使用 [`autograd`](https:\u002F\u002Fgithub.com\u002FHIPS\u002Fautograd) 的简单教学示例，因此层次较低。\n\n\u003Cbr>\n\n## 🌐 博客文章 \u003Csmall>(5)\u003C\u002Fsmall>\n\n1. 2020-08-19 - [《分子与材料深度学习》一书中的流章节](https:\u002F\u002Fdmol.pub\u002Fdl\u002Fflows) 由 [Andrew White](https:\u002F\u002Fthewhitelab.org) 提供\u003Cbr>\n   一篇不错的介绍，从变量变换公式（即流方程）开始，接着介绍了几种常见的双射变换，并以一个使用 TensorFlow Probability 拟合双月数据分布的代码示例结束。\n\n1. 2018-10-21 - [归一化流的变量变换](https:\u002F\u002Fnealjean.com\u002Fml\u002Fchange-of-variables) 由 Neal Jean 提供\u003Cbr>\n   简短而清晰地解释了与概率质量守恒相关的变量变换定理。\n\n1. 2018-10-13 - [基于流的深度生成模型](https:\u002F\u002Flilianweng.github.io\u002Flil-log\u002F2018\u002F10\u002F13\u002Fflow-based-deep-generative-models) 由 Lilian Weng 提供\u003Cbr>\n   内容涵盖变量变换、NICE、RNVP、MADE、Glow、MAF、IAF、WaveNet、PixelRNN。\n\n1. 2018-04-03 - [归一化流](https:\u002F\u002Fakosiorek.github.io\u002Fnorm_flows) 由 Adam Kosiorek 提供\u003Cbr>\n   流的入门介绍，内容包括变量变换、平面流、径向流、RNVP 以及像 MAF、IAF 和 Parallel WaveNet 这样的自回归流。\n\n1. 
2018-01-17 - [归一化流教程](https:\u002F\u002Fblog.evjang.com\u002F2018\u002F01\u002Fnf1.html) 由 Eric Jang 提供\u003Cbr>\n   [第一部分](https:\u002F\u002Fblog.evjang.com\u002F2018\u002F01\u002Fnf1.html)：分布与行列式。[第二部分](https:\u002F\u002Fblog.evjang.com\u002F2018\u002F01\u002Fnf2.html)：现代归一化流。包含大量优秀的图表。\n\n\u003Cbr>\n\n## 🚧 贡献\n\n如果你发现列表中缺少某些内容，请提交 PR！寻找“仓库”部分新条目的好地方是 GitHub 上的 [归一化流话题](https:\u002F\u002Fgithub.com\u002Ftopics\u002Fnormalizing-flows)。\n\n注意：请勿直接编辑 README 文件（它是自动生成的）。请将你的贡献添加到相应的 [`data\u002F*.yml`](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fedit\u002Fmain\u002Fdata) 文件中。\n\n论文应经过同行评审并在期刊上发表。如果你不确定某篇论文或资源是否应列入此列表，欢迎 [打开一个问题](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002Fnew) 或 [发起讨论](https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fdiscussions)。本仓库旨在成为一个社区协作项目，欢迎大家畅所欲言。","# Awesome Normalizing Flows 快速上手指南\n\n`awesome-normalizing-flows` 并非一个单一的 Python 包，而是一个精选的资源列表，汇集了关于**归一化流（Normalizing Flows, NF）**的论文、应用案例、视频教程以及不同框架（PyTorch, TensorFlow, JAX, Julia）的代码实现库。\n\n本指南将帮助你快速搭建环境，并基于列表中推荐的 PyTorch 实现库开始第一个归一化流模型的开发。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows (推荐 Linux 以获得最佳 GPU 支持)\n*   **Python 版本**: 3.8 或更高版本\n*   **硬件**: 推荐使用 NVIDIA GPU (需安装对应的 CUDA 驱动)，CPU 亦可运行但训练速度较慢\n*   **前置依赖**:\n    *   `pip` (Python 包管理工具)\n    *   `git` (用于克隆代码仓库)\n\n## 安装步骤\n\n由于该资源列表包含了多个独立的实现库，我们以列表中广泛使用的 **PyTorch** 生态为例进行安装。你可以选择安装通用的深度学习库，或直接克隆列表中某个具体的实现仓库（如 `Graphical-Normalizing-Flows` 或 `MonotoneFlows`）。\n\n以下是安装基础 PyTorch 环境和常用 NF 依赖的步骤：\n\n1.  **创建虚拟环境（推荐）**\n    ```bash\n    python -m venv nf-env\n    source nf-env\u002Fbin\u002Factivate  # Linux\u002FmacOS\n    # 或\n    nf-env\\Scripts\\activate     # Windows\n    ```\n\n2.  
**安装 PyTorch**\n    *   **官方源安装**:\n        ```bash\n        pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n        ```\n    *   **国内镜像加速（推荐中国开发者）**:\n        使用清华大学或阿里云镜像源可显著提升下载速度。\n        ```bash\n        pip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n        ```\n\n3.  **安装辅助库**\n    大多数归一化流实现需要 `numpy`, `scipy`, `matplotlib` 等科学计算库。\n    ```bash\n    pip install numpy scipy matplotlib tqdm\n    ```\n\n4.  **获取具体实现代码（可选）**\n    如果你想直接运行列表中提到的特定论文代码（例如 *Graphical Normalizing Flows*），可以克隆其仓库：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FAWehenkel\u002FGraphical-Normalizing-Flows.git\n    cd Graphical-Normalizing-Flows\n    pip install -r requirements.txt\n    ```\n\n## 基本使用\n\n归一化流的核心思想是通过一系列可逆的双射变换（bijective transformations），将简单的先验分布（如标准高斯分布）映射到复杂的数据分布。\n\n以下是一个基于 `torch` 和 `nflows` (PyTorch 社区常用的 NF 库，虽未在 README 显式列出但属于该生态标准组件) 的最简示例，演示如何构建一个简单的实值非体积保持耦合层（RealNVP 风格）。\n\n*注：若未安装 `nflows`，可先执行 `pip install nflows` 或使用国内源 `pip install nflows -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`。*\n\n### 最简单的使用示例\n\n```python\nimport torch\nfrom torch import nn\nfrom nflows import flows, distributions, transforms\n\n# 1. 定义数据维度\ninput_dim = 2\n\n# 2. 定义基础分布 (标准高斯分布)\nbase_distribution = distributions.StandardNormal(shape=[input_dim])\n\n# 3. 
构建变换链 (Transforms)\n# 这里创建一个简单的仿射耦合层 (Affine Coupling Layer)\ntransform_list = []\n\n# 添加一个随机置换层，增加模型表达能力\ntransform_list.append(transforms.RandomPermutation(features=input_dim))\n\n# 添加一个仿射耦合层（直接实例化 AffineCouplingTransform 即可）\ntransform_list.append(\n    transforms.AffineCouplingTransform(\n        mask=torch.tensor([1, 0], dtype=torch.float32), # 掩码：mask>0 的维度被变换，其余维度作为条件\n        transform_net_create_fn=lambda in_features, out_features: nn.Sequential(\n            nn.Linear(in_features, 64),\n            nn.ReLU(),\n            nn.Linear(64, out_features)\n        )\n    )\n)\n\n# 组合所有变换\ntransform = transforms.CompositeTransform(transform_list)\n\n# 4. 构建归一化流模型\nflow_model = flows.Flow(transform, base_distribution)\n\n# 5. 模拟训练数据（此处以标准高斯数据代替，实际可换成双月等数据集）\ndata = torch.randn(100, input_dim)\n\n# 6. 计算负对数似然损失 (Negative Log-Likelihood)\nloss = -flow_model.log_prob(data).mean()\n\nprint(f\"Initial Loss: {loss.item():.4f}\")\n\n# 7. 简单的优化步骤\noptimizer = torch.optim.Adam(flow_model.parameters(), lr=1e-3)\n\nfor step in range(100):\n    optimizer.zero_grad()\n    loss = -flow_model.log_prob(data).mean()\n    loss.backward()\n    optimizer.step()\n\n    if step % 20 == 0:\n        print(f\"Step {step}, Loss: {loss.item():.4f}\")\n\n# 8. 
采样 (从学习到的分布中生成新数据)\nsamples = flow_model.sample(num_samples=10)  # Flow.sample 直接返回样本张量\nprint(f\"Generated samples shape: {samples.shape}\")\n```\n\n### 下一步建议\n浏览 `awesome-normalizing-flows` 列表中的 **Publications** 部分，找到与你应用场景（如分子生成、时间序列预测、异常检测）相关的论文，并点击对应的 **Code** 链接获取针对该任务优化的完整实现。","某生物制药公司的算法团队正致力于利用机器学习加速新药研发，具体任务是通过生成模型模拟复杂分子系统的平衡态分布，以预测未知药物的结合亲和力。\n\n### 没有 awesome-normalizing-flows 时\n- **文献调研效率低下**：团队成员需手动在 arXiv 和谷歌学术中大海捞针，难以系统性地追踪如“可迁移玻尔兹曼生成器”等前沿论文，导致技术选型滞后。\n- **复现门槛极高**：缺乏统一的代码资源索引，研究人员往往找不到官方实现的 PyTorch 或 JAX 仓库，不得不从零编写复杂的可逆变换逻辑，耗时数周且易出错。\n- **理论理解碎片化**：由于缺少高质量的博客解读和可视化图解（如 MAF 层的前向传播流程），初级工程师难以直观理解归一化流如何通过微分同胚链构建概率分布，沟通成本高昂。\n- **应用落地盲目**：不清楚该技术在分子动力学之外的具体应用场景，难以评估将其引入当前药物筛选流水线的可行性与预期收益。\n\n### 使用 awesome-normalizing-flows 后\n- **前沿技术一键直达**：团队直接查阅其精选的 60+ 篇出版物列表，迅速锁定了能泛化到未见分子的最新算法，将技术调研周期从两周缩短至两天。\n- **开箱即用的代码库**：依托其整理的框架专用包（PyTorch\u002FTensorFlow\u002FJAX）和 18 个高质量仓库，开发人员直接复用成熟的卷积层和采样器，立即着手进行模型微调。\n- **直观的学习路径**：借助收录的视频教程和博客文章，团队快速统一了对核心概念的理解，利用清晰的流程图大幅降低了内部培训难度。\n- **明确的应用指引**：通过分类清晰的应用案例，团队迅速确认了该技术在二肽分子模拟中的成功先例，坚定了将其部署到生产环境的信心。\n\nawesome-normalizing-flows 通过聚合分散的理论、代码与应用资源，将归一化流技术的探索成本降至最低，让研发团队能专注于解决实际的分子模拟难题而非重复造轮子。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjanosh_awesome-normalizing-flows_7aeb8917.png","janosh","Janosh Riebesell","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjanosh_692439a8.jpg","Computational materials science","Periodic Labs","San Francisco",null,"https:\u002F\u002Fjanosh.dev","https:\u002F\u002Fgithub.com\u002Fjanosh",[82],{"name":83,"color":84,"percentage":85},"Python","#3572A5",100,1615,131,"2026-04-04T05:14:35","MIT",1,"","未说明",{"notes":94,"python":92,"dependencies":95},"该仓库是一个资源列表（Awesome List），汇集了关于归一化流（Normalizing Flows）的论文、应用、视频和代码库链接，本身不是一个可直接运行的单一软件工具。列表中提及的子项目分别基于 PyTorch、TensorFlow、JAX 和 Julia 等不同框架，具体运行环境需求需参考各个子项目的独立文档。部分子项目（如 FInC Flow）提到了在 GPU 
上实现并行算法，但未指定具体的显卡型号或显存要求。",[],[14],[98,99,100,101,102,103,104,105,106],"normalizing-flows","bayesian-neural-networks","variational-inference","density-estimation","generative-modeling","autoregressive","machine-learning","awesome-list","bayesian-inference","2026-03-27T02:49:30.150509","2026-04-09T12:33:43.269313",[110,115,120,125,130,135,140],{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},26242,"我想贡献一个新的包或论文到列表中，具体应该如何操作？","您可以直接提交 Pull Request (PR)。只需将您的包或论文信息添加到 `data\u002Fpackages.yml`（或其他对应的数据文件）中。条目格式通常如下：\n\n```yaml\n- id: pkg-11\n  date: todays_date\n  title: name_of_package\n  url: package_website\n  authors: ___\n  authors_url: your_website\n  lang: Python\n  description: A NF package ....\n```\n\n其余格式检查和整理工作通常由 `.pre-commit-config.yaml` 自动处理。","https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002F42",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},26243,"个人博客文章或未发表的预印本（如 arXiv 论文）可以被收录吗？","该仓库主要收录经过同行评审的研究论文。对于博客文章，如果它们能合并成一篇完整的单一文章（而不是分散的多篇帖子），则可能被接受。未发表的预印本通常不符合收录标准，除非有特殊情况或被社区广泛认可。","https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002F17",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},26244,"如何添加视频教程或演讲到列表中？","可以添加视频教程。请提交一个 Pull Request，将视频链接添加到 README 文件的 [Videos section]（视频部分）。维护者欢迎此类贡献以保持仓库的活跃度。","https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002F11",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},26245,"LAMPE 包和 Zuko 包之间有什么关系？","LAMPE 包中实现的归一化流（Normalizing Flows）功能已被导出为一个独立的包，名为 Zuko。建议在列表中将 LAMPE 的条目替换为 Zuko，并在 Zuko 的描述中提及 LAMPE 作为其下游工具或前身，说明其用途。","https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002F38",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},26246,"列表中的项目是按什么顺序排列的？是否可以改为按时间倒序排列？","为了更有效地展示最新内容，维护者计划通过程序化方式从源数据文件（如 `data.yml`）生成 
README，从而实现按时间倒序（最新到最旧）排列，而不是手动重新排序。这确保了列表始终自动保持最新优先的顺序。","https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002F28",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},26247,"如果发现某个 JAX 相关的包或论文未被列出，该如何建议添加？","您可以直接在 Issue 中提供包的 GitHub 链接和相关论文链接（例如 arXiv 或期刊链接）。维护者通常会欢迎这些建议，并可能邀请您直接提交 PR 来添加这些内容，或者他们会亲自添加。","https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002F65",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},26248,"数据字段中的作者（authors）和作者链接（authors_url）支持多个值吗？","是的，为了支持多位作者，`authors` 和 `authors_url` 字段应设计为字符串列表（`list[str]`）而不是单个字符串。如果您发现当前实现不支持，可以提交 PR 来改进数据结构以支持多作者格式。","https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fissues\u002F69",[146],{"id":147,"version":148,"summary_zh":149,"released_at":150},169288,"v1.0.0","## 变更内容\n* 新增论文、视频及代码，由 @jejjohnson 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F1 中提交\n* 更多论文和应用，由 @jejjohnson 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F2 中提交\n* 增加更多论文，由 @jejjohnson 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F4 中提交\n* 添加带有多尺度自回归先验的归一化流论文，由 @ksachdeva 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F6 中提交\n* 添加神经自回归流论文，由 @ksachdeva 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F7 中提交\n* 添加 FrEIA 框架，由 @kleinicke 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F8 中提交\n* 添加 YouTube 教程链接，由 @ksachdeva 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F10 中提交\n* 新增 SurVAE 流，由 @hushon 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F13 中提交\n* 在应用部分添加归一化卡尔曼滤波器论文，由 @ksachdeva 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F14 中提交\n* 添加 ECCV 2020 
视频教程条目，主讲人为 Marcus Brubaker (@mbrubake)，由 @ksachdeva 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F15 中提交\n* 更新 README——关于变量变换的博客文章，由 @MattSkiff 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F16 中提交\n* 添加 VINF，由 @iphysresearch 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F19 中提交\n* 补丁 1：添加 InvertibleNetworks.jl，由 @rafaelorozco 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F20 中提交\n* 将 pzflow 加入 Jax 软件包列表，由 @jfcrenshaw 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F21 中提交\n* 添加 PyTorch 实现，由 @MaximeVandegar 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F24 中提交\n* 添加“用于层次强化学习的潜在空间策略”，由 @hartikainen 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F25 中提交\n* [pre-commit.ci] pre-commit 自动更新，由 @pre-commit-ci 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F26 中提交\n* 添加 Facebook\u002FMeta 的 flowtorch，由 @cranmer 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F27 中提交\n* 自动生成 README，由 @janosh 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F29 中提交\n* 按日期倒序排列 README 条目，由 @janosh 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F30 中提交\n* 为所有出版物添加完整作者列表，由 @janosh 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F31 中提交\n* 添加 https:\u002F\u002Fyoutu.be\u002F7TOvhz93G9o，由 @janosh 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F32 中提交\n* 新增 12 篇出版物，由 @jejjohnson 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F33 中提交\n* 添加“用于概率时间序列预测的多尺度注意力流”，由 @hanlaoshi 在 
https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F36 中提交\n* 添加 LAMPE 软件包，由 @francois-rozet 在 https:\u002F\u002Fgithub.com\u002Fjanosh\u002Fawesome-normalizing-flows\u002Fpull\u002F39 中提交\n* 添加无约束单调","2023-07-20T20:52:41"]
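上文“破坏性深度学习 (ddl)”条目描述了一个迭代式密度“破坏”流程：每轮执行一层边际高斯化加 PCA 旋转，再用目标分布下的对数似然检查是否收敛。下面是按这一描述写的最小 NumPy 草图，仅作示意——其中的函数名与实现细节均为假设，并非 ddl 库的实际 API：

```python
import numpy as np
from scipy import stats

def gaussianize_layer(x):
    """一层“破坏”变换：逐维边际高斯化 + PCA 旋转（示意实现）。"""
    n, d = x.shape
    z = np.empty_like(x, dtype=float)
    for j in range(d):
        # 经验 CDF（秩统计量）映射到 (0, 1)，再取标准正态分位数
        ranks = stats.rankdata(x[:, j]) / (n + 1)
        z[:, j] = stats.norm.ppf(ranks)
    # PCA 旋转：正交变换，不改变总的二阶矩
    _, _, vt = np.linalg.svd(z - z.mean(0), full_matrices=False)
    return z @ vt.T

def avg_log_likelihood(z):
    """标准正态目标分布下的平均对数似然，作为“是否已被高斯化”的收敛指标。"""
    return stats.norm.logpdf(z).sum(axis=1).mean()

rng = np.random.default_rng(0)
# 构造带线性相关的二维数据
x = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])

ll_prev = -np.inf
for it in range(10):
    x = gaussianize_layer(x)
    ll = avg_log_likelihood(x)
    if ll - ll_prev < 1e-3:   # 似然不再提升即视为收敛
        break
    ll_prev = ll
```

由于旋转是正交变换，每轮真正改变分布形状的是边际高斯化这一步；当平均对数似然接近二维标准正态的理论值（约 -2.84）且不再提升时即可停止迭代。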
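快速上手指南中提到，归一化流的核心是用可逆双射变换把简单先验映射到复杂数据分布，其依据是变量变换公式 log p_x(x) = log p_z(f(x)) + log|det J_f(x)|。这一公式可以用纯 PyTorch 的单层对角仿射变换做最小验证（不依赖 `nflows`，所有取值均为示意），并与等价的闭式正态分布对比：

```python
import math
import torch

# 可逆仿射映射：z = f(x) = (x - b) * exp(-s)，逐维作用，雅可比为对角阵
s = torch.tensor([0.5, -0.3])   # 对数尺度（示意取值）
b = torch.tensor([1.0, 2.0])    # 平移（示意取值）

def log_prob(x):
    z = (x - b) * torch.exp(-s)        # 前向变换到基分布空间
    log_det = (-s).sum()               # log|det J_f| = -sum(s)
    # 标准高斯在 z 处的对数密度，按维度求和
    base = (-0.5 * z**2 - 0.5 * math.log(2 * math.pi)).sum(-1)
    return base + log_det

# 该流等价于闭式分布 Normal(b, exp(s))，二者的对数密度应逐点一致
x = torch.tensor([[0.7, 2.4]])
ref = torch.distributions.Normal(b, s.exp()).log_prob(x).sum(-1)
assert torch.allclose(log_prob(x), ref, atol=1e-5)
```

这正是指南示例中 `flow_model.log_prob(data)` 在单层仿射情形下所做的计算：基分布对数密度加上变换的对数雅可比行列式。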