[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-PythonOT--POT":3,"tool-PythonOT--POT":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":23,"env_os":98,"env_gpu":99,"env_ram":99,"env_deps":100,"category_tags":104,"github_topics":105,"view_count":122,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":154},117,"PythonOT\u002FPOT","POT","POT : Python Optimal Transport","POT（Python Optimal Transport）是一个开源的 Python 库，专注于求解最优传输（Optimal Transport, OT）相关的优化问题。最优传输理论在信号处理、图像分析和机器学习等领域有广泛应用，例如衡量概率分布之间的差异、实现领域自适应或计算数据分布的“平均”形态（即 Wasserstein 重心）。POT 提供了丰富的算法实现，包括经典的线性规划解法、带熵正则化的 Sinkhorn 算法、Gromov-Wasserstein 距离及其融合变体，并支持不平衡传输、一维快速求解、高斯混合模型间的传输等场景。它还集成了与 PyTorch、TensorFlow、JAX、NumPy 和 CuPy 的兼容接口，便于在不同深度学习框架中使用。POT 特别适合从事机器学习、计算机视觉或运筹优化方向的研究人员与开发者，尤其适用于需要高效、可微分最优传输计算的科研或工程任务。其模块化设计和详尽文档也降低了入门门槛，兼顾灵活性与性能。","# POT: Python Optimal Transport\n\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FPOT.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FPOT)\n[![Anaconda 
Cloud](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot\u002Fbadges\u002Fversion.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot)\n[![Build Status](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Factions\u002Fworkflows\u002Fbuild_tests.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Factions)\n[![Codecov Status](https:\u002F\u002Fcodecov.io\u002Fgh\u002FPythonOT\u002FPOT\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FPythonOT\u002FPOT)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_d50896e0bc3d.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpot)\n[![Anaconda downloads](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot\u002Fbadges\u002Fdownloads.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot)\n[![License](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot\u002Fbadges\u002Flicense.svg)](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fblob\u002Fmaster\u002FLICENSE)\n\nThis open source Python library provides several solvers for optimization\nproblems related to Optimal Transport for signal, image processing and machine\nlearning.\n\nWebsite and documentation: [https:\u002F\u002FPythonOT.github.io\u002F](https:\u002F\u002FPythonOT.github.io\u002F)\n\nSource Code (MIT):\n[https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT)\n\n\nPOT has the following main features:\n* A large set of differentiable solvers for optimal transport problems, including:\n  *  Exact linear OT, entropic and quadratic regularized OT,\n  *  Gromov-Wasserstein (GW) distances, Fused GW distances and variants of\n     quadratic OT,\n  *  Unbalanced and partial OT for different divergences,\n*  OT barycenters (Wasserstein and GW) for fixed and free support,\n*  Fast OT solvers in 1D, on the circle and between Gaussian Mixture Models (GMMs),\n*  Many ML 
related solvers, such as domain adaptation, optimal transport mapping\n   estimation, subspace learning, Graph Neural Network (GNN) layers.\n*  Several backends for easy use with PyTorch, JAX, TensorFlow, NumPy and CuPy arrays.\n\n### Implemented Features\n\nPOT provides the following generic OT solvers:\n\n* [OT Network Simplex solver](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_OT_1D.html) for the linear program \u002F Earth Mover's Distance [1].\n* [Conditional gradient](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_optim_OTreg.html) [6] and [Generalized conditional gradient](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_optim_OTreg.html) for regularized OT [7].\n* Entropic regularization OT solver with [Sinkhorn Knopp\n  Algorithm](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_OT_1D.html) [2],\n  stabilized version [9] [10] [34], lazy CPU\u002FGPU solver from geomloss [60] [61], greedy Sinkhorn [22] and Screening\n  Sinkhorn [26].\n* Bregman projections for [Wasserstein barycenter](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_barycenter_lp_vs_entropic.html) [3], [convolutional barycenter](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_convolutional_barycenter.html) [21] and unmixing [4].\n* Sinkhorn divergence [23] and entropic regularization OT from empirical data.\n* Debiased Sinkhorn barycenters [Sinkhorn divergence barycenter](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_debiased_barycenter.html) [37].\n* Smooth optimal transport solvers (dual and semi-dual) for KL and squared L2 regularizations [17].\n* Weak OT solver between empirical distributions [39].\n* Non regularized [Wasserstein barycenters [16] ](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_barycenter_lp_vs_entropic.html) with LP solver (only small scale).\n* 
[Gromov-Wasserstein distances](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov.html) and [GW barycenters](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov_barycenter.html)  (exact [13] and regularized [12,51]), differentiable using gradients from Graph Dictionary Learning [38]\n * [Fused-Gromov-Wasserstein distances solver](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_fgw.html#sphx-glr-auto-examples-plot-fgw-py) and [FGW barycenters](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_barycenter_fgw.html) (exact [24] and regularized [12,51]).\n* [Stochastic\n  solver](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_stochastic.html) and\n  [differentiable losses](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbackends\u002Fplot_stoch_continuous_ot_pytorch.html) for\n  Large-scale Optimal Transport (semi-dual problem [18] and dual problem [19])\n* [Sampled solver of Gromov Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov.html) for large-scale problem with any loss functions [33]\n* Non regularized [free support Wasserstein barycenters](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_free_support_barycenter.html) [20].\n* [One dimensional Unbalanced OT](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_UOT_1D.html) with KL relaxation [73] and [barycenter](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_UOT_barycenter_1D.html) [10, 25]. 
Also [exact unbalanced OT](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_unbalanced_ot.html) with KL and quadratic regularization and the [regularization path of UOT](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_regpath.html) [41].\n* [Partial Wasserstein and Gromov-Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_partial_wass_and_gromov.html) and [Partial Fused Gromov-Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_partial_fgw.html) (exact [29] and entropic [3] formulations).\n* [Sliced Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fsliced-wasserstein\u002Fplot_variance.html) [31, 32] and Max-sliced Wasserstein [35] that can be used for gradient flows [36].\n* [Sliced Unbalanced OT and Unbalanced Sliced OT](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_UOT.html) [82].\n* [Wasserstein distance on the\n  circle](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fsliced-wasserstein\u002Fplot_compute_wasserstein_circle.html)\n  [44, 45] and [Spherical Sliced Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fsliced-wasserstein\u002Fplot_variance_ssw.html) [46].\n* [Graph Dictionary Learning solvers](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov_wasserstein_dictionary_learning.html) [38].\n* [Semi-relaxed (Fused) Gromov-Wasserstein divergences](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_semirelaxed_fgw.html) with corresponding [barycenter solvers](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_semirelaxed_gromov_wasserstein_barycenter.html) (exact and regularized [48]).\n* [Quantized (Fused) Gromov-Wasserstein 
distances](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_quantized_gromov_wasserstein.html) [68].\n* [Efficient Discrete Multi Marginal Optimal Transport Regularization](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_demd_gradient_minimize.html) [50].\n* [Several backends](https:\u002F\u002Fpythonot.github.io\u002Fquickstart.html#solving-ot-with-multiple-backends) for easy use of POT with  [Pytorch](https:\u002F\u002Fpytorch.org\u002F)\u002F[jax](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax)\u002F[Numpy](https:\u002F\u002Fnumpy.org\u002F)\u002F[Cupy](https:\u002F\u002Fcupy.dev\u002F)\u002F[Tensorflow](https:\u002F\u002Fwww.tensorflow.org\u002F) arrays.\n* [Smooth Strongly Convex Nearest Brenier Potentials](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_SSNB.html#sphx-glr-auto-examples-others-plot-ssnb-py) [58], with an extension to bounding potentials using [59].\n* [Gaussian Mixture Model OT](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgaussian_gmm\u002Fplot_GMMOT_plan.html#sphx-glr-auto-examples-others-plot-gmmot-plan-py) [69].\n* [Co-Optimal Transport](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_COOT.html) [49] and\n[unbalanced Co-Optimal Transport](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_learning_weights_with_COOT.html) [71].\n* Fused unbalanced Gromov-Wasserstein [70].\n* [Optimal Transport Barycenters for Generic Costs](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_free_support_barycenter_generic_cost.html) [77]\n* [Barycenters between Gaussian Mixture Models](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_gmm_barycenter.html) [69, 77]\n\nPOT provides the following Machine Learning related solvers:\n\n* [Optimal transport for domain\n  
adaptation](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_classes.html)\n  with [group lasso regularization](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_classes.html),   [Laplacian regularization](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_laplacian.html) [5] [30] and [semi\n  supervised setting](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_semi_supervised.html).\n* [Linear OT mapping](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_linear_mapping.html) [14] and [Joint OT mapping estimation](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_mapping.html) [8].\n* [Wasserstein Discriminant Analysis](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_WDA.html) [11] (requires autograd + pymanopt).\n* [JCPOT algorithm for multi-source domain adaptation with target shift](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_jcpot.html) [27].\n* [Graph Neural Network OT layers TFGW](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gnn_TFGW.html) [52] and TW (OT-GNN) [53]\n\nSome other examples are available in the  [documentation](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Findex.html).\n\n#### Using and citing the toolbox\n\nIf you use this toolbox in your research and find it useful, please cite POT\nusing the following references from the current version and from our [JMLR\npaper](https:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv22\u002F20-451.html):\n\n    Flamary R., Vincent-Cuaz C., Courty N., Gramfort A., Kachaiev O., Quang Tran H., David L., Bonet C., Cassereau N., Gnassounou T., Tanguy E., Delon J., Collas A., Mazelet S., Chapel L., Kerdoncuff T., Yu X., Feickert M., 
Krzakala P., Liu T., Fernandes Montesuma E. POT Python Optimal Transport (version 0.9.5). URL: https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\n\n    Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, Titouan Vayer, POT Python Optimal Transport library, Journal of Machine Learning Research, 22(78):1−8, 2021. URL: https:\u002F\u002Fpythonot.github.io\u002F\n\nIn Bibtex format:\n\n```bibtex\n@misc{flamary2024pot,\n  author = {Flamary, R{\\'e}mi and Vincent-Cuaz, C{\\'e}dric and Courty, Nicolas and Gramfort, Alexandre and Kachaiev, Oleksii and Quang Tran, Huy and David, Laurène and Bonet, Cl{\\'e}ment and Cassereau, Nathan and Gnassounou, Th{\\'e}o and Tanguy, Eloi and Delon, Julie and Collas, Antoine and Mazelet, Sonia and Chapel, Laetitia and Kerdoncuff, Tanguy and Yu, Xizheng and Feickert, Matthew and Krzakala, Paul and Liu, Tianlin and Fernandes Montesuma, Eduardo},\n  title = {POT Python Optimal Transport (version 0.9.5)},\n  url = {https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT},\n  year = {2024}\n}\n\n@article{flamary2021pot,\n  author  = {R{\\'e}mi Flamary and Nicolas Courty and Alexandre Gramfort and Mokhtar Z. Alaya and Aur{\\'e}lie Boisbunon and Stanislas Chambon and Laetitia Chapel and Adrien Corenflos and Kilian Fatras and Nemo Fournier and L{\\'e}o Gautheron and Nathalie T.H. Gayraud and Hicham Janati and Alain Rakotomamonjy and Ievgen Redko and Antoine Rolet and Antony Schutz and Vivien Seguy and Danica J. 
Sutherland and Romain Tavenard and Alexander Tong and Titouan Vayer},\n  title   = {POT: Python Optimal Transport},\n  journal = {Journal of Machine Learning Research},\n  year    = {2021},\n  volume  = {22},\n  number  = {78},\n  pages   = {1-8},\n  url     = {http:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv22\u002F20-451.html}\n}\n```\n\n## Installation\n\nThe library has been tested on Linux, MacOSX and Windows. It requires a C++ compiler for building\u002Finstalling the EMD solver and relies on the following Python modules:\n\n- Numpy (>=1.16)\n- Scipy (>=1.0)\n- Cython (>=0.23) (build only, not necessary when installing from pip or conda)\n\n#### Pip installation\n\n\nYou can install the toolbox through PyPI with:\n\n```console\npip install POT\n```\n\nor get the very latest version by running:\n\n```console\npip install -U https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Farchive\u002Fmaster.zip # with --user for user install (no root)\n```\n\nOptional dependencies may be installed with\n```console\npip install POT[all]\n```\nNote that this installs `cvxopt`, which is licensed under GPL 3.0. Alternatively, if you cannot use GPL-licensed software, the specific optional dependencies may be installed individually, or per-submodule. The available optional installations are `backend-jax, backend-tf, backend-torch, cvxopt, dr, gnn, all`.\n\n#### Anaconda installation with conda-forge\n\nIf you use the Anaconda python distribution, POT is available in [conda-forge](https:\u002F\u002Fconda-forge.org). 
To install it and the required dependencies:\n\n```console\nconda install -c conda-forge pot\n```\n\n#### Post installation check\nAfter a correct installation, you should be able to import the module without errors:\n\n```python\nimport ot\n```\n\nNote that for easier access the module is named `ot` instead of `pot`.\n\n\n### Dependencies\n\nSome sub-modules require additional dependencies which are discussed below\n\n* **ot.dr** (Wasserstein dimensionality reduction) depends on autograd and pymanopt that can be installed with:\n\n```shell\npip install pymanopt autograd\n```\n\n\n## Examples\n\n### Short examples\n\n* Import the toolbox\n\n```python\nimport ot\n```\n\n* Compute Wasserstein distances\n\n```python\n# a,b are 1D histograms (sum to 1 and positive)\n# M is the ground cost matrix\n\n# With the unified  API :\nWd = ot.solve(M, a, b).value # exact linear program\nWd_reg = ot.solve(M, a, b, reg=reg).value # entropic regularized OT\n\n# With the old API :\nWd = ot.emd2(a, b, M) # exact linear program\nWd_reg = ot.sinkhorn2(a, b, M, reg) # entropic regularized OT\n# if b is a matrix compute all distances to a and return a vector\n```\n\n* Compute OT matrix\n\n```python\n# a,b are 1D histograms (sum to 1 and positive)\n# M is the ground cost matrix\n\n# With the unified API :\nT = ot.solve(M, a, b).plan # exact linear program\nT_reg = ot.solve(M, a, b, reg=reg).plan # entropic regularized OT\n\n# With the old API :\nT = ot.emd(a, b, M) # exact linear program\nT_reg = ot.sinkhorn(a, b, M, reg) # entropic regularized OT\n```\n\n* Compute OT on empirical distributions\n\n```python\n# X and Y are two 2D arrays of shape (n_samples, n_features)\n\n# with squared euclidean metric\nT = ot.solve_sample(X, Y).plan # exact linear program\nT_reg = ot.solve_sample(X, Y, reg=reg).plan # entropic regularized OT\n\nWass_2 = ot.solve_sample(X, Y).value # Squared Wasserstein_2\nWass_1 = ot.solve_sample(X, Y, metric='euclidean').value # Wasserstein 1\n```\n\n* Compute 
Wasserstein barycenter\n\n```python\n# A is an n*d matrix containing d 1D histograms\n# M is the ground cost matrix\nba = ot.barycenter(A, M, reg) # reg is regularization parameter\n```\n\n### Examples and Notebooks\n\nThe examples folder contains several examples and use cases for the library. The full documentation with examples and output is available on [https:\u002F\u002FPythonOT.github.io\u002F](https:\u002F\u002FPythonOT.github.io\u002F).\n\n\n## Acknowledgements\n\nThis toolbox has been created by [Rémi Flamary](https:\u002F\u002Fremi.flamary.com\u002F) and [Nicolas Courty](http:\u002F\u002Fpeople.irisa.fr\u002FNicolas.Courty\u002F).\n\nIt is currently maintained by:\n\n* [Rémi Flamary](https:\u002F\u002Fremi.flamary.com\u002F)\n* [Cédric Vincent-Cuaz](https:\u002F\u002Fcedricvincentcuaz.github.io\u002F)\n\nThe POT contributors to this library are listed [here](CONTRIBUTORS.md).\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_a4d76650c9b2.png\" \u002F>\n\u003C\u002Fa>\n\nPOT has benefited from financing or manpower from the following partners:\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_35d9efdb0ba7.jpg\" alt=\"ANR\" style=\"height:60px;\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_b9c0d7ba821a.jpg\" alt=\"CNRS\" style=\"height:60px;\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_b89f51778527.jpg\" alt=\"3IA\" style=\"height:60px;\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_d01bf283303f.png\" alt=\"Hi!PARIS\" style=\"height:60px;\"\u002F>\n\n## Contributions and code of conduct\n\nEvery contribution is welcome and should respect the [contribution 
guidelines](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fcontributing.html). Each member of the project is expected to follow the [code of conduct](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fcode_of_conduct.html).\n\n## Support\n\nYou can ask questions and join the development discussion:\n\n* On the POT [Slack channel](https:\u002F\u002Fpot-toolbox.slack.com)\n* On the POT [Gitter channel](https:\u002F\u002Fgitter.im\u002FPythonOT\u002Fcommunity)\n* On the POT [mailing list](https:\u002F\u002Fmail.python.org\u002Fmm3\u002Fmailman3\u002Flists\u002Fpot.python.org\u002F)\n\nYou can also post bug reports and feature requests in GitHub issues. Make sure to read our [guidelines](.github\u002FCONTRIBUTING.md) first.\n\n## References\n\n[1] Bonneel, N., Van De Panne, M., Paris, S., & Heidrich, W. (2011, December). [Displacement interpolation using Lagrangian mass transport](https:\u002F\u002Fpeople.csail.mit.edu\u002Fsparis\u002Fpubli\u002F2011\u002Fsigasia\u002FBonneel_11_Displacement_Interpolation.pdf). In ACM Transactions on Graphics (TOG) (Vol. 30, No. 6, p. 158). ACM.\n\n[2] Cuturi, M. (2013). [Sinkhorn distances: Lightspeed computation of optimal transport](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1306.0895.pdf). In Advances in Neural Information Processing Systems (pp. 2292-2300).\n\n[3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. (2015). [Iterative Bregman projections for regularized transportation problems](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1412.5154.pdf). SIAM Journal on Scientific Computing, 37(2), A1111-A1138.\n\n[4] S. Nakhostin, N. Courty, R. Flamary, D. Tuia, T. Corpetti, [Supervised planetary unmixing with optimal transport](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-01377236\u002Fdocument), Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2016.\n\n[5] N. Courty; R. Flamary; D. Tuia; A. 
Rakotomamonjy, [Optimal Transport for Domain Adaptation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1507.00504.pdf), in IEEE Transactions on Pattern Analysis and Machine Intelligence , vol.PP, no.99, pp.1-1\n\n[6] Ferradans, S., Papadakis, N., Peyré, G., & Aujol, J. F. (2014). [Regularized discrete optimal transport](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1307.5551.pdf). SIAM Journal on Imaging Sciences, 7(3), 1853-1882.\n\n[7] Rakotomamonjy, A., Flamary, R., & Courty, N. (2015). [Generalized conditional gradient: analysis of convergence and applications](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1510.06567.pdf). arXiv preprint arXiv:1510.06567.\n\n[8] M. Perrot, N. Courty, R. Flamary, A. Habrard (2016), [Mapping estimation for discrete optimal transport](http:\u002F\u002Fremi.flamary.com\u002Fbiblio\u002Fperrot2016mapping.pdf), Neural Information Processing Systems (NIPS).\n\n[9] Schmitzer, B. (2016). [Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1610.06519.pdf). arXiv preprint arXiv:1610.06519.\n\n[10] Chizat, L., Peyré, G., Schmitzer, B., & Vialard, F. X. (2016). [Scaling algorithms for unbalanced transport problems](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1607.05816.pdf). arXiv preprint arXiv:1607.05816.\n\n[11] Flamary, R., Cuturi, M., Courty, N., & Rakotomamonjy, A. (2016). [Wasserstein Discriminant Analysis](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1608.08063.pdf). arXiv preprint arXiv:1608.08063.\n\n[12] Gabriel Peyré, Marco Cuturi, and Justin Solomon (2016), [Gromov-Wasserstein averaging of kernel and distance matrices](http:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Fpeyre16.html)  International Conference on Machine Learning (ICML).\n\n[13] Mémoli, Facundo (2011). 
[Gromov–Wasserstein distances and the metric approach to object matching](https:\u002F\u002Fmedia.adelaide.edu.au\u002Facvt\u002FPublications\u002F2011\u002F2011-Gromov%E2%80%93Wasserstein%20Distances%20and%20the%20Metric%20Approach%20to%20Object%20Matching.pdf). Foundations of computational mathematics 11.4 : 417-487.\n\n[14] Knott, M. and Smith, C. S. (1984).[On the optimal mapping of distributions](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002FBF00934745), Journal of Optimization Theory and Applications Vol 43.\n\n[15] Peyré, G., & Cuturi, M. (2018). [Computational Optimal Transport](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.00567.pdf) .\n\n[16] Agueh, M., & Carlier, G. (2011). [Barycenters in the Wasserstein space](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-00637399\u002Fdocument). SIAM Journal on Mathematical Analysis, 43(2), 904-924.\n\n[17] Blondel, M., Seguy, V., & Rolet, A. (2018). [Smooth and Sparse Optimal Transport](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.06276). Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (AISTATS).\n\n[18] Genevay, A., Cuturi, M., Peyré, G. & Bach, F. (2016) [Stochastic Optimization for Large-scale Optimal Transport](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.08527). Advances in Neural Information Processing Systems (2016).\n\n[19] Seguy, V., Bhushan Damodaran, B., Flamary, R., Courty, N., Rolet, A.& Blondel, M. [Large-scale Optimal Transport and Mapping Estimation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.02283.pdf). International Conference on Learning Representation (2018)\n\n[20] Cuturi, M. and Doucet, A. (2014) [Fast Computation of Wasserstein Barycenters](http:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fcuturi14.html). International Conference in Machine Learning\n\n[21] Solomon, J., De Goes, F., Peyré, G., Cuturi, M., Butscher, A., Nguyen, A. & Guibas, L. (2015). 
[Convolutional wasserstein distances: Efficient optimal transportation on geometric domains](https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2766963). ACM Transactions on Graphics (TOG), 34(4), 66.\n\n[22] J. Altschuler, J.Weed, P. Rigollet, (2017) [Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6792-near-linear-time-approximation-algorithms-for-optimal-transport-via-sinkhorn-iteration.pdf), Advances in Neural Information Processing Systems (NIPS) 31\n\n[23] Aude, G., Peyré, G., Cuturi, M., [Learning Generative Models with Sinkhorn Divergences](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00292), Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, (AISTATS) 21, 2018\n\n[24] Vayer, T., Chapel, L., Flamary, R., Tavenard, R. and Courty, N. (2019). [Optimal Transport for structured data with application on graphs](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftitouan19a.html) Proceedings of the 36th International Conference on Machine Learning (ICML).\n\n[25] Frogner C., Zhang C., Mobahi H., Araya-Polo M., Poggio T. (2015). [Learning with a Wasserstein Loss](http:\u002F\u002Fcbcl.mit.edu\u002Fwasserstein\u002F)  Advances in Neural Information Processing Systems (NIPS).\n\n[26] Alaya M. Z., Bérar M., Gasso G., Rakotomamonjy A. (2019). [Screening Sinkhorn Algorithm for Regularized Optimal Transport](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9386-screening-sinkhorn-algorithm-for-regularized-optimal-transport), Advances in Neural Information Processing Systems 33 (NeurIPS).\n\n[27] Redko I., Courty N., Flamary R., Tuia D. (2019). [Optimal Transport for Multi-source Domain Adaptation under Target Shift](http:\u002F\u002Fproceedings.mlr.press\u002Fv89\u002Fredko19a.html), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (AISTATS) 22, 2019.\n\n[28] Caffarelli, L. 
A., McCann, R. J. (2010). [Free boundaries in optimal transport and Monge-Ampere obstacle problems](http:\u002F\u002Fwww.math.toronto.edu\u002F~mccann\u002Fpapers\u002Fannals2010.pdf), Annals of mathematics, 673-730.\n\n[29] Chapel, L., Alaya, M., Gasso, G. (2020). [Partial Optimal Transport with Applications on Positive-Unlabeled Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08276), Advances in Neural Information Processing Systems (NeurIPS), 2020.\n\n[30] Flamary R., Courty N., Tuia D., Rakotomamonjy A. (2014). [Optimal transport with Laplacian regularization: Applications to domain adaptation and shape matching](https:\u002F\u002Fremi.flamary.com\u002Fbiblio\u002Fflamary2014optlaplace.pdf), NIPS Workshop on Optimal Transport and Machine Learning OTML, 2014.\n\n[31] Bonneel, Nicolas, et al. [Sliced and radon wasserstein barycenters of measures](https:\u002F\u002Fperso.liris.cnrs.fr\u002Fnicolas.bonneel\u002FWassersteinSliced-JMIV.pdf), Journal of Mathematical Imaging and Vision 51.1 (2015): 22-45.\n\n[32] Huang, M., Ma S., Lai, L. (2021). [A Riemannian Block Coordinate Descent Method for Computing the Projection Robust Wasserstein Distance](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fhuang21e.html), Proceedings of the 38th International Conference on Machine Learning (ICML).\n\n[33] Kerdoncuff T., Emonet R., Sebban M. [Sampled Gromov Wasserstein](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-03232509\u002Fdocument), Machine Learning Journal (MLJ), 2021.\n\n[34] Feydy, J., Séjourné, T., Vialard, F. X., Amari, S. I., Trouvé, A., & Peyré, G. (2019, April). [Interpolating between optimal transport and MMD using Sinkhorn divergences](http:\u002F\u002Fproceedings.mlr.press\u002Fv89\u002Ffeydy19a\u002Ffeydy19a.pdf). In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 2681-2690). PMLR.\n\n[35] Deshpande, I., Hu, Y. T., Sun, R., Pyrros, A., Siddiqui, N., Koyejo, S., ... & Schwing, A. G. (2019). 
[Max-sliced wasserstein distance and its use for gans](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FDeshpande_Max-Sliced_Wasserstein_Distance_and_Its_Use_for_GANs_CVPR_2019_paper.pdf). In Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (pp. 10648-10656).\n\n[36] Liutkus, A., Simsekli, U., Majewski, S., Durmus, A., & Stöter, F. R.\n(2019, May). [Sliced-Wasserstein flows: Nonparametric generative modeling\nvia optimal transport and diffusions](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fliutkus19a\u002Fliutkus19a.pdf). In International Conference on\nMachine Learning (pp. 4104-4113). PMLR.\n\n[37] Janati, H., Cuturi, M., Gramfort, A. [Debiased sinkhorn barycenters](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fjanati20a\u002Fjanati20a.pdf) Proceedings of the 37th International\nConference on Machine Learning, PMLR 119:4692-4701, 2020\n\n[38] C. Vincent-Cuaz, T. Vayer, R. Flamary, M. Corneli, N. Courty, [Online Graph\nDictionary Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.06555.pdf), International Conference on Machine Learning (ICML), 2021.\n\n[39] Gozlan, N., Roberto, C., Samson, P. M., & Tetali, P. (2017). [Kantorovich duality for general transport costs and applications](https:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.712.1825&rep=rep1&type=pdf). Journal of Functional Analysis, 273(11), 3327-3405.\n\n[40] Forrow, A., Hütter, J. C., Nitzan, M., Rigollet, P., Schiebinger, G., & Weed, J. (2019, April). [Statistical optimal transport via factored couplings](http:\u002F\u002Fproceedings.mlr.press\u002Fv89\u002Fforrow19a\u002Fforrow19a.pdf). In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 2454-2465). PMLR.\n\n[41] Chapel*, L., Flamary*, R., Wu, H., Févotte, C., Gasso, G. (2021). 
[Unbalanced Optimal Transport through Non-negative Penalized Linear Regression](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fc3c617a9b80b3ae1ebd868b0017cc349-Paper.pdf) Advances in Neural Information Processing Systems (NeurIPS), 2021. (Two first co-authors)\n\n[42] Delon, J., Gozlan, N., and Saint-Dizier, A. [Generalized Wasserstein barycenters between probability measures living on different subspaces](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.09755). arXiv preprint arXiv:2105.09755, 2021.\n\n[43] Álvarez-Esteban, Pedro C., et al. [A fixed-point approach to barycenters in Wasserstein space.](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.05355.pdf) Journal of Mathematical Analysis and Applications 441.2 (2016): 744-762.\n\n[44] Delon, Julie, Julien Salomon, and Andrei Sobolevski. [Fast transport optimization for Monge costs on the circle.](https:\u002F\u002Farxiv.org\u002Fabs\u002F0902.3527) SIAM Journal on Applied Mathematics 70.7 (2010): 2239-2258.\n\n[45] Hundrieser, Shayan, Marcel Klatt, and Axel Munk. [The statistics of circular optimal transport.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.15426) Directional Statistics for Innovative Applications: A Bicentennial Tribute to Florence Nightingale. Singapore: Springer Nature Singapore, 2022. 57-82.\n\n[46] Bonet, C., Berg, P., Courty, N., Septier, F., Drumetz, L., & Pham, M. T. (2023). [Spherical Sliced-Wasserstein](https:\u002F\u002Fopenreview.net\u002Fforum?id=jXQ0ipgMdU). International Conference on Learning Representations.\n\n[47] Chowdhury, S., & Mémoli, F. (2019). [The gromov–wasserstein distance between networks and stable network invariants](https:\u002F\u002Facademic.oup.com\u002Fimaiai\u002Farticle\u002F8\u002F4\u002F757\u002F5627736). Information and Inference: A Journal of the IMA, 8(4), 757-787.\n\n[48] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty (2022). 
[Semi-relaxed Gromov-Wasserstein divergence and applications on graphs](https:\u002F\u002Fopenreview.net\u002Fpdf?id=RShaMexjc-x). International Conference on Learning Representations (ICLR), 2022.\n\n[49] Redko, I., Vayer, T., Flamary, R., and Courty, N. (2020). [CO-Optimal Transport](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fcc384c68ad503482fb24e6d1e3b512ae-Paper.pdf). Advances in Neural Information Processing Systems, 33.\n\n[50] Liu, T., Puigcerver, J., & Blondel, M. (2023). [Sparsity-constrained optimal transport](https:\u002F\u002Fopenreview.net\u002Fforum?id=yHY9NbQJ5BP). Proceedings of the Eleventh International Conference on Learning Representations (ICLR).\n\n[51] Xu, H., Luo, D., Zha, H., & Carin, L. (2019). [Gromov-wasserstein learning for graph matching and node embedding](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fxu19b.html). In International Conference on Machine Learning (ICML), 2019.\n\n[52] Collas, A., Vayer, T., Flamary, F., & Breloy, A. (2023). [Entropic Wasserstein Component Analysis](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05119). ArXiv.\n\n[53] C. Vincent-Cuaz, R. Flamary, M. Corneli, T. Vayer, N. Courty (2022). [Template based graph neural network with optimal transport distances](https:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Ffile\u002F4d3525bc60ba1adc72336c0392d3d902-Paper-Conference.pdf). Advances in Neural Information Processing Systems, 35.\n\n[54] Bécigneul, G., Ganea, O. E., Chen, B., Barzilay, R., & Jaakkola, T. S. (2020). [Optimal transport graph neural networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.04804).\n\n[55] Ronak Mehta, Jeffery Kline, Vishnu Suresh Lokhande, Glenn Fung, & Vikas Singh (2023). [Efficient Discrete Multi Marginal Optimal Transport Regularization](https:\u002F\u002Fopenreview.net\u002Fforum?id=R98ZfMt-jE). In The Eleventh International Conference on Learning Representations (ICLR).\n\n[56] Jeffery Kline. 
[Properties of the d-dimensional earth mover’s problem](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0166218X19301441). Discrete Applied Mathematics, 265: 128–141, 2019.\n\n[57] Delon, J., Desolneux, A., & Salmona, A. (2022). [Gromov–Wasserstein\ndistances between Gaussian distributions](https:\u002F\u002Fhal.science\u002Fhal-03197398v2\u002Ffile\u002Fmain.pdf). Journal of Applied Probability, 59(4),\n1178-1198.\n\n[58] Paty F-P., d’Aspremont A., & Cuturi M. (2020). [Regularity as regularization: Smooth and strongly convex Brenier potentials in optimal transport.](http:\u002F\u002Fproceedings.mlr.press\u002Fv108\u002Fpaty20a\u002Fpaty20a.pdf) In International Conference on Artificial Intelligence and Statistics, pages 1222–1232. PMLR, 2020.\n\n[59] Taylor A. B. (2017). [Convex interpolation and performance estimation of first-order methods for convex optimization.](https:\u002F\u002Fdial.uclouvain.be\u002Fpr\u002Fboreal\u002Fobject\u002Fboreal%3A182881\u002Fdatastream\u002FPDF_01\u002Fview) PhD thesis, Catholic University of Louvain, Louvain-la-Neuve, Belgium, 2017.\n\n[60] Feydy, J., Roussillon, P., Trouvé, A., & Gori, P. (2019). [Fast and scalable optimal transport for brain tractograms](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.02010.pdf). In Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part III 22 (pp. 636-644). Springer International Publishing.\n\n[61] Charlier, B., Feydy, J., Glaunes, J. A., Collin, F. D., & Durif, G. (2021). [Kernel operations on the gpu, with autodiff, without memory overflows](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume22\u002F20-275\u002F20-275.pdf). The Journal of Machine Learning Research, 22(1), 3457-3462.\n\n[62] H. Van Assel, C. Vincent-Cuaz, T. Vayer, R. Flamary, N. Courty (2023). 
[Interpolating between Clustering and Dimensionality Reduction with Gromov-Wasserstein](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.03398.pdf). NeurIPS 2023 Workshop Optimal Transport and Machine Learning.\n\n[63] Li, J., Tang, J., Kong, L., Liu, H., Li, J., So, A. M. C., & Blanchet, J. (2022). [A Convergent Single-Loop Algorithm for Relaxation of Gromov-Wasserstein in Graph Data](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0jxPyVWmiiF). In The Eleventh International Conference on Learning Representations.\n\n[64] Ma, X., Chu, X., Wang, Y., Lin, Y., Zhao, J., Ma, L., & Zhu, W. (2023). [Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications](https:\u002F\u002Fopenreview.net\u002Fpdf?id=uqkUguNu40). In Thirty-seventh Conference on Neural Information Processing Systems.\n\n[65] Scetbon, M., Cuturi, M., & Peyré, G. (2021). [Low-Rank Sinkhorn Factorization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.04737.pdf).\n\n[66] Pooladian, Aram-Alexandre, and Jonathan Niles-Weed. [Entropic estimation of optimal transport maps](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.12004.pdf). arXiv preprint arXiv:2109.12004 (2021).\n\n[67] Scetbon, M., Peyré, G. & Cuturi, M. (2022). [Linear-Time Gromov-Wasserstein Distances using Low Rank Couplings and Costs](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fscetbon22b\u002Fscetbon22b.pdf). In International Conference on Machine Learning (ICML), 2022.\n\n[68] Chowdhury, S., Miller, D., & Needham, T. (2021). [Quantized gromov-wasserstein](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-86523-8_49). ECML PKDD 2021. Springer International Publishing.\n\n[69] Delon, J., & Desolneux, A. (2020). [A Wasserstein-type distance in the space of Gaussian mixture models](https:\u002F\u002Fepubs.siam.org\u002Fdoi\u002Fabs\u002F10.1137\u002F19M1301047). SIAM Journal on Imaging Sciences, 13(2), 936-970.\n\n[70] A. Thual, H. Tran, T. Zemskova, N. Courty, R. Flamary, S. Dehaene\n& B. Thirion (2022). 
[Aligning individual brains with Fused Unbalanced Gromov-Wasserstein.](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Ffile\u002F8906cac4ca58dcaf17e97a0486ad57ca-Paper-Conference.pdf). Neural Information Processing Systems (NeurIPS).\n\n[71] H. Tran, H. Janati, N. Courty, R. Flamary, I. Redko, P. Demetci & R. Singh (2023). [Unbalanced Co-Optimal Transport](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1609\u002Faaai.v37i8.26193). AAAI Conference on\nArtificial Intelligence.\n\n[72] Thibault Séjourné, François-Xavier Vialard, and Gabriel Peyré (2021). [The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F4990974d150d0de5e6e15a1454fe6b0f-Paper.pdf). Neural Information Processing Systems (NeurIPS).\n\n[73] Séjourné, T., Vialard, F. X., & Peyré, G. (2022). [Faster Unbalanced Optimal Transport: Translation Invariant Sinkhorn and 1-D Frank-Wolfe](https:\u002F\u002Fproceedings.mlr.press\u002Fv151\u002Fsejourne22a.html). In International Conference on Artificial Intelligence and Statistics (pp. 4995-5021). PMLR.\n\n[74] Chewi, S., Maunu, T., Rigollet, P., & Stromme, A. J. (2020). [Gradient descent algorithms for Bures-Wasserstein barycenters](https:\u002F\u002Fproceedings.mlr.press\u002Fv125\u002Fchewi20a.html). In Conference on Learning Theory (pp. 1276-1304). PMLR.\n\n[75] Altschuler, J., Chewi, S., Gerber, P. R., & Stromme, A. (2021). [Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent](https:\u002F\u002Fpapers.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2021\u002Fhash\u002Fb9acb4ae6121c941324b2b1d3fac5c30-Abstract.html). Advances in Neural Information Processing Systems, 34, 22132-22145.\n\n[76] Chapel, L., Tavenard, R. (2025). 
[One for all and all for one: Efficient computation of partial Wasserstein distances on the line](https:\u002F\u002Ficlr.cc\u002Fvirtual\u002F2025\u002Fposter\u002F28547). In International Conference on Learning Representations.\n\n[77] Tanguy, Eloi and Delon, Julie and Gozlan, Nathaël (2024). [Computing Barycentres of Measures for Generic Transport Costs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04016). arXiv preprint 2501.04016 (2024)\n\n[78] Martin, R. D., Medri, I., Bai, Y., Liu, X., Yan, K., Rohde, G. K., & Kolouri, S. (2024). [LCOT: Linear Circular Optimal Transport](https:\u002F\u002Fopenreview.net\u002Fforum?id=49z97Y9lMq). International Conference on Learning Representations.\n\n[79] Liu, X., Bai, Y., Martín, R. D., Shi, K., Shahbazi, A., Landman, B. A., Chang, C., & Kolouri, S. (2025). [Linear Spherical Sliced Optimal Transport: A Fast Metric for Comparing Spherical Data](https:\u002F\u002Fopenreview.net\u002Fforum?id=fgUFZAxywx). International Conference on Learning Representations.\n\n[80] Altschuler, J., Bach, F., Rudi, A., Niles-Weed, J., [Massively scalable Sinkhorn distances via the Nyström method](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2019\u002Ffile\u002Ff55cadb97eaff2ba1980e001b0bd9842-Paper.pdf), Advances in Neural Information Processing Systems, 2019.\n\n[81] Xu, H., Luo, D., & Carin, L. (2019). [Scalable Gromov-Wasserstein learning for graph partitioning and matching](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002F6e62a992c676f611616097dbea8ea030-Abstract.html). Neural Information Processing Systems (NeurIPS).\n\n[82] Bonet, C., Nadjahi, K., Séjourné, T., Fatras, K., & Courty, N. (2024). [Slicing Unbalanced Optimal Transport](https:\u002F\u002Fopenreview.net\u002Fforum?id=AjJTg5M0r8). 
Transactions on Machine Learning Research.\n","# POT: Python 最优传输（Optimal Transport）\n\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FPOT.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002FPOT)\n[![Anaconda Cloud](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot\u002Fbadges\u002Fversion.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot)\n[![Build Status](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Factions\u002Fworkflows\u002Fbuild_tests.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Factions)\n[![Codecov Status](https:\u002F\u002Fcodecov.io\u002Fgh\u002FPythonOT\u002FPOT\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FPythonOT\u002FPOT)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_d50896e0bc3d.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpot)\n[![Anaconda downloads](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot\u002Fbadges\u002Fdownloads.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot)\n[![License](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fpot\u002Fbadges\u002Flicense.svg)](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fblob\u002Fmaster\u002FLICENSE)\n\n这个开源 Python 库提供了多种求解器，用于解决与最优传输（Optimal Transport, OT）相关的优化问题，适用于信号处理、图像处理和机器学习等领域。\n\n网站与文档：[https:\u002F\u002FPythonOT.github.io\u002F](https:\u002F\u002FPythonOT.github.io\u002F)\n\n源代码（MIT 许可证）：\n[https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT)\n\nPOT 具有以下主要功能：\n* 一系列可微分的最优传输问题求解器，包括：\n  * 精确线性 OT、熵正则化 OT 和二次正则化 OT，\n  * Gromov-Wasserstein（GW）距离、融合 GW（Fused GW）距离以及各类二次 OT 变体，\n  * 针对不同散度（divergence）的非平衡（unbalanced）和部分（partial）OT，\n* 固定支撑集和自由支撑集下的 OT 质心（Wasserstein 和 GW），\n* 在一维空间、圆环（circle）上以及高斯混合模型（Gaussian Mixture Models, GMMs）之间的快速 OT 求解器，\n* 多种与机器学习相关的求解器，例如域自适应（domain adaptation）、最优传输映射估计、子空间学习、图神经网络（Graph Neural Networks, 
GNNs）层等。\n* 支持多种后端，可轻松与 PyTorch、JAX、TensorFlow、NumPy 和 CuPy 数组配合使用。\n\n### 已实现的功能\n\nPOT 提供了以下通用 OT 求解器：\n\n* 用于线性规划\u002FEarth Mover 距离（EMD）[1] 的 [OT Network Simplex 求解器](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_OT_1D.html)。\n* 用于正则化最优传输（Regularized OT）[7] 的 [条件梯度法（Conditional gradient）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_optim_OTreg.html) [6] 和 [广义条件梯度法（Generalized conditional gradient）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_optim_OTreg.html)。\n* 基于 [Sinkhorn-Knopp 算法](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fplot_OT_1D.html) [2] 的熵正则化最优传输（Entropic regularization OT）求解器，包含稳定化版本 [9][10][34]、来自 geomloss 的懒惰 CPU\u002FGPU 求解器 [60][61]、贪心 Sinkhorn [22] 以及 Screening Sinkhorn [26]。\n* 用于 [Wasserstein 质心（Wasserstein barycenter）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_barycenter_lp_vs_entropic.html) [3]、[卷积质心（convolutional barycenter）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_convolutional_barycenter.html) [21] 和解混（unmixing）[4] 的 Bregman 投影方法。\n* Sinkhorn 散度（Sinkhorn divergence）[23] 以及基于经验数据的熵正则化最优传输。\n* 去偏 Sinkhorn 质心（Debiased Sinkhorn barycenters），即 [Sinkhorn 散度质心](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_debiased_barycenter.html) [37]。\n* 针对 KL 和平方 L2 正则化的平滑最优传输求解器（对偶和半对偶形式）[17]。\n* 经验分布之间的弱最优传输（Weak OT）求解器 [39]。\n* 使用线性规划（LP）求解器的非正则化 [Wasserstein 质心](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_barycenter_lp_vs_entropic.html) [16]（仅适用于小规模问题）。\n* [Gromov-Wasserstein 距离](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov.html) 与 [GW 质心](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov_barycenter.html)（精确解 [13] 与正则化解 [12,51]），并可通过图字典学习（Graph Dictionary Learning）[38] 提供的梯度实现可微分。\n* [Fused-Gromov-Wasserstein 
距离求解器](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_fgw.html#sphx-glr-auto-examples-plot-fgw-py) 与 [FGW 质心](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_barycenter_fgw.html)（精确解 [24] 与正则化解 [12,51]）。\n* 大规模最优传输的 [随机求解器（Stochastic solver）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_stochastic.html) 与 [可微损失函数（differentiable losses）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbackends\u002Fplot_stoch_continuous_ot_pytorch.html)（针对半对偶问题 [18] 与对偶问题 [19]）。\n* 适用于大规模问题且支持任意损失函数的 [Gromov-Wasserstein 抽样求解器](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov.html) [33]。\n* 非正则化的 [自由支撑 Wasserstein 质心（free support Wasserstein barycenters）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_free_support_barycenter.html) [20]。\n* [一维不平衡最优传输（One dimensional Unbalanced OT）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_UOT_1D.html)，采用 KL 松弛 [73]，以及对应的 [质心](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_UOT_barycenter_1D.html) [10, 25]。此外还包括 [精确不平衡最优传输](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_unbalanced_ot.html)（含 KL 与二次正则化）以及 [UOT 正则化路径（regularization path of UOT）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_regpath.html) [41]。\n* [部分 Wasserstein 与 Gromov-Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_partial_wass_and_gromov.html) 以及 [部分 Fused Gromov-Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_partial_fgw.html)（精确形式 [29] 与熵形式 [3]）。\n* [切片 Wasserstein（Sliced Wasserstein）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fsliced-wasserstein\u002Fplot_variance.html) [31, 32] 与最大切片 Wasserstein（Max-sliced 
Wasserstein）[35]，可用于梯度流（gradient flows）[36]。\n* [切片不平衡最优传输（Sliced Unbalanced OT）与不平衡切片最优传输（Unbalanced Sliced OT）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_UOT.html) [82]。\n* [圆环上的 Wasserstein 距离（Wasserstein distance on the circle）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fsliced-wasserstein\u002Fplot_compute_wasserstein_circle.html) [44, 45] 与 [球面切片 Wasserstein（Spherical Sliced Wasserstein）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fsliced-wasserstein\u002Fplot_variance_ssw.html) [46]。\n* [图字典学习求解器（Graph Dictionary Learning solvers）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov_wasserstein_dictionary_learning.html) [38]。\n* [半松弛（Fused）Gromov-Wasserstein 散度](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_semirelaxed_fgw.html) 及其对应的 [质心求解器](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_semirelaxed_gromov_wasserstein_barycenter.html)（精确与正则化形式 [48]）。\n* [量化（Fused）Gromov-Wasserstein 距离](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_quantized_gromov_wasserstein.html) [68]。\n* [高效离散多边缘最优传输正则化（Efficient Discrete Multi Marginal Optimal Transport Regularization）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_demd_gradient_minimize.html) [55]。\n* 支持多种后端（[Pytorch](https:\u002F\u002Fpytorch.org\u002F) \u002F [jax](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fjax) \u002F [Numpy](https:\u002F\u002Fnumpy.org\u002F) \u002F [Cupy](https:\u002F\u002Fcupy.dev\u002F) \u002F [Tensorflow](https:\u002F\u002Fwww.tensorflow.org\u002F) 数组）的 [多种后端接口](https:\u002F\u002Fpythonot.github.io\u002Fquickstart.html#solving-ot-with-multiple-backends)，便于使用 POT。\n* [光滑强凸最近 Brenier 
Potentials）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_SSNB.html#sphx-glr-auto-examples-others-plot-ssnb-py) [58]，并扩展至使用 [59] 对势函数进行约束。\n* [高斯混合模型最优传输（Gaussian Mixture Model OT）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgaussian_gmm\u002Fplot_GMMOT_plan.html#sphx-glr-auto-examples-others-plot-gmmot-plan-py) [69]。\n* [协同最优传输（Co-Optimal Transport）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_COOT.html) [49] 与 [不平衡协同最优传输（unbalanced Co-Optimal Transport）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_learning_weights_with_COOT.html) [71]。\n* 融合不平衡 Gromov-Wasserstein（Fused unbalanced Gromov-Wasserstein）[70]。\n* [通用代价函数下的最优传输质心（Optimal Transport Barycenters for Generic Costs）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_free_support_barycenter_generic_cost.html) [77]。\n* [高斯混合模型之间的质心（Barycenters between Gaussian Mixture Models）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbarycenters\u002Fplot_gmm_barycenter.html) [69, 77]。\n\nPOT 提供以下与机器学习相关的求解器：\n\n* [带域自适应（domain adaptation）的最优传输（Optimal transport）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_classes.html)，包含 [组套索正则化（group lasso regularization）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_classes.html)、[拉普拉斯正则化（Laplacian regularization）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_laplacian.html) [5] [30] 以及 [半监督设置（semi supervised setting）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_semi_supervised.html)。\n* [线性 OT 映射（Linear OT mapping）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_linear_mapping.html) [14] 和 [联合 OT 映射估计（Joint OT mapping 
estimation）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_mapping.html) [8]。\n* [Wasserstein 判别分析（Wasserstein Discriminant Analysis）](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_WDA.html) [11]（需要 autograd + pymanopt）。\n* [用于带目标偏移（target shift）的多源域自适应的 JCPOT 算法](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_jcpot.html) [27]。\n* [图神经网络 OT 层 TFGW](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gnn_TFGW.html) [53] 和 TW (OT-GNN) [54]。\n\n更多示例请参见 [文档](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Findex.html)。\n\n#### 使用与引用该工具箱\n\n如果您在研究中使用了本工具箱并觉得它有用，请引用 POT，同时引用以下当前版本的参考文献和我们的 [JMLR 论文](https:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv22\u002F20-451.html)：\n\n    Flamary R., Vincent-Cuaz C., Courty N., Gramfort A., Kachaiev O., Quang Tran H., David L., Bonet C., Cassereau N., Gnassounou T., Tanguy E., Delon J., Collas A., Mazelet S., Chapel L., Kerdoncuff T., Yu X., Feickert M., Krzakala P., Liu T., Fernandes Montesuma E. POT Python Optimal Transport (version 0.9.5). URL: https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\n\n    Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, Titouan Vayer, POT Python Optimal Transport library, Journal of Machine Learning Research, 22(78):1−8, 2021. 
URL: https:\u002F\u002Fpythonot.github.io\u002F\n\nBibtex 格式如下：\n\n```bibtex\n@misc{flamary2024pot,\n  author = {Flamary, R{\\'e}mi and Vincent-Cuaz, C{\\'e}dric and Courty, Nicolas and Gramfort, Alexandre and Kachaiev, Oleksii and Quang Tran, Huy and David, Laurène and Bonet, Cl{\\'e}ment and Cassereau, Nathan and Gnassounou, Th{\\'e}o and Tanguy, Eloi and Delon, Julie and Collas, Antoine and Mazelet, Sonia and Chapel, Laetitia and Kerdoncuff, Tanguy and Yu, Xizheng and Feickert, Matthew and Krzakala, Paul and Liu, Tianlin and Fernandes Montesuma, Eduardo},\n  title = {POT Python Optimal Transport (version 0.9.5)},\n  url = {https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT},\n  year = {2024}\n}\n\n@article{flamary2021pot,\n  author  = {R{\\'e}mi Flamary and Nicolas Courty and Alexandre Gramfort and Mokhtar Z. Alaya and Aur{\\'e}lie Boisbunon and Stanislas Chambon and Laetitia Chapel and Adrien Corenflos and Kilian Fatras and Nemo Fournier and L{\\'e}o Gautheron and Nathalie T.H. Gayraud and Hicham Janati and Alain Rakotomamonjy and Ievgen Redko and Antoine Rolet and Antony Schutz and Vivien Seguy and Danica J. 
Sutherland and Romain Tavenard and Alexander Tong and Titouan Vayer},\n  title   = {POT: Python Optimal Transport},\n  journal = {Journal of Machine Learning Research},\n  year    = {2021},\n  volume  = {22},\n  number  = {78},\n  pages   = {1-8},\n  url     = {http:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv22\u002F20-451.html}\n}\n```\n\n\n\n## 安装\n\n该库已在 Linux、MacOSX 和 Windows 上测试通过。安装时需要 C++ 编译器以构建\u002F安装 EMD 求解器，并依赖以下 Python 模块：\n\n- Numpy (>=1.16)\n- Scipy (>=1.0)\n- Cython (>=0.23)（仅构建时需要，通过 pip 或 conda 安装时无需）\n\n#### Pip 安装\n\n您可以通过 PyPI 安装该工具箱：\n\n```console\npip install POT\n```\n\n或者获取最新开发版：\n\n```console\npip install -U https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Farchive\u002Fmaster.zip # 添加 --user 参数可进行用户级安装（无需 root 权限）\n```\n\n可选依赖项可通过以下命令安装：\n```console\npip install POT[all]\n```\n注意：这会安装 `cvxopt`，其采用 GPL 3.0 许可证。如果您不能使用 GPL 许可的软件，可以单独或按子模块安装特定的可选依赖项。可用的可选安装选项包括 `backend-jax, backend-tf, backend-torch, cvxopt, dr, gnn, all`。\n\n#### 通过 conda-forge 使用 Anaconda 安装\n\n如果您使用 Anaconda Python 发行版，POT 已在 [conda-forge](https:\u002F\u002Fconda-forge.org) 中提供。安装命令如下：\n\n```console\nconda install -c conda-forge pot\n```\n\n#### 安装后检查\n正确安装后，应能无错误地导入模块：\n\n```python\nimport ot\n```\n\n注意：为方便使用，模块名为 `ot` 而非 `pot`。\n\n\n### 依赖项\n\n某些子模块需要额外的依赖项，具体如下：\n\n* **ot.dr**（Wasserstein 降维）依赖 autograd 和 pymanopt，可通过以下命令安装：\n\n```shell\npip install pymanopt autograd\n```\n\n\n## 示例\n\n### 简短示例\n\n* 导入工具箱\n\n```python\nimport ot\n```\n\n* 计算 Wasserstein 距离\n\n```python\n# a,b 是一维直方图（元素非负且和为 1）\n# M 是基础代价矩阵（ground cost matrix）\n\n# 使用统一 API：\nWd = ot.solve(M, a, b).value # 精确线性规划\nWd_reg = ot.solve(M, a, b, reg=reg).value # 熵正则化 OT\n\n# 使用旧版 API：\nWd = ot.emd2(a, b, M) # 精确线性规划\nWd_reg = ot.sinkhorn2(a, b, M, reg) # 熵正则化 OT\n# 若 b 为矩阵，则计算 a 到所有 b 的距离并返回向量\n```\n\n* 计算 OT 传输矩阵\n\n```python\n# a,b 是一维直方图（元素非负且和为 1）\n# M 是基础代价矩阵\n\n# 使用统一 API：\nT = ot.solve(M, a, b).plan # 精确线性规划\nT_reg = ot.solve(M, a, b, reg=reg).plan # 熵正则化 OT\n\n# 使用旧版 API：\nT = ot.emd(a, b, M) # 
精确线性规划\nT_reg = ot.sinkhorn(a, b, M, reg) # 熵正则化 OT\n```\n\n* 对经验分布计算 OT\n\n```python\n# X 和 Y 是形状为 (n_samples, n_features) 的二维数组\n\n# 使用平方欧氏距离度量\nT = ot.solve_sample(X, Y).plan # 精确线性规划\nT_reg = ot.solve_sample(X, Y, reg=reg).plan # 熵正则化 OT\n\nWass_2 = ot.solve_sample(X, Y).value # 平方 Wasserstein_2 距离\nWass_1 = ot.solve_sample(X, Y, metric='euclidean').value # Wasserstein 1 距离\n```\n\n* 计算 Wasserstein 重心（barycenter）\n\n```python\n# A 是一个 n*d 矩阵，包含 d 个一维直方图\n# M 是基础代价矩阵\nba = ot.barycenter(A, M, reg) # reg 为正则化参数\n```\n\n### 示例与 Notebook\n\n`examples` 文件夹中包含多个该库的示例和使用案例。完整的文档（含示例及输出）请参见 [https:\u002F\u002FPythonOT.github.io\u002F](https:\u002F\u002FPythonOT.github.io\u002F)。\n\n\n## 致谢\n\n本工具箱由 [Rémi Flamary](https:\u002F\u002Fremi.flamary.com\u002F) 和 [Nicolas Courty](http:\u002F\u002Fpeople.irisa.fr\u002FNicolas.Courty\u002F) 创建。\n\n目前由以下人员维护：\n\n* [Rémi Flamary](https:\u002F\u002Fremi.flamary.com\u002F)\n* [Cédric Vincent-Cuaz](https:\u002F\u002Fcedricvincentcuaz.github.io\u002F)\n\nPOT 库的所有贡献者列表请见 [此处](CONTRIBUTORS.md)。\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_a4d76650c9b2.png\" \u002F>\n\u003C\u002Fa>\n\nPOT 的开发得到了以下合作机构的资金或人力支持：\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_35d9efdb0ba7.jpg\" alt=\"ANR\" style=\"height:60px;\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_b9c0d7ba821a.jpg\" alt=\"CNRS\" style=\"height:60px;\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_b89f51778527.jpg\" alt=\"3IA\" style=\"height:60px;\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_readme_d01bf283303f.png\" alt=\"Hi!PARIS\" style=\"height:60px;\"\u002F>\n\n## 贡献与行为准则\n\n我们欢迎任何形式的贡献，并请遵守 
[贡献指南](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fcontributing.html)。项目所有成员均应遵循 [行为准则](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fcode_of_conduct.html)。\n\n## 支持\n\n您可以通过以下渠道提问或参与开发讨论：\n\n* POT [Slack 频道](https:\u002F\u002Fpot-toolbox.slack.com)\n* POT [Gitter 频道](https:\u002F\u002Fgitter.im\u002FPythonOT\u002Fcommunity)\n* POT [邮件列表](https:\u002F\u002Fmail.python.org\u002Fmm3\u002Fmailman3\u002Flists\u002Fpot.python.org\u002F)\n\n您也可以在 GitHub Issues 中提交 bug 报告或功能请求。提交前请务必先阅读我们的 [贡献指南](.github\u002FCONTRIBUTING.md)。\n\n## 参考文献\n\n[1] Bonneel, N., Van De Panne, M., Paris, S., & Heidrich, W. (2011, December). [Displacement interpolation using Lagrangian mass transport](https:\u002F\u002Fpeople.csail.mit.edu\u002Fsparis\u002Fpubli\u002F2011\u002Fsigasia\u002FBonneel_11_Displacement_Interpolation.pdf). In ACM Transactions on Graphics (TOG) (Vol. 30, No. 6, p. 158). ACM.\n\n[2] Cuturi, M. (2013). [Sinkhorn distances: Lightspeed computation of optimal transport](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1306.0895.pdf). In Advances in Neural Information Processing Systems (pp. 2292-2300).\n\n[3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. (2015). [Iterative Bregman projections for regularized transportation problems](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1412.5154.pdf). SIAM Journal on Scientific Computing, 37(2), A1111-A1138.\n\n[4] S. Nakhostin, N. Courty, R. Flamary, D. Tuia, T. Corpetti, [Supervised planetary unmixing with optimal transport](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-01377236\u002Fdocument), Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS), 2016.\n\n[5] N. Courty; R. Flamary; D. Tuia; A. Rakotomamonjy, [Optimal Transport for Domain Adaptation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1507.00504.pdf), in IEEE Transactions on Pattern Analysis and Machine Intelligence , vol.PP, no.99, pp.1-1\n\n[6] Ferradans, S., Papadakis, N., Peyré, G., & Aujol, J. F. 
(2014). [Regularized discrete optimal transport](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1307.5551.pdf). SIAM Journal on Imaging Sciences, 7(3), 1853-1882.\n\n[7] Rakotomamonjy, A., Flamary, R., & Courty, N. (2015). [Generalized conditional gradient: analysis of convergence and applications](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1510.06567.pdf). arXiv preprint arXiv:1510.06567.\n\n[8] M. Perrot, N. Courty, R. Flamary, A. Habrard (2016), [Mapping estimation for discrete optimal transport](http:\u002F\u002Fremi.flamary.com\u002Fbiblio\u002Fperrot2016mapping.pdf), Neural Information Processing Systems (NIPS).\n\n[9] Schmitzer, B. (2016). [Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1610.06519.pdf). arXiv preprint arXiv:1610.06519.\n\n[10] Chizat, L., Peyré, G., Schmitzer, B., & Vialard, F. X. (2016). [Scaling algorithms for unbalanced transport problems](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1607.05816.pdf). arXiv preprint arXiv:1607.05816.\n\n[11] Flamary, R., Cuturi, M., Courty, N., & Rakotomamonjy, A. (2016). [Wasserstein Discriminant Analysis](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1608.08063.pdf). arXiv preprint arXiv:1608.08063.\n\n[12] Gabriel Peyré, Marco Cuturi, and Justin Solomon (2016), [Gromov-Wasserstein averaging of kernel and distance matrices](http:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Fpeyre16.html)  International Conference on Machine Learning (ICML).\n\n[13] Mémoli, Facundo (2011). [Gromov–Wasserstein distances and the metric approach to object matching](https:\u002F\u002Fmedia.adelaide.edu.au\u002Facvt\u002FPublications\u002F2011\u002F2011-Gromov%E2%80%93Wasserstein%20Distances%20and%20the%20Metric%20Approach%20to%20Object%20Matching.pdf). Foundations of computational mathematics 11.4 : 417-487.\n\n[14] Knott, M. and Smith, C. S. 
(1984). [On the optimal mapping of distributions](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002FBF00934745), Journal of Optimization Theory and Applications, Vol. 43.\n\n[15] Peyré, G., & Cuturi, M. (2018). [Computational Optimal Transport](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.00567.pdf).\n\n[16] Agueh, M., & Carlier, G. (2011). [Barycenters in the Wasserstein space](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-00637399\u002Fdocument). SIAM Journal on Mathematical Analysis, 43(2), 904-924.\n\n[17] Blondel, M., Seguy, V., & Rolet, A. (2018). [Smooth and Sparse Optimal Transport](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.06276). Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (AISTATS).\n\n[18] Genevay, A., Cuturi, M., Peyré, G. & Bach, F. (2016). [Stochastic Optimization for Large-scale Optimal Transport](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.08527). Advances in Neural Information Processing Systems (2016).\n\n[19] Seguy, V., Bhushan Damodaran, B., Flamary, R., Courty, N., Rolet, A. & Blondel, M. [Large-scale Optimal Transport and Mapping Estimation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.02283.pdf). International Conference on Learning Representations (2018).\n\n[20] Cuturi, M. and Doucet, A. (2014). [Fast Computation of Wasserstein Barycenters](http:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fcuturi14.html). International Conference on Machine Learning.\n\n[21] Solomon, J., De Goes, F., Peyré, G., Cuturi, M., Butscher, A., Nguyen, A. & Guibas, L. (2015). [Convolutional wasserstein distances: Efficient optimal transportation on geometric domains](https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2766963). ACM Transactions on Graphics (TOG), 34(4), 66.\n\n[22] J. Altschuler, J. Weed, P. 
Rigollet (2017). [Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6792-near-linear-time-approximation-algorithms-for-optimal-transport-via-sinkhorn-iteration.pdf), Advances in Neural Information Processing Systems (NIPS) 31.\n\n[23] Genevay, A., Peyré, G., Cuturi, M., [使用 Sinkhorn 散度学习生成模型（Learning Generative Models with Sinkhorn Divergences）](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00292)，第二十一届人工智能与统计国际会议（AISTATS）21，2018。\n\n[24] Vayer, T., Chapel, L., Flamary, R., Tavenard, R. 和 Courty, N. (2019). [面向结构化数据的最优传输及其在图上的应用（Optimal Transport for structured data with application on graphs）](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftitouan19a.html)，第三十六届国际机器学习会议（ICML）论文集。\n\n[25] Frogner C., Zhang C., Mobahi H., Araya-Polo M., Poggio T. (2015). [使用 Wasserstein 损失进行学习（Learning with a Wasserstein Loss）](http:\u002F\u002Fcbcl.mit.edu\u002Fwasserstein\u002F)，神经信息处理系统进展（NIPS）。\n\n[26] Alaya M. Z., Bérar M., Gasso G., Rakotomamonjy A. (2019). [正则化最优传输的筛选 Sinkhorn 算法（Screening Sinkhorn Algorithm for Regularized Optimal Transport）](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9386-screening-sinkhorn-algorithm-for-regularized-optimal-transport)，神经信息处理系统进展 33（NeurIPS）。\n\n[27] Redko I., Courty N., Flamary R., Tuia D. (2019). [目标偏移下多源域自适应的最优传输方法（Optimal Transport for Multi-source Domain Adaptation under Target Shift）](http:\u002F\u002Fproceedings.mlr.press\u002Fv89\u002Fredko19a.html)，第二十二届人工智能与统计国际会议（AISTATS）22，2019。\n\n[28] Caffarelli, L. A., McCann, R. J. (2010). [最优传输中的自由边界与 Monge-Ampère 障碍问题（Free boundaries in optimal transport and Monge-Ampere obstacle problems）](http:\u002F\u002Fwww.math.toronto.edu\u002F~mccann\u002Fpapers\u002Fannals2010.pdf)，《数学年刊》（Annals of mathematics），673–730。\n\n[29] Chapel, L., Alaya, M., Gasso, G. (2020). 
[部分最优传输及其在正样本-未标记学习中的应用（Partial Optimal Transport with Applications on Positive-Unlabeled Learning）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08276)，神经信息处理系统进展（NeurIPS），2020。\n\n[30] Flamary R., Courty N., Tuia D., Rakotomamonjy A. (2014). [带拉普拉斯正则化的最优传输：在域自适应和形状匹配中的应用（Optimal transport with Laplacian regularization: Applications to domain adaptation and shape matching）](https:\u002F\u002Fremi.flamary.com\u002Fbiblio\u002Fflamary2014optlaplace.pdf)，NIPS 最优传输与机器学习研讨会（OTML），2014。\n\n[31] Bonneel, Nicolas 等人。[测度的切片与 Radon Wasserstein 质心（Sliced and radon wasserstein barycenters of measures）](https:\u002F\u002Fperso.liris.cnrs.fr\u002Fnicolas.bonneel\u002FWassersteinSliced-JMIV.pdf)，《数学成像与视觉期刊》（Journal of Mathematical Imaging and Vision）51.1 (2015): 22–45。\n\n[32] Huang, M., Ma S., Lai, L. (2021). [计算投影鲁棒 Wasserstein 距离的黎曼块坐标下降法（A Riemannian Block Coordinate Descent Method for Computing the Projection Robust Wasserstein Distance）](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fhuang21e.html)，第三十八届国际机器学习会议（ICML）论文集。\n\n[33] Kerdoncuff T., Emonet R., Sebban M. [采样 Gromov-Wasserstein（Sampled Gromov Wasserstein）](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-03232509\u002Fdocument)，《机器学习期刊》（Machine Learning Journal, MLJ），2021。\n\n[34] Feydy, J., Séjourné, T., Vialard, F. X., Amari, S. I., Trouvé, A., & Peyré, G. (2019 年 4 月). [利用 Sinkhorn 散度在最优传输与 MMD 之间插值（Interpolating between optimal transport and MMD using Sinkhorn divergences）](http:\u002F\u002Fproceedings.mlr.press\u002Fv89\u002Ffeydy19a\u002Ffeydy19a.pdf)。第二十二届人工智能与统计国际会议（AISTATS）（第 2681–2690 页）。PMLR。\n\n[35] Deshpande, I., Hu, Y. T., Sun, R., Pyrros, A., Siddiqui, N., Koyejo, S., ... & Schwing, A. G. (2019). 
[最大切片 Wasserstein 距离及其在 GAN 中的应用（Max-sliced wasserstein distance and its use for gans）](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FDeshpande_Max-Sliced_Wasserstein_Distance_and_Its_Use_for_GANs_CVPR_2019_paper.pdf)。IEEE\u002FCVF 计算机视觉与模式识别会议论文集（第 10648–10656 页）。\n\n[36] Liutkus, A., Simsekli, U., Majewski, S., Durmus, A., & Stöter, F. R. (2019 年 5 月). [切片-Wasserstein 流：通过最优传输与扩散实现的非参数生成建模（Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions）](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fliutkus19a\u002Fliutkus19a.pdf)。国际机器学习会议（第 4104–4113 页）。PMLR。\n\n[37] Janati, H., Cuturi, M., Gramfort, A. [去偏 Sinkhorn 质心（Debiased sinkhorn barycenters）](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fjanati20a\u002Fjanati20a.pdf)，第三十七届国际机器学习会议论文集，PMLR 119:4692–4701，2020。\n\n[38] C. Vincent-Cuaz, T. Vayer, R. Flamary, M. Corneli, N. Courty, [在线图字典学习（Online Graph Dictionary Learning）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.06555.pdf)，国际机器学习会议（ICML），2021。\n\n[39] Gozlan, N., Roberto, C., Samson, P. M., & Tetali, P. (2017). [一般传输代价的 Kantorovich 对偶理论及其应用（Kantorovich duality for general transport costs and applications）](https:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.712.1825&rep=rep1&type=pdf)。《泛函分析杂志》（Journal of Functional Analysis），273(11)，3327–3405。\n\n[40] Forrow, A., Hütter, J. C., Nitzan, M., Rigollet, P., Schiebinger, G., & Weed, J. (2019 年 4 月). [基于分解耦合的统计最优传输（Statistical optimal transport via factored couplings）](http:\u002F\u002Fproceedings.mlr.press\u002Fv89\u002Fforrow19a\u002Fforrow19a.pdf)。第二十二届人工智能与统计国际会议（第 2454–2465 页）。PMLR。\n\n[41] Chapel*, L., Flamary*, R., Wu, H., Févotte, C., Gasso, G. (2021). 
[通过非负惩罚线性回归实现的不平衡最优传输（Unbalanced Optimal Transport through Non-negative Penalized Linear Regression）](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fc3c617a9b80b3ae1ebd868b0017cc349-Paper.pdf)，神经信息处理系统进展（NeurIPS），2021。（前两位作者并列第一）\n\n[42] Delon, J., Gozlan, N., and Saint-Dizier, A. [定义在不同子空间上的概率测度之间的广义 Wasserstein 质心（Generalized Wasserstein barycenters between probability measures living on different subspaces）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.09755)。arXiv 预印本 arXiv:2105.09755，2021。\n\n[43] Álvarez-Esteban, Pedro C. 等人。[Wasserstein 空间中质心的不动点方法（A fixed-point approach to barycenters in Wasserstein space）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.05355.pdf)。《数学分析与应用杂志》（Journal of Mathematical Analysis and Applications）441.2 (2016): 744–762。\n\n[44] Delon, Julie, Julien Salomon, and Andrei Sobolevski. [圆环上 Monge 代价的快速传输优化（Fast transport optimization for Monge costs on the circle）](https:\u002F\u002Farxiv.org\u002Fabs\u002F0902.3527)。《SIAM 应用数学杂志》（SIAM Journal on Applied Mathematics）70.7 (2010): 2239–2258。\n\n[45] Hundrieser, Shayan, Marcel Klatt, and Axel Munk. [圆形最优传输的统计学（The statistics of circular optimal transport）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.15426)。《方向统计学在创新应用中的应用：纪念弗洛伦斯·南丁格尔诞辰二百周年》。新加坡：Springer Nature Singapore，2022。57–82。\n\n[46] Bonet, C., Berg, P., Courty, N., Septier, F., Drumetz, L., & Pham, M. T. (2023). [球面切片 Wasserstein（Spherical Sliced-Wasserstein）](https:\u002F\u002Fopenreview.net\u002Fforum?id=jXQ0ipgMdU)。国际学习表征会议（ICLR）。\n\n[47] Chowdhury, S., & Mémoli, F. (2019). [网络间的 Gromov–Wasserstein 距离与稳定网络不变量（The gromov–wasserstein distance between networks and stable network invariants）](https:\u002F\u002Facademic.oup.com\u002Fimaiai\u002Farticle\u002F8\u002F4\u002F757\u002F5627736)。《IMA 信息与推断期刊》（Information and Inference: A Journal of the IMA），8(4)，757–787。\n\n[48] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty (2022). 
[半松弛 Gromov-Wasserstein 散度及其在图上的应用（Semi-relaxed Gromov-Wasserstein divergence and applications on graphs）](https:\u002F\u002Fopenreview.net\u002Fpdf?id=RShaMexjc-x)。国际学习表征会议（ICLR），2022。\n\n[49] Redko, I., Vayer, T., Flamary, R., & Courty, N. (2020). [CO-Optimal Transport（协同最优传输）](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fcc384c68ad503482fb24e6d1e3b512ae-Paper.pdf). Advances in Neural Information Processing Systems, 33.\n\n[50] Liu, T., Puigcerver, J., & Blondel, M. (2023). [Sparsity-constrained optimal transport（稀疏约束最优传输）](https:\u002F\u002Fopenreview.net\u002Fforum?id=yHY9NbQJ5BP). Proceedings of the Eleventh International Conference on Learning Representations (ICLR).\n\n[51] Xu, H., Luo, D., Zha, H., & Carin, L. (2019). [Gromov-Wasserstein learning for graph matching and node embedding（用于图匹配与节点嵌入的 Gromov-Wasserstein 学习）](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fxu19b.html). In International Conference on Machine Learning (ICML), 2019.\n\n[52] Collas, A., Vayer, T., Flamary, R., & Breloy, A. (2023). [Entropic Wasserstein Component Analysis（熵正则化 Wasserstein 成分分析）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05119). ArXiv.\n\n[53] C. Vincent-Cuaz, R. Flamary, M. Corneli, T. Vayer, N. Courty (2022). [Template based graph neural network with optimal transport distances（基于模板并使用最优传输距离的图神经网络）](https:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Ffile\u002F4d3525bc60ba1adc72336c0392d3d902-Paper-Conference.pdf). Advances in Neural Information Processing Systems, 35.\n\n[54] Bécigneul, G., Ganea, O. E., Chen, B., Barzilay, R., & Jaakkola, T. S. (2020). [Optimal transport graph neural networks（最优传输图神经网络）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.04804).\n\n[55] Ronak Mehta, Jeffery Kline, Vishnu Suresh Lokhande, Glenn Fung, & Vikas Singh (2023). [Efficient Discrete Multi Marginal Optimal Transport Regularization（高效的离散多边缘最优传输正则化）](https:\u002F\u002Fopenreview.net\u002Fforum?id=R98ZfMt-jE). 
In The Eleventh International Conference on Learning Representations (ICLR).\n\n[56] Jeffery Kline. [Properties of the d-dimensional earth mover’s problem（d 维推土机问题的性质）](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0166218X19301441). Discrete Applied Mathematics, 265: 128–141, 2019.\n\n[57] Delon, J., Desolneux, A., & Salmona, A. (2022). [Gromov–Wasserstein distances between Gaussian distributions（高斯分布间的 Gromov–Wasserstein 距离）](https:\u002F\u002Fhal.science\u002Fhal-03197398v2\u002Ffile\u002Fmain.pdf). Journal of Applied Probability, 59(4), 1178-1198.\n\n[58] Paty F-P., d’Aspremont A., & Cuturi M. (2020). [Regularity as regularization: Smooth and strongly convex Brenier potentials in optimal transport（正则性作为正则化：最优传输中的光滑强凸 Brenier 势）](http:\u002F\u002Fproceedings.mlr.press\u002Fv108\u002Fpaty20a\u002Fpaty20a.pdf). In International Conference on Artificial Intelligence and Statistics, pages 1222–1232. PMLR, 2020.\n\n[59] Taylor A. B. (2017). [Convex interpolation and performance estimation of first-order methods for convex optimization（凸插值与凸优化一阶方法的性能估计）](https:\u002F\u002Fdial.uclouvain.be\u002Fpr\u002Fboreal\u002Fobject\u002Fboreal%3A182881\u002Fdatastream\u002FPDF_01\u002Fview). PhD thesis, Catholic University of Louvain, Louvain-la-Neuve, Belgium, 2017.\n\n[60] Feydy, J., Roussillon, P., Trouvé, A., & Gori, P. (2019). [Fast and scalable optimal transport for brain tractograms（用于脑纤维束图的快速可扩展最优传输）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.02010.pdf). In Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part III 22 (pp. 636-644). Springer International Publishing.\n\n[61] Charlier, B., Feydy, J., Glaunes, J. A., Collin, F. D., & Durif, G. (2021). 
[Kernel operations on the GPU, with autodiff, without memory overflows（在 GPU 上进行核运算，支持自动微分且无内存溢出）](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume22\u002F20-275\u002F20-275.pdf). The Journal of Machine Learning Research, 22(1), 3457-3462.\n\n[62] H. Van Assel, C. Vincent-Cuaz, T. Vayer, R. Flamary, N. Courty (2023). [Interpolating between Clustering and Dimensionality Reduction with Gromov-Wasserstein（利用 Gromov-Wasserstein 在聚类与降维之间插值）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.03398.pdf). NeurIPS 2023 Workshop Optimal Transport and Machine Learning.\n\n[63] Li, J., Tang, J., Kong, L., Liu, H., Li, J., So, A. M. C., & Blanchet, J. (2022). [A Convergent Single-Loop Algorithm for Relaxation of Gromov-Wasserstein in Graph Data（图数据中 Gromov-Wasserstein 松弛的收敛单循环算法）](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0jxPyVWmiiF). In The Eleventh International Conference on Learning Representations.\n\n[64] Ma, X., Chu, X., Wang, Y., Lin, Y., Zhao, J., Ma, L., & Zhu, W. (2023). [Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications（用于图级别分类的融合 Gromov-Wasserstein 图 Mixup）](https:\u002F\u002Fopenreview.net\u002Fpdf?id=uqkUguNu40). In Thirty-seventh Conference on Neural Information Processing Systems.\n\n[65] Scetbon, M., Cuturi, M., & Peyré, G. (2021). [Low-Rank Sinkhorn Factorization（低秩 Sinkhorn 分解）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.04737.pdf).\n\n[66] Pooladian, Aram-Alexandre, and Jonathan Niles-Weed. [Entropic estimation of optimal transport maps（最优传输映射的熵估计）](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.12004.pdf). arXiv preprint arXiv:2109.12004 (2021).\n\n[67] Scetbon, M., Peyré, G. & Cuturi, M. (2022). [Linear-Time Gromov-Wasserstein Distances using Low Rank Couplings and Costs（利用低秩耦合与代价的线性时间 Gromov-Wasserstein 距离）](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fscetbon22b\u002Fscetbon22b.pdf). In International Conference on Machine Learning (ICML), 2022.\n\n[68] Chowdhury, S., Miller, D., & Needham, T. (2021). 
[Quantized Gromov-Wasserstein（量化 Gromov-Wasserstein）](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-86523-8_49). ECML PKDD 2021. Springer International Publishing.\n\n[69] Delon, J., & Desolneux, A. (2020). [A Wasserstein-type distance in the space of Gaussian mixture models（高斯混合模型空间中的 Wasserstein 型距离）](https:\u002F\u002Fepubs.siam.org\u002Fdoi\u002Fabs\u002F10.1137\u002F19M1301047). SIAM Journal on Imaging Sciences, 13(2), 936-970.\n\n[70] A. Thual, H. Tran, T. Zemskova, N. Courty, R. Flamary, S. Dehaene & B. Thirion (2022). [Aligning individual brains with Fused Unbalanced Gromov-Wasserstein（使用融合非平衡 Gromov-Wasserstein 对齐个体大脑）](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Ffile\u002F8906cac4ca58dcaf17e97a0486ad57ca-Paper-Conference.pdf). Neural Information Processing Systems (NeurIPS).\n\n[71] H. Tran, H. Janati, N. Courty, R. Flamary, I. Redko, P. Demetci & R. Singh (2023). [Unbalanced Co-Optimal Transport（非平衡协同最优传输）](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1609\u002Faaai.v37i8.26193). AAAI Conference on Artificial Intelligence.\n\n[72] Thibault Séjourné, François-Xavier Vialard, and Gabriel Peyré (2021). [The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation（非平衡 Gromov-Wasserstein 距离：锥形公式与松弛）](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F4990974d150d0de5e6e15a1454fe6b0f-Paper.pdf). Neural Information Processing Systems (NeurIPS).\n\n[73] Séjourné, T., Vialard, F. X., & Peyré, G. (2022). [Faster Unbalanced Optimal Transport: Translation Invariant Sinkhorn and 1-D Frank-Wolfe（更快的非平衡最优传输：平移不变 Sinkhorn 与一维 Frank-Wolfe）](https:\u002F\u002Fproceedings.mlr.press\u002Fv151\u002Fsejourne22a.html). In International Conference on Artificial Intelligence and Statistics (pp. 4995-5021). PMLR.\n\n[74] Chewi, S., Maunu, T., Rigollet, P., & Stromme, A. J. (2020). 
[Gradient descent algorithms for Bures-Wasserstein barycenters（Bures-Wasserstein 质心的梯度下降算法）](https:\u002F\u002Fproceedings.mlr.press\u002Fv125\u002Fchewi20a.html). In Conference on Learning Theory (pp. 1276-1304). PMLR.\n\n[75] Altschuler, J., Chewi, S., Gerber, P. R., & Stromme, A. (2021). [Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent（Bures-Wasserstein 流形上的平均：梯度下降的维度无关收敛性）](https:\u002F\u002Fpapers.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2021\u002Fhash\u002Fb9acb4ae6121c941324b2b1d3fac5c30-Abstract.html). Advances in Neural Information Processing Systems, 34, 22132-22145.\n\n[76] Chapel, L., Tavenard, R. (2025). [One for all and all for one: Efficient computation of partial Wasserstein distances on the line（人人为我，我为人人：直线上偏 Wasserstein 距离的高效计算）](https:\u002F\u002Ficlr.cc\u002Fvirtual\u002F2025\u002Fposter\u002F28547). In International Conference on Learning Representations.\n\n[77] Tanguy, E., Delon, J., & Gozlan, N. (2024). [Computing Barycentres of Measures for Generic Transport Costs（通用传输代价下测度的质心计算）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04016). arXiv 预印本 arXiv:2501.04016 (2024).\n\n[78] Martin, R. D., Medri, I., Bai, Y., Liu, X., Yan, K., Rohde, G. K., & Kolouri, S. (2024). [LCOT: Linear Circular Optimal Transport（线性圆形最优传输）](https:\u002F\u002Fopenreview.net\u002Fforum?id=49z97Y9lMq). 国际学习表征会议（International Conference on Learning Representations）.\n\n[79] Liu, X., Bai, Y., Martín, R. D., Shi, K., Shahbazi, A., Landman, B. A., Chang, C., & Kolouri, S. (2025). [Linear Spherical Sliced Optimal Transport: A Fast Metric for Comparing Spherical Data（线性球面切片最优传输：一种用于比较球面数据的快速度量）](https:\u002F\u002Fopenreview.net\u002Fforum?id=fgUFZAxywx). 
国际学习表征会议（International Conference on Learning Representations）.\n\n[80] Altschuler, J., Bach, F., Rudi, A., Niles-Weed, J., [Massively scalable Sinkhorn distances via the Nyström method（通过 Nyström 方法实现大规模可扩展的 Sinkhorn 距离）](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2019\u002Ffile\u002Ff55cadb97eaff2ba1980e001b0bd9842-Paper.pdf), 神经信息处理系统进展（Advances in Neural Information Processing Systems）, 2019.\n\n[81] Xu, H., Luo, D., & Carin, L. (2019). [Scalable Gromov-Wasserstein learning for graph partitioning and matching（用于图划分与匹配的可扩展 Gromov-Wasserstein 学习）](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Fhash\u002F6e62a992c676f611616097dbea8ea030-Abstract.html). 神经信息处理系统会议（Neural Information Processing Systems, NeurIPS）.\n\n[82] Bonet, C., Nadjahi, K., Séjourné, T., Fatras, K., & Courty, N. (2024). [Slicing Unbalanced Optimal Transport（切片非平衡最优传输）](https:\u002F\u002Fopenreview.net\u002Fforum?id=AjJTg5M0r8). 机器学习研究汇刊（Transactions on Machine Learning Research）.","# POT（Python Optimal Transport）快速上手指南\n\n## 环境准备\n\n- **操作系统**：Linux、macOS 或 Windows 均可\n- **Python 版本**：建议使用 Python 3.7 及以上\n- **前置依赖**：\n  - `numpy >= 1.16`\n  - 安装时需系统具备 C++ 编译器（用于编译 EMD 求解器）\n- **可选加速**：如需 GPU 支持，可配合 `cupy`；如使用深度学习框架，支持 PyTorch、TensorFlow、JAX 等后端\n\n> 💡 国内用户建议配置 pip 镜像源（如清华源）以加速安装。\n\n## 安装步骤\n\n### 使用 pip 安装（推荐）\n\n```bash\npip install POT\n```\n\n国内用户可使用清华镜像加速：\n\n```bash\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple POT\n```\n\n### 使用 conda 安装\n\n```bash\nconda install -c conda-forge pot\n```\n\n## 基本使用\n\n以下是一个使用 Sinkhorn 算法计算正则化最优传输的最简示例：\n\n```python\nimport numpy as np\nimport ot\n\n# 生成两个离散概率分布（权重）\na = np.array([0.5, 0.5])  # 源分布\nb = np.array([0.3, 0.7])  # 目标分布\n\n# 定义代价矩阵（例如欧氏距离平方）\nM = np.array([[0., 1.], [1., 0.]])\n\n# 使用 Sinkhorn 算法求解正则化 OT\nT = ot.sinkhorn(a, b, M, reg=0.1)\n\nprint(\"传输计划 T:\\n\", T)\n```\n\n> 更多示例（如 Gromov-Wasserstein、OT 
barycenter、域自适应等）请参考官方文档：[https:\u002F\u002FPythonOT.github.io\u002F](https:\u002F\u002FPythonOT.github.io\u002F)","一家医疗影像AI公司正在开发跨医院的脑部MRI图像配准系统，需要对不同设备采集的图像分布进行对齐，以提升后续病灶检测模型的泛化能力。\n\n### 没有 POT 时\n- 团队需手动实现最优传输（Optimal Transport）算法，代码复杂且难以验证正确性，耗费大量研发时间。\n- 缺乏对Gromov-Wasserstein等结构感知距离的支持，无法有效处理图像间无明确像素对应关系的非刚性形变。\n- 自研实现仅支持NumPy，难以与PyTorch训练流程无缝集成，导致梯度无法端到端回传。\n- 处理高维图像数据时计算效率低下，缺乏Sinkhorn等正则化加速策略，训练周期长达数天。\n- 无法快速尝试不同OT变体（如Fused-GW、不平衡OT），限制了算法选型和效果调优空间。\n\n### 使用 POT 后\n- 直接调用POT内置的Fused-Gromov-Wasserstein求解器，几行代码即可完成跨域图像分布对齐，开发效率显著提升。\n- 利用POT对图结构和特征联合建模的能力，精准捕捉不同MRI设备下脑区拓扑结构的相似性。\n- 通过POT的PyTorch后端，OT损失可直接嵌入神经网络训练流程，实现端到端优化。\n- 借助Sinkhorn正则化与GPU加速（CuPy支持），大规模图像配准任务训练时间从数天缩短至数小时。\n- 快速实验多种OT方案（如熵正则化、部分OT等），最终选出最适合医学图像特性的对齐策略。\n\nPOT将复杂的最优传输理论转化为即插即用的工程组件，让团队聚焦核心业务而非底层算法实现。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPythonOT_POT_7e6a6c97.png","PythonOT","Python Optimal Transport","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FPythonOT_47dbfcbc.png","",null,"https:\u002F\u002Fgithub.com\u002FPythonOT",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",98.3,{"name":87,"color":88,"percentage":89},"Cython","#fedf5b",1.6,{"name":91,"color":92,"percentage":93},"Makefile","#427819",0.1,2779,544,"2026-04-04T13:10:46","MIT","Linux, macOSX, Windows","未说明",{"notes":101,"python":99,"dependencies":102},"需要 C++ 编译器用于构建\u002F安装 EMD 求解器；支持多种后端（PyTorch、JAX、TensorFlow、NumPy、CuPy），但这些并非默认安装依赖；可通过 PyPI 或 conda-forge 安装。",[103],"Numpy 
(>=1.0)",[13],[106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121],"optimal-transport","numerical-optimization","machine-learning","emd","ot-mapping-estimation","wasserstein-barycenter","ot-solver","python","wasserstein","wasserstein-discriminant-analysis","gromov-wasserstein","wasserstein-barycenters","sinkhorn-divergences","sinkhorn-knopp","pot","domain-adaptation",8,"2026-03-27T02:49:30.150509","2026-04-06T05:19:42.388957",[126,131,136,141,146,150],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},99,"安装 POT 时遇到 'No module named Cython' 错误怎么办？","POT 在构建时依赖 Cython 和 NumPy，必须先手动安装这两个依赖。解决方法是先运行 `pip install cython numpy`，再运行 `pip install POT`。如果使用 requirements.txt，需确保 Cython 和 NumPy 在 POT 之前安装。官方建议的 Dockerfile 示例为：\n```\nFROM ubuntu:latest\nRUN apt-get -qq update && apt-get -qq -y install python3.7 python3-pip build-essential git\nRUN pip3 install cython numpy\nRUN pip3 install POT\n```","https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fissues\u002F59",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},100,"为什么计算两个大小差异很大的分布之间的 EMD 会报错“Problem Infeasible”或“not in simplex”？","这是由于数值精度问题导致概率向量未严格满足 simplex 约束（即元素和不等于 1）。维护者已通过调整 C 代码中的 EPSILON 常量修复该问题。建议升级到最新版本的 POT。若仍遇问题，可尝试对输入权重进行归一化处理，例如使用 `a = a \u002F a.sum()` 和 `b = b \u002F b.sum()` 确保它们是有效的概率分布。","https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fissues\u002F93",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},101,"POT 的域适应（Domain Adaptation）类如何符合 scikit-learn 的接口规范？","POT 正在将域适应类重构为符合 scikit-learn 的 BaseEstimator 接口。具体包括：(1) 类名使用 CamelCase；(2) 所有参数在 `__init__` 中设置，不传入数据；(3) 数据通过 `.fit(Xs, ys, Xt, yt)` 传入；(4) `.transform()` 将源域样本映射到目标域；(5) 映射方式（如重心映射、核映射等）应在 `__init__` 中指定。这些更改旨在提升 API 一致性并支持 scikit-learn 的 `set_params()` 和 `get_params()` 方法。","https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fissues\u002F17",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},102,"POT 1.0 版本计划做哪些重大改进？","POT 1.0 的核心改进包括：(1) 统一命名规范（如避免 
ot.sinkhorn2 这类不一致名称）；(2) 清理重复代码（如 bregman 模块）；(3) 将 emd 函数移出 `__init__.py` 到专用模块；(4) 优化 Sinkhorn 等函数的行为，使其返回结果不依赖输入维度；(5) 支持显式计算传输计划以节省 GPU 内存；(6) 引入 PyTorch 后端；(7) 改进 CI\u002FCD 流程确保发布稳定性。这些变更优先于新功能开发，以提升库的长期可维护性。","https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fissues\u002F111",{"id":147,"question_zh":148,"answer_zh":149,"source_url":145},103,"如何避免在不需要传输矩阵时浪费内存（特别是在 GPU 上）？","当前某些函数（如 sinkhorn）会默认计算完整的传输计划，即使用户只需要 OT 距离。建议未来使用显式 API：先通过求解器获取对偶变量，再按需调用专门函数从对偶变量重建传输计划。这样可避免在 GPU 上存储大型稠密矩阵，节省显存。该设计已在 POT 1.0 路线图中列为优先事项。",{"id":151,"question_zh":152,"answer_zh":153,"source_url":145},104,"POT 的包名 'ot' 是否会引起命名冲突？","是的，两位字母的包名 'ot' 容易与其他库或变量名冲突。该项目已在 POT 1.0 规划中讨论此问题，并考虑采用更明确的命名策略。虽然目前仍使用 `import ot`，但未来可能通过别名或子模块结构缓解该问题。用户在编写代码时应避免使用 `ot` 作为变量名以减少冲突风险。",[155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,239,244,249],{"id":156,"version":157,"summary_zh":158,"released_at":159},99757,"0.9.6.post1","This is a bug fix release because a Cython file was missing form the previous source release (all wheels work fine)\r\n\r\n#### Closed issues\r\n- Fix missing cython file in MANIFEST.in (PR #763)","2025-09-22T12:54:07",{"id":161,"version":162,"summary_zh":163,"released_at":164},99758,"0.9.6","This new release contains several new features and bug fixes. Among the new features we have a new submodule `ot.batch` that contains batch parallel solvers for several OT problems including [Sinkhorn, Gromov-Wasserstein and Fused Gromov-Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fbackends\u002Fplot_ot_batch.html). This new submodule can be used to solve many independent OT problems in parallel on CPU or GPU with shared source and target support sizes. We also implemented a new Nystrom kernel approximation for the Sinkhorn solver that can be used to speed up the computation of the Sinkhorn divergence on large datasets. 
We also added new 1D solvers for [Linear circular OT](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fsliced-wasserstein\u002Fplot_compute_wasserstein_circle.html) and new solvers for free support [OT barycenters with generic cost functions](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fbarycenters\u002Fplot_free_support_barycenter_generic_cost.html) and for [barycenters between Gaussian Mixture Models (GMMs)](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fbarycenters\u002Fplot_gmm_barycenter.html). We also implemented two solvers for partial Fused Gromov-Wasserstein problems based on [conditional gradient](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fgen_modules\u002Fot.gromov.html#ot.gromov.partial_fused_gromov_wasserstein) and [projected gradient](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fgen_modules\u002Fot.gromov.html#ot.gromov.entropic_partial_fused_gromov_wasserstein) descents.\r\n\r\nFinally we have updated the documentation to reflect the new generic API and reorganized the [examples gallery](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Findex.html).\r\n\r\n#### New features\r\n- Implement CG solvers for partial FGW (PR #687)\r\n- Added feature `grad=last_step` for `ot.solvers.solve` (PR #693)\r\n- Automatic PR labeling and release file update check (PR #704)\r\n- Reorganize sub-module `ot\u002Flp\u002F__init__.py` into separate files (PR #714)\r\n- Implement fixed-point solver for OT barycenters with generic cost functions\r\n  (generalizes `ot.lp.free_support_barycenter`), with example. 
(PR #715)\r\n- Implement fixed-point solver for barycenters between GMMs (PR #715), with example.\r\n- Fix warning raised when importing the library (PR #716)\r\n- Implement projected gradient descent solvers for entropic partial FGW (PR #702)\r\n- Fix documentation in the module `ot.gaussian` (PR #718)\r\n- Refactored `ot.bregman._convolutional` to improve readability (PR #709)\r\n- Added `ot.gaussian.bures_barycenter_gradient_descent` (PR #680)\r\n- Added `ot.gaussian.bures_wasserstein_distance` (PR #680)\r\n- `ot.gaussian.bures_wasserstein_distance` can be batched (PR #680)\r\n- Backend implementation of `ot.dist` for (PR #701)\r\n- Updated documentation Quickstart guide and User guide with new API (PR #726)\r\n- Fix jax version for auto-grad (PR #732)\r\n- Add Nystrom kernel approximation for Sinkhorn (PR #742)\r\n- Added `ot.solver_1d.linear_circular_ot` and `ot.sliced.linear_sliced_wasserstein_sphere` (PR #736)\r\n- Implement 1d solver for partial optimal transport (PR #741)\r\n- Fix reg_div function compatibility with numpy in `ot.unbalanced.lbfgsb_unbalanced` via new function `ot.utils.fun_to_numpy` (PR #731)\r\n- Added to each example in the examples gallery the information about the release version in which it was introduced (PR #743)\r\n- Removed release information from quickstart guide (PR #744)\r\n- Implement batch parallel solvers in ot.batch (PR #745)\r\n- Update README with new API and reorganize examples (PR #754)\r\n- Speedup and update tests and wheels (PR #759)\r\n\r\n#### Closed issues\r\n- Fixed `ot.mapping` solvers which depended on deprecated `cvxpy` `ECOS` solver (PR #692, Issue #668)\r\n- Fixed numerical errors in `ot.gmm` (PR #690, Issue #689)\r\n- Add version number to the documentation (PR #696)\r\n- Update doc for default regularization in `ot.unbalanced` sinkhorn solvers (Issue #691, PR #700)\r\n- Clean documentation for `gromov`, `lp` and `unbalanced` folders (PR #710)\r\n- Clean references in documentation (PR #722)\r\n- Clean 
documentation for `ot.gromov.gromov_wasserstein` (PR #737)\r\n- Debug wheels building (PR #739)\r\n- Fix doc for projection sparse simplex (PR #734, PR #746)\r\n- Changed the default behavior of `ot.lp.solver_1d.wasserstein_circle` (Issue #738)\r\n- Avoid raising unnecessary warnings in `ot.lp.solver_1d.binary_search_circle` (Issue #738)\r\n- Avoid deprecation warning in `ot.lp.solver_1d.wasserstein_1d` (Issue #760, PR #761)\r\n\r\n\r\n## New Contributors\r\n* @samuelbx made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F690\r\n* @qbarthelemy made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F710\r\n* @tatsuookubo made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F737\r\n* @NailKhelifa made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F743\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fcompare\u002F0.9.5...0.9.6","2025-09-19T14:45:49",{"id":166,"version":167,"summary_zh":168,"released_at":169},99759,"0.9.5","This new release contains several new features, starting with a novel [Gaussian Mixture Model Optimal Transport (GMM-OT)](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fgen_modules\u002Fot.gmm.html#examples-using-ot-gmm-gmm-ot-apply-map) solver to compare GMM while enforcing the transport plan to remain a GMM, that benefits from a closed-form solution making it practical for high-dimensional matching problems. 
We also extended our general unbalanced OT solvers to support any non-negative reference measure in the regularization terms, and then added the novel [translation invariant UOT](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Funbalanced-partial\u002Fplot_conv_sinkhorn_ti.html) solver, which showcases a higher convergence speed.\r\nWe also implemented several new solvers and enhanced existing ones to perform OT across spaces. These include a [semi-relaxed FGW barycenter](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fgromov\u002Fplot_semirelaxed_gromov_wasserstein_barycenter.html) solver, coupled with new initialization heuristics for the inner divergence computation, to perform graph partitioning or dictionary learning. They are followed by novel [unbalanced FGW and Co-optimal transport](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fothers\u002Fplot_outlier_detection_with_COOT_and_unbalanced_COOT.html) solvers that promote robustness to outliers in such matching problems. Finally, we updated the implementation of partial GW, which now supports asymmetric structures and the KL divergence, while leveraging a new generic conditional gradient solver for partial transport problems that enables significant speed improvements. 
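The generic conditional gradient (Frank-Wolfe) scheme underlying such solvers can be sketched in NumPy for a toy quadratically regularized OT problem: the linear minimization oracle is an exact OT solve, written here as a small linear program. This is a minimal sketch of the algorithmic template with hypothetical helper names, not POT's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def lmo_transport(cost, a, b):
    # Linear minimization oracle: exact OT plan for a linear cost (tiny LP)
    n, m = cost.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # row marginals = a
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # column marginals = b
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None))
    return res.x.reshape(n, m)

def cg_quadratic_ot(M, a, b, reg=1.0, n_iter=50):
    # Frank-Wolfe on F(G) = <G, M> + 0.5 * reg * ||G||_F^2
    G = np.outer(a, b)                         # feasible starting plan
    for _ in range(n_iter):
        grad = M + reg * G
        D = lmo_transport(grad, a, b) - G      # Frank-Wolfe direction
        denom = reg * np.sum(D * D)
        if denom <= 1e-15:
            break
        t = np.clip(-np.sum(D * grad) / denom, 0.0, 1.0)  # exact line search
        if t <= 0.0:
            break
        G = G + t * D                          # stays in the transport polytope
    return G
```

Because the objective is quadratic, the line search has a closed form, which is the same structural trick the release notes mention for speeding up GW-type regularizers.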
These latest updates required some modifications to the line search functions of our generic conditional gradient solver, paving the way for future improvements to other GW-based solvers.\r\nLast but not least, we implemented a pre-commit scheme to automatically correct common programming mistakes likely to be made by our future contributors.\r\n\r\nThis release also contains a few bug fixes, concerning the support of any metric in `ot.emd_1d` \u002F `ot.emd2_1d`, and the support of any weights in `ot.gaussian`.\r\n \r\n#### Breaking change\r\n- Custom functions provided as parameter `line_search` to `ot.optim.generic_conditional_gradient` must now have the signature `line_search(cost, G, deltaG, Mi, cost_G, df_G, **kwargs)`, adding as input `df_G` the gradient of the regularizer evaluated at the transport plan `G`. This change aims at improving the speed of solvers with quadratic polynomial regularizers such as the Gromov-Wasserstein loss (PR #663).\r\n\r\n#### New features\r\n- New linter based on pre-commit using ruff, codespell and yamllint (PR #681)\r\n- Added feature `mass=True` for `nx.kl_div` (PR #654)\r\n- Implemented Gaussian Mixture Model OT `ot.gmm` (PR #649)\r\n- Added feature `semirelaxed_fgw_barycenters` and generic FGW-related barycenter updates `update_barycenter_structure` and `update_barycenter_feature` (PR #659)\r\n- Added initialization heuristics for sr(F)GW problems via `semirelaxed_init_plan`, integrated in all sr(F)GW solvers (PR #659)\r\n- Improved `ot.plot.plot1D_mat` (PR #649)\r\n- Added `nx.det` (PR #649)\r\n- `nx.sqrtm` is now broadcastable (takes (..., d, d) inputs) (PR #649)\r\n- Restructured `ot.unbalanced` module (PR #658)\r\n- Added `ot.unbalanced.lbfgsb_unbalanced2` and added a flexible reference measure `c` in all unbalanced solvers (PR #658)\r\n- Implemented Fused unbalanced Gromov-Wasserstein and unbalanced Co-Optimal Transport (PR #677)\r\n- Notes before deprecating partial Gromov-Wasserstein function in `ot.partial` moved 
to `ot.gromov` (PR #663)\r\n- Created `ot.gromov._partial` and added new features `loss_fun=\"kl_loss\"` and `symmetry=False` to all solvers, while increasing speed and updating `ot.solvers` accordingly (PR #663)\r\n- Added `ot.unbalanced.sinkhorn_unbalanced_translation_invariant` (PR #676)\r\n\r\n#### Closed issues\r\n- Fixed `ot.gaussian` ignoring weights when computing means (PR #649, Issue #648)\r\n- Fixed `ot.emd_1d` and `ot.emd2_1d` incorrectly allowing any metric (PR #670, Issue #669)\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fcompare\u002F0.9.4...0.9.5","2024-11-07T10:12:32",{"id":171,"version":172,"summary_zh":173,"released_at":174},99760,"0.9.4","\r\nThis new release contains several new features and bug fixes. Among the new features\r\nwe have novel [Quantized FGW](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_quantized_gromov_wasserstein.html) solvers that can be used to speed up the computation of the FGW loss on large datasets or to promote a structure on the pairwise matrices. We also updated the continuous entropic mapping to provide efficient out-of-sample continuous mapping thanks to entropic regularization. We also have new general unbalanced solvers for `ot.solve`, a BFGS solver and an illustrative example. Finally we have a new solver for the [Low Rank Gromov-Wasserstein](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_lowrank_GW.html) that can be used to compute the GW distance between two large scale datasets with a low rank approximation.\r\n\r\nFrom a maintenance point of view, we now have a new option to install optional dependencies with `pip install POT[all]` and the specific backends or submodules' dependencies may also be installed individually. The pip options are: `backend-jax, backend-tf, backend-torch, cvxopt, dr, gnn, plot, all`. 
We also provide with this release support for NumPy 2.0 (the wheels should now be compatible with NumPy 2.0 and below). We also fixed several issues such as gradient sign errors for FGW solvers, empty weights for `ot.emd2`, and line-search in partial GW. We also split the `test\u002Ftest_gromov.py` into `test\u002Fgromov\u002F` to make the tests more manageable.\r\n\r\n#### New features\r\n+ NumPy 2.0 support is added (PR #629)\r\n+ New quantized FGW solvers `ot.gromov.quantized_fused_gromov_wasserstein`, `ot.gromov.quantized_fused_gromov_wasserstein_samples` and `ot.gromov.quantized_fused_gromov_wasserstein_partitioned` (PR #603)\r\n+ `ot.gromov._gw.solve_gromov_linesearch` now has an argument to specify if the matrices are symmetric in which case the computation can be done faster (PR #607).\r\n+ Continuous entropic mapping (PR #613)\r\n+ New general unbalanced solvers for `ot.solve` and BFGS solver and illustrative example (PR #620)\r\n+ Add gradient computation with envelope theorem to sinkhorn solver of `ot.solve` with `grad='envelope'` (PR #605).\r\n+ Added support for [Low rank Gromov-Wasserstein](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fscetbon22b\u002Fscetbon22b.pdf) with `ot.gromov.lowrank_gromov_wasserstein_samples` (PR #614)\r\n+ Optional dependencies may now be installed with `pip install POT[all]` The specific backends or submodules' dependencies may also be installed individually. The pip options are: `backend-jax, backend-tf, backend-torch, cvxopt, dr, gnn, all`. 
The installation of the `cupy` backend should be done with conda.\r\n\r\n#### Closed issues\r\n- Fix gpu compatibility of sr(F)GW solvers when `G0 is not None` (PR #596)\r\n- Fix doc and example for lowrank sinkhorn (PR #601)\r\n- Fix issue with empty weights for `ot.emd2` (PR #606, Issue #534)\r\n- Fix a sign error regarding the gradient of `ot.gromov._gw.fused_gromov_wasserstein2` and `ot.gromov._gw.gromov_wasserstein2` for the kl loss (PR #610)\r\n- Fix same sign error for sr(F)GW conditional gradient solvers (PR #611)\r\n- Split `test\u002Ftest_gromov.py` into `test\u002Fgromov\u002F` (PR #619)\r\n- Fix (F)GW barycenter functions to support computing barycenter on 1 input + deprecate structures as lists (PR #628)\r\n- Fix line-search in partial GW and change default init to the interior of partial transport plans (PR #602)\r\n- Fix `ot.da.sinkhorn_lpl1_mm` compatibility with JAX (PR #592)\r\n- Fix linesearch import error on SciPy 1.14 (PR #642, Issue #641)\r\n- Upgrade supported JAX versions from jax\u003C=0.4.24 to jax\u003C=0.4.30 (PR #643)\r\n\r\n## New Contributors\r\n* @WilliamBonvini made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F595\r\n* @KrzakalaPaul made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F607\r\n* @matthewfeickert made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F629\r\n* @yikun-baio made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F602\r\n* @SarahG-579462 made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F627\r\n* @simon-forb made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F637\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fcompare\u002F0.9.3...0.9.4","2024-06-26T11:22:44",{"id":176,"version":177,"summary_zh":178,"released_at":179},99761,"0.9.3","## Closed issues\r\n- Fixed an issue with cost correction for mismatched labels in `ot.da.BaseTransport` fit methods. This fix addresses the original issue introduced in PR #587 (PR #593)\r\n\r\n## What's Changed\r\n* tiny typos in doc by @gabrielfougeron in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F591\r\n* Fix DA cost correction when cost limit is set to Inf by @kachayev in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F593\r\n* [MRG] Release 0.9.3 by @rflamary in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F594\r\n\r\n## New Contributors\r\n* @gabrielfougeron made their first contribution in https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fpull\u002F591\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT\u002Fcompare\u002F0.9.2...0.9.3","2024-01-12T15:44:40",{"id":181,"version":182,"summary_zh":183,"released_at":184},99762,"0.9.2","\r\nThis new release contains several new features and bug fixes. Among the new features\r\nwe have a new solver for estimation of nearest Brenier potentials (SSNB) that can be used for OT mapping estimation (on small problems), new Bregman Alternated Projected Gradient solvers for GW and FGW, and new solvers for Bures-Wasserstein barycenters. We also provide a first solver for Low Rank Sinkhorn that will be used to provide low-rank OT extensions in the next releases. Finally we have a new exact line-search for (F)GW solvers with KL loss that can be used to improve the convergence of the solvers.\r\n\r\nWe also have a new `LazyTensor` class that can be used to model OT plans and low rank tensors in large scale OT. 
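The idea behind such a lazy tensor can be illustrated with a tiny class that stores low-rank factors and only materializes the requested block of the plan. This is a conceptual sketch of the principle, not POT's `LazyTensor` API.

```python
import numpy as np

class LowRankLazyPlan:
    """Represent an n x m plan P = A @ B.T without ever materializing it."""

    def __init__(self, A, B):
        self.A, self.B = A, B             # factors of shape (n, r) and (m, r)
        self.shape = (A.shape[0], B.shape[0])

    def __getitem__(self, key):
        i, j = key
        return self.A[i] @ self.B[j].T    # only the requested block is computed

    def row_sums(self):
        # Marginals in O((n + m) r) memory instead of O(n m):
        # (A @ B.T) @ 1 = A @ (B.T @ 1)
        return self.A @ self.B.sum(axis=0)
```

Callers index it like a dense plan, but the memory footprint stays linear in n + m, which is what makes million-sample problems tractable.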
This class is used to return the plan for the new wrapper for the `geomloss` Sinkhorn solver on empirical samples that can lead to x10\u002Fx100 speedups on CPU or GPU and has a lazy implementation that allows solving very large problems with a few million samples.\r\n\r\nWe also have a new API for solving OT problems from empirical samples with `ot.solve_sample`. Finally we have a new API for Gromov-Wasserstein solvers with the `ot.solve_gromov` function that centralizes most of the (F)GW methods with unified notation. Some examples of how to use the new API are given below:\r\n\r\n```python\r\n# Generate random data\r\nxs, xt = np.random.randn(100, 2), np.random.randn(50, 2)\r\n\r\n# Solve OT problem with empirical samples\r\nsol = ot.solve_sample(xs, xt) # Exact OT between samples with uniform weights\r\nsol = ot.solve_sample(xs, xt, wa, wb) # Exact OT with weights given by user \r\n\r\nsol = ot.solve_sample(xs, xt, reg=1, metric='euclidean') # sinkhorn with euclidean metric\r\n\r\nsol = ot.solve_sample(xs, xt, reg=1, method='geomloss') # faster sinkhorn solver on CPU\u002FGPU\r\n\r\nsol = ot.solve_sample(xs, xt, method='factored', rank=10) # compute factored OT\r\n\r\nsol = ot.solve_sample(xs, xt, method='lowrank', rank=10) # compute lowrank sinkhorn OT\r\n\r\nvalue_bw = ot.solve_sample(xs, xt, method='gaussian').value # Bures-Wasserstein distance\r\n\r\n# Solve GW problem \r\nCs, Ct = ot.dist(xs, xs), ot.dist(xt, xt) # compute cost matrices\r\nsol = ot.solve_gromov(Cs, Ct) # Exact GW between samples with uniform weights\r\n\r\n# Solve FGW problem\r\nM = ot.dist(xs, xt) # compute cost matrix\r\n\r\n# Exact FGW between samples with uniform weights\r\nsol = ot.solve_gromov(Cs, Ct, M, loss='KL', alpha=0.7) # FGW with KL data fitting  \r\n\r\n\r\n# recover solutions objects\r\nP = sol.plan # OT plan\r\nu, v = sol.potentials # dual variables\r\nvalue = sol.value # OT value\r\n\r\n# for GW and FGW\r\nvalue_linear = sol.value_linear # linear part of the loss\r\nvalue_quad = sol.value_quad 
# quadratic part of the loss \r\n\r\n```\r\n\r\nUsers are encouraged to use the new API (it is much simpler) but it might still be subject to small changes before the release of POT 1.0.\r\n\r\n\r\nWe also fixed a number of issues, the most pressing being a problem of GPU memory allocation when PyTorch is installed, which no longer happens thanks to lazy initialization of the backends. We now also have the possibility to deactivate some backends using environment variables, which prevents POT from importing them and can lead to a large import speedup. \r\n\r\n\r\n### New features\r\n+ Added support for [Nearest Brenier Potentials (SSNB)](http:\u002F\u002Fproceedings.mlr.press\u002Fv108\u002Fpaty20a\u002Fpaty20a.pdf) (PR #526) + minor fix (PR #535)\r\n+ Tweaked `get_backend` to ignore `None` inputs (PR #525)\r\n+ Callbacks for generalized conditional gradient in `ot.da.sinkhorn_l1l2_gl` are now vectorized to improve performance (PR #507)\r\n+ The `linspace` method of the backends now has the `type_as` argument to convert to the same dtype and device. (PR #533)\r\n+ The `convolutional_barycenter2d` and `convolutional_barycenter2d_debiased` functions now work with different devices 
(PR #533)\r\n+ New API for Gromov-Wasserstein solvers with `ot.solve_gromov` function (PR #536)\r\n+ New LP solvers from scipy used by default for LP barycenter (PR #537)\r\n+ Update wheels to Python 3.12 and remove old i686 arch that do not have scipy wheels (PR #543)\r\n+ Upgraded unbalanced OT solvers for more flexibility (PR #539)\r\n+ Add LazyTensor for modeling plans and low rank tensor in large scale OT (PR #544)\r\n+ Add exact line-search for `gromov_wasserstein` and `fused_gromov_wasserstein` with KL loss (PR #556)\r\n+ Add KL loss to all semi-relaxed (Fused) Gromov-Wasserstein solvers (PR #559)\r\n+ Further upgraded unbalanced OT solvers for more flexibility and future use (PR #551)\r\n+ New API function `ot.solve_sample` for solving OT problems from empirical samples (PR #563)\r\n+ Wrapper for `geomloss` solver on empirical samples (PR #571)\r\n+ Add `stop_criterion` feature to (un)regularized (f)gw barycenter solvers (PR #578)\r\n+ Add `fixed_structure` and `fixed_features` to entropic fgw barycenter solver (PR #578)\r\n+ Add new BAPG solvers with KL projections for GW and FGW (PR #581)\r\n+ Add Bures-Wasserstein barycenter in `ot.gaussian` and example (PR #582, PR #584)\r\n+ Domain adaptation method `SinkhornL1l2Transport` now supports JA","2023-12-22T12:20:38",{"id":186,"version":187,"summary_zh":188,"released_at":189},99763,"0.9.1","This new release contains several new features and bug fixes. \r\n\r\nNew features include a new submodule `ot.gnn` that contains two new Graph neural network layers (compatible with [Pytorch Geometric](https:\u002F\u002Fpytorch-geometric.readthedocs.io\u002F)) for template-based pooling of graphs with an example on [graph classification](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fgromov\u002Fplot_gnn_TFGW.html). Related to this, we also now provide FGW and semi-relaxed FGW solvers for which the resulting loss is differentiable w.r.t. the parameter `alpha`. 
Other contributions on the (F)GW front include a new solver for the Proximal Point algorithm [that can be used to solve entropic GW problems](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fgromov\u002Fplot_fgw_solvers.html) (using the parameter `solver=\"PPA\"`), new solvers for entropic FGW barycenters, novel Sinkhorn-based solvers for entropic semi-relaxed (F)GW, the possibility to provide a warm-start to the solvers, and optional marginal weights of the samples (uniform weights are used by default). Finally we added, in the submodules `ot.gaussian` and `ot.da`, new loss and mapping estimators for the Gaussian Gromov-Wasserstein that can be used as a fast alternative to GW and estimate linear mappings between unregistered spaces that can potentially have different sizes (see the updated [linear mapping example](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_linear_mapping.html) for an illustration).\r\n\r\nWe also provide a new solver for the [Entropic Wasserstein Component Analysis](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fothers\u002Fplot_EWCA.html) that is a generalization of the celebrated PCA taking into account the local neighborhood of the samples. We also now have a new solver in `ot.smooth` for the [sparsity-constrained OT (last plot)](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fplot_OT_1D_smooth.html) that can be used to find regularized OT plans with sparsity constraints. Finally we have a first multi-marginal solver for regular 1D distributions with a Monge loss (see [here](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fothers\u002Fplot_dmmot.html)).\r\n\r\nThe documentation and tests have also been updated. We now have nearly 95% code coverage with the tests. 
The documentation has been updated and some examples have been streamlined to build more quickly and avoid timeout problems with CircleCI. We also added an optional CI on GPU for the master branch and approved PRs that can be used when a GPU runner is online. \r\n\r\nMany other bugs and issues have been fixed and we want to thank all the contributors, old and new, who made this release possible. More details below.\r\n\r\n\r\n#### New features\r\n- Gaussian Gromov Wasserstein loss and mapping (PR #498)\r\n- Template-based Fused Gromov Wasserstein GNN layer in `ot.gnn` (PR #488)\r\n- Make alpha parameter in semi-relaxed Fused Gromov Wasserstein differentiable (PR #483)\r\n- Make alpha parameter in Fused Gromov Wasserstein differentiable (PR #463)\r\n- Added the sparsity-constrained OT solver to `ot.smooth` and added `projection_sparse_simplex` to `ot.utils` (PR #459)\r\n- Add tests on GPU for master branch and approved PR (PR #473)\r\n- Add `median` method to all inherited classes of `backend.Backend` (PR #472)\r\n- Update tests for macOS and Windows, speedup documentation (PR #484)\r\n- Added Proximal Point algorithm to solve GW problems via a new parameter `solver=\"PPA\"` in `ot.gromov.entropic_gromov_wasserstein` + examples (PR #455)\r\n- Added features `warmstart` and `kwargs` in `ot.gromov.entropic_gromov_wasserstein` to respectively perform warmstart on dual potentials and pass parameters to `ot.sinkhorn` (PR #455)\r\n- Added sinkhorn projection based solvers for FGW `ot.gromov.entropic_fused_gromov_wasserstein` and entropic FGW barycenters + examples (PR #455)\r\n- Added features `warmstartT` and `kwargs` to all CG and entropic (F)GW barycenter solvers (PR #455)\r\n- Added entropic semi-relaxed (Fused) Gromov-Wasserstein solvers in `ot.gromov` + examples (PR #455)\r\n- Make marginal parameters optional for (F)GW solvers in `._gw`, `._bregman` and `._semirelaxed` (PR #455)\r\n- Add Entropic Wasserstein Component Analysis (EWCA) in ot.dr (PR #486)\r\n- Added 
feature Efficient Discrete Multi Marginal Optimal Transport Regularization + examples (PR #454)\r\n\r\n#### Closed issues\r\n\r\n- Fix gromov conventions (PR #497)\r\n- Fix change in scipy API for `cdist` (PR #487)\r\n- More permissive check_backend (PR #494)\r\n- Fix circleci-redirector action and codecov (PR #460)\r\n- Fix issues with cuda for ot.binary_search_circle and with gradients for ot.sliced_wasserstein_sphere (PR #457)\r\n- Major documentation cleanup (PR #462, PR #467, PR #475)\r\n- Fix gradients for \"Wasserstein2 Minibatch GAN\" example (PR #466)\r\n- Faster Bures-Wasserstein distance with NumPy backend (PR #468)\r\n- Fix backend issue for ot.sliced_wasserstein_sphere and ot.sliced_wasserstein_sphere_unif (PR #471)\r\n- Fix issue with ot.barycenter_stabilized when used with PyTorch tensors","2023-08-09T12:42:09",{"id":191,"version":192,"summary_zh":193,"released_at":194},99764,"0.9.0","This new release contains so many new features and bug fixes since 0.8.2 that we decided to make it a new minor release at 0.9.0. \r\n\r\nThe release contains many new features. First we did a major update of all Gromov-Wasserstein solvers that brings up to 30% gain in\r\ncomputation time (see PR #431) and allows the GW solvers to work on non-symmetric matrices. It also brings novel solvers for the very efficient [semi-relaxed GW problem](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fgromov\u002Fplot_semirelaxed_fgw.html#sphx-glr-auto-examples-gromov-plot-semirelaxed-fgw-py) that can be used to find the best re-weighting for one of the distributions. 
We also now have fast and differentiable solvers for [Wasserstein on the circle](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fplot_compute_wasserstein_circle.html#sphx-glr-auto-examples-plot-compute-wasserstein-circle-py) and [sliced Wasserstein on the sphere](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fbackends\u002Fplot_ssw_unif_torch.html#sphx-glr-auto-examples-backends-plot-ssw-unif-torch-py). We are also very happy to provide new OT barycenter solvers such as the [Free support Sinkhorn barycenter](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fbarycenters\u002Fplot_free_support_sinkhorn_barycenter.html#sphx-glr-auto-examples-barycenters-plot-free-support-sinkhorn-barycenter-py) and the [Generalized Wasserstein barycenter](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fbarycenters\u002Fplot_generalized_free_support_barycenter.html#sphx-glr-auto-examples-barycenters-plot-generalized-free-support-barycenter-py). A new differentiable solver for OT across spaces that provides OT plans between samples and features simultaneously, called [Co-Optimal Transport](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fothers\u002Fplot_COOT.html), has also been implemented. 
Finally we began working on OT between Gaussian distributions and now provide differentiable estimation for the Bures-Wasserstein [divergence](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fgen_modules\u002Fot.gaussian.html#ot.gaussian.bures_wasserstein_distance) and [mappings](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fauto_examples\u002Fdomain-adaptation\u002Fplot_otda_linear_mapping.html#sphx-glr-auto-examples-domain-adaptation-plot-otda-linear-mapping-py).\r\n\r\nAnother important first step toward POT 1.0 is the implementation of a unified API for OT solvers with the introduction of the [`ot.solve`](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fall.html#ot.solve) function that can solve (depending on parameters) exact, regularized and unbalanced OT and return a new [`OTResult`](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002Fgen_modules\u002Fot.utils.html#ot.utils.OTResult) object. The idea behind this new API is to facilitate exploring different solvers with just a change of parameter and get a more unified API for them. We will keep the old solvers API for power users, but the new API will be the preferred way to solve problems starting from release 1.0.0. 
We provide below some examples of use for the new function and how to recover different aspects of the solution (OT plan, full loss, linear part of the loss, dual variables):\r\n```python\r\n# Solve exact OT\r\nsol = ot.solve(M)\r\n\r\n# get the results\r\nG = sol.plan # OT plan\r\not_loss = sol.value # OT value (full loss for regularized and unbalanced)\r\not_loss_linear = sol.value_linear # OT value for linear term np.sum(sol.plan*M)\r\nalpha, beta = sol.potentials # dual potentials\r\n\r\n# direct plan and loss computation\r\nG = ot.solve(M).plan\r\not_loss = ot.solve(M).value\r\n\r\n# OT exact with marginals a\u002Fb\r\nsol2 = ot.solve(M, a, b)\r\n\r\n# regularized and unbalanced OT\r\nsol_rkl = ot.solve(M, a, b, reg=1) # KL regularization\r\nsol_rl2 = ot.solve(M, a, b, reg=1, reg_type='L2')\r\nsol_ul2 = ot.solve(M, a, b, unbalanced=10, unbalanced_type='L2')\r\nsol_rkl_ukl = ot.solve(M, a, b, reg=10, unbalanced=10) # KL + KL\r\n\r\n```\r\nThe function is fully compatible with backends and will be implemented for different types of distribution support (empirical distributions, grids) and OT problems (Gromov-Wasserstein) in future releases. This new API is not yet presented in the quickstart part of the documentation as there is a small chance that it might change when implementing new solvers, but we encourage users to play with it.\r\n\r\nFinally, in addition to those many new features, this release fixes 20 issues (some long standing) and we want to thank all the contributors who made this release so big. 
More details below.\r\n    \r\n\r\n#### New features\r\n- Added feature to (Fused) Gromov-Wasserstein solvers inherited from `ot.optim` to support relative and absolute loss variations as stopping criteria (PR #431)\r\n- Added feature to (Fused) Gromov-Wasserstein solvers to handle asymmetric matrices (PR #431)\r\n- Added semi-relaxed (Fused) Gromov-Wasserstein solvers in `ot.gromov` + examples (PR #431)\r\n- Added the spherical sliced-Wasserstein discrepancy in `ot.sliced.sliced_wasserstein_sphere` and `ot.sliced.sliced_wasserstein_sphere_unif` + examples (PR #434)\r\n- Added the Wasserstein distance on the circle in ``ot.l","2023-04-07T08:32:56",{"id":196,"version":197,"summary_zh":198,"released_at":199},99765,"0.8.2","\r\nThis release introduces several new notable features. The least important but most exciting one being that we now have a logo for the toolbox (color and dark background):\r\n![](https:\u002F\u002Fpythonot.github.io\u002Fmaster\u002F_images\u002Flogo.svg)\r\n\r\nThis logo is generated with matplotlib using the solution of an OT problem provided by POT (with `ot.emd`). Generating the logo can be done with a simple python script also provided in the [documentation gallery](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_logo.html#sphx-glr-auto-examples-others-plot-logo-py).\r\n\r\nNew OT solvers include [Weak OT](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.weak.html#ot.weak.weak_optimal_transport) and [OT with factored coupling](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.factored.html#ot.factored.factored_optimal_transport) that can be used on large datasets. The [Majorization Minimization](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.unbalanced.html?highlight=mm_#ot.unbalanced.mm_unbalanced) solvers for non-regularized Unbalanced OT are now also available. 
We also now provide an implementation of [GW and FGW unmixing](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.gromov.html#ot.gromov.gromov_wasserstein_linear_unmixing) and [dictionary learning](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.gromov.html#ot.gromov.gromov_wasserstein_dictionary_learning). It is now possible to use autodiff to solve entropic and quadratic regularized OT in the dual for full or stochastic optimization thanks to the new functions to compute the dual loss for [entropic](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.stochastic.html#ot.stochastic.loss_dual_entropic) and [quadratic](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.stochastic.html#ot.stochastic.loss_dual_quadratic) regularized OT and reconstruct the [OT plan](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.stochastic.html#ot.stochastic.plan_dual_entropic) on part or all of the data. They can be used for instance to solve OT problems with stochastic gradient or for estimating the [dual potentials as neural networks](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fbackends\u002Fplot_stoch_continuous_ot_pytorch.html#sphx-glr-auto-examples-backends-plot-stoch-continuous-ot-pytorch-py).\r\n\r\nOn the backend front, we now have backend compatible functions and classes in the domain adaptation [`ot.da`](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.da.html#module-ot.da) and unbalanced OT [`ot.unbalanced`](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.unbalanced.html) modules. This means that the DA classes can be used on tensors from all compatible backends. 
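The dual point of view used by the stochastic solvers above can be written down directly for the entropic case: with regularizer reg * Σ G_ij (log G_ij - 1), the dual objective is <u, a> + <v, b> - reg * Σ exp((u_i + v_j - M_ij)/reg), and at the optimum the primal plan is recovered as G_ij = exp((u_i + v_j - M_ij)/reg). A minimal gradient-ascent sketch of this principle in plain NumPy (illustrative only; not the `ot.stochastic` API):

```python
import numpy as np

def entropic_dual_ascent(M, a, b, reg=1.0, lr=0.2, n_iter=2000):
    """Maximize <u,a> + <v,b> - reg * sum_ij exp((u_i + v_j - M_ij) / reg)."""
    u, v = np.zeros(len(a)), np.zeros(len(b))
    for _ in range(n_iter):
        G = np.exp((u[:, None] + v[None, :] - M) / reg)  # current primal plan
        u += lr * (a - G.sum(axis=1))   # gradient of the dual w.r.t. u
        v += lr * (b - G.sum(axis=0))   # gradient of the dual w.r.t. v
    return u, v, np.exp((u[:, None] + v[None, :] - M) / reg)
```

After convergence the recovered plan satisfies the marginal constraints, which is exactly what makes stochastic (mini-batch) variants of these gradients usable for large problems.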
The [free support Wasserstein barycenter](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.lp.html?highlight=free%20support#ot.lp.free_support_barycenter) solver is now also backend compatible.\r\n\r\nFinally we have worked on the documentation to provide an update of existing examples in the gallery and several new examples including [GW dictionary learning](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fgromov\u002Fplot_gromov_wasserstein_dictionary_learning.html#sphx-glr-auto-examples-gromov-plot-gromov-wasserstein-dictionary-learning-py) and [weak Optimal Transport](https:\u002F\u002Fpythonot.github.io\u002Fauto_examples\u002Fothers\u002Fplot_WeakOT_VS_OT.html#sphx-glr-auto-examples-others-plot-weakot-vs-ot-py).\r\n\r\n#### New features\r\n\r\n- Remove deprecated `ot.gpu` submodule (PR #361)\r\n- Update examples in the gallery (PR #359)\r\n- Add stochastic loss and OT plan computation for regularized OT and \r\n  backend examples (PR #360)\r\n- Implementation of factored OT with emd and sinkhorn (PR #358)\r\n- A brand new logo for POT (PR #357)\r\n- Better list of related examples in quick start guide with `minigallery` (PR #334)\r\n- Add optional log-domain Sinkhorn implementation in WDA to support smaller values\r\n  of the regularization parameter (PR #336)\r\n- Backend implementation for `ot.lp.free_support_barycenter` (PR #340)\r\n- Add weak OT solver + example (PR #341)\r\n- Add backend support for Domain Adaptation and Unbalanced solvers (PR #343)\r\n- Add (F)GW linear dictionary learning solvers + example (PR #319)\r\n- Add links to related PR and Issues in the doc release page (PR #350)\r\n- Add new minimization-maximization algorithms for solving exact Unbalanced OT + example (PR #362)\r\n\r\n#### Closed issues\r\n\r\n- Fix mass gradient of `ot.emd2` and `ot.gromov_wasserstein2` so that they are \r\n  centered (Issue #364, PR #363)\r\n- Fix bug in instantiating an `autograd` function `ValFunction` (Issue #337, \r\n  PR 
#338)\r\n- Fix POT ABI compatibility with old and new numpy (Issue #346, PR #349)\r\n- Warning when feeding integer cost matrix to EMD solver resulting in an integer transport plan (Issue #345, PR #343)\r\n- Fix bug where gromov_wasserstein2 does not perform backpropagation with CUDA\r\n  tensors (Issue #351, PR #352)\r\n","2022-04-21T16:31:43",{"id":201,"version":202,"summary_zh":203,"released_at":204},99766,"0.8.1.0","This is a bug fix release that will remove the `benchmarks` module from the installation and correct the documentation generation.\r\n\r\n#### Closed issues\r\n\r\n- Bug in documentation generation (tag VS master push, PR #332)\r\n- Remove installation of the benchmarks in global namespace (Issue #331, PR #333)","2021-12-31T12:46:47",{"id":206,"version":207,"summary_zh":208,"released_at":209},99767,"0.8.1","This release fixes several bugs and introduces two new backends: Cupy and Tensorflow. Note that the tensorflow backend will work only when tensorflow has enabled the Numpy behavior (for transpose, which is not enabled by default in tensorflow). We also introduce a simple benchmark on CPU and GPU for the sinkhorn solver that will be provided in the [backend](https:\u002F\u002Fpythonot.github.io\u002Fgen_modules\u002Fot.backend.html) documentation.\r\n\r\nThis release also brings a few changes in dependencies and compatibility. First we removed tests for Python 3.6 that will not be updated in the future. Also note that POT now depends on Numpy (>= 1.20) because a recent change in ABI is making the wheels non-compatible with older numpy versions. 
If you really need an older numpy, POT will still work, but you will need to build it from source.

As always, we want to thank the contributors who helped make POT better (and bug free).

#### New features

- New benchmark for the sinkhorn solver on CPU/GPU and between backends (PR #316)
- New Tensorflow backend (PR #316)
- New Cupy backend (PR #315)
- Documentation always up-to-date with README, RELEASES, CONTRIBUTING and CODE_OF_CONDUCT files (PR #316, PR #322)

#### Closed issues

- Fix bug in older Numpy ABI (<1.20) (Issue #308, PR #326)
- Fix bug in `ot.dist` function with non-Euclidean distances (Issue #305, PR #306)
- Fix gradient scaling for functions using `nx.set_gradients` (Issue #309, PR #310)
- Fix bug in generalized Conditional Gradient solver and SinkhornL1L2 (Issue #311, PR #313)
- Fix log error in `gromov_barycenters` (Issue #317, PR #3018)

## 0.8.0 (2021-11-05)

This new stable release introduces several important features.

First, we now have an OpenMP-compatible exact OT solver in `ot.emd`. The OpenMP version is used when the parameter `numThreads` is greater than one and can lead to nice speedups on multi-core machines.

Second, we have introduced a backend mechanism that allows standard POT functions to be used seamlessly on Numpy, Pytorch and Jax arrays. Other backends are coming, but right now POT can already be used seamlessly for training neural networks in Pytorch. Notably, we propose the first differentiable computation of the exact OT loss with `ot.emd2` (which can be differentiated w.r.t. both the cost matrix and the sample weights), but also of the classical Sinkhorn loss with `ot.sinkhorn2`, the Wasserstein distance in 1D with `ot.wasserstein_1d`, the sliced Wasserstein distance with `ot.sliced_wasserstein_distance` and the Gromov-Wasserstein distance with `ot.gromov_wasserstein2`.
Examples of how this new feature can be used are now available in the documentation, where the Pytorch backend is used to estimate a [minimal Wasserstein estimator](https://PythonOT.github.io/auto_examples/backends/plot_unmix_optim_torch.html), train a [Generative Network (GAN)](https://PythonOT.github.io/auto_examples/backends/plot_wass2_gan_torch.html), run a [sliced Wasserstein gradient flow](https://PythonOT.github.io/auto_examples/backends/plot_sliced_wass_grad_flow_pytorch.html) and [optimize the Gromov-Wasserstein distance](https://PythonOT.github.io/auto_examples/backends/plot_optim_gromov_pytorch.html). Note that the Jax backend is still in early development and quite slow at the moment; we strongly recommend that Jax users use the [OTT toolbox](https://github.com/google-research/ott) when possible. As a result of this new feature, the old `ot.gpu` submodule is now deprecated, since GPU implementations can be done using GPU arrays on the torch backend.

Other novel features include implementations of [Sampled Gromov Wasserstein and Pointwise Gromov Wasserstein](https://PythonOT.github.io/auto_examples/gromov/plot_gromov.html#compute-gw-with-a-scalable-stochastic-method-with-any-loss-function), Sinkhorn in log space with `method='sinkhorn_log'`, [Projection Robust Wasserstein](https://PythonOT.github.io/gen_modules/ot.dr.html?highlight=robust#ot.dr.projection_robust_wasserstein), and [debiased Sinkhorn barycenters](https://PythonOT.github.io/auto_examples/barycenters/plot_debiased_barycenter.html).

This release also simplifies the installation process. We now have a `pyproject.toml` that defines the build dependencies, so POT should now build even when cython is not installed yet.
We also now provide pre-compiled wheels for Linux `aarch64`, which is used on Raspberry Pi and Android phones, and for MacOS on ARM processors.

Finally, POT was accepted for publication in the Journal of Machine Learning Research (JMLR) open source software track, and we ask POT users to cite [this paper](https://www.jmlr.org/papers/v22/20-451.html) from now on. The documentation has been improved, in particular by adding a "Why OT?" section to the quick start guide and several new examples illustrating the new features. The documentation now has two versions: the stable version [https://pythonot.github.io/](https://pythonot.github.io/), corresponding to the last release, and the master version [https://pythonot.github.io/master](https://pythonot.github.io/master), corresponding to the current master branch on GitHub.

As usual, we want to thank all the POT contributors (now 37 people have contributed to the toolbox). For this release we thank in particular Nathan Cassereau and Kamel Guerda from the AI support team at [IDRIS](http://www.idris.fr/) for their support in the development of the backend and OpenMP implementations.
#### New features

- OpenMP support for exact OT solvers (PR #260)
- Backend for running POT in numpy/torch + exact solver (PR #249)
- Backend implementation of most functions in `ot.bregman` (PR #280)
- Backend implementation of most functions in `ot.optim` (PR #282)
- Backend implementation of most functions in `ot.gromov` (PR #294, PR #302)
- Tests for arrays of different type and device (CPU/GPU) (PR #304, #303)
- Implementation of Sinkhorn in log space with `method='sinkhorn_log'` (PR #290)
- Implementation of the regularization path for L2 Unbalanced OT (PR #274)
- Implementation of Projection Robust Wasserstein (PR #267)
- Implementation of Debiased Sinkhorn Barycenters (PR #291)
- Implementation of Sampled Gromov Wasserstein and Pointwise Gromov Wasserstein (PR #275)
- Add `pyproject.toml` and build POT without installing cython first (PR #293)
- Lazy implementation in log space for sinkhorn on samples (PR #259)
- Documentation cleanup (PR #298)
- Two up-to-date documentations, [for the stable release](https://PythonOT.github.io/) and for the [master branch](https://pythonot.github.io/master/)
- Building wheels on ARM for Raspberry Pi and smartphones (PR #238)
- Update bui

## 0.7.0 (2020-05-05)

This is the new stable release for POT. We made a lot of changes in the documentation and added several new features, such as Partial OT, Unbalanced and Multi Source OT Domain Adaptation, and several bug fixes.
One important change is that we have created the GitHub organization [PythonOT](https://github.com/PythonOT), which now owns the main POT repository [https://github.com/PythonOT/POT](https://github.com/PythonOT/POT); the new documentation is now hosted at [https://PythonOT.github.io/](https://PythonOT.github.io/).

This is the first release where the Python 2.7 tests have been removed. Most of the toolbox should still work, but we do not offer support for Python 2.7 and will close related Issues.

A lot of changes have been made to the documentation, which is now hosted on [https://PythonOT.github.io/](https://PythonOT.github.io/) instead of readthedocs. It was a hard choice, but readthedocs did not allow us to run sphinx-gallery to update our beautiful examples, and it was a huge amount of work to maintain. The documentation is now automatically compiled and updated on merge. We also removed the notebooks from the repository, both for space reasons and because they are all available in the [example gallery](https://pythonot.github.io/auto_examples/index.html). Note that the output of the documentation build for each commit in a PR is now available, so we can check that the doc builds correctly before merging, which was not possible with readthedocs.

The CI framework has also been changed with a move from Travis to GitHub Actions, which gives us faster tests on Windows, MacOS and Linux. We also now report our coverage on [Codecov.io](https://codecov.io/gh/PythonOT/POT), with a reasonable 92% coverage. We also now generate wheels for a number of OS and Python versions at each merge into the master branch. They are available as outputs of this [action](https://github.com/PythonOT/POT/actions?query=workflow%3A%22Build+dist+and+wheels%22).
This will allow simpler multi-platform releases from now on.

In terms of new features, we now have [OTDA classes for unbalanced OT](https://pythonot.github.io/gen_modules/ot.da.html#ot.da.UnbalancedSinkhornTransport), a new Domain Adaptation class for [multi domain problems (JCPOT)](https://pythonot.github.io/auto_examples/domain-adaptation/plot_otda_jcpot.html#sphx-glr-auto-examples-domain-adaptation-plot-otda-jcpot-py), and several solvers for [Partial Optimal Transport](https://pythonot.github.io/auto_examples/unbalanced-partial/plot_partial_wass_and_gromov.html#sphx-glr-auto-examples-unbalanced-partial-plot-partial-wass-and-gromov-py) problems.

This release is also the moment to thank all the POT contributors (old and new) for helping make POT such a nice toolbox. A lot of changes (also in the API) are coming in the next versions.

#### Features

- New documentation on [https://PythonOT.github.io/](https://PythonOT.github.io/) (PR #160, PR #143, PR #144)
- Documentation build on CircleCI with sphinx-gallery (PR #145, PR #146, #155)
- Run sphinx-gallery in CI (PR #146)
- Remove notebooks from the repo because they are available in the doc (PR #156)
- Build wheels in CI (#157)
- Move from Travis to GitHub Actions for Windows, MacOS and Linux (PR #148, PR #150)
- Partial Optimal Transport (PR #141 and PR #142)
- Laplace regularized OTDA (PR #140)
- Multi source DA with target shift (PR #137)
- Screenkhorn algorithm (PR #121)

#### Closed issues

- Bug in Unbalanced OT example (Issue #127)
- Clean Cython output when calling setup.py clean (Issue #122)
- Various Macosx compilation problems (Issue #113, Issue #118, PR #130)
- EMD dimension mismatch (Issue #114, fixed in PR #116)
- 2D barycenter bug for non-square images (Issue #124, fixed in PR #132)
- Bad value in EMD 1D (Issue #138, fixed in
PR #139)
- Log bugs for the Gromov-Wasserstein solver (Issue #107, fixed in PR #108)
- Weight issues in the barycenter function (PR #106)

## 0.7.0-beta0 (2020-04-23)

This is a beta test pre-release for the new version of POT.

Do not use in production.

## 0.6.0 (2019-09-10)

This is the first official stable release of POT, and this means a jump to 0.6! The library has been used in the wild for a while now, and we have reached a state where a lot of fundamental OT solvers are available and tested. It has been quite stable in the last months but kept the beta flag in its PyPI classifiers until now.

Note that this release will be the last one officially supporting Python 2.7 (see https://python3statement.org/ for the reasons). For the next release we will keep the Travis tests for Python 2 but will make them non-necessary for merge in 2020.

The features are never complete in a toolbox designed for solving mathematical problems and research, but with the new contributions we now implement algorithms and solvers from 24 scientific papers (listed in the README.md file). New features include a direct implementation of the [empirical Sinkhorn divergence](https://pot.readthedocs.io/en/latest/all.html#ot.bregman.empirical_sinkhorn_divergence), a new efficient (Cython) solver for [EMD in 1D](https://pot.readthedocs.io/en/latest/all.html#ot.lp.emd_1d) and the corresponding [Wasserstein 1D](https://pot.readthedocs.io/en/latest/all.html#ot.lp.wasserstein_1d) distance.
We now also have implementations of [Unbalanced OT](https://github.com/rflamary/POT/blob/master/notebooks/plot_UOT_1D.ipynb) and a solver for [Unbalanced OT barycenters](https://github.com/rflamary/POT/blob/master/notebooks/plot_UOT_barycenter_1D.ipynb). A new variant of the Gromov-Wasserstein divergence called [Fused Gromov-Wasserstein](https://pot.readthedocs.io/en/latest/all.html?highlight=fused_#ot.gromov.fused_gromov_wasserstein) has also been contributed, with examples of use on [structured data](https://github.com/rflamary/POT/blob/master/notebooks/plot_fgw.ipynb) and for computing [barycenters of labeled graphs](https://github.com/rflamary/POT/blob/master/notebooks/plot_barycenter_fgw.ipynb).

A lot of work has been done on the documentation, with several new examples corresponding to the new features and many corrections to the docstrings. The most visible change is a new [quick start guide](https://pot.readthedocs.io/en/latest/quickstart.html) for POT that gives several pointers about which functions or classes solve which specific OT problem. When possible, a link is provided to a relevant example.

We also provide with this release pre-compiled Python wheels for Linux 64bit on GitHub and pip.
This will simplify the install process, which previously required a C compiler and numpy/cython already installed.

Finally, we would like to acknowledge and thank the numerous contributors of POT who helped build the foundation in the past and are still contributing to bring new features and solvers to the library.

#### Features

* Add compiled manylinux 64bits wheels to pip releases (PR #91)
* Add quick start guide (PR #88)
* Make doctests work on Travis (PR #90)
* Update documentation (PR #79, PR #84)
* Solver for EMD in 1D (PR #89)
* Solvers for regularized unbalanced OT (PR #87, PR #99)
* Solver for Fused Gromov-Wasserstein (PR #86)
* Add empirical Sinkhorn and empirical Sinkhorn divergences (PR #80)

#### Closed issues

- Issue #59: failure when using "pip install POT" (new details in doc + hopefully wheels)
- Issue #85: cannot run gpu modules
- Issue #75: Greenkhorn does not return log (solved in PR #76)
- Issue #82: Gromov-Wasserstein fails when the cost matrices are slightly different
- Issue #72: Macosx build problem

## 0.5.0 (2018-10-03)

POT is 2 years old!
This release brings numerous new features to the toolbox, as listed below, but also several bug corrections.

Among the new features, we can highlight a [non-regularized Gromov-Wasserstein solver](https://github.com/rflamary/POT/blob/master/notebooks/plot_gromov.ipynb), a new [greedy variant of Sinkhorn](https://pot.readthedocs.io/en/latest/all.html#ot.bregman.greenkhorn), [non-regularized](https://pot.readthedocs.io/en/latest/all.html#ot.lp.barycenter), [convolutional (2D)](https://github.com/rflamary/POT/blob/master/notebooks/plot_convolutional_barycenter.ipynb) and [free support](https://github.com/rflamary/POT/blob/master/notebooks/plot_free_support_barycenter.ipynb) Wasserstein barycenters, and [smooth](https://github.com/rflamary/POT/blob/prV0.5/notebooks/plot_OT_1D_smooth.ipynb) and [stochastic](https://pot.readthedocs.io/en/latest/all.html#ot.stochastic.sgd_entropic_regularization) implementations of entropic OT.

POT 0.5 also comes with a rewrite of `ot.gpu` using the cupy framework instead of the unmaintained cudamat. Note that while we tried to keep changes to a minimum, the OTDA classes were deprecated. If you are happy with the cudamat implementation, we recommend you stay with stable release 0.4 for now.

The code quality has also improved, with 92% code coverage in tests, which is now printed to the log in the Travis builds.
The documentation has also been greatly improved, with new modules and examples/notebooks.

This new release is full of new features and corrections thanks to the old and new POT contributors (you can see the list in the [readme](https://github.com/rflamary/POT/blob/master/README.md)).

#### Features

* Add non-regularized Gromov-Wasserstein solver (PR #41)
* Linear OT mapping between empirical distributions and 90% test coverage (PR #42)
* Add log parameter in classes EMDTransport and SinkhornLpL1Transport (PR #44)
* Add Markdown format for PyPI (PR #45)
* Test for Python 3.5 and 3.6 on Travis (PR #46)
* Non-regularized Wasserstein barycenter with scipy linear solver and/or cvxopt (PR #47)
* Rename dataset functions to be more sklearn compliant (PR #49)
* Smooth and sparse Optimal Transport implementation with entropic and quadratic regularization (PR #50)
* Stochastic OT in the dual and semi-dual (PR #52 and PR #62)
* Free support barycenters (PR #56)
* Speed-up of the Sinkhorn function (PR #57 and PR #58)
* Add convolutional Wasserstein barycenters for 2D images (PR #64)
* Add greedy Sinkhorn variant (Greenkhorn) (PR #66)
* Big `ot.gpu` update with cupy implementation (instead of unmaintained cudamat) (PR #67)

#### Deprecation

Deprecated OTDA classes were removed from ot.da and ot.gpu for version 0.5 (PR #48 and PR #67).
The deprecation message had been there for a year, since 0.4, and it was time to pull the plug.

#### Closed issues

* Issue #35: remove import plot from `ot/__init__.py` (see PR #41)
* Issue #43: unusable parameter log for EMDTransport (see PR #44)
* Issue #55: UnicodeDecodeError: 'ascii' while installing with pip

## 0.4.0 (2017-09-20)

## 0.4 (2017-09-15)

This release contains a lot of contributions from new contributors.

#### Features

* Automatic notebooks and doc update (PR #27)
* Add Gromov-Wasserstein solver and Gromov barycenters (PR #23)
* emd and emd2 can now return dual variables and have max_iter (PR #29 and PR #25)
* New domain adaptation classes compatible with scikit-learn (PR #22)
* Proper tests with pytest on Travis (PR #19)
* PEP 8 tests (PR #13)

#### Closed issues

* emd convergence problem due to fixed max iterations (#24)
* Semi-supervised DA error (#26)

## 0.3.1 (2017-07-11)

+ Correct bug in EMD on Windows

## 0.3 (2017-07-07)

+ emd* and sinkhorn* are now performed in parallel for multiple target distributions
+ emd and sinkhorn are for OT matrix computation
+ emd2 and sinkhorn2 are for OT loss computation
+ New notebooks for emd computation and Wasserstein Discriminant Analysis
+ Relocate notebooks
+ Update documentation
+ clean_zeros(a,b,M) for removing zeros in sparse distributions
+ GPU implementations for sinkhorn and group lasso regularization