[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jettify--pytorch-optimizer":3,"tool-jettify--pytorch-optimizer":61},[4,18,28,37,45,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":24,"last_commit_at":25,"category_tags":26,"status":17},9989,"n8n","n8n-io\u002Fn8n","n8n 是一款面向技术团队的公平代码（fair-code）工作流自动化平台，旨在让用户在享受低代码快速构建便利的同时，保留编写自定义代码的灵活性。它主要解决了传统自动化工具要么过于封闭难以扩展、要么完全依赖手写代码效率低下的痛点，帮助用户轻松连接 400 多种应用与服务，实现复杂业务流程的自动化。\n\nn8n 特别适合开发者、工程师以及具备一定技术背景的业务人员使用。其核心亮点在于“按需编码”：既可以通过直观的可视化界面拖拽节点搭建流程，也能随时插入 JavaScript 或 Python 代码、调用 npm 包来处理复杂逻辑。此外，n8n 原生集成了基于 LangChain 的 AI 能力，支持用户利用自有数据和模型构建智能体工作流。在部署方面，n8n 提供极高的自由度，支持完全自托管以保障数据隐私和控制权，也提供云端服务选项。凭借活跃的社区生态和数百个现成模板，n8n 让构建强大且可控的自动化系统变得简单高效。",184740,2,"2026-04-19T23:22:26",[16,14,13,15,27],"插件",{"id":29,"name":30,"github_repo":31,"description_zh":32,"stars":33,"difficulty_score":10,"last_commit_at":34,"category_tags":35,"status":17},10095,"AutoGPT","Significant-Gravitas\u002FAutoGPT","AutoGPT 是一个旨在让每个人都能轻松使用和构建 AI 的强大平台，核心功能是帮助用户创建、部署和管理能够自动执行复杂任务的连续型 AI 智能体。它解决了传统 AI 应用中需要频繁人工干预、难以自动化长流程工作的痛点，让用户只需设定目标，AI 即可自主规划步骤、调用工具并持续运行直至完成任务。\n\n无论是开发者、研究人员，还是希望提升工作效率的普通用户，都能从 AutoGPT 中受益。开发者可利用其低代码界面快速定制专属智能体；研究人员能基于开源架构探索多智能体协作机制；而非技术背景用户也可直接选用预置的智能体模板，立即投入实际工作场景。\n\nAutoGPT 的技术亮点在于其模块化“积木式”工作流设计——用户通过连接功能块即可构建复杂逻辑，每个块负责单一动作，灵活且易于调试。同时，平台支持本地自托管与云端部署两种模式，兼顾数据隐私与使用便捷性。配合完善的文档和一键安装脚本，即使是初次接触的用户也能在几分钟内启动自己的第一个 AI 智能体。AutoGPT 正致力于降低 AI 应用门槛，让人人都能成为 AI 的创造者与受益者。",183572,"2026-04-20T04:47:55",[13,36,27,14,15],"语言模型",{"id":38,"name":39,"github_repo":40,"description_zh":41,"stars":42,"difficulty_score":10,"last_commit_at":43,"category_tags":44,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":46,"name":47,"github_repo":48,"description_zh":49,"stars":50,"difficulty_score":24,"last_commit_at":51,"category_tags":52,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 
工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",161147,"2026-04-19T23:31:47",[14,13,36],{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":24,"last_commit_at":59,"category_tags":60,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":73,"owner_company":75,"owner_location":76,"owner_email":77,"owner_twitter":73,"owner_website":73,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":92,"env_os":93,"env_gpu":94,"env_ram":94,"env_deps":95,"category_tags":99,"github_topics":100,"view_count":24,"oss_zip_url":73,"oss_zip_packed_at":73,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":155},10015,"jettify\u002Fpytorch-optimizer","pytorch-optimizer","torch-optimizer -- collection of optimizers for Pytorch","pytorch-optimizer 是一个专为 PyTorch 框架设计的优化器集合库，旨在为深度学习模型训练提供更多样化、更高效的参数更新策略。它解决了原生 PyTorch 内置优化器种类有限的问题，让开发者和研究人员能够轻松尝试如 DiffGrad、A2Grad、AccSGD 等前沿学术算法，而无需手动复现复杂的数学公式或担心兼容性问题。\n\n该工具完全兼容 PyTorch 原生的 `optim` 模块接口，用户只需替换导入路径即可无缝切换优化器，极大降低了实验门槛。无论是需要复现论文结果的科研人员，还是希望提升模型收敛速度与精度的算法工程师，都能从中受益。其核心亮点在于收录了多篇顶级会议论文提出的创新优化算法，并提供了经过测试的稳定实现，支持通过简单的 pip 命令一键安装。借助 pytorch-optimizer，用户可以快速验证不同优化策略对特定任务的效果，加速从理论到实践的转化过程，是深度学习领域不可或缺的实用辅助工具。","torch-optimizer\n===============\n.. image:: https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fworkflows\u002FCI\u002Fbadge.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Factions?query=workflow%3ACI\n   :alt: GitHub Actions status for master branch\n.. image:: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjettify\u002Fpytorch-optimizer\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg\n    :target: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjettify\u002Fpytorch-optimizer\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftorch-optimizer.svg\n    :target: https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorch-optimizer\n.. image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fpytorch-optimizer\u002Fbadge\u002F?version=latest\n    :target: https:\u002F\u002Fpytorch-optimizer.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest\n    :alt: Documentation Status\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftorch-optimizer.svg\n    :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Ftorch-optimizer\n.. 
image:: https:\u002F\u002Fstatic.deepsource.io\u002Fdeepsource-badge-light-mini.svg\n    :target: https:\u002F\u002Fdeepsource.io\u002Fgh\u002Fjettify\u002Fpytorch-optimizer\u002F?ref=repository-badge\n\n\n**torch-optimizer** -- collection of optimizers for PyTorch_ compatible with optim_\nmodule.\n\n\nSimple example\n--------------\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.DiffGrad(model.parameters(), lr=0.001)\n    optimizer.step()\n\n\nInstallation\n------------\nInstallation process is simple, just::\n\n    $ pip install torch_optimizer\n\n\nDocumentation\n-------------\nhttps:\u002F\u002Fpytorch-optimizer.rtfd.io\n\n\nCitation\n--------\nPlease cite the original authors of the optimization algorithms. If you like this\npackage::\n\n    @software{Novik_torchoptimizers,\n    \ttitle        = {{torch-optimizer -- collection of optimization algorithms for PyTorch.}},\n    \tauthor       = {Novik, Mykola},\n    \tyear         = 2020,\n    \tmonth        = 1,\n    \tversion      = {1.0.1}\n    }\n\nOr use the github feature: \"cite this repository\" button.\n\n\nSupported Optimizers\n====================\n\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `A2GradExp`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `A2GradInc`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `A2GradUni`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AccSGD`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05591                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdaBelief`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07468                                                                
                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdaBound`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.09843                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdaMod`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12249                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Adafactor`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.04235                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Adahessian`_ | https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.00719                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdamP`_      | https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AggMo`_      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00325                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Apollo`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.13586                                                                                                     
|\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `DiffGrad`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11015                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Lamb`_       | https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00962                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Lookahead`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.08610                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `MADGRAD`_    | https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11075                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `NovoGrad`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11286                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `PID`_        | https:\u002F\u002Fwww4.comp.polyu.edu.hk\u002F~cslzhang\u002Fpaper\u002FCVPR18_PID.pdf                                                                        |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `QHAdam`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |  
                                                                                                                                    |\n| `QHM`_        | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `RAdam`_      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03265                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Ranger`_     | https:\u002F\u002Fmedium.com\u002F@lessw\u002Fnew-deep-learning-optimizer-ranger-synergistic-combination-of-radam-lookahead-for-the-best-of-2dc83f79a48d |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `RangerQH`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `RangerVA`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.00700v2                                                                                                   |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `SGDP`_       | https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `SGDW`_       | https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.03983                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `SWATS`_      | 
https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.07628                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Shampoo`_    | https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09568                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Yogi`_       | https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8186-adaptive-methods-for-nonconvex-optimization                                                        |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n\n\nVisualizations\n--------------\nVisualizations help us see how different algorithms deal with simple\nsituations like: saddle points, local minima, valleys etc, and may provide\ninteresting insights into the inner workings of an algorithm. Rosenbrock_ and Rastrigin_\nbenchmark_ functions were selected because:\n\n* Rosenbrock_ (also known as banana function), is non-convex function that has\n  one global minimum  `(1.0. 1.0)`. The global minimum is inside a long,\n  narrow, parabolic shaped flat valley. Finding the valley is trivial. \n  Converging to the global minimum, however, is difficult. Optimization\n  algorithms might pay a lot of attention to one coordinate, and struggle\n  following the valley which is relatively flat.\n\n .. image::  https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002F3\u002F32\u002FRosenbrock_function.svg\n\n* Rastrigin_ is a non-convex function  and has one global minimum in `(0.0, 0.0)`.\n  Finding the minimum of this function is a fairly difficult problem due to\n  its large search space and its large number of local minima.\n\n  .. image::  https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002F8\u002F8b\u002FRastrigin_function.png\n\nEach optimizer performs `501` optimization steps. Learning rate is the best one found\nby a hyper parameter search algorithm, the rest of the tuning parameters are default. It\nis very easy to extend the script and tune other optimizer parameters.\n\n\n.. code::\n\n    python examples\u002Fviz_optimizers.py\n\n\nWarning\n-------\nDo not pick an optimizer based on visualizations, optimization approaches\nhave unique properties and may be tailored for different purposes or may\nrequire explicit learning rate schedule etc. The best way to find out is to try \none on your particular problem and see if it improves scores.\n\nIf you do not know which optimizer to use, start with the built in SGD\u002FAdam. 
Once\nthe training logic is ready and baseline scores are established, swap the optimizer \nand see if there is any improvement.\n\n\nA2GradExp\n---------\n\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_A2GradExp.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_A2GradExp.png  |\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.A2GradExp(\n        model.parameters(),\n        kappa=1000.0,\n        beta=10.0,\n        lips=10.0,\n        rho=0.5,\n    )\n    optimizer.step()\n\n\n**Paper**: *Optimal Adaptive and Accelerated Stochastic Gradient Descent* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fseverilov\u002FA2Grad_optimizer\n\n\nA2GradInc\n---------\n\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_A2GradInc.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_A2GradInc.png  |\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.A2GradInc(\n        model.parameters(),\n        kappa=1000.0,\n        beta=10.0,\n        lips=10.0,\n    )\n    optimizer.step()\n\n\n**Paper**: *Optimal Adaptive and Accelerated Stochastic Gradient Descent* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fseverilov\u002FA2Grad_optimizer\n\n\nA2GradUni\n---------\n\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_A2GradUni.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_A2GradUni.png  |\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.A2GradUni(\n        model.parameters(),\n        kappa=1000.0,\n        beta=10.0,\n        lips=10.0,\n    )\n    optimizer.step()\n\n\n**Paper**: *Optimal Adaptive and Accelerated Stochastic Gradient Descent* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fseverilov\u002FA2Grad_optimizer\n\n\nAccSGD\n------\n\n+-----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AccSGD.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AccSGD.png  |\n+-----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AccSGD(\n        model.parameters(),\n        lr=1e-3,\n        kappa=1000.0,\n        xi=10.0,\n        small_const=0.7,\n        weight_decay=0\n    )\n    optimizer.step()\n\n\n**Paper**: *On the insufficiency of existing momentum schemes for Stochastic Optimization* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05591]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Frahulkidambi\u002FAccSGD\n\n\nAdaBelief\n---------\n\n+-------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdaBelief.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdaBelief.png |\n+-------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdaBelief(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        weight_decay=0,\n        amsgrad=False,\n        weight_decouple=False,\n        fixed_decay=False,\n        rectify=False,\n    )\n    optimizer.step()\n\n\n**Paper**: *AdaBelief Optimizer, adapting stepsizes by the belief in observed gradients* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07468]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fjuntang-zhuang\u002FAdabelief-Optimizer\n\n\nAdaBound\n--------\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdaBound.png  |  .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdaBound.png |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdaBound(\n        m.parameters(),\n        lr= 1e-3,\n        betas= (0.9, 0.999),\n        final_lr = 0.1,\n        gamma=1e-3,\n        eps= 1e-8,\n        weight_decay=0,\n        amsbound=False,\n    )\n    optimizer.step()\n\n\n**Paper**: *Adaptive Gradient Methods with Dynamic Bound of Learning Rate* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.09843]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002FLuolc\u002FAdaBound\n\nAdaMod\n------\nThe AdaMod method restricts the adaptive learning rates with adaptive and momental\nupper bounds. The dynamic learning rate bounds are based on the exponential\nmoving averages of the adaptive learning rates themselves, which smooth out\nunexpected large learning rates and stabilize the training of deep neural networks.\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdaMod.png    |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdaMod.png   |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdaMod(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        beta3=0.999,\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n**Paper**: *An Adaptive and Momental Bound Method for Stochastic Learning.* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12249]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Flancopku\u002FAdaMod\n\n\nAdafactor\n---------\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Adafactor.png |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Adafactor.png |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Adafactor(\n        m.parameters(),\n        lr= 1e-3,\n        eps2= (1e-30, 1e-3),\n        clip_threshold=1.0,\n        decay_rate=-0.8,\n        beta1=None,\n        weight_decay=0.0,\n        scale_parameter=True,\n        relative_step=True,\n        warmup_init=False,\n    )\n    optimizer.step()\n\n**Paper**: *Adafactor: Adaptive Learning Rates with Sublinear Memory Cost.* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.04235]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ffairseq\u002Fblob\u002Fmaster\u002Ffairseq\u002Foptim\u002Fadafactor.py\n\n\nAdahessian\n----------\n+-------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Adahessian.png |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Adahessian.png  |\n+-------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Adahessian(\n        m.parameters(),\n        lr= 1.0,\n        betas= (0.9, 0.999),\n        eps= 1e-4,\n        weight_decay=0.0,\n        hessian_power=1.0,\n    )\n\t  loss_fn(m(input), target).backward(create_graph = True) # create_graph=True is necessary for Hessian calculation\n    optimizer.step()\n\n\n**Paper**: *ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.00719]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Famirgholami\u002Fadahessian\n\n\nAdamP\n------\nAdamP propose a simple and effective solution: at each iteration of the Adam optimizer\napplied on scale-invariant weights (e.g., Conv weights preceding a BN layer), AdamP\nremoves the radial component (i.e., parallel to the weight vector) from the update vector.\nIntuitively, this operation prevents the unnecessary update along the radial direction\nthat only increases the weight norm without contributing to the loss minimization.\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdamP.png     |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdamP.png    |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. 
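code:: python\n\n    # A minimal, self-contained usage sketch (hypothetical toy model and data),\n    # showing AdamP on the scale-invariant setting described above: a Conv layer\n    # followed by BatchNorm. The next block lists the full constructor arguments.\n    import torch\n    import torch_optimizer as optim\n\n    model = torch.nn.Sequential(\n        torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),\n        torch.nn.BatchNorm2d(8),\n        torch.nn.ReLU(),\n        torch.nn.Flatten(),\n        torch.nn.Linear(8 * 8 * 8, 10),\n    )\n    optimizer = optim.AdamP(model.parameters(), lr=1e-3, weight_decay=1e-2)\n\n    x = torch.randn(4, 3, 8, 8)  # toy batch\n    y = torch.randint(0, 10, (4,))\n    optimizer.zero_grad()\n    loss = torch.nn.functional.cross_entropy(model(x), y)\n    loss.backward()\n    optimizer.step()\n\n.. 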
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdamP(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n        delta = 0.1,\n        wd_ratio = 0.1\n    )\n    optimizer.step()\n\n**Paper**: *Slowing Down the Weight Norm Increase in Momentum-based Optimizers.* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fclovaai\u002FAdamP\n\n\nAggMo\n-----\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AggMo.png     |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AggMo.png    |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AggMo(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.0, 0.9, 0.99),\n        weight_decay=0,\n    )\n    optimizer.step()\n\n**Paper**: *Aggregated Momentum: Stability Through Passive Damping.* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00325]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002FAtheMathmo\u002FAggMo\n\n\nApollo\n------\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Apollo.png    |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Apollo.png   |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Apollo(\n        m.parameters(),\n        lr= 1e-2,\n        beta=0.9,\n        eps=1e-4,\n        warmup=0,\n        init_lr=0.01,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n**Paper**: *Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization.* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.13586]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002FXuezheMax\u002Fapollo\n\n\nDiffGrad\n--------\nOptimizer based on the difference between the present and the immediate past\ngradient, the step size is adjusted for each parameter in such\na way that it should have a larger step size for faster gradient changing\nparameters and a lower step size for lower gradient changing parameters.\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_DiffGrad.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_DiffGrad.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.DiffGrad(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n\n**Paper**: *diffGrad: An Optimization Method for Convolutional Neural Networks.* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11015]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fshivram1987\u002FdiffGrad\n\nLamb\n----\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Lamb.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Lamb.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Lamb(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n\n**Paper**: *Large Batch Optimization for Deep Learning: Training BERT in 76 minutes* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00962]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fcybertronai\u002Fpytorch-lamb\n\nLookahead\n---------\n\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_LookaheadYogi.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_LookaheadYogi.png  |\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    # base optimizer, any other optimizer can be used like Adam or DiffGrad\n    yogi = optim.Yogi(\n        m.parameters(),\n        lr= 1e-2,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        initial_accumulator=1e-6,\n        weight_decay=0,\n    )\n\n    optimizer = optim.Lookahead(yogi, k=5, alpha=0.5)\n    optimizer.step()\n\n\n**Paper**: *Lookahead Optimizer: k steps forward, 1 step back* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.08610]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Falphadl\u002Flookahead.pytorch\n\n\nMADGRAD\n---------\n\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_MADGRAD.png        |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_MADGRAD.png        |\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.MADGRAD(\n        m.parameters(),\n        lr=1e-2,\n        momentum=0.9,\n        weight_decay=0,\n        eps=1e-6,\n    )\n    optimizer.step()\n\n\n**Paper**: *Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization* (2021) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11075]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmadgrad\n\n\nNovoGrad\n--------\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_NovoGrad.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_NovoGrad.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.NovoGrad(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n        grad_averaging=False,\n        amsgrad=False,\n    )\n    optimizer.step()\n\n\n**Paper**: *Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11286]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FDeepLearningExamples\u002F\n\n\nPID\n---\n\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_PID.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_PID.png  |\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.PID(\n        m.parameters(),\n        lr=1e-3,\n        momentum=0,\n        dampening=0,\n        weight_decay=1e-2,\n        integral=5.0,\n        derivative=10.0,\n    )\n    optimizer.step()\n\n\n**Paper**: *A PID Controller Approach for Stochastic Optimization of Deep Networks* (2018) [http:\u002F\u002Fwww4.comp.polyu.edu.hk\u002F~cslzhang\u002Fpaper\u002FCVPR18_PID.pdf]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Ftensorboy\u002FPIDOptimizer\n\n\nQHAdam\n------\n\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_QHAdam.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_QHAdam.png  |\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.QHAdam(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        nus=(1.0, 1.0),\n        weight_decay=0,\n        decouple_weight_decay=False,\n        eps=1e-8,\n    )\n    optimizer.step()\n\n\n**Paper**: *Quasi-hyperbolic momentum and Adam for deep learning* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fqhoptim\n\n\nQHM\n---\n\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_QHM.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_QHM.png  |\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.QHM(\n        m.parameters(),\n        lr=1e-3,\n        momentum=0,\n        nu=0.7,\n        weight_decay=1e-2,\n        weight_decay_type='grad',\n    )\n    optimizer.step()\n\n\n**Paper**: *Quasi-hyperbolic momentum and Adam for deep learning* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fqhoptim\n\n\nRAdam\n-----\n\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_RAdam.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_RAdam.png  |\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n\nDeprecated, please use version provided by PyTorch_.\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.RAdam(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n\n**Paper**: *On the Variance of the Adaptive Learning Rate and Beyond* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03265]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002FLiyuanLucasLiu\u002FRAdam\n\n\nRanger\n------\n\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Ranger.png  |  .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Ranger.png  |\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Ranger(\n        m.parameters(),\n        lr=1e-3,\n        alpha=0.5,\n        k=6,\n        N_sma_threshhold=5,\n        betas=(.95, 0.999),\n        eps=1e-5,\n        weight_decay=0\n    )\n    optimizer.step()\n\n\n**Paper**: *New Deep Learning Optimizer, Ranger: Synergistic combination of RAdam + LookAhead for the best of both* (2019) [https:\u002F\u002Fmedium.com\u002F@lessw\u002Fnew-deep-learning-optimizer-ranger-synergistic-combination-of-radam-lookahead-for-the-best-of-2dc83f79a48d]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Flessw2020\u002FRanger-Deep-Learning-Optimizer\n\n\nRangerQH\n--------\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_RangerQH.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_RangerQH.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.RangerQH(\n        m.parameters(),\n        lr=1e-3,\n        betas=(0.9, 0.999),\n        nus=(.7, 1.0),\n        weight_decay=0.0,\n        k=6,\n        alpha=.5,\n        decouple_weight_decay=False,\n        eps=1e-8,\n    )\n    optimizer.step()\n\n\n**Paper**: *Quasi-hyperbolic momentum and Adam for deep learning* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Flessw2020\u002FRanger-Deep-Learning-Optimizer\n\n\nRangerVA\n--------\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_RangerVA.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_RangerVA.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.RangerVA(\n        m.parameters(),\n        lr=1e-3,\n        alpha=0.5,\n        k=6,\n        n_sma_threshhold=5,\n        betas=(.95, 0.999),\n        eps=1e-5,\n        weight_decay=0,\n        amsgrad=True,\n        transformer='softplus',\n        smooth=50,\n        grad_transformer='square'\n    )\n    optimizer.step()\n\n\n**Paper**: *Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.00700v2]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Flessw2020\u002FRanger-Deep-Learning-Optimizer\n\n\nSGDP\n----\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SGDP.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SGDP.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.SGDP(\n        m.parameters(),\n        lr= 1e-3,\n        momentum=0,\n        dampening=0,\n        weight_decay=1e-2,\n        nesterov=False,\n        delta = 0.1,\n        wd_ratio = 0.1\n    )\n    optimizer.step()\n\n\n**Paper**: *Slowing Down the Weight Norm Increase in Momentum-based Optimizers.* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fclovaai\u002FAdamP\n\n\nSGDW\n----\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SGDW.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SGDW.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.SGDW(\n        m.parameters(),\n        lr= 1e-3,\n        momentum=0,\n        dampening=0,\n        weight_decay=1e-2,\n        nesterov=False,\n    )\n    optimizer.step()\n\n\n**Paper**: *SGDR: Stochastic Gradient Descent with Warm Restarts* (2017) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.03983]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\u002Fpull\u002F22466\n\n\nSWATS\n-----\n\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n| .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SWATS.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SWATS.png  |\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.SWATS(\n        model.parameters(),\n        lr=1e-1,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        weight_decay= 0.0,\n        amsgrad=False,\n        nesterov=False,\n    )\n    optimizer.step()\n\n\n**Paper**: *Improving Generalization Performance by Switching from Adam to SGD* (2017) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.07628]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002FMrpatekful\u002Fswats\n\n\nShampoo\n-------\n\n+-----------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Shampoo.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Shampoo.png  |\n+-----------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Shampoo(\n        m.parameters(),\n        lr=1e-1,\n        momentum=0.0,\n        weight_decay=0.0,\n        epsilon=1e-4,\n        update_freq=1,\n    )\n    optimizer.step()\n\n\n**Paper**: *Shampoo: Preconditioned Stochastic Tensor Optimization* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09568]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002Fmoskomule\u002Fshampoo.pytorch\n\n\nYogi\n----\n\nYogi is optimization algorithm based on ADAM with more fine grained effective\nlearning rate control, and has similar theoretical guarantees on convergence as ADAM.\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Yogi.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Yogi.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Yogi(\n        m.parameters(),\n        lr= 1e-2,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        initial_accumulator=1e-6,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n\n**Paper**: *Adaptive Methods for Nonconvex Optimization* (2018) [https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8186-adaptive-methods-for-nonconvex-optimization]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002F4rtemi5\u002FYogi-Optimizer_Keras\n\n\nAdam (PyTorch built-in)\n-----------------------\n\n+---------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Adam.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Adam.png  |\n+---------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\nSGD (PyTorch built-in)\n----------------------\n\n+--------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SGD.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SGD.png  |\n+--------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n\n.. _Python: https:\u002F\u002Fwww.python.org\n.. _PyTorch: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\n.. _Rastrigin: https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FRastrigin_function\n.. _Rosenbrock: https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FRosenbrock_function\n.. _benchmark: https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTest_functions_for_optimization\n.. _optim: https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Foptim.html\n","torch-optimizer\n===============\n.. image:: https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fworkflows\u002FCI\u002Fbadge.svg\n   :target: https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Factions?query=workflow%3ACI\n   :alt: 主分支的 GitHub Actions 状态\n.. image:: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjettify\u002Fpytorch-optimizer\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg\n    :target: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjettify\u002Fpytorch-optimizer\n.. image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftorch-optimizer.svg\n    :target: https:\u002F\u002Fpypi.org\u002Fproject\u002Ftorch-optimizer\n.. image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fpytorch-optimizer\u002Fbadge\u002F?version=latest\n    :target: https:\u002F\u002Fpytorch-optimizer.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest\n    :alt: 文档状态\n.. 
image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftorch-optimizer.svg\n    :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Ftorch-optimizer\n.. image:: https:\u002F\u002Fstatic.deepsource.io\u002Fdeepsource-badge-light-mini.svg\n    :target: https:\u002F\u002Fdeepsource.io\u002Fgh\u002Fjettify\u002Fpytorch-optimizer\u002F?ref=repository-badge\n\n\n**torch-optimizer** -- 一个与 PyTorch_ 兼容、并基于 optim_ 模块的优化器集合。\n\n\n简单示例\n--------------\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.DiffGrad(model.parameters(), lr=0.001)\n    optimizer.step()\n\n\n安装\n------------\n安装过程非常简单，只需执行::\n\n    $ pip install torch_optimizer\n\n\n文档\n-------------\nhttps:\u002F\u002Fpytorch-optimizer.rtfd.io\n\n\n引用\n--------\n请引用这些优化算法的原始作者。如果您喜欢这个包，请这样引用：\n\n    @software{Novik_torchoptimizers,\n    \ttitle        = {{torch-optimizer -- PyTorch 的优化算法集合。}},\n    \tauthor       = {Novik, Mykola},\n    \tyear         = 2020,\n    \tmonth        = 1,\n    \tversion      = {1.0.1}\n    }\n\n或者使用 GitHub 提供的“引用此仓库”按钮。\n\n\n支持的优化器\n====================\n\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `A2GradExp`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `A2GradInc`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `A2GradUni`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AccSGD`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05591                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdaBelief`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07468                                                                                                     
|\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdaBound`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.09843                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdaMod`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12249                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Adafactor`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.04235                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Adahessian`_ | https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.00719                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AdamP`_      | https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `AggMo`_      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00325                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Apollo`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.13586                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |       
                                                                                                                               |\n| `DiffGrad`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11015                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Lamb`_       | https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00962                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Lookahead`_  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.08610                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `MADGRAD`_    | https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11075                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `NovoGrad`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11286                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `PID`_        | https:\u002F\u002Fwww4.comp.polyu.edu.hk\u002F~cslzhang\u002Fpaper\u002FCVPR18_PID.pdf                                                                        |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `QHAdam`_     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `QHM`_        | 
https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `RAdam`_      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03265                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Ranger`_     | https:\u002F\u002Fmedium.com\u002F@lessw\u002Fnew-deep-learning-optimizer-ranger-synergistic-combination-of-radam-lookahead-for-the-best-of-2dc83f79a48d |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `RangerQH`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `RangerVA`_   | https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.00700v2                                                                                                   |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `SGDP`_       | https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `SGDW`_       | https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.03983                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `SWATS`_      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.07628                                                                                                     
|\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Shampoo`_    | https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09568                                                                                                     |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n|               |                                                                                                                                      |\n| `Yogi`_       | https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8186-adaptive-methods-for-nonconvex-optimization                                                        |\n+---------------+--------------------------------------------------------------------------------------------------------------------------------------+\n\n可视化\n--------------\n可视化帮助我们观察不同算法如何处理简单的情况，例如：鞍点、局部最小值、山谷等，并可能提供对算法内部运作的有趣见解。选择 Rosenbrock_ 和 Rastrigin_ 这两个基准函数是因为：\n\n* Rosenbrock_（也称为香蕉函数）是一个非凸函数，它有一个全局最小值 `(1.0, 1.0)`。这个全局最小值位于一个狭长、抛物线形状的平坦山谷中。找到这个山谷并不难，但要收敛到全局最小值却非常困难。优化算法可能会过度关注其中一个坐标轴，而难以沿着相对平坦的山谷前进。\n\n.. image:: https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002F3\u002F32\u002FRosenbrock_function.svg\n\n* Rastrigin_ 是一个非凸函数，在 `(0.0, 0.0)` 处有一个全局最小值。由于其庞大的搜索空间和大量的局部最小值，寻找该函数的最小值是一个相当困难的问题。\n\n.. image:: https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002F8\u002F8b\u002FRastrigin_function.png\n\n每个优化器都执行 `501` 次优化步骤。学习率是通过超参数搜索算法找到的最佳值，其余调优参数则使用默认值。扩展脚本并调整其他优化器参数非常容易。\n\n\n.. code::\n\n    python examples\u002Fviz_optimizers.py\n\n\n警告\n-------\n不要仅凭可视化结果来选择优化器。不同的优化方法具有独特的特性，可能针对不同的目的进行设计，或者需要显式的学习率调度等。最好的办法是在你的具体问题上尝试几种优化器，看看它们是否能提升性能。\n\n如果你不确定该使用哪种优化器，可以先从内置的 SGD 或 Adam 开始。一旦训练逻辑准备就绪并建立了基线分数，再更换优化器，看看是否有改进。\n\n\nA2GradExp\n---------\n\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_A2GradExp.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_A2GradExp.png  |\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.A2GradExp(\n        model.parameters(),\n        kappa=1000.0,\n        beta=10.0,\n        lips=10.0,\n        rho=0.5,\n    )\n    optimizer.step()\n\n\n**论文**: *最优自适应与加速随机梯度下降* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fseverilov\u002FA2Grad_optimizer\n\n\nA2GradInc\n---------\n\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n| .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_A2GradInc.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_A2GradInc.png  |\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.A2GradInc(\n        model.parameters(),\n        kappa=1000.0,\n        beta=10.0,\n        lips=10.0,\n    )\n    optimizer.step()\n\n\n**论文**: *最优自适应与加速随机梯度下降* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fseverilov\u002FA2Grad_optimizer\n\n\nA2GradUni\n---------\n\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_A2GradUni.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_A2GradUni.png  |\n+--------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.A2GradUni(\n        model.parameters(),\n        kappa=1000.0,\n        beta=10.0,\n        lips=10.0,\n    )\n    optimizer.step()\n\n\n**论文**: *最优自适应与加速随机梯度下降* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.00553]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fseverilov\u002FA2Grad_optimizer\n\n\nAccSGD\n------\n\n+-----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AccSGD.png   |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AccSGD.png  |\n+-----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AccSGD(\n        model.parameters(),\n        lr=1e-3,\n        kappa=1000.0,\n        xi=10.0,\n        small_const=0.7,\n        weight_decay=0\n    )\n    optimizer.step()\n\n\n**论文**: *关于现有动量方案在随机优化中的不足* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05591]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Frahulkidambi\u002FAccSGD\n\n\nAdaBelief\n---------\n\n+-------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdaBelief.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdaBelief.png |\n+-------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdaBelief(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        weight_decay=0,\n        amsgrad=False,\n        weight_decouple=False,\n        fixed_decay=False,\n        rectify=False,\n    )\n    optimizer.step()\n\n\n**论文**: *AdaBelief优化器，根据对观测梯度的信任度自适应调整步长* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07468]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fjuntang-zhuang\u002FAdabelief-Optimizer\n\n\nAdaBound\n--------\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdaBound.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdaBound.png |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdaBound(\n        m.parameters(),\n        lr= 1e-3,\n        betas= (0.9, 0.999),\n        final_lr = 0.1,\n        gamma=1e-3,\n        eps= 1e-8,\n        weight_decay=0,\n        amsbound=False,\n    )\n    optimizer.step()\n\n\n**论文**: *带有动态学习率边界值的自适应梯度方法* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.09843]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002FLuolc\u002FAdaBound\n\nAdaMod\n------\nAdaMod方法通过自适应和动量式的上界来限制自适应学习率。动态学习率的上下界基于自适应学习率自身的指数移动平均值，这有助于平滑意外出现的大学习率，并稳定深度神经网络的训练过程。\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdaMod.png    |  .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdaMod.png   |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdaMod(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        beta3=0.999,\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n**论文**: *一种用于随机学习的自适应与动量式约束方法。* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12249]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Flancopku\u002FAdaMod\n\n\nAdafactor\n---------\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Adafactor.png |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Adafactor.png |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Adafactor(\n        m.parameters(),\n        lr= 1e-3,\n        eps2= (1e-30, 1e-3),\n        clip_threshold=1.0,\n        decay_rate=-0.8,\n        beta1=None,\n        weight_decay=0.0,\n        scale_parameter=True,\n        relative_step=True,\n        warmup_init=False,\n    )\n    optimizer.step()\n\n**论文**: *Adafactor：具有次线性内存开销的自适应学习率方法。* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.04235]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ffairseq\u002Fblob\u002Fmaster\u002Ffairseq\u002Foptim\u002Fadafactor.py\n\n\nAdahessian\n----------\n+-------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Adahessian.png |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Adahessian.png  |\n+-------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------+\n\n.. 
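code:: python\n\n    # 补充示例（非官方代码）：下面是一个假设性的完整训练步骤草图，其中 model、loss_fn、data、target 均为示意用的占位符。\n    # Adahessian 是二阶优化器，依赖对 Hessian 对角线的随机估计，因此反向传播时必须保留计算图（create_graph=True），否则 step() 无法获得所需的二阶信息。\n    import torch_optimizer as optim\n\n    optimizer = optim.Adahessian(model.parameters(), lr=1.0)\n\n    optimizer.zero_grad()\n    loss = loss_fn(model(data), target)\n    loss.backward(create_graph=True)  # 保留计算图，供 step() 估计二阶信息\n    optimizer.step()\n\n以上草图仅用于说明调用顺序；库自带的完整参数示例如下。\n\n.. 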
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Adahessian(\n        m.parameters(),\n        lr= 1.0,\n        betas= (0.9, 0.999),\n        eps= 1e-4,\n        weight_decay=0.0,\n        hessian_power=1.0,\n    )\n    loss_fn(m(input), target).backward(create_graph=True) # create_graph=True是计算Hessian矩阵所必需的\n    optimizer.step()\n\n\n**论文**: *ADAHESSIAN：一种用于机器学习的自适应二阶优化器* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.00719]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Famirgholami\u002Fadahessian\n\n\nAdamP\n------\nAdamP提出了一种简单而有效的解决方案：在应用于尺度不变权重（例如，在BN层之前的卷积权重）的Adam优化器的每次迭代中，AdamP会从更新向量中移除径向分量（即与权重向量平行的部分）。直观地说，这一操作可以防止沿径向方向的不必要更新，因为这种更新只会增加权重范数，而不会对损失的最小化产生任何贡献。\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AdamP.png     |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AdamP.png    |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AdamP(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n        delta = 0.1,\n        wd_ratio = 0.1\n    )\n    optimizer.step()\n\n**论文**: *减缓基于动量的优化器中权重范数的增长。* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fclovaai\u002FAdamP\n\n\nAggMo\n-----\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_AggMo.png     |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_AggMo.png    |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.AggMo(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.0, 0.9, 0.99),\n        weight_decay=0,\n    )\n    optimizer.step()\n\n**论文**: *聚合动量：通过被动阻尼实现稳定性。* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.00325]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002FAtheMathmo\u002FAggMo\n\n\nApollo\n------\n\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Apollo.png    |  .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Apollo.png   |\n+------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Apollo(\n        m.parameters(),\n        lr= 1e-2,\n        beta=0.9,\n        eps=1e-4,\n        warmup=0,\n        init_lr=0.01,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n**论文**: *Apollo: 一种用于非凸随机优化的自适应参数化对角拟牛顿法。* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.13586]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002FXuezheMax\u002Fapollo\n\n\nDiffGrad\n--------\n基于当前梯度与前一时刻梯度之差的优化器，其步长会针对每个参数进行调整，使得梯度变化较快的参数采用较大的步长，而梯度变化较慢的参数则采用较小的步长。\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_DiffGrad.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_DiffGrad.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.DiffGrad(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n\n**论文**: *diffGrad: 一种用于卷积神经网络的优化方法。* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11015]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fshivram1987\u002FdiffGrad\n\nLamb\n----\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Lamb.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Lamb.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Lamb(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n\n**论文**: *深度学习的大批量优化：76分钟内训练BERT* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00962]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fcybertronai\u002Fpytorch-lamb\n\nLookahead\n---------\n\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n| .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_LookaheadYogi.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_LookaheadYogi.png  |\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    # 基础优化器，可以使用任何其他优化器，如Adam或DiffGrad\n    yogi = optim.Yogi(\n        m.parameters(),\n        lr= 1e-2,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        initial_accumulator=1e-6,\n        weight_decay=0,\n    )\n\n    optimizer = optim.Lookahead(yogi, k=5, alpha=0.5)\n    optimizer.step()\n\n**论文**: *Lookahead 优化器：k 步向前，1 步向后* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.08610]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Falphadl\u002Flookahead.pytorch\n\n\nMADGRAD\n---------\n\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_MADGRAD.png        |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_MADGRAD.png        |\n+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.MADGRAD(\n        m.parameters(),\n        lr=1e-2,\n        momentum=0.9,\n        weight_decay=0,\n        eps=1e-6,\n    )\n    optimizer.step()\n\n\n**论文**: *无需妥协的自适应性：一种用于随机优化的动量化、自适应、双重平均梯度方法* (2021) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11075]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fmadgrad\n\n\nNovoGrad\n--------\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_NovoGrad.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_NovoGrad.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.NovoGrad(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n        grad_averaging=False,\n        amsgrad=False,\n    )\n    optimizer.step()\n\n\n**论文**: *用于深度网络训练的分层自适应矩随机梯度方法* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11286]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FDeepLearningExamples\u002F\n\n\nPID\n---\n\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_PID.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_PID.png  |\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.PID(\n        m.parameters(),\n        lr=1e-3,\n        momentum=0,\n        dampening=0,\n        weight_decay=1e-2,\n        integral=5.0,\n        derivative=10.0,\n    )\n    optimizer.step()\n\n\n**论文**: *基于 PID 控制器的深度网络随机优化方法* (2018) [http:\u002F\u002Fwww4.comp.polyu.edu.hk\u002F~cslzhang\u002Fpaper\u002FCVPR18_PID.pdf]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Ftensorboy\u002FPIDOptimizer\n\n\nQHAdam\n------\n\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_QHAdam.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_QHAdam.png  |\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.QHAdam(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        nus=(1.0, 1.0),\n        weight_decay=0,\n        decouple_weight_decay=False,\n        eps=1e-8,\n    )\n    optimizer.step()\n\n\n**论文**: *深度学习中的拟双曲动量和 Adam* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fqhoptim\n\n\nQHM\n---\n\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_QHM.png  |  .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_QHM.png  |\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.QHM(\n        m.parameters(),\n        lr=1e-3,\n        momentum=0,\n        nu=0.7,\n        weight_decay=1e-2,\n        weight_decay_type='grad',\n    )\n    optimizer.step()\n\n\n**论文**: *深度学习中的拟双曲动量和 Adam* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fqhoptim\n\n\nRAdam\n-----\n\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_RAdam.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_RAdam.png  |\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n\n已弃用，请使用 PyTorch_ 提供的版本。\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.RAdam(\n        m.parameters(),\n        lr= 1e-3,\n        betas=(0.9, 0.999),\n        eps=1e-8,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n**论文**: *关于自适应学习率的方差及更多* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03265]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002FLiyuanLucasLiu\u002FRAdam\n\n\nRanger\n------\n\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Ranger.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Ranger.png  |\n+----------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+\n\n.. 
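code:: python\n\n    # 补充示例（非官方代码）：如下文论文标题所述，Ranger 是 RAdam 与 Lookahead 的协同组合。\n    # 下面是一个大致等价的组合思路草图（并非与 Ranger 的实现完全一致），其中 model 为示意用的占位符；\n    # 基础优化器也可以换成 PyTorch 1.10+ 内置的 torch.optim.RAdam。\n    import torch_optimizer as optim\n\n    base = optim.RAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))\n    optimizer = optim.Lookahead(base, k=6, alpha=0.5)\n    optimizer.step()\n\n以上仅用于展示组合思路；Ranger 本身的完整参数示例如下。\n\n.. 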
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Ranger(\n        m.parameters(),\n        lr=1e-3,\n        alpha=0.5,\n        k=6,\n        N_sma_threshhold=5,\n        betas=(.95, 0.999),\n        eps=1e-5,\n        weight_decay=0\n    )\n    optimizer.step()\n\n\n**论文**: *新型深度学习优化器，Ranger：RAdam与LookAhead的协同组合，兼得两者之优* (2019) [https:\u002F\u002Fmedium.com\u002F@lessw\u002Fnew-deep-learning-optimizer-ranger-synergistic-combination-of-radam-lookahead-for-the-best-of-2dc83f79a48d]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Flessw2020\u002FRanger-Deep-Learning-Optimizer\n\n\nRangerQH\n--------\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_RangerQH.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_RangerQH.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.RangerQH(\n        m.parameters(),\n        lr=1e-3,\n        betas=(0.9, 0.999),\n        nus=(.7, 1.0),\n        weight_decay=0.0,\n        k=6,\n        alpha=.5,\n        decouple_weight_decay=False,\n        eps=1e-8,\n    )\n    optimizer.step()\n\n\n**论文**: *深度学习中的准双曲动量和Adam* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06801]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Flessw2020\u002FRanger-Deep-Learning-Optimizer\n\n\nRangerVA\n--------\n\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_RangerVA.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_RangerVA.png  |\n+------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+\n\n.. 
code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.RangerVA(\n        m.parameters(),\n        lr=1e-3,\n        alpha=0.5,\n        k=6,\n        n_sma_threshhold=5,\n        betas=(.95, 0.999),\n        eps=1e-5,\n        weight_decay=0,\n        amsgrad=True,\n        transformer='softplus',\n        smooth=50,\n        grad_transformer='square'\n    )\n    optimizer.step()\n\n\n**论文**: *校准自适应学习率以改善ADAM的收敛性* (2019) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.00700v2]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Flessw2020\u002FRanger-Deep-Learning-Optimizer\n\n\nSGDP\n----\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SGDP.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SGDP.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.SGDP(\n        m.parameters(),\n        lr= 1e-3,\n        momentum=0,\n        dampening=0,\n        weight_decay=1e-2,\n        nesterov=False,\n        delta = 0.1,\n        wd_ratio = 0.1\n    )\n    optimizer.step()\n\n\n**论文**: *减缓基于动量的优化器中权重范数的增长* (2020) [https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08217]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fclovaai\u002FAdamP\n\n\nSGDW\n----\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SGDW.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SGDW.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.SGDW(\n        m.parameters(),\n        lr= 1e-3,\n        momentum=0,\n        dampening=0,\n        weight_decay=1e-2,\n        nesterov=False,\n    )\n    optimizer.step()\n\n\n**论文**: *SGDR：带有热重启的随机梯度下降* (2017) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.03983]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\u002Fpull\u002F22466\n\n\nSWATS\n-----\n\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SWATS.png  |  .. 
image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SWATS.png  |\n+---------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.SWATS(\n        model.parameters(),\n        lr=1e-1,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        weight_decay= 0.0,\n        amsgrad=False,\n        nesterov=False,\n    )\n    optimizer.step()\n\n\n**论文**: *通过从 Adam 切换到 SGD 提升泛化性能* (2017) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.07628]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002FMrpatekful\u002Fswats\n\n\nShampoo\n-------\n\n+-----------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Shampoo.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Shampoo.png  |\n+-----------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Shampoo(\n        m.parameters(),\n        lr=1e-1,\n        momentum=0.0,\n        weight_decay=0.0,\n        epsilon=1e-4,\n        update_freq=1,\n    )\n    optimizer.step()\n\n\n**论文**: *Shampoo: 预条件随机张量优化* (2018) [https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09568]\n\n**参考代码**: https:\u002F\u002Fgithub.com\u002Fmoskomule\u002Fshampoo.pytorch\n\n\nYogi\n----\n\nYogi 是一种基于 ADAM 的优化算法，具有更精细的有效学习率控制，并且在收敛性方面与 ADAM 具有相似的理论保证。\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Yogi.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Yogi.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. 
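code:: python\n\n    # 补充示例（非官方代码）：本库的优化器遵循 PyTorch 的 optim 接口，因此可以直接配合\n    # torch.optim.lr_scheduler 中的学习率调度器使用。下面是一个假设性草图，model 为示意用的占位符。\n    import torch\n    import torch_optimizer as optim\n\n    optimizer = optim.Yogi(model.parameters(), lr=1e-2)\n    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)\n\n    optimizer.step()   # 每个训练步调用\n    scheduler.step()   # 每个 epoch 结束后调用\n\nYogi 的完整参数示例如下。\n\n.. 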
Yogi\n----\n\nYogi is an optimization algorithm based on ADAM with more fine-grained control of the effective learning rate, and it comes with similar theoretical convergence guarantees to ADAM.\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Yogi.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Yogi.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\n.. code:: python\n\n    import torch_optimizer as optim\n\n    # model = ...\n    optimizer = optim.Yogi(\n        model.parameters(),\n        lr=1e-2,\n        betas=(0.9, 0.999),\n        eps=1e-3,\n        initial_accumulator=1e-6,\n        weight_decay=0,\n    )\n    optimizer.step()\n\n\n**Paper**: *Adaptive Methods for Nonconvex Optimization* (2018) [https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8186-adaptive-methods-for-nonconvex-optimization]\n\n**Reference Code**: https:\u002F\u002Fgithub.com\u002F4rtemi5\u002FYogi-Optimizer_Keras\n\n\nAdam (PyTorch built-in)\n-----------------------\n\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_Adam.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_Adam.png  |\n+--------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------+\n\nSGD (PyTorch built-in)\n----------------------\n\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n| .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frastrigin_SGD.png  |  .. image:: https:\u002F\u002Fraw.githubusercontent.com\u002Fjettify\u002Fpytorch-optimizer\u002Fmaster\u002Fdocs\u002Frosenbrock_SGD.png  |\n+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+\n\n.. _Python: https:\u002F\u002Fwww.python.org\n.. _PyTorch: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\n.. _Rastrigin: https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FRastrigin_function\n.. _Rosenbrock: https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FRosenbrock_function\n.. _benchmark: https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTest_functions_for_optimization\n
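\nThe Rastrigin_ and Rosenbrock_ plots above come from the project's visualization examples. As a rough, self-contained illustration of such a benchmark_ run, the sketch below applies one optimizer from the collection to the two-dimensional Rosenbrock function; the starting point, learning rate and number of steps are arbitrary values chosen for illustration, not the settings used to generate the plots.\n\n.. code:: python\n\n    import torch\n    import torch_optimizer as optim\n\n    def rosenbrock(p):\n        # classic 2-D Rosenbrock function with its minimum at (1, 1)\n        x, y = p\n        return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2\n\n    p = torch.tensor([-2.0, 2.0], requires_grad=True)\n    optimizer = optim.Yogi([p], lr=1e-1)\n\n    for _ in range(500):\n        optimizer.zero_grad()\n        loss = rosenbrock(p)\n        loss.backward()\n        optimizer.step()\n\n.. 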
_optim: https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Foptim.html","# torch-optimizer 快速上手指南\n\n**torch-optimizer** 是一个为 PyTorch 设计的优化器集合库，兼容 PyTorch 原生的 `optim` 模块。它集成了多种前沿的优化算法（如 DiffGrad, Lamb, Ranger, AdaBelief 等），方便开发者在深度学习训练中直接调用。\n\n## 环境准备\n\n在使用本工具前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：支持 Python 3.6 及以上版本\n*   **核心依赖**：\n    *   `PyTorch` (torch)\n    *   `torchvision` (可选，视具体模型需求而定)\n\n请确保已正确安装 PyTorch。如果您尚未安装，建议访问 [PyTorch 官网](https:\u002F\u002Fpytorch.org\u002F) 获取适合您环境的安装命令。\n\n## 安装步骤\n\n推荐使用 pip 进行安装。为了获得更快的下载速度，中国开发者可以使用国内镜像源（如清华大学或阿里云镜像）。\n\n### 方式一：使用官方源\n```bash\npip install torch_optimizer\n```\n\n### 方式二：使用国内镜像源（推荐）\n```bash\npip install torch_optimizer -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\n`torch-optimizer` 的使用方式与 PyTorch 原生优化器完全一致。只需导入库，实例化优化器并传入模型参数即可。\n\n以下是一个最简单的使用示例，演示如何使用 `DiffGrad` 优化器：\n\n```python\nimport torch\nimport torch_optimizer as optim\n\n# 假设您已经定义了模型\n# model = MyNeuralNetwork()\n\n# 初始化优化器，这里以 DiffGrad 为例，学习率设为 0.001\noptimizer = optim.DiffGrad(model.parameters(), lr=0.001)\n\n# 在训练循环中正常使用\n# loss = criterion(output, target)\n# optimizer.zero_grad()\n# loss.backward()\noptimizer.step()\n```\n\n### 支持的优化器\n该库支持多种优化器，您只需将上述代码中的 `optim.DiffGrad` 替换为其他支持的算法名称即可，例如：\n*   `optim.Lamb`\n*   `optim.Ranger`\n*   `optim.AdaBelief`\n*   `optim.Adafactor`\n*   `optim.MADGRAD`\n\n完整列表及对应论文请参考官方文档：https:\u002F\u002Fpytorch-optimizer.readthedocs.io\u002F","某计算机视觉团队正在训练一个复杂的图像分割模型，但在调整学习率和收敛速度时遇到了瓶颈。\n\n### 没有 pytorch-optimizer 时\n- 开发者只能依赖 PyTorch 原生提供的 SGD 或 Adam 等基础优化器，难以应对损失函数曲面复杂或非凸优化的挑战。\n- 若想尝试 DiffGrad、RAdam 或 AccSGD 等前沿算法，必须手动从论文复现代码，不仅耗时且极易引入实现错误。\n- 不同优化器的接口定义不统一，每次切换实验都需要重写训练循环中的参数更新逻辑，导致代码维护成本高昂。\n- 缺乏经过社区验证的高质量实现，模型训练过程容易出现梯度爆炸或不收敛，排查问题耗费大量算力资源。\n\n### 使用 pytorch-optimizer 后\n- 团队可直接调用库中集成的 DiffGrad 或 RAdam 等先进算法，仅需一行代码即可替换原有优化器，轻松突破收敛瓶颈。\n- 无需再手动复现论文公式，所有优化器均经过严格测试与文档化，确保了数学实现的准确性与稳定性。\n- 所有优化器完全兼容 PyTorch 原生 `optim` 模块接口，切换算法时无需修改任何训练流程代码，极大提升了实验效率。\n- 借助成熟的开源实现，模型在相同数据集下的收敛速度显著提升，且训练过程更加平稳，有效减少了调参试错的时间。\n\npytorch-optimizer 通过提供一站式的前沿优化器集合，让开发者能零成本地将理论成果转化为实际的模型性能提升。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjettify_pytorch-optimizer_d730cdc8.png","jettify",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjettify_a6bb158a.png","uflight","Greater Boston, MA","nickolainovik@gmail.com","https:\u002F\u002Fgithub.com\u002Fjettify",[80,84],{"name":81,"color":82,"percentage":83},"Python","#3572A5",99.2,{"name":85,"color":86,"percentage":87},"Makefile","#427819",0.8,3169,310,"2026-04-19T06:46:32","Apache-2.0",1,"","未说明",{"notes":96,"python":94,"dependencies":97},"该工具是 PyTorch 优化器的集合，兼容 PyTorch 的 optim 模块。安装方式简单，仅需运行 'pip install torch_optimizer'。README 中未明确列出具体的操作系统、GPU、内存或 Python 版本限制，通常意味着其依赖宿主环境（即已安装 PyTorch 的环境）的配置。具体支持的优化器算法列表详见文档。",[98],"torch",[14],[101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116],"pytorch","optimizer","diffgrad","adamod","lamb","yogi","accsgd","adabound","novograd","shampoo","lookahead","swats","sgdp","adabelief","apollo","hacktoberfest","2026-03-27T02:49:30.150509","2026-04-20T17:02:29.953745",[120,125,130,135,140,145,150],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},44979,"如何在 PyTorch 1.6+ 版本中解决 Lamb 优化器的弃用警告（UserWarning: add_ is deprecated）？","该警告是由于 PyTorch 1.6 改变了 `add_` 函数的参数顺序导致的。维护者已在后续版本中修复了此问题。请升级 `torch-optimizer` 到最新版本（例如 0.0.1a15 或更高），即可消除警告。可以通过 `pip install --upgrade torch-optimizer` 
进行更新。","https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fissues\u002F143",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},44980,"使用 GPU 训练模型时遇到错误，如何正确配置设备？","确保在代码中正确指定设备。通常使用 `device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')`。如果之前代码中包含导致冲突的特定行（如某些硬编码的设备转移逻辑），尝试移除该行。有用户反馈移除特定行后 GPU 正常工作。此外，部分优化器（如 Shampoo）的早期版本可能不支持 GPU 上的 SVD 运算，需确认使用的是已修复的最新版本。","https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fissues\u002F257",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},44981,"YOGI 优化器的 `exp_avg` 和 `exp_avg_sq` 应该如何正确初始化？","根据 YOGI 论文及社区修正：\n1. `exp_avg_sq`（二阶矩估计）应初始化为初始点梯度平方的平均值（基于一个较大的 mini-batch）。\n2. `exp_avg`（一阶矩估计）应初始化为零，而不是 `initial_accumulator`。\n维护者已接受相关 PR 修正了代码中的初始化逻辑。","https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fissues\u002F77",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},44982,"为什么可视化示例中的优化器比较结果可能不准确或不公平？","原有的可视化示例仅运行 100 次更新，这有利于初期收敛快的优化器，而对初期收敛慢但后期表现好的优化器不利。此外，未使用显式的学习率调度（Learning Rate Schedule），而像 AdaBound 和 RAdam 等优化器隐式包含了学习率衰减，这导致比较不公平。建议在进行超参数搜索时，以最后一步点的函数值而非距离作为目标，并增加更新步数或使用标准的学习率调度策略以获得更公正的比较。","https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fissues\u002F219",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},44983,"如何在项目中添加并使用 Adahessian 优化器？","Adahessian 优化器已被合并到项目中。由于它是二阶优化器，通常需要比一阶优化器更大的学习率（例如在测试中最佳学习率约为 23-32）。使用时请确保安装了包含该优化器的最新版本 `torch-optimizer`。注意二阶优化器计算开销较大，但在某些任务上能提供 SOTA 性能。","https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fissues\u002F169",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},44984,"如何在可视化对比中加入 Adam 或 SGD 等基准优化器？","项目已支持在可视化脚本中添加基准优化器。用户可以通过修改示例代码（如 `examples\u002Fviz_optimizers.py`）来包含 Adam 和 SGD。注意，为了获得可复现的结果，可能需要调整梯度裁剪值（例如设置为 1.0）。由于优化器的表现高度依赖数据和超参数，建议用户在本地重新运行实验以获取针对特定场景的基准对比。","https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fissues\u002F71",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},44985,"LAMB 优化器中的权重范数截断值（clamp=10）是否可以自定义？","早期版本中 LAMB 的权重范数截断值被硬编码为 10。社区已提出建议将其改为可配置参数。虽然具体实现可能随版本更新而变化，但建议使用 `torch.norm()` 来计算范数，并关注最新版本是否已暴露该参数（如 `max_grad_norm` 或类似名称）以便用户自定义截断阈值。","https:\u002F\u002Fgithub.com\u002Fjettify\u002Fpytorch-optimizer\u002Fissues\u002F64",[156,161,166,171,176,181,186,191,196,201,206,211,216,220,224,228,232,236,240,244],{"id":157,"version":158,"summary_zh":159,"released_at":160},359875,"v0.3.0","# 变更\r\n* 撤销移除 RAdam 的更改。","2021-10-31T02:57:04",{"id":162,"version":163,"summary_zh":164,"released_at":165},359876,"v0.2.0","# 变更\r\n\r\n* 移除 RAdam 优化器，因为它已包含在 PyTorch 中。\r\n* 不再将测试作为可安装的包包含在内。\r\n* 在可能的情况下保持内存布局不变。\r\n* 添加 MADGRAD 优化器。","2021-10-26T01:28:35",{"id":167,"version":168,"summary_zh":169,"released_at":170},359877,"v0.1.0","# 变更日志\r\n\r\n* 初始发布。\r\n* 新增对 A2GradExp、A2GradInc、A2GradUni、AccSGD、AdaBelief、\r\n  AdaBound、AdaMod、Adafactor、Adahessian、AdamP、AggMo、Apollo、\r\n  DiffGrad、Lamb、Lookahead、NovoGrad、PID、QHAdam、QHM、RAdam、Ranger、\r\n  RangerQH、RangerVA、SGDP、SGDW、SWATS、Shampoo、Yogi 的支持。","2021-01-01T16:49:38",{"id":172,"version":173,"summary_zh":174,"released_at":175},359878,"v0.0.1a17","# 更改\n4357dcf 改进部署工作流\n9dfd2fc 添加部署工作流 (#230)\n31f60c9 迁移到 GitHub Actions (#228)\n7276b69 提高参数验证的测试覆盖率。为 SWATS 添加权重衰减验证 (#225)\n2e013a0 改进学习率值验证的测试。\nccc920d 将 Sphinx 从 3.3.0 升级到 3.3.1\n788d1e8 将 Matplotlib 从 3.3.2 升级到 3.3.3\n3adf578 添加优化器选择警告 (#222)\n5b2b59e 将 NumPy 从 1.19.3 升级到 1.19.4\n9123897 将 Sphinx 从 3.2.1 升级到 3.3.0\ncc8acc8 在 README 中添加 Apollo 优化器 (#218)\n1dcec1d 添加 Apollo 优化器测试\n03e6dad 添加 Apollo 
优化器实现\n15cc6da (add-apollo-optimizer) 合并 github.com:jettify\u002Fpytorch-optimizer 的 master 分支\nce7b51c 将 mypy 从 0.782 升级到 0.790\n3da1672 将 NumPy 从 1.19.2 升级到 1.19.3 (#216)\n6a1f550 将 pytest 从 6.1.1 升级到 6.1.2\n3327ebb 将 torchvision 从 0.7.0 升级到 0.8.1\n40a7ba5 将 PyTorch 从 1.6.0 升级到 1.7.0\na61c8ad 修正 A2Grad 论文的链接 (#211)\n8ce32cf 更新版本号","2020-11-27T01:49:48",{"id":177,"version":178,"summary_zh":179,"released_at":180},359879,"v0.0.1a16","# 更改\nefeea8f 将 adabelief 添加到 README 中。（#210）\n9c72aa0 添加 adabelief 优化器（#209）\n0d94e4e 更新 CONTRIBUTING.rst（#207）\n3a4abcd 将 sphinx-autodoc-typehints 从 1.11.0 升级到 1.11.1\nf08f793 使用 a2grad 优化器更新 README（#204）\n9003a68 添加 A2GradInc 和 A2GradExp 优化器。（#203）\nd221899 将 hyperopt 从 0.2.4 升级到 0.2.5\n35f14f6 将 ipdb 从 0.13.3 升级到 0.13.4\n1005b6b 将 flake8 从 3.8.3 升级到 3.8.4\n9ad5102 将 pytest 从 6.1.0 升级到 6.1.1\nbd71c05 添加 a2grad 优化器（#199）\nba60ddb 将 pytest 从 6.0.2 升级到 6.1.0（#197）\n1e142d4 将 matplotlib 从 3.3.1 升级到 3.3.2\n02f0d90 将 pytest 从 6.0.1 升级到 6.0.2\n1ba47a8 将 numpy 从 1.19.1 升级到 1.19.2\nbe69ae7 添加 adafactor 测试用例。（#192）\ndb002c6 合并 pull request #191，来自 matech96 的 patch-1 分支\n7edf138 yogi 文档修复\na2905c5 合并 pull request #190，来自 jettify 的 update-dock-with-new-optimizers 分支\nd31dbc9 (origin\u002Fupdate-dock-with-new-optimizers, update-dock-with-new-optimizers) 使用新优化器更新文档。\n2308555 合并 pull request #189，来自 jettify 的 add-adafactor-optimizer 分支\nae0118b (origin\u002Fadd-adafactor-optimizer, add-adafactor-optimizer) 将 adafactor 添加到 README 中。\nf7237ee 添加 adafactor 测试\ndcaa63f 添加 adafactor 实现\nb6fdd4e 合并 pull request #186，来自 jettify 的 fix-warning-in-sgdp-and-adamp 分支\ne1c2d2d (origin\u002Ffix-warning-in-sgdp-and-adamp, fix-warning-in-sgdp-and-adamp) 修复 adamp 中的警告\n7eaa2f3 修复 sgdp 中的警告\n75d625a 修复 adabound 中的警告（#184）\n30ef850 将 black 从 19.10b0 升级到 20.8b1\na149682 修复 qhadam 中的警告（#182）\n8787b6b 将 swats 添加到 README 中（#181）\nc7bb0fd 添加 SWATS 优化器（#178）\n5835261 将 pytest-cov 从 2.10.0 升级到 2.10.1\n519473d 将 sphinx 从 3.2.0 升级到 3.2.1\n1b705d0 修复在指定 weight_decay 时遗漏的警告（#177）\n17c754e 修复 shampoo 中的警告（#176）\n13db710 修复 accsgd 中的警告（#175）\n51ffdc6 将 matplotlib 从 3.3.0 升级到 3.3.1\n913543a 修复 yogi 优化器中的警告。（#173）\nce54a95 修复 adamod 中的警告（#172）\na144f4b 版本升级","2020-10-20T00:29:01",{"id":182,"version":183,"summary_zh":184,"released_at":185},359880,"v0.0.1a15","# 更改\n279d420 将 sphinx 从 3.1.2 升级到 3.2.0\ne345295 修复 sgdw 中的警告 (#168)\n077c72d 修复 qhm 中的警告 (#167)\nb6d87b2 修复 radam 中的警告 (#166)\n03ee2a4 修复 novograd 优化器中的警告 (#165)\n19026f5 解决 pid 优化器中的警告 (#164)\n92ac42e 修复 lamb 优化器中的警告 (#163)\n0df1686 修复与 PyTorch 1.6.0 的兼容性 (#162)\n6a74cee 重新格式化代码并整理导入语句 (#160)\n33a8bbe 在 README 中添加 aggmo (#159)\nb4cc233 将 pytest 从 6.0.0 升级到 6.0.1\na48e955 使 setup.py 更加健壮，并升级 numpy 版本 (#157)\na2749e2 将 torchvision 从 0.6.1 升级到 0.7.0\nb20d9b3 将 pytest 从 5.4.3 升级到 6.0.0\na545011 添加 aggmo 优化器 (#153)\n1552465 将 matplotlib 从 3.2.2 升级到 3.3.0\n72d4e35 提升开发版本","2020-08-11T02:08:10",{"id":187,"version":188,"summary_zh":189,"released_at":190},359881,"v0.0.1a14","e90a185 版本号升级\ne21b422 将 sphinx 从 3.1.1 升级到 3.1.2\n1b70e4e 修复 numpy 版本问题 (#148)\n9a9d233 添加 SGDP 优化器 (#145)\n8452433 将 numpy 从 1.18.5 升级到 1.19.0\n40c0723 将 ipython 从 7.15.0 升级到 7.16.1\na891371 将 ipdb 从 0.13.2 升级到 0.13.3\n8f5c382 添加 AdamP 优化器 (#133)\ncae9bde 将 mypy 从 0.781 升级到 0.782\n17da984 将 sphinx-autodoc-typehints 从 1.10.3 升级到 1.11.0\n387581d 将 mypy 从 0.780 升级到 0.781\na291360 将 torchvision 从 0.6.0 升级到 0.6.1\nfd5badc 将 torch 从 1.5.0 升级到 1.5.1\n144e72e 将 matplotlib 从 3.2.1 升级到 3.2.2","2020-07-13T01:30:24",{"id":192,"version":193,"summary_zh":194,"released_at":195},359882,"v0.0.1a13","# 
更改\n3c0dd77 在 README 中添加文档引用。\n4e5548e 将 pytest-cov 从 2.9.0 升级到 2.10.0\n861aa98 将 sphinx 从 3.1.0 升级到 3.1.1\n219ae8a 将 flake8 从 3.8.2 升级到 3.8.3\n81107ab 将 sphinx 从 3.0.4 升级到 3.1.0\ne3b059c 在 README 中添加 lookahead 优化器。（#128）\n8ab662a 准备 lookahead 优化器发布。\n29bd821 将 ipython 从 7.14.0 升级到 7.15.0\n5f7e4e8 将 mypy 从 0.770 升级到 0.780\n982a052 将 numpy 从 1.18.4 升级到 1.18.5\n1e41844 将 pytest 从 5.4.2 升级到 5.4.3\n96f7257 添加文档字符串并重新格式化代码。\nda9e0f3 修复 linter。\n8fda937 添加项目 URL\nba33395 正确暴露 shampoo。\n6d194c1 将 sphinx 从 3.0.3 升级到 3.0.4\nb3adae7 更新文档 (#122)\nbfa8358 将 flake8 从 3.8.1 升级到 3.8.2\n2fd60a8 将 pytest-cov 从 2.8.1 升级到 2.9.0\n542dd46 添加 read the docs 徽章\nc77f808 使测试更稳定 (#119)\n18742bf 将 flake8-quotes 从 3.0.0 升级到 3.2.0\n267b8c5 在 setup.py 中添加 python3.8\nd14a85c 将 flake8 从 3.7.9 升级到 3.8.1\n2d9da19 更新 diffgrad.py (#118)\ned84071 将 pytest 从 5.4.1 升级到 5.4.2\n690ad22 在 readme 中添加 Shampoo 优化器。（#114）\n590b288 将 ipython 从 7.13.0 升级到 7.14.0\n096b021 将 numpy 从 1.18.3 升级到 1.18.4\n7875299 添加 shampoo 优化器。（#110）\nfff339d 将 sphinx 从 3.0.2 升级到 3.0.3\n86664f5 尝试 py3.8 (#109)\n87d9ccf 提升开发版本","2020-06-17T02:01:31",{"id":197,"version":198,"summary_zh":199,"released_at":200},359883,"v0.0.1a12","# 更改\n63393ae 将 hyperopt 从 0.2.3 升级到 0.2.4\nbfc46c4 将 torchvision 从 0.5.0 升级到 0.6.0\n660d831 将 torch 从 1.4.0 升级到 1.5.0\nf21c85a 将 numpy 从 1.18.2 升级到 1.18.3\ne8d4866 将 sphinx 从 3.0.1 升级到 3.0.2\n2cfbf20 修复 #96 问题的 RAdam 解决方案。（#103）\ndf65965 将 sphinx 从 3.0.0 升级到 3.0.1\n19408f5 修复了 torch_optimizer.get 的返回类型注解。（#101）\n243509b 将 sphinx 从 2.4.4 升级到 3.0.0\n0320faa 如果未找到优化器，则抛出异常。修复了一些 mypy 类型。（#98）\n05b2da5 提升开发版本","2020-04-26T18:07:43",{"id":202,"version":203,"summary_zh":204,"released_at":205},359884,"v0.0.1a11","# 更改\ne2425ec 重写获取优化器的函数 (#97)\n4376be1 添加 torch_optimizer.get() 方法 (#95)\nfd4f0a9 在 README 中添加 Ranger 优化器 (#94)\nf0e1f7c 添加 Rangers (#93)\n40b832c 向 setup.py 添加关键字参数\naf6d6e1 在文档字符串中添加参考代码链接\n1bd385d 将 flake8-quotes 从 2.1.1 升级到 3.0.0\n4997fa3 更新文档并修复 README。\na7e330e 对文档字符串进行小幅清理，并修正 QHM 的常量\ncb7bdb9 在 README 中添加 QHAdam (#91)\n5155f0b 使用新的优化器更新文档。(#90)\n3ef2b4f 将 matplotlib 从 3.2.0 升级到 3.2.1\n7aaaafa 提高 QHAdam 优化器的测试覆盖率 (#88)\n7ab6b72 添加 QHAdam 优化器 (#87)\nad59d3f 将 pytest 从 5.4.0 升级到 5.4.1\na5e0fdc 版本号升级","2020-04-05T18:44:03",{"id":207,"version":208,"summary_zh":209,"released_at":210},359885,"v0.0.1a10","# Changes\r\n71e9bb4 Add Python3.5 support (#85)\r\n071a73e apply black\r\n3447833 Bump pytest from 5.3.5 to 5.4.0\r\nb2e587a Bump mypy from 0.761 to 0.770\r\n64e8e48 Add QHM optimizer to the readme. (#82)\r\n86c2587 Better test coverage for QHM (#81)\r\na3609b4 Fix readme formatting\r\n4b7d230 added Adam & SGD images to README and viz script (#80)\r\n697fdb1 Add comments about yogi optimizer initialization #77 (#79)\r\n95b0f91 Add deepsourde badge.\r\nfd57585 Minor style changes.\r\n5bcc51f Bump version and minor tweaks in setup.py\r\n7003d6c Less error prone parameter validation. (#78)\r\nabb84d3 Add QHM basic implementation. (#73)\r\nea1d413 Bump sphinx from 2.4.3 to 2.4.4\r\n6a2e8f8 Tweak linter configuration and address one issue in setup.py (#74)\r\nd6a52ae Bump matplotlib from 3.1.3 to 3.2.0\r\nfa961b7 Add PID optimizer to the list of supported in README. (#70)\r\nc84d1ea Add .deepsource.toml\r\n76484ff Bump ipdb from 0.13.1 to 0.13.2","2020-03-15T21:22:49",{"id":212,"version":213,"summary_zh":214,"released_at":215},359886,"v0.0.1a9","9b80b11 Better code coverage for PID optimizer (#68)\r\na3f2fdc Change default values for yogi optimizer (#62)\r\n564584e Add grad_clip to the example and re-tune p. 
for all methods (#67)\r\n3391056 Add clamp_value (float) and debias (bool) parameters to LAMB optimizer (#65)\r\n92d6818 Add PID optimizer (#66)","2020-03-04T02:59:36",{"id":217,"version":218,"summary_zh":73,"released_at":219},359887,"v0.0.1a8","2020-03-02T02:07:29",{"id":221,"version":222,"summary_zh":73,"released_at":223},359888,"v0.0.1a7","2020-02-27T00:27:28",{"id":225,"version":226,"summary_zh":73,"released_at":227},359889,"v0.0.1a6","2020-02-22T03:47:08",{"id":229,"version":230,"summary_zh":73,"released_at":231},359890,"v0.0.1a5","2020-02-15T23:31:40",{"id":233,"version":234,"summary_zh":73,"released_at":235},359891,"v0.0.1a4","2020-02-11T02:58:01",{"id":237,"version":238,"summary_zh":73,"released_at":239},359892,"v0.0.1a3","2020-02-09T03:42:06",{"id":241,"version":242,"summary_zh":73,"released_at":243},359893,"v0.0.1a2","2020-02-03T02:00:37",{"id":245,"version":246,"summary_zh":73,"released_at":247},359894,"v0.0.1a1","2020-01-22T01:57:30"]