[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-catalyst-team--catalyst":3,"tool-catalyst-team--catalyst":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":80,"owner_url":81,"languages":82,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":90,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":111,"github_topics":112,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":133,"updated_at":134,"faqs":135,"releases":166},2819,"catalyst-team\u002Fcatalyst","catalyst","Accelerated deep learning R&D","Catalyst 是一个基于 PyTorch 构建的深度学习研发框架，旨在加速人工智能领域的探索与创新过程。它的核心使命是帮助开发者打破“重复编写训练循环”的枯燥循环，让团队能将宝贵精力集中在创造新模型和解决核心问题上，而非耗费在基础代码的堆砌中。\n\n针对深度学习研究中常见的实验复现难、代码复用率低以及迭代速度慢等痛点，Catalyst 提供了一套高度结构化且灵活的解决方案。它通过标准化的流程设计，确保了实验结果的可复现性，并极大提升了快速验证想法的效率。无论是调整超参数还是尝试新的网络架构，用户都能以更少的代码量完成更复杂的实验配置。\n\n这款工具特别适合 AI 研究人员、算法工程师以及需要频繁进行模型迭代的开发团队使用。其独特的技术亮点在于对 PyTorch 生态的深度集成与扩展，支持从简单的原型验证到大规模分布式训练的无缝切换。同时，Catalyst 拥有完善的文档体系和活跃的社区支持（包括 Telegram 和 Slack 频道），兼容 Linux、macOS 及 WSL 等多种操作系统，并提供了 Docker 镜像以简化环境部署。如果你希望在不牺牲灵活性的前提下提升研发效率，C","Catalyst 是一个基于 PyTorch 构建的深度学习研发框架，旨在加速人工智能领域的探索与创新过程。它的核心使命是帮助开发者打破“重复编写训练循环”的枯燥循环，让团队能将宝贵精力集中在创造新模型和解决核心问题上，而非耗费在基础代码的堆砌中。\n\n针对深度学习研究中常见的实验复现难、代码复用率低以及迭代速度慢等痛点，Catalyst 提供了一套高度结构化且灵活的解决方案。它通过标准化的流程设计，确保了实验结果的可复现性，并极大提升了快速验证想法的效率。无论是调整超参数还是尝试新的网络架构，用户都能以更少的代码量完成更复杂的实验配置。\n\n这款工具特别适合 AI 研究人员、算法工程师以及需要频繁进行模型迭代的开发团队使用。其独特的技术亮点在于对 PyTorch 生态的深度集成与扩展，支持从简单的原型验证到大规模分布式训练的无缝切换。同时，Catalyst 拥有完善的文档体系和活跃的社区支持（包括 Telegram 和 Slack 频道），兼容 Linux、macOS 及 WSL 等多种操作系统，并提供了 Docker 镜像以简化环境部署。如果你希望在不牺牲灵活性的前提下提升研发效率，Catalyst 将是一个值得信赖的选择。","\u003Cdiv align=\"center\">\n\n[![Catalyst logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_readme_da88dba8fe2a.png)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n\n**Accelerated Deep Learning R&D**\n\n[![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_readme_9ee0cb95ac54.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fcatalyst-team\u002Fcatalyst)\n[![Pipi 
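The last paragraph's claim about scaling from prototypes to distributed training maps onto Catalyst's engine abstraction. As a rough, hedged sketch (`dl.CPUEngine` appears verbatim in the VAE example in the README below, while `dl.DistributedDataParallelEngine` and the `engine=` argument to `runner.train` are assumed from the same 22.x engine API), scaling up is meant to be a one-argument change:

```python
# Hedged sketch: the same training call, scaled by swapping the engine.
# dl.CPUEngine is used in the README's VAE example below; the engine= argument
# and dl.DistributedDataParallelEngine are assumed from the same 22.x API.
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loaders = {"train": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32)}

runner = dl.SupervisedRunner()
runner.train(
    model=model,
    criterion=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=0.02),
    loaders=loaders,
    num_epochs=1,
    engine=dl.CPUEngine(),  # swap in dl.DistributedDataParallelEngine() for a multi-GPU run
)
```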
---

<div align="center">

[![Catalyst logo](https://oss.gittoolsai.com/images/catalyst-team_catalyst_readme_da88dba8fe2a.png)](https://github.com/catalyst-team/catalyst)

**Accelerated Deep Learning R&D**

[![CodeFactor](https://oss.gittoolsai.com/images/catalyst-team_catalyst_readme_9ee0cb95ac54.png)](https://www.codefactor.io/repository/github/catalyst-team/catalyst)
[![PyPI version](https://img.shields.io/pypi/v/catalyst.svg)](https://pypi.org/project/catalyst/)
[![Docs](https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Fcatalyst%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v)](https://catalyst-team.github.io/catalyst/index.html)
[![Docker](https://img.shields.io/badge/docker-hub-blue)](https://hub.docker.com/r/catalystteam/catalyst/tags)
[![PyPI Status](https://oss.gittoolsai.com/images/catalyst-team_catalyst_readme_d50896e0bc3d.png)](https://pepy.tech/project/catalyst)

[![Twitter](https://img.shields.io/badge/news-twitter-499feb)](https://twitter.com/CatalystTeam)
[![Telegram](https://img.shields.io/badge/channel-telegram-blue)](https://t.me/catalyst_team)
[![Slack](https://img.shields.io/badge/Catalyst-slack-success)](https://join.slack.com/t/catalyst-team-devs/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)
[![Github contributors](https://img.shields.io/github/contributors/catalyst-team/catalyst.svg?logo=github&logoColor=white)](https://github.com/catalyst-team/catalyst/graphs/contributors)

![codestyle](https://github.com/catalyst-team/catalyst/workflows/codestyle/badge.svg?branch=master&event=push)
![docs](https://github.com/catalyst-team/catalyst/workflows/docs/badge.svg?branch=master&event=push)
![catalyst](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
![integrations](https://github.com/catalyst-team/catalyst/workflows/integrations/badge.svg?branch=master&event=push)

[![python](https://img.shields.io/badge/python_3.6-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![python](https://img.shields.io/badge/python_3.7-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![python](https://img.shields.io/badge/python_3.8-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)

[![os](https://img.shields.io/badge/Linux-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![os](https://img.shields.io/badge/OSX-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
[![os](https://img.shields.io/badge/WSL-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master&event=push)
</div>

Catalyst is a PyTorch framework for Deep Learning Research and Development.
It focuses on reproducibility, rapid experimentation, and codebase reuse
so you can create something new rather than write yet another train loop.
<br/> Break the cycle – use the Catalyst!

- [Project Manifest](https://github.com/catalyst-team/catalyst/blob/master/MANIFEST.md)
- [Framework architecture](https://miro.com/app/board/o9J_lxBO-2k=/)
- [Catalyst at AI Landscape](https://landscape.lfai.foundation/selected=catalyst)
- Part of the [PyTorch Ecosystem](https://pytorch.org/ecosystem/)

<details>
<summary>Catalyst at PyTorch Ecosystem Day 2021</summary>
<p>

[![Catalyst poster](https://oss.gittoolsai.com/images/catalyst-team_catalyst_readme_58ed610cb017.png)](https://github.com/catalyst-team/catalyst)

</p>
</details>

<details>
<summary>Catalyst at PyTorch Developer Day 2021</summary>
<p>

[![Catalyst poster](https://oss.gittoolsai.com/images/catalyst-team_catalyst_readme_e49a6714964a.png)](https://github.com/catalyst-team/catalyst)

</p>
</details>

----

## Getting started

```bash
pip install -U catalyst
```

```python
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.02)
loaders = {
    "train": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),
    "valid": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
}

runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)

# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    callbacks=[
        dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5)),
        dl.PrecisionRecallF1SupportCallback(input_key="logits", target_key="targets"),
    ],
    logdir="./logs",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
)

# model evaluation
metrics = runner.evaluate_loader(
    loader=loaders["valid"],
    callbacks=[dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5))],
)

# model inference
for prediction in runner.predict_loader(loader=loaders["valid"]):
    assert prediction["logits"].detach().cpu().numpy().shape[-1] == 10

# model post-processing
model = runner.model.cpu()
batch = next(iter(loaders["valid"]))[0]
utils.trace_model(model=model, batch=batch)
utils.quantize_model(model=model)
utils.prune_model(model=model, pruning_fn="l1_unstructured", amount=0.8)
utils.onnx_export(model=model, batch=batch, file="./logs/mnist.onnx", verbose=True)
```

### Step-by-step Guide
1. Start with the [Catalyst — A PyTorch Framework for Accelerated Deep Learning R&D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef) introduction.
2. Try the [notebook tutorials](#minimal-examples) or check the [minimal examples](#minimal-examples) for a first deep dive.
3. Read [blog posts](https://catalyst-team.com/post/) with use-cases and guides.
4. Learn machine learning with our ["Deep Learning with Catalyst" course](https://catalyst-team.com/#course).
5. And finally, [join our Slack](https://join.slack.com/t/catalyst-team-core/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw) if you want to chat with the team and contributors.


## Table of Contents
- [Getting started](#getting-started)
  - [Step-by-step Guide](#step-by-step-guide)
- [Table of Contents](#table-of-contents)
- [Overview](#overview)
  - [Installation](#installation)
  - [Documentation](#documentation)
  - [Minimal Examples](#minimal-examples)
  - [Tests](#tests)
  - [Blog Posts](#blog-posts)
  - [Talks](#talks)
- [Community](#community)
  - [Contribution Guide](#contribution-guide)
  - [User Feedback](#user-feedback)
  - [Acknowledgments](#acknowledgments)
  - [Trusted by](#trusted-by)
  - [Citation](#citation)


## Overview
Catalyst helps you implement compact
but full-featured Deep Learning pipelines with just a few lines of code.
You get a training loop with metrics, early stopping, model checkpointing,
and other features without the boilerplate.
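Early stopping and checkpointing do not appear in the Getting Started snippet above, so here is a minimal hedged sketch of wiring them in as callbacks. `dl.CheckpointCallback` and its arguments mirror the multihead example further down; `dl.EarlyStoppingCallback` and its `patience` parameter are assumed to follow the same key conventions.

```python
# Hedged sketch: early stopping plus top-k checkpointing via callbacks.
# CheckpointCallback arguments mirror the multihead example below;
# EarlyStoppingCallback(patience=...) is assumed to share the same key names.
from catalyst import dl

callbacks = [
    dl.CheckpointCallback(
        logdir="./logs/checkpoints",
        loader_key="valid", metric_key="loss", minimize=True, topk=3,
    ),
    dl.EarlyStoppingCallback(
        patience=5, loader_key="valid", metric_key="loss", minimize=True,
    ),
]
# pass callbacks=callbacks (plus any metric callbacks) into runner.train(...)
```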
### Installation

Generic installation:
```bash
pip install -U catalyst
```

<details>
<summary>Specialized versions, extra requirements might apply</summary>
<p>

```bash
pip install catalyst[ml]         # installs ML-based Catalyst
pip install catalyst[cv]         # installs CV-based Catalyst
# master version installation
pip install git+https://github.com/catalyst-team/catalyst@master --upgrade
# all available extensions are listed here:
# https://github.com/catalyst-team/catalyst/blob/master/setup.py
```
</p>
</details>

Catalyst is compatible with Python 3.7+ and PyTorch 1.4+. <br/>
Tested on Ubuntu 16.04/18.04/20.04, macOS 10.15, Windows 10, and Windows Subsystem for Linux.

### Documentation
- [master](https://catalyst-team.github.io/catalyst/)
- [22.02](https://catalyst-team.github.io/catalyst/v22.02/index.html)
- <details>
  <summary>2021 edition</summary>
  <p>

    - [21.12](https://catalyst-team.github.io/catalyst/v21.12/index.html)
    - [21.11](https://catalyst-team.github.io/catalyst/v21.11/index.html)
    - [21.10](https://catalyst-team.github.io/catalyst/v21.10/index.html)
    - [21.09](https://catalyst-team.github.io/catalyst/v21.09/index.html)
    - [21.08](https://catalyst-team.github.io/catalyst/v21.08/index.html)
    - [21.07](https://catalyst-team.github.io/catalyst/v21.07/index.html)
    - [21.06](https://catalyst-team.github.io/catalyst/v21.06/index.html)
    - [21.05](https://catalyst-team.github.io/catalyst/v21.05/index.html) ([Catalyst — A PyTorch Framework for Accelerated Deep Learning R&D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef))
    - [21.04/21.04.1](https://catalyst-team.github.io/catalyst/v21.04/index.html), [21.04.2](https://catalyst-team.github.io/catalyst/v21.04.2/index.html)
    - [21.03](https://catalyst-team.github.io/catalyst/v21.03/index.html), [21.03.1/21.03.2](https://catalyst-team.github.io/catalyst/v21.03.1/index.html)

  </p>
  </details>
- <details>
  <summary>2020 edition</summary>
  <p>

    - [20.12](https://catalyst-team.github.io/catalyst/v20.12/index.html)
    - [20.11](https://catalyst-team.github.io/catalyst/v20.11/index.html)
    - [20.10](https://catalyst-team.github.io/catalyst/v20.10/index.html)
    - [20.09](https://catalyst-team.github.io/catalyst/v20.09/index.html)
    - [20.08.2](https://catalyst-team.github.io/catalyst/v20.08.2/index.html)
    - [20.07](https://catalyst-team.github.io/catalyst/v20.07/index.html) ([dev blog: 20.07 release](https://medium.com/pytorch/catalyst-dev-blog-20-07-release-fb489cd23e14?source=friends_link&sk=7ab92169658fe9a9e1c44068f28cc36c))
    - [20.06](https://catalyst-team.github.io/catalyst/v20.06/index.html)
    - [20.05](https://catalyst-team.github.io/catalyst/v20.05/index.html), [20.05.1](https://catalyst-team.github.io/catalyst/v20.05.1/index.html)
    - [20.04](https://catalyst-team.github.io/catalyst/v20.04/index.html), [20.04.1](https://catalyst-team.github.io/catalyst/v20.04.1/index.html), [20.04.2](https://catalyst-team.github.io/catalyst/v20.04.2/index.html)

  </p>
  </details>


### Minimal Examples

- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customizing_what_happens_in_train.ipynb) Introduction tutorial "[Customizing what happens in `train`](./examples/notebooks/customizing_what_happens_in_train.ipynb)"
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customization_tutorial.ipynb) Demo with [customization examples](./examples/notebooks/customization_tutorial.ipynb)
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/reinforcement_learning.ipynb) [Reinforcement Learning with Catalyst](./examples/notebooks/reinforcement_learning.ipynb)
- [And more](./examples/)

<details>
<summary>CustomRunner – PyTorch for-loop decomposition</summary>
<p>

```python
import os
from torch import nn, optim
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = optim.Adam(model.parameters(), lr=0.02)

train_data = MNIST(os.getcwd(), train=True)
valid_data = MNIST(os.getcwd(), train=False)
loaders = {
    "train": DataLoader(train_data, batch_size=32),
    "valid": DataLoader(valid_data, batch_size=32),
}

class CustomRunner(dl.Runner):
    def predict_batch(self, batch):
        # model inference step
        return self.model(batch[0].to(self.engine.device))

    def on_loader_start(self, runner):
        super().on_loader_start(runner)
        self.meters = {
            key: metrics.AdditiveMetric(compute_on_call=False)
            for key in ["loss", "accuracy01", "accuracy03"]
        }

    def handle_batch(self, batch):
        # model train/valid step
        # unpack the batch
        x, y = batch
        # run model forward pass
        logits = self.model(x)
        # compute the loss
        loss = F.cross_entropy(logits, y)
        # compute the metrics
        accuracy01, accuracy03 = metrics.accuracy(logits, y, topk=(1, 3))
        # log metrics
        self.batch_metrics.update(
            {"loss": loss, "accuracy01": accuracy01, "accuracy03": accuracy03}
        )
        for key in ["loss", "accuracy01", "accuracy03"]:
            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)
        # run model backward pass
        if self.is_train_loader:
            self.engine.backward(loss)
            self.optimizer.step()
            self.optimizer.zero_grad()

    def on_loader_end(self, runner):
        for key in ["loss", "accuracy01", "accuracy03"]:
            self.loader_metrics[key] = self.meters[key].compute()[0]
        super().on_loader_end(runner)

runner = CustomRunner()
# model training
runner.train(
    model=model,
    optimizer=optimizer,
    loaders=loaders,
    logdir="./logs",
    num_epochs=5,
    verbose=True,
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
)
# model inference
for logits in runner.predict_loader(loader=loaders["valid"]):
    assert logits.detach().cpu().numpy().shape[-1] == 10
```
</p>
</details>

<details>
<summary>ML - linear regression</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# data
num_samples, num_features = int(1e4), int(1e1)
X, y = torch.rand(num_samples, num_features), torch.rand(num_samples)
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])

# model training
runner = dl.SupervisedRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir="./logdir",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    num_epochs=8,
    verbose=True,
)
```
</p>
</details>


<details>
<summary>ML - multiclass classification</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples,) * num_classes).to(torch.int64)

# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])

# model training
runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir="./logdir",
    num_epochs=3,
    valid_loader="valid",
    valid_metric="accuracy03",
    minimize_valid_metric=False,
    verbose=True,
    callbacks=[
        dl.AccuracyCallback(input_key="logits", target_key="targets", num_classes=num_classes),
        # uncomment for extra metrics:
        # dl.PrecisionRecallF1SupportCallback(
        #     input_key="logits", target_key="targets", num_classes=num_classes
        # ),
        # dl.AUCCallback(input_key="logits", target_key="targets"),
        # catalyst[ml] required ``pip install catalyst[ml]``
        # dl.ConfusionMatrixCallback(
        #     input_key="logits", target_key="targets", num_classes=num_classes
        # ),
    ],
)
```
</p>
</details>


<details>
<summary>ML - multilabel classification</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples, num_classes) > 0.5).to(torch.float32)

# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])

# model training
runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir="./logdir",
    num_epochs=3,
    valid_loader="valid",
    valid_metric="accuracy01",
    minimize_valid_metric=False,
    verbose=True,
    callbacks=[
        dl.BatchTransformCallback(
            transform=torch.sigmoid,
            scope="on_batch_end",
            input_key="logits",
            output_key="scores"
        ),
        dl.AUCCallback(input_key="scores", target_key="targets"),
        # uncomment for extra metrics:
        # dl.MultilabelAccuracyCallback(input_key="scores", target_key="targets", threshold=0.5),
        # dl.MultilabelPrecisionRecallF1SupportCallback(
        #     input_key="scores", target_key="targets", threshold=0.5
        # ),
    ]
)
```
</p>
</details>


<details>
<summary>ML - multihead classification</summary>
<p>

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_samples, num_features, num_classes1, num_classes2 = int(1e4), int(1e1), 4, 10
X = torch.rand(num_samples, num_features)
y1 = (torch.rand(num_samples,) * num_classes1).to(torch.int64)
y2 = (torch.rand(num_samples,) * num_classes2).to(torch.int64)

# pytorch loaders
dataset = TensorDataset(X, y1, y2)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

class CustomModule(nn.Module):
    def __init__(self, in_features: int, out_features1: int, out_features2: int):
        super().__init__()
        self.shared = nn.Linear(in_features, 128)
        self.head1 = nn.Linear(128, out_features1)
        self.head2 = nn.Linear(128, out_features2)

    def forward(self, x):
        x = self.shared(x)
        y1 = self.head1(x)
        y2 = self.head2(x)
        return y1, y2

# model, criterion, optimizer, scheduler
model = CustomModule(num_features, num_classes1, num_classes2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, [2])

class CustomRunner(dl.Runner):
    def handle_batch(self, batch):
        x, y1, y2 = batch
        y1_hat, y2_hat = self.model(x)
        self.batch = {
            "features": x,
            "logits1": y1_hat,
            "logits2": y2_hat,
            "targets1": y1,
            "targets2": y2,
        }

# model training
runner = CustomRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    num_epochs=3,
    verbose=True,
    callbacks=[
        dl.CriterionCallback(metric_key="loss1", input_key="logits1", target_key="targets1"),
        dl.CriterionCallback(metric_key="loss2", input_key="logits2", target_key="targets2"),
        dl.MetricAggregationCallback(metric_key="loss", metrics=["loss1", "loss2"], mode="mean"),
        dl.BackwardCallback(metric_key="loss"),
        dl.OptimizerCallback(metric_key="loss"),
        dl.SchedulerCallback(),
        dl.AccuracyCallback(
            input_key="logits1", target_key="targets1", num_classes=num_classes1, prefix="one_"
        ),
        dl.AccuracyCallback(
            input_key="logits2", target_key="targets2", num_classes=num_classes2, prefix="two_"
        ),
        # catalyst[ml] required ``pip install catalyst[ml]``
        # dl.ConfusionMatrixCallback(
        #     input_key="logits1", target_key="targets1", num_classes=num_classes1, prefix="one_cm"
        # ),
        # dl.ConfusionMatrixCallback(
        #     input_key="logits2", target_key="targets2", num_classes=num_classes2, prefix="two_cm"
        # ),
        dl.CheckpointCallback(
            logdir="./logs/one",
            loader_key="valid", metric_key="one_accuracy01", minimize=False, topk=1
        ),
        dl.CheckpointCallback(
            logdir="./logs/two",
            loader_key="valid", metric_key="two_accuracy03", minimize=False, topk=3
        ),
    ],
    loggers={"console": dl.ConsoleLogger(), "tb": dl.TensorboardLogger("./logs/tb")},
)
```
</p>
</details>


<details>
<summary>ML – RecSys</summary>
<p>

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl

# sample data
num_users, num_features, num_items = int(1e4), int(1e1), 10
X = torch.rand(num_users, num_features)
y = (torch.rand(num_users, num_items) > 0.5).to(torch.float32)

# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}

# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_items)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])

# model training
runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    num_epochs=3,
    verbose=True,
    callbacks=[
        dl.BatchTransformCallback(
            transform=torch.sigmoid,
            scope="on_batch_end",
            input_key="logits",
            output_key="scores"
        ),
        dl.CriterionCallback(input_key="logits", target_key="targets", metric_key="loss"),
        # uncomment for extra metrics:
        # dl.AUCCallback(input_key="scores", target_key="targets"),
        # dl.HitrateCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        # dl.MRRCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        # dl.MAPCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        # dl.NDCGCallback(input_key="scores", target_key="targets", topk=(1, 3, 5)),
        dl.BackwardCallback(metric_key="loss"),
        dl.OptimizerCallback(metric_key="loss"),
        dl.SchedulerCallback(),
        dl.CheckpointCallback(
            logdir="./logs", loader_key="valid", metric_key="loss", minimize=True
        ),
    ]
)
```
</p>
</details>


<details>
<summary>CV - MNIST classification</summary>
<p>

```python
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.02)

train_data = MNIST(os.getcwd(), train=True)
valid_data = MNIST(os.getcwd(), train=False)
loaders = {
    "train": DataLoader(train_data, batch_size=32),
    "valid": DataLoader(valid_data, batch_size=32),
}

runner = dl.SupervisedRunner()
# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    logdir="./logs",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
# uncomment for extra metrics:
#     callbacks=[
#         dl.AccuracyCallback(input_key="logits", target_key="targets", num_classes=10),
#         dl.PrecisionRecallF1SupportCallback(
#             input_key="logits", target_key="targets", num_classes=10
#         ),
#         dl.AUCCallback(input_key="logits", target_key="targets"),
#         # catalyst[ml] required ``pip install catalyst[ml]``
#         dl.ConfusionMatrixCallback(
#             input_key="logits", target_key="targets", num_classes=10
#         ),
#     ]
)
```
</p>
</details>


<details>
<summary>CV - MNIST segmentation</summary>
<p>

```python
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.losses import IoULoss


model = nn.Sequential(
    nn.Conv2d(1, 1, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(1, 1, 3, 1, 1), nn.Sigmoid(),
)
criterion = IoULoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)

train_data = MNIST(os.getcwd(), train=True)
valid_data = MNIST(os.getcwd(), train=False)
loaders = {
    "train": DataLoader(train_data, batch_size=32),
    "valid": DataLoader(valid_data, batch_size=32),
}

class CustomRunner(dl.SupervisedRunner):
    def handle_batch(self, batch):
        x = batch[self._input_key]
        x_noise = (x + torch.rand_like(x)).clamp_(0, 1)
        x_ = self.model(x_noise)
        self.batch = {self._input_key: x, self._output_key: x_, self._target_key: x}

runner = CustomRunner(
    input_key="features", output_key="scores", target_key="targets", loss_key="loss"
)
# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    callbacks=[
        dl.IOUCallback(input_key="scores", target_key="targets"),
        dl.DiceCallback(input_key="scores", target_key="targets"),
        dl.TrevskyCallback(input_key="scores", target_key="targets", alpha=0.2),
    ],
    logdir="./logdir",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
)
```
</p>
</details>


<details>
<summary>CV - MNIST metric learning</summary>
<p>

```python
import os
from torch.optim import Adam
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.data import HardTripletsSampler
from catalyst.contrib.datasets import MnistMLDataset, MnistQGDataset
from catalyst.contrib.losses import TripletMarginLossWithSampler
from catalyst.contrib.models import MnistSimpleNet
from catalyst.data.sampler import BatchBalanceClassSampler


# 1. train and valid loaders
train_dataset = MnistMLDataset(root=os.getcwd())
sampler = BatchBalanceClassSampler(
    labels=train_dataset.get_labels(), num_classes=5, num_samples=10, num_batches=10
)
train_loader = DataLoader(dataset=train_dataset, batch_sampler=sampler)

valid_dataset = MnistQGDataset(root=os.getcwd(), gallery_fraq=0.2)
valid_loader = DataLoader(dataset=valid_dataset, batch_size=1024)

# 2. model and optimizer
model = MnistSimpleNet(out_features=16)
optimizer = Adam(model.parameters(), lr=0.001)

# 3. criterion with triplets sampling
sampler_inbatch = HardTripletsSampler(norm_required=False)
criterion = TripletMarginLossWithSampler(margin=0.5, sampler_inbatch=sampler_inbatch)

# 4. training with catalyst Runner
class CustomRunner(dl.SupervisedRunner):
    def handle_batch(self, batch) -> None:
        if self.is_train_loader:
            images, targets = batch["features"].float(), batch["targets"].long()
            features = self.model(images)
            self.batch = {"embeddings": features, "targets": targets}
        else:
            images, targets, is_query = \
                batch["features"].float(), batch["targets"].long(), batch["is_query"].bool()
            features = self.model(images)
            self.batch = {"embeddings": features, "targets": targets, "is_query": is_query}

callbacks = [
    dl.ControlFlowCallbackWrapper(
        dl.CriterionCallback(input_key="embeddings", target_key="targets", metric_key="loss"),
        loaders="train",
    ),
    dl.ControlFlowCallbackWrapper(
        dl.CMCScoreCallback(
            embeddings_key="embeddings",
            labels_key="targets",
            is_query_key="is_query",
            topk=[1],
        ),
        loaders="valid",
    ),
    dl.PeriodicLoaderCallback(
        valid_loader_key="valid", valid_metric_key="cmc01", minimize=False, valid=2
    ),
]

runner = CustomRunner(input_key="features", output_key="embeddings")
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    callbacks=callbacks,
    loaders={"train": train_loader, "valid": valid_loader},
    verbose=False,
    logdir="./logs",
    valid_loader="valid",
    valid_metric="cmc01",
    minimize_valid_metric=False,
    num_epochs=10,
)
```
</p>
</details>


<details>
<summary>CV - MNIST GAN</summary>
<p>

```python
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.layers import GlobalMaxPool2d, Lambda

latent_dim = 128
generator = nn.Sequential(
    # We want to generate 128 coefficients to reshape into a 7x7x128 map
    nn.Linear(128, 128 * 7 * 7),
    nn.LeakyReLU(0.2, inplace=True),
    Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),
    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 1, (7, 7), padding=3),
    nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    GlobalMaxPool2d(),
    nn.Flatten(),
    nn.Linear(128, 1),
)

model = nn.ModuleDict({"generator": generator, "discriminator": discriminator})
criterion = {"generator": nn.BCEWithLogitsLoss(), "discriminator": nn.BCEWithLogitsLoss()}
optimizer = {
    "generator": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
    "discriminator": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
}
train_data = MNIST(os.getcwd(), train=False)
loaders = {"train": DataLoader(train_data, batch_size=32)}

class CustomRunner(dl.Runner):
    def predict_batch(self, batch):
        batch_size = 1
        # Sample random points in the latent space
        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)
        # Decode them to fake images
        generated_images = self.model["generator"](random_latent_vectors).detach()
        return generated_images

    def handle_batch(self, batch):
        real_images, _ = batch
        batch_size = real_images.shape[0]

        # Sample random points in the latent space
        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)

        # Decode them to fake images
        generated_images = self.model["generator"](random_latent_vectors).detach()
        # Combine them with real images
        combined_images = torch.cat([generated_images, real_images])

        # Assemble labels discriminating real from fake images
        labels = \
            torch.cat([torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))]).to(self.engine.device)
        # Add random noise to the labels - important trick!
        labels += 0.05 * torch.rand(labels.shape).to(self.engine.device)

        # Discriminator forward
        combined_predictions = self.model["discriminator"](combined_images)

        # Sample random points in the latent space
        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)
        # Assemble labels that say "all real images"
        misleading_labels = torch.zeros((batch_size, 1)).to(self.engine.device)

        # Generator forward
        generated_images = self.model["generator"](random_latent_vectors)
        generated_predictions = self.model["discriminator"](generated_images)

        self.batch = {
            "combined_predictions": combined_predictions,
            "labels": labels,
            "generated_predictions": generated_predictions,
            "misleading_labels": misleading_labels,
        }


runner = CustomRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    callbacks=[
        dl.CriterionCallback(
            input_key="combined_predictions",
            target_key="labels",
            metric_key="loss_discriminator",
            criterion_key="discriminator",
        ),
        dl.BackwardCallback(metric_key="loss_discriminator"),
        dl.OptimizerCallback(
            optimizer_key="discriminator",
            metric_key="loss_discriminator",
        ),
        dl.CriterionCallback(
            input_key="generated_predictions",
            target_key="misleading_labels",
            metric_key="loss_generator",
            criterion_key="generator",
        ),
        dl.BackwardCallback(metric_key="loss_generator"),
        dl.OptimizerCallback(
            optimizer_key="generator",
            metric_key="loss_generator",
        ),
    ],
    valid_loader="train",
    valid_metric="loss_generator",
    minimize_valid_metric=True,
    num_epochs=20,
    verbose=True,
    logdir="./logs_gan",
)

# visualization (matplotlib required):
# import matplotlib.pyplot as plt
# %matplotlib inline
# plt.imshow(runner.predict_batch(None)[0, 0].cpu().numpy())
```
</p>
</details>


<details>
<summary>CV - MNIST VAE</summary>
<p>

```python
import os
import torch
from torch import nn, optim
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.datasets import MNIST

LOG_SCALE_MAX = 2
LOG_SCALE_MIN = -10

def normal_sample(loc, log_scale):
    scale = torch.exp(0.5 * log_scale)
    return loc + scale * torch.randn_like(scale)

class VAE(nn.Module):
    def __init__(self, in_features, hid_features):
        super().__init__()
        self.hid_features = hid_features
        self.encoder = nn.Linear(in_features, hid_features * 2)
        self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())

    def forward(self, x, deterministic=False):
        z = self.encoder(x)
        bs, z_dim = z.shape

        loc, log_scale = z[:, : z_dim // 2], z[:, z_dim // 2 :]
        log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)

        z_ = loc if deterministic else normal_sample(loc, log_scale)
        z_ = z_.view(bs, -1)
        x_ = self.decoder(z_)

        return x_, loc, log_scale

class CustomRunner(dl.IRunner):
    def __init__(self, hid_features, logdir, engine):
        super().__init__()
        self.hid_features = hid_features
        self._logdir = logdir
        self._engine = engine

    def get_engine(self):
        return self._engine

    def get_loggers(self):
        return {
            "console": dl.ConsoleLogger(),
            "csv": dl.CSVLogger(logdir=self._logdir),
            "tensorboard": dl.TensorboardLogger(logdir=self._logdir),
        }

    @property
    def num_epochs(self) -> int:
        return 1

    def get_loaders(self):
        loaders = {
            "train": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
            "valid": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
        }
        return loaders

    def get_model(self):
        model = self.model if self.model is not None else VAE(28 * 28, self.hid_features)
        return model

    def get_optimizer(self, model):
        return optim.Adam(model.parameters(), lr=0.02)

    def get_callbacks(self):
        return {
            "backward": dl.BackwardCallback(metric_key="loss"),
            "optimizer": dl.OptimizerCallback(metric_key="loss"),
            "checkpoint": dl.CheckpointCallback(
                self._logdir,
                loader_key="valid",
                metric_key="loss",
                minimize=True,
                topk=3,
            ),
        }

    def on_loader_start(self, runner):
        super().on_loader_start(runner)
        self.meters = {
            key: metrics.AdditiveMetric(compute_on_call=False)
            for key in ["loss_ae", "loss_kld", "loss"]
        }

    def handle_batch(self, batch):
        x, _ = batch
        x = x.view(x.size(0), -1)
        x_, loc, log_scale = self.model(x, deterministic=not self.is_train_loader)

        loss_ae = F.mse_loss(x_, x)
        loss_kld = (
            -0.5 * torch.sum(1 + log_scale - loc.pow(2) - log_scale.exp(), dim=1)
        ).mean()
        loss = loss_ae + loss_kld * 0.01

        self.batch_metrics = {"loss_ae": loss_ae, "loss_kld": loss_kld, "loss": loss}
        for key in ["loss_ae", "loss_kld", "loss"]:
            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)

    def on_loader_end(self, runner):
        for key in ["loss_ae", "loss_kld", "loss"]:
            self.loader_metrics[key] = self.meters[key].compute()[0]
        super().on_loader_end(runner)

    def predict_batch(self, batch):
        random_latent_vectors = torch.randn(1, self.hid_features).to(self.engine.device)
        generated_images = self.model.decoder(random_latent_vectors).detach()
        return generated_images

runner = CustomRunner(128, "./logs", dl.CPUEngine())
runner.run()
# visualization (matplotlib required):
# import matplotlib.pyplot as plt
# %matplotlib inline
# plt.imshow(runner.predict_batch(None)[0].cpu().numpy().reshape(28, 28))
```
</p>
</details>


<details>
<summary>AutoML - hyperparameters optimization with Optuna</summary>
<p>

```python
import os
import optuna
import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST


def objective(trial):
    lr = trial.suggest_loguniform("lr", 1e-3, 1e-1)
    num_hidden = int(trial.suggest_loguniform("num_hidden", 32, 128))

    train_data = MNIST(os.getcwd(), train=True)
    valid_data = MNIST(os.getcwd(), train=False)
    loaders = {
        "train": DataLoader(train_data, batch_size=32),
        "valid": DataLoader(valid_data, batch_size=32),
    }
    model = nn.Sequential(
        nn.Flatten(), nn.Linear(784, num_hidden), nn.ReLU(), nn.Linear(num_hidden, 10)
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    runner = dl.SupervisedRunner(input_key="features", output_key="logits", target_key="targets")
    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        callbacks={
            "accuracy": dl.AccuracyCallback(
                input_key="logits", target_key="targets", num_classes=10
            ),
            # catalyst[optuna] required ``pip install catalyst[optuna]``
            "optuna": dl.OptunaPruningCallback(
                loader_key="valid", metric_key="accuracy01", minimize=False, trial=trial
            ),
        },
        num_epochs=3,
    )
    score = trial.best_score
    return score

study = optuna.create_study(
    direction="maximize",
    pruner=optuna.pruners.MedianPruner(
        n_startup_trials=1, n_warmup_steps=0, interval_steps=1
    ),
)
study.optimize(objective, n_trials=3, timeout=300)
print(study.best_value, study.best_params)
```
</p>
</details>

<details>
<summary>Config API - minimal example</summary>
<p>

```yaml title="example.yaml"
runner:
  _target_: catalyst.runners.SupervisedRunner
  model:
    _var_: model
    _target_: torch.nn.Sequential
    args:
      - _target_: torch.nn.Flatten
      - _target_: torch.nn.Linear
        in_features: 784  # 28 * 28
        out_features: 10
  input_key: features
  output_key: &output_key logits
  target_key: &target_key targets
  loss_key: &loss_key loss

run:
  # ≈ stage 1
  - _call_: train  # runner.train(...)

    criterion:
      _target_: torch.nn.CrossEntropyLoss

    optimizer:
      _target_: torch.optim.Adam
      params:  # model.parameters()
        _var_: model.parameters
      lr: 0.02

    loaders:
      train:
        _target_: torch.utils.data.DataLoader
        dataset:
          _target_: catalyst.contrib.datasets.MNIST
          root: data
          train: y
        batch_size: 32

      &valid_loader_key valid:
        &valid_loader
        _target_: torch.utils.data.DataLoader
        dataset:
          _target_: catalyst.contrib.datasets.MNIST
          root: data
          train: n
        batch_size: 32

    callbacks:
      - &accuracy_metric
        _target_: catalyst.callbacks.AccuracyCallback
        input_key: *output_key
        target_key: *target_key
        topk: [1,3,5]
      - _target_: catalyst.callbacks.PrecisionRecallF1SupportCallback
        input_key: *output_key
        target_key: *target_key

    num_epochs: 1
    logdir: logs
    valid_loader: *valid_loader_key
    valid_metric: *loss_key
    minimize_valid_metric: y
    verbose: y

  # ≈ stage 2
  - _call_: evaluate_loader  # runner.evaluate_loader(...)
    loader: *valid_loader
    callbacks:
      - *accuracy_metric
```

```sh
catalyst-run --config example.yaml
```
</p>
</details>

### Tests
All Catalyst code, features, and pipelines [are fully tested](./tests).
We also have our own [catalyst-codestyle](https://github.com/catalyst-team/codestyle) and a corresponding pre-commit hook.
During testing, we train a variety of different models: image classification,
image segmentation, text classification, GANs, and much more.
We then compare their convergence metrics in order to verify
the correctness of the training procedure and its reproducibility.
As a result, Catalyst provides fully tested and reproducible
best practices for your deep learning research and development.
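To make the convergence-comparison idea concrete, here is an illustrative sketch in that spirit (not an excerpt from the actual suite under `./tests`; it assumes `runner.loader_metrics` still holds the final validation metrics after `train` returns):

```python
# Illustrative sketch of a convergence check, not the project's real test suite.
# Assumption: runner.loader_metrics retains the last (valid) loader's metrics.
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST

def test_mnist_linear_probe_converges():
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    loaders = {
        "train": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),
        "valid": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
    }
    runner = dl.SupervisedRunner()
    runner.train(
        model=model,
        criterion=nn.CrossEntropyLoss(),
        optimizer=optim.Adam(model.parameters(), lr=0.02),
        loaders=loaders,
        num_epochs=1,
        valid_loader="valid",
        valid_metric="loss",
        minimize_valid_metric=True,
    )
    # an untrained 10-class model sits near ln(10) ~ 2.3 nats of cross-entropy;
    # one epoch of a linear probe should land well below that
    assert runner.loader_metrics["loss"] < 2.0
```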
Dependence in the Training of Deep Neural Networks\"](https:\u002F\u002Fgithub.com\u002Fyukkyo\u002FPyTorch-FilterResponseNormalizationLayer)\n- [Implementation of the paper \"Utterance-level Aggregation For Speaker Recognition In The Wild\"](https:\u002F\u002Fgithub.com\u002FptJexio\u002FSpeaker-Recognition)\n- [Implementation of the paper \"Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation\"](https:\u002F\u002Fgithub.com\u002Fvitrioil\u002FSpeech-Separation)\n- [Implementation of the paper \"ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks\"](https:\u002F\u002Fgithub.com\u002Fleverxgroup\u002Fesrgan)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Blog Posts\u003C\u002Fsummary>\n\u003Cp>\n\n- [Solving the Cocktail Party Problem using PyTorch](https:\u002F\u002Fmedium.com\u002Fpytorch\u002Faddressing-the-cocktail-party-problem-using-pytorch-305fb74560ea)\n- [Beyond fashion: Deep Learning with Catalyst (Config API)](https:\u002F\u002Fevilmartians.com\u002Fchronicles\u002Fbeyond-fashion-deep-learning-with-catalyst)\n- [Tutorial from Notebook API to Config API (RU)](https:\u002F\u002Fgithub.com\u002FBekovmi\u002FSegmentation_tutorial)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Competitions\u003C\u002Fsummary>\n\u003Cp>\n\n- [Kaggle Quick, Draw! Doodle Recognition Challenge](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-QuickDraw) - 11th place\n- [Catalyst.RL - NeurIPS 2018: AI for Prosthetics Challenge](https:\u002F\u002Fgithub.com\u002FScitator\u002Fneurips-18-prosthetics-challenge) – 3rd place\n- [Kaggle Google Landmark 2019](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-Google-Landmark-2019) - 30th place\n- [iMet Collection 2019 - FGVC6](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-iMet) - 24th place\n- [ID R&D Anti-spoofing Challenge](https:\u002F\u002Fgithub.com\u002Fbagxi\u002Fidrnd-anti-spoofing-challenge-solution) - 14th place\n- [NeurIPS 2019: Recursion Cellular Image Classification](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-Recursion-Cellular) - 4th place\n- [MICCAI 2019: Automatic Structure Segmentation for Radiotherapy Planning Challenge 2019](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FStructSeg2019)\n  * 3rd place solution for `Task 3: Organ-at-risk segmentation from chest CT scans`\n  * and 4th place solution for `Task 4: Gross Target Volume segmentation of lung cancer`\n- [Kaggle Seversteal steel detection](https:\u002F\u002Fgithub.com\u002Fbamps53\u002Fkaggle-severstal) - 5th place\n- [RSNA Intracranial Hemorrhage Detection](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-RSNA) - 5th place\n- [APTOS 2019 Blindness Detection](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FKaggle-2019-Blindness-Detection) – 7th place\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https:\u002F\u002Fgithub.com\u002FScitator\u002Frun-skeleton-run-in-3d) – 2nd place\n- [xView2 Damage Assessment Challenge](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FxView2-Solution) - 3rd place\n\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Toolkits\u003C\u002Fsummary>\n\u003Cp>\n\n- [Catalyst.RL](https:\u002F\u002Fgithub.com\u002FScitator\u002Fcatalyst-rl-framework) – A Distributed Framework for Reproducible RL Research by [Scitator](https:\u002F\u002Fgithub.com\u002FScitator)\n- [Catalyst.Classification](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fclassification) - Comprehensive classification 
pipeline with Pseudo-Labeling by [Bagxi](https:\u002F\u002Fgithub.com\u002Fbagxi) and [Pdanilov](https:\u002F\u002Fgithub.com\u002Fpdanilov)\n- [Catalyst.Segmentation](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fsegmentation) - Segmentation pipelines - binary, semantic and instance, by [Bagxi](https:\u002F\u002Fgithub.com\u002Fbagxi)\n- [Catalyst.Detection](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fdetection) - Anchor-free detection pipeline by [Avi2011class](https:\u002F\u002Fgithub.com\u002FAvi2011class) and [TezRomacH](https:\u002F\u002Fgithub.com\u002FTezRomacH)\n- [Catalyst.GAN](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fgan) - Reproducible GANs pipelines by [Asmekal](https:\u002F\u002Fgithub.com\u002Fasmekal)\n- [Catalyst.Neuro](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fneuro) - Brain image analysis project, in collaboration with [TReNDS Center](https:\u002F\u002Ftrendscenter.org)\n- [MLComp](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fmlcomp) – Distributed DAG framework for machine learning with UI by [Lightforever](https:\u002F\u002Fgithub.com\u002Flightforever)\n- [Pytorch toolbelt](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002Fpytorch-toolbelt) - PyTorch extensions for fast R&D prototyping and Kaggle farming by [BloodAxe](https:\u002F\u002Fgithub.com\u002FBloodAxe)\n- [Helper functions](https:\u002F\u002Fgithub.com\u002Fternaus\u002Figlovikov_helper_functions) - An assorted collection of helper functions by [Ternaus](https:\u002F\u002Fgithub.com\u002Fternaus)\n- [BERT Distillation with Catalyst](https:\u002F\u002Fgithub.com\u002Felephantmipt\u002Fbert-distillation) by [elephantmipt](https:\u002F\u002Fgithub.com\u002Felephantmipt)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>Other\u003C\u002Fsummary>\n\u003Cp>\n\n- [CamVid Segmentation Example](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FCatalyst-CamVid-Segmentation-Example) - Example of semantic segmentation for the CamVid dataset\n- [Notebook API tutorial for segmentation in Understanding Clouds from Satellite Images Competition](https:\u002F\u002Fwww.kaggle.com\u002Fartgor\u002Fsegmentation-in-pytorch-using-convenient-tools\u002F)\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https:\u002F\u002Fgithub.com\u002FScitator\u002Flearning-to-move-starter-kit) – starter kit\n- [Catalyst.RL - NeurIPS 2019: Animal-AI Olympics](https:\u002F\u002Fgithub.com\u002FScitator\u002Fanimal-olympics-starter-kit) - starter kit\n- [Inria Segmentation Example](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FCatalyst-Inria-Segmentation-Example) - An example of training a segmentation model for the Inria Satellite Segmentation Challenge\n- [iglovikov_segmentation](https:\u002F\u002Fgithub.com\u002Fternaus\u002Figlovikov_segmentation) - Semantic segmentation pipeline using Catalyst\n- [Logging Catalyst Runs to Comet](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1TaG27HcMh2jyRKBGsqRXLiGUfsHVyCq6?usp=sharing) - An example of how to log metrics, hyperparameters and more from Catalyst runs to [Comet](https:\u002F\u002Fwww.comet.ml\u002Fsite\u002Fdata-scientists\u002F)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\nSee other projects at [the GitHub dependency graph](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fnetwork\u002Fdependents).\n\nIf your project implements a paper,\na notable use-case\u002Ftutorial, or a Kaggle competition solution, or\nif your code simply presents interesting results and uses Catalyst,\nwe would be happy to 
add your project to the list above!\nDo not hesitate to send us a PR with a brief description of the project similar to the above.\n\n### Contribution Guide\n\nWe appreciate all contributions.\nIf you are planning to contribute back bug-fixes, there is no need to run that by us; just send a PR.\nIf you plan to contribute new features, new utility functions, or extensions,\nplease open an issue first and discuss it with us.\n\n- Please see the [Contribution Guide](CONTRIBUTING.md) for more information.\n- By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md).\n\n\n### User Feedback\n\nWe've created `feedback@catalyst-team.com` as an additional channel for user feedback.\n\n- If you like the project and want to thank us, this is the right place.\n- If you would like to start a collaboration between your team and the Catalyst team to improve Deep Learning R&D, you are always welcome.\n- If you don't like GitHub Issues and prefer email, feel free to email us.\n- Finally, if you do not like something, please share it with us, and we can see how to improve it.\n\nWe appreciate any type of feedback. Thank you!\n\n\n### Acknowledgments\n\nSince the beginning of the Catalyst development, a lot of people have influenced it in a lot of different ways.\n\n#### Catalyst.Team\n- [Dmytro Doroshenko](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fdmytro-doroshenko-05671112a\u002F) ([ditwoo](https:\u002F\u002Fgithub.com\u002FDitwoo))\n- [Eugene Kachan](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fyauheni-kachan\u002F) ([bagxi](https:\u002F\u002Fgithub.com\u002Fbagxi))\n- [Nikita Balagansky](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fnikita-balagansky-50414a19a\u002F) ([elephantmipt](https:\u002F\u002Fgithub.com\u002Felephantmipt))\n- [Sergey Kolesnikov](https:\u002F\u002Fwww.scitator.com\u002F) ([scitator](https:\u002F\u002Fgithub.com\u002FScitator))\n\n#### Catalyst.Contributors\n- [Aleksey Grinchuk](https:\u002F\u002Fwww.facebook.com\u002Fgrinchuk.alexey) ([alexgrinch](https:\u002F\u002Fgithub.com\u002FAlexGrinch))\n- [Aleksey Shabanov](https:\u002F\u002Flinkedin.com\u002Fin\u002Faleksey-shabanov-96b351189) ([AlekseySh](https:\u002F\u002Fgithub.com\u002FAlekseySh))\n- [Alex Gaziev](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Falexgaziev\u002F) ([gazay](https:\u002F\u002Fgithub.com\u002Fgazay))\n- [Andrey Zharkov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fandrey-zharkov-8554a1153\u002F) ([asmekal](https:\u002F\u002Fgithub.com\u002Fasmekal))\n- [Artem Zolkin](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fartem-zolkin-b5155571\u002F) ([arquestro](https:\u002F\u002Fgithub.com\u002FArquestro))\n- [David Kuryakin](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fdkuryakin\u002F) ([dkuryakin](https:\u002F\u002Fgithub.com\u002Fdkuryakin))\n- [Evgeny Semyonov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fewan-semyonov\u002F) ([lightforever](https:\u002F\u002Fgithub.com\u002Flightforever))\n- [Eugene Khvedchenya](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fcvtalks\u002F) ([bloodaxe](https:\u002F\u002Fgithub.com\u002FBloodAxe))\n- [Ivan Stepanenko](https:\u002F\u002Fwww.facebook.com\u002Fistepanenko)\n- [Julia Shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina) ([julia-shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina))\n- [Nguyen Xuan Bac](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fbac-nguyen-xuan-70340b66\u002F) ([ngxbac](https:\u002F\u002Fgithub.com\u002Fngxbac))\n- [Roman 
Tezikov](http:\u002F\u002Flinkedin.com\u002Fin\u002Froman-tezikov\u002F) ([TezRomacH](https:\u002F\u002Fgithub.com\u002FTezRomacH))\n- [Valentin Khrulkov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fvkhrulkov\u002F) ([khrulkovv](https:\u002F\u002Fgithub.com\u002FKhrulkovV))\n- [Vladimir Iglovikov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Figlovikov\u002F) ([ternaus](https:\u002F\u002Fgithub.com\u002Fternaus))\n- [Vsevolod Poletaev](https:\u002F\u002Flinkedin.com\u002Fin\u002Fvsevolod-poletaev-468071165) ([hexfaker](https:\u002F\u002Fgithub.com\u002Fhexfaker))\n- [Yury Kashnitsky](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fkashnitskiy\u002F) ([yorko](https:\u002F\u002Fgithub.com\u002FYorko))\n\n\n### Trusted by\n- [Awecom](https:\u002F\u002Fwww.awecom.com)\n- Researchers at the [Center for Translational Research in Neuroimaging and Data Science (TReNDS)](https:\u002F\u002Ftrendscenter.org)\n- [Deep Learning School](https:\u002F\u002Fen.dlschool.org)\n- Researchers at [Emory University](https:\u002F\u002Fwww.emory.edu)\n- [Evil Martians](https:\u002F\u002Fevilmartians.com)\n- Researchers at the [Georgia Institute of Technology](https:\u002F\u002Fwww.gatech.edu)\n- Researchers at [Georgia State University](https:\u002F\u002Fwww.gsu.edu)\n- [Helios](http:\u002F\u002Fhelios.to)\n- [HPCD Lab](https:\u002F\u002Fwww.hpcdlab.com)\n- [iFarm](https:\u002F\u002Fifarmproject.com)\n- [Kinoplan](http:\u002F\u002Fkinoplan.io\u002F)\n- Researchers at the [Moscow Institute of Physics and Technology](https:\u002F\u002Fmipt.ru\u002Fenglish\u002F)\n- [Neuromation](https:\u002F\u002Fneuromation.io)\n- [Poteha Labs](https:\u002F\u002Fpotehalabs.com\u002Fen\u002F)\n- [Provectus](https:\u002F\u002Fprovectus.com)\n- Researchers at the [Skolkovo Institute of Science and Technology](https:\u002F\u002Fwww.skoltech.ru\u002Fen)\n- [SoftConstruct](https:\u002F\u002Fwww.softconstruct.io\u002F)\n- Researchers at [Tinkoff](https:\u002F\u002Fwww.tinkoff.ru\u002Feng\u002F)\n- Researchers at [Yandex.Research](https:\u002F\u002Fresearch.yandex.com)\n\n\n### Citation\n\nPlease use this bibtex if you want to cite this repository in your publications:\n\n    @misc{catalyst,\n        author = {Kolesnikov, Sergey},\n        title = {Catalyst - Accelerated deep learning R&D},\n        year = {2018},\n        publisher = {GitHub},\n        journal = {GitHub repository},\n        howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst}},\n    }\n","\u003Cdiv align=\"center\">\n\n[![Catalyst 
logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_readme_da88dba8fe2a.png)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n\n**加速深度学习研发**\n\n[![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_readme_9ee0cb95ac54.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fcatalyst-team\u002Fcatalyst)\n[![Pipi版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fcatalyst.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fcatalyst\u002F)\n[![文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Fcatalyst%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v)](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Findex.html)\n[![Docker](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocker-hub-blue)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fcatalystteam\u002Fcatalyst\u002Ftags)\n[![PyPI热度](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_readme_d50896e0bc3d.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fcatalyst)\n\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fnews-twitter-499feb)](https:\u002F\u002Ftwitter.com\u002FCatalystTeam)\n[![Telegram](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fchannel-telegram-blue)](https:\u002F\u002Ft.me\u002Fcatalyst_team)\n[![Slack](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCatalyst-slack-success)](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fcatalyst-team-devs\u002Fshared_invite\u002Fzt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)\n[![Github贡献者](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Fcatalyst-team\u002Fcatalyst.svg?logo=github&logoColor=white)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fgraphs\u002Fcontributors)\n\n![codestyle](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcodestyle\u002Fbadge.svg?branch=master&event=push)\n![docs](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fdocs\u002Fbadge.svg?branch=master&event=push)\n![catalyst](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcatalyst\u002Fbadge.svg?branch=master&event=push)\n![integrations](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fintegrations\u002Fbadge.svg?branch=master&event=push)\n\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.6-passing-success)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcatalyst\u002Fbadge.svg?branch=master&event=push)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.7-passing-success)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcatalyst\u002Fbadge.svg?branch=master&event=push)\n[![python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython_3.8-passing-success)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcatalyst\u002Fbadge.svg?branch=master&event=push)\n\n[![os](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinux-passing-success)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcatalyst\u002Fbadge.svg?branch=master&event=push)\n[![os](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOSX-passing-success)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcatalyst\u002Fbadge.svg?branch=master&event=push)\n[![os](https:
\u002F\u002Fimg.shields.io\u002Fbadge\u002FWSL-passing-success)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fworkflows\u002Fcatalyst\u002Fbadge.svg?branch=master&event=push)\n\u003C\u002Fdiv>\n\nCatalyst 是一个用于深度学习研究与开发的 PyTorch 框架。它专注于可复现性、快速实验以及代码库的重用，使您能够专注于创新，而不是重复编写训练循环。  \n打破循环——使用 Catalyst！\n\n- [项目宣言](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fblob\u002Fmaster\u002FMANIFEST.md)\n- [框架架构](https:\u002F\u002Fmiro.com\u002Fapp\u002Fboard\u002Fo9J_lxBO-2k=\u002F)\n- [Catalyst 在 AI Landscape 中的位置](https:\u002F\u002Flandscape.lfai.foundation\u002Fselected=catalyst)\n- [PyTorch 生态系统](https:\u002F\u002Fpytorch.org\u002Fecosystem\u002F)的一部分\n\n\u003Cdetails>\n\u003Csummary>Catalyst 在 2021 年 PyTorch 生态系统日上的展示\u003C\u002Fsummary>\n\u003Cp>\n\n[![Catalyst 海报](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_readme_58ed610cb017.png)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Catalyst 在 2021 年 PyTorch 开发者日上的展示\u003C\u002Fsummary>\n\u003Cp>\n\n[![Catalyst 海报](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_readme_e49a6714964a.png)](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n----\n\n## 快速入门\n\n```bash\npip install -U catalyst\n```\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, utils\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.02)\nloaders = {\n    \"train\": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),\n    \"valid\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n}\n\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\n\n# 模型训练\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    callbacks=[\n        dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5)),\n        dl.PrecisionRecallF1SupportCallback(input_key=\"logits\", target_key=\"targets\"),\n    ],\n    logdir=\".\u002Flogs\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n)\n\n# 模型评估\nmetrics = runner.evaluate_loader(\n    loader=loaders[\"valid\"],\n    callbacks=[dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5))],\n)\n\n# 模型推理\nfor prediction in runner.predict_loader(loader=loaders[\"valid\"]):\n    assert prediction[\"logits\"].detach().cpu().numpy().shape[-1] == 10\n\n# 模型后处理\nmodel = runner.model.cpu()\nbatch = next(iter(loaders[\"valid\"]))[0]\nutils.trace_model(model=model, batch=batch)\nutils.quantize_model(model=model)\nutils.prune_model(model=model, pruning_fn=\"l1_unstructured\", amount=0.8)\nutils.onnx_export(model=model, batch=batch, file=\".\u002Flogs\u002Fmnist.onnx\", verbose=True)\n```\n\n### 分步指南\n1. 从 [Catalyst — 一个用于加速深度学习研发的 PyTorch 框架](https:\u002F\u002Fmedium.com\u002Fpytorch\u002Fcatalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef) 的介绍开始。\n1. 尝试 [笔记本教程](#minimal-examples) 或查看 [最小示例](#minimal-examples) 进行首次深入探索。\n1. 
阅读包含用例和指南的 [博客文章](https:\u002F\u002Fcatalyst-team.com\u002Fpost\u002F)。\n1. 参加我们的 [\"使用 Catalyst 进行深度学习\" 课程](https:\u002F\u002Fcatalyst-team.com\u002F#course) 学习机器学习知识。\n1. 最后，如果您想与团队及贡献者交流，可以 [加入我们的 Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fcatalyst-team-core\u002Fshared_invite\u002Fzt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)。\n\n## 目录\n- [入门](#getting-started)\n  - [逐步指南](#step-by-step-guide)\n- [目录](#table-of-contents)\n- [概述](#overview)\n  - [安装](#installation)\n  - [文档](#documentation)\n  - [最小示例](#minimal-examples)\n  - [测试](#tests)\n  - [博客文章](#blog-posts)\n  - [演讲](#talks)\n- [社区](#community)\n  - [贡献指南](#contribution-guide)\n  - [用户反馈](#user-feedback)\n  - [致谢](#acknowledgments)\n  - [受信任的机构](#trusted-by)\n  - [引用](#citation)\n\n\n## 概述\nCatalyst 帮助您仅用几行代码即可实现紧凑但功能齐全的深度学习流水线。您无需编写大量样板代码，即可获得包含指标、早停、模型检查点等功能的训练循环。\n\n\n### 安装\n\n通用安装：\n```bash\npip install -U catalyst\n```\n\n\u003Cdetails>\n\u003Csummary>专用版本，可能需要额外依赖\u003C\u002Fsummary>\n\u003Cp>\n\n```bash\npip install catalyst[ml]         # 安装基于机器学习的 Catalyst\npip install catalyst[cv]         # 安装基于计算机视觉的 Catalyst\n# 安装主分支版本\npip install git+https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst@master --upgrade\n# 所有可用扩展在此列出：\n# https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fblob\u002Fmaster\u002Fsetup.py\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\nCatalyst 兼容：Python 3.7+、PyTorch 1.4+。\u003Cbr\u002F>\n已在 Ubuntu 16.04\u002F18.04\u002F20.04、macOS 10.15、Windows 10 以及 Windows Subsystem for Linux 上进行过测试。\n\n### 文档\n- [主分支](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002F)\n- [22.02 版本](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv22.02\u002Findex.html)\n\n- \u003Cdetails>\n  \u003Csummary>2021 年版\u003C\u002Fsummary>\n  \u003Cp>\n\n    - [21.12](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.12\u002Findex.html)\n    - [21.11](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.11\u002Findex.html)\n    - [21.10](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.10\u002Findex.html)\n    - [21.09](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.09\u002Findex.html)\n    - [21.08](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.08\u002Findex.html)\n    - [21.07](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.07\u002Findex.html)\n    - [21.06](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.06\u002Findex.html)\n    - [21.05](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.05\u002Findex.html)（[Catalyst — 用于加速深度学习研发的 PyTorch 框架](https:\u002F\u002Fmedium.com\u002Fpytorch\u002Fcatalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link&sk=885b4409aecab505db0a63b06f19dcef))\n    - [21.04\u002F21.04.1](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.04\u002Findex.html)、[21.04.2](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.04.2\u002Findex.html)\n    - [21.03](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.03\u002Findex.html)、[21.03.1\u002F21.03.2](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.03.1\u002Findex.html)\n\n  \u003C\u002Fp>\n  \u003C\u002Fdetails>\n- \u003Cdetails>\n  \u003Csummary>2020 年版\u003C\u002Fsummary>\n  \u003Cp>\n\n    - [20.12](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.12\u002Findex.html)\n    - [20.11](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.11\u002Findex.html)\n    - 
[20.10](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.10\u002Findex.html)\n    - [20.09](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.09\u002Findex.html)\n    - [20.08.2](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.08.2\u002Findex.html)\n    - [20.07](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.07\u002Findex.html)（[开发博客：20.07 发布](https:\u002F\u002Fmedium.com\u002Fpytorch\u002Fcatalyst-dev-blog-20-07-release-fb489cd23e14?source=friends_link&sk=7ab92169658fe9a9e1c44068f28cc36c)）\n    - [20.06](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.06\u002Findex.html)\n    - [20.05](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.05\u002Findex.html)、[20.05.1](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.05.1\u002Findex.html)\n    - [20.04](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.04\u002Findex.html)、[20.04.1](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.04.1\u002Findex.html)、[20.04.2](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv20.04.2\u002Findex.html)\n\n  \u003C\u002Fp>\n  \u003C\u002Fdetails>\n\n\n### 最小示例\n\n- [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fcatalyst-team\u002Fcatalyst\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fcustomizing_what_happens_in_train.ipynb) 入门教程“自定义 `train` 中发生的事情”（[customizing_what_happens_in_train.ipynb](.\u002Fexamples\u002Fnotebooks\u002Fcustomizing_what_happens_in_train.ipynb)）\n- [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fcatalyst-team\u002Fcatalyst\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fcustomization_tutorial.ipynb) 带有自定义示例的演示（[customization_tutorial.ipynb](.\u002Fexamples\u002Fnotebooks\u002Fcustomization_tutorial.ipynb)）\n- [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fcatalyst-team\u002Fcatalyst\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Freinforcement_learning.ipynb) 使用 Catalyst 进行强化学习（[reinforcement_learning.ipynb](.\u002Fexamples\u002Fnotebooks\u002Freinforcement_learning.ipynb)）\n- [更多示例](.\u002Fexamples\u002F)\n\n\u003Cdetails>\n\u003Csummary>CustomRunner – PyTorch 循环分解\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, metrics\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\noptimizer = optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nclass CustomRunner(dl.Runner):\n    def predict_batch(self, batch):\n        # 模型推理步骤\n        return self.model(batch[0].to(self.engine.device))\n\n    def on_loader_start(self, runner):\n        super().on_loader_start(runner)\n        self.meters = {\n            key: metrics.AdditiveMetric(compute_on_call=False)\n            for key in [\"loss\", \"accuracy01\", \"accuracy03\"]\n        }\n\n    def handle_batch(self, batch):\n        # 模型训练\u002F验证步骤\n      
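  # （补充说明：handle_batch 会在每个 loader 的每个批次上被调用，\n        # 训练与验证共用同一套逻辑，仅当 self.is_train_loader 为 True 时才执行反向传播与优化器步骤）\n      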
  # 解包批次数据\n        x, y = batch\n        # 执行模型前向传播\n        logits = self.model(x)\n        # 计算损失\n        loss = F.cross_entropy(logits, y)\n        # 计算指标\n        accuracy01, accuracy03 = metrics.accuracy(logits, y, topk=(1, 3))\n        # 记录指标\n        self.batch_metrics.update(\n            {\"loss\": loss, \"accuracy01\": accuracy01, \"accuracy03\": accuracy03}\n        )\n        for key in [\"loss\", \"accuracy01\", \"accuracy03\"]:\n            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)\n        # 执行模型反向传播\n        if self.is_train_loader:\n            self.engine.backward(loss)\n            self.optimizer.step()\n            self.optimizer.zero_grad()\n\n    def on_loader_end(self, runner):\n        for key in [\"loss\", \"accuracy01\", \"accuracy03\"]:\n            self.loader_metrics[key] = self.meters[key].compute()[0]\n        super().on_loader_end(runner)\n\nrunner = CustomRunner()\n\n# 模型训练\nrunner.train(\n    model=model,\n    optimizer=optimizer,\n    loaders=loaders,\n    logdir=\".\u002Flogs\",\n    num_epochs=5,\n    verbose=True,\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n)\n# 模型推理\nfor logits in runner.predict_loader(loader=loaders[\"valid\"]):\n    assert logits.detach().cpu().numpy().shape[-1] == 10\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>机器学习 - 线性回归\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# 数据\nnum_samples, num_features = int(1e4), int(1e1)\nX, y = torch.rand(num_samples, num_features), torch.rand(num_samples)\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# 模型、损失函数、优化器、学习率调度器\nmodel = torch.nn.Linear(num_features, 1)\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])\n\n# 模型训练\nrunner = dl.SupervisedRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\".\u002Flogdir\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    num_epochs=8,\n    verbose=True,\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>机器学习 - 多分类\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# 样本数据\nnum_samples, num_features, num_classes = int(1e4), int(1e1), 4\nX = torch.rand(num_samples, num_features)\ny = (torch.rand(num_samples,) * num_classes).to(torch.int64)\n\n# PyTorch 数据加载器\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# 模型、损失函数、优化器、学习率调度器\nmodel = torch.nn.Linear(num_features, num_classes)\ncriterion = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# 模型训练\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\".\u002Flogdir\",\n    num_epochs=3,\n    valid_loader=\"valid\",\n    
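# （补充说明：验证指标 accuracy03 即 top-3 准确率，\n    # 由下方 AccuracyCallback 依据 num_classes 推断的默认 topk 计算并写入指标）\n    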
valid_metric=\"accuracy03\",\n    minimize_valid_metric=False,\n    verbose=True,\n    callbacks=[\n        dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", num_classes=num_classes),\n        # 如需额外指标，可取消注释：\n        # dl.PrecisionRecallF1SupportCallback(\n        #     input_key=\"logits\", target_key=\"targets\", num_classes=num_classes\n        # ),\n        # dl.AUCCallback(input_key=\"logits\", target_key=\"targets\"),\n        # catalyst[ml] 需要 ``pip install catalyst[ml]``\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits\", target_key=\"targets\", num_classes=num_classes\n        # ),\n    ],\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>机器学习 - 多标签分类\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# 样本数据\nnum_samples, num_features, num_classes = int(1e4), int(1e1), 4\nX = torch.rand(num_samples, num_features)\ny = (torch.rand(num_samples, num_classes) > 0.5).to(torch.float32)\n\n# PyTorch 数据加载器\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# 模型、损失函数、优化器、学习率调度器\nmodel = torch.nn.Linear(num_features, num_classes)\ncriterion = torch.nn.BCEWithLogitsLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# 模型训练\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\".\u002Flogdir\",\n    num_epochs=3,\n    valid_loader=\"valid\",\n    valid_metric=\"accuracy01\",\n    minimize_valid_metric=False,\n    verbose=True,\n    callbacks=[\n        dl.BatchTransformCallback(\n            transform=torch.sigmoid,\n            scope=\"on_batch_end\",\n            input_key=\"logits\",\n            output_key=\"scores\"\n        ),\n        dl.AUCCallback(input_key=\"scores\", target_key=\"targets\"),\n        # 如需额外指标，可取消注释：\n        # dl.MultilabelAccuracyCallback(input_key=\"scores\", target_key=\"targets\", threshold=0.5),\n        # dl.MultilabelPrecisionRecallF1SupportCallback(\n        #     input_key=\"scores\", target_key=\"targets\", threshold=0.5\n        # ),\n    ]\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>机器学习 - 多头分类\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport torch\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# 样本数据\nnum_samples, num_features, num_classes1, num_classes2 = int(1e4), int(1e1), 4, 10\nX = torch.rand(num_samples, num_features)\ny1 = (torch.rand(num_samples,) * num_classes1).to(torch.int64)\ny2 = (torch.rand(num_samples,) * num_classes2).to(torch.int64)\n\n# PyTorch 数据加载器\ndataset = TensorDataset(X, y1, y2)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\nclass CustomModule(nn.Module):\n    def __init__(self, in_features: int, out_features1: int, out_features2: int):\n        super().__init__()\n        self.shared = nn.Linear(in_features, 128)\n        self.head1 = nn.Linear(128, out_features1)\n        self.head2 = nn.Linear(128, out_features2)\n\n    def forward(self, x):\n        x = 
self.shared(x)\n        y1 = self.head1(x)\n        y2 = self.head2(x)\n        return y1, y2\n\n# 模型、损失函数、优化器、学习率调度器\nmodel = CustomModule(num_features, num_classes1, num_classes2)\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters())\nscheduler = optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\nclass CustomRunner(dl.Runner):\n    def handle_batch(self, batch):\n        x, y1, y2 = batch\n        y1_hat, y2_hat = self.model(x)\n        self.batch = {\n            \"features\": x,\n            \"logits1\": y1_hat,\n            \"logits2\": y2_hat,\n            \"targets1\": y1,\n            \"targets2\": y2,\n        }\n\n# 模型训练\nrunner = CustomRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    num_epochs=3,\n    verbose=True,\n    callbacks=[\n        dl.CriterionCallback(metric_key=\"loss1\", input_key=\"logits1\", target_key=\"targets1\"),\n        dl.CriterionCallback(metric_key=\"loss2\", input_key=\"logits2\", target_key=\"targets2\"),\n        dl.MetricAggregationCallback(metric_key=\"loss\", metrics=[\"loss1\", \"loss2\"], mode=\"mean\"),\n        dl.BackwardCallback(metric_key=\"loss\"),\n        dl.OptimizerCallback(metric_key=\"loss\"),\n        dl.SchedulerCallback(),\n        dl.AccuracyCallback(\n            input_key=\"logits1\", target_key=\"targets1\", num_classes=num_classes1, prefix=\"one_\"\n        ),\n        dl.AccuracyCallback(\n            input_key=\"logits2\", target_key=\"targets2\", num_classes=num_classes2, prefix=\"two_\"\n        ),\n        # catalyst[ml] 需要 ``pip install catalyst[ml]``\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits1\", target_key=\"targets1\", num_classes=num_classes1, prefix=\"one_cm\"\n        # ),\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits2\", target_key=\"targets2\", num_classes=num_classes2, prefix=\"two_cm\"\n        # ),\n        dl.CheckpointCallback(\n            logdir=\".\u002Flogs\u002Fone\",\n            loader_key=\"valid\", metric_key=\"one_accuracy01\", minimize=False, topk=1\n        ),\n        dl.CheckpointCallback(\n            logdir=\".\u002Flogs\u002Ftwo\",\n            loader_key=\"valid\", metric_key=\"two_accuracy03\", minimize=False, topk=3\n        ),\n    ],\n    loggers={\"console\": dl.ConsoleLogger(), \"tb\": dl.TensorboardLogger(\".\u002Flogs\u002Ftb\")},\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>ML – 推荐系统\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# 样本数据\nnum_users, num_features, num_items = int(1e4), int(1e1), 10\nX = torch.rand(num_users, num_features)\ny = (torch.rand(num_users, num_items) > 0.5).to(torch.float32)\n\n# PyTorch 数据加载器\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# 模型、损失函数、优化器、学习率调度器\nmodel = torch.nn.Linear(num_features, num_items)\ncriterion = torch.nn.BCEWithLogitsLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# 模型训练\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    num_epochs=3,\n    
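# （补充说明：下方 BatchTransformCallback 在每个 batch 结束时对 logits 施加 sigmoid，\n    # 生成的 scores 供 AUC 及 Hitrate\u002FMRR\u002FMAP\u002FNDCG 等排序指标使用）\n    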
verbose=True,\n    callbacks=[\n        dl.BatchTransformCallback(\n            transform=torch.sigmoid,\n            scope=\"on_batch_end\",\n            input_key=\"logits\",\n            output_key=\"scores\"\n        ),\n        dl.CriterionCallback(input_key=\"logits\", target_key=\"targets\", metric_key=\"loss\"),\n        # 如需额外指标，请取消注释：\n        # dl.AUCCallback(input_key=\"scores\", target_key=\"targets\"),\n        # dl.HitrateCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.MRRCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.MAPCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.NDCGCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        dl.BackwardCallback(metric_key=\"loss\"),\n        dl.OptimizerCallback(metric_key=\"loss\"),\n        dl.SchedulerCallback(),\n        dl.CheckpointCallback(\n            logdir=\".\u002Flogs\", loader_key=\"valid\", metric_key=\"loss\", minimize=True\n        ),\n    ]\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>CV - MNIST 分类\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nrunner = dl.SupervisedRunner()\n# 模型训练\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    logdir=\".\u002Flogs\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n# 如需额外指标，请取消注释：\n#     callbacks=[\n#         dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", num_classes=10),\n#         dl.PrecisionRecallF1SupportCallback(\n#             input_key=\"logits\", target_key=\"targets\", num_classes=10\n#         ),\n#         dl.AUCCallback(input_key=\"logits\", target_key=\"targets\"),\n#         # catalyst[ml] 需要 ``pip install catalyst[ml]``\n#         dl.ConfusionMatrixCallback(\n#             input_key=\"logits\", target_key=\"targets\", num_classes=10\n#         ),\n#     ]\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>CV - MNIST 分割\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport os\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\nfrom catalyst.contrib.losses import IoULoss\n\n\nmodel = nn.Sequential(\n    nn.Conv2d(1, 1, 3, 1, 1), nn.ReLU(),\n    nn.Conv2d(1, 1, 3, 1, 1), nn.Sigmoid(),\n)\ncriterion = IoULoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nclass CustomRunner(dl.SupervisedRunner):\n    def handle_batch(self, batch):\n        x = batch[self._input_key]\n        x_noise = (x + torch.rand_like(x)).clamp_(0, 1)\n        x_ = self.model(x_noise)\n   
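     # （补充说明：这里以加噪图像为模型输入、原始干净图像为目标，\n        # 构成“去噪”式的分割训练设置）\n   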
     self.batch = {self._input_key: x, self._output_key: x_, self._target_key: x}\n\nrunner = CustomRunner(\n    input_key=\"features\", output_key=\"scores\", target_key=\"targets\", loss_key=\"loss\"\n)\n\n# 模型训练\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    callbacks=[\n        dl.IOUCallback(input_key=\"scores\", target_key=\"targets\"),\n        dl.DiceCallback(input_key=\"scores\", target_key=\"targets\"),\n        dl.TrevskyCallback(input_key=\"scores\", target_key=\"targets\", alpha=0.2),\n    ],\n    logdir=\".\u002Flogdir\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>CV - MNIST 度量学习\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport os\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.data import HardTripletsSampler\nfrom catalyst.contrib.datasets import MnistMLDataset, MnistQGDataset\nfrom catalyst.contrib.losses import TripletMarginLossWithSampler\nfrom catalyst.contrib.models import MnistSimpleNet\nfrom catalyst.data.sampler import BatchBalanceClassSampler\n\n\n# 1. 训练和验证数据加载器\ntrain_dataset = MnistMLDataset(root=os.getcwd())\nsampler = BatchBalanceClassSampler(\n    labels=train_dataset.get_labels(), num_classes=5, num_samples=10, num_batches=10\n)\ntrain_loader = DataLoader(dataset=train_dataset, batch_sampler=sampler)\n\nvalid_dataset = MnistQGDataset(root=os.getcwd(), gallery_fraq=0.2)\nvalid_loader = DataLoader(dataset=valid_dataset, batch_size=1024)\n\n# 2. 模型和优化器\nmodel = MnistSimpleNet(out_features=16)\noptimizer = Adam(model.parameters(), lr=0.001)\n\n# 3. 带三元组采样的损失函数\nsampler_inbatch = HardTripletsSampler(norm_required=False)\ncriterion = TripletMarginLossWithSampler(margin=0.5, sampler_inbatch=sampler_inbatch)\n\n# 4. 
使用 Catalyst Runner 进行训练\nclass CustomRunner(dl.SupervisedRunner):\n    def handle_batch(self, batch) -> None:\n        if self.is_train_loader:\n            images, targets = batch[\"features\"].float(), batch[\"targets\"].long()\n            features = self.model(images)\n            self.batch = {\"embeddings\": features, \"targets\": targets,}\n        else:\n            images, targets, is_query = \\\n                batch[\"features\"].float(), batch[\"targets\"].long(), batch[\"is_query\"].bool()\n            features = self.model(images)\n            self.batch = {\"embeddings\": features, \"targets\": targets, \"is_query\": is_query}\n\ncallbacks = [\n    dl.ControlFlowCallbackWrapper(\n        dl.CriterionCallback(input_key=\"embeddings\", target_key=\"targets\", metric_key=\"loss\"),\n        loaders=\"train\",\n    ),\n    dl.ControlFlowCallbackWrapper(\n        dl.CMCScoreCallback(\n            embeddings_key=\"embeddings\",\n            labels_key=\"targets\",\n            is_query_key=\"is_query\",\n            topk=[1],\n        ),\n        loaders=\"valid\",\n    ),\n    dl.PeriodicLoaderCallback(\n        valid_loader_key=\"valid\", valid_metric_key=\"cmc01\", minimize=False, valid=2\n    ),\n]\n\nrunner = CustomRunner(input_key=\"features\", output_key=\"embeddings\")\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    callbacks=callbacks,\n    loaders={\"train\": train_loader, \"valid\": valid_loader},\n    verbose=False,\n    logdir=\".\u002Flogs\",\n    valid_loader=\"valid\",\n    valid_metric=\"cmc01\",\n    minimize_valid_metric=False,\n    num_epochs=10,\n)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>CV - MNIST GAN\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport os\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\nfrom catalyst.contrib.layers import GlobalMaxPool2d, Lambda\n\nlatent_dim = 128\ngenerator = nn.Sequential(\n    # 我们希望生成128个系数，以便重塑为7x7x128的特征图\n    nn.Linear(128, 128 * 7 * 7),\n    nn.LeakyReLU(0.2, inplace=True),\n    Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),\n    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.Conv2d(128, 1, (7, 7), padding=3),\n    nn.Sigmoid(),\n)\ndiscriminator = nn.Sequential(\n    nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    GlobalMaxPool2d(),\n    nn.Flatten(),\n    nn.Linear(128, 1),\n)\n\nmodel = nn.ModuleDict({\"generator\": generator, \"discriminator\": discriminator})\ncriterion = {\"generator\": nn.BCEWithLogitsLoss(), \"discriminator\": nn.BCEWithLogitsLoss()}\noptimizer = {\n    \"generator\": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n    \"discriminator\": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n}\ntrain_data = MNIST(os.getcwd(), train=False)\nloaders = {\"train\": DataLoader(train_data, batch_size=32)}\n\nclass CustomRunner(dl.Runner):\n    def predict_batch(self, batch):\n        batch_size = 1\n        # 在潜在空间中随机采样点\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n        # 将其解码为假图像\n   
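     # （补充说明：此处 detach() 切断通向生成器的梯度，\n        # 使得推理采样与判别器训练都不会更新生成器参数）\n   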
     generated_images = self.model[\"generator\"](random_latent_vectors).detach()\n        return generated_images\n\n    def handle_batch(self, batch):\n        real_images, _ = batch\n        batch_size = real_images.shape[0]\n\n        # 在潜在空间中随机采样点\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n\n        # 将其解码为假图像\n        generated_images = self.model[\"generator\"](random_latent_vectors).detach()\n        # 将其与真实图像结合\n        combined_images = torch.cat([generated_images, real_images])\n\n        # 组装用于区分真假图像的标签\n        labels = \\\n            torch.cat([torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))]).to(self.engine.device)\n        # 向标签中添加随机噪声——这是一个重要的技巧！\n        labels += 0.05 * torch.rand(labels.shape).to(self.engine.device)\n\n        # 判别器前向传播\n        combined_predictions = self.model[\"discriminator\"](combined_images)\n\n        # 再次在潜在空间中随机采样点\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n        # 组装表示“所有图像均为真”的标签\n        misleading_labels = torch.zeros((batch_size, 1)).to(self.engine.device)\n\n        # 生成器前向传播\n        generated_images = self.model[\"generator\"](random_latent_vectors)\n        generated_predictions = self.model[\"discriminator\"](generated_images)\n\n        self.batch = {\n            \"combined_predictions\": combined_predictions,\n            \"labels\": labels,\n            \"generated_predictions\": generated_predictions,\n            \"misleading_labels\": misleading_labels,\n        }\n\n\nrunner = CustomRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    callbacks=[\n        dl.CriterionCallback(\n            input_key=\"combined_predictions\",\n            target_key=\"labels\",\n            metric_key=\"loss_discriminator\",\n            criterion_key=\"discriminator\",\n        ),\n        dl.BackwardCallback(metric_key=\"loss_discriminator\"),\n        dl.OptimizerCallback(\n            optimizer_key=\"discriminator\",\n            metric_key=\"loss_discriminator\",\n        ),\n        dl.CriterionCallback(\n            input_key=\"generated_predictions\",\n            target_key=\"misleading_labels\",\n            metric_key=\"loss_generator\",\n            criterion_key=\"generator\",\n        ),\n        dl.BackwardCallback(metric_key=\"loss_generator\"),\n        dl.OptimizerCallback(\n            optimizer_key=\"generator\",\n            metric_key=\"loss_generator\",\n        ),\n    ],\n    valid_loader=\"train\",\n    valid_metric=\"loss_generator\",\n    minimize_valid_metric=True,\n    num_epochs=20,\n    verbose=True,\n    logdir=\".\u002Flogs_gan\",\n)\n\n# 可视化（需要 matplotlib）：\n# import matplotlib.pyplot as plt\n# %matplotlib inline\n\n# plt.imshow(runner.predict_batch(None)[0, 0].cpu().numpy())\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>计算机视觉 - MNIST 变分自编码器\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport os\nimport torch\nfrom torch import nn, optim\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, metrics\nfrom catalyst.contrib.datasets import MNIST\n\nLOG_SCALE_MAX = 2\nLOG_SCALE_MIN = -10\n\ndef normal_sample(loc, log_scale):\n    scale = torch.exp(0.5 * log_scale)\n    return loc + scale * torch.randn_like(scale)\n\nclass 
VAE(nn.Module):\n    def __init__(self, in_features, hid_features):\n        super().__init__()\n        self.hid_features = hid_features\n        self.encoder = nn.Linear(in_features, hid_features * 2)\n        self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())\n\n    def forward(self, x, deterministic=False):\n        z = self.encoder(x)\n        bs, z_dim = z.shape\n\n        loc, log_scale = z[:, : z_dim \u002F\u002F 2], z[:, z_dim \u002F\u002F 2 :]\n        log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)\n\n        z_ = loc if deterministic else normal_sample(loc, log_scale)\n        z_ = z_.view(bs, -1)\n        x_ = self.decoder(z_)\n\n        return x_, loc, log_scale\n\nclass CustomRunner(dl.IRunner):\n    def __init__(self, hid_features, logdir, engine):\n        super().__init__()\n        self.hid_features = hid_features\n        self._logdir = logdir\n        self._engine = engine\n\n    def get_engine(self):\n        return self._engine\n\n    def get_loggers(self):\n        return {\n            \"console\": dl.ConsoleLogger(),\n            \"csv\": dl.CSVLogger(logdir=self._logdir),\n            \"tensorboard\": dl.TensorboardLogger(logdir=self._logdir),\n        }\n\n    @property\n    def num_epochs(self) -> int:\n        return 1\n\n    def get_loaders(self):\n        loaders = {\n            \"train\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n            \"valid\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n        }\n        return loaders\n\n    def get_model(self):\n        model = self.model if self.model is not None else VAE(28 * 28, self.hid_features)\n        return model\n\n    def get_optimizer(self, model):\n        return optim.Adam(model.parameters(), lr=0.02)\n\n    def get_callbacks(self):\n        return {\n            \"backward\": dl.BackwardCallback(metric_key=\"loss\"),\n            \"optimizer\": dl.OptimizerCallback(metric_key=\"loss\"),\n            \"checkpoint\": dl.CheckpointCallback(\n                self._logdir,\n                loader_key=\"valid\",\n                metric_key=\"loss\",\n                minimize=True,\n                topk=3,\n            ),\n        }\n\n    def on_loader_start(self, runner):\n        super().on_loader_start(runner)\n        self.meters = {\n            key: metrics.AdditiveMetric(compute_on_call=False)\n            for key in [\"loss_ae\", \"loss_kld\", \"loss\"]\n        }\n\n    def handle_batch(self, batch):\n        x, _ = batch\n        x = x.view(x.size(0), -1)\n        x_, loc, log_scale = self.model(x, deterministic=not self.is_train_loader)\n\n        loss_ae = F.mse_loss(x_, x)\n        loss_kld = (\n            -0.5 * torch.sum(1 + log_scale - loc.pow(2) - log_scale.exp(), dim=1)\n        ).mean()\n        loss = loss_ae + loss_kld * 0.01\n\n        self.batch_metrics = {\"loss_ae\": loss_ae, \"loss_kld\": loss_kld, \"loss\": loss}\n        for key in [\"loss_ae\", \"loss_kld\", \"loss\"]:\n            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)\n\n    def on_loader_end(self, runner):\n        for key in [\"loss_ae\", \"loss_kld\", \"loss\"]:\n            self.loader_metrics[key] = self.meters[key].compute()[0]\n        super().on_loader_end(runner)\n\n    def predict_batch(self, batch):\n        random_latent_vectors = torch.randn(1, self.hid_features).to(self.engine.device)\n        generated_images = self.model.decoder(random_latent_vectors).detach()\n        return 
generated_images\n\nrunner = CustomRunner(128, \".\u002Flogs\", dl.CPUEngine())\nrunner.run()\n# 可视化（需要 matplotlib）：\n# import matplotlib.pyplot as plt\n# %matplotlib inline\n\n# plt.imshow(runner.predict_batch(None)[0].cpu().numpy().reshape(28, 28))\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>AutoML - hyperparameters optimization with Optuna\u003C\u002Fsummary>\n\u003Cp>\n\n```python\nimport os\nimport optuna\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\n\n\ndef objective(trial):\n    lr = trial.suggest_loguniform(\"lr\", 1e-3, 1e-1)\n    num_hidden = int(trial.suggest_loguniform(\"num_hidden\", 32, 128))\n\n    train_data = MNIST(os.getcwd(), train=True)\n    valid_data = MNIST(os.getcwd(), train=False)\n    loaders = {\n        \"train\": DataLoader(train_data, batch_size=32),\n        \"valid\": DataLoader(valid_data, batch_size=32),\n    }\n    model = nn.Sequential(\n        nn.Flatten(), nn.Linear(784, num_hidden), nn.ReLU(), nn.Linear(num_hidden, 10)\n    )\n    optimizer = torch.optim.Adam(model.parameters(), lr=lr)\n    criterion = nn.CrossEntropyLoss()\n\n    runner = dl.SupervisedRunner(input_key=\"features\", output_key=\"logits\", target_key=\"targets\")\n    runner.train(\n        model=model,\n        criterion=criterion,\n        optimizer=optimizer,\n        loaders=loaders,\n        callbacks={\n            \"accuracy\": dl.AccuracyCallback(\n                input_key=\"logits\", target_key=\"targets\", num_classes=10\n            ),\n            # catalyst[optuna] required ``pip install catalyst[optuna]``\n            \"optuna\": dl.OptunaPruningCallback(\n                loader_key=\"valid\", metric_key=\"accuracy01\", minimize=False, trial=trial\n            ),\n        },\n        num_epochs=3,\n    )\n    score = trial.best_score\n    return score\n\nstudy = optuna.create_study(\n    direction=\"maximize\",\n    pruner=optuna.pruners.MedianPruner(\n        n_startup_trials=1, n_warmup_steps=0, interval_steps=1\n    ),\n)\nstudy.optimize(objective, n_trials=3, timeout=300)\nprint(study.best_value, study.best_params)\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Config API - minimal example\u003C\u002Fsummary>\n\u003Cp>\n\n```yaml title=\"example.yaml\"\nrunner:\n  _target_: catalyst.runners.SupervisedRunner\n  model:\n    _var_: model\n    _target_: torch.nn.Sequential\n    args:\n      - _target_: torch.nn.Flatten\n      - _target_: torch.nn.Linear\n        in_features: 784  # 28 * 28\n        out_features: 10\n  input_key: features\n  output_key: &output_key logits\n  target_key: &target_key targets\n  loss_key: &loss_key loss\n\nrun:\n  # ≈ stage 1\n  - _call_: train  # runner.train(...)\n\n    criterion:\n      _target_: torch.nn.CrossEntropyLoss\n\n    optimizer:\n      _target_: torch.optim.Adam\n      params:  # model.parameters()\n        _var_: model.parameters\n      lr: 0.02\n\n    loaders:\n      train:\n        _target_: torch.utils.data.DataLoader\n        dataset:\n          _target_: catalyst.contrib.datasets.MNIST\n          root: data\n          train: y\n        batch_size: 32\n\n      &valid_loader_key valid:\n        &valid_loader\n        _target_: torch.utils.data.DataLoader\n        dataset:\n          _target_: catalyst.contrib.datasets.MNIST\n          root: data\n          train: n\n        batch_size: 32\n\n    callbacks:\n      - &accuracy_metric\n        _target_: 
catalyst.callbacks.AccuracyCallback\n        input_key: *output_key\n        target_key: *target_key\n        topk: [1,3,5]\n      - _target_: catalyst.callbacks.PrecisionRecallF1SupportCallback\n        input_key: *output_key\n        target_key: *target_key\n\n    num_epochs: 1\n    logdir: logs\n    valid_loader: *valid_loader_key\n    valid_metric: *loss_key\n    minimize_valid_metric: y\n    verbose: y\n\n  # ≈ stage 2\n  - _call_: evaluate_loader  # runner.evaluate_loader(...)\n    loader: *valid_loader\n    callbacks:\n      - *accuracy_metric\n\n```\n\n```sh\ncatalyst-run --config example.yaml\n```\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n### Tests\nAll Catalyst code, features, and pipelines [are fully tested](.\u002Ftests).\nWe also have our own [catalyst-codestyle](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcodestyle) and a corresponding pre-commit hook.\nDuring testing, we train a variety of different models: image classification,\nimage segmentation, text classification, GANs, and much more.\nWe then compare their convergence metrics in order to verify\nthe correctness of the training procedure and its reproducibility.\nAs a result, Catalyst provides fully tested and reproducible\nbest practices for your deep learning research and development.\n\n### [Blog Posts](https:\u002F\u002Fcatalyst-team.com\u002Fpost\u002F)\n\n### [Talks](https:\u002F\u002Fcatalyst-team.com\u002Ftalk\u002F)\n\n\n## Community\n\n### 由 Catalyst 加速\n\n\u003Cdetails>\n\u003Csummary>研究论文\u003C\u002Fsummary>\n\u003Cp>\n\n- [用于情感分类的层次化注意力机制及其可视化](https:\u002F\u002Fgithub.com\u002Fneuromation\u002Fml-recipe-hier-attention)\n- [儿科骨龄评估](https:\u002F\u002Fgithub.com\u002Fneuromation\u002Fml-recipe-bone-age)\n- [论文《告诉我该看哪里：引导式注意力推理网络》的实现](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FGAIN)\n- [论文《滤波器响应归一化层：消除深度神经网络训练中的批依赖性》的实现](https:\u002F\u002Fgithub.com\u002Fyukkyo\u002FPyTorch-FilterResponseNormalizationLayer)\n- [论文《面向真实场景说话人识别的话语级聚合方法》的实现](https:\u002F\u002Fgithub.com\u002FptJexio\u002FSpeaker-Recognition)\n- [论文《在鸡尾酒会上听声辨音：一种与说话人无关的视听语音分离模型》的实现](https:\u002F\u002Fgithub.com\u002Fvitrioil\u002FSpeech-Separation)\n- [论文《ESRGAN：增强型超分辨率生成对抗网络》的实现](https:\u002F\u002Fgithub.com\u002Fleverxgroup\u002Fesrgan)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>博客文章\u003C\u002Fsummary>\n\u003Cp>\n\n- [使用 PyTorch 解决鸡尾酒会问题](https:\u002F\u002Fmedium.com\u002Fpytorch\u002Faddressing-the-cocktail-party-problem-using-pytorch-305fb74560ea)\n- [超越时尚：使用 Catalyst 进行深度学习（Config API）](https:\u002F\u002Fevilmartians.com\u002Fchronicles\u002Fbeyond-fashion-deep-learning-with-catalyst)\n- [从 Notebook API 到 Config API 的教程（俄语）](https:\u002F\u002Fgithub.com\u002FBekovmi\u002FSegmentation_tutorial)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>竞赛\u003C\u002Fsummary>\n\u003Cp>\n\n- [Kaggle Quick, Draw! 
### [Blog Posts](https:\u002F\u002Fcatalyst-team.com\u002Fpost\u002F)\n\n### [Talks](https:\u002F\u002Fcatalyst-team.com\u002Ftalk\u002F)\n\n\n## Community\n\n### Accelerated by Catalyst\n\n\u003Cdetails>\n\u003Csummary>Research Papers\u003C\u002Fsummary>\n\u003Cp>\n\n- [Hierarchical attention for sentiment classification with visualization](https:\u002F\u002Fgithub.com\u002Fneuromation\u002Fml-recipe-hier-attention)\n- [Pediatric bone age assessment](https:\u002F\u002Fgithub.com\u002Fneuromation\u002Fml-recipe-bone-age)\n- [Implementation of the paper \"Tell Me Where to Look: Guided Attention Inference Network\"](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FGAIN)\n- [Implementation of the paper \"Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks\"](https:\u002F\u002Fgithub.com\u002Fyukkyo\u002FPyTorch-FilterResponseNormalizationLayer)\n- [Implementation of the paper \"Utterance-level Aggregation for Speaker Recognition in the Wild\"](https:\u002F\u002Fgithub.com\u002FptJexio\u002FSpeaker-Recognition)\n- [Implementation of the paper \"Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation\"](https:\u002F\u002Fgithub.com\u002Fvitrioil\u002FSpeech-Separation)\n- [Implementation of the paper \"ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks\"](https:\u002F\u002Fgithub.com\u002Fleverxgroup\u002Fesrgan)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Blog Posts\u003C\u002Fsummary>\n\u003Cp>\n\n- [Addressing the Cocktail Party Problem using PyTorch](https:\u002F\u002Fmedium.com\u002Fpytorch\u002Faddressing-the-cocktail-party-problem-using-pytorch-305fb74560ea)\n- [Beyond fashion: Deep Learning with Catalyst (Config API)](https:\u002F\u002Fevilmartians.com\u002Fchronicles\u002Fbeyond-fashion-deep-learning-with-catalyst)\n- [Tutorial from the Notebook API to the Config API (in Russian)](https:\u002F\u002Fgithub.com\u002FBekovmi\u002FSegmentation_tutorial)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Competitions\u003C\u002Fsummary>\n\u003Cp>\n\n- [Kaggle Quick, Draw! Doodle Recognition Challenge](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-QuickDraw) - 11th place\n- [Catalyst.RL - NeurIPS 2018: AI for Prosthetics Challenge](https:\u002F\u002Fgithub.com\u002FScitator\u002Fneurips-18-prosthetics-challenge) – 3rd place\n- [Kaggle Google Landmark 2019](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-Google-Landmark-2019) - 30th place\n- [iMet Collection 2019 - FGVC6](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-iMet) - 24th place\n- [ID R&D Anti-spoofing Challenge](https:\u002F\u002Fgithub.com\u002Fbagxi\u002Fidrnd-anti-spoofing-challenge-solution) - 14th place\n- [NeurIPS 2019: Recursion Cellular Image Classification](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-Recursion-Cellular) - 4th place\n- [MICCAI 2019: Automatic Structure Segmentation for Radiotherapy Planning Challenge 2019](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FStructSeg2019)\n  * 3rd place solution for `Task 3: Organ-at-risk segmentation from chest CT scans`\n  * 4th place solution for `Task 4: Gross Target Volume segmentation of lung cancer`\n- [Kaggle Severstal Steel Defect Detection](https:\u002F\u002Fgithub.com\u002Fbamps53\u002Fkaggle-severstal) - 5th place\n- [RSNA Intracranial Hemorrhage Detection](https:\u002F\u002Fgithub.com\u002Fngxbac\u002FKaggle-RSNA) - 5th place\n- [APTOS 2019 Blindness Detection](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FKaggle-2019-Blindness-Detection) – 7th place\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https:\u002F\u002Fgithub.com\u002FScitator\u002Frun-skeleton-run-in-3d) – 2nd place\n- [xView2 Damage Assessment Challenge](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FxView2-Solution) - 3rd place\n\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Toolkits\u003C\u002Fsummary>\n\u003Cp>\n\n- [Catalyst.RL](https:\u002F\u002Fgithub.com\u002FScitator\u002Fcatalyst-rl-framework) – a distributed framework for reproducible RL research by [Scitator](https:\u002F\u002Fgithub.com\u002FScitator)\n- [Catalyst.Classification](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fclassification) - a comprehensive classification pipeline with pseudo-labeling by [Bagxi](https:\u002F\u002Fgithub.com\u002Fbagxi) and [Pdanilov](https:\u002F\u002Fgithub.com\u002Fpdanilov)\n- [Catalyst.Segmentation](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fsegmentation) - segmentation pipelines, including binary, semantic, and instance segmentation, by [Bagxi](https:\u002F\u002Fgithub.com\u002Fbagxi)\n- [Catalyst.Detection](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fdetection) - an anchor-free detection pipeline by [Avi2011class](https:\u002F\u002Fgithub.com\u002FAvi2011class) and [TezRomacH](https:\u002F\u002Fgithub.com\u002FTezRomacH)\n- [Catalyst.GAN](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fgan) - reproducible GAN pipelines by [Asmekal](https:\u002F\u002Fgithub.com\u002Fasmekal)\n- [Catalyst.Neuro](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fneuro) - a brain image analysis project in collaboration with the [TReNDS center](https:\u002F\u002Ftrendscenter.org)\n- [MLComp](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fmlcomp) – a distributed DAG framework for machine learning with a UI by [Lightforever](https:\u002F\u002Fgithub.com\u002Flightforever)\n- [Pytorch toolbelt](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002Fpytorch-toolbelt) - PyTorch extensions for fast R&D prototyping and Kaggle competitions by [BloodAxe](https:\u002F\u002Fgithub.com\u002FBloodAxe)\n- [Helper functions](https:\u002F\u002Fgithub.com\u002Fternaus\u002Figlovikov_helper_functions) - a set of helper functions by [Ternaus](https:\u002F\u002Fgithub.com\u002Fternaus)\n- [BERT distillation with Catalyst](https:\u002F\u002Fgithub.com\u002Felephantmipt\u002Fbert-distillation) by [elephantmipt](https:\u002F\u002Fgithub.com\u002Felephantmipt)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>Other\u003C\u002Fsummary>\n\u003Cp>\n\n- [CamVid Segmentation Example](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FCatalyst-CamVid-Segmentation-Example) - a semantic segmentation example on the CamVid dataset\n- [Notebook 
API tutorial for segmentation in the \"Understanding Clouds from Satellite Images\" competition using convenient tools](https:\u002F\u002Fwww.kaggle.com\u002Fartgor\u002Fsegmentation-in-pytorch-using-convenient-tools\u002F)\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https:\u002F\u002Fgithub.com\u002FScitator\u002Flearning-to-move-starter-kit) – starter kit\n- [Catalyst.RL - NeurIPS 2019: Animal-AI Olympics](https:\u002F\u002Fgithub.com\u002FScitator\u002Fanimal-olympics-starter-kit) - starter kit\n- [Inria Segmentation Example](https:\u002F\u002Fgithub.com\u002FBloodAxe\u002FCatalyst-Inria-Segmentation-Example) - an example of training a segmentation model for the Inria satellite segmentation challenge\n- [iglovikov_segmentation](https:\u002F\u002Fgithub.com\u002Fternaus\u002Figlovikov_segmentation) - a semantic segmentation pipeline using Catalyst\n- [Logging Catalyst runs to Comet](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1TaG27HcMh2jyRKBGsqRXLiGUfsHVyCq6?usp=sharing) - an example showing how to log metrics, hyperparameters, and more from Catalyst runs to [Comet](https:\u002F\u002Fwww.comet.ml\u002Fsite\u002Fdata-scientists\u002F)\n\n\u003C\u002Fp>\n\u003C\u002Fdetails>\n\n\nSee more projects in the [GitHub dependents graph](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fnetwork\u002Fdependents).\n\nIf your project implements a paper, has a notable use case or tutorial, or is a Kaggle competition solution, or if your code simply shows interesting results and uses Catalyst, we would be happy to add it to the list above! Feel free to open a PR with a brief description similar to those above.\n\n### Contribution Guide\n\nWe appreciate all contributions. If you are planning to contribute bug fixes, there is no need to discuss it with us first; simply open a PR. If you plan to contribute new features, utility functions, or extensions, please open an issue first and discuss it with us.\n\n- Please see the [Contribution Guide](CONTRIBUTING.md) for more information.\n- By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md).\n\n### User Feedback\n\nWe have created `feedback@catalyst-team.com` as an additional channel for user feedback.\n\n- If you like the project and want to thank us, this is the right place.\n- If you would like to collaborate with the Catalyst team on advancing deep learning R&D, we are always open to it.\n- If you prefer email over GitHub issues, feel free to write to us.\n- Finally, if you are unhappy with anything, please do tell us, and we will work on improvements together.\n\nWe greatly appreciate feedback of any kind. Thank you!\n\n\n### Acknowledgments\n\nSince the very beginning of Catalyst's development, many people have influenced it in many different ways.\n\n#### Catalyst Team\n- [Dmytro Doroshenko](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fdmytro-doroshenko-05671112a\u002F) ([ditwoo](https:\u002F\u002Fgithub.com\u002FDitwoo))\n- [Eugene Kachan](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fyauheni-kachan\u002F) ([bagxi](https:\u002F\u002Fgithub.com\u002Fbagxi))\n- [Nikita Balagansky](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fnikita-balagansky-50414a19a\u002F) ([elephantmipt](https:\u002F\u002Fgithub.com\u002Felephantmipt))\n- [Sergey Kolesnikov](https:\u002F\u002Fwww.scitator.com\u002F) ([scitator](https:\u002F\u002Fgithub.com\u002FScitator))\n\n#### Catalyst Contributors\n- [Aleksey Grinchuk](https:\u002F\u002Fwww.facebook.com\u002Fgrinchuk.alexey) ([alexgrinch](https:\u002F\u002Fgithub.com\u002FAlexGrinch))\n- [Aleksey Shabanov](https:\u002F\u002Flinkedin.com\u002Fin\u002Faleksey-shabanov-96b351189) ([AlekseySh](https:\u002F\u002Fgithub.com\u002FAlekseySh))\n- [Alex Gaziev](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Falexgaziev\u002F) ([gazay](https:\u002F\u002Fgithub.com\u002Fgazay))\n- [Andrey Zharkov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fandrey-zharkov-8554a1153\u002F) ([asmekal](https:\u002F\u002Fgithub.com\u002Fasmekal))\n- [Artem Zolkin](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fartem-zolkin-b5155571\u002F) ([arquestro](https:\u002F\u002Fgithub.com\u002FArquestro))\n- [David Kuryakin](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fdkuryakin\u002F) ([dkuryakin](https:\u002F\u002Fgithub.com\u002Fdkuryakin))\n- [Evgeny Semyonov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fewan-semyonov\u002F) ([lightforever](https:\u002F\u002Fgithub.com\u002Flightforever))\n- [Eugene Khvedchenya](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fcvtalks\u002F) ([bloodaxe](https:\u002F\u002Fgithub.com\u002FBloodAxe))\n- [Ivan Stepanenko](https:\u002F\u002Fwww.facebook.com\u002Fistepanenko)\n- [Julia 
Shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina) ([julia-shenshina](https:\u002F\u002Fgithub.com\u002Fjulia-shenshina))\n- [Nguyen Xuan Bac](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fbac-nguyen-xuan-70340b66\u002F) ([ngxbac](https:\u002F\u002Fgithub.com\u002Fngxbac))\n- [Roman Tezikov](http:\u002F\u002Flinkedin.com\u002Fin\u002Froman-tezikov\u002F) ([TezRomacH](https:\u002F\u002Fgithub.com\u002FTezRomacH))\n- [Valentin Khrulkov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fvkhrulkov\u002F) ([khrulkovv](https:\u002F\u002Fgithub.com\u002FKhrulkovV))\n- [Vladimir Iglovikov](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Figlovikov\u002F) ([ternaus](https:\u002F\u002Fgithub.com\u002Fternaus))\n- [Vsevolod Poletaev](https:\u002F\u002Flinkedin.com\u002Fin\u002Fvsevolod-poletaev-468071165) ([hexfaker](https:\u002F\u002Fgithub.com\u002Fhexfaker))\n- [Yury Kashnitsky](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fkashnitskiy\u002F) ([yorko](https:\u002F\u002Fgithub.com\u002FYorko))\n\n\n### Trusted by\n- [Awecom](https:\u002F\u002Fwww.awecom.com)\n- Researchers at the [Center for Translational Research in Neuroimaging and Data Science (TReNDS)](https:\u002F\u002Ftrendscenter.org)\n- [Deep Learning School](https:\u002F\u002Fen.dlschool.org)\n- Researchers at [Emory University](https:\u002F\u002Fwww.emory.edu)\n- [Evil Martians](https:\u002F\u002Fevilmartians.com)\n- Researchers at the [Georgia Institute of Technology](https:\u002F\u002Fwww.gatech.edu)\n- Researchers at [Georgia State University](https:\u002F\u002Fwww.gsu.edu)\n- [Helios](http:\u002F\u002Fhelios.to)\n- [HPCD Lab](https:\u002F\u002Fwww.hpcdlab.com)\n- [iFarm](https:\u002F\u002Fifarmproject.com)\n- [Kinoplan](http:\u002F\u002Fkinoplan.io\u002F)\n- Researchers at the [Moscow Institute of Physics and Technology](https:\u002F\u002Fmipt.ru\u002Fenglish\u002F)\n- [Neuromation](https:\u002F\u002Fneuromation.io)\n- [Poteha Labs](https:\u002F\u002Fpotehalabs.com\u002Fen\u002F)\n- [Provectus](https:\u002F\u002Fprovectus.com)\n- Researchers at the [Skolkovo Institute of Science and Technology](https:\u002F\u002Fwww.skoltech.ru\u002Fen)\n- [SoftConstruct](https:\u002F\u002Fwww.softconstruct.io\u002F)\n- Researchers at [Tinkoff](https:\u002F\u002Fwww.tinkoff.ru\u002Feng\u002F)\n- Researchers at [Yandex.Research](https:\u002F\u002Fresearch.yandex.com)\n\n\n### Citation\nPlease use this BibTeX if you want to cite this repository in your publications:\n\n    @misc{catalyst,\n        author = {Kolesnikov, Sergey},\n        title = {Catalyst - Accelerated deep learning R&D},\n        year = {2018},\n        publisher = {GitHub},\n        journal = {GitHub repository},\n        howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst}},\n    }","# Catalyst Quick Start Guide\n\nCatalyst is a PyTorch-based framework for deep learning R&D, designed to speed up experiment iteration, improve code reuse, and keep results reproducible. It frees you from writing the same training loops over and over so you can focus on model innovation.\n\n## 1. Environment Setup\n\nBefore you start, make sure your development environment meets the following requirements:\n\n*   **Operating system**: Linux (Ubuntu 16.04+), macOS, Windows 10, or WSL (Windows Subsystem for Linux).\n*   **Python version**: 3.7 or later (3.8+ recommended).\n*   **PyTorch version**: 1.4 or later.\n*   **Prerequisites**: install the PyTorch build that matches your CUDA version first.\n\n> **Mirror tip**:\n> In mainland China, consider using the Tsinghua or Aliyun mirror to speed up pip installs and avoid network timeouts.\n\n## 2. Installation\n\n### Basic installation\nInstall the latest stable version with pip:\n\n```bash\npip install -U catalyst\n```\n\n**Installation via a regional mirror (optional, for users in mainland China):**\n\n```bash\npip install -U catalyst -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### Installing extras (optional)\nIf you need domain-specific extensions (such as the computer vision (CV) or machine learning (ML) toolsets):\n\n```bash\n# install with the ML-related dependencies\npip install catalyst[ml] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# install with the CV-related dependencies\npip install catalyst[cv] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n
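To confirm that the installation picked up a working PyTorch build, a quick sanity check helps (a minimal sketch; the exact versions printed depend on your environment):\n\n```python\n# sanity check: verify that catalyst and torch import, and report their versions\nimport torch\nimport catalyst\n\nprint(\"catalyst:\", catalyst.__version__)\nprint(\"torch:\", torch.__version__, \"| CUDA available:\", torch.cuda.is_available())\n```\n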
\n## 3. Basic Usage\n\nCatalyst's core strength is how little code `SupervisedRunner` needs to set up a training pipeline. Below is a complete MNIST classification example covering model definition, training, evaluation, and inference.\n\n### Minimal example\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, utils\nfrom catalyst.contrib.datasets import MNIST\n\n# 1. Define the model, loss function, and optimizer\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.02)\n\n# 2. Prepare the data loaders\nloaders = {\n    \"train\": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),\n    \"valid\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n}\n\n# 3. Initialize the Runner\n# Name the input, output, target, and loss keys so the framework can route data automatically\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", \n    output_key=\"logits\", \n    target_key=\"targets\", \n    loss_key=\"loss\"\n)\n\n# 4. Train\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    callbacks=[\n        # accuracy callback (top-1, top-3, top-5)\n        dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5)),\n        # precision, recall, and F1 callback\n        dl.PrecisionRecallF1SupportCallback(input_key=\"logits\", target_key=\"targets\"),\n    ],\n    logdir=\".\u002Flogs\",           # where to store logs\n    valid_loader=\"valid\",      # name of the validation loader\n    valid_metric=\"loss\",       # validation metric to track for model selection\n    minimize_valid_metric=True,# whether to minimize that metric\n    verbose=True,              # print detailed logs\n)\n\n# 5. Evaluate the model\nmetrics = runner.evaluate_loader(\n    loader=loaders[\"valid\"],\n    callbacks=[dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5))],\n)\nprint(f\"Evaluation metrics: {metrics}\")\n\n# 6. Run inference\nfor prediction in runner.predict_loader(loader=loaders[\"valid\"]):\n    assert prediction[\"logits\"].detach().cpu().numpy().shape[-1] == 10\n\n# 7. Model post-processing and export (optional)\nmodel = runner.model.cpu()\nbatch = next(iter(loaders[\"valid\"]))[0]\n\n# model tracing, quantization, pruning, and ONNX export\nutils.trace_model(model=model, batch=batch)\nutils.quantize_model(model=model)\nutils.prune_model(model=model, pruning_fn=\"l1_unstructured\", amount=0.8)\nutils.onnx_export(model=model, batch=batch, file=\".\u002Flogs\u002Fmnist.onnx\", verbose=True)\n```\n
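Step 7 only writes the exported file; to sanity-check it, you can load it back with the `onnxruntime` package. This is a minimal sketch, not part of Catalyst itself: it assumes `pip install onnxruntime` and reuses the `batch` tensor from step 7 above.\n\n```python\n# minimal sketch: run the exported ONNX model with onnxruntime\nimport onnxruntime as ort\n\nsession = ort.InferenceSession(\".\u002Flogs\u002Fmnist.onnx\")\ninput_name = session.get_inputs()[0].name\n\n# feed the same batch that was used at export time, as a numpy array\nlogits = session.run(None, {input_name: batch.numpy()})[0]\nassert logits.shape[-1] == 10  # ten MNIST classes, matching step 6\n```\n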
\n### Core Concepts in Brief\n*   **Runner**: the core class that drives the training flow (such as `SupervisedRunner`); it encapsulates the training loop, validation, and inference logic.\n*   **Callbacks**: hooks executed at specific points of training (end of epoch, end of batch, and so on) to record metrics, save checkpoints, adjust the learning rate, etc.\n*   **Loaders**: a dictionary of data loaders, typically with `\"train\"` and `\"valid\"` keys.\n*   **Keys**: string keys (such as `\"features\"` and `\"targets\"`) that map between the data dictionary, model outputs, and the loss function, decoupling your code from the concrete data structures.
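\nTo make the Callback concept concrete, here is a minimal custom callback. It is a sketch written for this guide rather than taken from the official docs, and it assumes the `Callback`\u002F`CallbackOrder` classes exposed under `catalyst.dl` plus the `runner.loader_metrics` dictionary mentioned in the changelog:\n\n```python\n# illustrative sketch of a custom callback: print the loss at the end of every loader\nfrom catalyst import dl\n\n\nclass PrintLossCallback(dl.Callback):\n    def __init__(self):\n        # CallbackOrder controls when a callback runs relative to the others\n        super().__init__(order=dl.CallbackOrder.Metric)\n\n    def on_loader_end(self, runner):\n        # loader_metrics holds the aggregated metrics of the loader that just finished\n        loss = runner.loader_metrics.get(\"loss\")\n        print(f\"{runner.loader_key}: loss={loss}\")\n\n\n# usage: pass it alongside the built-in callbacks\n# runner.train(..., callbacks=[PrintLossCallback(), ...])\n```\n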
","A computer vision team is building a PyTorch-based medical image lesion segmentation system and needs to validate several network architectures and training strategies in a short time.\n\n### Without catalyst\n- **Reinventing the wheel**: every time a new model is tried (say, switching from U-Net to DeepLab), engineers rewrite the tedious data loading, training loop, and validation logic by hand, spending most of their time on boilerplate.\n- **Hard-to-reproduce experiments**: without a shared experiment-management standard, each member's training scripts look different, hyperparameter records get messy, and past results are hard to trace and compare precisely.\n- **Costly extension**: adding a new evaluation metric (such as the Dice coefficient) or a custom callback means digging into the core training flow, which easily introduces bugs and is hard to maintain.\n- **Poor collaboration**: team members cannot directly reuse each other's modules, so everyone keeps writing similar but incompatible one-off training scripts.\n\n### With catalyst\n- **Focus on core innovation**: catalyst provides a standardized Runner abstraction; the team only defines the model and the dataset and automatically gets a complete training pipeline, keeping attention on the network architecture itself.\n- **Traceable experiments**: with the built-in experiment logging and configuration system, hyperparameters, metric curves, and model versions are recorded automatically, making results easy to reproduce and compare side by side.\n- **Pluggable extensions**: custom metrics or callbacks can be wired in through simple configuration, without touching the underlying training loop, so complex experiment pipelines snap together like building blocks.\n- **High code reuse**: a shared framework convention lets team members share and combine modules seamlessly, noticeably speeding up iteration.\n\nBy standardizing the deep learning R&D workflow, catalyst frees the team from engineering chores and delivers the \"write only the new logic, never the training loop\" way of working.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcatalyst-team_catalyst_da88dba8.png","catalyst-team","Catalyst-Team","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fcatalyst-team_847514c4.png","",null,"https:\u002F\u002Fgithub.com\u002Fcatalyst-team",[83,87,91,95],{"name":84,"color":85,"percentage":86},"Python","#3572A5",98.8,{"name":88,"color":89,"percentage":90},"Shell","#89e051",1,{"name":92,"color":93,"percentage":94},"Dockerfile","#384d54",0.2,{"name":96,"color":97,"percentage":98},"Makefile","#427819",0.1,3375,401,"2026-04-03T12:06:32","Apache-2.0","Linux, macOS, Windows, WSL","Not specified (PyTorch-based; CPU\u002FGPU supported)","Not specified",{"notes":107,"python":108,"dependencies":109},"This tool is a high-level framework on top of PyTorch. The basic installation only needs catalyst; specific extras (such as catalyst[ml] or catalyst[cv]) can be installed as needed. Tested on Ubuntu 16.04\u002F18.04\u002F20.04, macOS 10.15, Windows 10, and WSL.","3.7+",[110],"torch>=1.4",[26,54,14,13],[113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132],"deep-learning","reinforcement-learning","machine-learning","computer-vision","pytorch","python","distributed-computing","infrastructure","research","reproducibility","image-processing","image-classification","image-segmentation","object-detection","natural-language-processing","text-classification","text-segmentation","information-retrieval","recommender-system","metric-learning","2026-03-27T02:49:30.150509","2026-04-06T05:16:38.505943",[136,141,146,151,156,161],{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},13036,"How do I fix errors when using OptimizerCallback for gradient accumulation?","If you use OptimizerCallback explicitly (for example, to set accumulation_steps), you usually also need to add CriterionCallback to the callbacks list explicitly. Try changing the callbacks to include DiceCallback(), EarlyStoppingCallback(), OptimizerCallback(accumulation_steps=2), and CriterionCallback().","https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F507",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},13037,"How do I log batch-level metrics to WandB correctly in Catalyst?","By default, batch metrics may mistakenly be logged per epoch. When configuring the WandB logger, make sure the step parameter is set to global_sample_step rather than global_epoch_step, so that each step's (batch's) metrics are recorded individually. This issue was fixed in later versions, so upgrading Catalyst is recommended.","https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1290",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},13038,"What if the WandB logger does not work with DDP (distributed data parallel) enabled?","When using the WandB logger in DDP mode, you may hit errors such as 'Run' object has no attribute. This is usually caused by conflicting multi-process initialization. Disable logging in non-main processes (rank > 0), or make sure wandb.init() is called only in the main process. If the problem persists, check the version compatibility of Catalyst and wandb, or consult the official DDP integration examples.","https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1316",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},13039,"The parameter names in SupervisedRunner and CriterionCallback are inconsistent (input_target_key vs input_key). Is this a bug?","This is a known naming inconsistency. SupervisedRunner uses input_target_key to specify the target key, while the corresponding parameter in CriterionCallback is named input_key. The names differ but the roles correspond (both point to targets in the data dictionary). The issue is currently marked wontfix or awaiting a community refactor, so keep the two parameter names apart to avoid confusion.","https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F770",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},13040,"Does Catalyst support triplet loss?","Yes, the Catalyst community has integrated triplet-loss functionality. You can refer to community implementations such as the siamese-triplet or triplet-network-pytorch projects. The Catalyst repository may also already contain related notebook examples or contrib modules; check the latest documentation or the examples directory for concrete usage.","https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F459",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},13041,"How do I evaluate a model through the Python API (similar to the train method)?","Catalyst planned an evaluate_loader method to simplify evaluation, similar to the existing .train and .predict_loader methods. Until that feature is fully merged, you can reuse the runner.train logic but pass only the validation loader and set num_epochs=1, while avoiding training-specific callbacks (such as learning rate schedulers). Watch the official updates for native evaluate_loader support.","https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1186",[167,172,177,182,187,192,197,202,207,211,216,221,226,231,236,241,246,251,256,261],{"id":168,"version":169,"summary_zh":170,"released_at":171},71711,"v22.04","## [22.04] - 2022-04-29\n\n### Added\n\n- `catalyst-tune` for the Config API [#1411](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1411)\n- tests for Python 3.9 and 3.10 [#1414](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1414)\n\n### Fixed\n\n- `catalyst` compatibility with Python 3.10 [#1409](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1409)","2022-04-29T04:45:11",{"id":173,"version":174,"summary_zh":175,"released_at":176},71712,"v22.02.1","## [22.02.1] - 2022-02-27\n\n_Minor fixes and the Config API v22 MVP._\n\n### Added\n\n- `catalyst-run` for Config API support [#1406](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1406)\n\n\n### Fixed\n\n- Logger API naming [#1405](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1405)","2022-02-27T17:51:10",{"id":178,"version":179,"summary_zh":180,"released_at":181},71713,"v22.02","## [22.02] - 2022-02-13\n\n### TL;DR\n- Catalyst architecture simplification.\n- [#1395](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1395), [#1396](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1396), [#1397](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1397), [#1398](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1398), [#1399](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1399), [#1400](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1400), [#1401](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1401),
[#1402](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1402), [#1403](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1403).\n\n### Added\n\n- Additional tests for different hardware accelerator setups. See the `tests\u002Fpipelines` folder for more information.\n- `BackwardCallback` and `BackwardCallbackOrder` as an abstraction over `loss.backward`. Now you can easily log model gradients or transform them before `OptimizerCallback`.\n- `CheckpointCallbackOrder` for `ICheckpointCallback`.\n\n### Changed\n\n- The minimum Python version was raised to `3.7`, and the minimum PyTorch version to `1.4.0`.\n- Engines were rewritten on top of Accelerate. First, we found the two to be very close; second, Accelerate offers a friendlier API and a more stable interface to Nvidia APEX and Facebook FairScale, which Catalyst itself never supported on its own.\n- `SelfSupervisedRunner` moved from the Catalyst API to the `examples` folder. Going forward, only the following Runner APIs will be supported, for consistency: `IRunner`, `Runner`, `ISupervisedRunner`, and `SupervisedRunner`. If you need a different kind of Runner, please implement a `CustomRunner` of your own, using `SelfSupervisedRunner` as a reference.\n- `Runner.{global\u002Fstage}_{batch\u002Floader\u002Fepoch}_metrics` renamed to `Runner.{batch\u002Floader\u002Fepoch}_metrics`.\n- `CheckpointCallback` rewritten from scratch.\n- The Catalyst registry now supports full import paths only.\n- The Logger API changed so that all `log_*` methods receive an `IRunner` object.\n- Metric API: `topk_args` renamed to `topk`.\n- Contrib API: the init-time imports of `catalyst.contrib` were removed; use `from catalyst.contrib.{smth} import {smth}` instead. For stability, future versions may further restrict this to full import paths only.\n- All quick starts, minimal examples, notebooks, and pipelines were migrated to the new version.\n- The codestyle right margin was changed to `89` columns. Frankly, maintaining Catalyst with an `89`-column right margin on an MBP'16 is much easier.\n\n### Removed\n\n- `ITrial` removed.\n- Stage support removed. Although we advocate stages in deep learning experiments, current hardware accelerators are not fully suited to such setups. Moreover, about 95% of deep learning pipelines are single-stage. Multi-stage Runner support is currently under evaluation; if you need it, define a `CustomRunner` with a custom API.\n- Config\u002FHydra API support removed. The Config API is currently under evaluation. In the meantime, you can use [hydra-slayer](https:\u002F\u002Fgithub.","2022-02-13T09:27:22",{"id":183,"version":184,"summary_zh":185,"released_at":186},71714,"v22.02rc0","## [22.02rc0] - 2022-02-07\n\n### TL;DR\n\nBeta version of the Catalyst 22 release.\n\n- The core architecture moved to an [Animus](https:\u002F\u002Fgithub.com\u002FScitator\u002Fanimus)-like design (the stage mechanism was removed)\n- Engines moved to [Accelerate](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Faccelerate)\n- The `config` and `hydra` APIs were deprecated in favor of a custom config runner based on [hydra-slayer](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fhydra-slayer)\n- Deep-learning-based scripts were removed from the API\n- The self-supervised runner moved to the examples directory; users are encouraged to customize it to their needs\n- The `contrib` and `utils` modules were trimmed\n- Dependencies were simplified\n- The codestyle was changed to at most 89 characters per line (looks better on a 16-inch screen ;)","2022-02-07T06:10:02",{"id":188,"version":189,"summary_zh":190,"released_at":191},71715,"v21.12","## [21.12] - 2021-12-28\n\n### TL;DR\n\nDistributed engine update (multi-node support) and many other improvements.\n\n### Added\n\n- MNIST dataset for the self-supervised learning benchmark ([#1368](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1368))\n- MovieLens 20M dataset [#1336](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1336)\n- Logging attribute for customizing log output ([#1372](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1372))\n- MacridVAE example ([#1363](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1363))\n- Self-supervised learning benchmark results ([#1374](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1374))\n- Neptune example ([#1377](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1377))\n- Multi-node support for engines ([#1364](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1364))\n\n### Changed\n\n- The RL examples were updated to the latest version ([#1370](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1370))\n- DDPLoaderWrapper updated to the new version ([#1385](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1385))\n- The `num_classes` parameter of the classification metrics is now optional ([#1379](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1379))\n
- Colab CI\u002FCD updated to the new version\n\n### Removed\n\n- \n\n### Fixed\n\n- Added the `requests` dependency for `catalyst[cv]` ([#1371](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1370))\n- Fixed the loader step counter ([#1374](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1374))\n- Fixed data preprocessing for the object detection example ([#1369](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1369))\n- Fixed gradient clipping in FP16 runs ([#1378](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1378))\n- Fixed the Config API for DDP runs ([#1383](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1383))\n- Fixed checkpoint creation for FP16 engines ([#1382](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1382))\n\n### Contributors ❤️\n\n@bagxi @ditwoo @MrNightSky @Nimrais @y-ksenia @sergunya17 @Thiefwerty @zkid18","2021-12-28T04:11:48",{"id":193,"version":194,"summary_zh":195,"released_at":196},71716,"v21.11","## [21.11] - 2021-11-30\n\n### TL;DR\nFramework architecture simplification and speedup, plus SSL and RecSys extensions.\n\n### Added\n\n- MultiVAE RecSys example ([#1340](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1340))\n- `resume` support restored, resolving [#1193](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1193) ([#1349](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1349))\n- Smoothed Dice loss added to the contrib module ([#1344](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1344))\n- `profile` flag for `runner.train` ([#1348](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1348))\n- MultiDAE RecSys example ([#1356](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1356))\n- `SETTINGS.log_batch_metrics`, `SETTINGS.log_epoch_metrics`, and `SETTINGS.compute_per_class_metrics` for the framework-wide specification of the Metric and Logger APIs ([#1357](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1357))\n- `log_batch_metrics` and `log_epoch_metrics` options for all available loggers ([#1357](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1357))\n- `compute_per_class_metrics` option for all available multiclass\u002Fmultilabel metrics ([#1357](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1357))\n- PyTorch benchmark script and a simplified MNIST example ([#1360](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1360))\n\n### Changed\n\n- A number of framework simplifications ([#1346](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1346)):\n  - the `catalyst-contrib` scripts were trimmed to `collect-env` and `project-embeddings` only\n  - the `catalyst-dl` scripts were trimmed to `run` and `tune` only\n  - the Catalyst-based `transforms.` prefix was deprecated\n  - `catalyst.tools` moved to `catalyst.extras`\n  - the task-specific extensions under `catalyst.data` moved to `catalyst.contrib.data`\n  - `catalyst.data.transforms` moved to `catalyst.contrib.data.transforms`\n  - the `Normalize` and `ToTensor` transforms were renamed to `NormalizeImage` and `ImageToTensor`\n  - the metric-learning extensions moved to `catalyst.contrib.data`\n  - `catalyst.contrib` switched to code-as-documentation development\n  - the `catalyst[cv]` and `catalyst[ml]` extras were redesigned with a flat architecture; for example: `catalyst.contrib.data.dataset_cv`, `catalyst.contrib.data.dataset_ml`\n  - `catalyst.contrib` was redesigned with a flat architecture; for example: `catalyst.contrib.data`, `catalyst.contrib.datasets`, `catalyst.contrib.layers`, `catalyst.contrib.models`, `catalyst.contrib.optimizers`, `catalyst.contrib.schedulers`\n  - internal functionality moved to `***._misc` modules\n  - `catalyst.utils.mixup` moved to `catalyst.utils.torch`\n  - `catalyst.utils.numpy` moved to `catalyst.contrib.utils.numpy`\n- The default logging logic changed from \"batch & epoch\" to \"epoch\" only, to save computation during logging; to re-specify it, use:\n
  - `SETTINGS.log_batch_metrics=True\u002FFalse` or `os.environ[\"CATALYST_LOG_BATCH_METRICS\"]`\n  - `SETTINGS.log_epoch_metrics=True\u002FFalse` or `os.environ[\"CATALYST_LOG_EPOCH_METRICS\"]`\n- The default metric computation","2021-11-30T07:36:48",{"id":198,"version":199,"summary_zh":200,"released_at":201},71717,"v21.10","## [21.10] - 2021-10-30\n\n### TL;DR\n\nREADME and tutorial updates, plus several DDP-related fixes.\n\n### Added\n\n- RSquareLoss ([#1313](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1313))\n- Self-supervised learning example updates: ([#1305](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1305)), ([#1322](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1322)), ([#1325](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1325)), ([#1335](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1335))\n- Albert training example ([#1326](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1326))\n- YOLO-X (new) detection example and refactoring ([#1324](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1324))\n- `TopKMetric` abstraction ([#1330](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1330))\n\n### Changed\n\n- Simplified the README ([#1312](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1312))\n- Improved the DDP tutorial ([#1327](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1327))\n- Changed the `CMCMetric` naming from `\u003Cprefix>cmc\u003Csuffix>\u003Ck>` to `\u003Cprefix>cmc\u003Ck>\u003Csuffix>` ([#1330](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1330))\n\n### Removed\n\n- \n\n### Fixed\n\n- Fixed the zero-seed error ([#1329](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1329))\n- Updated codestyle issues ([#1331](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1331))\n- TopK metric fixes: ([#1330](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1330)), ([#1334](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1334)), ([#1339](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1339))\n- Added the `--expdir` argument to `catalyst-dl run` ([#1338](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1338))\n- Added ControlFlowCallback for distributed setups ([#1341](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1341))","2021-10-30T07:39:44",{"id":203,"version":204,"summary_zh":205,"released_at":206},71718,"v21.09","## [21.09] - 2021-09-30\n\n### Added\n\n- CometLogger support ([#1283](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1283))\n- CometLogger example ([#1287](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1287))\n- XLA docs ([#1288](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1288))\n- Contrastive losses: `NTXentLoss` ([#1278](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1278)), `SupervisedContrastiveLoss` ([#1293](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1293))\n- Self-supervised learning: `ISelfSupervisedRunner`, `SelfSupervisedConfigRunner`, `SelfSupervisedRunner`, `SelfSupervisedDatasetWrapper` ([#1278](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1278))\n- SimCLR example ([#1278](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1278))\n- Supervised contrastive learning example
([#1293](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1293))\n- Extra warnings for runner-callbacks interaction ([#1295](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1295))\n- `CategoricalRegressionLoss` and `QuantileRegressionLoss` moved to the `contrib` module ([#1295](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1295))\n- R2 score metric ([#1274](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1274))\n\n\n### Changed\n- Improved `WandbLogger` with artifacts support and a fix for the logging step ([#1309](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1309))\n- Full `Runner` cleanup (including callback and loader destruction) moved to `PipelineParallelFairScaleEngine` ([#1295](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1295))\n- `HuberLoss` renamed to `HuberLossV0` for PyTorch compatibility ([#1295](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1295))\n- Codestyle updates ([#1298](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1298))\n- `BalanceBatchSampler` deprecated ([#1303](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1303))\n\n### Removed\n\n- \n\n### Fixed\n\n- CI\u002FCD pipelines ([#1292](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1292)), ([#1299](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1299)), ([#1304](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1304)), ([#1306](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1306))\n- Optuna config files ([#1296](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1292)), ([#1296](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1299))\n\n\n### Contributors ❤️\n\n@asteyo @AyushExel @bagxi @DN6 @gr33n-made @Nimrais @Podidiving  @y-ksenia","2021-09-30T07:15:05",{"id":208,"version":209,"summary_zh":80,"released_at":210},71719,"v21.09rc1","2021-09-27T19:02:25",{"id":212,"version":213,"summary_zh":214,"released_at":215},71720,"v21.09rc0","Hi everyone, nice project!\n\nThis is a use-case release for testing, to check our updated infrastructure.","2021-09-27T06:21:40",{"id":217,"version":218,"summary_zh":219,"released_at":220},71721,"v21.08","## [21.08] - 2021-08-31\n\n### Added\n\n- RecSys losses: `AdaptiveHingeLoss`, `BPRLoss`, `HingeLoss`, `LogisticLoss`, `RocStarLoss`, `WARPLoss` ([#1269](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1269), [#1282](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1282))\n- Object detection example ([#1271](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1271))\n- SklearnModelCallback ([#1261](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1261))\n- Barlow Twins example ([#1261](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1261))\n- TPU\u002FXLA support ([#1275](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1275))\n  - with updated [examples](.\u002Fexamples\u002Fengines)\n- Native `sync_bn` support for all available engines ([#1275](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1275))\n  - including Torch, AMP, Apex, and FairScale\n\n### Changed\n\n- The registry moved to `hydra-slayer` ([#1264](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1264))\n- ([#1275](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1275))\n  - batch metric synchronization in DDP runs was removed to speed up training\n  - `AccumulationMetric` renamed to `AccumulativeMetric`\n    - moved from `catalyst.metrics._metric` to
`catalyst.metrics._accumulative`\n    - `accululative_fields` renamed to `keys`\n\n### Removed\n\n- \n\n### Fixed\n\n- PeriodicLoaderCallback docstring ([#1279](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1279))\n- matplotlib issue ([#1272](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1272))\n- Sample counters for the loaders ([#1285](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1285))\n\n### Contributors ❤️\n@bagxi @Casyfill @ditwoo @Nimrais @penguinflys @sergunya17 @zkid18","2021-08-31T07:53:46",{"id":222,"version":223,"summary_zh":224,"released_at":225},71722,"v21.07","## [21.07] - 2021-07-29\r\n\r\n### Added\r\n\r\n- added `pre-commit` hook to run codestyle checker on commit ([#1257](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1257))\r\n- `on publish` github action for docker and docs added ([#1260](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1260))\r\n- MixupCallback and `utils.mixup_batch` ([#1241](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1241))\r\n- Barlow twins loss ([#1259](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1259))\r\n- BatchBalanceClassSampler ([#1262](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1262))\r\n\r\n### Changed\r\n\r\n-\r\n\r\n### Removed\r\n\r\n-\r\n\r\n### Fixed\r\n\r\n- make `expdir` in `catalyst-dl run` optional ([#1249](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1249))\r\n- Bump neptune-client from 0.9.5 to 0.9.8 in `requirements-neptune.txt` ([#1251](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1251))\r\n- automatic merge for master (with [Mergify](https:\u002F\u002Fmergify.io\u002F)) fixed ([#1250](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1250))\r\n- Evaluate loader custom model bug was fixed ([#1254](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1254))\r\n- `BatchPrefetchLoaderWrapper` issue with batch-based PyTorch samplers ([#1262](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1262))\r\n- Adapted MlflowLogger for new config hierarchy ([#1263](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1263))\r\n\r\n### Contributors ❤️\r\n@AlekseySh @bagxi @Casyfill @Dokholyan @leoromanovich @Nimrais @y-ksenia","2021-07-29T06:01:51",{"id":227,"version":228,"summary_zh":229,"released_at":230},71723,"v21.06","## [21.06] - 2021-06-29\r\n\r\n### Added\r\n\r\n- ([#1230](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1230))\r\n  - FairScale support\r\n  - DeepSpeed support\r\n  - `utils.ddp_sync_run` function for synchronous ddp run\r\n  - CIFAR10 and CIFAR100 datasets from torchvision (no cv-based requirements)\r\n  - [Catalyst Engines demo](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Ftree\u002Fmaster\u002Fexamples\u002Fengines)\r\n- `dataset_from_params` support in config API ([#1231](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1231))\r\n- transform from params support for config API added ([#1236](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1236))\r\n- samplers from params support for config API added ([#1240](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1240))\r\n- recursive registry.get_from_params added
([#1241](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1241))\r\n- albumentations integration ([#1238](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1238))\r\n- Profiler callback ([#1226](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1226))\r\n\r\n### Changed\r\n\r\n- ([#1230](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1230))\r\n  - loader creation is now wrapped with `utils.ddp_sync_run` for synchronous data preparation in DDP runs\r\n  - the runner now supports stage cleanup: loaders and callbacks are deleted at the end of a stage\r\n  - Apex-based engines now support both APEXEngine and ApexEngine registry names\r\n\r\n### Fixed\r\n\r\n- multiprocessing in minimal tests hotfix ([#1232](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1232))\r\n- Tracing callback hotfix ([#1234](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1234))\r\n- Engine hotfix for `predict_loader` ([#1235](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1235))\r\n- ([#1230](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1230))\r\n  - Hydra hotfix due to `1.1.0` version changes\r\n- `HuberLoss` name conflict for pytorch 1.9 hotfix ([#1239](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1239))\r\n\r\n\r\n### Contributors ❤️ \r\n@bagxi @y-ksenia @ditwoo @BorNick @Inkln ","2021-06-29T06:37:29",{"id":232,"version":233,"summary_zh":234,"released_at":235},71724,"v21.05","## [21.05] - 2021-05-31\r\n\r\n### Added\r\n\r\n- Reinforcement learning tutorials ([#1205](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1205))\r\n- customization demo ([#1207](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1207))\r\n- FAQ docs: multiple input and output keys, engine tutorial ([#1202](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1202))\r\n- minimal Config API example ([#1215](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1215))\r\n- Distributed RL example (Catalyst.RL 2.0 concepts) ([#1224](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1224))\r\n- SklearnCallback as integration of sklearn metrics ([#1198](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1198))\r\n\r\n### Changed\r\n\r\n- tests moved to `tests` folder ([#1208](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1208))\r\n- pipeline tests moved to `tests\u002Fpipelines` ([#1215](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1215))\r\n- updated NeptuneLogger docstrings ([#1223](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1223))\r\n\r\n### Removed\r\n\r\n-\r\n\r\n### Fixed\r\n\r\n- customizing what happens in `train()` notebook ([#1203](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1203))\r\n- transforms imports under catalyst.data ([#1211](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1211))\r\n- change layerwise to layerwise_params ([#1210](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1210))\r\n- add torch metrics support ([#1195](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1195))\r\n- add Config API support for 
BatchTransformCallback ([#1209](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1209))\r\n\r\nBONUS: [Catalyst workshop videos!](https:\u002F\u002Fyoutube.com\u002Fplaylist?list=PLAjRKb17KSnMk8rXFQ5VVjfwYMEUfIRUK)","2021-05-31T06:23:16",{"id":237,"version":238,"summary_zh":239,"released_at":240},71725,"v21.04.2","## [21.04.2] - 2021-04-30\r\n\r\n### Added\r\n\r\n- Weights and Biases Logger (``WandbLogger``) ([#1176](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1176))\r\n- Neptune Logger (``NeptuneLogger``) ([#1196](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1196))\r\n- `log_artifact` method for logging arbitrary files like audio, video, or model weights to `ILogger` and `IRunner` ([#1196](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1196))","2021-04-30T15:40:12",{"id":242,"version":243,"summary_zh":244,"released_at":245},71726,"v21.04.1","- a small hotfix for `catalyst.contrib` module","2021-04-19T07:05:54",{"id":247,"version":248,"summary_zh":249,"released_at":250},71727,"v21.04","## [21.04] - 2021-04-17\r\n\r\n\r\n### Added\r\n\r\n- Nifti Reader (NiftiReader) ([#1151](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1151))\r\n- CMC score and callback for ReID task (ReidCMCMetric and ReidCMCScoreCallback) ([#1170](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1170))\r\n- Market1501 metric learning datasets (Market1501MLDataset and Market1501QGDataset) ([#1170](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1170))\r\n- extra kwargs support for Engines ([#1156](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1156))\r\n- engines exception for unknown model type ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n- a few docs to the supported loggers ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n\r\n### Changed\r\n\r\n- ``TensorboardLogger`` switched from ``global_batch_step`` counter to ``global_sample_step`` one ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n- ``TensorboardLogger`` logs loader metric ``on_loader_end`` rather than ``on_epoch_end`` ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n- ``prefix`` renamed to ``metric_key`` for ``MetricAggregationCallback`` ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n- ``micro``, ``macro`` and ``weighted`` aggregations renamed to ``_micro``, ``_macro`` and ``_weighted`` ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n- ``BatchTransformCallback`` updated ([#1153](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1153))\r\n\r\n### Removed\r\n\r\n- auto ``torch.sigmoid`` usage for ``metrics.AUCMetric`` and ``metrics.auc`` ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n\r\n### Fixed\r\n\r\n- hitrate calculation issue ([#1155](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1155))\r\n- ILoader wrapper usage issue with Runner ([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n- counters for ddp case 
([#1174](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1174))\r\n","2021-04-17T05:15:53",{"id":252,"version":253,"summary_zh":254,"released_at":255},71728,"v21.03.2","### Fixed\r\n\r\n- minimal requirements issue ([#1147](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1147))","2021-03-29T06:20:55",{"id":257,"version":258,"summary_zh":259,"released_at":260},71729,"v21.03.1","## [21.03.1] - 2021-03-28\r\n\r\n### Added\r\n\r\n- Additive Margin SoftMax (AMSoftmax) ([#1125](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1125))\r\n- Generalized Mean Pooling (GeM) ([#1084](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1084))\r\n- Key-value support for CriterionCallback ([#1130](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1130))\r\n- Engine configuration through cmd ([#1134](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1134))\r\n- Extra utils for thresholds ([#1134](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1134))\r\n- Added gradient clipping function to optimizer callback ([#1124](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1124))\r\n- FactorizedLinear to contrib ([#1142](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1142))\r\n- Extra init params for ``ConsoleLogger`` ([#1142](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1142))\r\n- Tracing, Quantization, Onnx, Pruning Callbacks ([#1127](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1127))\r\n- `_key_value` for schedulers in case of multiple optimizers fixed ([#1146](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1146))\r\n\r\n### Changed\r\n\r\n- CriterionCallback now inherits from BatchMetricCallback ([#1130](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1130))\r\n    - unified metrics computation logic\r\n\r\n### Removed\r\n\r\n- Config API deprecated parsing logic ([#1142](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1142)) ([#1138](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1138))\r\n\r\n### Fixed\r\n\r\n- Data-Model device sync and ``Engine`` logic during `runner.predict_loader` ([#1134](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1134))\r\n- BatchLimitLoaderWrapper logic for loaders with shuffle flag ([#1136](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1136))\r\n- config description in the examples ([#1142](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1142))\r\n- Config API deprecated parsing logic ([#1142](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1142)) ([#1138](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1138))\r\n- RecSys metrics Top_k calculations ([#1140](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fpull\u002F1140))","2021-03-28T17:39:48",{"id":262,"version":263,"summary_zh":264,"released_at":265},71730,"v21.03","The `v20` is dead, long live the `v21`! 
\r\n\r\n## [21.03] - 2021-03-13 ([#1095](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fissues\u002F1095))\r\n\r\n### Added\r\n\r\n- [``Engine`` abstraction](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fapi\u002Fengines.html) to support various hardware backends and accelerators: CPU, GPU, multi-GPU, distributed GPU, TPU, Apex, and AMP half-precision training.\r\n- [``Logger`` abstraction](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fapi\u002Floggers.html) to support various monitoring tools: console, tensorboard, MLflow, etc.\r\n- ``Trial`` abstraction to support various hyperparameter-optimization tools: Optuna, Ray, etc.\r\n- [``Metric`` abstraction](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fapi\u002Fmetrics.html) to support various machine learning metrics: classification, segmentation, RecSys, and NLP.\r\n- Full support for the Hydra API.\r\n- Full DDP support for the Python API.\r\n- MLflow support for metrics logging.\r\n- United API for model post-processing: tracing, quantization, pruning, onnx-exporting.\r\n- United API for metrics: classification, segmentation, RecSys, and NLP, with full DDP and micro\u002Fmacro\u002Fweighted\u002Fetc. aggregation support.\r\n\r\n### Changed\r\n\r\n- ``Experiment`` abstraction merged into the ``Runner`` one.\r\n- Runner, SupervisedRunner, ConfigRunner, and HydraRunner architectures and dependencies redesigned.\r\n- Internal [settings](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fblob\u002Fmaster\u002Fcatalyst\u002Fsettings.py) and [registry](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Fblob\u002Fmaster\u002Fcatalyst\u002Fregistry.py) mechanisms refactored to be simpler, user-friendly, and more extendable.\r\n- A bunch of Config API tests were replaced with Python API tests and pytest.\r\n- Codestyle now supports up to 99 symbols per line :)\r\n- All callbacks\u002Frunners moved from contrib to the library core where possible.\r\n- ``Runner`` abstraction simplified to store only the current state of the experiment run; all validation logic moved to the callbacks (this way, you can easily select the best model on several metrics simultaneously).\r\n- ``Runner.input`` and ``Runner.output`` merged into a united ``Runner.batch`` storage for simplicity.\r\n- All metrics moved from ``catalyst.utils.metrics`` to ``catalyst.metrics``.\r\n- All metrics now work on scores\u002Fmetric-defined inputs rather than logits (!).\r\n- Logging logic moved from ``Callbacks`` to the appropriate ``Loggers``.\r\n- ``KorniaCallbacks`` refactored into ``BatchTransformCallback``.\r\n\r\n### Removed\r\n\r\n- Lots of unnecessary contrib extensions.\r\n- Transforms configuration support through the Config API (could return in future releases).\r\n- Integrated Python cmd command for model pruning, SWA, etc. (should return in future releases).\r\n- ``CallbackOrder.Validation`` and ``CallbackOrder.Logging``.\r\n- All 2020 backward-compatibility fixes and legacy support.\r\n\r\n### Fixed\r\n\r\n- Docs rendering simplified.\r\n- LrFinderCallback.\r\n\r\n[Release docs](https:\u002F\u002Fcatalyst-team.github.io\u002Fcatalyst\u002Fv21.03\u002Findex.html),\r\n[Python API minimal examples](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst#minimal-examples), \r\n[Config\u002FHydra API example](https:\u002F\u002Fgithub.com\u002Fcatalyst-team\u002Fcatalyst\u002Ftree\u002Fmaster\u002Fexamples\u002Fmnist_stages).","2021-03-13T11:01:59"]