[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mosaicml--composer":3,"tool-mosaicml--composer":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":10,"env_os":100,"env_gpu":101,"env_ram":100,"env_deps":102,"category_tags":107,"github_topics":108,"view_count":117,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":118,"updated_at":119,"faqs":120,"releases":150},103,"mosaicml\u002Fcomposer","composer","Supercharge Your Model Training","Composer 是由 MosaicML 推出的开源深度学习训练库，基于 PyTorch 构建，旨在显著提升模型训练效率。Composer 主要解决了大规模集群上分布式训练流程复杂的问题。以往开发者需要手动处理并行技术、分布式数据加载和内存优化等底层细节，而 Composer 将这些复杂性抽象化，让用户能专注于模型实验与迭代，无需因基础设施问题而减速。\n\nComposer 特别适合需要训练各类神经网络的开发者和研究人员，涵盖大型语言模型（LLM）、扩散模型、嵌入模型及卷积神经网络等。Composer 的核心优势在于其高度优化的 Trainer 抽象层，专为现代深度学习工作负载设计。Composer 集成了高效的多节点训练最佳实践，曾被 MosaicML 团队用于训练 MPT 等前沿模型。即使面对集群级硬件需求，Composer 也能让训练过程变得简单流畅，帮助社区以更低的门槛实现可扩展的高效训练。","\u003Cbr \u002F>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer#gh-light-mode-only\" class=\"only-light\">\n      \u003Cimg src=\".\u002Fdocs\u002Fsource\u002F_static\u002Flogo-light-mode.png\" width=\"50%\"\u002F>\n    \u003C\u002Fa>\n    \u003C!-- 
SETUPTOOLS_LONG_DESCRIPTION_HIDE_BEGIN -->\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer#gh-dark-mode-only\" class=\"only-dark\">\n      \u003Cimg src=\".\u002Fdocs\u002Fsource\u002F_static\u002Flogo-dark-mode.png\" width=\"50%\"\u002F>\n    \u003C\u002Fa>\n    \u003C!-- SETUPTOOLS_LONG_DESCRIPTION_HIDE_END -->\n\u003C\u002Fp>\n\n\u003Ch2>\u003Cp align=\"center\">Supercharge your Model Training\u003C\u002Fp>\u003C\u002Fh2>\n\u003Ch3>\u003Cp align=\"center\">Deep Learning Framework for Training at Scale\u003C\u002Fp>\u003C\u002Fh3>\n\n\u003Ch4>\u003Cp align='center'>\n\u003Ca href=\"https:\u002F\u002Fwww.mosaicml.com\">[Website]\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fgetting_started\u002Finstallation.html\">[Getting Started]\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002F\">[Docs]\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fwww.databricks.com\u002Fcompany\u002Fcareers\u002Fopen-positions?department=Mosaic%20AI&location=all\">[We're Hiring!]\u003C\u002Fa>\n\u003C\u002Fp>\u003C\u002Fh4>\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmosaicml\u002F\">\n        \u003Cimg alt=\"PyPi Version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fmosaicml\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmosaicml\u002F\">\n        \u003Cimg alt=\"PyPi Package Version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmosaicml\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmosaicml\u002F\">\n        \u003Cimg alt=\"PyPi Downloads\" src=\"https:\u002F\u002Fstatic.pepy.tech\u002Fpersonalized-badge\u002Fmosaicml?period=month&units=international_system&left_color=grey&right_color=blue&left_text=Downloads\u002Fmonth\">\n    \u003C\u002Fa>\n    \u003Ca 
href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002F\">\n        \u003Cimg alt=\"Documentation\" src=\"https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fcomposer\u002Fbadge\u002F?version=stable\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdub.sh\u002Fmcomm\">\n        \u003Cimg alt=\"Chat @ Slack\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-2eb67d.svg?logo=slack\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmosaicml_composer_readme_15ad78713c43.png\">\n        \u003Cimg alt=\"License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-green.svg?logo=slack\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cbr \u002F>\n\n# **👋 Welcome**\n\nComposer is an open-source deep learning training library by [MosaicML](https:\u002F\u002Fwww.mosaicml.com\u002F). Built on top of PyTorch, the Composer library makes it easier to implement distributed training workflows on large-scale clusters.\n\nWe built Composer to be **optimized for scalability and usability**, integrating best practices for efficient, multi-node training. By abstracting away low-level complexities like parallelism techniques, distributed data loading, and memory optimization, you can focus on training modern ML models and running experiments without slowing down.\n\nWe recommend using Composer to speedup your experimentation workflow if you’re training neural networks of any size, including:\n\n- Large Language Models (LLMs)\n- Diffusion models\n- Embedding models (e.g. BERT)\n- Transformer-based models\n- Convolutional Neural Networks (CNNs)\n\nComposer is heavily used by the MosaicML research team to train state-of-the-art models like MPT, and we open-sourced this library to enable the ML community to do the same. 
This framework is used by organizations in both the tech industry and the academic sphere and is continually updated with new features, bug fixes, and stability improvements for production workloads.\n\n# **🔑 Key Features**\n![Composer gives you better workflows with the ability to maximize scale and customizability.](docs\u002Fsource\u002F_static\u002Fimages\u002Fkey_features.png)\n\nWe designed Composer from the ground up for modern deep learning workloads. Gone are the days of AlexNet and ResNet, when state-of-the-art models could be trained on a couple of desktop GPUs. Today, developing the latest and greatest deep learning models often requires cluster-scale hardware — but with Composer’s help, you’ll hardly notice the difference.\n\nThe heart of Composer is our Trainer abstraction: a highly optimized PyTorch training loop designed to allow both you and your model to iterate faster. Our trainer has simple ways for you to configure your parallelization scheme, data loaders, metrics, loggers, and more.\n\n## Scalability\n\nWhether you’re training on 1 GPU or 512 GPUs, 50MB or 10TB of data - Composer is built to keep your workflow simple.\n\n- [**FSDP**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fdistributed_training.html#fullyshardeddataparallel-fsdp): For models too large to fit on GPUs, Composer has integrated PyTorch [FullyShardedDataParallelism](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fdistributed_training.html#fullyshardeddataparallel-fsdp) into our trainer and made it simple to efficiently parallelize custom models. We’ve found FSDP is competitive performance-wise with much more complex parallelism strategies. 
Alternatively, Composer also supports standard PyTorch distributed data parallelism (DDP) execution.\n- [**Elastic sharded checkpointing**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fdistributed_training.html#saving-and-loading-sharded-checkpoints-with-fsdp): Save on eight GPUs, resume on sixteen. Composer supports elastic sharded checkpointing, so you never have to worry if your sharded saved state is compatible with your new hardware setup.\n- **Data streaming:** Working with large datasets? Download datasets from cloud blob storage on the fly by integrating with MosaicML [StreamingDataset](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fstreaming) during model training.\n\n## Customizability\n\nOther high-level deep learning trainers provide simplicity at the cost of rigidity. When you want to add your own features, their abstractions get in your way. Composer, on the other hand, provides simple ways for you to customize our Trainer to your needs.\n\n![Composer’s training loop has a series of events that occur at each stage in the training process.](docs\u002Fsource\u002F_static\u002Fimages\u002Ftraning_loop.png)\n\n***Fig. 1:** Composer’s training loop has a series of events that occur at each stage in the training process. Callbacks are functions that users write to run at specific events. For example, our [Learning Rate Monitor Callback](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fapi_reference\u002Fgenerated\u002Fcomposer.callbacks.LRMonitor.html#composer.callbacks.LRMonitor) logs the learning rate at every BATCH_END event.*\n\n- [**Callbacks**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Ftrainer\u002Fcallbacks.html): Composer’s callback system allows you to insert custom logic at any point in the training loop. 
We’ve written callbacks to monitor memory usage, log and visualize images, and estimate your model’s remaining training time, to name a few. This feature is popular among researchers who want to implement and experiment with custom training techniques.\n- [**Speedup algorithms**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Fcustom_speedup_methods.html): We draw from the latest research to create a collection of algorithmic speedups. Stack these speedups into MosaicML recipes to boost your training speeds. Our team has open-sourced the optimal combinations of speedups for different types of models.\n    - **8x speedup: Stable Diffusion**\n        - $200k original SD2 cost —> $50k ([Blog](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fdiffusion))\n    - **7x speedup: ResNet-50 on ImageNet**\n        - 3h33m —> 25m on 8xA100 ([Blog](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmosaic-resnet))\n    - **8.8x speedup: BERT-Base Pretraining**\n        - 10h —> 1.13h on 8xA100 ([Blog](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmosaicbert))\n    - **5.4x speedup: DeepLab v3 on ADE20K**\n        - 3h30m —> 39m on 8xA100 ([Blog](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fbehind-the-scenes))\n\n## Better workflows\n\nComposer is built to automate away low-level pain points and headaches so you can focus on the important (and fun) parts of deep learning and iterate faster.\n\n- [**Auto-resumption**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fresumption.html): Failed training run? Have no fear — just re-run your code, and Composer will automatically resume from your latest saved checkpoint.\n- [**CUDA OOM Prevention**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Fauto_microbatching.html): Say goodbye to out-of-memory errors. 
Set your microbatch size to “auto”, and Composer will automatically select the biggest one that fits on your GPUs.\n- **[Time Abstractions](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Flatest\u002Ftrainer\u002Ftime.html):** Ever messed up your conversion between update steps, epochs, samples, and tokens? Specify your training duration with custom units (epochs, batches, samples, and tokens) in your training loop with our `Time` class.\n\n## Integrations\n\nIntegrate with the tools you know and love for experiment tracking and data streaming.\n\n- **Cloud integrations**: Our Checkpointing and logging features have first-class support for remote storage and loading from Cloud bucket (OCI, GCP, AWS S3).\n- **Experiment tracking:** Weights and Biases, MLFlow, CometML, and neptune.ai — the choice is yours, easily log your data to your favorite platform.\n\n# **🚀 Getting Started**\n\n## **📍**Prerequisites\n\nComposer is designed for users who are comfortable with Python and have basic familiarity with deep learning fundamentals and PyTorch.\n\n**Software requirements:**  A recent version of PyTorch.\n\n**Hardware requirements:**  System with CUDA-compatible GPUs (AMD + RoCM coming soon!). Composer can run on CPUs, but for full benefits, we recommend using it on hardware accelerators.\n\n## **💾 Installation**\n\nComposer can be installed with `pip`:\n\n\u003C!--pytest.mark.skip-->\n```bash\npip install mosaicml\n```\n\nTo simplify the environment setup for Composer, we also provide a set of [pre-built Docker images](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fgetting_started\u002Finstallation.html#docker). 
We *highly recommend* you use our Docker images.\n\n## **🏁 Quick Start**\n\nHere is a code snippet demonstrating our Trainer on the MNIST dataset.\n\n\u003C!--pytest.mark.filterwarnings(r'ignore:Some targets have less than 1 total probability:UserWarning')-->\n\u003C!--pytest.mark.filterwarnings('ignore:Cannot split tensor of length .* into batches of size 128.*:UserWarning')-->\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\nfrom composer import Trainer\nfrom composer.models import ComposerClassifier\nfrom composer.algorithms import LabelSmoothing, CutMix, ChannelsLast\n\nclass Model(nn.Module):\n    \"\"\"Toy convolutional neural network architecture in pytorch for MNIST.\"\"\"\n\n    def __init__(self, num_classes: int = 10):\n        super().__init__()\n\n        self.num_classes = num_classes\n\n        self.conv1 = nn.Conv2d(1, 16, (3, 3), padding=0)\n        self.conv2 = nn.Conv2d(16, 32, (3, 3), padding=0)\n        self.bn = nn.BatchNorm2d(32)\n        self.fc1 = nn.Linear(32 * 16, 32)\n        self.fc2 = nn.Linear(32, num_classes)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = F.relu(out)\n        out = self.conv2(out)\n        out = self.bn(out)\n        out = F.relu(out)\n        out = F.adaptive_avg_pool2d(out, (4, 4))\n        out = torch.flatten(out, 1, -1)\n        out = self.fc1(out)\n        out = F.relu(out)\n        return self.fc2(out)\n\ntransform = transforms.Compose([transforms.ToTensor()])\ndataset = datasets.MNIST(\"data\", train=True, download=True, transform=transform)\ntrain_dataloader = DataLoader(dataset, batch_size=128)\n\ntrainer = Trainer(\n    model=ComposerClassifier(module=Model(), num_classes=10),\n    train_dataloader=train_dataloader,\n    max_duration=\"2ep\",\n    algorithms=[\n        LabelSmoothing(smoothing=0.1),\n        CutMix(alpha=1.0),\n        ChannelsLast(),\n    
],\n)\ntrainer.fit()\n```\n\nNext, check out our [Getting Started Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmosaicml\u002Fcomposer\u002Fblob\u002F9f594876f957c912758e540598ac9f47a468c39d\u002Fexamples\u002Fgetting_started.ipynb) for a walk-through of Composer’s main features. In this tutorial, we will cover the basics of the Composer Trainer:\n\n- Dataloader\n- Trainer\n- Optimizer and Scheduler\n- Logging\n- Training a baseline model\n- Speeding up training\n\n## **📚 Learn more**\n\nOnce you’ve completed the Quick Start, you can go through the below tutorials or our [documentation](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002F) to further familiarize yourself with Composer.\n\nIf you have any questions, please feel free to reach out to us on our [Community Slack](https:\u002F\u002Fdub.sh\u002Fmcomm)!\n\nHere are some resources actively maintained by the Composer community to help you get started:\n\u003Ctable>\n\u003Cthead>\n  \u003Ctr>\n      \u003Cth>\u003Cb>Resource\u003C\u002Fb>\u003C\u002Fth>\n      \u003Cth>\u003Cb>Details\u003C\u002Fb>\u003C\u002Fth>\n  \u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n    \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmosaicml\u002Fcomposer\u002Fblob\u002Fdev\u002Fexamples\u002Ffinetune_huggingface.ipynb\" target=\"_blank\" rel=\"noopener noreferrer\">Training BERTs with Composer and 🤗 \u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>A Colab Notebook showing how to train BERT models with Composer and 🤗!\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmosaicml_composer_readme_3c63b71e13a8.png\" target=\"_blank\" rel=\"noopener noreferrer\">Pretraining and Finetuning an LLM Tutorial\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>A tutorial from MosaicML’s LLM Foundry, using MosaicML Composer, StreamingDataset, and MCLI on training 
and evaluating LLMs.\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Fmigrate_from_ptl.html\" target=\"_blank\" rel=\"noopener noreferrer\">Migrating from PyTorch Lightning\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>A tutorial illustrating a path from working in PyTorch Lightning to working in Composer.\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Ffinetune_huggingface.html\" target=\"_blank\" rel=\"noopener noreferrer\">Finetuning and Pretraining HuggingFace Models\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>Want to use Hugging Face models with Composer? No problem. Here, we’ll walk through using Composer to fine-tune a pretrained Hugging Face BERT model.\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmosaicml\u002Fcomposer\u002Fblob\u002Fdev\u002Fexamples\u002Fcustom_speedup_methods.ipynb\" target=\"_blank\" rel=\"noopener noreferrer\">Building Speedup Methods\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>A Colab Notebook showing how to build new training modifications on top of Composer.\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\n# 🛠️ For Best Results, Use within the Databricks & MosaicML Ecosystem\n\nComposer can be used on its own, but for the smoothest experience we recommend using it in combination with other components of the MosaicML ecosystem:\n\n![We recommend that you train models with Composer, MosaicML StreamingDatasets, and Mosaic AI training.](docs\u002Fsource\u002F_static\u002Fimages\u002Fecosystem.png)\n\n- [**Mosaic AI training**](https:\u002F\u002Fwww.databricks.com\u002Fproduct\u002Fmachine-learning\u002Fmosaic-ai-training) (MCLI) - Our proprietary Command Line 
Interface (CLI) and Python SDK for orchestrating, scaling, and monitoring the GPU nodes and container images executing training and deployment. Used by our customers for training their own Generative AI models.\n    - **To get started, [reach out here](https:\u002F\u002Fwww.databricks.com\u002Fcompany\u002Fcontact) and check out our [Training](https:\u002F\u002Fwww.databricks.com\u002Fproduct\u002Fmachine-learning\u002Fmosaic-ai-training) product pages**\n- [**MosaicML LLM Foundry**](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fllm-foundry) - This open source repository contains code for training, finetuning, evaluating, and preparing LLMs for inference with [Composer](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer). Designed to be easy to use, efficient and flexible, this codebase is designed to enable rapid experimentation with the latest techniques.\n- [**MosaicML StreamingDataset**](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fstreaming) - Open-source library for fast, accurate streaming from cloud storage.\n- [**MosaicML Diffusion**](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fdiffusion) - Open-source code to train your own Stable Diffusion model on your own data.  Learn more via our blogs: ([Results](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fstable-diffusion-2) , [Speedup Details](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fdiffusion))\n\n# **🏆 Project Showcase**\n\nHere are some projects and experiments that used Composer. Got something to add? 
Share in our [Community Slack](https:\u002F\u002Fdub.sh\u002Fmcomm)!\n\n- [**MPT Foundation Series:**](https:\u002F\u002Fwww.mosaicml.com\u002Fmpt) Commercially usable open source LLMs, optimized for fast training and inference and trained with Composer.\n    - [MPT-7B Blog](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmpt-7b)\n    - [MPT-7B-8k Blog](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Flong-context-mpt-7b-8k)\n    - [MPT-30B Blog](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmpt-30b)\n- [**Mosaic Diffusion Models**](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Ftraining-stable-diffusion-from-scratch-costs-160k): see how we trained a stable diffusion model from scratch for \u003C$50k\n- [**replit-code-v1-3b**](https:\u002F\u002Fhuggingface.co\u002Freplit\u002Freplit-code-v1-3b): A 2.7B Causal Language Model focused on **Code Completion,** trained by Replit on Mosaic AI training in 10 days.\n- **BabyLLM:** the first LLM to support both Arabic and English. This 7B model was trained by MetaDialog on the world’s largest Arabic\u002FEnglish dataset to improve customer support workflows ([Blog](https:\u002F\u002Fblogs.nvidia.com\u002Fblog\u002F2023\u002F08\u002F31\u002Fgenerative-ai-startups-africa-middle-east\u002F))\n- [**BioMedLM**](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fintroducing-pubmed-gpt): a domain-specific LLM for Bio Medicine built by MosaicML and [Stanford CRFM](https:\u002F\u002Fcrfm.stanford.edu\u002F)\n\n# 💫 Contributors\n\nComposer is part of the broader Machine Learning community, and we welcome any contributions, pull requests, or issues!\n\nTo start contributing, see our [Contributing](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmosaicml_composer_readme_4f664175bf08.png) page.\n\nP.S.: [We're hiring](https:\u002F\u002Fwww.databricks.com\u002Fcompany\u002Fcareers\u002Fopen-positions?department=Mosaic%20AI&location=all)!\n\n# ❓FAQ\n\n- **What is the best tech stack you recommend when training large 
models?**\n    - We recommend that users combine components of the MosaicML ecosystem for the smoothest experience:\n        - Composer\n        - [StreamingDataset](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fstreaming)\n        - [MCLI](https:\u002F\u002Fwww.databricks.com\u002Fproduct\u002Fmachine-learning\u002Fmosaic-ai-training) (Databricks Mosaic AI Training)\n- **How can I get community support for using Composer?**\n    - You can join our [Community Slack](https:\u002F\u002Fdub.sh\u002Fmcomm)!\n- **How does Composer compare to other trainers like NeMo Megatron and PyTorch Lightning?**\n    - We built Composer to be optimized for both simplicity and efficiency. Community users have shared that they enjoy Composer for its capabilities and ease of use compared to alternative libraries.\n- **How do I use Composer to train graph neural networks (GNNs), or Generative Adversarial Networks (GANs), or models for reinforcement learning (RL)?**\n    - We recommend you use alternative libraries if you want to train these types of models; many of the assumptions we made when designing Composer are suboptimal for GNNs, RL, and GANs.\n- **How can I speed up HuggingFace downloads?**\n    - You can use hf transfer (`pip install hf-transfer`) and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`\n\n# ✍️ Citation\n```\n@misc{mosaicml2022composer,\n    author = {The Mosaic ML Team},\n    title = {composer},\n    year = {2021},\n    howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002F}},\n}\n```\n","\u003Cbr \u002F>\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer#gh-light-mode-only\" class=\"only-light\">\n      \u003Cimg src=\".\u002Fdocs\u002Fsource\u002F_static\u002Flogo-light-mode.png\" width=\"50%\"\u002F>\n    \u003C\u002Fa>\n    \u003C!-- SETUPTOOLS_LONG_DESCRIPTION_HIDE_BEGIN -->\n    \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer#gh-dark-mode-only\" class=\"only-dark\">\n      \u003Cimg src=\".\u002Fdocs\u002Fsource\u002F_static\u002Flogo-dark-mode.png\" width=\"50%\"\u002F>\n    \u003C\u002Fa>\n    \u003C!-- SETUPTOOLS_LONG_DESCRIPTION_HIDE_END -->\n\u003C\u002Fp>\n\n\u003Ch2>\u003Cp align=\"center\">为你的模型训练赋能\u003C\u002Fp>\u003C\u002Fh2>\n\u003Ch3>\u003Cp align=\"center\">大规模训练深度学习（Deep Learning）框架\u003C\u002Fp>\u003C\u002Fh3>\n\n\u003Ch4>\u003Cp align='center'>\n\u003Ca href=\"https:\u002F\u002Fwww.mosaicml.com\">[网站]\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fgetting_started\u002Finstallation.html\">[快速入门]\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002F\">[文档]\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fwww.databricks.com\u002Fcompany\u002Fcareers\u002Fopen-positions?department=Mosaic%20AI&location=all\">[我们正在招聘！]\u003C\u002Fa>\n\u003C\u002Fp>\u003C\u002Fh4>\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmosaicml\u002F\">\n        \u003Cimg alt=\"PyPi 版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fmosaicml\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmosaicml\u002F\">\n        \u003Cimg alt=\"PyPi 包版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmosaicml\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmosaicml\u002F\">\n        \u003Cimg alt=\"PyPi 下载量\" src=\"https:\u002F\u002Fstatic.pepy.tech\u002Fpersonalized-badge\u002Fmosaicml?period=month&units=international_system&left_color=grey&right_color=blue&left_text=Downloads\u002Fmonth\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002F\">\n        \u003Cimg alt=\"文档\" 
src=\"https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fcomposer\u002Fbadge\u002F?version=stable\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdub.sh\u002Fmcomm\">\n        \u003Cimg alt=\"Slack 聊天\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-2eb67d.svg?logo=slack\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmosaicml_composer_readme_7a33cbdb1366.png\">\n        \u003Cimg alt=\"许可证\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-green.svg?logo=slack\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cbr \u002F>\n\n# **👋 欢迎**\n\nComposer 是由 [MosaicML](https:\u002F\u002Fwww.mosaicml.com\u002F) 开发的开源深度学习（Deep Learning）训练库。基于 PyTorch 构建，Composer 库使得在大规模集群上实现分布式训练（Distributed Training）工作流变得更加容易。\n\n我们构建 Composer 是为了**针对可扩展性（Scalability）和易用性（Usability）进行优化**，集成了高效多节点训练的最佳实践。通过抽象掉并行化技术、分布式数据加载和内存优化等底层复杂性，您可以专注于训练现代机器学习（ML）模型和运行实验，而无需担心速度变慢。\n\n如果您正在训练任何规模的神经网络，包括以下模型，我们建议使用 Composer 来加速您的实验工作流：\n\n- 大型语言模型（Large Language Models, LLMs）\n- 扩散模型（Diffusion models）\n- 嵌入模型（Embedding models，例如 BERT）\n- 基于 Transformer 的模型\n- 卷积神经网络（Convolutional Neural Networks, CNNs）\n\nMosaicML 研究团队广泛使用 Composer 来训练最先进（State-of-the-art）的模型，如 MPT，我们将此库开源以使机器学习社区也能做到同样的事情。该框架被科技行业和学术界的组织使用，并不断更新新功能、错误修复和稳定性改进，以适应生产负载。\n\n# **🔑 主要特性**\n![Composer 旨在为您提供更好的工作流，能够最大化扩展性和可定制性。](docs\u002Fsource\u002F_static\u002Fimages\u002Fkey_features.png)\n\n我们从零开始设计 Composer，以适应现代深度学习工作负载。AlexNet 和 ResNet 的时代已经过去，那时最先进的模型可以在几块桌面 GPU（图形处理器）上训练。如今，开发最新最好的深度学习模型通常需要集群级硬件——但在 Composer 的帮助下，您几乎不会注意到其中的差异。\n\nComposer 的核心是我们的 Trainer 抽象：一个高度优化的 PyTorch 训练循环，旨在让您和您的模型都能更快地迭代。我们的 Trainer 提供了简单的方法来配置并行化方案、数据加载器、指标、日志记录器等。\n\n## 可扩展性\n\n无论您是在 1 个 GPU（图形处理器）还是 512 个 GPU 上训练，处理 50MB 还是 10TB 的数据 - Composer 旨在保持您的工作流简单。\n\n- [**FSDP**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fdistributed_training.html#fullyshardeddataparallel-fsdp)：对于大到无法放入 GPU 
的大型模型，Composer 已将 PyTorch [完全分片数据并行（FullyShardedDataParallelism）](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fdistributed_training.html#fullyshardeddataparallel-fsdp) 集成到我们的 Trainer 中，并使得高效并行化自定义模型变得简单。我们发现 FSDP 在性能上与更复杂的并行策略具有竞争力。或者，Composer 也支持标准的 PyTorch 分布式数据并行（Distributed Data Parallelism, DDP）执行。\n- [**弹性分片检查点（Elastic sharded checkpointing）**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fdistributed_training.html#saving-and-loading-sharded-checkpoints-with-fsdp)：在八个 GPU 上保存，在十六个 GPU 上恢复。Composer 支持弹性分片检查点，因此您无需担心分片保存的状态是否与新的硬件设置兼容。\n- **数据流式传输（Data streaming）：** 处理大型数据集？通过在模型训练期间集成 MosaicML [StreamingDataset](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fstreaming)，按需从云对象存储（Cloud blob storage）下载数据集。\n\n## 可定制性\n\n其他高级深度学习训练器（deep learning trainers）往往以牺牲灵活性为代价来换取简便性。当您想要添加自己的功能时，它们的抽象层反而会成为阻碍。相比之下，Composer 提供了简单的方法，让您可以根据需求定制我们的 Trainer（训练器）。\n\n![Composer's training loop has a series of events that occur at each stage in the training process.](docs\u002Fsource\u002F_static\u002Fimages\u002Ftraning_loop.png)\n\n***图 1：** Composer 的训练循环在训练过程的每个阶段都有一系列事件发生。回调（Callbacks）是用户编写的函数，用于在特定事件运行时执行。例如，我们的 [Learning Rate Monitor Callback](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fapi_reference\u002Fgenerated\u002Fcomposer.callbacks.LRMonitor.html#composer.callbacks.LRMonitor) 会在每个 BATCH_END 事件记录学习率。*\n\n- [**回调（Callbacks）**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Ftrainer\u002Fcallbacks.html)：Composer 的回调系统允许您在训练循环的任何位置插入自定义逻辑。我们编写了回调来监控内存使用情况、记录和可视化图像，以及估算模型的剩余训练时间等。此功能在想要实现和实验自定义训练技术的研究人员中很受欢迎。\n- [**加速算法（Speedup algorithms）**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Fcustom_speedup_methods.html)：我们借鉴最新研究创建了一系列算法加速方法。将这些加速方法堆叠到 MosaicML 配方（recipes）中以提升训练速度。我们的团队已经开源了针对不同类型模型的最佳加速组合。\n    - **8 
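上文描述的"事件 + 回调"机制可以用一个极简的纯 Python 草图来说明：训练循环在每个事件点依次触发所有回调上对应的方法。注意这只是概念示意，不依赖 Composer 本身；`Callback`、`LRMonitor`、`run_event`、`train` 等名称均为本示例假设，并非 Composer 的真实 API。

```python
class Callback:
    """回调基类：方法名与事件名一一对应，默认什么都不做。"""

    def batch_end(self, state):
        pass

    def epoch_end(self, state):
        pass


class LRMonitor(Callback):
    """类比正文提到的 Learning Rate Monitor：在每个 batch_end 事件记录学习率。"""

    def __init__(self):
        self.history = []

    def batch_end(self, state):
        self.history.append(state["lr"])


def run_event(event, state, callbacks):
    """在某个事件点上依次触发所有回调的同名方法。"""
    for cb in callbacks:
        getattr(cb, event)(state)


def train(num_epochs, batches_per_epoch, callbacks):
    """玩具训练循环：只负责在正确的位置触发事件。"""
    state = {"lr": 0.1}
    for _ in range(num_epochs):
        for _ in range(batches_per_epoch):
            state["lr"] *= 0.9  # 假装调度器在每个 batch 后衰减学习率
            run_event("batch_end", state, callbacks)
        run_event("epoch_end", state, callbacks)
    return state


monitor = LRMonitor()
train(num_epochs=2, batches_per_epoch=3, callbacks=[monitor])
print(len(monitor.history))  # 每个 batch 触发一次 batch_end，共输出 6
```

据文档，Composer 真实的回调方法还会收到 `state` 与 `logger` 等训练器对象，但"在事件点插入用户逻辑"的思路与此草图一致。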
倍加速：Stable Diffusion**\n        - $200k 原始 SD2 成本 —> $50k ([博客](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fdiffusion))\n    - **7 倍加速：ImageNet 上的 ResNet-50**\n        - 3 小时 33 分 —> 25 分钟（8xA100 上）([博客](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmosaic-resnet))\n    - **8.8 倍加速：BERT-Base 预训练**\n        - 10 小时 —> 1.13 小时（8xA100 上）([博客](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmosaicbert))\n    - **5.4 倍加速：ADE20K 上的 DeepLab v3**\n        - 3 小时 30 分 —> 39 分钟（8xA100 上）([博客](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fbehind-the-scenes))\n\n## 更优的工作流\n\nComposer 旨在自动消除底层的痛点和问题，以便您可以专注于深度学习的重要（且有趣）部分并更快地迭代。\n\n- [**自动恢复（Auto-resumption）**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fnotes\u002Fresumption.html)：训练运行失败？别担心——只需重新运行代码，Composer 就会自动从您最新保存的检查点（checkpoint）恢复。\n- [**CUDA 内存不足预防（CUDA OOM Prevention）**](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Fauto_microbatching.html)：告别内存不足（OOM）错误。将您的微批次大小（microbatch size）设置为\"auto\"，Composer 将自动选择适合您图形处理器（GPU）的最大值。\n- **[时间抽象（Time Abstractions）](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Flatest\u002Ftrainer\u002Ftime.html)：** 是否曾搞混过更新步骤、轮次（epochs）、样本（samples）和令牌（tokens）之间的转换？使用我们的 `Time` 类，在训练循环中指定具有自定义单位（epochs、batches、samples 和 tokens）的训练持续时间。\n\n## 集成\n\n与您熟悉和喜爱的实验跟踪和数据流工具集成。\n\n- **云集成**：我们的检查点（Checkpointing）和日志记录功能原生支持远程存储以及从云存储桶（OCI, GCP, AWS S3）加载。\n- **实验跟踪：** Weights and Biases、MLFlow、CometML 和 neptune.ai — 选择权在您，轻松将数据记录到您喜欢的平台。\n\n# **🚀 快速开始**\n\n## **📍** 前置条件\n\nComposer 专为熟悉 Python 并对深度学习基础知识和 PyTorch 有基本了解的用户设计。\n\n**软件要求：** 最新版本的 PyTorch。\n\n**硬件要求：** 具有 CUDA 兼容图形处理器（GPU）的系统（AMD + ROCm 即将推出！）。Composer 可以在中央处理器（CPU）上运行，但为了获得全部益处，我们建议在硬件加速器上使用它。\n\n## 
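上面提到的时间抽象可以这样理解：把 `2ep`、`100ba` 这类带单位的时长字符串解析出来，再按数据集大小和 batch size 在单位之间换算。下面是一个纯 Python 的概念草图（`parse_duration`、`to_samples` 为本示例假设的名称，并非 Composer `Time` 类的实现）：

```python
def parse_duration(spec):
    """把 '2ep' / '100ba' / '1000sp' 解析成 (数值, 单位) 二元组。"""
    for unit in ("tok", "ep", "ba", "sp"):
        if spec.endswith(unit):
            return int(spec[: -len(unit)]), unit
    raise ValueError(f"unknown unit in {spec!r}")


def to_samples(spec, dataset_size, batch_size):
    """统一折算成样本数，便于不同单位之间比较。"""
    value, unit = parse_duration(spec)
    if unit == "ep":
        return value * dataset_size      # 1 epoch = 整个数据集
    if unit == "ba":
        return value * batch_size        # 1 batch = batch_size 个样本
    if unit == "sp":
        return value                     # 本身就是样本数
    raise ValueError(f"cannot convert unit {unit!r} to samples")


# 60000 张 MNIST 图、batch_size=128 时，"2ep" 大约对应多少个完整 batch：
samples = to_samples("2ep", dataset_size=60000, batch_size=128)
print(samples // 128)  # 120000 // 128 = 937（不满一个 batch 的余数向下取整）
```

这样无论训练时长以 epoch、batch 还是 sample 指定，循环内部都能换算到同一种计数单位上比较。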
**💾 安装**\n\nComposer 可以通过 `pip` 安装：\n\n\u003C!--pytest.mark.skip-->\n```bash\npip install mosaicml\n```\n\n为了简化 Composer 的环境设置，我们还提供了一组 [预构建的 Docker 镜像](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fgetting_started\u002Finstallation.html#docker)。我们*强烈建议*您使用我们的 Docker 镜像。\n\n## **🏁 快速入门**\n\n这是一个代码片段，展示了我们在 MNIST 数据集上的 Trainer（训练器）。\n\n\u003C!--pytest.mark.filterwarnings(r'ignore:Some targets have less than 1 total probability:UserWarning')-->\n\u003C!--pytest.mark.filterwarnings('ignore:Cannot split tensor of length .* into batches of size 128.*:UserWarning')-->\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\nfrom composer import Trainer\nfrom composer.models import ComposerClassifier\nfrom composer.algorithms import LabelSmoothing, CutMix, ChannelsLast\n\nclass Model(nn.Module):\n    \"\"\"Toy convolutional neural network architecture in pytorch for MNIST.\"\"\"\n\n    def __init__(self, num_classes: int = 10):\n        super().__init__()\n\n        self.num_classes = num_classes\n\n        self.conv1 = nn.Conv2d(1, 16, (3, 3), padding=0)\n        self.conv2 = nn.Conv2d(16, 32, (3, 3), padding=0)\n        self.bn = nn.BatchNorm2d(32)\n        self.fc1 = nn.Linear(32 * 16, 32)\n        self.fc2 = nn.Linear(32, num_classes)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = F.relu(out)\n        out = self.conv2(out)\n        out = self.bn(out)\n        out = F.relu(out)\n        out = F.adaptive_avg_pool2d(out, (4, 4))\n        out = torch.flatten(out, 1, -1)\n        out = self.fc1(out)\n        out = F.relu(out)\n        return self.fc2(out)\n\ntransform = transforms.Compose([transforms.ToTensor()])\ndataset = datasets.MNIST(\"data\", train=True, download=True, transform=transform)\ntrain_dataloader = DataLoader(dataset, batch_size=128)\n\ntrainer = Trainer(\n    
model=ComposerClassifier(module=Model(), num_classes=10),\n    train_dataloader=train_dataloader,\n    max_duration=\"2ep\",\n    algorithms=[\n        LabelSmoothing(smoothing=0.1),\n        CutMix(alpha=1.0),\n        ChannelsLast(),\n    ],\n)\ntrainer.fit()\n```\n\n接下来，查看我们的 [入门 Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmosaicml\u002Fcomposer\u002Fblob\u002F9f594876f957c912758e540598ac9f47a468c39d\u002Fexamples\u002Fgetting_started.ipynb) 以逐步了解 Composer 的主要功能。在本教程中，我们将涵盖 Composer Trainer 的基础知识：\n\n- 数据加载器（Dataloader）\n- 训练器（Trainer）\n- 优化器（Optimizer）和调度器（Scheduler）\n- 日志记录\n- 训练基线模型\n- 加速训练\n\n## **📚 了解更多**\n\n完成快速入门 (Quick Start) 后，您可以浏览下面的教程或我们的 [文档](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002F) 以进一步熟悉 Composer (训练框架)。\n\n如果您有任何问题，请随时通过我们的 [社区 Slack](https:\u002F\u002Fdub.sh\u002Fmcomm) 与我们联系！\n\n以下是一些由 Composer 社区积极维护的资源，帮助您入门：\n\u003Ctable>\n\u003Cthead>\n  \u003Ctr>\n      \u003Cth>\u003Cb>资源\u003C\u002Fb>\u003C\u002Fth>\n      \u003Cth>\u003Cb>详情\u003C\u002Fb>\u003C\u002Fth>\n  \u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n    \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmosaicml\u002Fcomposer\u002Fblob\u002Fdev\u002Fexamples\u002Ffinetune_huggingface.ipynb\" target=\"_blank\" rel=\"noopener noreferrer\">使用 Composer 和 🤗 训练 BERT\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>一个 Colab 笔记本，展示如何使用 Composer 和 🤗 训练 BERT 模型！\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmosaicml_composer_readme_952cfc22bb5e.png\" target=\"_blank\" rel=\"noopener noreferrer\">大语言模型 (LLM) 预训练和微调教程\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>来自 MosaicML LLM Foundry 的教程，使用 MosaicML Composer、StreamingDataset 和 MCLI 进行大语言模型的训练和评估。\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca 
href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Fmigrate_from_ptl.html\" target=\"_blank\" rel=\"noopener noreferrer\">从 PyTorch Lightning 迁移\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>本教程说明了从 PyTorch Lightning 工作流迁移到 Composer 的路径。\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fexamples\u002Ffinetune_huggingface.html\" target=\"_blank\" rel=\"noopener noreferrer\">微调和预训练 HuggingFace 模型\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>想在 Composer 中使用 Hugging Face 模型？没问题。在这里，我们将逐步介绍如何使用 Composer 微调预训练的 Hugging Face BERT 模型。\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fmosaicml\u002Fcomposer\u002Fblob\u002Fdev\u002Fexamples\u002Fcustom_speedup_methods.ipynb\" target=\"_blank\" rel=\"noopener noreferrer\">构建加速方法\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd>一个 Colab 笔记本，展示如何在 Composer 之上构建新的训练修改方法\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\n# 🛠️ 为了获得最佳效果，请在 Databricks & MosaicML 生态系统内使用\n\nComposer 可以单独使用，但为了获得最流畅的体验，我们建议将其与 MosaicML 生态系统的其他组件结合使用：\n\n![我们建议您使用 Composer、MosaicML StreamingDatasets 和 Mosaic AI training 训练模型。](docs\u002Fsource\u002F_static\u002Fimages\u002Fecosystem.png)\n\n- [**Mosaic AI training**](https:\u002F\u002Fwww.databricks.com\u002Fproduct\u002Fmachine-learning\u002Fmosaic-ai-training) (MCLI)- 我们专有的命令行界面 (CLI) 和 Python SDK (软件开发工具包)，用于编排、扩展和监控执行训练和部署的 GPU 节点和容器镜像。我们的客户用它来训练自己的生成式 AI (Generative AI) 模型。\n    - **要开始使用，请 [在此联系我们](https:\u002F\u002Fwww.databricks.com\u002Fcompany\u002Fcontact) 并查看我们的 [Training](https:\u002F\u002Fwww.databricks.com\u002Fproduct\u002Fmachine-learning\u002Fmosaic-ai-training) 产品页面**\n- [**MosaicML LLM Foundry**](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fllm-foundry) - 这个开源仓库包含使用 
[Composer](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer) 训练、微调、评估和准备大语言模型 (LLM) 进行推理的代码。旨在易于使用、高效且灵活，该代码库旨在支持对最新技术进行快速实验。\n- [**MosaicML StreamingDataset**](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fstreaming) - 用于从云存储快速、准确流式传输的开源库。\n- [**MosaicML Diffusion**](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fdiffusion) - 开源代码，用于在您自己的数据上训练自己的 Stable Diffusion 模型。通过我们的博客了解更多：([结果](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fstable-diffusion-2) , [加速详情](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fdiffusion))\n\n# **🏆 项目展示**\n\n以下是一些使用 Composer 的项目和实验。有什么要补充的吗？请在我们的 [社区 Slack](https:\u002F\u002Fdub.sh\u002Fmcomm) 中分享！\n\n- [**MPT Foundation Series:**](https:\u002F\u002Fwww.mosaicml.com\u002Fmpt) 可用于商业目的的开源大语言模型 (LLM)，针对快速训练和推理进行了优化，并使用 Composer 训练。\n    - [MPT-7B 博客](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmpt-7b)\n    - [MPT-7B-8k 博客](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Flong-context-mpt-7b-8k)\n    - [MPT-30B 博客](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fmpt-30b)\n- [**Mosaic Diffusion 模型**](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Ftraining-stable-diffusion-from-scratch-costs-160k): 看看我们如何以低于 5 万美元的成本从头训练 Stable Diffusion 模型\n- [**replit-code-v1-3b**](https:\u002F\u002Fhuggingface.co\u002Freplit\u002Freplit-code-v1-3b): 一个 2.7B 因果语言模型，专注于 **代码补全**，由 Replit 在 Mosaic AI training 上训练，耗时 10 天。\n- **BabyLLM:** 第一个支持阿拉伯语和英语的大语言模型。这个 7B 模型由 MetaDialog 在世界上最大的阿拉伯语\u002F英语数据集上训练，以改善客户支持工作流 ([博客](https:\u002F\u002Fblogs.nvidia.com\u002Fblog\u002F2023\u002F08\u002F31\u002Fgenerative-ai-startups-africa-middle-east\u002F))\n- [**BioMedLM**](https:\u002F\u002Fwww.mosaicml.com\u002Fblog\u002Fintroducing-pubmed-gpt): 由 MosaicML 和 [Stanford CRFM](https:\u002F\u002Fcrfm.stanford.edu\u002F) 构建的生物医药领域专用大语言模型\n\n# 💫 贡献者\n\nComposer 是更广泛的机器学习 (Machine Learning) 社区的一部分，我们欢迎任何贡献、Pull requests 或问题！\n\n开始贡献，请参阅我们的 [贡献](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmosaicml_composer_readme_f9c4757049b6.png) 
页面。\n\n附注：[我们正在招聘](https:\u002F\u002Fwww.databricks.com\u002Fcompany\u002Fcareers\u002Fopen-positions?department=Mosaic%20AI&location=all)!\n\n# ❓ 常见问题解答 (FAQ)\n\n- **训练大型模型时，您推荐的最佳技术栈是什么？**\n    - 我们建议用户结合 MosaicML 生态系统的组件以获得最流畅的体验：\n        - Composer\n        - [StreamingDataset](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fstreaming)\n        - [MCLI](https:\u002F\u002Fwww.databricks.com\u002Fproduct\u002Fmachine-learning\u002Fmosaic-ai-training) (Databricks Mosaic AI Training)\n- **如何获得使用 Composer 的社区支持？**\n    - 您可以加入我们的 [社区 Slack](https:\u002F\u002Fdub.sh\u002Fmcomm)!\n- **Composer 与 NeMo Megatron 和 PyTorch Lightning 等其他训练器相比如何？**\n    - 我们构建 Composer 是为了优化简洁性和效率。社区用户分享说，与其他库相比，他们喜欢 Composer 的功能和易用性。\n- **如何使用 Composer 训练图神经网络 (GNNs)、生成对抗网络 (GANs) 或强化学习 (RL) 模型？**\n    - 如果您想训练这些类型的模型，我们建议您使用其他库：我们在设计 Composer 时所做的许多假设对 GNNs、RL 和 GANs 来说并不是最优的。\n- **如何加快 HuggingFace 下载速度？**\n    - 您可以使用 hf transfer (`pip install hf-transfer`) 并设置环境变量 `HF_HUB_ENABLE_HF_TRANSFER=1`。\n\n# ✍️ 引用\n```\n@misc{mosaicml2022composer,\n    author = {The Mosaic ML Team},\n    title = {composer},\n    year = {2021},\n    howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002F}},\n}\n```","# Composer 快速上手指南\n\n## 简介\nComposer 是由 MosaicML 开发的开源深度学习训练库，基于 PyTorch 构建。它专为大规模集群上的分布式训练设计，通过抽象底层复杂性（如并行技术、分布式数据加载和内存优化），帮助用户更高效地训练现代 ML 模型（如 LLM、Diffusion、Transformer 等）。\n\n## 环境准备\n\n**软件要求：**\n*   Python\n*   最新版本的 PyTorch\n\n**硬件要求：**\n*   推荐使用配备 CUDA 兼容 GPU 的系统（以获得最佳性能）。\n*   支持 CPU 运行，但无法发挥全部优势。\n*   （AMD + ROCm 支持即将推出）\n\n## 安装步骤\n\n您可以通过 `pip` 直接安装：\n\n```bash\npip install mosaicml\n```\n\n**推荐方式：**\n为了简化环境配置，官方提供了一套 [预构建的 Docker 镜像](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002Fen\u002Fstable\u002Fgetting_started\u002Finstallation.html#docker)，强烈建议使用 Docker 进行部署。\n\n## 基本使用\n\n以下是一个基于 MNIST 数据集的最小化训练示例，展示了如何使用 `Trainer` 以及集成算法（如标签平滑、CutMix 等）。\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom 
torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\nfrom composer import Trainer\nfrom composer.models import ComposerClassifier\nfrom composer.algorithms import LabelSmoothing, CutMix, ChannelsLast\n\nclass Model(nn.Module):\n    \"\"\"Toy convolutional neural network architecture in pytorch for MNIST.\"\"\"\n\n    def __init__(self, num_classes: int = 10):\n        super().__init__()\n\n        self.num_classes = num_classes\n\n        self.conv1 = nn.Conv2d(1, 16, (3, 3), padding=0)\n        self.conv2 = nn.Conv2d(16, 32, (3, 3), padding=0)\n        self.bn = nn.BatchNorm2d(32)\n        self.fc1 = nn.Linear(32 * 16, 32)\n        self.fc2 = nn.Linear(32, num_classes)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = F.relu(out)\n        out = self.conv2(out)\n        out = self.bn(out)\n        out = F.relu(out)\n        out = F.adaptive_avg_pool2d(out, (4, 4))\n        out = torch.flatten(out, 1, -1)\n        out = self.fc1(out)\n        out = F.relu(out)\n        return self.fc2(out)\n\ntransform = transforms.Compose([transforms.ToTensor()])\ndataset = datasets.MNIST(\"data\", train=True, download=True, transform=transform)\ntrain_dataloader = DataLoader(dataset, batch_size=128)\n\ntrainer = Trainer(\n    model=ComposerClassifier(module=Model(), num_classes=10),\n    train_dataloader=train_dataloader,\n    max_duration=\"2ep\",\n    algorithms=[\n        LabelSmoothing(smoothing=0.1),\n        CutMix(alpha=1.0),\n        ChannelsLast(),\n    ],\n)\ntrainer.fit()\n```\n\n**核心功能说明：**\n*   **Trainer:** 高度优化的 PyTorch 训练循环抽象。\n*   **Algorithms:** 轻松集成加速算法（如示例中的 `LabelSmoothing`）。\n*   **Time Abstractions:** 支持使用自定义单位（如 `2ep` 表示 2 个 epoch）指定训练时长。\n\n更多详细信息请参考 [官方文档](https:\u002F\u002Fdocs.mosaicml.com\u002Fprojects\u002Fcomposer\u002F)。","某 AI 初创公司的算法团队需要在 8 卡 GPU 集群上微调一个 70 亿参数的 Transformer 模型，面对紧迫的业务上线压力，他们急需提升训练效率。\n\n### 没有 composer 时\n- 手动编写分布式训练代码，配置 DDP 或 FSDP 策略耗时且极易出错，工程师深陷工程细节。\n- 显存优化困难，经常遇到 OOM 
错误，需要反复手动调整 batch size 和梯度累积策略。\n- 缺乏统一的训练循环管理，实验配置分散，导致结果复现性差，调试成本极高。\n- 多节点通信效率低，硬件利用率不足 50%，训练周期长达数周，严重拖累迭代速度。\n\n### 使用 composer 后\n- composer 提供标准化的 Trainer 接口，几行代码即可启用复杂的分布式训练策略，开箱即用。\n- 内置多种显存优化算法（如梯度检查点），无需手动调参即可稳定运行大模型，减少报错。\n- 集成最佳实践训练流程，实验配置可版本化管理，让团队协作和结果复现更轻松。\n- 自动优化多节点通信与数据加载，硬件利用率提升至 80% 以上，训练速度显著加快。\n\ncomposer 让大规模模型训练像单机开发一样简单，大幅降低工程门槛并加速业务迭代。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmosaicml_composer_df229855.png","mosaicml","Databricks Mosaic Research","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmosaicml_b4287c5e.png","We remove the barriers to state-of-the-art generative AI model development and make data + AI available to all",null,"DbrxMosaicAI","https:\u002F\u002Fwww.databricks.com\u002Fresearch\u002Fmosaic","https:\u002F\u002Fgithub.com\u002Fmosaicml",[84,88,92],{"name":85,"color":86,"percentage":87},"Python","#3572A5",99.6,{"name":89,"color":90,"percentage":91},"Dockerfile","#384d54",0.4,{"name":93,"color":94,"percentage":95},"Makefile","#427819",0,5472,464,"2026-03-31T12:15:24","Apache-2.0","未说明","需要 CUDA 兼容 GPU（推荐），支持 CPU 运行，AMD RoCM 即将支持",{"notes":103,"python":100,"dependencies":104},"强烈建议使用提供的预构建 Docker 镜像以简化环境设置。支持多种云存储集成（AWS S3, GCP, OCI）。支持 FSDP 和 DDP 分布式训练策略。包含自动恢复和 CUDA OOM 预防功能。",[105,106],"torch","torchvision",[13],[109,110,111,112,113,114,115,116],"deep-learning","pytorch","neural-networks","ml-systems","ml-efficiency","ml-training","machine-learning","neural-network",14,"2026-03-27T02:49:30.150509","2026-04-06T05:36:40.222513",[121,126,130,135,140,145],{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},13,"如何保存大型模型（如 13B+）的检查点以避免 RAM 不足？","建议使用 FSDP 配置来保存分片检查点。配置示例如下：\n```yaml\nfsdp_config:\n  state_dict_type: sharded\n  sharding_strategy: FULL_SHARD\n  mixed_precision: PURE\n  activation_checkpointing: true\n  activation_checkpointing_reentrant: false\n  activation_cpu_offload: false\n  limit_all_gathers: true\n  verbose: false\n```\n此外，使用 Lion optimizer 
也可以在不分片的情况下进行训练。","https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fissues\u002F2715",{"id":117,"question_zh":127,"answer_zh":128,"source_url":129},"Trainer 是否支持多次调用 `.fit()` 方法？","Composer 的惯例是“一次运行 = 一个 Trainer 实例”。如果需要预训练后微调或参数扫描，建议为每次运行创建新的 Trainer。但在交互式开发等场景下，`.fit()` 可选接受 `training_duration` 参数来支持多次调用，允许在运行中途更改 Trainer 属性。","https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fissues\u002F138",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},15,"使用 WandB 日志时，为什么 HF 指标不显示或步数计数错误？","这是 WandB 的内部行为导致的：\n1. 如果手动指定时间戳且小于 WandB 内部步数，指标会被丢弃（日志中会显示 WARNING）。\n2. 如果不指定步数，WandB 可能在每次 log 调用后递增步数，导致指标步数看似错误。\n维护者已通过 PR 修复此问题，建议更新到最新分支或版本。","https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fissues\u002F1473",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},16,"为什么设置 `grad_accum=\"auto\"` 时会出现 CUDA OOM 错误？","这可能是由于 GPU 内存碎片化引起的。建议在 grad accum 调整后执行 cuda cache clear。维护者已合并了 cache clearing 功能来解决自动 grad accum 的问题，显著降低了缓存碎片率。如果更新后仍遇到问题，请尝试手动清理缓存或反馈。","https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fissues\u002F1331",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},17,"评估循环（Evaluation loop）在有或没有 DeepSpeed 的情况下都失败怎么办？","这可能与本地系统的特定 `ninja` 版本有关。这是一个环境依赖问题，尝试修复或更改 `ninja` 版本可能解决此问题。如果问题依旧，请提供可复现的示例以便进一步排查。","https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fissues\u002F1472",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},18,"为什么使用 `add_dataset_transform` 的算法（如 randaugment）未按预期工作？","因为 dataloader workers 在 INIT 事件运行前已创建，算法尝试 monkeypatch 数据集可能无效。建议改为在 `AFTER_DATALOADER` 事件上修改数据，或者重新创建 dataloader。注意，某些边缘情况（如预创建的持久 workers，n workers > 0）可能不支持。","https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fissues\u002F313",[151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241,246],{"id":152,"version":153,"summary_zh":154,"released_at":155},99717,"v0.32.1","## What's Changed\r\n* Removed extraneous usage of `fsdp_config.load_monolith_rank0_only` 
since that's unreliable by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3901\r\n* Fixed automicrobatching issue for FSDP1 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3909\r\n* reverted mlflow upgrade due to slowdowns by @ethantang-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3910\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.32.0...v0.32.1","2025-07-26T00:26:23",{"id":157,"version":158,"summary_zh":159,"released_at":160},99718,"v0.32.0","## What's Changed\r\n* Update FSDP checkpointing test to use UC Volumes and updated dockerfile for new composer version by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3865\r\n* Using cu128 instead of cu126 for pr-gpu and daily tests by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3867\r\n* Refactored auto-microbatching hook handles for FSDP by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3843\r\n* Removed most s3 bucket based tests (replaced with UC Volumes) by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3869\r\n* Supporting Mixed Init on FSDP2 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3872\r\n* Documentation Improvements: Clarify Explanations in Gated Linear Units and Squeeze-Excite README Files by @leopardracer in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3875\r\n* Mlflow move to cpu by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3878\r\n* FSDP2 mixed init fixes by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3882\r\n* Remove sklearn dep by @dakinggg in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3883\r\n* Monolithic checkpointing by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3876\r\n* Update docs conf.py copyright to 2025 by @jacobfulano in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3751\r\n* Fix Typos in Comments for activation_monitor.py and mlperf.py by @kilavvy in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3877\r\n* Mixed Precision for FSDP2 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3884\r\n* Add h200 to flops dict by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3889\r\n* updated fsdp2 config by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3896\r\n* Supporting peft for FSDP2 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3897\r\n\r\n## New Contributors\r\n* @leopardracer made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3875\r\n* @kilavvy made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3877\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.31.0...v0.32.0","2025-07-15T21:58:06",{"id":162,"version":163,"summary_zh":164,"released_at":165},99719,"v0.31.0","## What's New\r\n#### 1. PyTorch 2.7.0 Compatibility (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3850)\r\n\r\nWe've added support for PyTorch 2.7.0 and created a Dockerfile to support PyTorch 2.7.0 + CUDA 12.8. The current Composer image supports PyTorch 2.7.0 + CUDA 12.6.3.\r\n\r\n#### 2. 
Experimental FSDP2 support has been added to `Trainer` (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3852)\r\n\r\nExperimental FSDP2 support was added to `Trainer` with:\r\n- `auto_wrap` based on `_fsdp_wrap_fn` and\u002For `_fsdp_wrap` attributes within the model (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3826)\r\n- Activation checkpointing and CPU offloading (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3832)\r\n- Meta initialization (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3852)\r\n\r\nNote: Not all features are supported yet (e.g. automicrobatching, monolithic checkpointing)\r\n\r\nUsage:\r\n\r\nAdd `FSDP_VERSION=2` as an environment variable and set your FSDP2 config (`parallelism_config`) as desired. The full set of available attributes can be found [here](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fblob\u002F22b0ae853afb2287ecfc07575b800c4273bdac74\u002Fcomposer\u002Futils\u002Fparallelism.py#L66).\r\n\r\n## Bug Fixes\r\n- Resolve a memory hang issue in Mlflow monitor process (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3830)\r\n\r\n## What's Changed\r\n* Bump Composer 0.31.0.dev0 by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3808\r\n* Update Checkpoint Back-Compatibility Test by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3810\r\n* Extend docker build matrix to add an entry for pytorch2.6+cu126 by @sirejdua-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3805\r\n* Bump databricks-sdk from 0.47.0 to 0.49.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3814\r\n* Bump pypandoc from 1.14 to 1.15 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3813\r\n* Update google-cloud-storage requirement from 
\u003C3.0,>=2.0.0 to >=2.0.0,\u003C4.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3812\r\n* Update setuptools version by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3816\r\n* Kickstart FSDP2 by @bowenyang008 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3806\r\n* Remove network calls to HF in CI by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3817\r\n* Update psutil requirement from \u003C7,>=5.8.0 to >=5.8.0,\u003C8 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3818\r\n* [FSDP2] Init FSDP2 based checkpointing by @bowenyang008 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3824\r\n* Update torchmetrics requirement from \u003C1.6.1,>=1.0 to >=1.0,\u003C1.7.2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3829\r\n* Bump coverage[toml] from 7.6.8 to 7.8.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3827\r\n* Bump yamllint from 1.35.1 to 1.37.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3820\r\n* Update numpy requirement from \u003C2.2.0,>=1.21.5 to >=1.21.5,\u003C2.3.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3828\r\n* Update optimizer params for fsdp2 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3822\r\n* Change Mlflow monitor process from fork to spawn to reduce memory usage by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3830\r\n* Ignore mlflow warning in test by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3831\r\n* Bump HF hub version by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3839\r\n* 
Bump databricks-sdk from 0.49.0 to 0.50.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3834\r\n* Update transformers requirement from !=4.34.0,\u003C4.51,>=4.11 to >=4.11,!=4.34.0,\u003C4.52 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3838\r\n* Eliminate dead code before torch version 2.4   by @bowenyang008 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3833\r\n* Support submodule wrapping for FSDP2 according to model definition (with `_fsdp_wrap` and `fsdp_wrap_fn`) by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3826\r\n* Activation Checkpointing and Offloading for FSDP2 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3832\r\n* Pin EFA installer version by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3842\r\n* Add two legacy torch images to the container build matrix by @asfandyarq in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3841\r\n* Bump yamllint from 1.37.0 to 1.37.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3845\r\n* Update packaging requirement from \u003C24.3,>=21.3.0 to >=21.3.0,\u003C25.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3846\r\n* Bump cryptography from 44.0.0 to 44.0.3 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3848\r\n* Upgrade yapf version by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3840\r\n* Bump ipython from 8.11.0 to 8.36.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3847\r\n* Update huggingface-hub requirement from \u003C0.31,>=0.21.2 to >=0.21.2,\u003C0.32 by @dependabot in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3851\r\n* Update EFA installer version by @dakinggg in https:\u002F\u002Fgit","2025-05-28T17:30:57",{"id":167,"version":168,"summary_zh":169,"released_at":170},99720,"v0.30.0","## What's New\r\n### 1. Python 3.12 Bump (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3783)\r\nWe've added support for Python 3.12 and deprecated Python 3.9 support.\r\n\r\n## What's Changed\r\n* Updated `test_fsdp_load_old_checkpoint` with 0.29.0 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3771\r\n* Mlflow rocm error by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3775\r\n* Update docker to have FA==2.7.4.post1 by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3772\r\n* [GRT-3415] Remove dead code for peft logging by @bowenyang008 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3777\r\n* Patch Mflow .trash directories by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3778\r\n* Remove TE ONNX Export Context to Enable TE FusedAttention on AMD Hardware by @jjuvonen-amd in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3779\r\n* Update Makefile to use WORLD_SIZE by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3781\r\n* Bump gitpython from 3.1.43 to 3.1.44 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3785\r\n* deprecate gcs test by @ethantang-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3791\r\n* Update mosaicml-cli requirement from \u003C0.7,>=0.5.25 to >=0.5.25,\u003C0.8 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3742\r\n* Bump databricks-sdk from 0.44.1 to 0.47.0 by @dependabot in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3786\r\n* deprecate ghcr by @KevDevSha in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3790\r\n* Bump transformers by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3793\r\n* Bump Python 3.12 by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3783\r\n* Fix checkpoint loading in Pytorch 2.6.0 for ckpts exported before Pytorch 2.1.0 by @ethantang-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3792\r\n* Update huggingface-hub requirement from \u003C0.27,>=0.21.2 to >=0.21.2,\u003C0.30 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3795\r\n* Update pytest-httpserver requirement from \u003C1.1,>=1.0.4 to >=1.0.4,\u003C1.2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3796\r\n* Update scikit-learn requirement from \u003C1.6,>=1.2.0 to >=1.2.0,\u003C1.7 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3799\r\n* Bump Release Ref 0.3.3 by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3804\r\n* Remove huggyllama fixture by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3807\r\n* Fix release docker with 3.10 by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3809\r\n\r\n## New Contributors\r\n* @bowenyang008 made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3777\r\n* @jjuvonen-amd made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3779\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.29.0...v0.30.0","2025-04-04T20:22:00",{"id":172,"version":173,"summary_zh":174,"released_at":175},99721,"v0.29.0","## Deprecations\r\n### 1. `device_transforms` param in `DataSpec` has been deprecated (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3770)\r\n\r\nComposer no longer supports the `device_transforms` parameter in `DataSpec`. Instead, `DataSpec` supports `batch_transforms` for batch level transformations on CPU and `microbatch_transforms` for micro-batch level transformations on target device.\r\n\r\n## What's Changed\r\n* Add checkpoint BC tests for 0.27.0 and 0.28.0 by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3735\r\n* Address sklearn device issues by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3748\r\n* Update FAQ with hf-transfer info by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3745\r\n* Fix MLFlow logger CI error by ignoring UserWarning by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3758\r\n* Bump ci to v0.3.3 by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3759\r\n* Fix order of arguments to `loss` by @gsganden in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3754\r\n* fix: make JSONTraceHandler.batch_end robust to \u002Ftmp\u002F being on diff mount to dest by @thundergolfer in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3766\r\n* Bump pytorch to 2.6.0 by @rithwik-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3763\r\n* Bump databricks-sdk from 0.38.0 to 0.44.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3765\r\n* Version bump to v0.30.0.dev0 by @rithwik-db in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3770\r\n\r\n## New Contributors\r\n* @gsganden made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3754\r\n* @thundergolfer made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3766\r\n* @rithwik-db made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3763\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.28.0...v0.29.0","2025-02-25T00:24:25",{"id":177,"version":178,"summary_zh":179,"released_at":180},99722,"v0.28.0","## Deprecations\r\n### 1. Deepspeed Deprecation (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3732)\r\nComposer no longer supports the Deepspeed deep learning library. Support has shifted to PyTorch-native solutions such as FSDP and DDP only. Please use Composer v0.27.0 or before to continue using Deepspeed!\r\n\r\n## What's Changed\r\n* Fix composer gpu daily test to use torch 2.5.1 by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3712\r\n* Bump coverage[toml] from 7.6.4 to 7.6.7 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3713\r\n* Update torchmetrics requirement from \u003C1.5.3,>=1.0 to >=1.0,\u003C1.6.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3714\r\n* Bump ubuntu 22.04 + fix CI mlflow tests by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3716\r\n* Bump databricks-sdk from 0.36.0 to 0.37.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3715\r\n* Bump mosaicml\u002Fpytorch images to use new mosaicml\u002Fpytorch images with updated ubuntu 22.04 by @KuuCi in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3718\r\n* migrated all possible assets from GCP to repo by @ethantang-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3717\r\n* Bump databricks-sdk from 0.37.0 to 0.38.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3720\r\n* Bump coverage[toml] from 7.6.7 to 7.6.8 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3721\r\n* Expose `DistributedSampler` RNG seed argument by @janEbert in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3724\r\n* Fix netifaces install in Dockerfile by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3726\r\n* Update protobuf requirement from \u003C5.29 to \u003C5.30 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3728\r\n* Bump cryptography from 43.0.3 to 44.0.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3731\r\n* Speed up CI tests :) by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3727\r\n* Remove deepspeed completely by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3732\r\n* Fix daily test failures by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3733\r\n* Version bump to v0.29.0.dev0 by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3734\r\n\r\n## New Contributors\r\n* @janEbert made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3724\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.27.0...v0.28.0","2024-12-04T15:51:08",{"id":182,"version":183,"summary_zh":184,"released_at":185},99723,"v0.27.0","## What's New\r\n### 1. 
Torch 2.5.1 Compatibility (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3701)\r\nWe've added support for torch 2.5.1, including checkpointing bug fixes from PyTorch.\r\n\r\n### 2. Add batch\u002Fmicrobatch transforms (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3703)\r\nSped up device transformations by doing batch transforms on CPU and microbatch transforms on GPU.\r\n\r\n## Deprecations and Breaking Changes\r\n### 1. MLFlow Metrics Deduplication (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3678)\r\nWe added a metric de-duplication feature for the MLflow logger in Composer. Metrics that remain unchanged since the last step are not logged unless specific conditions are met; by default, a duplicated metric is still logged on every 100th duplicated step. This optimizes logging storage by reducing redundant entries, balancing detailed sampling with efficiency.\r\n\r\nExample:\r\n```\r\nMlflowLogger(..., log_duplicated_metric_every_n_steps=100)\r\n```\r\n\r\n## What's Changed\r\n* Metrics dedup for MLflow logger by @chenmoneygithub in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3678\r\n* Bump databricks-sdk from 0.33.0 to 0.36.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3686\r\n* Update pillow requirement from \u003C11,>=10.3.0 to >=10.3.0,\u003C12 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3684\r\n* Lower min torchmetrics version by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3691\r\n* Private link error handling by @nancyhung in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3689\r\n* Update checkpoint tests to use new version 0.26.0 by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3683\r\n* Bump coverage[toml] from 7.6.3 to 7.6.4 by @dependabot in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3694\r\n* Pin checkpoint state dict flattening patch by @b-chu in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3700\r\n* Torch bump to 2.5.1 by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3701\r\n* Fix typo in trainer doc by @XiaohanZhangCMU in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3702\r\n* Update packaging requirement from \u003C24.2,>=21.3.0 to >=21.3.0,\u003C24.3 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3707\r\n* Update torchmetrics requirement from \u003C1.4.1,>=1.0 to >=1.0,\u003C1.5.3 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3706\r\n* Add batch\u002Fmicrobatch transforms by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3703\r\n* Bump version to 0.28.0.dev0 by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3709\r\n* Add torch 2.5.1 composer tests by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3710\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.26.1...v0.27.0","2024-11-14T19:35:41",{"id":187,"version":188,"summary_zh":189,"released_at":190},99724,"v0.26.1","## What's Changed\r\n* Private link error handling by @nancyhung in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3689\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.26.0...v0.26.1","2024-11-01T06:07:00",{"id":192,"version":193,"summary_zh":194,"released_at":195},99725,"v0.26.0","## What's New\r\n### 1. 
Torch 2.5.0 Compatibility (https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3609)\r\nWe've added support for torch 2.5.0, including necessary patches to Torch.\r\n\r\n## Deprecations and Breaking Changes\r\n### 1. FSDP Configuration Changes (#3681)\r\nWe no longer support passing `fsdp_config` and `fsdp_auto_wrap` directly to `Trainer`. \r\n\r\nIf you'd like to specify an fsdp config and configure fsdp auto wrapping, you should use `parallelism_config`.\r\n\r\n```\r\ntrainer = Trainer(\r\n    parallelism_config = {\r\n        'fsdp': { \r\n            'auto_wrap': True\r\n            ...\r\n        }\r\n    }\r\n)\r\n```\r\n### 2. Removal of PyTorch Legacy Sharded Checkpoint Support (#3631)\r\nPyTorch briefly used a different sharded checkpoint format than the current one, which was quickly deprecated by PyTorch. We have removed support for this format. We initially removed support for saving in this format in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F2262, and the original feature was added in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F1902. 
Please reach out if you have concerns or need help converting your checkpoints to the new format.\r\n\r\n\r\n## What's Changed\r\n* Add backward compatibility checkpoint tests for v0.25.0 by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3635\r\n* Don't use TP when `tensor_parallel_degree` is 1 by @eitanturok in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3636\r\n* Update huggingface-hub requirement from \u003C0.25,>=0.21.2 to >=0.21.2,\u003C0.26 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3637\r\n* Update transformers requirement from !=4.34.0,\u003C4.45,>=4.11 to >=4.11,!=4.34.0,\u003C4.46 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3638\r\n* Bump databricks-sdk from 0.32.0 to 0.33.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3639\r\n* Remove Legacy Checkpointing by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3631\r\n* Surface UC permission error by @b-chu in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3642\r\n* Tensor Parallelism Tests by @eitanturok in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3620\r\n* Switch to log.info for deterministic mode by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3643\r\n* Update pre-commit requirement from \u003C4,>=3.4.0 to >=3.4.0,\u003C5 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3645\r\n* Update peft requirement from \u003C0.13,>=0.10.0 to >=0.10.0,\u003C0.14 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3646\r\n* Create callback to load checkpoint by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3641\r\n* Bump jupyter from 1.0.0 to 1.1.1 by @dependabot in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3595\r\n* Fix DB SDK Import by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3648\r\n* Bump coverage[toml] from 7.6.0 to 7.6.3 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3651\r\n* Bump pypandoc from 1.13 to 1.14 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3652\r\n* Replace list with Sequence by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3654\r\n* Add better error handling for non-rank 0 during Monolithic Checkpoint Loading  by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3647\r\n* Raising a better warning if train or eval did not process any data. by @ethantang-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3656\r\n* Fix Logo  by @XiaohanZhangCMU in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3659\r\n* Update huggingface-hub requirement from \u003C0.26,>=0.21.2 to >=0.21.2,\u003C0.27 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3668\r\n* Bump cryptography from 42.0.8 to 43.0.3 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3667\r\n* Bump pytorch to 2.5.0 by @b-chu in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3663\r\n* Don't overwrite sys.excepthook in mlflow logger by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3675\r\n* Fix pull request target by @b-chu in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3676\r\n* Use a temp path to save local checkpoints for remote save path by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3673\r\n* Loss gen tokens by @dakinggg in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3677\r\n* Refactor `maybe_create_object_store_from_uri` by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3679\r\n* Don't error if some batch slice has no loss generating tokens by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3682\r\n* Bump version to 0.27.0.dev0 by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3681\r\n\r\n## New Contributors\r\n* @ethantang-db made their first contribution in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3656\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.25.0...v0.26.0","2024-10-25T21:36:56",{"id":197,"version":198,"summary_zh":199,"released_at":200},99726,"v0.25.0","## What's New\r\n### 1. Torch 2.4.1 Compatibility (#3609)\r\nWe've added support for torch 2.4.1, including necessary patches to Torch.\r\n\r\n## Deprecations and Breaking Changes\r\n### 1. Microbatch device movement (#3567)\r\nInstead of moving the entire batch to device at once, we now move each microbatch to device. This saves memory for large inputs, e.g. multimodal data, when training with many microbatches.\r\n\r\nThis change may affect certain callbacks which run operations on the batch which require it to be moved to an accelerator ahead of time, such as the two changed in this PR. There shouldn't be too many of these callbacks, so we anticipate this change will be relatively safe.\r\n\r\n### 2. DeepSpeed deprecation version (#3634)\r\nWe have updated the Composer version in which we will remove support for DeepSpeed to 0.27.0. Please reach out on GitHub if you have any concerns about this.\r\n\r\n### 3. PyTorch legacy sharded checkpoint format\r\nPyTorch briefly used a different sharded checkpoint format than the current one, which was quickly deprecated by PyTorch. 
We have continued to support loading legacy format checkpoints for a while, but we will likely be removing support for this format entirely in an upcoming release. We initially removed support for saving in this format in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F2262, and the original feature was added in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F1902. Please reach out if you have concerns or need help converting your checkpoints to the new format.\r\n\r\n## What's Changed\r\n* Set dev version back to 0.25.0.dev0 by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3582\r\n* Microbatch Device Movement by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3567\r\n* Init Dist Default None by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3585\r\n* Explicit None Check in get_device by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3586\r\n* Update protobuf requirement from \u003C5.28 to \u003C5.29 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3591\r\n* Bump databricks-sdk from 0.30.0 to 0.31.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3592\r\n* Update ci-testing to 0.2.2 by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3590\r\n* Bump Mellanox Tools by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3597\r\n* Roll back ci-testing for daillies by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3598\r\n* Revert driver changes by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3599\r\n* Remove step in log_image for MLFlow by @mvpatel2000 in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3601\r\n* Reduce system metrics logging frequency by @chenmoneygithub in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3604\r\n* Bump databricks-sdk from 0.31.1 to 0.32.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3608\r\n* torch2.4.1 by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3609\r\n* Test with torch2.4.1 image by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3610\r\n* fix 2.4.1 test by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3612\r\n* Remove tensor option for _global_exception_occured by @irenedea in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3611\r\n* Update error message for overwrite to be more user friendly by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3619\r\n* Update wandb requirement from \u003C0.18,>=0.13.2 to >=0.13.2,\u003C0.19 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3615\r\n* Fix RNG key checking by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3623\r\n* Update datasets requirement from \u003C3,>=2.4 to >=2.4,\u003C4 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3626\r\n* Disable exceptions for MosaicML Logger by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3627\r\n* Fix CPU dailies by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3628\r\n* fix 2.4.1ckpt by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3629\r\n* More checkpoint debug logs by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3632\r\n* Lower DeepSpeed 
deprecation version by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3634\r\n* Bump version 25 by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3633\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.24.1...v0.25.0","2024-09-24T20:56:05",{"id":202,"version":203,"summary_zh":204,"released_at":205},99727,"v0.24.1","## Bug Fixes\r\n\r\n**1. Disallow passing `device_mesh` to `FSDPConfig` ([#3580](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3580))**\r\n\r\nExplicitly errors if `device_mesh` is passed to `FSDPConfig`. This completes the deprecation from v0.24.0 and also addresses cases where a user specified a device mesh but it was ignored, leading to training with the incorrect parallelism style (e.g., using FSDP instead of HSDP).\r\n\r\n## What's Changed\r\n* Bump main version to 0.25.0.dev0 by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3573\r\n* update daily by @KevDevSha in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3572\r\n* Bump pandoc from 2.3 to 2.4 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3575\r\n* Update transformers requirement from !=4.34.0,\u003C4.44,>=4.11 to >=4.11,!=4.34.0,\u003C4.45 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3574\r\n* Checkpoint backwards compatibility tests for v0.24.0 by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3579\r\n* Error if device mesh specified in fsdp config by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3580\r\n* Bump version to 0.24.1. 
by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3581\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.24.0...v0.24.1","2024-08-27T22:37:35",{"id":207,"version":208,"summary_zh":209,"released_at":210},99728,"v0.24.0","## What's New\r\n### 1. Torch 2.4 Compatibility ([#3542](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3542), [#3549](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3549), [#3553](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3553), [#3552](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3552), [#3565](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3565))\r\nComposer now supports Torch 2.4! We are tracking a few issues with the latest PyTorch we have raised with the PyTorch team related to checkpointing:\r\n- \\[[PyTorch Issue](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\u002Fissues\u002F133415)\\] Distributed checkpointing using PyTorch DCP has issues with stateless optimizers, e.g. SGD. We recommend using `composer.optim.DecoupledSGDW` as a workaround.\r\n- \\[[PyTorch Issue](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\u002Fissues\u002F133923)\\] Distributed checkpointing using PyTorch DCP broke backwards compatibility. We have patched this using the following [planner](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3565), but this may break custom planner loading.\r\n\r\n### 2. 
New checkpointing APIs ([#3447](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3447), [#3474](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3474), [#3488](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3488), [#3452](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3452))\r\nWe've added new checkpointing APIs to download, upload, and load \u002F save, so that checkpointing is usable outside of a `Trainer` object. We will be fully migrating to these new APIs in the next minor release.\r\n\r\n### 3. Improved Auto-microbatching ([#3510](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3510), [#3522](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3522))\r\nWe've fixed deadlocks in auto-microbatching with FSDP, bringing throughput in line with manually setting the microbatch size. This is achieved through enabling sync hooks wherever a training run might OOM to find the correct microbatch size, and disabling these hooks for the rest of training.\r\n\r\n\r\n## Bug Fixes\r\n### 1. Fix checkpoint symlink uploads ([#3376](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3376))\r\nEnsures that checkpoint files are uploaded before the symlink file, fixing errors with missing or incomplete checkpoints.\r\n\r\n### 2. 
Optimizer tracks same parameters after FSDP wrapping ([#3502](https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3502))\r\nWhen only a subset of parameters should be tracked by the optimizer, FSDP wrapping will now not interfere.\r\n\r\n## What's Changed\r\n* Bump ipykernel from 6.29.2 to 6.29.5 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3459\r\n* Update torchmetrics requirement from \u003C1.3.3,>=0.10.0 to >=1.4.0.post0,\u003C1.4.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3460\r\n* [Checkpoint] Fix symlink issue where symlink file uploaded before checkpoint files upload by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3376\r\n* Bump databricks-sdk from 0.28.0 to 0.29.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3456\r\n* Remove Log Exception by @jjanezhang in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3464\r\n* Corrected docs for MFU in SpeedMonitor by @JackZ-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3469\r\n* [checkpoint v2] Download api by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3447\r\n* Upload api by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3474\r\n* [Checkpoint V2] Upload API by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3488\r\n* Load api by @eracah in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3452\r\n* Add helpful comment explaining HSDP initialization seeding by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3470\r\n* Add fit start to mosaicmllogger by @ethanma-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3467\r\n* Remove OOM-Driven FSDP Deadlocks and Increase 
Throughput of Automicrobatching by @JackZ-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3510\r\n* Move hooks and fsdp modules onto state rather than trainer by @JackZ-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3522\r\n* Bump coverage[toml] from 7.5.4 to 7.6.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3471\r\n* revert a wip PR by @bigning in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3475\r\n* Change FP8 Eval to default to activation dtype by @j316chuck in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3454\r\n* Get a shared file system safe signal file name by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3485\r\n* Bumping flash attention version to v2.6.2 by @ShashankMosaicML in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3489\r\n* Bump to Pytorch 2.4  by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3542\r\n* Add Torch 2.4 Tests by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3549\r\n* Fix torch 2.4 images for tests by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3553\r\n* Fix torch 2.4 tests by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3552\r\n* Fix bug when subset of model parameters is passed into optimizer with FSDP by @sashaDoubov in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3502\r\n*","2024-08-26T14:48:28",{"id":212,"version":213,"summary_zh":214,"released_at":215},99729,"v0.23.5","## What's New\r\n### 1. Variable length dataloaders (#3416)\r\nAdds support for dataloaders with rank-dependent lengths. The solution terminates iteration for dataloaders on all ranks when the first dataloader finishes.\r\n\r\n## Bug Fixes\r\n### 1. 
Remove close flush for mosaicml logger (#3446)\r\nPreviously, the MosaicML Logger sporadically raised an error when the python interpreter was shutting down as it attempted to flush data on `Event.CLOSE` using futures, which cannot be scheduled at that time. Instead, we now only block on finishing existing data upload on `Event.CLOSE`, avoiding scheduling new futures.\r\n\r\n## What's Changed\r\n* Update numpy requirement from \u003C1.27.0,>=1.21.5 to >=1.21.5,\u003C2.1.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3406\r\n* Restore dev version by @karan6181 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3417\r\n* Save checkpoint to disk for API with new save layout by @eracah in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3399\r\n* Patch PyTorch 2.3.1 by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3419\r\n* Fixes some typing issues by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3418\r\n* Fix style by @b-chu in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3420\r\n* Bump coverage[toml] from 7.5.3 to 7.5.4 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3422\r\n* Update psutil requirement from \u003C6,>=5.8.0 to >=5.8.0,\u003C7 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3424\r\n* Add support for variable length dataloaders in DDP by @JAEarly in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3416\r\n* Hsdp + MoE CI tests by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3378\r\n* Bumping MLflow version to 2.14.1 by @JackZ-db in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3425\r\n* Skip HSDP + TP pytests that require torch 2.3 or above by @KuuCi in 
https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3426\r\n* Remove CodeQL workflow by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3429\r\n* Remove save overwrite by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3431\r\n* Fixes to TP Docs by @snarayan21 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3430\r\n* Lower the system metrics logging frequency to reduce MLflow server's load by @chenmoneygithub in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3436\r\n* Update paramiko requirement from \u003C3,>=2.11.0 to >=3.4.0,\u003C4 by @dependabot in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3439\r\n* Bump CI testing version by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3433\r\n* Fix docstring for EVAL_AFTER_ALL\u002FEVAL_BEFORE_ALL by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3445\r\n* Remove close flush for mosaicml logger by @mvpatel2000 in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3446\r\n* Remove MosaicMLLambdaEvalClient by @aspfohl in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3432\r\n* Relax hf hub pin by @dakinggg in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3435\r\n* Pytest skip 2 by @KuuCi in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3448\r\n* bump version v0.23.5 by @XiaohanZhangCMU in https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fpull\u002F3450\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmosaicml\u002Fcomposer\u002Fcompare\u002Fv0.23.4...v0.23.5","2024-07-03T02:08:01",{"id":217,"version":218,"summary_zh":219,"released_at":220},99730,"v0.23.4","## Bug Fixes\r\n\r\n**1. 
Patch PyTorch 2.3.1 (https://github.com/mosaicml/composer/pull/3419)**

Fixes missing import when monkeypatching device mesh functions in PyTorch 2.3.1. This is necessary for MoE training.

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.23.3...v0.23.4

Released: 2024-06-21

# v0.23.3

## New Features

### 1. Update mlflow logger to use the new API with time-dimension to view images in MLFlow (#3286)

We've enhanced the MLflow logger's `log_image` function to use the new API with time-dimension support, enabling images to be viewed in MLflow.

### 2. Add logging buffer time to MLFlow logger (#3401)

We've added the `logging_buffer_seconds` argument to the MLflow logger, which specifies how many seconds to buffer before sending logs to the MLflow tracking server.

## Bug Fixes

### 1. Only require `databricks-sdk` when on Databricks platform (#3389)

Previously, MLFlow always imported the databricks-sdk. Now, we only require the sdk if on the databricks platform and using databricks secrets to access managed MLFlow.

### 2. Skip extra dataset state load during job resumption (#3393)

Previously, when loading a checkpoint with `train_dataloader`, the `dataset_state` would load first, and if `train_dataloader` was set again afterward, `load_state_dict` would be called with a `None` value. Now, we've added a check in the `train_dataloader` setter to skip this redundant load.

### 3. Fix auto-microbatching on CUDA 12.4 (#3400)

In CUDA 12.4, the out-of-memory error message has changed to `CUDA error: out of memory`. Previously, our logic hardcoded checks for `CUDA out of memory` when using `device_train_microbatch_size="auto"`. Now, we check for both `CUDA out of memory` and `CUDA error: out of memory`.

### 4.
Fix MLflow logging to Databricks workspace file paths that start with the `/Shared/` prefix (#3410)

Previously, for MLflow logging, we prepended the path `/Users/` to all user-provided logging paths on the Databricks platform, if not specified, including paths starting with `/Shared/`. This was incorrect, since `/Shared/` indicates a shared workspace. Now, the `/Users/` prepend is skipped for paths starting with `/Shared/`.

## What's Changed
* Bump CI from 0.0.7 to 0.0.8 by @KuuCi in https://github.com/mosaicml/composer/pull/3383
* Fix backward compatibility caused by missing eval metrics class by @bigning in https://github.com/mosaicml/composer/pull/3385
* Bump version v0.23.2 by @bigning in https://github.com/mosaicml/composer/pull/3386
* Restore dev version by @bigning in https://github.com/mosaicml/composer/pull/3388
* Only requires `databricks-sdk` when inside the Databricks platform by @antoinebrl in https://github.com/mosaicml/composer/pull/3389
* Update packaging requirement from <24.1,>=21.3.0 to >=21.3.0,<24.2 by @dependabot in https://github.com/mosaicml/composer/pull/3392
* Bump cryptography from 42.0.6 to 42.0.8 by @dependabot in https://github.com/mosaicml/composer/pull/3391
* Skip extra dataset state load by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3393
* Remove FSDP restriction from PyTorch 1.13 by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3395
* Check for 'CUDA error: out of memory' when auto-microbatching by @JAEarly in https://github.com/mosaicml/composer/pull/3400
* Add tokens to iterations by @b-chu in https://github.com/mosaicml/composer/pull/3374
* Busy wait utils in dist by @dakinggg in https://github.com/mosaicml/composer/pull/3396
* Add buffering time to mlflow logger by @chenmoneygithub in https://github.com/mosaicml/composer/pull/3401
* Add missing import for PyTorch 2.3.1 device mesh slicing by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3402
* Add pynvml to mlflow dep group by @dakinggg in https://github.com/mosaicml/composer/pull/3404
* min/max flagging added to system_metrics_monitor with only non-redundant, necessary gpu metrics logged by @JackZ-db in https://github.com/mosaicml/composer/pull/3373
* Simplify launcher world size parsing by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3398
* Optionally use `flash-attn`'s CE loss for metrics by @snarayan21 in https://github.com/mosaicml/composer/pull/3394
* log image fix by @jessechancy in https://github.com/mosaicml/composer/pull/3286
* [ckpt-rewr] Save state dict API by @eracah in https://github.com/mosaicml/composer/pull/3372
* Revert "Optionally use `flash-attn`'s CE loss for metrics (#3394)" by @snarayan21 in https://github.com/mosaicml/composer/pull/3408
* CPU tests image fix by @snarayan21 in https://github.com/mosaicml/composer/pull/3409
* Add setter for epoch in iteration by @b-chu in https://github.com/mosaicml/composer/pull/3407
* Move pillow dep as required by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3412
* fixing mlflow logging to Databricks workspace file paths with /Shared/
prefix by @JackZ-db in https://github.com/mosaicml/composer/pull/3410
* Bump version v0.23.3 by @karan6181 in https://github.com/mosaicml/composer/pull/3414

## New Contributors
* @JackZ-db made their first contribution in https://github.com/mosaicml/composer/pull/3373

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.23.2...v0.23.3

Released: 2024-06-21

# v0.23.2

## Bug Fixes
* Fix backward compatibility issue caused by missing eval metrics class

## What's Changed
* Fix backward compatibility issue caused by missing eval metrics class by @bigning in https://github.com/mosaicml/composer/pull/3385

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.23.1...release/v0.23.2

Released: 2024-06-08

# v0.23.1

## What's New

**1. PyTorch 2.3.1 Upgrade**

Composer now supports PyTorch 2.3.1.
## What's Changed
* Torch 2.3.1 Upgrade by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3367
* Fix monkeypatch imports by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3375
* Remove unnecessary state dict and load_state_dict functions by @eracah in https://github.com/mosaicml/composer/pull/3361
* Adding checkpoint backwards compatibility tests after 0.23.0 release by @bigning in https://github.com/mosaicml/composer/pull/3377
* prepare_fsdp_module documentation fix by @KuuCi in https://github.com/mosaicml/composer/pull/3379
* Composer version bump to v0.23.1 by @snarayan21 in https://github.com/mosaicml/composer/pull/3380
* Clear caplog and use as context manager in test_logging by @snarayan21 in https://github.com/mosaicml/composer/pull/3382

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.23.0...v0.23.1

Released: 2024-06-07

# v0.23.0

## What's New

**1. Parallelism V2 + Tensor Parallel (#3335)**

Composer now supports PyTorch's implementation of [tensor parallelism](https://pytorch.org/docs/stable/distributed.tensor.parallel.html). As part of this, we've revamped and simplified how Composer does distributed training.
Previously, Composer accepted a `fsdp_config` attribute in the Trainer:
```python
trainer = Trainer(model, fsdp_config={'sharding_strategy': 'FULL_SHARD'})
```
As we generalize to more forms of parallelism, we've deprecated `fsdp_config` in favor of `parallelism_config`:
```python
trainer = Trainer(
    model=model,
    ...
    parallelism_config={
        'fsdp': {
            'sharding_strategy': 'FULL_SHARD',
            'data_parallel_shard_degree': 2,      # Size of shard dimension
            'data_parallel_replicate_degree': 2,  # Size of replicate dimension
        },
        'tp_config': {
            'tensor_parallel_degree': 2,          # Size of TP dimension
            'layer_plan': ...  # describes how to TP layers
        }
    }
)
```
As part of this change, we now default to using DTensor for parallelism with PyTorch FSDP. PyTorch has deprecated ShardedTensor, so this migrates to the new backend, which avoids various checkpointing bugs.

See the [docs](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#tensor-parallel-tp) for tensor parallel for more information. Note that tensor parallel is still experimental and may be subject to API breaking changes. All checkpointing features may also not work with this parallelism.

**2.
MLflow API Simplification**

Previously, the MLflow logger required a tracking URI and an absolute user path when using MLflow with Databricks:
```python
mlflow_logger = MLFlowLogger(
    tracking_uri='databricks',
    experiment_name='/Users/xxx.yyy@zzz.com/my-first-project/'
)

trainer = Trainer(
    model=model,
    ...
    loggers=mlflow_logger,
)
```
Now, if you are using Databricks secrets as an environment variable, Composer will autopopulate `tracking_uri` and the `experiment_name` prefix:
```python
trainer = Trainer(
    model=model,
    ...
    loggers=MLFlowLogger(experiment_name='my-first-project'),
)
```

**3. Wallclock Save Interval**

Composer now supports setting a save interval in wallclock time:
```python
trainer = Trainer(
    model=model,
    ...
    save_interval='30m',
)
```
Note that most durations, such as `max_duration`, do not accept wallclock time, and the initial version of this feature is limited to a subset of time fields like `save_interval`.

## Bug Fixes
* Don't close the engine if it's already closed in https://github.com/mosaicml/composer/pull/3143
* Fix HF tests with Pin in https://github.com/mosaicml/composer/pull/3248
* Fix backwards compatibility tests in https://github.com/mosaicml/composer/pull/3252
* Fix unexpected remote checkpointing downloading in https://github.com/mosaicml/composer/pull/3271
* Fix HSDP with ShardDegree < 8 in https://github.com/mosaicml/composer/pull/3313

## What's Changed
* Remove CPU offload for DDP/single-gpu by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3242
* Adding more checkpoint backwards compatability tests by @snarayan21 in
https://github.com/mosaicml/composer/pull/3244
* Don't close the engine if its already closed by @dakinggg in https://github.com/mosaicml/composer/pull/3143
* Replace `evaluator.dataloader.device_eval_batch_size` with `evaluator.device_eval_microbatch_size` by @ShashankMosaicML in https://github.com/mosaicml/composer/pull/3247
* Fix HF tests with Pin by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3248
* Remove ICL metrics by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3243
* Add offset and length arguments for checkpoint validation functions by @irenedea in https://github.com/mosaicml/composer/pull/3246
* Fix backwards compatibility tests, raise error for torch version mismatch by @snarayan21 in https://github.com/mosaicml/composer/pull/3252
* Bump cryptography from 41.0.5 to 42.0.6 by @dependabot in https://github.com/mosaicml/composer/pull/3256
* Bump databricks-sdk from 0.25.1 to 0.27.0 by @dependabot in https://github.com/mosaicml/composer/pull/3257
* Improve GCS Object Store by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3251
* add retry to gcs.upload_file by @bigning in https://github.com/mosaicml/composer/pull/3232
* Add unit test support for full state dict + load_weights_only and save_weights_only by @eracah in https://github.com/mosaicml/composer/pull/3260
* will/bump_aws_ofi_nccl by @willgleich in https://github.com/mosaicml/composer/pull/3253
* Fix daily GCS tests by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3268
* Fix: SAM not working with FSDP/DeepSpeed and LR scheduler. by @Joqsan in https://github.com/mosaicml/composer/pull/3259
* Add upload timeout patch to mlflow on azure by @dakinggg in htt

Released: 2024-06-05

# v0.22.0

## What's New

### 🔥 Support for PyTorch v2.3.0

Composer now supports the recently-released PyTorch [version 2.3.0](https://dev-discuss.pytorch.org/t/pytorch-2-3-0-general-availability/2033)! Please raise any issues with us so we can address them.

## Bug Fixes
* Fixing checks for device microbatch size for sequence parallelism in [#3200](https://github.com/mosaicml/composer/pull/3200)
* Fixing token logging in [#3206](https://github.com/mosaicml/composer/pull/3206)
* Search for run name in MLFlowLogger in [#3215](https://github.com/mosaicml/composer/pull/3215)
* Fix FQN names with activation checkpointing in [#3210](https://github.com/mosaicml/composer/pull/3210)
* Strict weight matching for checkpoint loading in [#3219](https://github.com/mosaicml/composer/pull/3219)

## What's Changed
* Bump transformers by @dakinggg in https://github.com/mosaicml/composer/pull/3197
* Add deprecation warnings for ICL datasets/helper functions/metrics by @bmosaicml in https://github.com/mosaicml/composer/pull/3125
* Bump traitlets from 5.14.2 to 5.14.3 by @dependabot in https://github.com/mosaicml/composer/pull/3204
* Raise LR schedule warnings only when necessary by @snarayan21 in https://github.com/mosaicml/composer/pull/3207
* Add torch 2.3 support by @mvpatel2000 in
https://github.com/mosaicml/composer/pull/3209
* Add torch 2.3 CI/CD by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3211
* Fix daily test images by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3212
* Try FAv2 2.5.7 from source by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3213
* Update tests by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3217
* Fix torch 2.3 GPU tests by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3218
* Use flash-attn 2.5.8 with no build isolation in docker images by @snarayan21 in https://github.com/mosaicml/composer/pull/3224
* Add a torch.cuda.empty_cache() in utils.save_checkpoint by @bfontain in https://github.com/mosaicml/composer/pull/3216
* Require 2 steps for GS object store by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3228
* Add `rename_metrics` to Mlflow logger by @hanlint in https://github.com/mosaicml/composer/pull/3225
* Fix daily tests by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3229
* Change precision for daily tests by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3231
* Create new Mlflow run by default and introduce `run_group` by @chenmoneygithub in https://github.com/mosaicml/composer/pull/3208
* Fix daily test pt 4 by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3233
* Deprecate and bump version to 0.22 by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3230
* Fix daily tests v5 by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3234
* Fix daily v6 by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3235
* fix daily tests v7 by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3236
* Raise the daily test timeout by @dakinggg in https://github.com/mosaicml/composer/pull/3241
* Accelerate GPU tests by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3237
* Make sharded checkpoint loading backwards-compatible by @snarayan21 in https://github.com/mosaicml/composer/pull/3240

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.21.3...v0.22.0

Released: 2024-05-01

# v0.21.3

## Bug Fixes

**1. Increased Robustness to Checkpoint Loading**

We've patched several edge cases in loading sharded checkpoints, especially with DTensors, which should decrease memory usage when loading checkpoints.
We've also hardened the retry logic against cloud object store failures, ensuring higher robustness to transient network issues.

## What's Changed
* Raise daily test timeout by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3172
* fix remote file naming by @cli99 in https://github.com/mosaicml/composer/pull/3173
* [fix] DTensor + SHARD_GRAD_OP + use_orig_params by @bigning in https://github.com/mosaicml/composer/pull/3175
* Bump db sdk by @dakinggg in https://github.com/mosaicml/composer/pull/3176
* Build latest pytorch nightly images by @dakinggg in https://github.com/mosaicml/composer/pull/3179
* Add FP8 TransformerEngine activation checkpointing by @cli99 in https://github.com/mosaicml/composer/pull/3156
* Enabling the computation of validation loss and other metrics when using sequence parallelism by @ShashankMosaicML in https://github.com/mosaicml/composer/pull/3183
* Update mosaic_fsdp_utils.py by @vchiley in https://github.com/mosaicml/composer/pull/3185
* Fix the FSDP.optim_state_dict_to_load OOM by @bigning in https://github.com/mosaicml/composer/pull/3184
* Revert "Update mosaic_fsdp_utils.py" by @vchiley in https://github.com/mosaicml/composer/pull/3187
* Bump databricks-sdk from 0.24.0 to 0.25.1 by @dependabot in https://github.com/mosaicml/composer/pull/3190
* Add version tag to local builds by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3188
* Update `NeptuneLogger` by @AleksanderWWW in https://github.com/mosaicml/composer/pull/3165
* Filter neptune warning in doctests by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3195
* Removal of metrics deepcopy before computing the metrics by @gregjauvion in https://github.com/mosaicml/composer/pull/3180
* Fix MLFlow Tag Name for Resumption by @KuuCi in https://github.com/mosaicml/composer/pull/3194
* Fix mistral gating by @dakinggg in https://github.com/mosaicml/composer/pull/3199
* Bump version to 0.21.3 by @mvpatel2000 in https://github.com/mosaicml/composer/pull/3198

## New Contributors
* @gregjauvion made their first contribution in https://github.com/mosaicml/composer/pull/3180

**Full Changelog**: https://github.com/mosaicml/composer/compare/v0.21.2...v0.21.3

Released: 2024-04-19
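The retry hardening described in the v0.21.3 notes can be sketched as a generic exponential-backoff wrapper around a flaky upload. This is a minimal illustration of the pattern, not Composer's actual implementation; `upload_with_retry` and `TransientNetworkError` are hypothetical names.

```python
import time


class TransientNetworkError(Exception):
    """Stand-in for a transient cloud object-store failure."""


def upload_with_retry(upload_fn, max_attempts=3, base_delay=0.01):
    """Call `upload_fn`, retrying with exponential backoff on transient errors.

    Hypothetical sketch of the retry pattern; not Composer's API.
    """
    for attempt in range(max_attempts):
        try:
            return upload_fn()
        except TransientNetworkError:
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...


# Simulate an upload that fails twice before succeeding.
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientNetworkError
    return "ok"

result = upload_with_retry(flaky_upload)  # succeeds on the third attempt
```

Bounding the attempts and re-raising on the final failure keeps transient blips invisible to training while still surfacing genuine outages.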