[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-autogluon--autogluon":3,"tool-autogluon--autogluon":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":67,"owner_name":67,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":78,"languages":79,"stars":95,"forks":96,"last_commit_at":97,"license":98,"difficulty_score":99,"env_os":100,"env_gpu":101,"env_ram":102,"env_deps":103,"category_tags":107,"github_topics":108,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":128,"updated_at":129,"faqs":130,"releases":151},1418,"autogluon\u002Fautogluon","autogluon","Fast and Accurate ML in 3 Lines of Code","AutoGluon 是由 AWS AI 团队开发的开源自动化机器学习框架，旨在帮助用户仅用几行代码即可构建高精度的预测模型。它主要解决了传统机器学习流程中特征工程复杂、模型选择困难以及调参耗时等痛点，将原本繁琐的开发过程简化为“一键式”操作。\n\n无论是处理表格数据、图像识别、文本分析还是时间序列预测，AutoGluon 都能自动执行数据预处理、模型集成、超参数优化及深度学习架构搜索等关键步骤。其独特的技术亮点在于强大的多模型堆叠（Stacking）策略与自适应资源分配机制，能够在有限时间内自动组合多种算法以达到业界领先的预测精度，同时支持在 CPU 和 GPU 环境下高效运行。\n\n这款工具非常适合希望快速验证想法的数据科学家、需要部署高性能模型但缺乏深厚算法背景的开发者，以及寻求降低机器学习门槛的研究人员。即使是对机器学习细节不甚熟悉的普通技术人员，也能通过 AutoGluon 轻松训练并部署端到端的优质模型，显著提升应用开发效率。","\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautogluon_autogluon_readme_a2b14975f1bb.png\" width=\"350\">\n\n## Fast and Accurate ML in 3 Lines of Code\n\n[![Latest 
Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fautogluon\u002Fautogluon)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Freleases)\n[![Conda Forge](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fautogluon.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fautogluon)\n[![Python Versions](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10%20%7C%203.11%20%7C%203.12%20%7C%203.13-blue)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fautogluon\u002F)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautogluon_autogluon_readme_910e23dc06d9.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fautogluon)\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue.svg)](.\u002FLICENSE)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?color=7289da&label=Discord&logo=discord&logoColor=ffffff)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\n[![Continuous Integration](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fcontinuous_integration.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fcontinuous_integration.yml)\n[![Platform Tests](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fplatform_tests-command.yml\u002Fbadge.svg?event=schedule)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fplatform_tests-command.yml)\n\n[Installation](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Finstall.html) | [Documentation](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Findex.html) | [Release 
Notes](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fwhats_new\u002Findex.html)\n\n\u003C\u002Fdiv>\n\nAutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.\n\n## 💾 Installation\n\nAutoGluon is supported on Python 3.10 - 3.13 and is available on Linux, MacOS, and Windows.\n\nYou can install AutoGluon with:\n\n```bash\npip install autogluon\n```\n\nVisit our [Installation Guide](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Finstall.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.\n\n## :zap: Quickstart\n\nBuild accurate end-to-end ML models in just 3 lines of code!\n\n```python\nfrom autogluon.tabular import TabularPredictor\npredictor = TabularPredictor(label=\"class\").fit(\"train.csv\", presets=\"best\")\npredictions = predictor.predict(\"test.csv\")\n```\n\n| AutoGluon Task      | Quickstart | API |\n|:--------------------|:----------:|:---:|\n| TabularPredictor    | [![Quick 
Start](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=&message=tutorial&color=grey)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftabular\u002Ftabular-quick-start.html) |                 [![API](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fapi-reference-blue.svg)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fapi\u002Fautogluon.tabular.TabularPredictor.html)                 |\n| TimeSeriesPredictor | [![Quick Start](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=&message=tutorial&color=grey)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftimeseries\u002Fforecasting-quick-start.html)            | [![API](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fapi-reference-blue.svg)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fapi\u002Fautogluon.timeseries.TimeSeriesPredictor.html) |\n| MultiModalPredictor | [![Quick Start](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=&message=tutorial&color=grey)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fmultimodal_prediction\u002Fmultimodal-quick-start.html)            | [![API](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fapi-reference-blue.svg)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fapi\u002Fautogluon.multimodal.MultiModalPredictor.html) |\n\n## :mag: Resources\n\n### Hands-on Tutorials \u002F Talks\n\nBelow is a curated list of recent tutorials and talks on AutoGluon. 
A comprehensive list is available [here](AWESOME.md#videos--tutorials).\n\n| Title                                                                                                                    | Format   | Location                                                                         | Date       |\n|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|\n| :tv: [Structured Foundation Models Meets AutoML](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2025\u002F46786)                                                                               | Expo Talk       | [ICML 2025](https:\u002F\u002Ficml.cc\u002FConferences\u002F2025)                                                                                      | 2025\u002F07\u002F13  |\n| :tv: [AutoGluon 1.2: Advancing AutoML with Foundational Models and LLM Agents](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2024\u002Fexpo-workshop\u002F100328)                               | Expo Workshop   | [NeurIPS 2024](https:\u002F\u002Fneurips.cc\u002FConferences\u002F2024)                                                                                | 2024\u002F12\u002F10  |\n| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=SwPq9qjaN2Q)                | Tutorial | [AutoML 2024](https:\u002F\u002F2024.automl.cc\u002F)                                           | 2024\u002F09\u002F09 |\n| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https:\u002F\u002F2023.automl.cc\u002F)                                           | 2023\u002F09\u002F12 |\n| :sound: [AutoGluon: The Story](https:\u002F\u002Fautomlpodcast.com\u002Fepisode\u002Fautogluon-the-story)                         
           | Podcast  | [The AutoML Podcast](https:\u002F\u002Fautomlpodcast.com\u002F)                                 | 2023\u002F09\u002F05 |\n| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https:\u002F\u002Fyoutu.be\u002FLwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin                                                                    | 2023\u002F06\u002F20 |\n| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=J1UQUCPB88I)    | Tutorial | PyData Seattle                                                                   | 2023\u002F06\u002F20 |\n| :tv: [The AutoML Revolution](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=VAAITEds-28)                                                | Tutorial | [Fall AutoML School 2022](https:\u002F\u002Fsites.google.com\u002Fview\u002Fautoml-fall-school-2022) | 2022\u002F10\u002F18 |\n\n### Scientific Publications\n- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.06505.pdf) (*Arxiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))\n- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))\n- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https:\u002F\u002Fdatasets-benchmarks-proceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))\n- [XTab: Cross-table Pretraining for Tabular Transformers](https:\u002F\u002Fproceedings.mlr.press\u002Fv202\u002Fzhu23k\u002Fzhu23k.pdf) (*ICML*, 2023)\n- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.05566) 
(*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))\n- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.02971.pdf) (*AutoML Conf*, 2024)\n- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))\n- [Chronos: Learning the Language of Time Series](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07815) (*TMLR*, 2024)\n- [Multi-layer Stack Ensembles for Time Series Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))\n- [Chronos-2: From Univariate to Universal Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))\n- [TabArena: A Living Benchmark for Machine Learning on Tabular Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16791) (*NeurIPS Spotlight*, 2025)\n- [Mitra: Mixed Synthetic Priors for Enhancing Tabular Foundation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.21204) (*NeurIPS*, 2025)\n- [MLZero: A Multi-Agent System for End-to-end Machine Learning Automation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13941) (*NeurIPS*, 2025)\n- [fev-bench: A Realistic Benchmark for Time Series Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26468) (*Arxiv*, 2025)\n\n### Articles\n- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https:\u002F\u002Ftowardsdatascience.com\u002Fautogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)\n- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https:\u002F\u002Faws.amazon.com\u002Fblogs\u002Fopensource\u002Fmachine-learning-with-autogluon-an-open-source-automl-library\u002F) (*AWS Open 
Source Blog*, Mar 2020)\n- [AutoGluon overview & example applications](https:\u002F\u002Ftowardsdatascience.com\u002Fautogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)\n\n### Train\u002FDeploy AutoGluon in the Cloud\n- [AutoGluon Cloud](https:\u002F\u002Fauto.gluon.ai\u002Fcloud\u002Fstable\u002Findex.html) (Recommended)\n- [AutoGluon on Amazon SageMaker](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fcloud_fit_deploy\u002Fcloud-aws-sagemaker-train-deploy.html)\n- [AutoGluon Deep Learning Containers](https:\u002F\u002Fgithub.com\u002Faws\u002Fdeep-learning-containers\u002Fblob\u002Fmaster\u002Favailable_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)\n- [AutoGluon Official Docker Container](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fautogluon\u002Fautogluon)\n\n#### Outdated \u002F Unsupported Cloud Options\n- [AutoGluon on SageMaker AutoPilot](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fcloud_fit_deploy\u002Fautopilot-autogluon.html) (Uses an old AutoGluon 0.4 release)\n- [AutoGluon-Tabular on AWS Marketplace](https:\u002F\u002Faws.amazon.com\u002Fmarketplace\u002Fpp\u002Fprodview-n4zf5pmjt7ism) (Outdated and not maintained by us)\n\n## :pencil: Citing AutoGluon\n\nIf you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).\n\n## :wave: How to get involved\n\nWe are actively accepting code contributions to the AutoGluon project. 
If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md) to get started.\n\n## :classical_building: License\n\nThis library is licensed under the Apache 2.0 License.\n","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautogluon_autogluon_readme_a2b14975f1bb.png\" width=\"350\">\n\n## 仅需三行代码，即可实现快速且精准的机器学习\n\n[![最新版本](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fautogluon\u002Fautogluon)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Freleases)\n[![Conda Forge](https:\u002F\u002Fimg.shields.io\u002Fconda\u002Fvn\u002Fconda-forge\u002Fautogluon.svg)](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fautogluon)\n[![Python 版本](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10%20%7C%203.11%20%7C%203.12%20%7C%203.13-blue)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fautogluon\u002F)\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautogluon_autogluon_readme_910e23dc06d9.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fautogluon)\n[![GitHub 
许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue.svg)](.\u002FLICENSE)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?color=7289da&label=Discord&logo=discord&logoColor=ffffff)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\n[![持续集成](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fcontinuous_integration.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fcontinuous_integration.yml)\n[![平台测试](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fplatform_tests-command.yml\u002Fbadge.svg?event=schedule)](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Factions\u002Fworkflows\u002Fplatform_tests-command.yml)\n\n[安装指南](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Finstall.html) | [文档](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Findex.html) | [版本更新日志](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fwhats_new\u002Findex.html)\n\n\u003C\u002Fdiv>\n\nAutoGluon 由 AWS AI 开发，可自动化执行机器学习任务，助您轻松在各类应用中实现卓越的预测性能。只需几行代码，您便能针对图像、文本、时间序列和表格数据，训练并部署高精度的机器学习与深度学习模型。\n\n## 💾 安装方法\n\nAutoGluon 支持 Python 3.10 至 3.13，并可在 Linux、MacOS 和 Windows 系统上运行。\n\n您可以通过以下方式安装 AutoGluon：\n\n```bash\npip install autogluon\n```\n\n请访问我们的 [安装指南](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Finstall.html) 以获取详细说明，包括 GPU 支持、Conda 安装以及可选依赖项。\n\n## :zap: 快速入门\n\n只需三行代码，即可构建精准的端到端机器学习模型！\n\n```python\nfrom autogluon.tabular import TabularPredictor\npredictor = TabularPredictor(label=\"class\").fit(\"train.csv\", presets=\"best\")\npredictions = predictor.predict(\"test.csv\")\n```\n\n| AutoGluon 任务      |                                                                                快速入门
                                |                                                                                API                                                                                |\n|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|\n| TabularPredictor    | [![快速入门](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=&message=tutorial&color=grey)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftabular\u002Ftabular-quick-start.html) |                 [![API](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fapi-reference-blue.svg)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fapi\u002Fautogluon.tabular.TabularPredictor.html)                 |\n| TimeSeriesPredictor | [![快速入门](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=&message=tutorial&color=grey)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftimeseries\u002Fforecasting-quick-start.html)            | [![API](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fapi-reference-blue.svg)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fapi\u002Fautogluon.timeseries.TimeSeriesPredictor.html) |\n| MultiModalPredictor | [![快速入门](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=&message=tutorial&color=grey)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fmultimodal_prediction\u002Fmultimodal-quick-start.html)            | [![API](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fapi-reference-blue.svg)](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fapi\u002Fautogluon.multimodal.MultiModalPredictor.html) |\n\n## :mag: 资源\n\n### 实践教程\u002F演讲\n\n以下是近期关于 AutoGluon 的精选教程与演讲列表。完整列表请见 
[这里](AWESOME.md#videos--tutorials)。\n\n| 标题                                                                                                                    | 形式   | 地点                                                                         | 日期       |\n|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|\n| :tv: [结构化基础模型与 AutoML 的融合](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2025\u002F46786)                                                                               | Expo 演讲       | [ICML 2025](https:\u002F\u002Ficml.cc\u002FConferences\u002F2025)                                                                                      | 2025\u002F07\u002F13  |\n| :tv: [AutoGluon 1.2：借助基础模型与 LLM 代理推动 AutoML 进步](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2024\u002Fexpo-workshop\u002F100328)                               | Expo 工作坊   | [NeurIPS 2024](https:\u002F\u002Fneurips.cc\u002FConferences\u002F2024)                                                                                | 2024\u002F12\u002F10  |\n| :tv: [AutoGluon：迈向无代码自动化机器学习](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=SwPq9qjaN2Q)                | 教程 | [AutoML 2024](https:\u002F\u002F2024.automl.cc\u002F)                                           | 2024\u002F09\u002F09 |\n| :tv: [AutoGluon 1.0：用零行代码打破 AutoML 的天花板](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5tvp_Ihgnuk) | 教程 | [AutoML 2023](https:\u002F\u002F2023.automl.cc\u002F)                                           | 2023\u002F09\u002F12 |\n| :sound: [AutoGluon：故事](https:\u002F\u002Fautomlpodcast.com\u002Fepisode\u002Fautogluon-the-story)                                    | 播客  | [AutoML 播客](https:\u002F\u002Fautomlpodcast.com\u002F)                                 | 2023\u002F09\u002F05 |\n| :tv: [AutoGluon：面向表格数据、多模态数据及时间序列数据的 
AutoML](https:\u002F\u002Fyoutu.be\u002FLwu15m5mmbs?si=jSaFJDqkTU27C0fa) | 教程 | PyData 柏林                                                                    | 2023\u002F06\u002F20 |\n| :tv: [用几行代码解决复杂的机器学习问题：AutoGluon](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=J1UQUCPB88I)    | 教程 | PyData 西雅图                                                                   | 2023\u002F06\u002F20 |\n| :tv: [AutoML 革命](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=VAAITEds-28)                                                | 教程 | [2022 秋季 AutoML 学校](https:\u002F\u002Fsites.google.com\u002Fview\u002Fautoml-fall-school-2022) | 2022\u002F10\u002F18 |\n\n### 科学论文\n- [AutoGluon-表格数据：面向结构化数据的稳健且精准的 AutoML](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.06505.pdf) (*Arxiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))\n- [通过增强蒸馏实现表格数据的快速、精准且简单模型](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))\n- [以文本字段为基准，对表格数据进行多模态 AutoML 的性能评估](https:\u002F\u002Fdatasets-benchmarks-proceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))\n- [XTab：用于表格 Transformer 的跨表预训练](https:\u002F\u002Fproceedings.mlr.press\u002Fv202\u002Fzhu23k\u002Fzhu23k.pdf) (*ICML*, 2023)\n- [AutoGluon-时间序列：用于概率性时间序列预测的 AutoML](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))\n- [TabRepo：大型表格模型评估库及其 AutoML 应用](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.02971.pdf) (*AutoML Conf*, 2024)\n- [AutoGluon-多模态（AutoMM）：利用基础模型为多模态 AutoML 提供强大支持](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))\n- [Chronos：学习时间序列的语言](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07815) (*TMLR*, 2024)\n- 
[多层堆叠集成方法用于时间序列预测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))\n- [Chronos-2：从单变量预测到通用预测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))\n- [TabArena：面向表格数据的机器学习实时基准](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16791) (*NeurIPS Spotlight*, 2025)\n- [Mitra：用于提升表格基础模型的混合合成先验](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.21204) (*NeurIPS*, 2025)\n- [MLZero：一套用于端到端机器学习自动化的多智能体系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13941) (*NeurIPS*, 2025)\n- [fev-bench：真实时间序列预测的基准测试](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26468) (*Arxiv*, 2025)\n\n### 文章\n- [AutoGluon-时间序列：一库搞定所有时间序列预测模型](https:\u002F\u002Ftowardsdatascience.com\u002Fautogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, 2024年1月)\n- [AutoGluon 用于表格数据：只需三行代码，便能在 Kaggle 竞赛中跻身前1%](https:\u002F\u002Faws.amazon.com\u002Fblogs\u002Fopensource\u002Fmachine-learning-with-autogluon-an-open-source-automl-library\u002F) (*AWS 开源博客*, 2020年3月)\n- [AutoGluon 概述与应用案例](https:\u002F\u002Ftowardsdatascience.com\u002Fautogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, 2019年12月)\n\n### 在云端训练\u002F部署 AutoGluon\n- [AutoGluon 云平台](https:\u002F\u002Fauto.gluon.ai\u002Fcloud\u002Fstable\u002Findex.html)（推荐）\n- [AutoGluon 在 Amazon SageMaker 上](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fcloud_fit_deploy\u002Fcloud-aws-sagemaker-train-deploy.html)\n- [AutoGluon 深度学习容器](https:\u002F\u002Fgithub.com\u002Faws\u002Fdeep-learning-containers\u002Fblob\u002Fmaster\u002Favailable_images.md#autogluon-training-containers)（经安全认证，并由 AutoGluon 开发者维护）\n- [AutoGluon 官方 Docker 容器](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fautogluon\u002Fautogluon)\n\n#### 已过时\u002F不支持的云服务选项\n- [AutoGluon 在 SageMaker 
自动驾驶平台上](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fcloud_fit_deploy\u002Fautopilot-autogluon.html)（使用的是旧版 AutoGluon 0.4 版本）\n- [AutoGluon-表格数据在 AWS 市场中](https:\u002F\u002Faws.amazon.com\u002Fmarketplace\u002Fpp\u002Fprodview-n4zf5pmjt7ism)（已过时，且未由我们维护）\n\n## :pencil: 引用 AutoGluon\n\n如果您在科学出版物中使用了 AutoGluon，请参考我们的 [引用指南](CITING.md)。\n\n## :wave: 如何参与其中？\n\n我们积极欢迎各位为 AutoGluon 项目贡献代码。如果您对参与 AutoGluon 的开发感兴趣，请阅读 [贡献指南](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md)，以便顺利起步。\n\n## :classical_building: 许可证\n\n本库采用 Apache 2.0 许可证授权。","# AutoGluon 快速上手指南\n\nAutoGluon 是由 AWS AI 开发的自动化机器学习（AutoML）库，只需几行代码即可在表格、文本、图像和时间序列数据上训练并部署高精度的机器学习与深度学习模型。\n\n## 环境准备\n\n*   **操作系统**：支持 Linux、macOS 和 Windows。\n*   **Python 版本**：3.10、3.11、3.12 或 3.13。\n*   **硬件建议**：基础功能仅需 CPU；若需加速深度学习模型训练或处理大规模数据，建议使用配备 NVIDIA GPU 的环境。\n\n## 安装步骤\n\n### 1. 基础安装\n使用 pip 进行标准安装：\n\n```bash\npip install autogluon\n```\n\n### 2. 国内加速安装（推荐）\n对于中国开发者，建议使用清华源或阿里源以加快下载速度：\n\n```bash\npip install autogluon -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. 高级安装选项\n*   **GPU 支持**：若需启用 GPU 加速，请参考官方安装指南配置 CUDA 环境。\n*   **Conda 安装**：也可通过 Conda Forge 安装：\n    ```bash\n    conda install -c conda-forge autogluon\n    ```\n\n> 详细安装说明（含可选依赖）请访问：[AutoGluon 安装指南](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Finstall.html)\n\n## 基本使用\n\nAutoGluon 的核心优势在于极简的 API。以下示例展示如何仅用 **3 行代码** 完成表格数据的模型训练与预测。\n\n### 表格数据预测 (TabularPredictor)\n\n假设你有一个名为 `train.csv` 的训练集（包含特征列和目标列 `class`）和一个 `test.csv` 测试集。\n\n```python\nfrom autogluon.tabular import TabularPredictor\n\n# 初始化预测器并自动训练模型，presets=\"best\" 表示追求最高精度\npredictor = TabularPredictor(label=\"class\").fit(\"train.csv\", presets=\"best\")\n\n# 对测试数据进行预测\npredictions = predictor.predict(\"test.csv\")\n```\n\n**代码说明：**\n1.  **导入模块**：引入表格数据预测器。\n2.  **训练 (`fit`)**：指定目标列名 (`label`)，传入训练数据路径。`presets=\"best\"` 会自动启用包括模型集成、超参数优化在内的高级策略以获取最佳效果。\n3.  
**预测 (`predict`)**：传入测试数据路径，直接返回预测结果。\n\n### 其他任务类型\nAutoGluon 同样支持时间序列和多模态任务，使用方式类似：\n*   **时间序列**：使用 `autogluon.timeseries.TimeSeriesPredictor`\n*   **多模态（图像\u002F文本）**：使用 `autogluon.multimodal.MultiModalPredictor`\n\n更多具体任务的教程请参考 [官方文档](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Findex.html)。","某电商公司的数据分析师需要在周末前基于历史订单数据构建一个用户流失预测模型，以支持下周的营销决策。\n\n### 没有 autogluon 时\n- **技术门槛高**：需要手动编写大量代码进行数据清洗、特征工程，并逐一尝试随机森林、XGBoost 等多种算法库。\n- **调参耗时久**：为寻找最优超参数组合，往往需耗费数天时间进行网格搜索或人工经验调整，极易错过业务窗口期。\n- **模型对比难**：难以快速评估不同模型在当前数据集上的表现，容易陷入局部最优，导致最终预测准确率不理想。\n- **部署流程繁**：从实验环境到生产环境的模型序列化与接口封装过程复杂，容易因环境依赖问题导致上线失败。\n\n### 使用 autogluon 后\n- **极简代码启动**：仅需 3 行代码即可自动完成数据预处理、特征生成及多模型训练，无需精通底层算法细节。\n- **自动化高性能**：autogluon 内置的自动堆叠（Stacking）与超参数优化策略，能在短时间内自动产出超越手工调优的 SOTA 模型。\n- **智能模型集成**：自动训练并对比多种机器学习与深度学习模型，智能选择最佳集成方案，显著提升预测准确率。\n- **一键部署应用**：训练好的模型可直接保存并用于批量预测或实时推理，大幅简化了从原型到生产环境的落地路径。\n\nautogluon 将原本需要数周的专业建模工作压缩至小时级，让业务人员也能轻松拥有顶尖的预测能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautogluon_autogluon_a2b14975.png","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fautogluon_c6cf6bc3.png","",null,"https:\u002F\u002Fgithub.com\u002Fautogluon",[80,84,88,92],{"name":81,"color":82,"percentage":83},"Python","#3572A5",99.8,{"name":85,"color":86,"percentage":87},"Shell","#89e051",0.2,{"name":89,"color":90,"percentage":91},"Batchfile","#C1F12E",0,{"name":93,"color":94,"percentage":91},"Dockerfile","#384d54",10198,1133,"2026-04-05T09:11:28","Apache-2.0",1,"Linux, macOS, Windows","未说明（支持 GPU 加速，具体配置需参考安装指南）","未说明",{"notes":104,"python":105,"dependencies":106},"可通过 pip 或 conda 安装。详细安装说明（包括 GPU 支持、Conda 安装及可选依赖）请参阅官方安装指南。支持表格、时间序列、多模态（图像\u002F文本）等任务。","3.10, 3.11, 3.12, 
3.13",[],[51,26,14,13,54],[109,110,111,112,113,114,115,116,117,118,119,120,121,122,67,123,124,125,126,127],"automl","machine-learning","data-science","deep-learning","ensemble-learning","computer-vision","natural-language-processing","structured-data","object-detection","gluon","transfer-learning","pytorch","automated-machine-learning","scikit-learn","tabular-data","hyperparameter-optimization","time-series","forecasting","python","2026-03-27T02:49:30.150509","2026-04-06T05:16:49.512135",[131,136,141,146],{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},6511,"How do I install AutoGluon in a Conda environment?","AutoGluon is now available on the conda-forge channel. Installing with mamba is recommended to resolve dependencies.\n\nFor Linux and macOS:\n```bash\nconda install -n base mamba -c conda-forge\nmamba create -n ag autogluon python -c conda-forge\n```\n\nFor Windows (note: autogluon.multimodal does not yet support Windows; only tabular and timeseries are supported):\n```bash\nconda install -n base mamba -c conda-forge\nmamba create -n ag autogluon.tabular autogluon.timeseries python -c conda-forge\n```","https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fissues\u002F612",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},6512,"Does AutoGluon support Apple M1\u002FM2 Mac chips?","AutoGluon's support for M1\u002FM2 Macs is currently limited. Although PyTorch itself supports M1, the GPU acceleration AutoGluon relies on is typically CUDA-based, and the M1\u002FM2 GPU does not support CUDA, so a full GPU-accelerated experience is not available on these machines. The team has opened an issue to track adding this support, but until it is officially supported, AutoGluon may not run correctly or may be limited in performance.","https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fissues\u002F1242",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},6513,"Text prediction fails with 'RuntimeError: The best config ... 
is not found in config history'. What should I do?","This error is usually related to model configuration or version compatibility. Try explicitly specifying a stronger backbone model, for example changing the default `google_electra_small` to `google_electra_base`.\n\nExample code:\n```python\nfrom autogluon import TextPrediction as task\nimport pandas as pd\n\ndf = pd.read_csv('Train.txt')\n\nhyperparameters = {\n    'models': {\n        'BertForTextPredictionBasic': {\n            'search_space': {\n                'model.backbone.name': 'google_electra_base'\n            }\n        }\n    }\n}\n\npredictor = task.fit(df, feature_columns=['Product_Description', 'Product_Type'], label='Sentiment', verbosity=3, hyperparameters=hyperparameters)\n```","https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fissues\u002F647",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},6514,"How do I resolve a 'Segmentation fault (core dumped)' or a CUDA library loading failure warning when importing TextPredictor?","This is usually caused by incompatible versions of PyTorch, Torchvision, or other deep learning frameworks. Make sure the latest version of AutoGluon is installed, since the current installation guide has fixed these dependency conflicts.\n\nIf installing from source, clone the repository and run the `full_install.sh` script so that all dependency versions match:\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon.git\ncd autogluon\n.\u002Ffull_install.sh\n```\nAlso check whether conflicting MXNet and PyTorch versions are installed in the same environment, and avoid mixing GPU builds of different frameworks in one environment.","https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fissues\u002F1762",[152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247],{"id":153,"version":154,"summary_zh":155,"released_at":156},106095,"v1.5.0","# Version 1.5.0\r\n\r\nWe are happy to announce the AutoGluon 1.5.0 release!\r\n\r\nAutoGluon 1.5.0 introduces new features and major improvements to both tabular and time series modules.\r\n\r\nThis release contains [131 commits from 17 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=7%2F28%2F2025&to=12%2F19%2F2025&type=c)! 
See the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002F1.4.0...1.5.0\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)\r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\nThis release supports Python versions 3.10, 3.11, 3.12 and 3.13. Support for Python 3.13 is currently experimental, and some features might not be available when running Python 3.13 on Windows. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.5.0.\r\n\r\n--------\r\n\r\n## Spotlight\r\n\r\n### Chronos-2\r\n\r\nAutoGluon v1.5 adds support for [Chronos-2](https:\u002F\u002Fhuggingface.co\u002Famazon\u002Fchronos-2), our latest generation of foundation models for time series forecasting. Chronos-2 natively handles all types of dynamic covariates, and performs cross-learning from items in the batch. It produces multi-step quantile forecasts and is designed for strong out-of-the-box performance on new datasets.\r\n\r\nChronos-2 achieves state-of-the-art zero-shot accuracy among public models on major benchmarks such as [fev-bench](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fautogluon\u002Ffev-bench) and [GIFT-Eval](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSalesforce\u002FGIFT-Eval), making it a strong default choice when little or no task-specific training data is available.\r\n\r\nIn AutoGluon, Chronos-2 can be used in **zero-shot mode** or **fine-tuned** on custom data. Both **LoRA fine-tuning** and **full fine-tuning** are supported. 
Chronos-2 integrates into the standard `TimeSeriesPredictor` workflow, making it easy to backtest, compare against classical and deep learning models, and combine with other models in ensembles.\r\n```python\r\nfrom autogluon.timeseries import TimeSeriesPredictor\r\n\r\npredictor = TimeSeriesPredictor(...)\r\npredictor.fit(train_data, presets=\"chronos2\")  # zero-shot mode\r\n```\r\nMore details on zero-shot usage, fine-tuning and ensembling are available in the [updated tutorial](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftimeseries\u002Fforecasting-chronos.html).\r\n\r\n\r\n### Tabular\r\n\r\n**AutoGluon 1.5 Extreme sets a new state-of-the-art on [TabArena](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FTabArena\u002Fleaderboard)**, with a 60 Elo improvement over AutoGluon 1.4 Extreme.\r\nOn average, AutoGluon 1.5 Extreme trains in half the time, has 50% faster inference speed, a 70% win-rate, and 2.8% less relative error compared to AutoGluon 1.4 Extreme. Whereas 1.4 used a mixed portfolio that changed depending on data size, 1.5 uses a single fixed portfolio for all datasets. \r\n\r\nWhile AutoGluon 1.4 Extreme focused on improving performance on small datasets, AutoGluon 1.5 Extreme focuses especially on improving performance on medium to large datasets. For datasets between 10k - 100k training samples, AutoGluon 1.5 has an **82% win-rate** vs AutoGluon 1.4.\r\n\r\nNotable Improvements:\r\n\r\n1. Added [TabDPT model](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.18164), a tabular foundation model pre-trained exclusively on real data.\r\n2. Added TabPrep-LightGBM, a LightGBM model with custom preprocessing logic including target mean encoding and feature crossing.\r\n3. 
Added early stopping logic for the portfolio which stops training early for small datasets to mitigate overfitting and reduce training time.\r\n\r\nAutoGluon 1.5 Extreme uses exclusively open and permissively licensed models, making it suitable for production and commercial use-cases.\r\n\r\nTo use AutoGluon 1.5 Extreme, you will need a GPU, ideally with at least 20 GB of VRAM to ensure stability. Performance gains are primarily on datasets with up to 100k training samples.\r\n\r\n```python\r\n# pip install autogluon.tabular[tabarena]  # \u003C-- Required for TabDPT, TabICL, TabPFN, and Mitra\r\nfrom autogluon.tabular import TabularPredictor\r\npredictor = TabularPredictor(...).fit(train_data, presets=\"extreme\")  # GPU required\r\n```\r\n\r\n\u003Ctable align=\"center\" width=\"1800\">\r\n  \u003Cthead>\r\n    \u003Ctr>\r\n      \u003Cth colspan=\"5\" align=\"center\" style=\"font-size: 1.1em;\">\r\n        TabArena All (51 datasets)\r\n      \u003C\u002Fth>\r\n    \u003C\u002Ftr>\r\n    \u003Ctr>\r\n      \u003Cth align=\"left\"   width=\"470\">Model\u003C\u002Fth>\r\n      \u003Cth align=\"center\" width=\"120\">Elo [⬆️]\u003C\u002Fth>\r\n      \u003Cth align=\"center\" width=\"320\">Improvability (%) [⬇️]\u003C\u002Fth>\r\n      \u003Cth align=\"center\" width=\"430\">Train Time (s\u002F1K) [⬇️]\u003C\u002Fth>\r\n      \u003Cth align=\"center\" width=\"450\">Predict Time (s\u002F1K) [⬇️]\u003C\u002Fth>\r\n    \u003C\u002Ftr>\r\n  \u003C\u002Fthead>\r\n  \u003Ctbody>\r\n    \u003Ctr>\r\n      \u003Ctd>AutoGluon 1.5 (extreme, 4h)\u003C\u002Ftd>\r\n      \u003Ctd align=\"center\">\u003Cb>1736\u003C\u002Fb>\u003C\u002Ftd>\r\n      \u003Ctd align=\"center\">\u003Cb>3.498\u003C\u002Fb>\u003C\u002Ftd>\r\n      \u003Ctd align=\"center\">289.07\u003C\u002Ftd>\r\n      \u003Ctd align=\"center\">4","2025-12-19T19:55:33",{"id":158,"version":159,"summary_zh":160,"released_at":161},106096,"v1.4.0","# Version 1.4.0\r\n\r\nWe are happy to announce the AutoGluon 1.4.0 
release!\r\n\r\nAutoGluon 1.4.0 introduces massive new features and improvements to both tabular and time series modules. In particular, we introduce the `extreme` preset to TabularPredictor, which sets a new state of the art for predictive performance **by a massive margin** on datasets with fewer than 30000 samples. We have also added 5 new tabular model families in this release: RealMLP, TabM, TabPFNv2, TabICL, and Mitra. We also release **MLZero 1.0**, aka AutoGluon-Assistant, an end-to-end automated data science agent that brings AutoGluon from 3 lines of code to 0. For more details, refer to the highlights section below.\r\n\r\nThis release contains [70 commits from 18 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=5%2F21%2F2025&to=7%2F26%2F2025&type=c)! See the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002F1.3.1...1.4.0\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)\r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\nThis release supports Python versions 3.9, 3.10, 3.11, and 3.12. Loading models trained on older versions of AutoGluon is not supported. 
Please re-train models using AutoGluon 1.4.0.\r\n\r\n--------\r\n\r\n## Spotlight\r\n\r\n### AutoGluon Tabular Extreme Preset\r\n\r\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002FInnixma\u002Fautogluon-doc-utils\u002Frefs\u002Fheads\u002Fmain\u002Fdocs\u002Fwhats_new\u002Fv1.4.0\u002FAG14_TabArena.png\" width=\"100%\"\u002F>\r\n\r\nAutoGluon 1.4.0 introduces a new tabular preset, `extreme_quality` aka `extreme`.\r\nAutoGluon's [extreme preset](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftabular\u002Ftabular-essentials.html#presets) is **the largest singular improvement to AutoGluon's predictive performance in the history of the package**, even larger than the improvement seen in AutoGluon 1.0 compared to 0.8.\r\nThis preset achieves an **88% win-rate** vs Autogluon 1.3 `best_quality` for datasets with fewer than 10000 samples, and a 290 Elo improvement overall on [TabArena](https:\u002F\u002Ftabarena.ai) (shown in the figure above).\r\n\r\nTry it out in 3 lines of code:\r\n\r\n```python\r\nfrom autogluon.tabular import TabularPredictor\r\npredictor = TabularPredictor(label=\"class\").fit(\"train.csv\", presets=\"extreme\")\r\npredictions = predictor.predict(\"test.csv\")\r\n```\r\n\r\nThe `extreme` preset leverages a [new model portfolio](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fblob\u002Fmaster\u002Ftabular\u002Fsrc\u002Fautogluon\u002Ftabular\u002Fconfigs\u002Fzeroshot\u002Fzeroshot_portfolio_2025.py), which is an improved version of the `TabArena ensemble` shown in Figure 6a of the [TabArena paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16791). It consists of many new model families added in this release: TabPFNv2, TabICL, Mitra, TabM, as well as tree methods: CatBoost, LightGBM, XGBoost. This preset is not only more accurate, it also requires much less training time. 
AutoGluon's `extreme` preset in 5 minutes is able to outperform `best` run for 4 hours.\r\n\r\nIn order to get the most out of the `extreme` preset, a CUDA-compatible GPU is required, ideally with 32+ GB vRAM.\r\nNote that inference time can be longer than `best`, but with a GPU it is very reasonable. The `extreme` portfolio is only leveraged for datasets with at most 30000 samples. For larger datasets, we continue to use the `best_quality` portfolio. The preset requires downloading foundation model weights for TabPFNv2, TabICL, and Mitra during fit. If you don't have an internet connection, ensure that you pre-download the weights of the models to be able to use them during fit.\r\n\r\nThis preset is considered experimental for this release, and may change without warning in a future release.\r\n\r\n### TabArena and new models: TabPFNv2, TabICL, TabM, RealMLP\r\n\r\n🚨 What is SOTA on tabular data, really? We are excited to introduce [TabArena](https:\u002F\u002Ftabarena.ai), a living benchmark for machine learning on IID tabular data with:\r\n\r\n📊 an online leaderboard accepting submissions  \r\n📑 carefully curated datasets (real, predictive, tabular, IID, permissive license)  \r\n📈 strong tree-based, deep learning, and foundation models  \r\n⚙️ best practices for evaluation (inner CV, outer CV, early stopping)  \r\n\r\nℹ️ **Overview**  \r\nLeaderboard: https:\u002F\u002Ftabarena.ai  \r\nPaper: https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16791  \r\nCode: https:\u002F\u002Ftabarena.ai\u002Fcode  \r\n\r\n💡 **Main insights:**  \r\n➡️ Recent deep learning models, RealMLP and TabM, have marginally overtaken boosted trees with weighted ensembling, although they have slower train+inference times. With defaults or regular tuning, CatBoost takes the #1 spot.  \r\n➡️ Foundation models TabPFNv2 and TabICL are only applicable to a subset of datasets, but perform very strongly on these. 
They have a large inference time and still need tuning\u002Fensembling to get the top spot (for TabPFNv2).  \r\n➡️ The winner does NOT take it all. By using a weighted ensemble of different model types from TabArena,","2025-07-29T05:11:55",{"id":163,"version":164,"summary_zh":165,"released_at":166},106097,"v1.3.1","# Version 1.3.1\r\n\r\nWe are happy to announce the AutoGluon 1.3.1 release!\r\n\r\nAutoGluon 1.3.1 contains several bug fixes and logging improvements for Tabular, TimeSeries, and Multimodal modules.\r\n\r\nThis release contains [9 commits from 5 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=5%2F1%2F2025&to=5%2F20%2F2025&type=c)! See the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002F1.3.0...1.3.1\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)\r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\nThis release supports Python versions 3.9, 3.10, 3.11, and 3.12. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.3.1.\r\n\r\n--------\r\n\r\n## General\r\n- Version update. @tonyhoo  [#5112](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5112)\r\n\r\n--------\r\n\r\n## Tabular\r\n\r\n### Fixes and Improvements\r\n- Fix TabPFN dependency. @fplein  [#5119](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5119)\r\n- Fix incorrect reference to positive_class in TabularPredictor constructor. 
@celestinoxp [#5129](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5129)\r\n\r\n--------\r\n\r\n## TimeSeries\r\n\r\n### Fixes and Improvements\r\n- Fix ensemble weights format for printing. @shchur [#5132](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5132)\r\n- Avoid masking the `scaler` param with the default `target_scaler` value for `DirectTabular` and `RecursiveTabular` models. @shchur  [#5131](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5131)\r\n- Fix `FutureWarning` in leaderboard and evaluate methods. @shchur [#5126](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5126)\r\n\r\n--------\r\n\r\n## Multimodal\r\n\r\n### Fixes and Improvements\r\n- Fix multimodal tutorial issue after 1.3 release @tonyhoo [#5121](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5121)\r\n\r\n--------\r\n\r\n## Documentation and CI\r\n- Add release instructions for pasting whats_new release notes. @Innixma [#5111](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5111)\r\n- Update docker image to use 1.3 release base. 
@tonyhoo [#5130](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5130)\r\n\r\n--------\r\n\r\n## Contributors\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n@shchur @tonyhoo @celestinoxp\r\n\r\n\r\n### New Contributors\r\n- @fplein made their first contribution in [#5119](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F5119)\r\n","2025-05-22T05:57:30",{"id":168,"version":169,"summary_zh":170,"released_at":171},106098,"v1.3.0","# Version 1.3.0\r\n\r\nWe are happy to announce the AutoGluon 1.3.0 release!\r\n\r\nAutoGluon 1.3 focuses on stability & usability improvements, bug fixes, and dependency upgrades.\r\n\r\nThis release contains [144 commits from 20 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=11%2F29%2F2024&to=4%2F30%2F2025&type=c)! See the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002Fv1.2.0...v1.3.0\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)\r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\nLoading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.3.\r\n\r\n--------\r\n\r\n## Highlights\r\n\r\n### AutoGluon-Tabular is the state of the art in the AutoML Benchmark 2025!\r\n\r\nThe [AutoML Benchmark 2025](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.01222), an independent large-scale evaluation of tabular AutoML frameworks, showcases AutoGluon 1.2 as the state of the art AutoML framework! 
Highlights include:\r\n- AutoGluon's rank statistically significantly outperforms all AutoML systems via the Nemenyi post-hoc test across all time constraints.\r\n- AutoGluon with a 5 minute training budget outperforms all other AutoML systems with a 1 hour training budget.\r\n- AutoGluon is pareto efficient in quality and speed across all evaluated presets and time constraints.\r\n- AutoGluon with `presets=\"high\", infer_limit=0.0001` (HQIL in the figures) achieves >10,000 samples\u002Fsecond inference throughput while outperforming all methods.\r\n- AutoGluon is the most stable AutoML system. For \"best\" and \"high\" presets, AutoGluon has 0 failures on all time budgets >5 minutes.\r\n\r\n\u003Cp float=\"left\">\r\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002FInnixma\u002Fautogluon-doc-utils\u002Frefs\u002Fheads\u002Fmain\u002Fdocs\u002Fwhats_new\u002Fv1.3.0\u002Famlb2025_fig3a.png\" width=\"40%\"\u002F>\r\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002FInnixma\u002Fautogluon-doc-utils\u002Frefs\u002Fheads\u002Fmain\u002Fdocs\u002Fwhats_new\u002Fv1.3.0\u002Famlb2025_fig10d.png\" width=\"35%\"\u002F>\r\n\u003C\u002Fp>\r\n\r\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002FInnixma\u002Fautogluon-doc-utils\u002Frefs\u002Fheads\u002Fmain\u002Fdocs\u002Fwhats_new\u002Fv1.3.0\u002Famlb2025_fig1.png\" width=\"100%\"\u002F>\r\n\r\n### AutoGluon Multimodal's \"Bag of Tricks\" Update\r\n\r\nWe are pleased to announce the integration of a comprehensive \"Bag of Tricks\" update for AutoGluon's MultiModal (AutoMM). This significant enhancement substantially improves multimodal AutoML performance when working with combinations of image, text, and tabular data. 
The update implements various strategies including multimodal model fusion techniques, multimodal data augmentation, cross-modal alignment, tabular data serialization, better handling of missing modalities, and an ensemble learner that integrates these techniques for optimal performance.\r\n\r\nUsers can now access these capabilities through a simple parameter when initializing the MultiModalPredictor after following the instruction [here](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fblob\u002F2b90eb0f4a848941d70cd387c2fdec67bc67706d\u002Fmultimodal\u002Fsrc\u002Fautogluon\u002Fmultimodal\u002Flearners\u002Fensemble.py#L306-L322) to download the checkpoints:\r\n```python\r\nfrom autogluon.multimodal import MultiModalPredictor\r\npredictor = MultiModalPredictor(label=\"label\", use_ensemble=True)\r\npredictor.fit(train_data=train_data)\r\n```\r\n\r\nWe express our gratitude to @zhiqiangdon, for this substantial contribution that enhances AutoGluon's capabilities for handling complex multimodal datasets. Here is the corresponding research paper describing the technical details: [Bag of Tricks for Multimodal AutoML with Image, Text, and Tabular Data](https:\u002F\u002Farxiv.org\u002Fhtml\u002F2412.16243v1).\r\n\r\n\r\n## Deprecations and Breaking Changes\r\n\r\nThe following deprecated TabularPredictor methods have been removed in the 1.3.0 release (deprecated in 1.0.0, raise in 1.2.0, removed in 1.3.0). 
Please use the new names:\r\n- `persist_models` -> `persist`, `unpersist_models` -> `unpersist`, `get_model_names` -> `model_names`, `get_model_best` -> `model_best`, `get_pred_from_proba` -> `predict_from_proba`, `get_model_full_dict` -> `model_refit_map`, `get_oof_pred_proba` -> `predict_proba_oof`, `get_oof_pred` -> `predict_oof`, `get_size_disk_per_file` -> `disk_usage_per_file`, `get_size_disk` -> `disk_usage`, `get_model_names_persisted` -> `model_names(persisted=True)`\r\n\r\nThe following logic has been deprecated starting in 1.3.0 and will log a `FutureWarning`. Functionality will be changed in a future release:\r\n\r\n- (**FutureWarning**) `TabularPredictor.delete_models()` will default to `dry_run=False` in a future release (currently `dry_run=True`). Please ensure you explicitly specify `dry_run=True` for the existing logic to remain in future releases. @Innixma (#4905)\r\n\r\n\r\n## General\r\n\r\n\r\n### Improvements\r\n- (**Major**) Internal refactor of `AbstractTrainer` class to improve extensibility and reduce code duplication. @canerturkmen (#4804, #4820, #","2025-05-01T17:09:22",{"id":173,"version":174,"summary_zh":175,"released_at":176},106099,"v1.2.0","# Version 1.2.0\r\n\r\nWe're happy to announce the AutoGluon 1.2.0 release.\r\n\r\nAutoGluon 1.2 contains massive improvements to both Tabular and TimeSeries modules, each achieving a 70% win-rate vs AutoGluon 1.1. This release additionally adds support for Python 3.12 and drops support for Python 3.8.\r\n\r\nThis release contains [186 commits from 19 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=2024-06-15&to=2024-11-29&type=c)! 
See the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002Fv1.1.1...v1.2.0\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)  \r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\nLoading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.2.\r\n\r\nFor Tabular, we encompass the primary enhancements of the new [TabPFNMix tabular foundation model](https:\u002F\u002Fhuggingface.co\u002Fautogluon\u002Ftabpfn-mix-1.0-classifier) and parallel fit strategy into the new `\"experimental_quality\"` preset to ensure a smooth transition period for those who wish to try the new cutting edge features. We will be using this release to gather feedback prior to incorporating these features into the other presets. We also introduce a new stack layer model pruning technique that results in a 3x inference speedup on small datasets with zero performance loss and greatly improved post-hoc calibration across the board, particularly on small datasets.\r\n\r\nFor TimeSeries, we introduce [Chronos-Bolt](https:\u002F\u002Fhuggingface.co\u002Fautogluon\u002Fchronos-bolt-base), our latest foundation model integrated into AutoGluon, with massive improvements to both accuracy and inference speed compared to Chronos, along with fine-tuning capabilities. 
We also added covariate regressor support!\r\n\r\nWe are also excited to announce [AutoGluon-Assistant](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon-assistant\u002F) (AG-A), our first venture into the realm of Automated Data Science.\r\n\r\nSee more details in the Spotlights below!\r\n\r\n## Spotlight\r\n\r\n### AutoGluon Becomes the Golden Standard for Competition ML in 2024\r\n\r\nBefore diving into the new features of 1.2, we would like to start by highlighting the [wide-spread adoption](https:\u002F\u002Fwww.kaggle.com\u002Fsearch?q=autogluon+sortBy%3Adate) AutoGluon has received on competition ML sites like Kaggle in 2024. Across all of 2024, AutoGluon was used to achieve a top 3 finish in 15 out of 18 tabular Kaggle competitions, including 7 first place finishes, and was never outside the top 1% of private leaderboard placements, with an average of over 1000 competing human teams in each competition. In the $75,000 prize money [2024 Kaggle AutoML Grand Prix](https:\u002F\u002Fwww.kaggle.com\u002Fautoml-grand-prix), AutoGluon was used by the 1st, 2nd, and 3rd place teams, with the 2nd place team led by two AutoGluon developers: [Lennart Purucker](https:\u002F\u002Fgithub.com\u002FLennartPurucker) and [Nick Erickson](https:\u002F\u002Fgithub.com\u002FInnixma)! For comparison, in 2023 AutoGluon achieved only 1 first place and 1 second place solution. We attribute the bulk of this increase to the improvements seen in AutoGluon 1.0 and beyond.\r\n\r\n\u003Ccenter>\r\n\u003Cimg src=\"https:\u002F\u002Fautogluon.s3.amazonaws.com\u002Fimages\u002Fautogluon_kaggle_results_2024.png\" width=\"75%\"\u002F>\r\n\u003C\u002Fcenter>\r\n\r\nWe'd like to emphasize that these results are achieved via human expert interaction with AutoGluon and other tools, and often includes manual feature engineering and hyperparameter tuning to get the most out of AutoGluon. 
To see a live tracking of all AutoGluon solution placements on Kaggle, refer to our [AWESOME.md ML competition section](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fblob\u002Fmaster\u002FAWESOME.md#kaggle) where we provide links to all solution write-ups.\r\n\r\n### AutoGluon-Assistant: Automating Data Science with AutoGluon and LLMs\r\n\r\nWe are excited to share the release of a new [AutoGluon-Assistant module](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon-assistant\u002F) (AG-A), powered by LLMs from AWS Bedrock or OpenAI. AutoGluon-Assistant empowers users to solve tabular machine learning problems using only natural language descriptions, in zero lines of code with our simple user interface. Fully autonomous AG-A outperforms 74% of human ML practitioners in Kaggle competitions and secured a live top 10 finish in the $75,000 prize money [2024 Kaggle AutoML Grand Prix](https:\u002F\u002Fwww.kaggle.com\u002Fautoml-grand-prix) competition as Team AGA 🤖!\r\n\r\n### TabularPredictor presets=\"experimental_quality\"\r\n\r\nTabularPredictor has a new `\"experimental_quality\"` preset that offers even better predictive quality than `\"best_quality\"`. On [the AutoMLBenchmark](https:\u002F\u002Fgithub.com\u002Fopenml\u002Fautomlbenchmark), we observe a 70% winrate vs `best_quality` when running for 4 hours on a 64 CPU machine. 
This preset is a testing ground for cutting edge features and models which we hope to incorporate into `best_quality` for","2024-11-27T17:04:12",{"id":178,"version":179,"summary_zh":180,"released_at":181},106100,"v1.1.1","# Version 1.1.1\r\n\r\nWe're happy to announce the AutoGluon 1.1.1 release.\r\n\r\nAutoGluon 1.1.1 contains bug fixes and logging improvements for Tabular, TimeSeries, and Multimodal modules, as well as support for PyTorch 2.2 and 2.3.\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)  \r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\nThis release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.1.1.\r\n\r\nThis release contains **[52 commits from 11 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002Fv1.1.0...v1.1.1)**!\r\n\r\n## General\r\n- Add support for PyTorch 2.2. @prateekdesai04 (#4123)\r\n- Add support for PyTorch 2.3. @suzhoum (#4239, #4256)\r\n- Upgrade GluonTS to 0.15.1. @shchur (#4231)\r\n\r\n## Tabular\r\nNote: Trying to load a TabularPredictor with a FastAI model trained on a previous AutoGluon release will raise an exception when calling `predict` due to a fix in the `model-interals.pkl` path. Please ensure matching versions.\r\n\r\n- Fix deadlock when `num_gpus>0` and dynamic_stacking is enabled. @Innixma (#4208)\r\n- Improve decision threshold calibration. @Innixma (#4136, #4137)\r\n- Improve dynamic stacking logging. @Innixma (#4208, #4262)\r\n- Fix regression metrics (other than RMSE and MSE) being calculated incorrectly for LightGBM early stopping. 
@Innixma (#4174)\r\n- Fix custom multiclass metrics being calculated incorrectly for LightGBM early stopping. @Innixma (#4250)\r\n- Fix HPO crashing with NN_TORCH and FASTAI models. @Innixma (#4232)\r\n- Improve NN_TORCH runtime estimate. @Innixma (#4247)\r\n- Add infer throughput logging. @Innixma (#4200)\r\n- Disable sklearnex for linear models due to observed performance degradation. @Innixma (#4223)\r\n- Improve sklearnex logging verbosity in Kaggle. @Innixma (#4216)\r\n- Rename cached version file to version.txt. @Innixma (#4203)\r\n- Add refit_full support for Linear models. @Innixma (#4222)\r\n- Add AsTypeFeatureGenerator detailed exception logging. @Innixma (#4251, #4252)\r\n\r\n## TimeSeries\r\n- Ensure prediction_length is stored as an integer. @shchur (#4160)\r\n- Fix tabular model preprocessing failure edge-case. @shchur (#4175)\r\n- Fix loading of Tabular models failure if predictor moved to a different directory. @shchur (#4171)\r\n- Fix cached predictions error when predictor saved on-top of an existing predictor. @shchur (#4202)\r\n- Use AutoGluon forks of Chronos models. @shchur (#4198)\r\n- Fix off-by-one bug in Chronos inference. @canerturkmen (#4205)\r\n- Rename cached version file to version.txt. @Innixma (#4203)\r\n- Use correct target and quantile_levels in fallback model for MLForecast. @shchur (#4230)\r\n\r\n## Multimodal\r\n- Fix bug in CLIP's image feature normalization. @Harry-zzh (#4114)\r\n- Fix bug in text augmentation. @Harry-zzh (#4115)\r\n- Modify default fine-tuning tricks. @Harry-zzh (#4166)\r\n- Add PyTorch version warning for object detection. @FANGAreNotGnu (#4217)\r\n\r\n## Docs and CI\r\n- Add competition solutions to `AWESOME.md`. @Innixma @shchur (#4122, #4163, #4245)\r\n- Fix PDF classification tutorial. @zhiqiangdon (#4127)\r\n- Add AutoMM paper citation. @zhiqiangdon (#4154)\r\n- Add pickle load warning in all modules and tutorials. @shchur (#4243)\r\n- Various minor doc and test fixes and improvements. 
@tonyhoo @shchur @lovvge @Innixma @suzhoum (#4113, #4176, #4225, #4233, #4235, #4249, #4266)\r\n\r\n## Contributors\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n@Innixma @shchur @Harry-zzh @suzhoum @zhiqiangdon @lovvge @rey-allan @prateekdesai04 @canerturkmen @FANGAreNotGnu @tonyhoo \r\n\r\n### New Contributors\r\n* @lovvge made their first contribution in https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcommit\u002F57a15fcfbbbc94514ff20ed2774cd447d9f4115f\r\n* @rey-allan made their first contribution in #4145","2024-06-14T20:28:33",{"id":183,"version":184,"summary_zh":185,"released_at":186},106101,"v0.8.3","## What's Changed\r\nv0.8.3 is a patch release to address security vulnerabilities.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002F0.8.2...0.8.3\r\n\r\nThis version supports Python versions 3.8, 3.9, and 3.10.\r\n\r\n# Changes\r\n* `transformers` and other packages version upgrades + some fixes:  @suzhoum (#4155)\r\n","2024-05-02T19:05:27",{"id":188,"version":189,"summary_zh":190,"released_at":191},106102,"v1.1.0","# Version 1.1.0\r\n\r\nWe're happy to announce the AutoGluon 1.1 release.\r\n\r\nAutoGluon 1.1 contains major improvements to the TimeSeries module, achieving a 60% win-rate vs AutoGluon 1.0 through the addition of Chronos, a pretrained model for time series forecasting, along with numerous other enhancements. The other modules have also been enhanced through new features such as Conv-LORA support and improved performance for large tabular datasets between 5 - 30 GB in size. 
For a full breakdown of AutoGluon 1.1 features, please refer to the feature spotlights and the itemized enhancements below.\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)  \r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\nThis release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.1.\r\n\r\nThis release contains **[125 commits from 20 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002Fv1.0.0...v1.1.0)**!\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n@shchur @prateekdesai04 @Innixma @canerturkmen @zhiqiangdon @tonyhoo @AnirudhDagar @Harry-zzh @suzhoum @FANGAreNotGnu @nimasteryang @lostella @dassaswat @afmkt @npepin-hub @mglowacki100 @ddelange @LennartPurucker @taoyang1122 @gradientsky\r\n\r\nSpecial thanks to @ddelange for their continued assistance with Python 3.11 support and Ray version upgrades!\r\n\r\n## Spotlight\r\n\r\n### AutoGluon Achieves Top Placements in ML Competitions!\r\n\r\nAutoGluon has experienced [wide-spread adoption on Kaggle](https:\u002F\u002Fwww.kaggle.com\u002Fsearch?q=autogluon+sortBy%3Adate) since the AutoGluon 1.0 release. 
\r\nAutoGluon has been used in over 130 Kaggle notebooks and mentioned in over 100 discussion threads in the past 90 days!\r\nMost excitingly, AutoGluon has already been used to achieve top ranking placements in multiple competitions with thousands of competitors since the start of 2024:\r\n\r\n| Placement                                | Competition                                                                                                                                       | Author                                             | Date       | AutoGluon Details | Notes                          |\r\n|:-----------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------|:-----------|:------------------|:-------------------------------|\r\n| :3rd_place_medal: Rank 3\u002F2303 (Top 0.1%) | [Steel Plate Defect Prediction](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Fplayground-series-s4e3\u002Fdiscussion\u002F488127)                                     | [Samvel Kocharyan](https:\u002F\u002Fgithub.com\u002Fsamvelkoch)  | 2024\u002F03\u002F31 | v1.0, Tabular     | Kaggle Playground Series S4E3  |\r\n| :2nd_place_medal: Rank 2\u002F93 (Top 2%)     | [Prediction Interval Competition I: Birth Weight](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Fprediction-interval-competition-i-birth-weight\u002Fleaderboard) | [Oleksandr Shchur](https:\u002F\u002Fshchur.github.io\u002F)      | 2024\u002F03\u002F21 | v1.0, Tabular     |                                |\r\n| :2nd_place_medal: Rank 2\u002F1542 (Top 0.1%) | [WiDS Datathon 2024 Challenge #1](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Fwidsdatathon2024-challenge1\u002Fdiscussion\u002F482285)                              | [lazy_panda](https:\u002F\u002Fwww.kaggle.com\u002Fbyteliberator) | 2024\u002F03\u002F01 | v1.0, 
Tabular     |                                |\r\n| :2nd_place_medal: Rank 2\u002F3746 (Top 0.1%) | [Multi-Class Prediction of Obesity Risk](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Fplayground-series-s4e2\u002Fdiscussion\u002F480939)                            | [Kirderf](https:\u002F\u002Ftwitter.com\u002Fkirderf9)            | 2024\u002F02\u002F29 | v1.0, Tabular     | Kaggle Playground Series S4E2  |\r\n| :2nd_place_medal: Rank 2\u002F3777 (Top 0.1%) | [Binary Classification with a Bank Churn Dataset](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Fplayground-series-s4e1\u002Fdiscussion\u002F472496)                   | [lukaszl](https:\u002F\u002Fwww.kaggle.com\u002Flukaszl)          | 2024\u002F01\u002F31 | v1.0, Tabular     | Kaggle Playground Series S4E1  |\r\n| Rank 4\u002F1718 (Top 0.2%)                   | [Multi-Class Prediction of Cirrhosis Outcomes](https:\u002F\u002Fwww.kaggle.com\u002Fcompetitions\u002Fplayground-series-s3e26\u002Fdiscussion\u002F464863)                     | [Kirderf](https:\u002F\u002Ftwitter.com\u002Fkirderf9)            | 2024\u002F01\u002F01 | v1.0, Tabular     | Kaggle Playground Series S3E26 |\r\n\r\nWe are thrilled that the data science community is leveraging AutoGluon as their go-to method to quickly and effectively achieve top-ranking ML solutions! For an up-to-date list of competition solutions using AutoGluon refer to our [AWESOME.md](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fblob\u002Fmaster\u002FAWESOME.md#competition-solutions-using-autogluon), and don't hesitate to let us know if you use A","2024-04-17T16:46:34",{"id":193,"version":194,"summary_zh":195,"released_at":196},106103,"v1.0.0","# Version 1.0.0\r\n\r\nToday is finally the day... AutoGluon 1.0 has arrived!! 
After [over four years of development](https:\u002F\u002Fautomlpodcast.com\u002Fepisode\u002Fautogluon-the-story) and [2061 commits from 111 contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fgraphs\u002Fcontributors), we are excited to share with you the culmination of our efforts to create and democratize the most powerful, easy to use, and feature rich automated machine learning system in the world.\r\n\r\nAutoGluon 1.0 comes with transformative enhancements to predictive quality resulting from the combination of multiple novel ensembling innovations, spotlighted below. Besides performance enhancements, many other improvements have been made that are detailed in the individual module sections.\r\n\r\nThis release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.0.\r\n\r\nThis release contains 223 commits from 17 contributors!\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n@shchur, @zhiqiangdon, @Innixma, @prateekdesai04, @FANGAreNotGnu, @yinweisu, @taoyang1122, @LennartPurucker, @Harry-zzh, @AnirudhDagar, @jaheba, @gradientsky, @melopeo, @ddelange, @tonyhoo, @canerturkmen, @suzhoum\r\n\r\nJoin the community: [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N)  \r\nGet the latest updates: [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fautogluon?style=social)](https:\u002F\u002Ftwitter.com\u002Fautogluon)\r\n\r\n## Spotlight\r\n\r\n### Tabular Performance Enhancements\r\n\r\nAutoGluon 1.0 features **major enhancements to predictive quality**, establishing a new state-of-the-art in Tabular modeling. 
To the best of our knowledge, **AutoGluon 1.0 marks the largest leap forward in the state-of-the-art for tabular data since the [original AutoGluon paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.06505) from March 2020**. The enhancements come primarily from two features: **[Dynamic stacking](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fpull\u002F3616)** to [mitigate stacked overfitting](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fissues\u002F2779#issuecomment-1736468165), and a new **[learned model hyperparameters portfolio](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fblob\u002F1.0.0\u002Ftabular\u002Fsrc\u002Fautogluon\u002Ftabular\u002Fconfigs\u002Fzeroshot\u002Fzeroshot_portfolio_2023.py)** via Zeroshot-HPO, obtained from the newly released **[TabRepo](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Ftabrepo)** ensemble simulation library. Together, they lead to a **75% win-rate compared to AutoGluon 0.8 with faster inference speed, lower disk usage, and higher stability.**\r\n\r\n### AutoML Benchmark Results\r\n\r\nOpenML released the [official 2023 AutoML Benchmark results](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.12560) on November 16th, 2023. Their results show AutoGluon 0.8 as _the_ state-of-the-art in AutoML systems across a wide variety of tasks: _\"Overall, in terms of model performance, AutoGluon consistently has the highest average rank in our benchmark.\"_ We now showcase that AutoGluon 1.0 achieves far superior results even to AutoGluon 0.8!\r\n\r\nBelow is a comparison on the [OpenML AutoML Benchmark](https:\u002F\u002Fopenml.github.io\u002Fautomlbenchmark\u002Findex.html) across 1040 tasks. LightGBM, XGBoost, and CatBoost results were obtained via AutoGluon, and other methods are from [the official AutoML Benchmark 2023 results](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.12560). 
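The learned portfolio behind these gains is, in essence, a fixed list of model configurations chosen greedily from offline evaluation results so that the configurations complement one another across datasets. A minimal illustrative sketch of that greedy selection (fabricated scores, plain Python; not the actual TabRepo/AutoGluon implementation):

```python
# Illustrative "zeroshot portfolio" selection: greedily add the config that
# most improves the average best-score across offline benchmark datasets.
# The score table below is fabricated for demonstration purposes.

def build_portfolio(scores, size):
    """scores: {config_name: [score per dataset]}; higher is better."""
    n_datasets = len(next(iter(scores.values())))
    portfolio, best_so_far = [], [float("-inf")] * n_datasets

    for _ in range(size):
        def gain(cfg):
            # Average per-dataset score if cfg were added to the portfolio.
            return sum(max(b, s) for b, s in zip(best_so_far, scores[cfg])) / n_datasets

        cfg = max((c for c in scores if c not in portfolio), key=gain)
        portfolio.append(cfg)
        best_so_far = [max(b, s) for b, s in zip(best_so_far, scores[cfg])]
    return portfolio

# Hypothetical offline evaluations of 3 configs on 3 datasets.
offline = {
    "lightgbm_default": [0.80, 0.70, 0.90],
    "catboost_tuned":   [0.85, 0.60, 0.85],
    "nn_torch_wide":    [0.60, 0.95, 0.50],
}
print(build_portfolio(offline, size=2))  # picks complementary configs
```

Note that the second pick is the config that best covers the datasets where the first pick is weak, which is why a small learned portfolio can beat tuning any single model family in isolation.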
AutoGluon 1.0 has a 95%+ win-rate against traditional tabular models, including **a 99% win-rate vs LightGBM and a 100% win-rate vs XGBoost**. AutoGluon 1.0 has between an 82% and 94% win-rate against other AutoML systems. For all methods, AutoGluon is able to achieve >10% average loss improvement (Ex: Going from 90% accuracy to 91% accuracy is a 10% loss improvement). **AutoGluon 1.0 achieves first place in 63% of tasks**, with lightautoml having the second most at 12% (AutoGluon 0.8 previously took first place 48% of the time). AutoGluon 1.0 even achieves a 7.4% average loss improvement over AutoGluon 0.8!\r\n\r\n| Method                       | AG Winrate | AG Loss Improvement | Rescaled Loss |     Rank | Champion |\r\n|:-----------------------------|:-----------|:--------------------|--------------:|---------:|:---------|\r\n| AutoGluon 1.0 (Best, 4h8c)   | **-**      | **-**               |      **0.04** | **1.95** | **63%**  |\r\n| lightautoml (2023, 4h8c)     | 84%        | 12.0%               |           0.2 |     4.78 | 12%      |\r\n| H2OAutoML (2023, 4h8c)       | 94%        | 10.8%               |          0.17 |     4.98 | 1%       |\r\n| FLAML (2023, 4h8c)           | 86%        | 16.7%               |          0.23 |     5.29 | 5%       |\r\n| MLJAR (2023, 4h8c)           | 82%        | 23.0%               |          0.33 |     5.53 | 6%       |\r\n| autosklearn (2023, 4h8c)     | 91%        | 12.5%               |          0.22 |     6.07 | 4%       |\r\n| GAMA (2023, 4h8c)            | 86%        | 15.4%               |          0.28 |     6.13 | 5%       |\r\n| CatBoost (2023, 4h8c)        | 95","2023-11-30T05:56:34",{"id":198,"version":199,"summary_zh":200,"released_at":201},106104,"v0.8.2","# Version 0.8.2\r\n\r\nv0.8.2 is a hot-fix release to pin `pydantic` version to avoid crashing during HPO\r\n\r\nAs always, only load previously trained models using the same version of AutoGluon that they were originally trained on. 
\r\nLoading models trained in different versions of AutoGluon is not supported.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002F0.8.1...0.8.2\r\n\r\nThis version supports Python versions 3.8, 3.9, and 3.10.\r\n\r\n# Changes\r\n\r\n* codespell: action, config + some typos fixed @yarikoptic @yinweisu (#3323)\r\n* Unpin sentencepiece @zhiqiangdon (#3368)\r\n* Pin pydantic @yinweisu (#3370)\r\n","2023-06-30T22:16:33",{"id":203,"version":204,"summary_zh":205,"released_at":206},106105,"v0.8.1","# Version 0.8.1\r\n\r\nv0.8.1 is a bug fix release.\r\n\r\nAs always, only load previously trained models using the same version of AutoGluon that they were originally trained on. \r\nLoading models trained in different versions of AutoGluon is not supported.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002F0.8.0...0.8.1\r\n\r\nThis version supports Python versions 3.8, 3.9, and 3.10.\r\n\r\n# Changes\r\n\r\n## Documentation improvements\r\n\r\n* Update google analytics property @gidler (#3330)\r\n* Add Discord Link @Innixma (#3332)\r\n* Add community section to website front page @Innixma (#3333)\r\n* Update Windows Conda install instructions @gidler (#3346)\r\n* Add some missing Colab buttons in tutorials @gidler (#3359)\r\n\r\n\r\n## Bug Fixes \u002F General Improvements\r\n\r\n* Move PyMuPDF to optional @Innixma @zhiqiangdon (#3331)\r\n* Remove TIMM in core setup @Innixma (#3334)\r\n* Update persist_models max_memory 0.1 -> 0.4 @Innixma (#3338)\r\n* Lint modules @yinweisu (#3337, #3339, #3344, #3347)\r\n* Remove fairscale @zhiqiangdon (#3342)\r\n* Fix refit crash @Innixma (#3348)\r\n* Fix `DirectTabular` model failing for some metrics; hide warnings produced by `AutoARIMA` @shchur (#3350)\r\n* Pin dependencies @yinweisu (#3358)\r\n* Reduce per gpu batch size for AutoMM high_quality_hpo to avoid out of memory error for some corner cases 
@zhiqiangdon (#3360)\r\n* Fix HPO crash by setting reuse_actor to False @yinweisu (#3361)\r\n","2023-06-29T21:11:12",{"id":208,"version":209,"summary_zh":210,"released_at":211},106106,"v0.8.0","# Version 0.8.0\r\nWe're happy to announce the AutoGluon 0.8 release.\r\n\r\nNEW:  [![](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1043248669505368144?logo=discord&style=flat)](https:\u002F\u002Fdiscord.gg\u002FwjUmjqAc2N) Join our official community discord server to ask questions and get involved!\r\n\r\nNote: Loading models trained in different versions of AutoGluon is not supported.\r\n\r\nThis release contains 196 commits from 20 contributors!\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002F0.7.0...0.8.0\r\n\r\nSpecial thanks to @geoalgo for the joint work in generating the experimental tabular Zeroshot-HPO portfolio this release!\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n@shchur, @Innixma, @yinweisu, @gradientsky, @FANGAreNotGnu, @zhiqiangdon, @gidler, @liangfu, @tonyhoo, @cheungdaven, @cnpgs, @giswqs, @suzhoum, @yongxinw, @isunli, @jjaeyeon, @xiaochenbin9527, @yzhliu, @jsharpna, @sxjscience\r\n\r\nAutoGluon 0.8 supports Python versions 3.8, 3.9, and 3.10.\r\n\r\n# Changes\r\n\r\n## Highlights\r\n* AutoGluon TimeSeries introduced several major improvements, including new models, upgraded presets that lead to better forecast accuracy, and optimizations that speed up training & inference.\r\n* AutoGluon Tabular now supports **[calibrating the decision threshold in binary classification](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Ftabular\u002Ftabular-indepth.html#decision-threshold-calibration)** ([API](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Fapi\u002Fautogluon.tabular.TabularPredictor.calibrate_decision_threshold.html)), leading to massive improvements in metrics such as `f1` and `balanced_accuracy`. 
It is not uncommon to see `f1` scores improve from `0.70` to `0.73` as an example. We **strongly** encourage all users who are using these metrics to try out the new decision threshold calibration logic.\r\n* AutoGluon MultiModal introduces two new features: 1) [**PDF document classification**](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fdocument\u002Fpdf_classification.html), and 2) [**Open Vocabulary Object Detection**](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fobject_detection\u002Fquick_start\u002Fquick_start_ovd.html).\r\n* AutoGluon MultiModal upgraded the presets for object detection, now offering `medium_quality`, `high_quality`, and `best_quality` options. The empirical results demonstrate significant ~20% relative improvements in the mAP (mean Average Precision) metric, using the same preset.\r\n* AutoGluon Tabular has added an experimental **Zeroshot HPO config** which performs well on small datasets \u003C10000 rows when at least an hour of training time is provided (~60% win-rate vs `best_quality`). To try it out, specify `presets=\"experimental_zeroshot_hpo_hybrid\"` when calling `fit()`.\r\n* AutoGluon EDA added support for [**Anomaly Detection**](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Feda\u002Feda-auto-anomaly-detection.html) and [**Partial Dependence Plots**](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Feda\u002Feda-auto-analyze-interaction.html#using-interaction-charts-to-learn-information-about-the-data).\r\n* AutoGluon Tabular has added experimental support for **[TabPFN](https:\u002F\u002Fgithub.com\u002Fautoml\u002FTabPFN)**, a pre-trained tabular transformer model. Try it out via `pip install autogluon.tabular[all,tabpfn]` (hyperparameter key is \"TABPFN\")! 
You can also try it out via specifying `presets=\"experimental_extreme_quality\"`.\r\n\r\n## General\r\n* General doc improvements @tonyhoo @Innixma @yinweisu @gidler @cnpgs @isunli @giswqs (#2940, #2953, #2963, #3007, #3027, #3059, #3068, #3083, #3128, #3129, #3130, #3147, #3174, #3187, #3256, #3258, #3280, #3306, #3307, #3311, #3313)\r\n* General code fixes and improvements @yinweisu @Innixma (#2921, #3078, #3113, #3140, #3206)\r\n* CI improvements @yinweisu @gidler @yzhliu @liangfu @gradientsky (#2965, #3008, #3013, #3020, #3046, #3053, #3108, #3135, #3159, #3283, #3185)\r\n* New AutoGluon Webpage @gidler @shchur (#2924)\r\n* Support sample_weight in RMSE @jjaeyeon (#3052)\r\n* Move AG search space to common @yinweisu (#3192)\r\n* Deprecation utils @yinweisu (#3206, #3209)\r\n* Update namespace packages for PEP420 compatibility @gradientsky (#3228)\r\n\r\n## Multimodal\r\n\r\nAutoGluon MultiModal (also known as AutoMM) introduces two new features: 1) PDF document classification, and 2) Open Vocabulary Object Detection. Additionally, we have upgraded the presets for object detection, now offering `medium_quality`, `high_quality`, and `best_quality` options. The empirical results demonstrate significant ~20% relative improvements in the mAP (mean Average Precision) metric, using the same preset.\r\n\r\n### New Features\r\n* PDF Document Classification. See [tutorial](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fdocument\u002Fpdf_classification.html) @cheungdaven (#2864, #3043)\r\n* Open Vocabulary Object Detection. 
See [tutorial](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fobject_detection\u002Fquick_start\u002Fquick_start_ovd.html) @FANGAreNotGnu (#3164)\r\n\r\n### Performance Improvements\r\n* Upgrade the detection engine from mmdet 2.x to mmdet 3.x, and upgrade ","2023-06-16T02:00:24",{"id":213,"version":214,"summary_zh":215,"released_at":216},106107,"v0.7.0","# Version 0.7.0\r\n\r\nWe're happy to announce the AutoGluon 0.7 release. This release contains a new experimental module `autogluon.eda` for exploratory\r\ndata analysis. AutoGluon 0.7 offers **conda-forge support**, enhancements to Tabular, MultiModal, and Time Series\r\nmodules, and many quality of life improvements and fixes.\r\n\r\nAs always, only load previously trained models using the same version of AutoGluon that they were originally trained on.\r\nLoading models trained in different versions of AutoGluon is not supported.\r\n\r\nThis release contains [**170** commits from **19** contributors](https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=2023-01-10&to=2023-02-16&type=c)!\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002Fv0.6.2...v0.7.0\r\n\r\nSpecial thanks to @MountPOTATO who is a first time contributor to AutoGluon this release!\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n@Innixma, @zhiqiangdon, @yinweisu, @gradientsky, @shchur, @sxjscience, @FANGAreNotGnu, @yongxinw, @cheungdaven,\r\n@liangfu, @tonyhoo, @bryanyzhu, @suzhoum, @canerturkmen, @giswqs, @gidler, @yzhliu, @Linuxdex and @MountPOTATO\r\n\r\nAutoGluon 0.7 supports Python versions 3.8, 3.9, and **3.10**. Python 3.7 is no longer supported as of this release. 
\r\n\r\n# Changes\r\n\r\n## NEW: AutoGluon available on conda-forge\r\n\r\nAs of AutoGluon 0.7 release, AutoGluon is now available on [conda-forge](https:\u002F\u002Fanaconda.org\u002Fconda-forge\u002Fautogluon) (#612)!\r\n\r\nKudos to the following individuals for making this happen:\r\n  * @giswqs for leading the entire effort and being a 1-man army driving this forward.\r\n  * @h-vetinari for providing excellent advice for working with conda-forge and some truly exceptional feedback.\r\n  * @arturdaraujo, @PertuyF, @ngam and @priyanga24 for their encouragement, suggestions, and feedback.\r\n  * The conda-forge team for their prompt and effective reviews of our (many) PRs.\r\n  * @gradientsky for testing M1 support during the early stages.\r\n  * @sxjscience, @zhiqiangdon, @canerturkmen, @shchur, and @Innixma for helping upgrade our downstream dependency versions to be compatible with conda.\r\n  * Everyone else who has supported this process either directly or indirectly.\r\n\r\n## NEW: `autogluon.eda` (Exploratory Data Analysis)\r\n\r\nWe are happy to announce AutoGluon Exploratory Data Analysis (EDA) toolkit. Starting with v0.7, AutoGluon now can analyze and visualize different aspects of data and models. We invite you to explore the following tutorials: [Quick Fit](https:\u002F\u002Fauto.gluon.ai\u002Fdev\u002Ftutorials\u002Fstable\u002Feda-auto-quick-fit.html), [Dataset Overview](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Feda\u002Feda-auto-dataset-overview.html), [Target Variable Analysis](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Feda\u002Feda-auto-target-analysis.html), [Covariate Shift Analysis](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Feda\u002Feda-auto-covariate-shift.html). Other materials can be found in [EDA Section](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Feda\u002Findex.html) of the website.\r\n\r\n## General\r\n\r\n- Added Python 3.10 support. 
@Innixma (#2721)\r\n- Dropped Python 3.7 support. @Innixma (#2722)\r\n- Removed `dask` and `distributed` dependencies. @Innixma (#2691)\r\n- Removed `autogluon.text` and `autogluon.vision` modules. We recommend using `autogluon.multimodal` for text and vision tasks going forward.\r\n\r\n## AutoMM\r\n\r\nAutoGluon MultiModal (a.k.a. AutoMM) supports three new features: 1) document classification; 2) named entity recognition\r\nfor Chinese language; 3) few shot learning with SVM.\r\n\r\nMeanwhile, we removed `autogluon.text` and `autogluon.vision` as these features are supported in `autogluon.multimodal`.\r\n\r\n### New features\r\n\r\n- Document Classification\r\n  - Add scanned document classification (experimental).\r\n  - Customers can train models for scanned document classification in a few lines of code\r\n  - See [tutorials](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fdocument\u002Fdocument_classification.html)\r\n  - Contributors and commits: @cheungdaven (#2765, #2826, #2833, #2928)\r\n- NER for Chinese Language\r\n  - Support Chinese named entity recognition\r\n  - See [tutorials](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fdocument\u002Fdocument_classification.html)\r\n  - Contributors and commits: @cheungdaven (#2676, #2709)\r\n- Few Shot Learning with SVM\r\n  - Improved few shot learning by adding SVM support\r\n  - See [tutorials](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fadvanced_topics\u002Ffew_shot_learning.html)\r\n  - Contributors and commits: @yongxinw (#2850)\r\n\r\n### Other Enhancements\r\n\r\n- Add new loss function `FocalLoss`. @yongxinw (#2860)\r\n- Add matcher realtime inference support. @zhiqiangdon (#2613)\r\n- Add matcher HPO. @zhiqiangdon (#2619)\r\n- Add YOLOX models (small, large, and x-large) and update presets for object detection. @FANGAreNotGnu (#2644, #2867, #2927, #2933)\r\n- Add AutoMM presets @zhiqiangdon. 
(#2620, #2749, #2839)\r\n- Add model dump for models from HuggingFace, timm and mmdet. @suzhoum @FANGAreNotGnu @liangfu (#2682, #2700,","2023-02-17T07:00:10",{"id":218,"version":219,"summary_zh":220,"released_at":221},106108,"v0.6.2","# Version 0.6.2\r\n\r\nv0.6.2 is a security and bug fix release.\r\n\r\nAs always, only load previously trained models using the same version of AutoGluon that they were originally trained on.\r\nLoading models trained in different versions of AutoGluon is not supported.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002Fv0.6.1...v0.6.2\r\n\r\nSpecial thanks to @daikikatsuragawa and @yzhliu who were first time contributors to AutoGluon this release!\r\n\r\nThis version supports Python versions 3.7 to 3.9. 0.6.x are the last releases that will support Python 3.7.\r\n\r\n# Changes\r\n\r\n## Documentation improvements\r\n\r\n- Ray usage FAQ (#2559) - @yinweisu\r\n- Fix missing Predictor API doc (#2573) - @gidler\r\n- 2023 Roadmap Update (#2590) - @Innixma\r\n- Image classification tutorial update for bytearray (#2598) - @suzhoum\r\n- Fix broken tutorial index links (#2617) - @shchur\r\n- Improve timeseries quickstart tutorial (#2653) - @shchur\r\n\r\n\r\n## Bug Fixes \u002F Security\r\n\r\n- [multimodal] Refactoring and bug fixes (#2554, #2541, #2477, #2569, #2578, #2613, #2620, #2630, #2633, #2635, #2647, #2645, #2652, #2659) - @zhiqiangdon, @yongxinw, @FANGAreNotGnu, @sxjscience, @Innixma\r\n- [multimodal] Support of named entity recognition (#2556) - @cheungdaven\r\n- [multimodal] bytearray support for image modality (#2495) - @suzhoum\r\n- [multimodal] Support HPO for matcher (#2619) - @zhiqiangdon\r\n- [multimodal] Support Onnx export for timm image model (#2564) - @liangfu\r\n- [tabular] Refactoring and bug fixes (#2387, #2595, #2599, #2589, #2628, #2376, #2642, #2646, #2650, #2657) - @Innixma, @liangfu, @yzhliu, @daikikatsuragawa, @yinweisu\r\n- [tabular] Fix 
ensemble folding (#2582) - @yinweisu\r\n- [tabular] Convert ColumnTransformer in tabular NN from sklearn to onnx (#2503) - @liangfu \r\n- [tabular] Throw error on non-finite values in label column (#2509) - @gidler\r\n- [timeseries] Refactoring and bug fixes (#2584, #2594, #2605, #2606) - @shchur\r\n- [timeseries] Speed up data preparation for local models (#2587) - @shchur\r\n- [timeseries] Speed up prediction for GluonTS models (#2593) - @shchur\r\n- [timeseries] Speed up the train\u002Fval splitter (#2586) - @shchur\r\n- [timeseries] Speed up TimeSeriesEnsembleSelection.fit (#2602) - @shchur\r\n- [security] Update torch (#2588) - @gradientsky\r\n","2023-01-11T22:21:31",{"id":223,"version":224,"summary_zh":225,"released_at":226},106109,"v0.6.1","# Version 0.6.1\r\n\r\nv0.6.1 is a security fix \u002F bug fix release.\r\n\r\nAs always, only load previously trained models using the same version of AutoGluon that they were originally trained on. \r\nLoading models trained in different versions of AutoGluon is not supported.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fautogluon\u002Fautogluon\u002Fcompare\u002Fv0.6.0...v0.6.1\r\n\r\nSpecial thanks to @lvwerra who is a first-time contributor to AutoGluon this release!\r\n\r\nThis version supports Python versions 3.7 to 3.9. 
0.6.x are the last releases that will support Python 3.7.\r\n\r\n# Changes\r\n\r\n## Documentation improvements\r\n\r\n- Fix object detection tutorial layout (#2450) - @bryanyzhu\r\n- Add multimodal cheatsheet (#2467) - @sxjscience\r\n- Refactoring detection inference quickstart and bug fix on fit->predict - @yongxinw, @zhiqiangdon, @Innixma, @BingzhaoZhu, @tonyhoo\r\n- Use Pothole Dataset in Tutorial for AutoMM Detection (#2468) - @FANGAreNotGnu\r\n- add time series cheat sheet, add time series to doc titles (#2478) - @canerturkmen\r\n- Update all repo references to autogluon\u002Fautogluon (#2463) - @gidler\r\n- fix typo in object detection tutorial CI (#2516) - @tonyhoo\r\n\r\n## Bug Fixes \u002F Security\r\n\r\n- bump evaluate to 0.3.0 (#2433) - @lvwerra\r\n- Add finetune\u002Feval tests for AutoMM detection (#2441) - @FANGAreNotGnu\r\n- Adding Joint IA3_LoRA as efficient finetuning strategy (#2451) - @Raldir\r\n- Fix AutoMM warnings about object detection (#2458) - @zhiqiangdon\r\n- [Tabular] Speed up feature transform in tabular NN model (#2442) - @liangfu\r\n- fix matcher cpu inference bug (#2461) - @sxjscience\r\n- [timeseries] Silence GluonTS JSON warning (#2454) - @shchur\r\n- [timeseries] Fix pandas groupby bug + GluonTS index bug (#2420) - @shchur\r\n- Simplified infer speed throughput calculation (#2465) - @Innixma\r\n- [Tabular] make tabular nn dataset iterable (#2395) - @liangfu\r\n- Remove old images and dataset download scripts (#2471) - @Innixma\r\n- Support image bytearray in AutoMM (#2490) - @suzhoum\r\n- [NER] add an NER visualizer (#2500) - @cheungdaven\r\n- [Cloud] Lazy load TextPredictor and ImagePredictor which will be deprecated (#2517) - @tonyhoo\r\n- Use detectron2 visualizer and update quickstart (#2502) - @yongxinw, @zhiqiangdon, @Innixma, @BingzhaoZhu, @tonyhoo\r\n- fix df preprocessor properties (#2512) - @zhiqiangdon\r\n- [timeseries] Fix info and fit_summary for TimeSeriesPredictor (#2510) - @shchur\r\n- [timeseries] Pass 
known_covariates to component models of the WeightedEnsemble - @shchur\r\n- [timeseries] Gracefully handle inconsistencies in static_features provided by user - @shchur\r\n- [security] update Pillow to >=9.3.0 (#2519) - @gradientsky\r\n- [CI] upgrade codeql v1 to v2 as v1 will be deprecated (#2528) - @tonyhoo\r\n- Upgrade scikit-learn-intelex version (#2466) - @Innixma\r\n- Save AutoGluonTabular model to the correct folder (#2530) - @shchur\r\n- support predicting with model fitted on v0.5.1 (#2531) - @liangfu\r\n- [timeseries] Implement input validation for TimeSeriesPredictor and improve debug messages - @shchur\r\n- [timeseries] Ensure that timestamps are sorted when creating a TimeSeriesDataFrame - @shchur\r\n- Add tests for preprocessing mutation (#2540) - @Innixma\r\n- Fix timezone datetime edgecase (#2538) - @Innixma, @gradientsky\r\n- Mmdet Fix Image Identifier (#2492) - @FANGAreNotGnu\r\n- [timeseries] Warn if provided data has a frequency that is not supported - @shchur\r\n- Train and inference with different image data types (#2535) - @suzhoum\r\n- Remove pycocotools (#2548) - @bryanyzhu\r\n- avoid copying identical dataframes (#2532) - @liangfu\r\n- Fix AutoMM Tokenizer (#2550) - @FANGAreNotGnu\r\n- [Tabular] Resource Allocation Fix (#2536) - @yinweisu\r\n- imodels version cap (#2557) - @yinweisu\r\n- Fix int32\u002Fint64 difference between windows and other platforms; fix mutation issue (#2558) - @gradientsky\r\n","2022-12-13T03:34:41",{"id":228,"version":229,"summary_zh":230,"released_at":231},106110,"v0.5.3","# Version 0.5.3\r\n\r\nv0.5.3 is a security hotfix release.\r\n\r\nThis release is **non-breaking** when upgrading from v0.5.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. 
Loading models trained in different versions of AutoGluon is not supported.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Fcompare\u002Fv0.5.2...v0.5.3\r\n\r\nThis version supports Python versions 3.7 to 3.9.\r\n","2022-11-19T01:25:46",{"id":233,"version":234,"summary_zh":235,"released_at":236},106111,"v0.6.0","# Version 0.6.0\r\n\r\nWe're happy to announce the AutoGluon 0.6 release. 0.6 contains major enhancements to Tabular, Multimodal, and Time Series\r\nmodules, along with many quality of life improvements and fixes.\r\n\r\nAs always, only load previously trained models using the same version of AutoGluon that they were originally trained on.\r\nLoading models trained in different versions of AutoGluon is not supported.\r\n\r\nThis release contains [**263** commits from **25** contributors](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=2022-07-18&to=2022-11-15&type=c)!\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Fcompare\u002Fv0.5.2...v0.6.0\r\n\r\nSpecial thanks to @cheungdaven, @suzhoum, @BingzhaoZhu, @liangfu, @Harry-zzh, @gidler, @yongxinw, @martinschaef,\r\n@giswqs, @Jalagarto, @geoalgo, @lujiaying and @leloykun who were first time contributors to AutoGluon this release!\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n@shchur, @yinweisu, @zhiqiangdon, @Innixma, @FANGAreNotGnu, @canerturkmen, @sxjscience, @gradientsky, @cheungdaven,\r\n@bryanyzhu, @suzhoum, @BingzhaoZhu, @yongxinw, @tonyhoo, @liangfu, @Harry-zzh, @Raldir, @gidler, @martinschaef, \r\n@giswqs, @Jalagarto, @geoalgo, @lujiaying, @leloykun, @yiqings\r\n\r\nThis version supports Python versions 3.7 to 3.9. 
This is the last release that will support Python 3.7.\r\n\r\n# Changes\r\n\r\n## AutoMM\r\n\r\nAutoGluon Multimodal (a.k.a. AutoMM) supports three new features: 1) object detection, 2) named entity recognition, and 3) multimodal matching. In addition, the HPO backend of AutoGluon Multimodal has been upgraded to ray 2.0. It also supports fine-tuning the billion-scale FLAN-T5-XL model on a single AWS g4.2x-large instance with improved parameter-efficient finetuning. Starting from 0.6, we recommend using autogluon.multimodal rather than autogluon.text or autogluon.vision and have added deprecation warnings.\r\n\r\n### New features\r\n\r\n- Object Detection\r\n  - Add new problem_type `\"object_detection\"`.\r\n  - Customers can run inference with pretrained object detection models and train their own model with three lines of code.\r\n  - Integrate with [open-mmlab\u002Fmmdetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection), which supports classic detection architectures like Faster RCNN, and more efficient and performant architectures like YOLOV3 and VFNet.\r\n  - See [tutorials](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fobject_detection\u002Findex.html) and [examples](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Ftree\u002Fmaster\u002Fexamples\u002Fautomm\u002Fobject_detection) for more detail.\r\n  - Contributors and commits: @FANGAreNotGnu, @bryanyzhu, @zhiqiangdon, @yongxinw, @sxjscience, @Harry-zzh (#2025, #2061, #2131, #2181, #2196, #2215, #2244, #2265, #2290, #2311, #2312, #2337, #2349, #2353, #2360, #2362, #2365, #2380, #2381, #2391, #2393, #2400, #2419, #2421, #2063, #2104, #2411)\r\n\r\n- Named Entity Recognition\r\n  - Add new problem_type `\"ner\"`.\r\n  - Customers can train models to extract named entities with three lines of code.\r\n  - The implementation supports any backbones in huggingface\u002Ftransformers, including the [FLAN-T5 
series](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11416) released by Google.\r\n  - See [tutorials](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Ftext_prediction\u002Fner.html) for more detail.\r\n  - Contributors and commits: @cheungdaven (#2183, #2232, #2220, #2282, #2295, #2301, #2337, #2346, #2361, #2372, #2394, #2412)\r\n\r\n- Multimodal Matching\r\n  - Add new problem_type `\"text_similarity\"`, `\"image_similarity\"`, `\"image_text_similarity\"`.\r\n  - Users can now extract semantic embeddings with pretrained models for text-text, image-image, and text-image matching problems.\r\n  - Moreover, users can further finetune these models with relevance data.\r\n  - The semantic text embedding model can also be combined with BM25 to form a hybrid indexing solution.\r\n  - Internally, AutoGluon Multimodal implements a twin-tower architecture that is flexible in the choice of backbones for each tower. It supports image backbones in TIMM, text backbones in huggingface\u002Ftransformers, and also the CLIP backbone.\r\n  - See [tutorials](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fmatching\u002Findex.html) for more detail.\r\n  - Contributors and commits: @zhiqiangdon @FANGAreNotGnu @cheungdaven @suzhoum @sxjscience @bryanyzhu (#1975, #1994, #2142, #2179, #2186, #2217, #2235, #2284, #2297, #2313, #2326, #2337, #2347, #2357, #2358, #2362, #2363, #2375, #2378, #2404, #2416, #2407, #2417)\r\n\r\n- Miscellaneous minor fixes. @cheungdaven @FANGAreNotGnu @geoalgo @zhiqiangdon (#2402, #2409, #2026, #2401, #2418)\r\n\r\n### Other Enhancements\r\n\r\n- Fix the FT-Transformer implementation and support Fastformer. @BingzhaoZhu @yiqings (#1958, #2194, #2251, #2344, #2379, #2386)\r\n- Support finetuning the billion-scale FLAN-T5-XL model on a single AWS g4.2x-large instance via improved parameter-efficient finetuning. 
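The matching notes above mention combining the semantic text embedding model with BM25 for hybrid indexing. As an illustration of the lexical half of that combination only (a plain-Python sketch with a made-up toy corpus, not AutoGluon's implementation; `k1` and `b` are the usual Okapi defaults):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Toy corpus: in a hybrid index these scores would be blended with
# cosine similarities from the semantic embedding model.
docs = [
    "autogluon supports multimodal matching".split(),
    "bm25 is a classic lexical ranking function".split(),
    "semantic embeddings complement lexical bm25 scores".split(),
]
scores = bm25_scores("bm25 lexical".split(), docs)
best = max(range(len(docs)), key=scores.__getitem__)   # shortest matching doc wins
```

A document with no query terms scores 0, so the semantic side of a hybrid index is what recovers purely paraphrased matches.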
See [tutorial](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fadvanced_topics\u002Fefficient_finetuning_basic.html). @Raldir @sxj","2022-11-17T04:52:05",{"id":238,"version":239,"summary_zh":240,"released_at":241},106112,"v0.5.2","# Version 0.5.2\r\n\r\nv0.5.2 is a security hotfix release.\r\n\r\nThis release is **non-breaking** when upgrading from v0.5.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Fcompare\u002Fv0.5.1...v0.5.2\r\n\r\nThis version supports Python versions 3.7 to 3.9.\r\n","2022-07-29T00:05:50",{"id":243,"version":244,"summary_zh":245,"released_at":246},106113,"v0.4.3","# Version 0.4.3\r\n\r\nv0.4.3 is a security hotfix release.\r\n\r\nThis release is **non-breaking** when upgrading from v0.4.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Fcompare\u002Fv0.4.2...v0.4.3\r\n\r\nThis version supports Python versions 3.7 to 3.9.\r\n","2022-07-28T22:45:42",{"id":248,"version":249,"summary_zh":250,"released_at":251},106114,"v0.5.1","# Version 0.5.1\r\nWe're happy to announce the AutoGluon 0.5 release. This release contains major optimizations and bug fixes to autogluon.multimodal and autogluon.timeseries modules, as well as inference speed improvements to autogluon.tabular.\r\n\r\nThis release is non-breaking when upgrading from v0.5.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. 
Loading models trained in different versions of AutoGluon is not supported.\r\n\r\nThis release contains [58 commits from 14 contributors](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Fgraphs\u002Fcontributors?from=2022-06-22&to=2022-07-18&type=c)!\r\n\r\nFull Contributor List (ordered by # of commits):\r\n\r\n- @zhiqiangdon, @yinweisu, @Innixma, @canerturkmen, @sxjscience, @bryanyzhu, @jsharpna, @gidler, @gradientsky, @Linuxdex, @muxuezi, @yiqings, @huibinshen, @FANGAreNotGnu\r\n\r\nThis version supports Python versions 3.7 to 3.9.\r\n\r\nSee the full commit change-log here: https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Fcompare\u002Fv0.5.0...v0.5.1\r\n\r\n### AutoMM\r\n\r\nChanged to a new namespace `autogluon.multimodal` (AutoMM), which is a deep learning \"model zoo\" of model zoos. On one hand, AutoMM can automatically train deep models for unimodal (image-only, text-only, or tabular-only) problems. On the other hand, AutoMM can automatically solve multimodal (any combination of image, text, and tabular) problems by fusing multiple deep learning models. In addition, AutoMM can be used as a base model in AutoGluon Tabular and participate in the model ensemble.\r\n\r\n### New features\r\n\r\n- Supported zero-shot learning with CLIP (#1922) @zhiqiangdon\r\n  - Users can directly perform zero-shot image classification with the [CLIP model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.00020). Moreover, users can extract image and text embeddings with CLIP to do image-to-text or text-to-image retrieval.\r\n\r\n- Improved efficient finetuning\r\n  - Support “bit_fit”, “norm_fit”, “lora”, “lora_bias”, “lora_norm”. 
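For the "lora"-family strategies listed above, the general idea (a toy sketch of LoRA itself, not AutoMM's code; all dimensions and the zero-init convention here are illustrative) is that the pretrained weight `W` stays frozen and only a low-rank update `(alpha/r) * B @ A` is trained:

```python
def matmul(A, B):
    """Naive matrix multiply for nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_effective_weight(W, A, B, alpha=16, r=4):
    """Effective weight W + (alpha/r) * B @ A; W itself is never updated."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]

d_out, d_in, r = 8, 8, 2
W = [[0.0] * d_in for _ in range(d_out)]   # frozen pretrained weight (toy values)
A = [[0.1] * d_in for _ in range(r)]       # trainable, r x d_in
B = [[0.0] * r for _ in range(d_out)]      # trainable, d_out x r, zero-init

W_eff = lora_effective_weight(W, A, B, alpha=16, r=r)
# With B zero-initialized, the adapter starts as a no-op: W_eff == W.

trainable = r * d_in + d_out * r           # only A and B are trained
total = d_out * d_in + trainable
frac = trainable / total
```

At toy size the trainable fraction is large, but at transformer scale (e.g. hidden size 4096, r = 8) the same arithmetic gives well under 1% of parameters, which is the regime behind the "<0.5%" figure quoted below for "lora_bias".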
On four multilingual datasets ([xnli](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fxnli), [stsb_multi_mt](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fstsb_multi_mt), [paws-x](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fpaws-x), [amazon_reviews_multi](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Famazon_reviews_multi)), “lora_bias”, which is a combination of [LoRA](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.09685) and [BitFit](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.10199), achieved the best overall performance. Compared to finetuning the whole network, “lora_bias” will only finetune **\u003C0.5%** of the network parameters and can achieve comparable performance on “stsb_multi_mt” (#1780, #1809). @Raldir @zhiqiangdon\r\n  - Support finetuning the [mT5-XL](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fmt5-xl) model that has 1.7B parameters on a single NVIDIA T4 GPU. In AutoMM, we only use the T5-encoder (1.7B parameters) like [Sentence-T5](https:\u002F\u002Faclanthology.org\u002F2022.findings-acl.146.pdf). (#1933) @sxjscience\r\n\r\n- Added more data augmentation techniques\r\n  - [Mixup](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.09412.pdf) for image data. (#1730) @Linuxdex\r\n  - [TrivialAugment](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.10158.pdf) for both image and text data. (#1792) @lzcemma\r\n  - [Easy text augmentations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.11196.pdf). (#1756) @lzcemma\r\n\r\n- Enhanced teacher-student model distillation\r\n  - Support distilling the knowledge from a unimodal\u002Fmultimodal teacher model to a student model. (#1670, #1895) @zhiqiangdon\r\n\r\n### More tutorials and examples\r\n\r\n- [Beginner tutorials](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Findex.html) of applying AutoMM to image, text, or multimodal (including tabular) data. 
(#1861, #1908, #1858, #1869) @bryanyzhu @sxjscience @zhiqiangdon\r\n\r\n- [A zero-shot image classification tutorial](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fclip_zeroshot.html) with the CLIP model. (#1942) @bryanyzhu\r\n\r\n- A tutorial on using the [CLIP model to extract embeddings](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fclip_embedding.html) for image-text retrieval. (#1957) @bryanyzhu\r\n\r\n- [A tutorial](https:\u002F\u002Fauto.gluon.ai\u002Fstable\u002Ftutorials\u002Fmultimodal\u002Fcustomization.html) introducing comprehensive AutoMM configurations (#1861). @zhiqiangdon\r\n\r\n- [AutoMM for tabular data examples](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Ftree\u002Fmaster\u002Fexamples\u002Fautomm\u002Ftabular_dl) (#1752, #1893, #1903). @yiqings\r\n\r\n- [AutoMM distillation example](https:\u002F\u002Fgithub.com\u002Fawslabs\u002Fautogluon\u002Ftree\u002Fmaster\u002Fexamples\u002Fautomm\u002Fdistillation) (#1846). @FANGAreNotGnu\r\n\r\n- A Kaggle notebook about how to use AutoMM to predict pet adoption: https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Flinuxdex\u002Fuse-autogluon-to-predict-pet-adoption. The model achieves a score equivalent to the **top 1% (20th\u002F3537) in this kernel-only competition (test data is only available in the kernel without internet access)** (#1796, #1847, #1894, #1943). @Linuxdex\r\n","2022-07-19T03:46:32"]
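The v0.5.1 notes above add Mixup for image data. The mixing rule itself is a simple convex combination of two examples and their labels; a minimal sketch (plain Python with hypothetical flattened feature vectors and one-hot labels, not AutoMM's implementation):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2, lam=None):
    """Mixup: convex-combine two examples and their one-hot labels.

    lam is drawn from Beta(alpha, alpha) as in the paper; a fixed lam
    can be passed explicitly for deterministic tests.
    """
    if lam is None:
        lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Two toy "images" (flattened) with one-hot labels for a 2-class problem.
xa, ya = [1.0, 0.0, 0.5], [1.0, 0.0]
xb, yb = [0.0, 1.0, 0.5], [0.0, 1.0]
x, y, lam = mixup(xa, ya, xb, yb, lam=0.7)
# x is approximately [0.7, 0.3, 0.5]; the soft label y approximately [0.7, 0.3].
```

Because the mixed label is soft, Mixup is typically paired with a cross-entropy loss that accepts probability targets rather than class indices.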