[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-wandb--examples":3,"tool-wandb--examples":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":72,"owner_website":77,"owner_url":78,"languages":79,"stars":92,"forks":93,"last_commit_at":94,"license":76,"difficulty_score":95,"env_os":96,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":105,"github_topics":76,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":106,"updated_at":107,"faqs":108,"releases":138},6512,"wandb\u002Fexamples","examples","Example deep learning projects that use wandb's features.","examples 是 Weights & Biases（W&B）官方提供的一系列深度学习示例项目集合，旨在帮助开发者快速上手并充分利用 W&B 平台的强大功能。在机器学习开发过程中，研究人员往往面临实验记录混乱、超参数调整难以追踪、模型版本管理复杂等痛点。examples 通过提供涵盖图像分类、自然语言处理等多个领域的实战代码，演示了如何将 W&B 的实验追踪、可视化报表、数据制品管理、超参数搜索（Sweeps）以及模型注册等核心功能无缝集成到工作流中。\n\n这套资源特别适合人工智能领域的开发者、算法工程师及科研人员使用。无论你是刚接触 W&B 
的新手，还是希望优化现有训练流程的资深从业者，都能从中找到有价值的参考。其独特亮点在于“所见即所得”的学习体验：用户无需从零搭建监控体系，只需运行示例代码，即可直观看到训练指标如何被自动记录、实验结果如何生成交互式图表，以及如何高效对比不同模型表现。通过借鉴这些最佳实践，团队能够更专注于模型创新，显著提升从数据集准备到模型部署的整体研发效率，让构建更好的模型变得更快、更透明。","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fassets\u002Flogo-light.svg#gh-light-mode-only\" width=\"600\" alt=\"Weights & Biases\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fassets\u002Flogo-dark.svg#gh-dark-mode-only\" width=\"600\" alt=\"Weights & Biases\"\u002F>\n\u003C\u002Fp>\n\nUse W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production machine learning models. Get started with W&B today, [sign up for a free account!](https:\u002F\u002Fwandb.com?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme)\n\n\u003C!-- \n&nbsp;\n\n\u003Cp align='center'>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Ftrack?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fexperiments-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fexperiments-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Experiments\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Freports?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" 
srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Freport-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Freport-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Reports\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fartifacts?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fartifacts-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fartifacts-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Artifacts\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fdata-vis?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Ftables-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Ftables-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Tables\" 
src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fsweeps?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fsweeps-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fsweeps-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Sweeps\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fmodels?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fmodels-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fmodels-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Model Management\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Flaunch?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Flaunch-dark.svg\" width=\"13.5%\">\n  
\u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Flaunch-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Launch\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003C\u002Fp>\n\n&nbsp; -->\n\n# 🚀 Getting Started\n\n### Never lose your progress again. \nSave everything you need to compare and reproduce models — architecture, hyperparameters, weights, model predictions, GPU usage, git commits, and even datasets — in 5 minutes. W&B is free for personal use and academic projects, and it's easy to get started.\n\n**Check out our libraries of [example scripts](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fexamples) and [example colabs](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fcolabs)**\nor read on for code snippets and more!\n\nIf you have any questions, please don't hesitate to ask in our [Discourse forum](http:\u002F\u002Fwandb.me\u002Fand-you).\n\n# 🤝 Simple integration with any framework\nInstall `wandb` library and login:\n```\npip install wandb\nwandb login\n```\nFlexible integration for any Python script:\n```python\nimport wandb\n\n# 1. Start a W&B run\nwandb.init(project='gpt3')\n\n# 2. Save model inputs and hyperparameters\nconfig = wandb.config\nconfig.learning_rate = 0.01\n\n# Model training code here ...\n\n# 3. 
Log metrics over time to visualize performance\nfor i in range (10):\n    wandb.log({\"loss\": loss})\n```\n\n### [Try in a colab →](http:\u002F\u002Fwandb.me\u002Fintro-colab)\n\nIf you have any questions, please don't hesitate to ask in our [Discourse forum](http:\u002F\u002Fwandb.me\u002Fand-you).\n\n![](https:\u002F\u002Fi.imgur.com\u002FTU34QFZ.png)\n\n**[Explore a W&B dashboard](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=gnD8BFuyVUA)**\n\n# 📈 Track model and data pipeline hyperparameters\nSet `wandb.config` once at the beginning of your script to save your hyperparameters, input settings (like dataset name or model type), and any other independent variables for your experiments. This is useful for analyzing your experiments and reproducing your work in the future. Setting configs also allows you to [visualize](https:\u002F\u002Fdocs.wandb.com\u002Fsweeps\u002Fvisualize-sweep-results) the relationships between features of your model architecture or data pipeline and the model performance (as seen in the screenshot above).\n\n```python\nwandb.init()\nwandb.config.epochs = 4\nwandb.config.batch_size = 32\nwandb.config.learning_rate = 0.001\nwandb.config.architecture = \"resnet\"\n```\n\n- **[See how to set configs in a colab →](http:\u002F\u002Fwandb.me\u002Fconfig-colab)**\n- [Docs](https:\u002F\u002Fdocs.wandb.com\u002Flibrary\u002Fconfig)\n\n# 🏗 Use your favorite framework\n\nUse your favorite framework with W&B. W&B integrations make it fast and easy to set up experiment tracking and data versioning inside existing projects. 
For more information on how to integrate W&B with the framework of your choice, see the [Integrations chapter](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations) in the W&B Developer Guide.\n\n\u003C!-- \u003Cp align='center'>\n\u003Cimg src=\".\u002Fdocs\u002FREADME_images\u002Fintegrations.png\" width=\"100%\" \u002F>\n\u003C\u002Fp> -->\n\n\u003Cdetails>\n\u003Csummary>🔥 PyTorch\u003C\u002Fsummary>\n\nCall `.watch` and pass in your PyTorch model to automatically log gradients and store the network topology. Next, use `.log` to track other metrics. The following example demonstrates how to do this:\n\n```python\nimport wandb\n\n# 1. Start a new run\nrun = wandb.init(project=\"gpt4\")\n\n# 2. Save model inputs and hyperparameters\nconfig = run.config\nconfig.dropout = 0.01\n\n# 3. Log gradients and model parameters\nrun.watch(model)\nfor batch_idx, (data, target) in enumerate(train_loader):\n    ...\n    if batch_idx % args.log_interval == 0:\n        # 4. Log metrics to visualize performance\n        run.log({\"loss\": loss})\n```\n\n- Run an example [Google Colab Notebook](http:\u002F\u002Fwandb.me\u002Fpytorch-colab).\n- Read the [Developer Guide](https:\u002F\u002Fdocs.wandb.com\u002Fguides\u002Fintegrations\u002Fpytorch?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate PyTorch with W&B.\n- Explore [W&B Reports](https:\u002F\u002Fapp.wandb.ai\u002Fwandb\u002Fgetting-started\u002Freports\u002FPytorch--VmlldzoyMTEwNzM?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>🌊 TensorFlow\u002FKeras\u003C\u002Fsummary>\nUse W&B Callbacks to automatically save metrics to W&B when you call `model.fit` during training.\n\nThe following code example demonstrates what your script might look like when you integrate W&B with Keras:\n\n```python\n# This script needs these libraries to be installed:\n# 
  tensorflow, numpy\n\nimport wandb\nfrom wandb.keras import WandbMetricsLogger, WandbModelCheckpoint\n\nimport random\nimport numpy as np\nimport tensorflow as tf\n\n\n# Start a run, tracking hyperparameters\nrun = wandb.init(\n    # set the wandb project where this run will be logged\n    project=\"my-awesome-project\",\n    # track hyperparameters and run metadata with wandb.config\n    config={\n        \"layer_1\": 512,\n        \"activation_1\": \"relu\",\n        \"dropout\": random.uniform(0.01, 0.80),\n        \"layer_2\": 10,\n        \"activation_2\": \"softmax\",\n        \"optimizer\": \"sgd\",\n        \"loss\": \"sparse_categorical_crossentropy\",\n        \"metric\": \"accuracy\",\n        \"epoch\": 8,\n        \"batch_size\": 256,\n    },\n)\n\n# [optional] use wandb.config as your config\nconfig = run.config\n\n# get the data\nmnist = tf.keras.datasets.mnist\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train \u002F 255.0, x_test \u002F 255.0\nx_train, y_train = x_train[::5], y_train[::5]\nx_test, y_test = x_test[::20], y_test[::20]\nlabels = [str(digit) for digit in range(np.max(y_train) + 1)]\n\n# build a model\nmodel = tf.keras.models.Sequential(\n    [\n        tf.keras.layers.Flatten(input_shape=(28, 28)),\n        tf.keras.layers.Dense(config.layer_1, activation=config.activation_1),\n        tf.keras.layers.Dropout(config.dropout),\n        tf.keras.layers.Dense(config.layer_2, activation=config.activation_2),\n    ]\n)\n\n# compile the model\nmodel.compile(optimizer=config.optimizer, loss=config.loss, metrics=[config.metric])\n\n# WandbMetricsLogger will log train and validation metrics to wandb\n# WandbModelCheckpoint will upload model checkpoints to wandb\nhistory = model.fit(\n    x=x_train,\n    y=y_train,\n    epochs=config.epoch,\n    batch_size=config.batch_size,\n    validation_data=(x_test, y_test),\n    callbacks=[\n        WandbMetricsLogger(log_freq=5),\n        
WandbModelCheckpoint(\"models\"),\n    ],\n)\n\n# [optional] finish the wandb run, necessary in notebooks\nrun.finish()\n```\n\nGet started integrating your Keras model with W&B today:\n\n- Run an example [Google Colab Notebook](https:\u002F\u002Fwandb.me\u002Fintro-keras?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)\n- Read the [Developer Guide](https:\u002F\u002Fdocs.wandb.com\u002Fguides\u002Fintegrations\u002Fkeras?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate Keras with W&B.\n- Explore [W&B Reports](https:\u002F\u002Fapp.wandb.ai\u002Fwandb\u002Fgetting-started\u002Freports\u002FKeras--VmlldzoyMTEwNjQ?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>🤗 Huggingface Transformers\u003C\u002Fsummary>\n\nPass `wandb` to the `report_to` argument when you run a script using a HuggingFace Trainer. W&B will automatically log losses,\nevaluation metrics, model topology, and gradients.\n\n**Note**: The environment you run your script in must have `wandb` installed.\n\nThe following example demonstrates how to integrate W&B with Hugging Face:\n\n```python\n# This script needs these libraries to be installed:\n#   numpy, transformers, datasets\n\nimport wandb\n\nimport os\nimport numpy as np\nfrom datasets import load_dataset\nfrom transformers import TrainingArguments, Trainer\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n\ndef tokenize_function(examples):\n    return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\n\n\ndef compute_metrics(eval_pred):\n    logits, labels = eval_pred\n    predictions = np.argmax(logits, axis=-1)\n    return {\"accuracy\": np.mean(predictions == labels)}\n\n\n# download and prepare the data\ndataset = load_dataset(\"yelp_review_full\")\ntokenizer = 
AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n\nsmall_train_dataset = dataset[\"train\"].shuffle(seed=42).select(range(1000))\nsmall_eval_dataset = dataset[\"test\"].shuffle(seed=42).select(range(300))\n\nsmall_train_dataset = small_train_dataset.map(tokenize_function, batched=True)\nsmall_eval_dataset = small_eval_dataset.map(tokenize_function, batched=True)\n\n# download the model\nmodel = AutoModelForSequenceClassification.from_pretrained(\n    \"distilbert-base-uncased\", num_labels=5\n)\n\n# set the wandb project where this run will be logged\nos.environ[\"WANDB_PROJECT\"] = \"my-awesome-project\"\n\n# save your trained model checkpoint to wandb\nos.environ[\"WANDB_LOG_MODEL\"] = \"true\"\n\n# turn off watch to log faster\nos.environ[\"WANDB_WATCH\"] = \"false\"\n\n# pass \"wandb\" to the `report_to` parameter to turn on wandb logging\ntraining_args = TrainingArguments(\n    output_dir=\"models\",\n    report_to=\"wandb\",\n    logging_steps=5,\n    per_device_train_batch_size=32,\n    per_device_eval_batch_size=32,\n    evaluation_strategy=\"steps\",\n    eval_steps=20,\n    max_steps=100,\n    save_steps=100,\n)\n\n# define the trainer and start training\ntrainer = Trainer(\n    model=model,\n    args=training_args,\n    train_dataset=small_train_dataset,\n    eval_dataset=small_eval_dataset,\n    compute_metrics=compute_metrics,\n)\ntrainer.train()\n\n# [optional] finish the wandb run, necessary in notebooks\nwandb.finish()\n```\n\n- Run an example [Google Colab Notebook](http:\u002F\u002Fwandb.me\u002Fhf?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).\n- Read the [Developer Guide](https:\u002F\u002Fdocs.wandb.com\u002Fguides\u002Fintegrations\u002Fhuggingface?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate Hugging Face with W&B.\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>⚡️ PyTorch Lightning\u003C\u002Fsummary>\n\nBuild 
scalable, structured, high-performance PyTorch models with Lightning and log them with W&B.\n\n```python\n# This script needs these libraries to be installed:\n#   torch, torchvision, pytorch_lightning\n\nimport wandb\n\nimport os\nfrom torch import optim, nn, utils\nfrom torchvision.datasets import MNIST\nfrom torchvision.transforms import ToTensor\n\nimport pytorch_lightning as pl\nfrom pytorch_lightning.loggers import WandbLogger\n\n\nclass LitAutoEncoder(pl.LightningModule):\n    def __init__(self, lr=1e-3, inp_size=28, optimizer=\"Adam\"):\n        super().__init__()\n\n        self.encoder = nn.Sequential(\n            nn.Linear(inp_size * inp_size, 64), nn.ReLU(), nn.Linear(64, 3)\n        )\n        self.decoder = nn.Sequential(\n            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, inp_size * inp_size)\n        )\n        self.lr = lr\n\n        # save hyperparameters to self.hparams, auto-logged by wandb\n        self.save_hyperparameters()\n\n    def training_step(self, batch, batch_idx):\n        x, y = batch\n        x = x.view(x.size(0), -1)\n        z = self.encoder(x)\n        x_hat = self.decoder(z)\n        loss = nn.functional.mse_loss(x_hat, x)\n\n        # log metrics to wandb\n        self.log(\"train_loss\", loss)\n        return loss\n\n    def configure_optimizers(self):\n        optimizer = optim.Adam(self.parameters(), lr=self.lr)\n        return optimizer\n\n\n# init the autoencoder\nautoencoder = LitAutoEncoder(lr=1e-3, inp_size=28)\n\n# setup data\nbatch_size = 32\ndataset = MNIST(os.getcwd(), download=True, transform=ToTensor())\ntrain_loader = utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)\n\n# initialise the wandb logger and name your wandb project\nwandb_logger = WandbLogger(project=\"my-awesome-project\")\n\n# add your batch size to the wandb config\nwandb_logger.experiment.config[\"batch_size\"] = batch_size\n\n# pass wandb_logger to the Trainer\ntrainer = pl.Trainer(limit_train_batches=750, max_epochs=5, logger=wandb_logger)\n\n# 
train the model\ntrainer.fit(model=autoencoder, train_dataloaders=train_loader)\n\n# [optional] finish the wandb run, necessary in notebooks\nwandb.finish()\n```\n\n- Run an example [Google Colab Notebook](http:\u002F\u002Fwandb.me\u002Flightning?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).\n- Read the [Developer Guide](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations\u002Flightning?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate PyTorch Lightning with W&B.\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>💨 XGBoost\u003C\u002Fsummary>\nUse W&B Callbacks to automatically save metrics to W&B when you call `xgb.train` during training.\n\nThe following code example demonstrates what your script might look like when you integrate W&B with XGBoost:\n\n```python\n# This script needs these libraries to be installed:\n#   numpy, xgboost\n\nimport wandb\nfrom wandb.xgboost import WandbCallback\n\nimport numpy as np\nimport xgboost as xgb\n\n\n# setup parameters for xgboost\nparam = {\n    \"objective\": \"multi:softmax\",\n    \"eta\": 0.1,\n    \"max_depth\": 6,\n    \"nthread\": 4,\n    \"num_class\": 6,\n}\n\n# start a new wandb run to track this script\nrun = wandb.init(\n    # set the wandb project where this run will be logged\n    project=\"my-awesome-project\",\n    # track hyperparameters and run metadata\n    config=param,\n)\n\n# download data from wandb Artifacts and prep data\nrun.use_artifact(\"wandb\u002Fintro\u002Fdermatology_data:v0\", type=\"dataset\").download(\".\")\ndata = np.loadtxt(\n    \".\u002Fdermatology.data\",\n    delimiter=\",\",\n    converters={33: lambda x: int(x == \"?\"), 34: lambda x: int(x) - 1},\n)\nsz = data.shape\n\ntrain = data[: int(sz[0] * 0.7), :]\ntest = data[int(sz[0] * 0.7) :, :]\n\ntrain_X = train[:, :33]\ntrain_Y = train[:, 34]\n\ntest_X = test[:, :33]\ntest_Y = test[:, 34]\n\nxg_train = 
xgb.DMatrix(train_X, label=train_Y)\nxg_test = xgb.DMatrix(test_X, label=test_Y)\nwatchlist = [(xg_train, \"train\"), (xg_test, \"test\")]\n\n# add another config to the wandb run\nnum_round = 5\nrun.config[\"num_round\"] = 5\nrun.config[\"data_shape\"] = sz\n\n# pass WandbCallback to the booster to log its configs and metrics\nbst = xgb.train(\n    param, xg_train, num_round, evals=watchlist, callbacks=[WandbCallback()]\n)\n\n# get prediction\npred = bst.predict(xg_test)\nerror_rate = np.sum(pred != test_Y) \u002F test_Y.shape[0]\n\n# log your test metric to wandb\nrun.summary[\"Error Rate\"] = error_rate\n\n# [optional] finish the wandb run, necessary in notebooks\nrun.finish()\n```\n\n- Run an example [Google Colab Notebook](https:\u002F\u002Fwandb.me\u002Fxgboost?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).\n- Read the [Developer Guide](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations\u002Fxgboost?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate XGBoost with W&B.\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>🧮 Sci-Kit Learn\u003C\u002Fsummary>\nUse wandb to visualize and compare your scikit-learn models' performance:\n\n```python\n# This script needs these libraries to be installed:\n#   numpy, sklearn\n\nimport wandb\nfrom wandb.sklearn import plot_precision_recall, plot_feature_importances\nfrom wandb.sklearn import plot_class_proportions, plot_learning_curve, plot_roc\n\nimport numpy as np\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n\n# load and process data\nwbcd = datasets.load_breast_cancer()\nfeature_names = wbcd.feature_names\nlabels = wbcd.target_names\n\ntest_size = 0.2\nX_train, X_test, y_train, y_test = train_test_split(\n    wbcd.data, wbcd.target, test_size=test_size\n)\n\n# train model\nmodel = 
RandomForestClassifier()\nmodel.fit(X_train, y_train)\nmodel_params = model.get_params()\n\n# get predictions\ny_pred = model.predict(X_test)\ny_probas = model.predict_proba(X_test)\nimportances = model.feature_importances_\nindices = np.argsort(importances)[::-1]\n\n# start a new wandb run and add your model hyperparameters\nrun = wandb.init(project=\"my-awesome-project\", config=model_params)\n\n# Add additional configs to wandb\nrun.config.update(\n    {\n        \"test_size\": test_size,\n        \"train_len\": len(X_train),\n        \"test_len\": len(X_test),\n    }\n)\n\n# log additional visualisations to wandb\nplot_class_proportions(y_train, y_test, labels)\nplot_learning_curve(model, X_train, y_train)\nplot_roc(y_test, y_probas, labels)\nplot_precision_recall(y_test, y_probas, labels)\nplot_feature_importances(model)\n\n# [optional] finish the wandb run, necessary in notebooks\nrun.finish()\n```\n\n- Run an example [Google Colab Notebook](https:\u002F\u002Fwandb.me\u002Fscikit-colab?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).\n- Read the [Developer Guide](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations\u002Fscikit?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate Scikit-Learn with W&B.\n\u003C\u002Fdetails>\n\n&nbsp;\n# 🧹 Optimize hyperparameters with Sweeps\nUse Weights & Biases Sweeps to automate hyperparameter optimization and explore the space of possible models.\n\n### [Try Sweeps in PyTorch in a Colab →](http:\u002F\u002Fwandb.me\u002Fsweeps-colab)\n### [Try Sweeps in TensorFlow in a Colab →](http:\u002F\u002Fwandb.me\u002Ftf-sweeps-colab)\n\n### Benefits of using W&B Sweeps \n- **Quick to setup:** With just a few lines of code you can run W&B sweeps.\n- **Transparent:** We cite all the algorithms we're using, and our code is [open 
source](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fclient\u002Fblob\u002Fmaster\u002Fwandb\u002Fsdk\u002Fwandb_sweep.py).\n- **Powerful:** Our sweeps are completely customizable and configurable. You can launch a sweep across dozens of machines, and it's just as easy as starting a sweep on your laptop.\n\n### [Get started in 5 mins →](https:\u002F\u002Fdocs.wandb.com\u002Fsweeps\u002Fquickstart)\n\n\u003Cimg src=\"https:\u002F\u002Fgblobscdn.gitbook.com\u002Fassets%2F-Lqya5RvLedGEWPhtkjU%2F-LyfPCyvV8By5YBltxfh%2F-LyfQsxswLC-6WKGgfGj%2Fcentral%20sweep%20server%203.png?alt=media&token=c81e4fe7-7ee4-48ea-a4cd-7b28113c6088\" width=\"400\" alt=\"Weights & Biases\" \u002F>\n\n### Common use cases\n- **Explore:** Efficiently sample the space of hyperparameter combinations to discover promising regions and build an intuition about your model.\n- **Optimize:** Use sweeps to find a set of hyperparameters with optimal performance.\n- **K-fold cross validation:** [Here's a brief code example](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fexamples\u002Fwandb-sweeps\u002Fsweeps-cross-validation) of _k_-fold cross validation with W&B Sweeps.\n\n### Visualize Sweeps results\nThe hyperparameter importance plot surfaces which hyperparameters were the best predictors of, and most highly correlated with, desirable values for your metrics.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwandb_examples_readme_9d5896271b25.png\" width=\"720\" alt=\"Weights & Biases\" \u002F>\n\nParallel coordinates plots map hyperparameter values to model metrics. 
They're useful for homing in on combinations of hyperparameters that led to the best model performance.\n\n\u003Cimg src=\"https:\u002F\u002Fi.imgur.com\u002FTHYXBN0.png\" width=\"720\" alt=\"Weights & Biases\" \u002F>\n\n# 📜 Share insights with Reports\nReports let you [organize visualizations, describe your findings, and share updates with collaborators](http:\u002F\u002Fwandb.me\u002Freports-guide).\n\n### Common use cases\n- **Notes:** Add a graph with a quick note to yourself.\n- **Collaboration:** Share findings with your colleagues.\n- **Work log:** Track what you've tried and plan next steps.\n\n**Explore reports in [The Gallery →](https:\u002F\u002Fwandb.ai\u002Fgallery) | Read the [Docs](https:\u002F\u002Fdocs.wandb.com\u002Freports)**\n\nOnce you have experiments in W&B, you can visualize and document results in Reports with just a few clicks. Here's a quick [demo video](http:\u002F\u002Fwandb.me\u002Fshort-reports).\n\n![](https:\u002F\u002Fi.imgur.com\u002Fdn0Dyd8.png)\n\n# 🏺 Version control datasets and models with Artifacts\nGit and GitHub make code version control easy,\nbut they're not optimized for tracking the other parts of the ML pipeline:\ndatasets, models, and other large binary files.\n\nW&B's Artifacts are.\nWith just a few extra lines of code,\nyou can start tracking your and your team's outputs,\nall directly linked to your runs.\n\n### Try Artifacts in a [Colab](http:\u002F\u002Fwandb.me\u002Fartifacts-colab) with a [video tutorial](http:\u002F\u002Fwandb.me\u002Fartifacts-video)\n\n![](https:\u002F\u002Fi.imgur.com\u002FzvBWhGx.png)\n\n### Common use cases\n- **Pipeline Management:** Track and visualize the inputs and outputs of your runs as a graph\n- **Don't Repeat Yourself™:** Prevent the duplication of compute effort\n- **Sharing Data in Teams:** Collaborate on models and datasets without all the headaches\n\n![](https:\u002F\u002Fi.imgur.com\u002Fw92cYQm.png)\n\n**Learn about Artifacts [here 
→](https:\u002F\u002Fwww.wandb.com\u002Farticles\u002Fannouncing-artifacts) | Read the [Docs](https:\u002F\u002Fdocs.wandb.com\u002Fartifacts)**\n\n\n\n# Visualize and Query data with Tables\n\nGroup, sort, filter, generate calculated columns, and create charts from tabular data.\n\nSpend more time deriving insights, and less time building charts manually.\n\n```python\n# Log a pandas DataFrame as a W&B Table\nwandb.log({\"table\": wandb.Table(dataframe=my_dataframe)})\n```\n\n![](https:\u002F\u002Fi.imgur.com\u002FFg9xR6M.gif)\n\n### Try Tables in a [Colab](http:\u002F\u002Fwandb.me\u002Ftables-quickstart) or these [examples](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fcolabs\u002Ftables)\n\n**Explore Tables [here →](https:\u002F\u002Fwandb.ai\u002Fsite\u002Ftables) | Read the [Docs](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fdata-vis)**\n","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fassets\u002Flogo-light.svg#gh-light-mode-only\" width=\"600\" alt=\"Weights & Biases\"\u002F>\n  \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fassets\u002Flogo-dark.svg#gh-dark-mode-only\" width=\"600\" alt=\"Weights & Biases\"\u002F>\n\u003C\u002Fp>\n\nBuild better models faster with W&B. Track and visualize every step of your machine learning pipeline, from datasets to machine learning models in production. Get started with W&B today, [sign up for a free account!](https:\u002F\u002Fwandb.com?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme)\n\n\u003C!-- \n&nbsp;\n\n\u003Cp align='center'>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Ftrack?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fexperiments-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" 
srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fexperiments-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Experiments\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Freports?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Freport-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Freport-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Reports\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fartifacts?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fartifacts-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fartifacts-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Artifacts\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fdata-vis?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  
\u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Ftables-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Ftables-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Tables\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fsweeps?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fsweeps-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fsweeps-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Sweeps\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fmodels?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Fmodels-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Fmodels-light.svg\" width=\"13.5%\">\n  
\u003Cimg alt=\"Weights and Biases Model Management\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Flaunch?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme\">\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_dark_background\u002Flaunch-dark.svg\" width=\"13.5%\">\n  \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fwandb\u002Fwandb\u002Fmain\u002Fdocs\u002FREADME_images\u002FProduct_Icons_light\u002Flaunch-light.svg\" width=\"13.5%\">\n  \u003Cimg alt=\"Weights and Biases Launch\" src=\"\">\n\u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003C\u002Fp>\n\n&nbsp; -->\n\n# 🚀 入门指南\n\n### 再也不用担心进度丢失。\n在短短5分钟内，保存所有用于比较和复现模型所需的内容——架构、超参数、权重、模型预测、GPU使用情况、Git提交记录，甚至数据集。W&B 对个人用户和学术项目免费，并且上手非常简单。\n\n**查看我们的 [示例脚本库](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fexamples) 和 [示例 Colab](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fcolabs)**，\n或者继续阅读以获取代码片段及更多信息！\n\n如有任何问题，请随时在我们的 [Discourse 论坛](http:\u002F\u002Fwandb.me\u002Fand-you) 提问。\n\n# 🤝 轻松集成任意框架\n安装 `wandb` 库并登录：\n```\npip install wandb\nwandb login\n```\n灵活的集成方式适用于任何 Python 脚本：\n```python\nimport wandb\n\n# 1. 开始一个 W&B 实验\nwandb.init(project='gpt3')\n\n# 2. 保存模型输入和超参数\nconfig = wandb.config\nconfig.learning_rate = 0.01\n\n# 模型训练代码在此...\n\n# 3. 
随时间记录指标，以便可视化性能\nfor i in range(10):\n    wandb.log({\"loss\": loss})\n```\n\n### [在 Colab 中尝试 →](http:\u002F\u002Fwandb.me\u002Fintro-colab)\n\n如有任何问题，请随时在我们的 [Discourse 论坛](http:\u002F\u002Fwandb.me\u002Fand-you) 提问。\n\n![](https:\u002F\u002Fi.imgur.com\u002FTU34QFZ.png)\n\n**[探索 W&B 仪表板](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=gnD8BFuyVUA)**\n\n# 📈 跟踪模型与数据流水线的超参数\n在脚本开头只需设置一次 `wandb.config`，即可保存您的超参数、输入设置（如数据集名称或模型类型）以及其他实验中的自变量。这对于分析实验结果以及未来复现实验非常有帮助。设置配置还能让您 [可视化](https:\u002F\u002Fdocs.wandb.com\u002Fsweeps\u002Fvisualize-sweep-results) 模型架构或数据流水线的特征与模型性能之间的关系（如上图所示）。\n\n```python\nwandb.init()\nwandb.config.epochs = 4\nwandb.config.batch_size = 32\nwandb.config.learning_rate = 0.001\nwandb.config.architecture = \"resnet\"\n```\n\n- **[查看如何在 Colab 中设置配置 →](http:\u002F\u002Fwandb.me\u002Fconfig-colab)**\n- [文档](https:\u002F\u002Fdocs.wandb.com\u002Flibrary\u002Fconfig)\n\n# 🏗 使用你喜欢的框架\n\n在 W&B 中使用你喜欢的框架。W&B 的集成使你能够快速、轻松地在现有项目中设置实验跟踪和数据版本控制。有关如何将 W&B 与你选择的框架集成的更多信息，请参阅 W&B 开发者指南中的[集成章节](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations)。\n\n\u003C!-- \u003Cp align='center'>\n\u003Cimg src=\".\u002Fdocs\u002FREADME_images\u002Fintegrations.png\" width=\"100%\" \u002F>\n\u003C\u002Fp> -->\n\n\u003Cdetails>\n\u003Csummary>🔥 PyTorch\u003C\u002Fsummary>\n\n调用 `.watch` 并传入你的 PyTorch 模型，即可自动记录梯度并存储网络拓扑结构。接下来，使用 `.log` 来跟踪其他指标。以下示例展示了如何操作：\n\n```python\nimport wandb\n\n# 1. 开始一个新的运行\nrun = wandb.init(project=\"gpt4\")\n\n# 2. 保存模型输入和超参数\nconfig = run.config\nconfig.dropout = 0.01\n\n# 3. 记录梯度和模型参数\nrun.watch(model)\nfor batch_idx, (data, target) in enumerate(train_loader):\n    ...\n    if batch_idx % args.log_interval == 0:\n        # 4. 
记录指标以可视化性能\n        run.log({\"loss\": loss})\n```\n\n- 运行一个示例 [Google Colab 笔记本](http:\u002F\u002Fwandb.me\u002Fpytorch-colab)。\n- 阅读[开发者指南](https:\u002F\u002Fdocs.wandb.com\u002Fguides\u002Fintegrations\u002Fpytorch?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)，了解如何将 PyTorch 与 W&B 集成的技术细节。\n- 探索[W&B 报告](https:\u002F\u002Fapp.wandb.ai\u002Fwandb\u002Fgetting-started\u002Freports\u002FPytorch--VmlldzoyMTEwNzM?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)。\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>🌊 TensorFlow\u002FKeras\u003C\u002Fsummary>\n在训练过程中调用 `model.fit` 时，使用 W&B 回调函数可自动将指标保存到 W&B。\n\n以下代码示例展示了将 W&B 与 Keras 集成时脚本可能的样子：\n\n```python\n# 此脚本需要安装以下库：\n#   tensorflow, numpy\n\nimport wandb\nfrom wandb.keras import WandbMetricsLogger, WandbModelCheckpoint\n\nimport random\nimport numpy as np\nimport tensorflow as tf\n\n\n# 开始一次运行，跟踪超参数\nrun = wandb.init(\n    # 设置本次运行将被记录的 W&B 项目\n    project=\"my-awesome-project\",\n    # 使用 wandb.config 跟踪超参数和运行元数据\n    config={\n        \"layer_1\": 512,\n        \"activation_1\": \"relu\",\n        \"dropout\": random.uniform(0.01, 0.80),\n        \"layer_2\": 10,\n        \"activation_2\": \"softmax\",\n        \"optimizer\": \"sgd\",\n        \"loss\": \"sparse_categorical_crossentropy\",\n        \"metric\": \"accuracy\",\n        \"epoch\": 8,\n        \"batch_size\": 256,\n    },\n)\n\n# [可选] 使用 wandb.config 作为你的配置\nconfig = run.config\n\n# 获取数据\nmnist = tf.keras.datasets.mnist\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train \u002F 255.0, x_test \u002F 255.0\nx_train, y_train = x_train[::5], y_train[::5]\nx_test, y_test = x_test[::20], y_test[::20]\nlabels = [str(digit) for digit in range(np.max(y_train) + 1)]\n\n# 构建模型\nmodel = tf.keras.models.Sequential(\n    [\n        tf.keras.layers.Flatten(input_shape=(28, 28)),\n        tf.keras.layers.Dense(config.layer_1, activation=config.activation_1),\n        
tf.keras.layers.Dropout(config.dropout),\n        tf.keras.layers.Dense(config.layer_2, activation=config.activation_2),\n    ]\n)\n\n# 编译模型\nmodel.compile(optimizer=config.optimizer, loss=config.loss, metrics=[config.metric])\n\n# WandbMetricsLogger 将训练和验证指标记录到 wandb\n# WandbModelCheckpoint 将模型检查点上传到 wandb\nhistory = model.fit(\n    x=x_train,\n    y=y_train,\n    epochs=config.epoch,\n    batch_size=config.batch_size,\n    validation_data=(x_test, y_test),\n    callbacks=[\n        WandbMetricsLogger(log_freq=5),\n        WandbModelCheckpoint(\"models\"),\n    ],\n)\n\n# [可选] 结束 W&B 运行，在笔记本中是必需的\nrun.finish()\n```\n\n立即开始将你的 Keras 模型与 W&B 集成吧：\n\n- 运行一个示例 [Google Colab 笔记本](https:\u002F\u002Fwandb.me\u002Fintro-keras?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)\n- 阅读[开发者指南](https:\u002F\u002Fdocs.wandb.com\u002Fguides\u002Fintegrations\u002Fkeras?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)，了解如何将 Keras 与 W&B 集成的技术细节。\n- 探索[W&B 报告](https:\u002F\u002Fapp.wandb.ai\u002Fwandb\u002Fgetting-started\u002Freports\u002FKeras--VmlldzoyMTEwNjQ?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)。\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>🤗 Huggingface Transformers\u003C\u002Fsummary>\n\n在使用 HuggingFace Trainer 运行脚本时，将 `wandb` 传递给 `report_to` 参数。W&B 将自动记录损失、评估指标、模型拓扑和梯度。\n\n**注意**：运行脚本的环境必须已安装 `wandb`。\n\n以下示例展示了如何将 W&B 与 Hugging Face 集成：\n\n```python\n# 此脚本需要安装以下库：\n#   numpy, transformers, datasets\n\nimport wandb\n\nimport os\nimport numpy as np\nfrom datasets import load_dataset\nfrom transformers import TrainingArguments, Trainer\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n\ndef tokenize_function(examples):\n    return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\n\n\ndef compute_metrics(eval_pred):\n    logits, labels = eval_pred\n    predictions = np.argmax(logits, axis=-1)\n    return {\"accuracy\": 
np.mean(predictions == labels)}\n\n\n# 下载并准备数据\ndataset = load_dataset(\"yelp_review_full\")\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n\nsmall_train_dataset = dataset[\"train\"].shuffle(seed=42).select(range(1000))\nsmall_eval_dataset = dataset[\"test\"].shuffle(seed=42).select(range(300))\n\nsmall_train_dataset = small_train_dataset.map(tokenize_function, batched=True)\nsmall_eval_dataset = small_eval_dataset.map(tokenize_function, batched=True)\n\n# 下载模型\nmodel = AutoModelForSequenceClassification.from_pretrained(\n    \"distilbert-base-uncased\", num_labels=5\n)\n\n# 设置本次运行将被记录的 W&B 项目\nos.environ[\"WANDB_PROJECT\"] = \"my-awesome-project\"\n\n# 将你训练好的模型检查点保存到 W&B\nos.environ[\"WANDB_LOG_MODEL\"] = \"true\"\n\n# 关闭 watch 以加快日志记录速度\nos.environ[\"WANDB_WATCH\"] = \"false\"\n\n# 将“wandb”传递给 `report_to` 参数以启用 W&B 日志记录\ntraining_args = TrainingArguments(\n    output_dir=\"models\",\n    report_to=\"wandb\",\n    logging_steps=5,\n    per_device_train_batch_size=32,\n    per_device_eval_batch_size=32,\n    evaluation_strategy=\"steps\",\n    eval_steps=20,\n    max_steps=100,\n    save_steps=100,\n)\n\n# 定义训练器并开始训练\ntrainer = Trainer(\n    model=model,\n    args=training_args,\n    train_dataset=small_train_dataset,\n    eval_dataset=small_eval_dataset,\n    compute_metrics=compute_metrics,\n)\ntrainer.train()\n\n# [可选] 完成 wandb 运行，在笔记本中是必需的\nwandb.finish()\n```\n\n- 运行一个示例 [Google Colab Notebook](http:\u002F\u002Fwandb.me\u002Fhf?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)。\n- 阅读[开发者指南](https:\u002F\u002Fdocs.wandb.com\u002Fguides\u002Fintegrations\u002Fhuggingface?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)，了解如何将 Hugging Face 与 W&B 集成的技术细节。\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>⚡️ PyTorch Lightning\u003C\u002Fsummary>\n\n使用 Lightning 构建可扩展、结构化、高性能的 PyTorch 模型，并通过 W&B 进行日志记录。\n\n```python\n# 此脚本需要安装以下库：\n#   torch, torchvision, 
pytorch_lightning\n\nimport wandb\n\nimport os\nfrom torch import optim, nn, utils\nfrom torchvision.datasets import MNIST\nfrom torchvision.transforms import ToTensor\n\nimport pytorch_lightning as pl\nfrom pytorch_lightning.loggers import WandbLogger\n\n\nclass LitAutoEncoder(pl.LightningModule):\n    def __init__(self, lr=1e-3, inp_size=28, optimizer=\"Adam\"):\n        super().__init__()\n\n        self.encoder = nn.Sequential(\n            nn.Linear(inp_size * inp_size, 64), nn.ReLU(), nn.Linear(64, 3)\n        )\n        self.decoder = nn.Sequential(\n            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, inp_size * inp_size)\n        )\n        self.lr = lr\n\n        # 将超参数保存到 self.hparams 中，由 wandb 自动记录\n        self.save_hyperparameters()\n\n    def training_step(self, batch, batch_idx):\n        x, y = batch\n        x = x.view(x.size(0), -1)\n        z = self.encoder(x)\n        x_hat = self.decoder(z)\n        loss = nn.functional.mse_loss(x_hat, x)\n\n        # 将指标记录到 wandb\n        self.log(\"train_loss\", loss)\n        return loss\n\n    def configure_optimizers(self):\n        optimizer = optim.Adam(self.parameters(), lr=self.lr)\n        return optimizer\n\n\n# 初始化自编码器\nautoencoder = LitAutoEncoder(lr=1e-3, inp_size=28)\n\n# 准备数据\nbatch_size = 32\ndataset = MNIST(os.getcwd(), download=True, transform=ToTensor())\ntrain_loader = utils.data.DataLoader(dataset, shuffle=True)\n\n# 初始化 wandb 日志记录器并命名你的 wandb 项目\nwandb_logger = WandbLogger(project=\"my-awesome-project\")\n\n# 将你的批量大小添加到 wandb 配置中\nwandb_logger.experiment.config[\"batch_size\"] = batch_size\n\n# 将 wandb_logger 传递给 Trainer\ntrainer = pl.Trainer(limit_train_batches=750, max_epochs=5, logger=wandb_logger)\n\n# 训练模型\ntrainer.fit(model=autoencoder, train_dataloaders=train_loader)\n\n# [可选] 完成 wandb 运行，在笔记本中是必需的\nwandb.finish()\n```\n\n- 运行一个示例 [Google Colab Notebook](http:\u002F\u002Fwandb.me\u002Flightning?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)。\n- 
阅读[开发者指南](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations\u002Flightning?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)，了解如何将 PyTorch Lightning 与 W&B 集成的技术细节。\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>💨 XGBoost\u003C\u002Fsummary>\n使用 W&B 回调函数，当您在训练过程中调用 `model.fit` 时，自动将指标保存到 W&B。\n\n以下代码示例展示了您的脚本在与 XGBoost 集成 W&B 时可能的样子：\n\n```python\n# 此脚本需要安装以下库：\n#   numpy, xgboost\n\nimport wandb\nfrom wandb.xgboost import WandbCallback\n\nimport numpy as np\nimport xgboost as xgb\n\n\n# 设置 XGBoost 参数\nparam = {\n    \"objective\": \"multi:softmax\",\n    \"eta\": 0.1,\n    \"max_depth\": 6,\n    \"nthread\": 4,\n    \"num_class\": 6,\n}\n\n# 开始一个新的 wandb 运行以跟踪此脚本\nrun = wandb.init(\n    # 设置此运行将被记录的 wandb 项目\n    project=\"my-awesome-project\",\n    # 跟踪超参数和运行元数据\n    config=param,\n)\n\n# 从 wandb Artifacts 下载数据并准备数据\nrun.use_artifact(\"wandb\u002Fintro\u002Fdermatology_data:v0\", type=\"dataset\").download(\".\")\ndata = np.loadtxt(\n    \".\u002Fdermatology.data\",\n    delimiter=\",\",\n    converters={33: lambda x: int(x == \"?\"), 34: lambda x: int(x) - 1},\n)\nsz = data.shape\n\ntrain = data[: int(sz[0] * 0.7), :]\ntest = data[int(sz[0] * 0.7) :, :]\n\ntrain_X = train[:, :33]\ntrain_Y = train[:, 34]\n\ntest_X = test[:, :33]\ntest_Y = test[:, 34]\n\nxg_train = xgb.DMatrix(train_X, label=train_Y)\nxg_test = xgb.DMatrix(test_X, label=test_Y)\nwatchlist = [(xg_train, \"train\"), (xg_test, \"test\")]\n\n# 向 wandb 运行添加另一个配置\nnum_round = 5\nrun.config[\"num_round\"] = 5\nrun.config[\"data_shape\"] = sz\n\n# 将 WandbCallback 传递给 booster，以记录其配置和指标\nbst = xgb.train(\n    param, xg_train, num_round, evals=watchlist, callbacks=[WandbCallback()]\n)\n\n# 获取预测\npred = bst.predict(xg_test)\nerror_rate = np.sum(pred != test_Y) \u002F test_Y.shape[0]\n\n# 将你的测试指标记录到 wandb\nrun.summary[\"Error Rate\"] = error_rate\n\n# [可选] 完成 wandb 运行，在笔记本中是必需的\nrun.finish()\n```\n\n- 运行一个示例 [Google Colab 
Notebook](https:\u002F\u002Fwandb.me\u002Fxgboost?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)。\n- 阅读[开发者指南](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations\u002Fxgboost?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)，了解如何将 XGBoost 与 W&B 集成的技术细节。\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>🧮 Sci-Kit Learn\u003C\u002Fsummary>\n使用 wandb 可视化并比较你的 scikit-learn 模型的性能：\n\n```python\n# 此脚本需要安装以下库：\n#   numpy, sklearn\n\nimport wandb\nfrom wandb.sklearn import plot_precision_recall, plot_feature_importances\nfrom wandb.sklearn import plot_class_proportions, plot_learning_curve, plot_roc\n\nimport numpy as np\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n\n# 加载并处理数据\nwbcd = datasets.load_breast_cancer()\nfeature_names = wbcd.feature_names\nlabels = wbcd.target_names\n\ntest_size = 0.2\nX_train, X_test, y_train, y_test = train_test_split(\n    wbcd.data, wbcd.target, test_size=test_size\n)\n\n# 训练模型\nmodel = RandomForestClassifier()\nmodel.fit(X_train, y_train)\nmodel_params = model.get_params()\n\n# 获取预测\ny_pred = model.predict(X_test)\ny_probas = model.predict_proba(X_test)\nimportances = model.feature_importances_\nindices = np.argsort(importances)[::-1]\n\n# 开始一个新的 wandb 运行并添加你的模型超参数\nrun = wandb.init(project=\"my-awesome-project\", config=model_params)\n\n# 向 wandb 添加额外的配置\nrun.config.update(\n    {\n        \"test_size\": test_size,\n        \"train_len\": len(X_train),\n        \"test_len\": len(X_test),\n    }\n)\n\n# 将额外的可视化记录到 wandb\nplot_class_proportions(y_train, y_test, labels)\nplot_learning_curve(model, X_train, y_train)\nplot_roc(y_test, y_probas, labels)\nplot_precision_recall(y_test, y_probas, labels)\nplot_feature_importances(model)\n\n# [可选] 完成 wandb 运行，在笔记本中是必需的\nrun.finish()\n```\n\n- 运行一个示例 [Google Colab 
笔记本](https:\u002F\u002Fwandb.me\u002Fscikit-colab?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)。\n- 阅读[开发者指南](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fintegrations\u002Fscikit?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)，了解如何将 Scikit-Learn 与 W&B 集成的技术细节。\n\u003C\u002Fdetails>\n\n&nbsp;\n# 🧹 使用 Sweeps 优化超参数\n使用 Weights & Biases 的 Sweeps 功能，自动化超参数优化并探索可能的模型空间。\n\n### [在 Colab 中尝试 PyTorch 的 Sweeps →](http:\u002F\u002Fwandb.me\u002Fsweeps-colab)\n### [在 Colab 中尝试 TensorFlow 的 Sweeps →](http:\u002F\u002Fwandb.me\u002Ftf-sweeps-colab)\n\n### 使用 W&B Sweeps 的优势\n- **设置快速：** 只需几行代码即可运行 W&B 的 Sweeps。\n- **透明：** 我们会引用所使用的所有算法，并且我们的代码是[开源的](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fclient\u002Fblob\u002Fmaster\u002Fwandb\u002Fsdk\u002Fwandb_sweep.py)。\n- **强大：** 我们的 Sweeps 完全可自定义和配置。您可以在数十台机器上启动 Sweeps，其操作方式与在笔记本电脑上启动 Sweeps 一样简单。\n\n### [5 分钟入门 →](https:\u002F\u002Fdocs.wandb.com\u002Fsweeps\u002Fquickstart)\n\n\u003Cimg src=\"https:\u002F\u002Fgblobscdn.gitbook.com\u002Fassets%2F-Lqya5RvLedGEWPhtkjU%2F-LyfPCyvV8By5YBltxfh%2F-LyfQsxswLC-6WKGgfGj%2Fcentral%20sweep%20server%203.png?alt=media&token=c81e4fe7-7ee4-48ea-a4cd-7b28113c6088\" width=\"400\" alt=\"Weights & Biases\" \u002F>\n\n### 常见用例\n- **探索：** 高效地采样超参数组合的空间，以发现有前景的区域，并对您的模型建立直观的理解。\n- **优化：** 使用 Sweeps 找到性能最优的超参数集合。\n- **K 折交叉验证：** [这里有一个简短的代码示例](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fexamples\u002Fwandb-sweeps\u002Fsweeps-cross-validation)展示了如何使用 W&B Sweeps 进行 K 折交叉验证。\n\n### 可视化 Sweeps 结果\n超参数重要性图显示了哪些超参数是对目标指标的最佳预测因子，并且与理想的指标值高度相关。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwandb_examples_readme_9d5896271b25.png\" width=\"720\" alt=\"Weights & Biases\" \u002F>\n\n平行坐标图将超参数值映射到模型指标上。它们有助于找到导致最佳模型性能的超参数组合。\n\n\u003Cimg src=\"https:\u002F\u002Fi.imgur.com\u002FTHYXBN0.png\" width=\"720\" alt=\"Weights & Biases\" \u002F>\n\n# 📜 使用 Reports 分享洞察\nReports 
让您能够[组织可视化内容、描述您的发现，并与合作者分享更新](http:\u002F\u002Fwandb.me\u002Freports-guide)。\n\n### 常见用例\n- **笔记：** 添加一张图表，并附上简短的个人备注。\n- **协作：** 与同事分享您的发现。\n- **工作日志：** 跟踪您已经尝试过的内容，并规划下一步行动。\n\n**在 [The Gallery →](https:\u002F\u002Fwandb.ai\u002Fgallery) 探索 Reports | 阅读[文档](https:\u002F\u002Fdocs.wandb.com\u002Freports)**\n\n一旦您在 W&B 中有了实验数据，只需点击几下，就可以在 Reports 中可视化和记录结果。这里有一个快速的[演示视频](http:\u002F\u002Fwandb.me\u002Fshort-reports)。\n\n![](https:\u002F\u002Fi.imgur.com\u002Fdn0Dyd8.png)\n\n# 🏺 使用 Artifacts 对数据集和模型进行版本控制\nGit 和 GitHub 使代码版本控制变得容易，但它们并不适合跟踪机器学习流水线中的其他部分：\n数据集、模型以及其他大型二进制文件。\n\n而 W&B 的 Artifacts 则非常适合这一点。只需添加几行额外的代码，\n您就可以开始跟踪您和团队的输出，\n并且这些输出都会直接链接到相应的运行中。\n\n### 在 [Colab](http:\u002F\u002Fwandb.me\u002Fartifacts-colab) 中尝试 Artifacts，并观看[视频教程](http:\u002F\u002Fwandb.me\u002Fartifacts-video)\n\n![](https:\u002F\u002Fi.imgur.com\u002FzvBWhGx.png)\n\n### 常见用例\n- **流水线管理：** 将您的运行的输入和输出以图的形式进行跟踪和可视化。\n- **不要重复劳动™：** 防止计算资源的重复使用。\n- **团队数据共享：** 在没有繁琐流程的情况下，协作处理模型和数据集。\n\n![](https:\u002F\u002Fi.imgur.com\u002Fw92cYQm.png)\n\n**在此了解 Artifacts [→](https:\u002F\u002Fwww.wandb.com\u002Farticles\u002Fannouncing-artifacts) | 阅读[文档](https:\u002F\u002Fdocs.wandb.com\u002Fartifacts)**\n\n\n\n# 使用 Tables 可视化和查询数据\n\n对表格数据进行分组、排序、筛选，生成计算列，并创建图表。\n\n让您把更多时间花在获取洞察上，而不是手动构建图表。\n\n```\n# 记录我的表格\n\nwandb.log({\"table\": my_dataframe})\n```\n\n![](https:\u002F\u002Fi.imgur.com\u002FFg9xR6M.gif)\n\n### 在 [Colab](http:\u002F\u002Fwandb.me\u002Ftables-quickstart) 或这些[示例](https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Ftree\u002Fmaster\u002Fcolabs\u002Ftables)中尝试 Tables\n\n**在此探索 Tables [→](https:\u002F\u002Fwandb.ai\u002Fsite\u002Ftables) | 阅读[文档](https:\u002F\u002Fdocs.wandb.ai\u002Fguides\u002Fdata-vis)**","# Weights & Biases (W&B) 快速上手指南\n\nWeights & Biases (W&B) 是一款强大的机器学习实验跟踪工具，帮助你可视化训练过程、管理超参数、记录模型指标，并轻松复现实验结果。\n\n## 环境准备\n\n- **操作系统**：Windows、macOS、Linux\n- **Python 版本**：Python 3.6+\n- **前置依赖**：\n  - `pip` 包管理器\n  - 已安装的机器学习框架（如 PyTorch、TensorFlow\u002FKeras、Hugging Face 
Transformers 等，可选）\n  - 有效的网络连接（用于登录和上传数据）\n\n> 💡 提示：国内用户若遇到网络延迟，可配置代理或使用国内镜像源加速 pip 安装。\n\n## 安装步骤\n\n1. 使用 pip 安装 W&B 客户端库：\n```bash\npip install wandb\n```\n\n2. 登录你的 W&B 账户（首次使用需注册免费账号）：\n```bash\nwandb login\n```\n运行后会提示你输入 API Key，可在 [https:\u002F\u002Fwandb.ai\u002Fauthorize](https:\u002F\u002Fwandb.ai\u002Fauthorize) 获取。\n\n## 基本使用\n\n以下是最小化的使用示例，适用于任何 Python 脚本：\n\n```python\nimport wandb\n\n# 1. 初始化一个运行（run）\nwandb.init(project='my-first-project')\n\n# 2. 记录超参数\nconfig = wandb.config\nconfig.learning_rate = 0.01\nconfig.epochs = 10\nconfig.batch_size = 32\n\n# 模拟训练过程\nfor epoch in range(config.epochs):\n    loss = 1.0 \u002F (epoch + 1)  # 替换为你的实际损失计算\n    accuracy = 1 - loss       # 替换为你的实际准确率计算\n    \n    # 3. 记录指标\n    wandb.log({\"loss\": loss, \"accuracy\": accuracy})\n\n# 运行结束后自动同步数据到云端\n```\n\n### 集成主流框架（简要示例）\n\n#### PyTorch\n```python\nrun = wandb.init(project=\"pytorch-example\")\nrun.watch(model)  # 自动记录梯度与模型结构\n\nfor batch_idx, (data, target) in enumerate(train_loader):\n    # ... 
训练代码 ...\n    if batch_idx % 10 == 0:\n        run.log({\"loss\": loss.item()})\n```\n\n#### TensorFlow \u002F Keras\n```python\nfrom wandb.keras import WandbMetricsLogger, WandbModelCheckpoint\n\nmodel.fit(\n    x_train, y_train,\n    epochs=8,\n    callbacks=[\n        WandbMetricsLogger(log_freq=5),\n        WandbModelCheckpoint(\"models\")\n    ]\n)\n```\n\n#### Hugging Face Transformers\n在 Trainer 中设置 `report_to=\"wandb\"` 即可自动集成：\n```python\ntrainer = Trainer(\n    model=model,\n    args=args,\n    train_dataset=train_dataset,\n    report_to=\"wandb\"  # 启用 W&B 日志\n)\n```\n\n完成上述步骤后，访问 [https:\u002F\u002Fwandb.ai](https:\u002F\u002Fwandb.ai) 即可查看实时仪表盘、对比实验、生成报告。\n\n> 📌 更多示例代码和 Colab 笔记本请参考官方仓库：  \n> https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples","某初创公司的算法团队正在开发一个基于 Transformer 的文本分类模型，需要在两周内完成从原型验证到超参数调优的全过程。\n\n### 没有 examples 时\n- **起步困难**：团队成员需从零编写 W&B 集成代码，反复查阅文档才能搞懂如何正确记录实验指标和超参数，浪费大量宝贵时间。\n- **流程混乱**：由于缺乏标准参考，不同成员记录日志的格式不统一，导致后期无法在面板上横向对比不同实验的效果。\n- **功能盲区**：团队仅使用了基础的日志记录功能，完全不知道如何利用 W&B 的 Sweeps 进行自动化超参数搜索，或如何通过 Artifacts 管理模型版本。\n- **调试低效**：遇到训练发散或数据异常时，因未预设好数据可视化（Tables）和系统资源监控，只能靠打印日志盲目猜测原因。\n\n### 使用 examples 后\n- **快速落地**：直接复用 examples 中成熟的深度学习项目模板，几分钟内即可接入完整的实验追踪体系，立即开始核心模型迭代。\n- **规范统一**：参照示例中的最佳实践，团队建立了标准化的实验记录规范，确保所有成员的实验数据都能在同一个 Dashboard 中清晰对比。\n- **进阶赋能**：通过模仿 examples 中的配置，轻松启动了自动化超参数扫描（Sweeps），并学会了使用 Artifacts 版本化存储数据集与模型，提升复现性。\n- **洞察深入**：利用示例中预置的数据可视化图表和系统监控面板，迅速定位到是特定批次数据导致了梯度爆炸，大幅缩短调试周期。\n\nexamples 将原本需要数天摸索的配置工作缩短为几小时的直接应用，让团队能专注于模型创新而非工具搭建。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwandb_examples_845e2b50.png","wandb","Weights & Biases","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fwandb_da272741.png","Building the best tools for ML practitioners",null,"https:\u002F\u002Fwandb.ai","https:\u002F\u002Fgithub.com\u002Fwandb",[80,84,88],{"name":81,"color":82,"percentage":83},"Jupyter 
Notebook","#DA5B0B",99.8,{"name":85,"color":86,"percentage":87},"Python","#3572A5",0.2,{"name":89,"color":90,"percentage":91},"Shell","#89e051",0,1203,300,"2026-04-07T02:58:53",1,"未说明",{"notes":98,"python":96,"dependencies":99},"该工具为 Weights & Biases (W&B) 的示例集合，主要用于演示如何集成 W&B 到不同的机器学习框架（如 PyTorch, TensorFlow\u002FKeras, Hugging Face）。运行这些示例需要安装对应的框架库。核心依赖仅为 'wandb' 库，需通过 'pip install wandb' 安装并登录账号。具体的 GPU、内存及 Python 版本需求取决于用户选择运行的具体示例脚本及其所依赖的深度学习框架，README 中未对整体环境提出统一的硬性指标。",[72,100,101,102,103,104],"torch (可选，用于 PyTorch 集成)","tensorflow (可选，用于 Keras 集成)","numpy","transformers (可选，用于 Hugging Face 集成)","datasets (可选，用于 Hugging Face 集成)",[14],"2026-03-27T02:49:30.150509","2026-04-11T10:01:32.362081",[109,114,119,124,129,133],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},29461,"为什么在最后一次迭代或结束时 wandb 没有记录日志（如图表、检查点等）？","这通常是因为第一次记录某个键（key）时使用了列表格式，导致系统默认生成的图表类型无法正确绘制列表数据。即使后续运行改为使用 `wandb.Histogram`，由于该键的默认图表类型已被锁定为错误的格式，新数据仍无法正确显示。\n\n解决方案有两种：\n1. **推荐方法**：在 `wandb.init()` 中指定一个新的项目名（例如 `histo-define-metric-test`），确保该项目中首次记录的数据就是正确的 `Histogram` 格式，这样新创建的运行会自动生成正确的直方图。\n2. 
**手动修复**：在界面上手动为该指标创建正确的图表类型（如直方图），之后新的运行在该工作空间下就会自动应用此设置。\n\n注意：仅仅重新开始一个 run 并不能解决问题，因为相关状态是保存在工作空间（workspace）级别的。","https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Fissues\u002F89",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},29462,"创建 Sweep 后，是否可以修改其配置（例如删除导致显存溢出的批次大小）？","官方不支持直接编辑已创建的 Sweep 配置，但可以通过以下“黑客”方法强制向 Sweep 运行中添加新的配置参数（假设运行是由 sweep agent 启动的）：\n\n```python\nrun = wandb.init(allow_val_change=True)  # 初始化运行，此时 run.config 包含 sweep 参数\nwandb.config.__dict__[\"_locked\"] = {}  # 删除对 sweep 参数的锁定\nwandb.config.update({\"a\": 1}, allow_val_change=True)  # 更新配置，新参数将分配给该次运行并显示在 sweep 工作区中\n```\n\n通过这种方式，你可以推送任何配置到 sweep 运行中，更新的参数会显示在 sweep 工作区。但这属于非官方支持的操作，需谨慎使用。","https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Fissues\u002F71",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},29463,"fastai 集成报错 `ModuleNotFoundError: No module named 'fastai.callbacks'` 怎么办？","这是因为 fastai 发布了 2.0 版本，其内部结构发生了变化，旧的 `fastai.callbacks` 模块已被移除或重构，导致 `wandb.fastai` 集成代码无法找到对应的模块。\n\n确认该问题是由 fastai 版本升级引起的兼容性错误。建议检查 wandb 是否发布了适配 fastai 2.0 的新版本集成代码，或者暂时回退到兼容的 fastai 旧版本（如 1.x）。如果必须使用 fastai 2.0，可能需要手动编写自定义回调函数来替代原有的 `WandbCallback`，直到官方更新集成库。","https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Fissues\u002F53",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},29464,"在使用 k-Fold 交叉验证和 Sweeps 时，数据集应该放在代码的什么位置？","虽然示例代码中没有明确展示数据集放置的具体路径，但标准做法是在初始化 Sweep 之前加载并预处理数据。你需要在调用 `wandb.sweep` 和启动 agent 之前，先完成数据的读取、划分（如使用 `train_test_split`）和标准化（如 `StandardScaler`）。\n\n对于 k-Fold 交叉验证，通常需要在训练循环内部处理折的划分，而不是在数据加载阶段硬编码。确保你的数据加载逻辑独立于超参数搜索过程，并且每个 fold 的数据划分是在每次运行（run）开始时动态生成的，以保证实验的可复现性和正确性。如果遇到具体卡住的问题，建议在代码中添加打印语句调试，或尝试移除 `start_method=\"thread\"` 等多进程相关设置看是否改善。","https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Fissues\u002F74",{"id":130,"question_zh":131,"answer_zh":132,"source_url":123},29465,"如何在多进程数据加载（如使用 multiprocessing.Queue）时避免 wandb 初始化导致进程挂起？","当在每个批次数据加载时启动单独进程（使用 `multiprocessing.Queue`）时，仅仅调用 `wandb.init()` 
就可能导致进程挂起。这很可能是由于 wandb 对计算机资源（如 CPU、内存）的监控与多进程机制存在冲突。\n\n目前官方正在调查此问题。临时解决方案包括：\n1. 尽量避免在每个子进程中调用 `wandb.init()`，只在主进程中初始化一次。\n2. 如果必须在子进程中使用 wandb，尝试禁用资源监控功能（如果配置允许）。\n3. 考虑自定义 Dataloaders 的获取方式，避免频繁创建新进程，从而减少与 wandb 监控系统的冲突。\n\n建议关注官方后续更新以获取正式修复。",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},29466,"如何将 Colab 笔记本整合到此仓库并进行版本控制？","为了便于用户理解 wandb 在不同场景下的用法，应将 Colab 笔记本纳入仓库并进行版本管理。具体流程如下：\n\n1. 作者在个人 Google Drive 上的 Colab 中编写内容，迭代直至就绪。\n2. 对于需要保留输出结果的笔记本，重启内核（恢复出厂设置以检查下载\u002F安装错误）并重新运行整个笔记本。\n3. 在 Colab 工具栏点击 \"File > Save a copy in GitHub\"。\n4. 在弹出的对话框中设置目标仓库、分支和文件路径。建议为新内容创建新分支并通过 PR 合并。\n   - 文件名应去掉自动添加的 `Copy_of_` 前缀。\n   - 文件路径应以 `colabs\u002F{identifier}\u002F` 开头，以便分类管理（参考 examples 目录结构）。\n\n这样可以确保 Colab 内容与代码库同步，便于维护和协作。","https:\u002F\u002Fgithub.com\u002Fwandb\u002Fexamples\u002Fissues\u002F28",[]]