[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-lightly-ai--lightly":3,"tool-lightly-ai--lightly":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159267,2,"2026-04-17T11:29:14",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":32,"env_os":98,"env_gpu":99,"env_ram":98,"env_deps":100,"category_tags":105,"github_topics":106,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":116,"updated_at":117,"faqs":118,"releases":149},8630,"lightly-ai\u002Flightly","lightly","A python library for self-supervised learning on images.","Lightly 是一个专为图像自监督学习打造的 Python 框架，旨在帮助开发者在无需大量人工标注数据的情况下，高效训练高质量的计算机视觉模型。它主要解决了传统深度学习对标注数据依赖度高、成本昂贵且耗时长的痛点，让用户能充分利用海量未标注图像进行预训练，从而显著提升模型在分类、检测等下游任务中的表现。\n\n这款工具非常适合人工智能研究员、算法工程师以及希望探索前沿无监督学习技术的开发者使用。Lightly 采用了模块化设计，像搭积木一样提供了损失函数、模型头等底层构建块，代码风格与 PyTorch 高度一致，上手十分轻松。其独特亮点在于不仅支持自定义骨干网络，还原生集成了 PyTorch Lightning，能够便捷地实现分布式训练，大幅缩短实验周期。此外，Lightly 内置了包括 AIM 在内的多种主流自监督学习算法，并提供了丰富的示例代码和 Colab 教程，帮助用户快速复现论文成果或启动新项目。无论是学术研究还是工业级应用，Lightly 都能成为你构建强大视觉模型的得力助手。","\u003Ca name=\"top\">\u003C\u002Fa>\n![LightlySSL self-supervised learning Logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_readme_c7fbd2dc476b.png)\n\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Flightly-ai\u002Flightly)](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002FLICENSE.txt)\n[![Unit Tests](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fworkflows\u002FUnit%20Tests\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Factions\u002Fworkflows\u002Ftest.yml)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Flightly)](https:\u002F\u002Fpypi.org\u002Fproject\u002Flightly\u002F)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_readme_150dfe653354.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Flightly)\n[![Code style: 
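The MarkItDown entry above describes a file-to-Markdown step in front of an LLM pipeline. A minimal sketch of that flow, assuming the `markitdown` package's `MarkItDown.convert()` API; the input file name is a placeholder:

```python
from markitdown import MarkItDown  # pip install markitdown

# Convert a binary office document into LLM-friendly Markdown.
md = MarkItDown()
result = md.convert("report.pdf")  # placeholder path to any supported file type

# The Markdown rendering, ready to be chunked and embedded in a RAG pipeline.
print(result.text_content[:500])
```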
---

<a name="top"></a>
![LightlySSL self-supervised learning Logo](https://oss.gittoolsai.com/images/lightly-ai_lightly_readme_c7fbd2dc476b.png)

[![GitHub](https://img.shields.io/github/license/lightly-ai/lightly)](https://github.com/lightly-ai/lightly/blob/master/LICENSE.txt)
[![Unit Tests](https://github.com/lightly-ai/lightly/workflows/Unit%20Tests/badge.svg)](https://github.com/lightly-ai/lightly/actions/workflows/test.yml)
[![PyPI](https://img.shields.io/pypi/v/lightly)](https://pypi.org/project/lightly/)
[![Downloads](https://oss.gittoolsai.com/images/lightly-ai_lightly_readme_150dfe653354.png)](https://pepy.tech/project/lightly)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Discord](https://img.shields.io/discord/752876370337726585?logo=discord&logoColor=white&label=discord&color=7289da)](https://discord.gg/xvNJW94)
[![Twitter](https://img.shields.io/twitter/follow/LightlyAI)](https://x.com/LightlyAI)
[![codecov.io](https://codecov.io/github/lightly-ai/lightly/coverage.svg?branch=master)](https://app.codecov.io/gh/lightly-ai/lightly)

Lightly**SSL** is a computer vision framework for self-supervised learning.

- [Documentation](https://docs.lightly.ai/self-supervised-learning/)
- [Github](https://github.com/lightly-ai/lightly)
- [Discord](https://discord.gg/xvNJW94)

For a commercial version with more features, including Docker support and pretraining models for embedding, classification, detection, and segmentation tasks with a single command, please contact sales@lightly.ai.

We've also built a whole platform on top, with additional features for active learning and [data curation](https://docs.lightly.ai/docs/what-is-lightly). If you're interested in the Lightly Worker Solution to easily process millions of samples and run [powerful algorithms](https://docs.lightly.ai/docs/customize-a-selection) on your data, check out [lightly.ai](https://www.lightly.ai). It's free to get started!

## News 🚀

* March 23, 2026 - Check out our latest open-source project [LightlyStudio](https://github.com/lightly-ai/lightly-studio) to visualize, annotate, and manage your data with ease! 🔍
* April 15, 2025 - We are excited to announce that you can now leverage SSL and distillation pretraining in just a few lines of code! We've worked hard to make self-supervised learning even more accessible with our new project [LightlyTrain](https://github.com/lightly-ai/lightly-train). Head over there to get started and supercharge your models! ⚡️

<p>
<a href="https://github.com/lightly-ai/lightly-train"><img src="https://oss.gittoolsai.com/images/lightly-ai_lightly_readme_5ba1efa5667d.png" alt="LightlyTrain" height="40"/></a>
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
<a href="https://github.com/lightly-ai/lightly-studio"><img src="https://oss.gittoolsai.com/images/lightly-ai_lightly_readme_24163c8ac4eb.png" alt="LightlyStudio" height="40"/></a>
</p>

## Features

This self-supervised learning framework offers the following features:

- Modular framework, which exposes low-level building blocks such as loss functions and model heads.
- Easy to use and written in a PyTorch-like style.
- Supports custom backbone models for self-supervised pre-training (see the timm sketch after this list).
- Support for distributed training using PyTorch Lightning.
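To make the custom-backbone bullet concrete: any module that maps an image batch to a flat feature vector can be pre-trained. A minimal sketch, assuming the third-party [timm](https://github.com/huggingface/pytorch-image-models) package (not a Lightly dependency), where `num_classes=0` is timm's way of dropping the classifier:

```python
import timm
import torch

from lightly.models.modules import heads

# Any feature extractor can serve as a backbone; here a timm ResNet-18
# with its classification head removed (num_classes=0).
backbone = timm.create_model("resnet18", num_classes=0)

# Pair it with a projection head matching the backbone's feature size.
projection_head = heads.SimCLRProjectionHead(
    input_dim=backbone.num_features,  # 512 for resnet18
    hidden_dim=512,
    output_dim=128,
)

x = torch.randn(4, 3, 224, 224)
z = projection_head(backbone(x))
print(z.shape)  # torch.Size([4, 128])
```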
### Supported Models

You can [find sample code for all the supported models here.](https://docs.lightly.ai/self-supervised-learning/examples/models.html) We provide PyTorch, PyTorch Lightning, and PyTorch Lightning distributed examples for all models to kickstart your project.

**Models**:

| Model | Year | Paper | Docs | Colab (PyTorch) | Colab (PyTorch Lightning) |
|---|---|---|---|---|---|
| AIM | 2024 | [paper](https://arxiv.org/abs/2401.08541) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/aim.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/aim.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/aim.ipynb) |
| Barlow Twins | 2021 | [paper](https://arxiv.org/abs/2103.03230) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/barlowtwins.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/barlowtwins.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/barlowtwins.ipynb) |
| BYOL | 2020 | [paper](https://arxiv.org/abs/2006.07733) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/byol.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/byol.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/byol.ipynb) |
| DCL & DCLW | 2021 | [paper](https://arxiv.org/abs/2110.06848) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/dcl.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/dcl.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/dcl.ipynb) |
| DenseCL | 2021 | [paper](https://arxiv.org/abs/2011.09157) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/densecl.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/densecl.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/densecl.ipynb) |
| DINO | 2021 | [paper](https://arxiv.org/abs/2104.14294) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/dino.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/dino.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/dino.ipynb) |
| DINOv2 | 2023 | [paper](https://arxiv.org/abs/2304.07193) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/dinov2.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/dinov2.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/dinov2.ipynb) |
| iBOT | 2021 | [paper](https://arxiv.org/abs/2111.07832) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/ibot.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/ibot.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/ibot.ipynb) |
| MAE | 2021 | [paper](https://arxiv.org/abs/2111.06377) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/mae.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/mae.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/mae.ipynb) |
| MSN | 2022 | [paper](https://arxiv.org/abs/2204.07141) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/msn.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/msn.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/msn.ipynb) |
| MoCo | 2019 | [paper](https://arxiv.org/abs/1911.05722) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/moco.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/moco.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/moco.ipynb) |
| NNCLR | 2021 | [paper](https://arxiv.org/abs/2104.14548) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/nnclr.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/nnclr.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/nnclr.ipynb) |
| PMSN | 2022 | [paper](https://arxiv.org/abs/2210.07277) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/pmsn.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/pmsn.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/pmsn.ipynb) |
| SimCLR | 2020 | [paper](https://arxiv.org/abs/2002.05709) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/simclr.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/simclr.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/simclr.ipynb) |
| SimMIM | 2022 | [paper](https://arxiv.org/abs/2111.09886) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/simmim.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/simmim.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/simmim.ipynb) |
| SimSiam | 2021 | [paper](https://arxiv.org/abs/2011.10566) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/simsiam.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/simsiam.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/simsiam.ipynb) |
| SwaV | 2020 | [paper](https://arxiv.org/abs/2006.09882) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/swav.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/swav.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/swav.ipynb) |
| VICReg | 2021 | [paper](https://arxiv.org/abs/2105.04906) | [docs](https://docs.lightly.ai/self-supervised-learning/examples/vicreg.html) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/vicreg.ipynb) | [notebook](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/vicreg.ipynb) |

## Tutorials

Want to jump to the tutorials and see Lightly in action?

- [Train MoCo on CIFAR-10](https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_moco_memory_bank.html)
- [Train SimCLR on Clothing Data](https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_simclr_clothing.html)
- [Train SimSiam on Satellite Images](https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_simsiam_esa.html)
- [Use Lightly with Custom Augmentations](https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_custom_augmentations.html)
- [Pre-train a Detectron2 Backbone with Lightly](https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_pretrain_detectron2.html)
- [Finetuning Lightly Checkpoints](https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_checkpoint_finetuning.html)
- [Using timm Models as Backbones](https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_timm_backbone.html)

Community and partner projects:

- [On-Device Deep Learning with Lightly on an ARM microcontroller](https://github.com/ARM-software/EndpointAI/tree/master/ProofOfConcepts/Vision/OpenMvMaskDefaults)
## Quick Start

Lightly requires **Python 3.7+**. We recommend installing Lightly in a **Linux** or **OSX** environment. Python 3.13 is not yet supported, as PyTorch itself lacks Python 3.13 compatibility.

### Dependencies

Due to the modular nature of the Lightly package, some modules can be used with older versions of dependencies. However, to use all features as of today, Lightly requires the following dependencies:

- [PyTorch](https://pytorch.org/) >= 1.11.0
- [Torchvision](https://pytorch.org/vision/stable/index.html) >= 0.12.0
- [PyTorch Lightning](https://www.pytorchlightning.ai/index.html) >= 1.7.1

Lightly is compatible with PyTorch and PyTorch Lightning v2.0+!

### Installation

You can install Lightly and its dependencies from PyPI with:

```
pip3 install lightly
```

We strongly recommend installing Lightly in a dedicated virtualenv to avoid conflicts with your system packages.
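One way to set up such an environment on Linux or macOS, using the standard `venv` module (the environment name is arbitrary):

```
python3 -m venv lightly-env
source lightly-env/bin/activate
pip3 install lightly
```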
### Lightly in Action

With Lightly, you can use the latest self-supervised learning methods in a modular way using the full power of PyTorch. Experiment with various backbones, models, and loss functions. The framework has been designed to be easy to use from the ground up. [Find more examples in our docs](https://docs.lightly.ai/self-supervised-learning/examples/models.html).

```python
import torch
import torchvision

from lightly import loss
from lightly import transforms
from lightly.data import LightlyDataset
from lightly.models.modules import heads


# Create a PyTorch module for the SimCLR model.
class SimCLR(torch.nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.projection_head = heads.SimCLRProjectionHead(
            input_dim=512,  # Resnet18 features have 512 dimensions.
            hidden_dim=512,
            output_dim=128,
        )

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        return z


# Use a resnet backbone from torchvision.
backbone = torchvision.models.resnet18()
# Ignore the classification head as we only want the features.
backbone.fc = torch.nn.Identity()

# Build the SimCLR model.
model = SimCLR(backbone)

# Prepare transform that creates multiple random views for every image.
transform = transforms.SimCLRTransform(input_size=32, cj_prob=0.5)

# Create a dataset from your image folder.
dataset = LightlyDataset(input_dir="./my/cute/cats/dataset/", transform=transform)

# Build a PyTorch dataloader.
dataloader = torch.utils.data.DataLoader(
    dataset,  # Pass the dataset to the dataloader.
    batch_size=128,  # A large batch size helps with the learning.
    shuffle=True,  # Shuffling is important!
)

# Lightly exposes building blocks such as loss functions.
criterion = loss.NTXentLoss(temperature=0.5)

# Get a PyTorch optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-6)

# Train the model.
for epoch in range(10):
    for (view0, view1), targets, filenames in dataloader:
        z0 = model(view0)
        z1 = model(view1)
        # Named train_loss so it does not shadow the imported lightly `loss` module.
        train_loss = criterion(z0, z1)
        train_loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        print(f"loss: {train_loss.item():.5f}")
```

You can easily use another model like SimSiam by swapping the model and the loss function.

```python
# PyTorch module for the SimSiam model.
class SimSiam(torch.nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.projection_head = heads.SimSiamProjectionHead(512, 512, 128)
        self.prediction_head = heads.SimSiamPredictionHead(128, 64, 128)

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        p = self.prediction_head(z)
        z = z.detach()
        return z, p


model = SimSiam(backbone)

# Use the SimSiam loss function.
criterion = loss.NegativeCosineSimilarity()
```

You can [find a more complete example for SimSiam here.](https://docs.lightly.ai/self-supervised-learning/examples/simsiam.html)

Use PyTorch Lightning to train the model:

```python
from pytorch_lightning import LightningModule, Trainer

class SimCLR(LightningModule):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18()
        resnet.fc = torch.nn.Identity()
        self.backbone = resnet
        self.projection_head = heads.SimCLRProjectionHead(512, 512, 128)
        self.criterion = loss.NTXentLoss()

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        return z

    def training_step(self, batch, batch_index):
        (view0, view1), _, _ = batch
        z0 = self.forward(view0)
        z1 = self.forward(view1)
        train_loss = self.criterion(z0, z1)
        return train_loss

    def configure_optimizers(self):
        optim = torch.optim.SGD(self.parameters(), lr=0.06)
        return optim


model = SimCLR()
trainer = Trainer(max_epochs=10, devices=1, accelerator="gpu")
trainer.fit(model, dataloader)
```

See [our docs for a full PyTorch Lightning example.](https://docs.lightly.ai/self-supervised-learning/examples/simclr.html)

Or train the model on 4 GPUs:

```python
# Use distributed version of loss functions.
criterion = loss.NTXentLoss(gather_distributed=True)

trainer = Trainer(
    max_epochs=10,
    devices=4,
    accelerator="gpu",
    strategy="ddp",
    sync_batchnorm=True,
    use_distributed_sampler=True,  # or replace_sampler_ddp=True for PyTorch Lightning <2.0
)
trainer.fit(model, dataloader)
```

We provide multi-GPU training examples with distributed gather and synchronized BatchNorm. [Have a look at our docs regarding distributed training.](https://docs.lightly.ai/self-supervised-learning/getting_started/distributed_training.html)
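After pre-training, the backbone is the artifact you keep; the projection and prediction heads only matter during training. A minimal sketch of turning the Lightning SimCLR model trained above into an embedding extractor (illustrative only; in practice you would embed with a deterministic transform rather than the augmented training views):

```python
# Switch to evaluation mode and disable gradients for embedding.
model.eval()

embeddings = []
with torch.no_grad():
    for (view0, view1), targets, filenames in dataloader:
        # Use the backbone directly and skip the projection head.
        features = model.backbone(view0).flatten(start_dim=1)
        embeddings.append(features)

embeddings = torch.cat(embeddings)
print(embeddings.shape)  # (num_images, 512) for a ResNet-18 backbone
```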
## Benchmarks

Implemented models and their performance on various datasets. Hyperparameters are not tuned for maximum accuracy. For detailed results and more information about the benchmarks click [here](https://docs.lightly.ai/self-supervised-learning/getting_started/benchmarks.html).

### ImageNet1k

[ImageNet1k benchmarks](https://docs.lightly.ai/self-supervised-learning/getting_started/benchmarks.html#imagenet1k)

**Note**: Evaluation settings are based on these papers:

- Linear: [SimCLR](https://arxiv.org/abs/2002.05709)
- Finetune: [SimCLR](https://arxiv.org/abs/2002.05709)
- KNN: [InstDisc](https://arxiv.org/abs/1805.01978)

See the [benchmarking scripts](./benchmarks/imagenet/resnet50/) for details.

| Model | Backbone | Batch Size | Epochs | Linear Top1 | Finetune Top1 | kNN Top1 | Tensorboard | Checkpoint |
|---|---|---|---|---|---|---|---|---|
| BarlowTwins | Res50 | 256 | 100 | 62.9 | 72.6 | 45.6 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_barlowtwins_2023-08-18_00-11-03/pretrain/version_0/events.out.tfevents.1692310273.Machine2.569794.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_barlowtwins_2023-08-18_00-11-03/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| BYOL | Res50 | 256 | 100 | 62.5 | 74.5 | 46.0 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_byol_2024-02-14_16-10-09/pretrain/version_0/events.out.tfevents.1707923418.Machine2.3205.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_byol_2024-02-14_16-10-09/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| DINO | Res50 | 128 | 100 | 68.2 | 72.5 | 49.9 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_dino_2023-06-06_13-59-48/pretrain/version_0/events.out.tfevents.1686052799.Machine2.482599.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_dino_2023-06-06_13-59-48/pretrain/version_0/checkpoints/epoch%3D99-step%3D1000900.ckpt) |
| DINO | ViT-S/16 | 128 | 100 | 73.3 | 79.8 | 67.5 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_vits14_dino_2025-02-16_16-03-14/pretrain/version_0/events.out.tfevents.1739718198.compute-03-ubuntu-4x4090.2832462.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_vits14_dino_2025-02-16_16-03-14/pretrain/version_0/checkpoints/epoch%3D99-step%3D1000900.ckpt) |
| iBOT | ViT-S/16 | 128 | 100 | 72.2 | 78.3 | 65.4 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_vits16_ibot_2025-07-10_13-47-17/pretrain/version_0/events.out.tfevents.1752148040.compute-01-ubuntu-2x4090.253473.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_vits16_ibot_2025-07-10_13-47-17/pretrain/version_0/checkpoints/epoch%3D99-step%3D1000900.ckpt) |
| MAE | ViT-B/16 | 256 | 100 | 46.0 | 81.3 | 11.2 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_vitb16_mae_2024-02-25_19-57-30/pretrain/version_0/events.out.tfevents.1708887459.Machine2.1092409.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_vitb16_mae_2024-02-25_19-57-30/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| MoCoV2 | Res50 | 256 | 100 | 61.5 | 74.3 | 41.8 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_mocov2_2024-02-18_10-29-14/pretrain/version_0/events.out.tfevents.1708248562.Machine2.439033.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_mocov2_2024-02-18_10-29-14/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| SimCLR\* | Res50 | 256 | 100 | 63.2 | 73.9 | 44.8 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_simclr_2023-06-22_09-11-13/pretrain/version_0/events.out.tfevents.1687417883.Machine2.33270.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_simclr_2023-06-22_09-11-13/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| SimCLR\* + DCL | Res50 | 256 | 100 | 65.1 | 73.5 | 49.6 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_dcl_2023-07-04_16-51-40/pretrain/version_0/events.out.tfevents.1688482310.Machine2.247807.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_dcl_2023-07-04_16-51-40/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| SimCLR\* + DCLW | Res50 | 256 | 100 | 64.5 | 73.2 | 48.5 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_dclw_2023-07-07_14-57-13/pretrain/version_0/events.out.tfevents.1688734645.Machine2.3176.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_dclw_2023-07-07_14-57-13/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| SwAV | Res50 | 256 | 100 | 67.2 | 75.4 | 49.5 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_swav_2023-05-25_08-29-14/pretrain/version_0/events.out.tfevents.1684996168.Machine2.1445108.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_swav_2023-05-25_08-29-14/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |
| TiCo | Res50 | 256 | 100 | 49.7 | 72.7 | 26.6 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_tico_2024-01-07_18-40-57/pretrain/version_0/events.out.tfevents.1704649265.Machine2.1604956.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_tico_2024-01-07_18-40-57/pretrain/version_0/checkpoints/epoch%3D99-step%3D250200.ckpt) |
| VICReg | Res50 | 256 | 100 | 63.0 | 73.7 | 46.3 | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_vicreg_2023-09-11_10-53-08/pretrain/version_0/events.out.tfevents.1694422401.Machine2.556563.0) | [link](https://lightly-ssl-checkpoints.s3.amazonaws.com/imagenet_resnet50_vicreg_2023-09-11_10-53-08/pretrain/version_0/checkpoints/epoch%3D99-step%3D500400.ckpt) |

_\*We use square root learning rate scaling instead of linear scaling as it yields better results for smaller batch sizes. See Appendix B.1 in the [SimCLR paper](https://arxiv.org/abs/2002.05709)._

### ImageNet100

[ImageNet100 benchmarks detailed results](https://docs.lightly.ai/self-supervised-learning/getting_started/benchmarks.html#imagenet100)

### Imagenette

[Imagenette benchmarks detailed results](https://docs.lightly.ai/self-supervised-learning/getting_started/benchmarks.html#imagenette)

### CIFAR-10

[CIFAR-10 benchmarks detailed results](https://docs.lightly.ai/self-supervised-learning/getting_started/benchmarks.html#cifar-10)
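The checkpoints linked above are ordinary PyTorch Lightning checkpoint files. A hedged sketch of reusing the ResNet-50 weights in torchvision, assuming the backbone parameters sit under a `backbone.` prefix inside the checkpoint's `state_dict` (the layout used by the benchmark scripts; verify it for the file you download, and the file name below is a placeholder):

```python
import torch
import torchvision

# Placeholder path to a downloaded benchmark checkpoint.
ckpt = torch.load("epoch=99-step=500400.ckpt", map_location="cpu")

# Keep only backbone weights and strip the assumed "backbone." prefix
# so the keys line up with a plain torchvision ResNet-50.
prefix = "backbone."
state_dict = {
    key[len(prefix):]: value
    for key, value in ckpt["state_dict"].items()
    if key.startswith(prefix)
}

resnet = torchvision.models.resnet50()
resnet.fc = torch.nn.Identity()  # the SSL checkpoint carries no classifier head
resnet.load_state_dict(state_dict, strict=False)
```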
## Terminology

Below you can see a schematic overview of the different concepts in the package. The terms in bold are explained in more detail in our [documentation](https://docs.lightly.ai/self-supervised-learning/).

<img src="https://oss.gittoolsai.com/images/lightly-ai_lightly_readme_f2eef97e497e.png" alt="Overview of the Lightly pip package"/>

### Next Steps

Head to the [documentation](https://docs.lightly.ai/self-supervised-learning/) and see the things you can achieve with Lightly!

## Development

To install dev dependencies (for example to contribute to the framework) you can use the following command:

```
pip3 install -e ".[dev]"
```

For more information about how to contribute have a look [here](CONTRIBUTING.md).

### Running Tests

Unit tests are within the [tests directory](tests/) and we recommend running them using [pytest](https://docs.pytest.org/en/stable/). There are two test configurations available. By default, only a subset will be run:

```
make test-fast
```

To run all tests (including the slow ones) you can use the following command:

```
make test
```

To test a specific file or directory use:

```
pytest <path to file or directory>
```

### Code Formatting

To format code with [black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/) run:

```
make format
```

## Further Reading

**Self-Supervised Learning**:

- Have a look at our [#papers channel on discord](https://discord.com/channels/752876370337726585/815153188487299083) for the newest self-supervised learning papers.
- [A Cookbook of Self-Supervised Learning, 2023](https://arxiv.org/abs/2304.12210)
- [Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2021](https://arxiv.org/abs/2006.09882)
- [What Should Not Be Contrastive in Contrastive Learning, 2020](https://arxiv.org/abs/2008.05659)
- [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [Momentum Contrast for Unsupervised Visual Representation Learning, 2020](https://arxiv.org/abs/1911.05722)
## FAQ

- Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?

  - Self-supervised learning has become increasingly popular among scientists over the last years because the learned representations perform extraordinarily well on downstream tasks. This means that they capture the important information in an image better than other types of pre-trained models. By training a self-supervised model on _your_ dataset, you can make sure that the representations have all the necessary information about your images.

- How can I contribute?

  - Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute with code are in our [contribution guide](CONTRIBUTING.md).

- Is this framework free?

  - Yes, this framework is completely free to use and we provide the source code. We believe that we need to make training deep learning models more data efficient to achieve widespread adoption. One step towards this goal is to leverage self-supervised learning. The company behind Lightly is committed to keeping this framework open-source.

- If this framework is free, how is the company behind Lightly making money?

  - Training self-supervised models is only one part of our solution. [The company behind Lightly](https://lightly.ai/) focuses on processing and analyzing embeddings created by self-supervised models. By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently. As the [Lightly Solution](https://docs.lightly.ai) is a freemium product, you can try it out for free. However, we will charge for some features.
  - In any case, this framework will always be free to use, even for commercial purposes.

## Lightly in Research

- [DINOv2-3D: Self-Supervised 3D Vision Transformer Pretraining](https://github.com/AIM-Harvard/DINOv2-3D-Med)
- [Joint-Embedding vs Reconstruction: Provable Benefits of Latent Space Prediction for Self-Supervised Learning, 2025](https://arxiv.org/abs/2505.12477)
- [Reverse Engineering Self-Supervised Learning, 2023](https://arxiv.org/abs/2305.15614)
- [Learning Visual Representations via Language-Guided Sampling, 2023](https://arxiv.org/pdf/2302.12248.pdf)
- [Self-Supervised Learning Methods for Label-Efficient Dental Caries Classification, 2022](https://www.mdpi.com/2075-4418/12/5/1237)
- [DPCL: Contrastive representation learning with differential privacy, 2022](https://assets.researchsquare.com/files/rs-1516950/v1_covered.pdf?c=1654486158)
- [Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [solo-learn: A Library of Self-supervised Methods for Visual Representation Learning, 2021](https://www.jmlr.org/papers/volume23/21-1155/21-1155.pdf)

## Company behind this Open Source Framework

[Lightly](https://www.lightly.ai) is a spin-off from ETH Zurich that helps companies build efficient active learning pipelines to select the most relevant data for their models.

You can find out more about the company and its services by following the links below:

- [Homepage](https://www.lightly.ai)
- [LightlyTrain](https://docs.lightly.ai/train/stable/index.html)
- [Web-App](https://app.lightly.ai)
- [Lightly Solution Documentation (Lightly Worker & API)](https://docs.lightly.ai/)
- [Lightly's AwesomeSSL](https://github.com/lightly-ai/awesome-self-supervised-learning) (collection of SSL papers)

[Back to top🚀](#top)
Logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_readme_c7fbd2dc476b.png)\n\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Flightly-ai\u002Flightly)](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002FLICENSE.txt)\n[![单元测试](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fworkflows\u002FUnit%20Tests\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Factions\u002Fworkflows\u002Ftest.yml)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Flightly)](https:\u002F\u002Fpypi.org\u002Fproject\u002Flightly\u002F)\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_readme_150dfe653354.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Flightly)\n[![代码风格：black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F752876370337726585?logo=discord&logoColor=white&label=discord&color=7289da)](https:\u002F\u002Fdiscord.gg\u002FxvNJW94)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002FLightlyAI)](https:\u002F\u002Fx.com\u002FLightlyAI)\n[![codecov.io](https:\u002F\u002Fcodecov.io\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fcoverage.svg?branch=master)](https:\u002F\u002Fapp.codecov.io\u002Fgh\u002Flightly-ai\u002Flightly)\n\n\nLightly**SSL** 是一个用于自监督学习的计算机视觉框架。\n\n- [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002F)\n- [Github](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly)\n- [Discord](https:\u002F\u002Fdiscord.gg\u002FxvNJW94)\n\n如需功能更丰富的商业版本，包括 Docker 支持以及针对嵌入、分类、检测和分割任务的一键式预训练模型，请联系 sales@lightly.ai。\n\n我们还在此基础上构建了一个完整的平台，提供了主动学习和[数据整理](https:\u002F\u002Fdocs.lightly.ai\u002Fdocs\u002Fwhat-is-lightly)等附加功能。如果您对 Lightly Worker 解决方案感兴趣，该方案可轻松处理数百万个样本，并在您的数据上运行[强大的算法](https:\u002F\u002Fdocs.lightly.ai\u002Fdocs\u002Fcustomize-a-selection)，请访问 [lightly.ai](https:\u002F\u002Fwww.lightly.ai)。立即开始使用完全免费！\n\n## 新闻 🚀\n\n* 2026年3月23日 - 欢迎查看我们的最新开源项目 [LightlyStudio](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly-studio)，它能让您轻松地可视化、标注和管理数据！🔍\n* 2025年4月15日 - 我们很高兴地宣布，现在只需几行代码即可利用自监督学习和蒸馏预训练！通过我们的新项目 [LightlyTrain](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly-train)，我们致力于让自监督学习变得更加易用。快去那里开始使用，为您的模型注入强大动力吧！⚡️\n\n\u003Cp>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly-train\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_readme_5ba1efa5667d.png\" alt=\"LightlyTrain\" height=\"40\"\u002F>\u003C\u002Fa>\n\u003Cspan>&nbsp;&nbsp;&nbsp;&nbsp;\u003C\u002Fspan>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly-studio\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_readme_24163c8ac4eb.png\" alt=\"LightlyStudio\" height=\"40\"\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n## 特性\n\n该自监督学习框架提供以下特性：\n\n- 模块化框架，公开了损失函数和模型头部等底层构建模块。\n- 使用简单，采用类似 PyTorch 的风格编写。\n- 支持自监督预训练的自定义骨干网络模型。\n- 支持使用 PyTorch Lightning 进行分布式训练。\n\n### 支持的模型\n\n您可以在这里找到所有支持模型的示例代码。[https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fmodels.html] 我们为所有模型提供了 PyTorch、PyTorch Lightning 和 PyTorch Lightning 分布式训练的示例，帮助您快速启动项目。\n\n**模型**：\n\n| 模型          | 年份 | 论文 | 文档 | Colab (PyTorch) | Colab (PyTorch Lightning) 
|\n|----------------|------|-------|------|-----------------|----------------------------|\n| AIM            | 2024 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08541) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Faim.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Faim.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Faim.ipynb) |\n| Barlow Twins   | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fbarlowtwins.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fbarlowtwins.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fbarlowtwins.ipynb) |\n| BYOL           | 2020 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fbyol.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fbyol.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fbyol.ipynb) |\n| DCL & DCLW     | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fdcl.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fdcl.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fdcl.ipynb) |\n| DenseCL        | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fdensecl.html) | 
[![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fdensecl.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fdensecl.ipynb) |\n| DINO           | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fdino.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fdino.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fdino.ipynb) |\n| DINOv2         | 2023 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07193) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fdinov2.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fdinov2.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fdinov2.ipynb) |\n| iBOT           | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07832) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fibot.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fibot.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fibot.ipynb) |\n| MAE            | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fmae.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fmae.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fmae.ipynb) 
|\n| MSN            | 2022 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fmsn.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fmsn.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fmsn.ipynb) |\n| MoCo           | 2019 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fmoco.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fmoco.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fmoco.ipynb) |\n| NNCLR          | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14548) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fnnclr.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fnnclr.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fnnclr.ipynb) |\n| PMSN           | 2022 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fpmsn.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fpmsn.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fpmsn.ipynb) |\n| SimCLR         | 2020 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fsimclr.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fsimclr.ipynb) | 
[![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fsimclr.ipynb) |\n| SimMIM         | 2022 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fsimmim.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fsimmim.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fsimmim.ipynb) |\n| SimSiam        | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fsimsiam.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fsimsiam.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fsimsiam.ipynb) |\n| SwaV           | 2020 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fswav.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fswav.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fswav.ipynb) |\n| VICReg         | 2021 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906) | [文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fvicreg.html) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch\u002Fvicreg.ipynb) | [![在Colab中打开](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FColab-PyTorch_Lightning-blue?logo=googlecolab)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flightly-ai\u002Flightly\u002Fblob\u002Fmaster\u002Fexamples\u002Fnotebooks\u002Fpytorch_lightning\u002Fvicreg.ipynb) |\n\n## 教程\n\n想直接跳到教程，看看 Lightly 的实际应用吗？\n\n- [在 CIFAR-10 数据集上训练 MoCo](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Ftutorials\u002Fpackage\u002Ftutorial_moco_memory_bank.html)\n- [在服装数据集上训练 
SimCLR](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Ftutorials\u002Fpackage\u002Ftutorial_simclr_clothing.html)\n- [在卫星图像上训练 SimSiam](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Ftutorials\u002Fpackage\u002Ftutorial_simsiam_esa.html)\n- [使用自定义增强技术与 Lightly 配合](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Ftutorials\u002Fpackage\u002Ftutorial_custom_augmentations.html)\n- [用 Lightly 预训练 Detectron2 的主干网络](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Ftutorials\u002Fpackage\u002Ftutorial_pretrain_detectron2.html)\n- [微调 Lightly 检查点](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Ftutorials\u002Fpackage\u002Ftutorial_checkpoint_finetuning.html)\n- [将 timm 模型用作主干网络](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Ftutorials\u002Fpackage\u002Ftutorial_timm_backbone.html)\n\n社区和合作伙伴项目：\n\n- [在 ARM 微控制器上使用 Lightly 进行设备端深度学习](https:\u002F\u002Fgithub.com\u002FARM-software\u002FEndpointAI\u002Ftree\u002Fmaster\u002FProofOfConcepts\u002FVision\u002FOpenMvMaskDefaults)\n\n## 快速入门\n\nLightly 需要 **Python 3.7+**。我们建议在 **Linux** 或 **OSX** 环境中安装 Lightly。目前尚不支持 Python 3.13，因为 PyTorch 本身尚未兼容 Python 3.13。\n\n### 依赖项\n\n由于 Lightly 包的模块化特性，部分模块可以与较旧版本的依赖项一起使用。然而，为了使用当前的所有功能，Lightly 需要以下依赖项：\n\n- [PyTorch](https:\u002F\u002Fpytorch.org\u002F) ≥ 1.11.0\n- [Torchvision](https:\u002F\u002Fpytorch.org\u002Fvision\u002Fstable\u002Findex.html) ≥ 0.12.0\n- [PyTorch Lightning](https:\u002F\u002Fwww.pytorchlightning.ai\u002Findex.html) ≥ 1.7.1\n\nLightly 与 PyTorch 和 PyTorch Lightning v2.0 及以上版本兼容！\n\n### 安装\n\n您可以通过 PyPI 安装 Lightly 及其依赖项：\n\n```\npip3 install lightly\n```\n\n我们强烈建议将 Lightly 安装在一个专用的 virtualenv 中，以避免与系统包发生冲突。\n\n### Lightly 实战\n\n借助 Lightly，您可以以模块化的方式使用最新的自监督学习方法，并充分利用 PyTorch 的强大功能。尝试不同的主干网络、模型和损失函数。该框架从设计之初就注重易用性。[更多示例请参阅我们的文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fmodels.html)。\n\n```python\nimport torch\nimport torchvision\n\nfrom lightly import loss\nfrom lightly import transforms\nfrom lightly.data import LightlyDataset\nfrom lightly.models.modules import heads\n\n\n# 创建一个用于 SimCLR 模型的 PyTorch 模块。\nclass SimCLR(torch.nn.Module):\n    def __init__(self, backbone):\n        super().__init__()\n        self.backbone = backbone\n        self.projection_head = heads.SimCLRProjectionHead(\n            input_dim=512,  # Resnet18 特征有 512 维。\n            hidden_dim=512,\n            output_dim=128,\n        )\n\n    def forward(self, x):\n        features = self.backbone(x).flatten(start_dim=1)\n        z = self.projection_head(features)\n        return z\n\n\n# 使用来自 torchvision 的 resnet 主干网络。\nbackbone = torchvision.models.resnet18()\n# 忽略分类头，因为我们只想要特征。\nbackbone.fc = torch.nn.Identity()\n\n# 构建 SimCLR 模型。\nmodel = SimCLR(backbone)\n\n# 准备一个为每张图片生成多个随机视图的变换。\ntransform = transforms.SimCLRTransform(input_size=32, cj_prob=0.5)\n\n\n# 从您的图像文件夹创建数据集。\ndataset = LightlyDataset(input_dir=\".\u002Fmy\u002Fcute\u002Fcats\u002Fdataset\u002F\", transform=transform)\n\n# 构建一个 PyTorch 数据加载器。\ndataloader = torch.utils.data.DataLoader(\n    dataset,  # 将数据集传递给数据加载器。\n    batch_size=128,  # 较大的批量有助于学习。\n    shuffle=True,  # 打乱顺序很重要！\n)\n\n# Lightly 提供了诸如损失函数之类的构建模块。\ncriterion = loss.NTXentLoss(temperature=0.5)\n\n# 获取一个 PyTorch 优化器。\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-6)\n\n# 训练模型。\nfor epoch in range(10):\n    for (view0, view1), targets, filenames in dataloader:\n        z0 = model(view0)\n        z1 = 
model(view1)\n        loss = criterion(z0, z1)\n        loss.backward()\n        optimizer.step()\n        optimizer.zero_grad()\n        print(f\"loss: {loss.item():.5f}\")\n```\n\n您只需更换模型和损失函数，即可轻松使用其他模型，例如 SimSiam。\n\n```python\n# 用于 SimSiam 模型的 PyTorch 模块。\nclass SimSiam(torch.nn.Module):\n    def __init__(self, backbone):\n        super().__init__()\n        self.backbone = backbone\n        self.projection_head = heads.SimSiamProjectionHead(512, 512, 128)\n        self.prediction_head = heads.SimSiamPredictionHead(128, 64, 128)\n\n    def forward(self, x):\n        features = self.backbone(x).flatten(start_dim=1)\n        z = self.projection_head(features)\n        p = self.prediction_head(z)\n        z = z.detach()\n        return z, p\n\n\nmodel = SimSiam(backbone)\n\n# 使用 SimSiam 的损失函数。\ncriterion = loss.NegativeCosineSimilarity()\n```\n\n您可以在 [这里找到更完整的 SimSiam 示例。](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fsimsiam.html)\n\n使用 PyTorch Lightning 训练模型：\n\n```python\nfrom pytorch_lightning import LightningModule, Trainer\n\nclass SimCLR(LightningModule):\n    def __init__(self):\n        super().__init__()\n        resnet = torchvision.models.resnet18()\n        resnet.fc = torch.nn.Identity()\n        self.backbone = resnet\n        self.projection_head = heads.SimCLRProjectionHead(512, 512, 128)\n        self.criterion = loss.NTXentLoss()\n\n    def forward(self, x):\n        features = self.backbone(x).flatten(start_dim=1)\n        z = self.projection_head(features)\n        return z\n\n    def training_step(self, batch, batch_index):\n        (view0, view1), _, _ = batch\n        z0 = self.forward(view0)\n        z1 = self.forward(view1)\n        loss = self.criterion(z0, z1)\n        return loss\n\n    def configure_optimizers(self):\n        optim = torch.optim.SGD(self.parameters(), lr=0.06)\n        return optim\n\n\nmodel = SimCLR()\ntrainer = Trainer(max_epochs=10, devices=1, accelerator=\"gpu\")\ntrainer.fit(model, dataloader)\n```\n\n有关完整的 PyTorch Lightning 示例，请参阅 [我们的文档。](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fsimclr.html)\n\n或者在 4 张 GPU 上训练模型：\n\n```python\n\n# 使用分布式版本的损失函数。\ncriterion = loss.NTXentLoss(gather_distributed=True)\n\ntrainer = Trainer(\n    max_epochs=10,\n    devices=4,\n    accelerator=\"gpu\",\n    strategy=\"ddp\",\n    sync_batchnorm=True,\n    use_distributed_sampler=True,  # 或者对于 PyTorch Lightning \u003C2.0，使用 replace_sampler_ddp=True\n)\ntrainer.fit(model, dataloader)\n```\n\n我们提供了多 GPU 训练示例，支持分布式 gather 和同步 BatchNorm。\n[请查看我们的分布式训练文档。](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fgetting_started\u002Fdistributed_training.html)\n\n## 基准测试\n\n已实现的模型及其在不同数据集上的性能。超参数并未针对最高准确率进行调优。如需查看详细结果及更多关于基准测试的信息，请点击\n[此处](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fgetting_started\u002Fbenchmarks.html)。\n\n### ImageNet1k\n\n[ImageNet1k 基准测试](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fgetting_started\u002Fbenchmarks.html#imagenet1k)\n\n**注意**：评估设置基于以下论文：\n\n- 线性评估：[SimCLR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- 微调：[SimCLR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- KNN：[InstDisc](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.01978)\n\n有关详细信息，请参阅 [基准测试脚本](.\u002Fbenchmarks\u002Fimagenet\u002Fresnet50\u002F)。\n\n| 模型           | 主干网络 | 批量大小 | 轮数 | 线性评估 Top1 | 微调 Top1 | kNN Top1 | TensorBoard                                                                                        
                                                                            | 检查点                                                                                                                                                              |\n| --------------- | -------- | ---------- | ------ | ----------- | ------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| BarlowTwins     | Res50    | 256        | 100    | 62.9        | 72.6          | 45.6     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_barlowtwins_2023-08-18_00-11-03\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1692310273.Machine2.569794.0) | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_barlowtwins_2023-08-18_00-11-03\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt) |\n| BYOL            | Res50    | 256        | 100    | 62.5        | 74.5          | 46.0     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_byol_2024-02-14_16-10-09\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1707923418.Machine2.3205.0)          | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_byol_2024-02-14_16-10-09\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)        |\n| DINO            | Res50    | 128        | 100    | 68.2        | 72.5          | 49.9     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_dino_2023-06-06_13-59-48\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1686052799.Machine2.482599.0)        | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_dino_2023-06-06_13-59-48\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D1000900.ckpt)       |\n| DINO            | ViT-S\u002F16    | 128        | 100    | 73.3        | 79.8          | 67.5     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_vits14_dino_2025-02-16_16-03-14\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1739718198.compute-03-ubuntu-4x4090.2832462.0)        | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_vits14_dino_2025-02-16_16-03-14\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D1000900.ckpt)       |\n| iBOT            | ViT-S\u002F16    | 128        | 100    | 72.2        | 78.3          | 65.4     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_vits16_ibot_2025-07-10_13-47-17\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1752148040.compute-01-ubuntu-2x4090.253473.0)        | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_vits16_ibot_2025-07-10_13-47-17\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D1000900.ckpt)       |\n| MAE             | ViT-B\u002F16 | 256        | 100    | 46.0        | 81.3          | 11.2     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_vitb16_mae_2024-02-25_19-57-30\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1708887459.Machine2.1092409.0)          | 
[链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_vitb16_mae_2024-02-25_19-57-30\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)           |\n| MoCoV2          | Res50    | 256        | 100    | 61.5        | 74.3          | 41.8     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_mocov2_2024-02-18_10-29-14\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1708248562.Machine2.439033.0)      | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_mocov2_2024-02-18_10-29-14\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)      |\n| SimCLR\\*        | Res50    | 256        | 100    | 63.2        | 73.9          | 44.8     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_simclr_2023-06-22_09-11-13\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1687417883.Machine2.33270.0)       | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_simclr_2023-06-22_09-11-13\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)      |\n| SimCLR\\* + DCL  | Res50    | 256        | 100    | 65.1        | 73.5          | 49.6     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_dcl_2023-07-04_16-51-40\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1688482310.Machine2.247807.0)         | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_dcl_2023-07-04_16-51-40\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)         |\n| SimCLR\\* + DCLW | Res50    | 256        | 100    | 64.5        | 73.2          | 48.5     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_dclw_2023-07-07_14-57-13\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1688734645.Machine2.3176.0)          | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_dclw_2023-07-07_14-57-13\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)        |\n| SwAV            | Res50    | 256        | 100    | 67.2        | 75.4          | 49.5     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_swav_2023-05-25_08-29-14\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1684996168.Machine2.1445108.0)       | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_swav_2023-05-25_08-29-14\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)        |\n| TiCo            | Res50    | 256        | 100    | 49.7        | 72.7          | 26.6     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_tico_2024-01-07_18-40-57\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1704649265.Machine2.1604956.0)       | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_tico_2024-01-07_18-40-57\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D250200.ckpt)        |\n| VICReg          | Res50    | 256        | 100    | 63.0        | 73.7          | 46.3     | [链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_vicreg_2023-09-11_10-53-08\u002Fpretrain\u002Fversion_0\u002Fevents.out.tfevents.1694422401.Machine2.556563.0)      | 
[链接](https:\u002F\u002Flightly-ssl-checkpoints.s3.amazonaws.com\u002Fimagenet_resnet50_vicreg_2023-09-11_10-53-08\u002Fpretrain\u002Fversion_0\u002Fcheckpoints\u002Fepoch%3D99-step%3D500400.ckpt)      |\n\n_\*我们采用平方根学习率缩放而非线性缩放，因为对于较小的批量大小，前者能带来更好的效果。详见 [SimCLR 论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709) 的附录 B.1。_
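\n\n上表中的 kNN Top1 指标可以这样理解：用冻结的预训练骨干网络分别提取训练集和验证集的特征，再以最近邻投票的方式对验证样本分类。下面给出一个帮助理解该指标的极简示意（并非官方基准脚本，实际评估协议请参考上文链接的 InstDisc 论文与基准测试脚本；`k=20` 为假设的示例值）：\n\n```python\nimport torch\n\n\n@torch.no_grad()\ndef knn_top1(train_feats, train_labels, val_feats, val_labels, k=20):\n    # 特征归一化后，内积即余弦相似度\n    train_feats = torch.nn.functional.normalize(train_feats, dim=1)\n    val_feats = torch.nn.functional.normalize(val_feats, dim=1)\n\n    # 为每个验证样本取最相似的 k 个训练样本，做多数投票\n    similarity = val_feats @ train_feats.t()\n    _, indices = similarity.topk(k, dim=1)\n    neighbor_labels = train_labels[indices]  # 形状: (num_val, k)\n    predictions = neighbor_labels.mode(dim=1).values\n\n    return (predictions == val_labels).float().mean().item()\n```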
\n\n### ImageNet100\n\n[ImageNet100 benchmarks detailed results](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fgetting_started\u002Fbenchmarks.html#imagenet100)\n\n### Imagenette\n\n[Imagenette benchmarks detailed results](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fgetting_started\u002Fbenchmarks.html#imagenette)\n\n### CIFAR-10\n\n[CIFAR-10 benchmarks detailed results](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fgetting_started\u002Fbenchmarks.html#cifar-10)\n\n## Terminology\n\nBelow you can see a schematic overview of the different concepts in the package.\nThe terms in bold are explained in more detail in our [documentation](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002F).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_readme_f2eef97e497e.png\" alt=\"Overview of the Lightly pip package\"\u002F>\n\n### Next Steps\n\nHead to the [documentation](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002F) and see the things you can achieve with Lightly!\n\n## Development\n\nTo install dev dependencies (for example to contribute to the framework) you can use the following command:\n\n```\npip3 install -e \".[dev]\"\n```\n\nFor more information about how to contribute have a look [here](CONTRIBUTING.md).\n\n### Running Tests\n\nUnit tests are within the [tests directory](tests\u002F) and we recommend running them using\n[pytest](https:\u002F\u002Fdocs.pytest.org\u002Fen\u002Fstable\u002F). There are two test configurations\navailable. By default, only a subset will be run:\n\n```\nmake test-fast\n```\n\nTo run all tests (including the slow ones) you can use the following command:\n\n```\nmake test\n```\n\nTo test a specific file or directory use:\n\n```\npytest \u003Cpath to file or directory>\n```\n\n### Code Formatting\n\nTo format code with [black](https:\u002F\u002Fblack.readthedocs.io\u002Fen\u002Fstable\u002F) and [isort](https:\u002F\u002Fpycqa.github.io\u002Fisort\u002F) run:\n\n```\nmake format\n```\n\n## Further Reading\n\n**Self-Supervised Learning**:\n\n- Have a look at our [#papers channel on discord](https:\u002F\u002Fdiscord.com\u002Fchannels\u002F752876370337726585\u002F815153188487299083)\n  for the newest self-supervised learning papers.\n- [A Cookbook of Self-Supervised Learning, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.12210)\n- [Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\n- [What Should Not Be Contrastive in Contrastive Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05659)\n- [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- [Momentum Contrast for Unsupervised Visual Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n\n## FAQ\n\n- Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?\n\n  - Self-supervised learning has become increasingly popular among scientists in recent years because the learned representations perform extraordinarily well on downstream tasks. This means that they capture the important information in an image better than other types of pre-trained models. By training a self-supervised model on _your_ dataset, you can make sure that the representations have all the necessary information about your images.\n\n- How can I contribute?\n\n  - Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute with code are in our [contribution guide](CONTRIBUTING.md).\n\n- Is this framework for free?\n\n  - Yes, this framework is completely free to use and we provide the source code. We believe that we need to make training deep learning models more data efficient to achieve widespread adoption. One step to achieve this goal is by leveraging self-supervised learning. The company behind Lightly is committed to keeping this framework open-source.\n\n- If this framework is free, how is the company behind Lightly making money?\n  - Training self-supervised models is only one part of our solution.\n    [The company behind Lightly](https:\u002F\u002Flightly.ai\u002F) focuses on processing and analyzing embeddings created by self-supervised models.\n    By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently.\n    As the [Lightly Solution](https:\u002F\u002Fdocs.lightly.ai) is a freemium product, you can try it out for free. However, we will charge for some features.\n  - In any case, this framework will always be free to use, even for commercial purposes.\n\n## Lightly in Research\n\n- [DINOv2-3D: Self-Supervised 3D Vision Transformer Pretraining](https:\u002F\u002Fgithub.com\u002FAIM-Harvard\u002FDINOv2-3D-Med)\n- [Joint-Embedding vs Reconstruction: Provable Benefits of Latent Space Prediction for Self-Supervised Learning, 2025](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12477)\n- [Reverse Engineering Self-Supervised Learning, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15614)\n- [Learning Visual Representations via Language-Guided Sampling, 2023](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.12248.pdf)\n- [Self-Supervised Learning Methods for Label-Efficient Dental Caries Classification, 2022](https:\u002F\u002Fwww.mdpi.com\u002F2075-4418\u002F12\u002F5\u002F1237)\n- [DPCL: Contrastive representation learning with differential privacy, 2022](https:\u002F\u002Fassets.researchsquare.com\u002Ffiles\u002Frs-1516950\u002Fv1_covered.pdf?c=1654486158)\n- [Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [solo-learn: A Library of Self-supervised Methods for Visual Representation Learning, 2021](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume23\u002F21-1155\u002F21-1155.pdf)\n\n## Company behind this Open Source Framework\n\n[Lightly](https:\u002F\u002Fwww.lightly.ai) is a spin-off from ETH Zurich that helps companies\nbuild efficient active learning pipelines to select the most relevant data for their models.\n\nYou can find out more about the company and its services by following the links below:\n\n- [Homepage](https:\u002F\u002Fwww.lightly.ai)\n- [LightlyTrain](https:\u002F\u002Fdocs.lightly.ai\u002Ftrain\u002Fstable\u002Findex.html)\n- [Web-App](https:\u002F\u002Fapp.lightly.ai)\n- [Lightly Solution Documentation (Lightly Worker & API)](https:\u002F\u002Fdocs.lightly.ai\u002F)\n- [Lightly's AwesomeSSL](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Fawesome-self-supervised-learning) (collection of SSL papers)\n\n[Back to top🚀](#top)","# LightlySSL 快速上手指南\n\nLightlySSL 是一个用于自监督学习（Self-Supervised Learning, SSL）的计算机视觉框架。它提供了模块化的构建块（如损失函数和模型头），风格类似 PyTorch，并支持基于 PyTorch Lightning 的分布式训练。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python**: 3.7 或更高版本\n*   **核心依赖**:\n    *   PyTorch (建议最新版本)\n    *   torchvision\n    *   PyTorch Lightning (可选，用于分布式训练示例)\n\n> **提示**：如果您使用 GPU 进行训练，请确保已正确安装对应版本的 CUDA 驱动和 PyTorch GPU 版本。
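\n\n可以先用下面几行代码确认 PyTorch 是否能识别到 GPU（一个极简的自检示意，仅依赖 PyTorch 本身）：\n\n```python\nimport torch\n\n# CUDA 不可用时训练会回退到 CPU，速度会明显变慢\nprint(torch.cuda.is_available())\nif torch.cuda.is_available():\n    print(torch.cuda.get_device_name(0))\n```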
\n\n## 2. 安装步骤\n\n您可以直接通过 PyPI 安装 Lightly。为了获得更快的下载速度，推荐使用国内镜像源（如清华大学开源软件镜像站）。\n\n### 使用 pip 安装\n\n```bash\npip install lightly -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 验证安装\n\n安装完成后，您可以在 Python 中运行以下命令验证是否成功：\n\n```python\nimport lightly\nprint(lightly.__version__)\n```\n\n## 3. 基本使用\n\nLightly 的设计遵循 PyTorch 风格，您可以像定义普通神经网络一样定义自监督学习方法。以下是一个使用 **SimCLR** 算法进行自监督预训练的最简示例。\n\n### 步骤 1: 准备数据加载器\n\n首先，您需要定义数据增强策略并创建 DataLoader。Lightly 内置的 SimCLR 变换会为每张图片生成两个随机视图。\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader\nfrom torchvision import datasets\n\nfrom lightly.transforms import SimCLRTransform\n\n# 定义 SimCLR 风格的数据增强（每张图片生成两个随机视图）\ntransform = SimCLRTransform(input_size=32)\n\n# 加载数据集 (以 CIFAR10 为例)\ndataset = datasets.CIFAR10(root='.\u002Fdata', train=True, download=True, transform=transform)\ndataloader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)\n```\n\n### 步骤 2: 定义模型与自监督方法\n\n选择一个骨干网络（Backbone），去掉分类头后与投影头组合成 SimCLR 模型。\n\n```python\nimport torch.nn as nn\nimport torchvision\n\nfrom lightly.loss import NTXentLoss\nfrom lightly.models.modules.heads import SimCLRProjectionHead\n\n# 1. 创建骨干网络 (例如 ResNet-18)，只保留特征提取部分\nresnet = torchvision.models.resnet18()\nbackbone = nn.Sequential(*list(resnet.children())[:-1])\n\n\n# 2. 将骨干网络与投影头 (Projection Head) 组合成 SimCLR 模型\nclass SimCLR(nn.Module):\n    def __init__(self, backbone):\n        super().__init__()\n        self.backbone = backbone\n        self.projection_head = SimCLRProjectionHead(\n            input_dim=512,  # ResNet-18 的特征维度为 512\n            hidden_dim=512,\n            output_dim=128,\n        )\n\n    def forward(self, x):\n        features = self.backbone(x).flatten(start_dim=1)\n        return self.projection_head(features)\n\n\nmodel = SimCLR(backbone)\n\n# 3. 定义损失函数\ncriterion = NTXentLoss()\n```\n\n### 步骤 3: 设置优化器并开始训练\n\n您可以直接使用 PyTorch 的原生训练循环，或者使用 PyTorch Lightning（推荐）来简化流程。以下是原生 PyTorch 的训练循环示例：\n\n```python\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmodel.to(device)\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-6)\n\nmodel.train()\nfor epoch in range(10):\n    total_loss = 0.0\n    # 每个批次包含两组视图和标签（自监督训练不需要标签）\n    for (x0, x1), _ in dataloader:\n        x0 = x0.to(device)\n        x1 = x1.to(device)\n\n        # 前向传播：分别编码两个视图\n        z0 = model(x0)\n        z1 = model(x1)\n\n        # 计算对比损失\n        loss = criterion(z0, z1)\n\n        # 反向传播\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n\n        total_loss += loss.item()\n\n    print(f\"Epoch {epoch}: Loss = {total_loss \u002F len(dataloader):.4f}\")\n\nprint(\"训练完成！模型权重已更新。\")\n```
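\n\n### 保存骨干网络权重（可选）\n\n预训练完成后，通常只需保留骨干网络的权重，供下游分类或检测任务微调使用。下面是一个接续上文 `model` 的保存与加载示意（文件名 `simclr_backbone.pth` 为假设的示例；保存方式与官方 FAQ 推荐的 state_dict 做法一致）：\n\n```python\n# 只保存骨干网络的 state_dict\ntorch.save(model.backbone.state_dict(), \"simclr_backbone.pth\")\n\n# 在下游任务中重建相同结构的骨干网络并载入权重\nnew_resnet = torchvision.models.resnet18()\nnew_backbone = nn.Sequential(*list(new_resnet.children())[:-1])\nnew_backbone.load_state_dict(torch.load(\"simclr_backbone.pth\"))\n```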
\n\n### 下一步\n\n*   **更多模型**: Lightly 支持 Barlow Twins, BYOL, DINO, MoCo, MAE 等多种主流 SSL 算法。\n*   **PyTorch Lightning**: 查看官方仓库中的 `examples\u002Fnotebooks\u002Fpytorch_lightning` 目录，获取基于 Lightning 的分布式训练示例。\n*   **预训练模型**: 访问 [Lightly 文档](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fmodels.html) 获取各模型的详细代码和 Colab 演示。","某医疗影像初创团队正试图利用数万张未标注的肺部 CT 切片训练病灶检测模型，但面临专业医生标注成本过高且周期漫长的困境。\n\n### 没有 lightly 时\n- **标注依赖重**：必须等待放射科医生完成全量数据的人工勾画，项目启动即陷入数周的“数据空窗期”。\n- **冷启动困难**：直接在少量标注数据上训练深度模型，因样本不足导致过拟合严重，模型对罕见病灶几乎无法识别。\n- **复现成本高**：自研自监督学习算法需从零编写复杂的损失函数和数据增强管道，调试耗时且极易出错。\n- **算力浪费**：由于缺乏有效的预训练权重，模型收敛极慢，大量 GPU 机时消耗在重复学习基础图像特征上。\n\n### 使用 lightly 后\n- **无标预训练**：利用 lightly 提供的 SimCLR 或 MoCo 等算法，直接在全部未标注 CT 数据上进行自监督预训练，让模型先“看懂”肺部结构。\n- **小样本爆发**：仅需医生标注极少部分关键样本进行微调，模型即可快速迁移知识，在罕见病灶识别上准确率提升显著。\n- **开箱即用**：通过几行 PyTorch 风格代码即可调用模块化组件，轻松替换骨干网络，一天内完成从实验到部署的流程。\n- **高效收敛**：借助高质量的自监督权重初始化，训练轮次减少 60% 以上，大幅降低了云端 GPU 的租赁成本。\n\nlightly 将昂贵的数据标注需求转化为可自动获取的无标签学习能力，让医疗 AI 
模型在数据匮乏初期也能具备强大的泛化性能。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flightly-ai_lightly_8aea3516.png","lightly-ai","Lightly","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flightly-ai_d2789b33.png","",null,"info@lightly.ai","LightlyAI","https:\u002F\u002Fwww.lightly.ai\u002F","https:\u002F\u002Fgithub.com\u002Flightly-ai",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.8,{"name":87,"color":88,"percentage":89},"Makefile","#427819",0.2,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0,3718,325,"2026-04-17T09:33:37","MIT","未说明","未说明（基于 PyTorch 和自监督学习特性，通常建议配备支持 CUDA 的 NVIDIA GPU 以加速训练）",{"notes":101,"python":98,"dependencies":102},"该工具是一个基于 PyTorch 的自监督学习框架，支持分布式训练。README 中未列出具体的版本号和硬件硬性指标，但提供了多种模型（如 DINO, MAE, SimCLR 等）的 PyTorch 和 PyTorch Lightning 实现示例。商业版提供 Docker 支持。",[103,104],"torch","pytorch-lightning",[14,15],[107,108,109,110,111,112,113,114,115],"deep-learning","self-supervised-learning","machine-learning","computer-vision","pytorch","embeddings","contrastive-learning","hacktoberfest","contributions-welcome","2026-03-27T02:49:30.150509","2026-04-18T02:22:19.405435",[119,124,129,134,139,144],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},38644,"Lightly 是否支持 NumPy 2.0？导入时出现 `np.float_` 属性错误怎么办？","Lightly 已在版本 1.5.12 中添加了对 NumPy 2.0 的支持。如果您遇到 `AttributeError: np.float_ was removed...` 错误，请升级 Lightly：\n\n方法 1（推荐）：升级到最新版\npip install --upgrade lightly\n\n方法 2（临时方案）：降级 NumPy 到 1.x 版本\npip install numpy==1.26\n\n注意：Conda-forge 渠道的更新可能略有延迟，如遇问题请检查 conda-forge 状态。","https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1558",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},38645,"如何在 Lightly 中保存和加载 YOLO 模型权重以便进行微调？","在 Lightly 中与 YOLO 模型配合使用时，必须使用 PyTorch 原生的方式保存和加载状态字典（state_dict），而不是 YOLO 特有的加载方式。\n\n正确的保存代码：\ntorch.save(yolo.state_dict(), \"y3_weights.pth\")\n\n正确的加载代码：\nyolo.load_state_dict(torch.load(\"y3_weights.pth\"))\n\n错误示例：不要使用 model = YOLO('...').load('weights') 这种 YOLO 特有的方式加载在 Lightly 中训练过的权重，这会导致报错。请先加载基础模型结构，再载入 state_dict。","https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1080",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},38646,"Lightly 计划支持 PIRL (Permutation Invariant Representation Learning) 算法吗？","官方目前没有计划添加 PIRL 支持。维护者表示，与现代自监督学习（SSL）方法相比，PIRL 已经过时（outdated），因此不会将其纳入未来的开发路线图中。建议用户尝试 Lightly 中支持的更现代的 SSL 方法（如 SimCLR, MoCo, BYOL 等）。","https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F396",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},38647,"在哪里可以找到 ApiWorkflow 类函数的使用示例和文档？","ApiWorkflow 类提供了创建数据集、上传样本、上传嵌入向量等功能。获取使用示例的途径如下：\n\n1. 查看源代码 Docstring：大多数函数在 lightly.api.api_workflow* 模块中都有自解释的文档字符串。\n2. 官方教程：推荐阅读官方文档中的主动学习（Active Learning）教程，地址为：https:\u002F\u002Fdocs.lightly.ai\u002Ftutorials\u002Fplatform.html\n\n如果在特定函数使用上仍有疑问，可以直接在相关 Issue 下提问。","https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F458",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},38648,"Lightly 是否支持 DINOv2 模型？","是的，Lightly 已经正式支持 DINOv2。相关的实现代码和文档已经合并到主分支中（参考 PR #1823, #1844, #1848, #1849）。您现在可以直接在 Lightly 中使用基于 ViT 骨干网络的 DINOv2 自监督学习方法。","https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1166",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},38649,"如何在 Lightly 中实现或使用 Masked Autoencoder (MAE)？","Lightly 采用了模块化方式实现 MAE。建议使用以下低层级构建块：\n\n1. 核心组件：使用 lightly.models.modules.encoders.MAEEncoder 和 MAEDecoder。\n2. 
位置编码注意：原始 MAE 论文使用正弦 - 余弦位置编码，而 torchvision 默认使用学习型编码。如果需要严格复现论文，需注意此差异或手动替换位置编码。\n3. 工具函数：随机掩码（random masking）等辅助功能已封装在工具模块中。\n\n目前建议直接调用编码器\u002F解码器模块进行实验，高层级的 MAE 封装类可能会在未来根据更多类似模型（如 SimMIM）的实现情况再进行整合。","https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F721",[150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245],{"id":151,"version":152,"summary_zh":153,"released_at":154},314593,"v1.5.7","Tiny improvements\r\n* Increase download timeout for json files ([#1556](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1556))\r\n* Migrate `coverage` and `mypy` configuration to `pyproject.toml` [#1549](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1549)). Many thanks to @SauravMaheshkar for this improvement!\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. 
al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-06-21T07:16:55",{"id":156,"version":157,"summary_zh":158,"released_at":159},314594,"v1.5.6","Changes\r\n* Allow lightly-serve to run securely via https by passing ssl_cert and ssl_key\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-06-05T12:57:55",{"id":161,"version":162,"summary_zh":163,"released_at":164},314595,"v1.5.5","Changes\r\n* Add unpatchify model utils operation to reconstruct an image from its patches. See the [PR](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1544) for more information. 
Thanks to @randombenj for implementing this!\r\n* Fixes in CI regarding coverage.\r\n* Fixes in lightly-serve that the server was sometimes not shut down correctly.\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-05-29T13:05:30",{"id":166,"version":167,"summary_zh":168,"released_at":169},314596,"v1.5.4","Changes\r\n* Fixes the GatherLayer for multiple GPUs. 
See [PR](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1531) for more information.\r\n* Different typos in tutorials\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. 
al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-05-16T12:53:22",{"id":171,"version":172,"summary_zh":173,"released_at":174},314577,"v1.15.23","## 变更内容\n* 修复 `totensor` 已弃用警告 (#1864)，由 @KylevdLangemheen 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1866 中完成\n* 将函数式变换 API 移至兼容层 (#1867)，由 @KylevdLangemheen 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1870 中完成\n* 修复 LightlySSL 文档的横幅和选项卡名称，由 @yutong-xiang-97 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1872 中完成\n* 在 README 中添加 DINOv2 (#1873)，由 @KylevdLangemheen 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1875 中完成\n* 修复 SMoG 示例，由 @thomasmarchioro3 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1876 中完成\n* 添加 DirectCLR (#781)，由 @KylevdLangemheen 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1874 中完成\n* 更新 README 中的徽章链接，由 @Olexandr88 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1880 中完成\n* 添加社交媒体 Twitter 链接，由 @Olexandr88 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1883 中完成\n* 更新 CI 的依赖包，由 @IgorSusmelj 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1896 中完成\n* 贡献指南：添加关于导入的说明，由 @liopeer 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1895 中完成\n* 添加 LeJEPA SIGReg 损失的实现，由 @gabrielfruet 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1893 中完成\n* 加快动量更新速度，由 @vsey 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1899 中完成\n* 在 README 中添加 LightlyStudio 的引用，由 @michal-lightly 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1903 中完成\n* 在 README 中添加 DINOv2-3D：自监督 3D 视觉 Transformer 预训练，由 @guarin 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1890 中完成\n* 添加对 torchvision 0.26 的支持，由 @guarin 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1904 中完成\n\n## 新贡献者\n* @thomasmarchioro3 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1876 中完成了首次贡献\n* @Olexandr88 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1880 中完成了首次贡献\n* @gabrielfruet 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1893 中完成了首次贡献\n* @vsey 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1899 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcompare\u002Fv1.15.22...v1.15.23\n\n衷心感谢所有贡献者！\n\n\u003C!-- 停止 Discord 消息 -->\n\u003C!-- 上述注释定义了发布消息在 Discord 中显示的范围 -->\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：通过冗余减少进行自监督学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：一种新的自监督学习方法，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [DINOv2：学习鲁棒的视觉特征，无需…","2026-03-24T15:28:27",{"id":176,"version":177,"summary_zh":178,"released_at":179},314582,"v1.5.18","## 变更\n* 更新 SimCLR 模型示例\n* 更新 MAE 模型文档\n* 调整分页逻辑，将每页最大条目数设为 
2500 条以增强健壮性\n* 修复拼写错误：将 `Dict()` 改为 `dict()`，以解决由 @gatienc 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1785 中报告的类实例化错误\n\n衷心感谢所有贡献者！\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：基于冗余减少的自监督学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：一种新的自监督学习方法，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [FastSiam：单 GPU 上的资源高效自监督学习，2022 年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自编码器是可扩展的视觉学习器，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动力对比，2019 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：基于最近邻的视觉表征对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\n- [PMSN：暹罗网络中的先验匹配，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\n- [SimCLR：视觉表征对比学习的简单框架，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- [SimMIM：图像掩码建模的简单框架，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\n- [SimSiam：探索简单的暹罗表征学习，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\n- [SMoG：通过同步动量分组进行无监督视觉表征学习，2022 年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\n- [SwAV：基于对比聚类分配的视觉特征无监督学习，M. Caron，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\n- [TiCo：用于自监督视觉表征学习的变换不变性和协方差对比，2022 年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\n- [VICReg：用于自监督学习的方差-不变性-协方差正则化，Bardes, A. 
等人，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\n- [VICRegL：本地视觉特征的自监督学习，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2025-01-28T13:39:51",{"id":181,"version":182,"summary_zh":183,"released_at":184},314583,"v1.5.17","## 变更\n- 修复 detcon 损失的分布式问题\n- 移除掩码池化中的启发式方法\n- 修复 torchvision 依赖测试\n\n非常感谢所有贡献者！\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：通过冗余减少进行自监督学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：自监督学习的新方法，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [FastSiam：单 GPU 上的资源高效自监督学习，2022年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自动编码器是可扩展的视觉学习器，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动量对比，2019年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：视觉表征的最近邻对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\n- [PMSN：暹罗网络中的先验匹配，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\n- [SimCLR：视觉表征对比学习的简单框架，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- [SimMIM：掩码图像建模的简单框架，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\n- [SimSiam：探索简单的暹罗表征学习，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\n- [SMoG：通过同步动量分组进行无监督视觉表征学习，2022年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\n- [SwAV：通过对比聚类分配进行视觉特征的无监督学习，M. Caron，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\n- [TiCo：用于自监督视觉表征学习的变换不变性和协方差对比，2022年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\n- [VICReg：用于自监督学习的方差-不变性-协方差正则化，Bardes, A. 
等人，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\n- [VICRegL：本地视觉特征的自监督学习，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2025-01-21T10:44:44",{"id":186,"version":187,"summary_zh":188,"released_at":189},314584,"v1.5.16","## 变更\n- 添加了 DetConSLoss 和 DetConBLoss\n- 部分移除了 OpenCV 依赖，感谢 @vectorvp\n- 修复：在 @vectorvp 的帮助下修复了 IJEPA 示例\n\n### 类型标注与文档\n- 在 @philippmwirth 的努力下，许多文件现已正确添加类型标注并经过类型检查\n- 我们移除了关于 [Lightly**One** Worker](https:\u002F\u002Fdocs.lightly.ai\u002Fdocs\u002Finstall-lightly) 的旧版且过时的文档\n\n衷心感谢所有贡献者！\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：通过冗余减少进行自监督学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：自监督学习的新方法，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [FastSiam：单 GPU 上资源高效的自监督学习，2022年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自编码器是可扩展的视觉学习器，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动量对比，2019年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：视觉表征的最近邻对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\n- [PMSN：暹罗网络中的先验匹配，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\n- [SimCLR：视觉表征对比学习的简单框架，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- [SimMIM：图像掩码建模的简单框架，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\n- [SimSiam：探索简单的暹罗表征学习，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\n- [SMoG：通过同步动量分组进行无监督视觉表征学习，2022年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\n- [SwAV：通过对比聚类分配进行视觉特征的无监督学习，M. Caron，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\n- [TiCo：用于自监督视觉表征学习的变换不变性和协方差对比，2022年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\n- [VICReg：用于自监督学习的方差-不变性-协方差正则化，Bardes, A. 
等人，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\n- [VICRegL：局部视觉特征的自监督学习，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2025-01-07T09:43:17",{"id":191,"version":192,"summary_zh":193,"released_at":194},314585,"v1.5.15","## 变更\n\n### 新增变换\n- 添加相位偏移变换（[#1714](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1714)），由 @pearguacamole 实现\n- 添加 FDATransform（[#1734](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1734)），由 @vectorvp 实现\n\n### 切换到与版本无关的 torchvision 变换\n- 如果 torchvision 变换 v2 可用，则使用 v2；否则使用 torchvision 变换 v1。详情请参阅 [此评论](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1547#issuecomment-2124050272)。\n- 为 DetCon 添加变换，并为 torchvision.transforms.v2 添加 MultiViewTransformV2（[#1737](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1737)）\n\n### 类型标注、命名及文档字符串改进\n- 对 `data\u002F_utils`（[#1740](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1740)）、`data\u002F_helpers`（[#1742](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1742)）和 `tests\u002Fmodels`（[#1744](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1744)）进行类型标注，由 @vectorvp 完成\n- 清理 lightly\u002Fdata 子包中的文档字符串（[#1741](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1741)），由 @ChiragAgg5k 完成\n- 重构：更新命名并移除 AmplitudeRescaleTransform 中未使用的包（[#1732](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1732)），由 @vectorvp 完成\n\n### 其他\n- 修复 DINOProjectionHead 的 BatchNorm 处理（[#1729](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1729)）\n- 为带有分割掩码的池化操作添加掩码平均池化（DetCon）（[#1739](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1739)）\n\n非常感谢所有贡献者！\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：通过冗余减少实现自监督学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：自监督学习的新方法，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [FastSiam：单 GPU 上的资源高效自监督学习，2022年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自编码器是可扩展的视觉学习器，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动量对比，2019年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：基于最近邻的视觉表征对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\n- [PMSN：暹罗网络中的先验匹配，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210","2024-11-29T08:08:48",{"id":196,"version":197,"summary_zh":198,"released_at":199},314586,"v1.5.14","## 变更\n\n### 新增变换\n- [添加 RFFT2D 和 IRFFT2D 变换](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcommit\u002Fcb4f1f68ad6967c04500d540b16a22427ba211f8) @snehilchatterjee\n- [添加 RandomFrequencyMaskTransform](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcommit\u002F9da0a244776bf24b6486f9ce77b813b6953870e7) @payo101 \n- [添加 
GaussianMixtureMaskTransform](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcommit\u002Ffe7664a3959ba4bca2099d53863d4592a38fb396) @snehilchatterjee \n- [添加 AmplitudeRescaleTransform](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcommit\u002F9578268ee32465bac357196b2af043f1c130bb2e) @payo101 \n- 更好地支持 torchvision.transforms v1 和 v2，且不会产生警告或错误。\n\n### 新增和更新的文档字符串\n- 由 @Prathamesh010、@ayush22iitbhu、@ChiragAgg5k 和 @HarshitVashisht11 完成的多项改进。\n\n### 文档改进\n- README.md 的改进由 @bhargavshirin、@kushal34712、@eltociear、@Mefisto04 和 @ayush22iitbhu 完成。\n- 其他文档和教程部分的改进由 @jizhang02 完成。\n- 修复了 Windows 上的示例代码 @snehilchatterjee。\n- 改进了 CONTRIBUTING.md 文件 @Prathamesh010。\n- 添加了返回顶部按钮，以方便导航 @hackit-coder。\n\n### 更多且更好的类型注解\n- 对所有 Python 版本进行类型检查。\n- 为 serve.py 添加类型注解 @ishaanagw。\n- 清理 data 子包中的 _image.py 和 _utils.py 文件 @ChiragAgg5k。\n\n### 更好的格式化\n- 将类和公共函数移至文件顶部 @fadkeabhi 和 @SauravMaheshkar。\n\n### 其他\n- [在基准测试结束时以 Markdown 表格形式打印聚合指标](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcommit\u002Fb6955fd40b9b8e2f11cbd6d291820281ed47ba3a) @EricLiclair。\n\n非常感谢所有贡献者！\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：通过冗余减少实现自监督学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：一种新的自监督学习方法，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [FastSiam：单 GPU 上的资源高效自监督学习，2022 年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自动编码器是可扩展的视觉学习器，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动量对比法，2019 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：视觉表征的最近邻对比学习","2024-11-07T08:33:15",{"id":201,"version":202,"summary_zh":203,"released_at":204},314587,"v1.5.13","- Support python 3.12, thanks @MalteEbner \r\n- update cosine warmup scheduler, thanks @guarin \r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 
2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-09-24T08:48:24",{"id":206,"version":207,"summary_zh":208,"released_at":209},314588,"v1.5.12","- Use TiCoTransform Everywhere\r\n- Refactor DINOLoss to not use center module\r\n- Add CenterCrop to val transform\r\n\r\n#### Dependencies\r\n- Make library compatible with torch 1.10, torchvision 0.11, and pytorch lightning 1.6 (by using [uv](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv)), thanks @guarin \r\n\r\n#### Docs\r\n- Add notebooks, thanks @SauravMaheshkar\r\n- Add Timm Backbone Tutorial, thanks @SauravMaheshkar\r\n- Further docs and tutorial improvements\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 
2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-08-20T10:04:51",{"id":211,"version":212,"summary_zh":213,"released_at":214},314589,"v1.5.11","\r\n- Added IBOTPatchLoss, KoLeoLoss and block masking, thanks @guarin\r\n- Allow learnable positional embeddings and boolean masking in masked vision transformer\r\n- Refactor IJEPA to use timm, thanks @radiradev\r\n\r\n#### Dependencies\r\n- Allow NumPy 2, thanks @adamjstewart\r\n- Removed lightning-bolts dependency\r\n\r\n#### Docs\r\n- Add finetuning tutorial, thanks @SauravMaheshkar\r\n- Fix MoCo link in DenseCL docs and further docs and tutorial improvements\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 
2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-08-07T06:51:05",{"id":216,"version":217,"summary_zh":218,"released_at":219},314590,"v1.5.10","- Adds the DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training method. See the [docs](https:\u002F\u002Fdocs.lightly.ai\u002Fself-supervised-learning\u002Fexamples\u002Fdensecl.html). 
\r\n- Add TiCoTransform, thanks @radiradev!\r\n- Improvements to the pre-commit hooks, thanks @SauravMaheshkar!\r\n- Fix memory bank issue when using `gather_distributed=True` and training on a single GPU\r\n- Fix student head update in DINO benchmark\r\n- Various improvements to MaskedVisionTransformer\r\n- Renaming of Lightly SSL to Lightly**SSL**\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. 
al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-07-25T09:25:21",{"id":221,"version":222,"summary_zh":223,"released_at":224},314591,"v1.5.9","- Lightly is now compatible with pydantic2\r\n- migrated to pyproject\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. 
al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-07-11T09:41:16",{"id":226,"version":227,"summary_zh":228,"released_at":229},314592,"v1.5.8","Two changes w.r.t numpy version 2:\r\n- Make lightly itself support numpy version 2: https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1561\r\n- Disallow numpy 2.0 in the requirements, as torchvision is not yet compatible with numpy 2: https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1562\r\n\r\nFor more context, see https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fissues\u002F1558\r\n\r\n### Models\r\n- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\r\n- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\r\n- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\r\n- [DCL: Decoupled Contrastive Learning, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\r\n- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\r\n- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\r\n- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\r\n- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\r\n- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\r\n- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\r\n- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\r\n- [PMSN: Prior Matching for Siamese Networks, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\r\n- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\r\n- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\r\n- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\r\n- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\r\n- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\r\n- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\r\n- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. 
al, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04906)\r\n- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01571)","2024-06-24T16:01:04",{"id":231,"version":232,"summary_zh":233,"released_at":234},314578,"v1.15.22","## 变更内容\n* 通过 @misrasaurabh1 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1847 中对 DetConBLoss 中的 `_same_mask` 函数进行优化，使其速度提升 25%。\n* 修复 DINOv2 的若干 bug。\n* 将 DINOv2 添加到文档中，并附带示例。\n* @RDR2Blackwater 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1850 中修复了 vicregl_loss 张量引发的 RuntimeError。\n* 添加 iBOTTransform。\n* 实现 iBOT 算法。\n* 将 iBOT 添加到文档中，并附带示例。\n* 修复 ResNet 基准测试中的 KNN 数据类型问题。\n* 修复损失函数的 CUDA 测试。\n* 修复 OnlineClassifier 的相关问题。\n* 支持 GaussianBlur 的张量输入。\n* 使用 ToTensor 辅助函数，由 @ajtritt 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1862 中完成。\n* 在数据增强中将旋转操作移至翻转之后，由 @KylevdLangemheen 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1865 中完成。\n* 改进开发环境的 pre-commit 钩子。\n* 添加 iBOT 基准测试。\n\n## 新贡献者\n* @misrasaurabh1 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1847 中完成了首次贡献。\n* @RDR2Blackwater 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1850 中完成了首次贡献。\n* @ajtritt 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1862 中完成了首次贡献。\n* @KylevdLangemheen 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1865 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcompare\u002Fv1.15.21...v1.15.22\n\n衷心感谢所有贡献者！\n\n\u003C!-- STOP DISCORD MESSAGE -->\n\u003C!-- 上述注释定义了发布消息在 Discord 中显示的范围 -->\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：基于冗余减少的自监督学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：一种新的自监督学习方法，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [DINOv2：无监督学习鲁棒视觉特征，2023 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07193)\n- [FastSiam：单 GPU 上的资源高效自监督学习，2022 年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [iBOT：使用在线分词器进行图像 BERT 预训练，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07832)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自编码器是可扩展的视觉学习器，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动量对比，2019 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：基于最近邻对比的视觉表征学习","2025-07-22T14:39:47",{"id":236,"version":237,"summary_zh":238,"released_at":239},314579,"v1.15.21","## 变更内容\n* 添加 DINOv2 ViT 基准实现\n* 将 Meta 发表的论文 [联合嵌入 vs 重建：自监督学习中潜在空间预测的可证明优势，2025](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12477) 添加到“Lightly 在研究中”。感谢他们的贡献！\n* @yvesyue 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1819 中为基准测试添加了 `seed_everything`，以提高实验的可重复性。\n* @yvesyue 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1820 中修复了针对较新版本 NumPy 的 MyPy 类型检查问题。\n* @yvesyue 在 
https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1827 中修复了 DCLLoss 负项聚合问题，并添加了基于循环的参考测试。\n* 修复了 KNN 基准评估中的错误。\n* 修复了余弦调度器预热轮次中的错误。\n* 由于较新版本 TIMM 的接口变更，修复了 `MaskedCausalBlock.__init__() got an unexpected keyword argument 'proj_bias'` 的问题。\n* 由于较新版本 Torchvision 的接口变更，修复了 `AddGridTransform`。\n* 修改 `format` 和 `format-check`，使其仅针对 Python 目录。\n* 移除了视频下载功能。\n* 移除了未使用的下载函数，并添加了类型注解。\n\n## 新贡献者\n* @yvesyue 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1819 中做出了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fcompare\u002Fv1.5.20...v1.15.21\n\n非常感谢我们的贡献者！\n\n\u003C!-- STOP DISCORD MESSAGE -->\n\u003C!-- 上述注释定义了发布消息在 Discord 中显示的范围 -->\n\n### 模型\n- [AIM：大型自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：通过冗余减少进行自监督学习，2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：一种新的自监督学习方法，2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [DINOv2：无监督学习鲁棒视觉特征，2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07193)\n- [FastSiam：单 GPU 上的资源高效自监督学习，2022](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自编码器是可扩展的视觉学习器，2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表示学习的动量对比，2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：视觉表示的最近邻对比学习，2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\n- [PMSN：先验匹配 f","2025-06-11T15:31:17",{"id":241,"version":242,"summary_zh":243,"released_at":244},314580,"v1.5.20","## 变更\n* 添加了 DINO ViT 基准测试\n* 修复了 BTLoss：通过 @adosar 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1806 中的修改，确保其对仿射变换的不变性\n* 测试了 BTLoss：使用 `torch.allclose` 的默认值，由 @adosar 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1810 中完成\n* 为 k-近邻预测添加了更详细的文档字符串，由 @maxprogrammer007 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1812 中完成\n* 移除了 `CosineWarmUpScheduler` 中的 `verbose` 参数\n* 添加了 [LightlyTrain](https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly-train) 参考\n* 将 `lightly-train` 命令重命名为 `lightly-ssl-train`\n\n非常感谢我们的贡献者！\n\n\u003C!-- STOP DISCORD MESSAGE -->\n\u003C!-- 上述注释定义了发布消息在 Discord 中显示的范围 -->\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：基于冗余减少的自监督学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：一种新的自监督学习方法，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [FastSiam：单 GPU 上的资源高效自监督学习，2022年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自编码器是可扩展的视觉学习器，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- 
[MSN：用于标签高效学习的掩码暹罗网络，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动量对比法，2019年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：视觉表征的最近邻对比学习，2021年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\n- [PMSN：暹罗网络中的先验匹配，2022年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\n- [SimCLR：视觉表征对比学习的简单框架，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- [SimMIM：用于掩码图像建模的简单框架，2021年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\n- [SimSiam：探索简单的暹罗表征学习，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\n- [SMoG：通过同步动量分组进行无监督视觉表征学习，2022年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\n- [SwAV：通过对比聚类分配进行视觉特征的无监督学习，M. Caron，2020年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\n- [TiCo：用于自监督视觉表征学习的变换不变性和协方差对比，2022年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\n- [VICReg：方差-不变性-协方差","2025-04-22T14:11:02",{"id":246,"version":247,"summary_zh":248,"released_at":249},314581,"v1.5.19","## 变更\n* 文档：由 @adosar 在 https:\u002F\u002Fgithub.com\u002Flightly-ai\u002Flightly\u002Fpull\u002F1789 中修复了 `NTXentLoss` 的 forward 方法与文档字符串之间的不一致。\n* 更新文档中的 NNCLR 模型示例。\n* 更新文档中的 BYOL 模型示例。\n* 更新文档中的 DINO 模型示例。\n* 更新文档中的 SimSiam 模型示例。\n* 为 DetCon 的池化操作添加了额外的测试。\n* 修复 MAE 示例中 Lightning Trainer 策略的问题，并在基准测试中支持新的 Lightning 版本。\n* 添加了 MACL（模型感知对比学习）的损失函数。\n* 更新了 CONTRIBUTING 指南和 GitHub Actions。\n* 修复了损失测试中的多个问题。\n\n### 模型\n- [AIM：大规模自回归图像模型的可扩展预训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.08541.pdf)\n- [Barlow Twins：通过冗余减少进行自监督学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03230)\n- [Bootstrap your own latent：一种新的自监督学习方法，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07733)\n- [DCL：解耦对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06848)\n- [DenseCL：用于自监督视觉预训练的密集对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.09157)\n- [DINO：自监督视觉 Transformer 中涌现的特性，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14294)\n- [FastSiam：单 GPU 上资源高效的自监督学习，2022 年](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-16788-1_4)\n- [I-JEPA：基于联合嵌入预测架构的图像自监督学习，2023 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08243)\n- [MAE：掩码自编码器是可扩展的视觉学习者，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06377)\n- [MSN：用于标签高效学习的掩码暹罗网络，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.07141)\n- [MoCo：用于无监督视觉表征学习的动量对比，2019 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.05722)\n- [NNCLR：视觉表征的最近邻对比学习，2021 年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14548.pdf)\n- [PMSN：暹罗网络中的先验匹配，2022 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07277)\n- [SimCLR：视觉表征对比学习的简单框架，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.05709)\n- [SimMIM：掩码图像建模的简单框架，2021 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09886)\n- [SimSiam：探索简单的暹罗表征学习，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.10566)\n- [SMoG：通过同步动量分组进行无监督视觉表征学习，2022 年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06167.pdf)\n- [SwAV：通过对比聚类分配进行视觉特征的无监督学习，M. Caron，2020 年](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09882)\n- [TiCo：用于自监督视觉表征学习的变换不变性和协方差对比，2022 年](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.10698.pdf)\n- [VICReg：用于自监督学习的方差-不变性-协方差正则化，Bardes, A. 等人，2022 年](https:\u002F\u002Farxiv.org\u002F","2025-02-18T08:55:07"]
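以上各版本说明反复提到 lightly 的模块化构建块（损失函数、投影头、数据变换等）。下面附一个最小示例草图，演示这些构建块如何以 SimCLR 风格组合成一个预训练循环。这只是基于上述版本所描述组件的一般用法写出的示意：`SimCLRProjectionHead`、`SimCLRTransform`、`NTXentLoss` 和 `LightlyDataset` 均为 lightly 提供的组件，但具体签名请以所用版本的文档为准；`path/to/images` 仅为占位路径。

```python
# 最小示例草图（非官方代码）：用 lightly 的构建块以 SimCLR 风格预训练 ResNet 骨干。
import torch
import torchvision
from torch import nn

from lightly.data import LightlyDataset
from lightly.loss import NTXentLoss
from lightly.models.modules import SimCLRProjectionHead
from lightly.transforms import SimCLRTransform


class SimCLR(nn.Module):
    """骨干网络 + 投影头：lightly 各方法共用的组合模式。"""

    def __init__(self, backbone: nn.Module) -> None:
        super().__init__()
        self.backbone = backbone
        # ResNet-18 的特征维度为 512，投影到 128 维嵌入空间
        self.projection_head = SimCLRProjectionHead(512, 512, 128)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x).flatten(start_dim=1)
        return self.projection_head(features)


# 去掉分类头，仅保留 ResNet-18 的卷积主干
resnet = torchvision.models.resnet18()
backbone = nn.Sequential(*list(resnet.children())[:-1])
model = SimCLR(backbone)

# SimCLRTransform 为每张图片生成两个随机增强视图
transform = SimCLRTransform(input_size=224)
dataset = LightlyDataset("path/to/images", transform=transform)  # 占位路径
dataloader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True)

criterion = NTXentLoss(temperature=0.5)  # 对比损失（NT-Xent）
optimizer = torch.optim.SGD(model.parameters(), lr=0.06)

for (x0, x1), _, _ in dataloader:  # LightlyDataset 返回 (视图列表, 标签, 文件名)
    z0, z1 = model(x0), model(x1)
    loss = criterion(z0, z1)  # 拉近同一图片的两个视图，推远不同图片
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

如需多卡训练，`NTXentLoss` 还提供 `gather_distributed` 等可选参数（上文 v1.5.10 的说明即修复了相关的 memory bank 问题），同样请以对应版本的文档为准。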