[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SforAiDl--KD_Lib":3,"tool-SforAiDl--KD_Lib":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 
100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析流水线的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器处理而优化。",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":80,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":32,"env_os":93,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":108,"github_topics":110,"view_count":121,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":122,"updated_at":123,"faqs":124,"releases":154},1179,"SforAiDl\u002FKD_Lib","KD_Lib","A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.","KD_Lib 是一个基于 PyTorch 的模型压缩工具库，专注于知识蒸馏、剪枝和量化等技术，帮助开发者更高效地优化和部署深度学习模型。它简化了这些复杂技术的实现流程，使研究人员能够快速验证新方法并进行性能对比。适用于需要提升模型效率、减小模型体积或适应边缘设备的开发者和研究人员。工具内置多种经典算法实现，并提供直观的训练与评估接口，支持自定义模型和数据集。其模块化设计和清晰文档让使用门槛更低，是模型优化领域的实用助手。","\u003Ch1 align=\"center\">KD-Lib\u003C\u002Fh1>\n\u003Ch3 align=\"center\">A PyTorch model compression library containing easy-to-use methods for knowledge distillation, pruning, and quantization\u003C\u002Fh3>\n\n\u003Cdiv align='center'>\n\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSforAiDl_KD_Lib_readme_c39be4c40c89.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkd-lib)\n[![Tests](https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Factions\u002Fworkflows\u002Fpython-package-test.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Factions\u002Fworkflows\u002Fpython-package-test.yml)\n[![Docs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSforAiDl_KD_Lib_readme_13d664e1afd7.png)](https:\u002F\u002Fkd-lib.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n**[Documentation](https:\u002F\u002Fkd-lib.readthedocs.io\u002Fen\u002Flatest\u002F)** | **[Tutorials](https:\u002F\u002Fkd-lib.readthedocs.io\u002Fen\u002Flatest\u002Fusage\u002Ftutorials\u002Findex.html)**\n\n\u003C\u002Fdiv>\n\n## Installation\n\n### From source (recommended)\n\n```shell\n\ngit clone https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib.git\ncd KD_Lib\npython setup.py install\n\n```\n\n### From PyPI\n\n```shell\n\npip install KD-Lib\n\n```\n\n## Example usage\n\nTo implement the most basic version of knowledge distillation from [Distilling the Knowledge in a Neural Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02531) and plot loss
curves:\n\n```python\n\nimport torch\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom KD_Lib.KD import VanillaKD\n\n# This part is where you define your datasets, dataloaders, models and optimizers\n\ntrain_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=True,\n        download=True,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\ntest_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=False,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\nteacher_model = \u003Cyour model>\nstudent_model = \u003Cyour model>\n\nteacher_optimizer = optim.SGD(teacher_model.parameters(), 0.01)\nstudent_optimizer = optim.SGD(student_model.parameters(), 0.01)\n\n# Now, this is where KD_Lib comes into the picture\n\ndistiller = VanillaKD(teacher_model, student_model, train_loader, test_loader, \n                      teacher_optimizer, student_optimizer)  \ndistiller.train_teacher(epochs=5, plot_losses=True, save_model=True)    # Train the teacher network\ndistiller.train_student(epochs=5, plot_losses=True, save_model=True)    # Train the student network\ndistiller.evaluate(teacher=False)                                       # Evaluate the student network\ndistiller.get_parameters()                                              # A utility function to get the number of \n                                                                        # parameters in the teacher and the student network\n\n```\n\nTo train a cohort of two student models in an online fashion using the framework in [Deep Mutual Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00384)\nand log training details to TensorBoard: \n\n```python\n\nimport torch\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom KD_Lib.KD import DML\nfrom KD_Lib.models import ResNet18, ResNet50          # To use models packaged in KD_Lib\n\n# Define your datasets, dataloaders, models and optimizers\n\ntrain_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=True,\n        download=True,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\ntest_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=False,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\nstudent_params = [4, 4, 4, 4, 4]\nstudent_model_1 = ResNet50(student_params, 1, 10)\nstudent_model_2 = ResNet18(student_params, 1, 10)\n\nstudent_cohort = [student_model_1, student_model_2]\n\nstudent_optimizer_1 = optim.SGD(student_model_1.parameters(), 0.01)\nstudent_optimizer_2 = optim.SGD(student_model_2.parameters(), 0.01)\n\nstudent_optimizers = [student_optimizer_1, student_optimizer_2]\n\n# Now, this is where KD_Lib comes into the picture \n\ndistiller = DML(student_cohort, train_loader, test_loader, student_optimizers, log=True,
logdir=\".\u002Flogs\")\n\ndistiller.train_students(epochs=5)\ndistiller.evaluate()\ndistiller.get_parameters()\n\n```\n\n## Methods Implemented\n\nSome benchmark results can be found in the [logs](.\u002Flogs.rst) file.\n\n|  Paper \u002F Method                                           |  Link                            | Repository (KD_Lib\u002F) |\n| ----------------------------------------------------------|----------------------------------|----------------------|\n| Distilling the Knowledge in a Neural Network              | https:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02531 | KD\u002Fvision\u002Fvanilla    |\n| Improved Knowledge Distillation via Teacher Assistant     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.03393 | KD\u002Fvision\u002FTAKD       |\n| Relational Knowledge Distillation                         | https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.05068 | KD\u002Fvision\u002FRKD        |\n| Distilling Knowledge from Noisy Teachers                  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09650 | KD\u002Fvision\u002Fnoisy      |\n| Paying More Attention To The Attention                    | https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.03928 | KD\u002Fvision\u002Fattention  |\n| Revisit Knowledge Distillation: a Teacher-free \u003Cbr> Framework  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11723 |KD\u002Fvision\u002Fteacher_free|\n| Mean Teachers are Better Role Models                      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01780 |KD\u002Fvision\u002Fmean_teacher|\n| Knowledge Distillation via Route Constrained \u003Cbr> Optimization | https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.09149 | KD\u002Fvision\u002FRCO        |\n| Born Again Neural Networks                                | https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.04770 | KD\u002Fvision\u002FBANN       |\n| Preparing Lessons: Improve Knowledge Distillation \u003Cbr> with Better Supervision | https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07471 | KD\u002Fvision\u002FKA |\n| Improving Generalization Robustness with Noisy \u003Cbr> Collaboration in Knowledge Distillation | https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.05057 | KD\u002Fvision\u002Fnoisy|\n| Distilling Task-Specific Knowledge from BERT into \u003Cbr> Simple Neural Networks | https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.12136 | KD\u002Ftext\u002FBERT2LSTM |\n| Deep Mutual Learning                                      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00384 | KD\u002Fvision\u002FDML        |\n| The Lottery Ticket Hypothesis: Finding Sparse, \u003Cbr> Trainable Neural Networks | https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.03635 | Pruning\u002Flottery_tickets|\n| Regularizing Class-wise Predictions via \u003Cbr> Self-knowledge Distillation | https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.13964 | KD\u002Fvision\u002FCSDK |\n\n\u003Cbr>\n\nPlease cite our pre-print if you find `KD-Lib` useful in any way :)\n\n```bibtex\n\n@misc{shah2020kdlib,\n  title={KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization}, \n  author={Het Shah and Avishree Khare and Neelay Shah and Khizir Siddiqui},\n  year={2020},\n  eprint={2011.14691},\n  archivePrefix={arXiv},\n  primaryClass={cs.LG}\n}\n\n```\n","\u003Ch1 align=\"center\">KD-Lib\u003C\u002Fh1>\n\u003Ch3 align=\"center\">一个包含知识蒸馏、剪枝和量化等易用方法的 PyTorch 模型压缩库\u003C\u002Fh3>\n\n\u003Cdiv 
align='center'>\n\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSforAiDl_KD_Lib_readme_c39be4c40c89.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkd-lib)\n[![测试](https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Factions\u002Fworkflows\u002Fpython-package-test.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Factions\u002Fworkflows\u002Fpython-package-test.yml)\n[![文档](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSforAiDl_KD_Lib_readme_13d664e1afd7.png)](https:\u002F\u002Fkd-lib.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n**[文档](https:\u002F\u002Fkd-lib.readthedocs.io\u002Fen\u002Flatest\u002F)** | **[教程](https:\u002F\u002Fkd-lib.readthedocs.io\u002Fen\u002Flatest\u002Fusage\u002Ftutorials\u002Findex.html)**\n\n\u003C\u002Fdiv>\n\n## 安装\n\n### 从源码安装（推荐）\n\n```shell\n\ngit clone https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib.git\ncd KD_Lib\npython setup.py install\n\n```\n\n### 从 PyPI 安装\n\n```shell\n\npip install KD-Lib\n\n```\n\n## 示例用法\n\n要实现来自 [Distilling the Knowledge in a Neural Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02531) 的最基础的知识蒸馏版本，并绘制损失曲线：\n\n```python\n\nimport torch\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom KD_Lib.KD import VanillaKD\n\n# 这一部分是你定义数据集、数据加载器、模型和优化器的地方\n\ntrain_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=True,\n        download=True,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\ntest_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=False,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\nteacher_model = \u003C你的模型>\nstudent_model = \u003C你的模型>\n\nteacher_optimizer = optim.SGD(teacher_model.parameters(), 0.01)\nstudent_optimizer = optim.SGD(student_model.parameters(), 0.01)\n\n# 现在，KD_Lib 就派上用场了\n\ndistiller = VanillaKD(teacher_model, student_model, train_loader, test_loader, \n                      teacher_optimizer, student_optimizer)  \ndistiller.train_teacher(epochs=5, plot_losses=True, save_model=True)    # 训练教师网络\ndistiller.train_student(epochs=5, plot_losses=True, save_model=True)    # 训练学生网络\ndistiller.evaluate(teacher=False)                                       # 评估学生网络\ndistiller.get_parameters()                                              # 一个用于获取教师和学生网络参数数量的实用函数\n\n```\n\n要使用 [Deep Mutual Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00384) 中的框架，以在线方式训练两个学生模型，并将训练细节记录到 TensorBoard 中：\n\n```python\n\nimport torch\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom KD_Lib.KD import DML\nfrom KD_Lib.models import ResNet18, ResNet50          # 使用 KD_Lib 中打包的模型\n\n# 定义你的数据集、数据加载器、模型和优化器\n\ntrain_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=True,\n        download=True,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\ntest_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=False,\n        transform=transforms.Compose(\n            [transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\nstudent_params = [4, 4, 4, 4, 4]\nstudent_model_1 = ResNet50(student_params, 1, 10)\nstudent_model_2 = ResNet18(student_params, 1, 10)\n\nstudent_cohort = [student_model_1, student_model_2]\n\nstudent_optimizer_1 = optim.SGD(student_model_1.parameters(), 0.01)\nstudent_optimizer_2 = optim.SGD(student_model_2.parameters(), 0.01)\n\nstudent_optimizers = [student_optimizer_1, student_optimizer_2]\n\n# 现在，KD_Lib 就派上用场了\n\ndistiller = DML(student_cohort, train_loader, test_loader, student_optimizers, log=True, logdir=\".\u002Flogs\")\n\ndistiller.train_students(epochs=5)\ndistiller.evaluate()\ndistiller.get_parameters()\n\n```\n\n## 已实现的方法\n\n部分基准测试结果可在 [日志](.\u002Flogs.rst) 文件中找到。\n\n| 论文 \u002F 方法                                           | 链接                            | 仓库 (KD_Lib\u002F) |\n| ----------------------------------------------------------|----------------------------------|----------------------|\n| 从神经网络中提炼知识              | https:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02531 | KD\u002Fvision\u002Fvanilla    |\n| 通过教师助手改进知识蒸馏     | https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.03393 | KD\u002Fvision\u002FTAKD       |\n| 关系知识蒸馏                         | https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.05068 | KD\u002Fvision\u002FRKD        |\n| 从噪声教师处蒸馏知识                  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09650 | KD\u002Fvision\u002Fnoisy      |\n| 更加关注注意力机制                    | https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.03928 | KD\u002Fvision\u002Fattention  |\n| 重访知识蒸馏：无教师框架  | https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11723 |KD\u002Fvision\u002Fteacher_free|\n| 平均教师是更好的榜样                      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01780 |KD\u002Fvision\u002Fmean_teacher|\n| 通过路径约束优化进行知识蒸馏 | https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.09149 | KD\u002Fvision\u002FRCO        |\n| 再生神经网络                                | https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.04770 | KD\u002Fvision\u002FBANN       |\n| 准备课程：通过更好的监督改进知识蒸馏 | https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07471 | KD\u002Fvision\u002FKA |\n| 通过知识蒸馏中的噪声协作提升泛化鲁棒性 | https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.05057 | KD\u002Fvision\u002Fnoisy|\n| 将 BERT 中的任务特定知识蒸馏到简单神经网络 | https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.12136 | KD\u002Ftext\u002FBERT2LSTM |\n| 深度互学习                                      | https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00384 | KD\u002Fvision\u002FDML        |\n| 彩票假说：寻找稀疏且可训练的神经网络 | https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.03635 | Pruning\u002Flottery_tickets|\n| 通过自我知识蒸馏正则化类别级预测 | https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.13964 | KD\u002Fvision\u002FCSDK |\n\n\u003Cbr>\n\n如果您在任何方面觉得 `KD-Lib` 有用，请引用我们的预印本 :)\n\n```bibtex\n\n@misc{shah2020kdlib,\n  title={KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization}, \n  author={Het Shah and Avishree Khare and Neelay Shah and Khizir Siddiqui},\n  year={2020},\n  eprint={2011.14691},\n  archivePrefix={arXiv},\n  primaryClass={cs.LG}\n}\n\n```","# KD-Lib 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- Python 3.6 或更高版本\n- PyTorch 1.0 或更高版本（建议使用 1.8+）\n\n### 前置依赖\n- `torchvision`：用于数据集加载和预处理\n- `numpy`：用于数值计算\n- `matplotlib`（可选）：用于绘制损失曲线\n\n> 推荐使用国内镜像源加速安装，例如：\n```bash\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple KD-Lib\n```\n\n## 安装步骤\n\n### 从源码安装（推荐）\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib.git\ncd KD_Lib\npython setup.py install\n```\n\n### 从 PyPI 安装\n```bash\npip install
KD-Lib\n```\n\n## 基本使用\n\n### 知识蒸馏示例（VanillaKD）\n```python\nimport torch\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom KD_Lib.KD import VanillaKD\n\n# 数据加载器\ntrain_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=True,\n        download=True,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\ntest_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=False,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\n# 定义教师模型和学生模型（需自行替换为实际模型）\nteacher_model = \u003Cyour model>\nstudent_model = \u003Cyour model>\n\n# 定义优化器\nteacher_optimizer = optim.SGD(teacher_model.parameters(), 0.01)\nstudent_optimizer = optim.SGD(student_model.parameters(), 0.01)\n\n# 初始化蒸馏器\ndistiller = VanillaKD(teacher_model, student_model, train_loader, test_loader, \n                      teacher_optimizer, student_optimizer)\n\n# 训练教师模型\ndistiller.train_teacher(epochs=5, plot_losses=True, save_model=True)\n\n# 训练学生模型\ndistiller.train_student(epochs=5, plot_losses=True, save_model=True)\n\n# 评估学生模型\ndistiller.evaluate(teacher=False)\n\n# 查看模型参数量\ndistiller.get_parameters()\n```\n\n### 深度互学习示例（DML）\n```python\nimport torch\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom KD_Lib.KD import DML\nfrom KD_Lib.models import ResNet18, ResNet50\n\n# 数据加载器\ntrain_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=True,\n        download=True,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\ntest_loader = torch.utils.data.DataLoader(\n    datasets.MNIST(\n        \"mnist_data\",\n        train=False,\n        transform=transforms.Compose(\n            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n        ),\n    ),\n    batch_size=32,\n    shuffle=True,\n)\n\n# 定义学生模型\nstudent_params = [4, 4, 4, 4, 4]\nstudent_model_1 = ResNet50(student_params, 1, 10)\nstudent_model_2 = ResNet18(student_params, 1, 10)\n\nstudent_cohort = [student_model_1, student_model_2]\n\n# 定义优化器\nstudent_optimizer_1 = optim.SGD(student_model_1.parameters(), 0.01)\nstudent_optimizer_2 = optim.SGD(student_model_2.parameters(), 0.01)\n\nstudent_optimizers = [student_optimizer_1, student_optimizer_2]\n\n# 初始化蒸馏器\ndistiller = DML(student_cohort, train_loader, test_loader, student_optimizers, log=True, logdir=\".\u002Flogs\")\n\n# 训练学生模型\ndistiller.train_students(epochs=5)\n\n# 评估\ndistiller.evaluate()\n\n# 查看模型参数量\ndistiller.get_parameters()\n```","某高校计算机视觉实验室正在开发一个用于实时图像识别的轻量级模型，以部署在边缘设备上。团队需要在保持较高准确率的同时，减少模型的计算量和内存占用。\n\n### 没有 KD_Lib 时  \n- 需要手动实现知识蒸馏、剪枝和量化算法，代码重复度高且容易出错  \n- 每次尝试新方法都需要从头搭建训练流程，耗时且难以验证效果  \n- 缺乏统一的评估标准，不同方法之间的性能对比困难  \n- 模型压缩后的推理速度和精度难以平衡，调试成本高  \n\n### 使用 KD_Lib 后  \n- 提供了现成的知识蒸馏、剪枝和量化模块，节省大量开发时间  \n- 支持快速切换不同压缩策略，便于实验对比和优化  \n- 内置评估工具可直接获取模型参数量、推理速度和准确率等关键指标  \n- 通过简单配置即可实现模型轻量化，兼顾性能与效率  \n\nKD_Lib 有效提升了模型压缩的效率和效果，使团队能够更专注于算法创新而非基础实现。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSforAiDl_KD_Lib_c917b4c3.png","SforAiDl","Society for Artificial Intelligence and Deep 
Learning","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSforAiDl_446a2130.png","",null,"SforAiDL","www.saidl.in","https:\u002F\u002Fgithub.com\u002FSforAiDl",[81,85],{"name":82,"color":83,"percentage":84},"Python","#3572A5",98.8,{"name":86,"color":87,"percentage":88},"Makefile","#427819",1.2,651,61,"2026-04-03T09:27:29","MIT","Linux, macOS, Windows","需要 NVIDIA GPU，显存 8GB+，CUDA 11.7+","16GB+",{"notes":97,"python":98,"dependencies":99},"建议使用 conda 管理环境，首次运行需下载约 5GB 模型文件","3.8+",[100,101,102,103,104,105,106,107],"torch>=2.0","transformers>=4.30","accelerate","torchvision","numpy","pandas","matplotlib","scikit-learn",[14,109,16],"其他",[111,112,113,114,115,116,117,118,119,120],"knowledge-distillation","model-compression","pruning","quantization","pytorch","deep-learning-library","machine-learning","data-science","benchmarking","algorithm-implementations",4,"2026-03-27T02:49:30.150509","2026-04-11T18:32:45.490333",[125,130,134,139,144,149],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},5341,"无法从 KD_Lib 导入 VanillaKD，提示 'VanillaKD' 不存在","请尝试从源码安装 KD_Lib。执行以下命令：\n```\npip install transformers\ngit clone https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib.git\ncd KD_Lib\npython setup.py install\n```\n然后使用 `from KD_Lib.KD import VanillaKD` 导入。","https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Fissues\u002F104",{"id":131,"question_zh":132,"answer_zh":133,"source_url":129},5342,"运行 `distiller.train_students(epochs=5, log=True, logdir=\".\u002FLogs\")` 报错 'log' 是意外的关键字参数","该方法不支持 'log' 参数。请移除 'log' 和 'logdir' 参数，或检查是否使用了过时的 API 版本。",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},5343,"导入 'KD_Lib.KD' 模块时报 'No module named' 错误","请确保正确安装了 KD_Lib。可以尝试从源码安装，具体步骤如下：\n```\npip install transformers\ngit clone https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib.git\ncd KD_Lib\npython setup.py install\n```\n安装完成后，使用 `from KD_Lib.KD import VanillaKD` 导入。","https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Fissues\u002F143",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},5344,"运行时出现 'RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4'","当前库主要适用于分类任务，不支持 4 维目标张量。如果用于分割任务，请修改代码以适应 3 维目标输入。","https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Fissues\u002F122",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},5345,"如何为 NLP 数据集自定义数据加载器？","KD-Lib 的 distiller 对象期望数据加载器提供两个元素：输入数据和标签。如果你的数据加载器提供了三个元素（如输入、attention mask 和标签），请修改你的数据加载器以只返回两个元素。","https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Fissues\u002F128",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},5346,"如何实现 Subclass Distillation 论文中的方法？","目前库中未直接实现该方法。由于该方法需要在教师模型中添加子类头，并依赖于模型结构，而库的设计不包含对模型结构的假设，因此未进行实现。你可以根据论文自行扩展模型结构。","https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\u002Fissues\u002F54",[155,159,163],{"id":156,"version":157,"summary_zh":76,"released_at":158},203462,"v0.0.32","2022-05-18T08:34:00",{"id":160,"version":161,"summary_zh":76,"released_at":162},203463,"v0.0.31","2022-05-15T19:42:42",{"id":164,"version":165,"summary_zh":76,"released_at":166},203464,"v0.0.30","2022-03-24T14:36:22"]