[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Cadene--pretrained-models.pytorch":3,"tool-Cadene--pretrained-models.pytorch":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":79,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":23,"env_os":98,"env_gpu":99,"env_ram":98,"env_deps":100,"category_tags":105,"github_topics":106,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":113,"updated_at":114,"faqs":115,"releases":146},742,"Cadene\u002Fpretrained-models.pytorch","pretrained-models.pytorch","Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.","pretrained-models.pytorch 是一个专为 PyTorch 框架打造的预训练卷积神经网络集合。它囊括了 ResNet、InceptionV4、NASNet、SENet 等多种主流及前沿视觉模型，让开发者无需从零训练即可直接调用强大的特征提取能力。\n\n深度学习研究常面临复现论文困难或迁移学习资源不足的挑战。pretrained-models.pytorch 通过提供统一且简洁的 API 接口（灵感源自 torchvision），有效降低了获取和使用高质量预训练模型的门槛。用户仅需少量代码即可加载模型，将精力集中在算法验证或业务应用上，而无需耗费时间在网络搭建与权重初始化等繁琐环节。\n\npretrained-models.pytorch 同样适合计算机视觉领域的研究人员和开发者，特别是那些需要快速评估不同架构性能、复现学术成果或构建基于迁移学习项目的团队。此外，库内持续更新的模型列表涵盖了从经典到最新的网络结构，并支持 ImageNet 上的精度评估数据，是探索深度学习模型能力的可靠选择。无论是入门实践还是进阶研究，pretrained-models.pytorch 都","pretrained-models.pytorch 是一个专为 PyTorch 框架打造的预训练卷积神经网络集合。它囊括了 ResNet、InceptionV4、NASNet、SENet 等多种主流及前沿视觉模型，让开发者无需从零训练即可直接调用强大的特征提取能力。\n\n深度学习研究常面临复现论文困难或迁移学习资源不足的挑战。pretrained-models.pytorch 通过提供统一且简洁的 API 接口（灵感源自 torchvision），有效降低了获取和使用高质量预训练模型的门槛。用户仅需少量代码即可加载模型，将精力集中在算法验证或业务应用上，而无需耗费时间在网络搭建与权重初始化等繁琐环节。\n\npretrained-models.pytorch 同样适合计算机视觉领域的研究人员和开发者，特别是那些需要快速评估不同架构性能、复现学术成果或构建基于迁移学习项目的团队。此外，库内持续更新的模型列表涵盖了从经典到最新的网络结构，并支持 ImageNet 上的精度评估数据，是探索深度学习模型能力的可靠选择。无论是入门实践还是进阶研究，pretrained-models.pytorch 都能提供便捷的支持。","# Pretrained models for Pytorch (Work in progress)\n\nThe goal of this repo is:\n\n- to help to reproduce research papers results (transfer learning setups for instance),\n- to access pretrained ConvNets with a unique interface\u002FAPI inspired by torchvision.\n\n\u003Ca href=\"https:\u002F\u002Ftravis-ci.org\u002FCadene\u002Fpretrained-models.pytorch\">\u003Cimg src=\"https:\u002F\u002Fapi.travis-ci.org\u002FCadene\u002Fpretrained-models.pytorch.svg?branch=master\"\u002F>\u003C\u002Fa>\n\nNews:\n- 27\u002F10\u002F2018: Fix compatibility issues, Add tests, Add travis\n- 04\u002F06\u002F2018: [PolyNet](https:\u002F\u002Fgithub.com\u002FCUHK-MMLAB\u002Fpolynet) and [PNASNet-5-Large](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00559) thanks to [Alex Parinov](https:\u002F\u002Fgithub.com\u002Fcreafz)\n- 16\u002F04\u002F2018: [SE-ResNet* and SE-ResNeXt*](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) thanks to [Alex Parinov](https:\u002F\u002Fgithub.com\u002Fcreafz)\n- 09\u002F04\u002F2018: [SENet154](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) thanks to [Alex Parinov](https:\u002F\u002Fgithub.com\u002Fcreafz)\n- 22\u002F03\u002F2018: CaffeResNet101 (good for localization with FasterRCNN)\n- 21\u002F03\u002F2018: NASNet Mobile thanks to [Veronika Yurchuk](https:\u002F\u002Fgithub.com\u002Fveronikayurchuk) and [Anastasiia](https:\u002F\u002Fgithub.com\u002FDagnyT)\n- 25\u002F01\u002F2018: DualPathNetworks thanks to [Ross Wightman](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-dpn-pretrained), Xception thanks to [T Standley](https:\u002F\u002Fgithub.com\u002Ftstandley\u002FXception-PyTorch), improved TransformImage API\n- 13\u002F01\u002F2018: `pip install pretrainedmodels`, `pretrainedmodels.model_names`, `pretrainedmodels.pretrained_settings`\n- 12\u002F01\u002F2018: `python setup.py install`\n- 08\u002F12\u002F2017: update data url (\u002F!\\ `git pull` is needed)\n- 30\u002F11\u002F2017: improve API (`model.features(input)`, `model.logits(features)`, `model.forward(input)`, `model.last_linear`)\n- 16\u002F11\u002F2017: nasnet-a-large pretrained model ported by T. Durand and R. Cadene\n- 22\u002F07\u002F2017: torchvision pretrained models\n- 22\u002F07\u002F2017: momentum in inceptionv4 and inceptionresnetv2 to 0.1\n- 17\u002F07\u002F2017: model.input_range attribut\n- 17\u002F07\u002F2017: BNInception pretrained on Imagenet\n\n## Summary\n\n- [Installation](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#installation)\n- [Quick examples](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#quick-examples)\n- [Few use cases](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#few-use-cases)\n    - [Compute imagenet logits](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#compute-imagenet-logits)\n    - [Compute imagenet validation metrics](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#compute-imagenet-validation-metrics)\n- [Evaluation on ImageNet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#evaluation-on-imagenet)\n    - [Accuracy on valset](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#accuracy-on-validation-set)\n    - [Reproducing results](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#reproducing-results)\n- [Documentation](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#documentation)\n    - [Available models](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#available-models)\n        - [AlexNet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [BNInception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#bninception)\n        - [CaffeResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#caffe-resnet)\n        - [DenseNet121](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet161](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet169](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet201](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet201](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DualPathNet68](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet92](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet98](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet107](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet113](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [FBResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#facebook-resnet)\n        - [InceptionResNetV2](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception)\n        - [InceptionV3](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception)\n        - [InceptionV4](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception)\n        - [NASNet-A-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#nasnet)\n        - [NASNet-A-Mobile](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#nasnet)\n        - [PNASNet-5-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#pnasnet)\n        - [PolyNet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#polynet)\n        - [ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext)\n        - [ResNeXt101_64x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext)\n        - [ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet18](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet34](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [SENet154](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNeXt50_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SqueezeNet1_0](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [SqueezeNet1_1](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG11](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG13](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG16](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG19](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG11_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG13_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG16_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG19_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [Xception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#xception)\n    - [Model API](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#model-api)\n        - [model.input_size](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelinput_size)\n        - [model.input_space](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelinput_space)\n        - [model.input_range](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelinput_range)\n        - [model.mean](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelmean)\n        - [model.std](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelstd)\n        - [model.features](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelfeatures)\n        - [model.logits](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modellogits)\n        - [model.forward](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelforward)\n- [Reproducing porting](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#reproducing)\n    - [ResNet*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#hand-porting-of-resnet152)\n    - [ResNeXt*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#automatic-porting-of-resnext)\n    - [Inception*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#hand-porting-of-inceptionv4-and-inceptionresnetv2)\n\n## Installation\n\n1. [python3 with anaconda](https:\u002F\u002Fwww.continuum.io\u002Fdownloads)\n2. [pytorch with\u002Fout CUDA](http:\u002F\u002Fpytorch.org)\n\n### Install from pip\n\n3. `pip install pretrainedmodels`\n\n### Install from repo\n\n3. `git clone https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch.git`\n4. `cd pretrained-models.pytorch`\n5. `python setup.py install`\n\n\n## Quick examples\n\n- To import `pretrainedmodels`:\n\n```python\nimport pretrainedmodels\n```\n\n- To print the available pretrained models:\n\n```python\nprint(pretrainedmodels.model_names)\n> ['fbresnet152', 'bninception', 'resnext101_32x4d', 'resnext101_64x4d', 'inceptionv4', 'inceptionresnetv2', 'alexnet', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'inceptionv3', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19_bn', 'vgg19', 'nasnetalarge', 'nasnetamobile', 'cafferesnet101', 'senet154',  'se_resnet50', 'se_resnet101', 'se_resnet152', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'cafferesnet101', 'polynet', 'pnasnet5large']\n```\n\n- To print the available pretrained settings for a chosen model:\n\n```python\nprint(pretrainedmodels.pretrained_settings['nasnetalarge'])\n> {'imagenet': {'url': 'http:\u002F\u002Fdata.lip6.fr\u002Fcadene\u002Fpretrainedmodels\u002Fnasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1000}, 'imagenet+background': {'url': 'http:\u002F\u002Fdata.lip6.fr\u002Fcadene\u002Fpretrainedmodels\u002Fnasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1001}}\n```\n\n- To load a pretrained models from imagenet:\n\n```python\nmodel_name = 'nasnetalarge' # could be fbresnet152 or inceptionresnetv2\nmodel = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')\nmodel.eval()\n```\n\n**Note**: By default, models will be downloaded to your `$HOME\u002F.torch` folder. You can modify this behavior using the `$TORCH_HOME` variable as follow: `export TORCH_HOME=\"\u002Flocal\u002Fpretrainedmodels\"`\n\n- To load an image and do a complete forward pass:\n\n```python\nimport torch\nimport pretrainedmodels.utils as utils\n\nload_img = utils.LoadImage()\n\n# transformations depending on the model\n# rescale, center crop, normalize, and others (ex: ToBGR, ToRange255)\ntf_img = utils.TransformImage(model) \n\npath_img = 'data\u002Fcat.jpg'\n\ninput_img = load_img(path_img)\ninput_tensor = tf_img(input_img)         # 3x400x225 -> 3x299x299 size may differ\ninput_tensor = input_tensor.unsqueeze(0) # 3x299x299 -> 1x3x299x299\ninput = torch.autograd.Variable(input_tensor,\n    requires_grad=False)\n\noutput_logits = model(input) # 1x1000\n```\n\n- To extract features (beware this API is not available for all networks):\n\n```python\noutput_features = model.features(input) # 1x14x14x2048 size may differ\noutput_logits = model.logits(output_features) # 1x1000\n```\n\n## Few use cases\n\n### Compute imagenet logits\n\n- See [examples\u002Fimagenet_logits.py](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fblob\u002Fmaster\u002Fexamples\u002Fimagenet_logits.py) to compute logits of classes appearance over a single image with a pretrained model on imagenet.\n\n```\n$ python examples\u002Fimagenet_logits.py -h\n> nasnetalarge, resnet152, inceptionresnetv2, inceptionv4, ...\n```\n\n```\n$ python examples\u002Fimagenet_logits.py -a nasnetalarge --path_img data\u002Fcat.jpg\n> 'nasnetalarge': data\u002Fcat.jpg' is a 'tiger cat' \n```\n\n### Compute imagenet evaluation metrics\n\n- See [examples\u002Fimagenet_eval.py](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fblob\u002Fmaster\u002Fexamples\u002Fimagenet_eval.py) to evaluate pretrained models on imagenet valset. \n\n```\n$ python examples\u002Fimagenet_eval.py \u002Flocal\u002Fcommon-data\u002Fimagenet_2012\u002Fimages -a nasnetalarge -b 20 -e\n> * Acc@1 82.693, Acc@5 96.13\n```\n\n\n## Evaluation on imagenet\n\n### Accuracy on validation set (single model)\n\nResults were obtained using (center cropped) images of the same size than during the training process.\n\nModel | Version | Acc@1 | Acc@5\n--- | --- | --- | ---\nPNASNet-5-Large | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) | 82.858 | 96.182\n[PNASNet-5-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#pnasnet) | Our porting | 82.736 | 95.992\nNASNet-A-Large | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) | 82.693 | 96.163\n[NASNet-A-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#nasnet) | Our porting | 82.566 | 96.086\nSENet154 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 81.32 | 95.53\n[SENet154](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | Our porting | 81.304 | 95.498\nPolyNet | [Caffe](https:\u002F\u002Fgithub.com\u002FCUHK-MMLAB\u002Fpolynet) | 81.29 | 95.75\n[PolyNet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#polynet) | Our porting | 81.002 | 95.624\nInceptionResNetV2 | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fslim) | 80.4 | 95.3\nInceptionV4 | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fslim) | 80.2 | 95.3\n[SE-ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | Our porting | 80.236 | 95.028\nSE-ResNeXt101_32x4d | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 80.19 | 95.04\n[InceptionResNetV2](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception) | Our porting | 80.170 | 95.234\n[InceptionV4](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception) | Our porting | 80.062 | 94.926\n[DualPathNet107_5k](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | Our porting | 79.746 | 94.684\nResNeXt101_64x4d | [Torch7](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt) | 79.6 | 94.7\n[DualPathNet131](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | Our porting | 79.432 | 94.574\n[DualPathNet92_5k](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | Our porting | 79.400 | 94.620\n[DualPathNet98](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | Our porting | 79.224 | 94.488\n[SE-ResNeXt50_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | Our porting | 79.076 | 94.434\nSE-ResNeXt50_32x4d | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 79.03 | 94.46\n[Xception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#xception) | [Keras](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fxception.py) | 79.000 | 94.500\n[ResNeXt101_64x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext) | Our porting | 78.956 | 94.252\n[Xception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#xception) | Our porting | 78.888 | 94.292\nResNeXt101_32x4d | [Torch7](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt) | 78.8 | 94.4\nSE-ResNet152 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 78.66 | 94.46\n[SE-ResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | Our porting | 78.658 | 94.374\nResNet152 | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 78.428 | 94.110\n[SE-ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | Our porting | 78.396 | 94.258\nSE-ResNet101 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 78.25 | 94.28\n[ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext) | Our porting | 78.188 | 93.886\nFBResNet152 | [Torch7](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch) | 77.84 | 93.84\nSE-ResNet50 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 77.63 | 93.64\n[SE-ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | Our porting | 77.636 | 93.752\n[DenseNet161](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.560 | 93.798\n[ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.438 | 93.672\n[FBResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#facebook-resnet) | Our porting | 77.386 | 93.594\n[InceptionV3](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.294 | 93.454\n[DenseNet201](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.152 | 93.548\n[DualPathNet68b_5k](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | Our porting | 77.034 | 93.590\n[CaffeResnet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#caffe-resnet) | [Caffe](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks) | 76.400 | 92.900\n[CaffeResnet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#caffe-resnet) | Our porting | 76.200 | 92.766\n[DenseNet169](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 76.026 | 92.992\n[ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 76.002 | 92.980\n[DualPathNet68](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | Our porting | 75.868 | 92.774\n[DenseNet121](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 74.646 | 92.136\n[VGG19_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 74.266 | 92.066\nNASNet-A-Mobile | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) | 74.0 | 91.6\n[NASNet-A-Mobile](https:\u002F\u002Fgithub.com\u002Fveronikayurchuk\u002Fpretrained-models.pytorch\u002Fblob\u002Fmaster\u002Fpretrainedmodels\u002Fmodels\u002Fnasnet_mobile.py) | Our porting | 74.080 | 91.740\n[ResNet34](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 73.554 | 91.456\n[BNInception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#bninception) | Our porting | 73.524 | 91.562\n[VGG16_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 73.518 | 91.608\n[VGG19](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 72.080 | 90.822\n[VGG16](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 71.636 | 90.354\n[VGG13_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 71.508 | 90.494\n[VGG11_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 70.452 | 89.818\n[ResNet18](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 70.142 | 89.274\n[VGG13](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 69.662 | 89.264\n[VGG11](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 68.970 | 88.746\n[SqueezeNet1_1](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 58.250 | 80.800\n[SqueezeNet1_0](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 58.108 | 80.428\n[Alexnet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 56.432 | 79.194\n\nNotes:\n- the Pytorch version of ResNet152 is not a porting of the Torch7 but has been retrained by facebook.\n- For the PolyNet evaluation each image was resized to 378x378 without preserving the aspect ratio and then the central 331×331 patch from the resulting image was used.\n\nBeware, the accuracy reported here is not always representative of the transferable capacity of the network on other tasks and datasets. You must try them all! :P\n    \n### Reproducing results\n\nPlease see [Compute imagenet validation metrics](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#compute-imagenet-validation-metrics)\n\n\n## Documentation\n\n### Available models\n\n#### NASNet*\n\nSource: [TensorFlow Slim repo](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim)\n\n- `nasnetalarge(num_classes=1000, pretrained='imagenet')`\n- `nasnetalarge(num_classes=1001, pretrained='imagenet+background')`\n- `nasnetamobile(num_classes=1000, pretrained='imagenet')`\n\n#### FaceBook ResNet*\n\nSource: [Torch7 repo of FaceBook](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch)\n\nThere are a bit different from the ResNet* of torchvision. ResNet152 is currently the only one available.\n\n- `fbresnet152(num_classes=1000, pretrained='imagenet')`\n\n#### Caffe ResNet*\n\nSource: [Caffe repo of KaimingHe](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks)\n\n- `cafferesnet101(num_classes=1000, pretrained='imagenet')`\n\n\n#### Inception*\n\nSource: [TensorFlow Slim repo](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fslim) and [Pytorch\u002FVision repo](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision\u002Ftree\u002Fmaster\u002Ftorchvision) for `inceptionv3`\n\n- `inceptionresnetv2(num_classes=1000, pretrained='imagenet')`\n- `inceptionresnetv2(num_classes=1001, pretrained='imagenet+background')`\n- `inceptionv4(num_classes=1000, pretrained='imagenet')`\n- `inceptionv4(num_classes=1001, pretrained='imagenet+background')`\n- `inceptionv3(num_classes=1000, pretrained='imagenet')`\n\n#### BNInception\n\nSource: [Trained with Caffe](https:\u002F\u002Fgithub.com\u002FCadene\u002Ftensorflow-model-zoo.torch\u002Fpull\u002F2) by [Xiong Yuanjun](http:\u002F\u002Fyjxiong.me)\n\n- `bninception(num_classes=1000, pretrained='imagenet')`\n\n#### ResNeXt*\n\nSource: [ResNeXt repo of FaceBook](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt)\n\n- `resnext101_32x4d(num_classes=1000, pretrained='imagenet')`\n- `resnext101_62x4d(num_classes=1000, pretrained='imagenet')`\n\n#### DualPathNetworks\n\nSource: [MXNET repo of Chen Yunpeng](https:\u002F\u002Fgithub.com\u002Fcypw\u002FDPNs)\n\nThe porting has been made possible by [Ross Wightman](http:\u002F\u002Frwightman.com) in his [PyTorch repo](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-dpn-pretrained).\n\nAs you can see [here](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-dpn-pretrained) DualPathNetworks allows you to try different scales. The default one in this repo is 0.875 meaning that the original input size is 256 before croping to 224.\n\n- `dpn68(num_classes=1000, pretrained='imagenet')`\n- `dpn98(num_classes=1000, pretrained='imagenet')`\n- `dpn131(num_classes=1000, pretrained='imagenet')`\n- `dpn68b(num_classes=1000, pretrained='imagenet+5k')`\n- `dpn92(num_classes=1000, pretrained='imagenet+5k')`\n- `dpn107(num_classes=1000, pretrained='imagenet+5k')`\n\n`'imagenet+5k'` means that the network has been pretrained on imagenet5k before being finetuned on imagenet1k.\n\n#### Xception\n\nSource: [Keras repo](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fxception.py)\n\nThe porting has been made possible by [T Standley](https:\u002F\u002Fgithub.com\u002Ftstandley\u002FXception-PyTorch).\n\n- `xception(num_classes=1000, pretrained='imagenet')`\n\n\n#### SENet*\n\nSource: [Caffe repo of Jie Hu](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet)\n\n- `senet154(num_classes=1000, pretrained='imagenet')`\n- `se_resnet50(num_classes=1000, pretrained='imagenet')`\n- `se_resnet101(num_classes=1000, pretrained='imagenet')`\n- `se_resnet152(num_classes=1000, pretrained='imagenet')`\n- `se_resnext50_32x4d(num_classes=1000, pretrained='imagenet')`\n- `se_resnext101_32x4d(num_classes=1000, pretrained='imagenet')`\n\n#### PNASNet*\n\nSource: [TensorFlow Slim repo](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim)\n\n- `pnasnet5large(num_classes=1000, pretrained='imagenet')`\n- `pnasnet5large(num_classes=1001, pretrained='imagenet+background')`\n\n#### PolyNet\n\nSource: [Caffe repo of the CUHK Multimedia Lab](https:\u002F\u002Fgithub.com\u002FCUHK-MMLAB\u002Fpolynet)\n\n- `polynet(num_classes=1000, pretrained='imagenet')`\n\n#### TorchVision\n\nSource: [Pytorch\u002FVision repo](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision\u002Ftree\u002Fmaster\u002Ftorchvision)\n\n(`inceptionv3` included in [Inception*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception))\n\n- `resnet18(num_classes=1000, pretrained='imagenet')`\n- `resnet34(num_classes=1000, pretrained='imagenet')`\n- `resnet50(num_classes=1000, pretrained='imagenet')`\n- `resnet101(num_classes=1000, pretrained='imagenet')`\n- `resnet152(num_classes=1000, pretrained='imagenet')`\n- `densenet121(num_classes=1000, pretrained='imagenet')`\n- `densenet161(num_classes=1000, pretrained='imagenet')`\n- `densenet169(num_classes=1000, pretrained='imagenet')`\n- `densenet201(num_classes=1000, pretrained='imagenet')`\n- `squeezenet1_0(num_classes=1000, pretrained='imagenet')`\n- `squeezenet1_1(num_classes=1000, pretrained='imagenet')`\n- `alexnet(num_classes=1000, pretrained='imagenet')`\n- `vgg11(num_classes=1000, pretrained='imagenet')`\n- `vgg13(num_classes=1000, pretrained='imagenet')`\n- `vgg16(num_classes=1000, pretrained='imagenet')`\n- `vgg19(num_classes=1000, pretrained='imagenet')`\n- `vgg11_bn(num_classes=1000, pretrained='imagenet')`\n- `vgg13_bn(num_classes=1000, pretrained='imagenet')`\n- `vgg16_bn(num_classes=1000, pretrained='imagenet')`\n- `vgg19_bn(num_classes=1000, pretrained='imagenet')`\n\n\n### Model API\n\nOnce a pretrained model has been loaded, you can use it that way.\n\n**Important note**: All image must be loaded using `PIL` which scales the pixel values between 0 and 1.\n\n#### `model.input_size`\n\nAttribut of type `list` composed of 3 numbers:\n\n- number of color channels,\n- height of the input image,\n- width of the input image.\n\nExample:\n\n- `[3, 299, 299]` for inception* networks,\n- `[3, 224, 224]` for resnet* networks.\n\n\n#### `model.input_space`\n\nAttribut of type `str` representating the color space of the image. Can be `RGB` or `BGR`.\n\n\n#### `model.input_range`\n\nAttribut of type `list` composed of 2 numbers:\n\n- min pixel value,\n- max pixel value.\n\nExample:\n\n- `[0, 1]` for resnet* and inception* networks,\n- `[0, 255]` for bninception network.\n\n\n#### `model.mean`\n\nAttribut of type `list` composed of 3 numbers which are used to normalize the input image (substract \"color-channel-wise\").\n\nExample:\n\n- `[0.5, 0.5, 0.5]` for inception* networks,\n- `[0.485, 0.456, 0.406]` for resnet* networks.\n\n\n#### `model.std`\n\nAttribut of type `list` composed of 3 numbers which are used to normalize the input image (divide \"color-channel-wise\").\n\nExample:\n\n- `[0.5, 0.5, 0.5]` for inception* networks,\n- `[0.229, 0.224, 0.225]` for resnet* networks.\n\n\n#### `model.features`\n\n\u002F!\\ work in progress (may not be available)\n\nMethod which is used to extract the features from the image.\n\nExample when the model is loaded using `fbresnet152`:\n\n```python\nprint(input_224.size())            # (1,3,224,224)\noutput = model.features(input_224) \nprint(output.size())               # (1,2048,1,1)\n\n# print(input_448.size())          # (1,3,448,448)\noutput = model.features(input_448)\n# print(output.size())             # (1,2048,7,7)\n```\n\n#### `model.logits`\n\n\u002F!\\ work in progress (may not be available)\n\nMethod which is used to classify the features from the image.\n\nExample when the model is loaded using `fbresnet152`:\n\n```python\noutput = model.features(input_224) \nprint(output.size())               # (1,2048, 1, 1)\noutput = model.logits(output)\nprint(output.size())               # (1,1000)\n```\n\n#### `model.forward`\n\nMethod used to call `model.features` and `model.logits`. It can be overwritten as desired.\n\n**Note**: A good practice is to use `model.__call__` as your function of choice to forward an input to your model. See the example bellow.\n\n```python\n# Without model.__call__\noutput = model.forward(input_224)\nprint(output.size())      # (1,1000)\n\n# With model.__call__\noutput = model(input_224)\nprint(output.size())      # (1,1000)\n```\n\n#### `model.last_linear`\n\nAttribut of type `nn.Linear`. This module is the last one to be called during the forward pass.\n\n- Can be replaced by an adapted `nn.Linear` for fine tuning.\n- Can be replaced by `pretrained.utils.Identity` for features extraction. \n\nExample when the model is loaded using `fbresnet152`:\n\n```python\nprint(input_224.size())            # (1,3,224,224)\noutput = model.features(input_224) \nprint(output.size())               # (1,2048,1,1)\noutput = model.logits(output)\nprint(output.size())               # (1,1000)\n\n# fine tuning\ndim_feats = model.last_linear.in_features # =2048\nnb_classes = 4\nmodel.last_linear = nn.Linear(dim_feats, nb_classes)\noutput = model(input_224)\nprint(output.size())               # (1,4)\n\n# features extraction\nmodel.last_linear = pretrained.utils.Identity()\noutput = model(input_224)\nprint(output.size())               # (1,2048)\n```\n\n## Reproducing\n\n### Hand porting of ResNet152\n\n```\nth pretrainedmodels\u002Ffbresnet\u002Fresnet152_dump.lua\npython pretrainedmodels\u002Ffbresnet\u002Fresnet152_load.py\n```\n\n### Automatic porting of ResNeXt\n\nhttps:\u002F\u002Fgithub.com\u002Fclcarwin\u002Fconvert_torch_to_pytorch\n\n### Hand porting of NASNet, InceptionV4 and InceptionResNetV2\n\nhttps:\u002F\u002Fgithub.com\u002FCadene\u002Ftensorflow-model-zoo.torch\n\n\n## Acknowledgement\n\nThanks to the deep learning community and especially to the contributers of the pytorch ecosystem.\n","# PyTorch 预训练模型（开发中）\n\n本仓库的目标是：\n\n- 帮助复现研究论文的结果（例如迁移学习设置），\n- 通过受 torchvision 启用的统一接口\u002FAPI（应用程序接口）访问预训练的卷积神经网络（ConvNets）。\n\n\u003Ca href=\"https:\u002F\u002Ftravis-ci.org\u002FCadene\u002Fpretrained-models.pytorch\">\u003Cimg src=\"https:\u002F\u002Fapi.travis-ci.org\u002FCadene\u002Fpretrained-models.pytorch.svg?branch=master\"\u002F>\u003C\u002Fa>\n\n最新动态：\n- 27\u002F10\u002F2018：修复兼容性问题，添加测试，添加 Travis 支持\n- 04\u002F06\u002F2018：[PolyNet](https:\u002F\u002Fgithub.com\u002FCUHK-MMLAB\u002Fpolynet) 和 [PNASNet-5-Large](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00559) 感谢 [Alex Parinov](https:\u002F\u002Fgithub.com\u002Fcreafz)\n- 16\u002F04\u002F2018：[SE-ResNet* and SE-ResNeXt*](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) 感谢 [Alex Parinov](https:\u002F\u002Fgithub.com\u002Fcreafz)\n- 09\u002F04\u002F2018：[SENet154](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) 感谢 [Alex Parinov](https:\u002F\u002Fgithub.com\u002Fcreafz)\n- 22\u002F03\u002F2018：CaffeResNet101（适用于与 FasterRCNN（快速区域卷积神经网络）配合进行定位）\n- 21\u002F03\u002F2018：NASNet Mobile 感谢 [Veronika Yurchuk](https:\u002F\u002Fgithub.com\u002Fveronikayurchuk) 和 [Anastasiia](https:\u002F\u002Fgithub.com\u002FDagnyT)\n- 25\u002F01\u002F2018：DualPathNetworks 感谢 [Ross Wightman](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-dpn-pretrained)，Xception 感谢 [T Standley](https:\u002F\u002Fgithub.com\u002Ftstandley\u002FXception-PyTorch)，改进了 TransformImage API\n- 13\u002F01\u002F2018：`pip install pretrainedmodels`，`pretrainedmodels.model_names`，`pretrainedmodels.pretrained_settings`\n- 12\u002F01\u002F2018：`python setup.py install`\n- 08\u002F12\u002F2017：更新数据 URL（\u002F!\\ `git pull` 是需要的）\n- 30\u002F11\u002F2017：改进 API（`model.features(input)`，`model.logits(features)`，`model.forward(input)`，`model.last_linear`）\n- 16\u002F11\u002F2017：nasnet-a-large 预训练模型由 T. Durand 和 R. Cadene 移植\n- 22\u002F07\u002F2017：torchvision 预训练模型\n- 22\u002F07\u002F2017：inceptionv4 和 inceptionresnetv2 中的动量调整为 0.1\n- 17\u002F07\u002F2017：model.input_range 属性\n- 17\u002F07\u002F2017：BNInception 在 Imagenet 上预训练\n\n## 概述\n\n- [安装](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#installation)\n- [快速示例](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#quick-examples)\n- [一些使用案例](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#few-use-cases)\n    - [计算 ImageNet logits(对数几率)](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#compute-imagenet-logits)\n    - [计算 ImageNet 验证指标](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#compute-imagenet-validation-metrics)\n- [ImageNet 评估](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#evaluation-on-imagenet)\n    - [验证集准确率](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#accuracy-on-validation-set)\n    - [复现结果](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#reproducing-results)\n- [文档](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#documentation)\n    - [可用模型](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#available-models)\n        - [AlexNet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [BNInception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#bninception)\n        - [CaffeResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#caffe-resnet)\n        - [DenseNet121](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet161](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet169](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet201](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DenseNet201](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [DualPathNet68](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet92](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet98](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet107](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [DualPathNet113](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks)\n        - [FBResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#facebook-resnet)\n        - [InceptionResNetV2](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception)\n        - [InceptionV3](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception)\n        - [InceptionV4](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception)\n        - [NASNet-A-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#nasnet)\n        - [NASNet-A-Mobile](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#nasnet)\n        - [PNASNet-5-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#pnasnet)\n        - [PolyNet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#polynet)\n        - [ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext)\n        - [ResNeXt101_64x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext)\n        - [ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet18](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet34](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [SENet154](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNeXt50_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SE-ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet)\n        - [SqueezeNet1_0](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [SqueezeNet1_1](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG11](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG13](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG16](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG19](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG11_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG13_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG16_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [VGG19_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision)\n        - [Xception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#xception)\n    - [模型 API](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#model-api)\n        - [model.input_size](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelinput_size)\n        - [model.input_space](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelinput_space)\n        - [model.input_range](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelinput_range)\n        - [model.mean](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelmean)\n        - [model.std](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelstd)\n        - [model.features](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelfeatures)\n        - [model.logits](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modellogits)\n        - [model.forward](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#modelforward)\n- [移植与复现](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#reproducing)\n    - [ResNet*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#hand-porting-of-resnet152)\n    - [ResNeXt*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#automatic-porting-of-resnext)\n    - [Inception*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#hand-porting-of-inceptionv4-and-inceptionresnetv2)\n\n## 安装\n\n1. [带有 Anaconda 的 Python3](https:\u002F\u002Fwww.continuum.io\u002Fdownloads)\n2. [PyTorch（含或不含 CUDA）](http:\u002F\u002Fpytorch.org)\n\n### 通过 pip 安装\n\n3. `pip install pretrainedmodels`\n\n### 从仓库安装\n\n3. `git clone https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch.git`\n4. `cd pretrained-models.pytorch`\n5. `python setup.py install`\n\n## 快速示例\n\n- 导入 `pretrainedmodels`：\n\n```python\nimport pretrainedmodels\n```\n\n- 打印可用的预训练模型：\n\n```python\nprint(pretrainedmodels.model_names)\n> ['fbresnet152', 'bninception', 'resnext101_32x4d', 'resnext101_64x4d', 'inceptionv4', 'inceptionresnetv2', 'alexnet', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'inceptionv3', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19_bn', 'vgg19', 'nasnetalarge', 'nasnetamobile', 'cafferesnet101', 'senet154',  'se_resnet50', 'se_resnet101', 'se_resnet152', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'cafferesnet101', 'polynet', 'pnasnet5large']\n```\n\n- 打印选定模型的可用预训练设置：\n\n```python\nprint(pretrainedmodels.pretrained_settings['nasnetalarge'])\n> {'imagenet': {'url': 'http:\u002F\u002Fdata.lip6.fr\u002Fcadene\u002Fpretrainedmodels\u002Fnasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1000}, 'imagenet+background': {'url': 'http:\u002F\u002Fdata.lip6.fr\u002Fcadene\u002Fpretrainedmodels\u002Fnasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1001}}\n```\n\n- 从 ImageNet 加载预训练模型：\n\n```python\nmodel_name = 'nasnetalarge' # could be fbresnet152 or inceptionresnetv2\nmodel = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')\nmodel.eval()\n```\n\n**注意**：默认情况下，模型将下载到您的 `$HOME\u002F.torch` 文件夹。您可以使用 `$TORCH_HOME` 变量修改此行为，如下所示：`export TORCH_HOME=\"\u002Flocal\u002Fpretrainedmodels\"`\n\n- 加载图像并执行完整的前向传播（forward pass）：\n\n```python\nimport torch\nimport pretrainedmodels.utils as utils\n\nload_img = utils.LoadImage()\n\n# transformations depending on the model\n# rescale, center crop, normalize, and others (ex: ToBGR, ToRange255)\ntf_img = utils.TransformImage(model) \n\npath_img = 'data\u002Fcat.jpg'\n\ninput_img = load_img(path_img)\ninput_tensor = tf_img(input_img)         # 3x400x225 -> 3x299x299 size may differ\ninput_tensor = input_tensor.unsqueeze(0) # 3x299x299 -> 1x3x299x299\ninput = torch.autograd.Variable(input_tensor,\n    requires_grad=False)\n\noutput_logits = model(input) # 1x1000\n```\n\n- 提取特征（注意：此 API (应用程序接口) 并非对所有网络都可用）：\n\n```python\noutput_features = model.features(input) # 1x14x14x2048 size may differ\noutput_logits = model.logits(output_features) # 1x1000\n```\n\n## 一些使用案例\n\n### 计算 ImageNet logits (对数几率)\n\n- 参见 [examples\u002Fimagenet_logits.py](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fblob\u002Fmaster\u002Fexamples\u002Fimagenet_logits.py) 以使用在 ImageNet 上的预训练模型计算单张图像中各类别的 logits。\n\n```\n$ python examples\u002Fimagenet_logits.py -h\n> nasnetalarge, resnet152, inceptionresnetv2, inceptionv4, ...\n```\n\n```\n$ python examples\u002Fimagenet_logits.py -a nasnetalarge --path_img data\u002Fcat.jpg\n> 'nasnetalarge': data\u002Fcat.jpg' is a 'tiger cat' \n```\n\n### 计算 ImageNet 评估指标\n\n- 参见 [examples\u002Fimagenet_eval.py](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fblob\u002Fmaster\u002Fexamples\u002Fimagenet_eval.py) 以在 ImageNet 验证集上评估预训练模型。\n\n```\n$ python examples\u002Fimagenet_eval.py \u002Flocal\u002Fcommon-data\u002Fimagenet_2012\u002Fimages -a nasnetalarge -b 20 -e\n> * Acc@1 82.693, Acc@5 96.13\n```\n\n\n## ImageNet 评估\n\n### 验证集准确率（单模型）\n\n结果是使用与训练过程中相同大小的图像（中心裁剪）获得的。\n\n模型 | 实现来源 | Top-1 准确率 | Top-5 准确率\n--- | --- | --- | ---\nPNASNet-5-Large | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) | 82.858 | 96.182\n[PNASNet-5-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#pnasnet) | 本项目移植 | 82.736 | 95.992\nNASNet-A-Large | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) | 82.693 | 96.163\n[NASNet-A-Large](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#nasnet) | 本项目移植 | 82.566 | 96.086\nSENet154 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 81.32 | 95.53\n[SENet154](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | 本项目移植 | 81.304 | 95.498\nPolyNet | [Caffe](https:\u002F\u002Fgithub.com\u002FCUHK-MMLAB\u002Fpolynet) | 81.29 | 95.75\n[PolyNet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#polynet) | 本项目移植 | 81.002 | 95.624\nInceptionResNetV2 | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fslim) | 80.4 | 95.3\nInceptionV4 | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fslim) | 80.2 | 95.3\n[SE-ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | 本项目移植 | 80.236 | 95.028\nSE-ResNeXt101_32x4d | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 80.19 | 95.04\n[InceptionResNetV2](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception) | 本项目移植 | 80.170 | 95.234\n[InceptionV4](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception) | 本项目移植 | 80.062 | 94.926\n[DualPathNet107_5k](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | 本项目移植 | 79.746 | 94.684\nResNeXt101_64x4d | [Torch7](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt) | 79.6 | 94.7\n[DualPathNet131](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | 本项目移植 | 79.432 | 94.574\n[DualPathNet92_5k](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | 本项目移植 | 79.400 | 94.620\n[DualPathNet98](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | 本项目移植 | 79.224 | 94.488\n[SE-ResNeXt50_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | 本项目移植 | 79.076 | 94.434\nSE-ResNeXt50_32x4d | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 79.03 | 94.46\n[Xception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#xception) | [Keras](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fxception.py) | 79.000 | 94.500\n[ResNeXt101_64x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext) | 本项目移植 | 78.956 | 94.252\n[Xception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#xception) | 本项目移植 | 78.888 | 94.292\nResNeXt101_32x4d | [Torch7](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt) | 78.8 | 94.4\nSE-ResNet152 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 78.66 | 94.46\n[SE-ResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | 本项目移植 | 78.658 | 94.374\nResNet152 | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 78.428 | 94.110\n[SE-ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | 本项目移植 | 78.396 | 94.258\nSE-ResNet101 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 78.25 | 94.28\n[ResNeXt101_32x4d](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#resnext) | 本项目移植 | 78.188 | 93.886\nFBResNet152 | [Torch7](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch) | 77.84 | 93.84\nSE-ResNet50 | [Caffe](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 77.63 | 93.64\n[SE-ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#senet) | 本项目移植 | 77.636 | 93.752\n[DenseNet161](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.560 | 93.798\n[ResNet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.438 | 93.672\n[FBResNet152](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#facebook-resnet) | 本项目移植 | 77.386 | 93.594\n[InceptionV3](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.294 | 93.454\n[DenseNet201](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 77.152 | 93.548\n[DualPathNet68b_5k](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | 本项目移植 | 77.034 | 93.590\n[CaffeResnet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#caffe-resnet) | [Caffe](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks) | 76.400 | 92.900\n[CaffeResnet101](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#caffe-resnet) | 本项目移植 | 76.200 | 92.766\n[DenseNet169](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 76.026 | 92.992\n[ResNet50](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 76.002 | 92.980\n[DualPathNet68](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#dualpathnetworks) | 本项目移植 | 75.868 | 92.774\n[DenseNet121](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 74.646 | 92.136\n[VGG19_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 74.266 | 92.066\nNASNet-A-Mobile | [Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) | 74.0 | 91.6\n[NASNet-A-Mobile](https:\u002F\u002Fgithub.com\u002Fveronikayurchuk\u002Fpretrained-models.pytorch\u002Fblob\u002Fmaster\u002Fpretrainedmodels\u002Fmodels\u002Fnasnet_mobile.py) | 本项目移植 | 74.080 | 91.740\n[ResNet34](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 73.554 | 91.456\n[BNInception](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#bninception) | 本项目移植 | 73.524 | 91.562\n[VGG16_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 73.518 | 91.608\n[VGG19](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 72.080 | 90.822\n[VGG16](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 71.636 | 90.354\n[VGG13_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 71.508 | 90.494\n[VGG11_BN](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 70.452 | 89.818\n[ResNet18](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 70.142 | 89.274\n[VGG13](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 69.662 | 89.264\n[VGG11](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 68.970 | 88.746\n[SqueezeNet1_1](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 58.250 | 80.800\n[SqueezeNet1_0](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 58.108 | 80.428\n[Alexnet](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#torchvision) | [Pytorch](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision#models) | 56.432 | 79.194\n\n注意事项：\n- ResNet152 的 PyTorch (深度学习框架) 版本并非 Torch7 的移植版，而是由 Facebook 重新训练的。\n- 对于 PolyNet 评估，每张图像被调整为 378x378（不保持宽高比），然后使用结果图像的中心 331×331 区域。\n\n注意，此处报告的准确率并不总是代表网络在其他任务和数据集上的迁移能力。你必须全部尝试一下！:P\n\n\n### 复现结果\n\n请参阅 [计算 ImageNet (图像数据集) 验证指标 (validation metrics)](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#compute-imagenet-validation-metrics)\n\n\n## 文档\n\n### 可用模型\n\n#### NASNet*\n\n来源：[TensorFlow Slim 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim)\n\n- `nasnetalarge(num_classes=1000, pretrained='imagenet')`\n- `nasnetalarge(num_classes=1001, pretrained='imagenet+background')`\n- `nasnetamobile(num_classes=1000, pretrained='imagenet')`\n\n#### FaceBook ResNet*\n\n来源：[FaceBook Torch7 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch)\n\n它们与 torchvision 中的 ResNet (残差网络)* 略有不同。ResNet152 是目前唯一可用的。\n\n- `fbresnet152(num_classes=1000, pretrained='imagenet')`\n\n#### Caffe ResNet*\n\n来源：[KaimingHe 的 Caffe 仓库 (repo)](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks)\n\n- `cafferesnet101(num_classes=1000, pretrained='imagenet')`\n\n\n#### Inception*\n\n来源：[TensorFlow Slim 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fslim) 和 [PyTorch\u002FVision 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision\u002Ftree\u002Fmaster\u002Ftorchvision) 用于 `inceptionv3`\n\n- `inceptionresnetv2(num_classes=1000, pretrained='imagenet')`\n- `inceptionresnetv2(num_classes=1001, pretrained='imagenet+background')`\n- `inceptionv4(num_classes=1000, pretrained='imagenet')`\n- `inceptionv4(num_classes=1001, pretrained='imagenet+background')`\n- `inceptionv3(num_classes=1000, pretrained='imagenet')`\n\n#### BNInception\n\n来源：[使用 Caffe 训练](https:\u002F\u002Fgithub.com\u002FCadene\u002Ftensorflow-model-zoo.torch\u002Fpull\u002F2) 由 [Xiong Yuanjun](http:\u002F\u002Fyjxiong.me)\n\n- `bninception(num_classes=1000, pretrained='imagenet')`\n\n#### ResNeXt*\n\n来源：[FaceBook ResNeXt 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt)\n\n- `resnext101_32x4d(num_classes=1000, pretrained='imagenet')`\n- `resnext101_62x4d(num_classes=1000, pretrained='imagenet')`\n\n#### DualPathNetworks\n\n来源：[Chen Yunpeng 的 MXNet 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Fcypw\u002FDPNs)\n\n移植工作由 [Ross Wightman](http:\u002F\u002Frwightman.com) 在其 [PyTorch 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-dpn-pretrained) 中完成。\n\n正如你在这里看到的，DualPathNetworks 允许你尝试不同的比例。本仓库中的默认比例为 0.875，这意味着原始输入大小为 256，在裁剪至 224 之前。\n\n- `dpn68(num_classes=1000, pretrained='imagenet')`\n- `dpn98(num_classes=1000, pretrained='imagenet')`\n- `dpn131(num_classes=1000, pretrained='imagenet')`\n- `dpn68b(num_classes=1000, pretrained='imagenet+5k')`\n- `dpn92(num_classes=1000, pretrained='imagenet+5k')`\n- `dpn107(num_classes=1000, pretrained='imagenet+5k')`\n\n`'imagenet+5k'` 表示该网络先在 imagenet5k 上预训练 (pretrained)，然后在 imagenet1k 上进行微调 (finetuned)。\n\n#### Xception\n\n来源：[Keras 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fxception.py)\n\n移植工作由 [T Standley](https:\u002F\u002Fgithub.com\u002Ftstandley\u002FXception-PyTorch) 完成。\n\n- `xception(num_classes=1000, pretrained='imagenet')`\n\n\n#### SENet*\n\n来源：[Jie Hu 的 Caffe 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet)\n\n- `senet154(num_classes=1000, pretrained='imagenet')`\n- `se_resnet50(num_classes=1000, pretrained='imagenet')`\n- `se_resnet101(num_classes=1000, pretrained='imagenet')`\n- `se_resnet152(num_classes=1000, pretrained='imagenet')`\n- `se_resnext50_32x4d(num_classes=1000, pretrained='imagenet')`\n- `se_resnext101_32x4d(num_classes=1000, pretrained='imagenet')`\n\n#### PNASNet*\n\n来源：[TensorFlow Slim 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim)\n\n- `pnasnet5large(num_classes=1000, pretrained='imagenet')`\n- `pnasnet5large(num_classes=1001, pretrained='imagenet+background')`\n\n#### PolyNet\n\n来源：[CUHK 多媒体实验室的 Caffe 仓库 (repo)](https:\u002F\u002Fgithub.com\u002FCUHK-MMLAB\u002Fpolynet)\n\n- `polynet(num_classes=1000, pretrained='imagenet')`\n\n#### TorchVision\n\n来源：[PyTorch\u002FVision 仓库 (repo)](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision\u002Ftree\u002Fmaster\u002Ftorchvision)\n\n(`inceptionv3` 包含在 [Inception*](https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch#inception))\n\n- `resnet18(num_classes=1000, pretrained='imagenet')`\n- `resnet34(num_classes=1000, pretrained='imagenet')`\n- `resnet50(num_classes=1000, pretrained='imagenet')`\n- `resnet101(num_classes=1000, pretrained='imagenet')`\n- `resnet152(num_classes=1000, pretrained='imagenet')`\n- `densenet121(num_classes=1000, pretrained='imagenet')`\n- `densenet161(num_classes=1000, pretrained='imagenet')`\n- `densenet169(num_classes=1000, pretrained='imagenet')`\n- `densenet201(num_classes=1000, pretrained='imagenet')`\n- `squeezenet1_0(num_classes=1000, pretrained='imagenet')`\n- `squeezenet1_1(num_classes=1000, pretrained='imagenet')`\n- `alexnet(num_classes=1000, pretrained='imagenet')`\n- `vgg11(num_classes=1000, pretrained='imagenet')`\n- `vgg13(num_classes=1000, pretrained='imagenet')`\n- `vgg16(num_classes=1000, pretrained='imagenet')`\n- `vgg19(num_classes=1000, pretrained='imagenet')`\n- `vgg11_bn(num_classes=1000, pretrained='imagenet')`\n- `vgg13_bn(num_classes=1000, pretrained='imagenet')`\n- `vgg16_bn(num_classes=1000, pretrained='imagenet')`\n- `vgg19_bn(num_classes=1000, pretrained='imagenet')`\n\n### 模型 API\n\n一旦预训练模型加载完成，你就可以这样使用它。\n\n**重要提示**：所有图像必须使用 `PIL` (Python Imaging Library) 加载，它会将像素值缩放到 0 到 1 之间。\n\n#### `model.input_size`\n\n类型为 `list` (列表) 的属性，由 3 个数字组成：\n\n- 颜色通道数，\n- 输入图像的高度，\n- 输入图像的宽度。\n\n示例：\n\n- `[3, 299, 299]` 用于 inception* 网络，\n- `[3, 224, 224]` 用于 resnet* 网络。\n\n\n#### `model.input_space`\n\n类型为 `str` (字符串) 的属性，表示图像的颜色空间。可以是 `RGB` 或 `BGR`。\n\n\n#### `model.input_range`\n\n类型为 `list` (列表) 的属性，由 2 个数字组成：\n\n- 最小像素值，\n- 最大像素值。\n\n示例：\n\n- `[0, 1]` 用于 resnet* 和 inception* 网络，\n- `[0, 255]` 用于 bninception 网络。\n\n\n#### `model.mean`\n\n类型为 `list` (列表) 的属性，由 3 个数字组成，用于归一化输入图像（按“颜色通道”减去）。\n\n示例：\n\n- `[0.5, 0.5, 0.5]` 用于 inception* 网络，\n- `[0.485, 0.456, 0.406]` 用于 resnet* 网络。\n\n\n#### `model.std`\n\n类型为 `list` (列表) 的属性，由 3 个数字组成，用于归一化输入图像（按“颜色通道”除以）。\n\n示例：\n\n- `[0.5, 0.5, 0.5]` 用于 inception* 网络，\n- `[0.229, 0.224, 0.225]` 用于 resnet* 网络。\n\n\n#### `model.features`\n\n\u002F!\\ 进行中（可能不可用）\n\n用于从图像中提取 `features` (特征) 的方法。\n\n当使用 `fbresnet152` 加载模型时的示例：\n\n```python\nprint(input_224.size())            # (1,3,224,224)\noutput = model.features(input_224) \nprint(output.size())               # (1,2048,1,1)\n\n# print(input_448.size())          # (1,3,448,448)\noutput = model.features(input_448)\n# print(output.size())             # (1,2048,7,7)\n```\n\n#### `model.logits`\n\n\u002F!\\ 进行中（可能不可用）\n\n用于对图像 `features` (特征) 进行分类的方法，输出 `logits` (原始输出)。\n\n当使用 `fbresnet152` 加载模型时的示例：\n\n```python\noutput = model.features(input_224) \nprint(output.size())               # (1,2048, 1, 1)\noutput = model.logits(output)\nprint(output.size())               # (1,1000)\n```\n\n#### `model.forward`\n\n用于调用 `model.features` 和 `model.logits` 的方法。可以根据需要重写。\n\n**注意**：一种好的做法是使用 `model.__call__` 作为将输入 `forward` (转发) 到模型的首选函数。见下方示例。\n\n```python\n# Without model.__call__\noutput = model.forward(input_224)\nprint(output.size())      # (1,1000)\n\n# With model.__call__\noutput = model(input_224)\nprint(output.size())      # (1,1000)\n```\n\n#### `model.last_linear`\n\n类型为 `nn.Linear` (线性层) 的属性。该模块是在 `forward pass` (前向传播) 过程中最后被调用的模块。\n\n- 可以替换为适配的 `nn.Linear` (线性层) 以进行 `fine tuning` (微调)。\n- 可以替换为 `pretrained.utils.Identity` (恒等模块) 以进行特征提取。 \n\n当使用 `fbresnet152` 加载模型时的示例：\n\n```python\nprint(input_224.size())            # (1,3,224,224)\noutput = model.features(input_224) \nprint(output.size())               # (1,2048,1,1)\noutput = model.logits(output)\nprint(output.size())               # (1,1000)\n\n# fine tuning\ndim_feats = model.last_linear.in_features # =2048\nnb_classes = 4\nmodel.last_linear = nn.Linear(dim_feats, nb_classes)\noutput = model(input_224)\nprint(output.size())               # (1,4)\n\n# features extraction\nmodel.last_linear = pretrained.utils.Identity()\noutput = model(input_224)\nprint(output.size())               # (1,2048)\n```\n\n## 复现\n\n### ResNet152 的手动移植\n\n```\nth pretrainedmodels\u002Ffbresnet\u002Fresnet152_dump.lua\npython pretrainedmodels\u002Ffbresnet\u002Fresnet152_load.py\n```\n\n### ResNeXt 的自动移植\n\nhttps:\u002F\u002Fgithub.com\u002Fclcarwin\u002Fconvert_torch_to_pytorch\n\n### NASNet、InceptionV4 和 InceptionResNetV2 的手动移植\n\nhttps:\u002F\u002Fgithub.com\u002FCadene\u002Ftensorflow-model-zoo.torch\n\n\n## 致谢\n\n感谢深度学习社区，特别是 PyTorch 生态系统的贡献者。","# pretrained-models.pytorch 快速上手指南\n\n## 环境准备\n\n- **Python**: 建议使用 Python 3 (推荐使用 Anaconda 环境)\n- **深度学习框架**: PyTorch (支持 CPU 或 CUDA 版本)\n\n## 安装步骤\n\n### 方式一：通过 pip 安装（推荐）\n\n直接安装官方发布的包：\n\n```bash\npip install pretrainedmodels\n```\n\n### 方式二：从源码安装\n\n如需获取最新开发版本，可克隆仓库并安装：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch.git\ncd pretrained-models.pytorch\npython setup.py install\n```\n\n> **注意**：默认情况下，预训练模型权重将下载至 `$HOME\u002F.torch` 文件夹。如需修改下载路径，可设置环境变量：\n> `export TORCH_HOME=\"\u002Flocal\u002Fpretrainedmodels\"`\n\n## 基本使用\n\n### 1. 导入库与查看可用模型\n\n```python\nimport pretrainedmodels\n\n# 打印所有可用的预训练模型名称\nprint(pretrainedmodels.model_names)\n```\n\n### 2. 加载预训练模型\n\n以 `nasnetalarge` 为例加载 ImageNet 预训练模型：\n\n```python\nmodel_name = 'nasnetalarge' # 也可以是 fbresnet152 或 inceptionresnetv2\nmodel = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')\nmodel.eval()\n```\n\n### 3. 图像推理示例\n\n加载图片并进行完整的前向传播：\n\n```python\nimport torch\nimport pretrainedmodels.utils as utils\n\nload_img = utils.LoadImage()\n\n# 根据模型类型进行变换（重缩放、中心裁剪、归一化等）\ntf_img = utils.TransformImage(model) \n\npath_img = 'data\u002Fcat.jpg'\n\ninput_img = load_img(path_img)\ninput_tensor = tf_img(input_img)         # 尺寸可能因模型而异\ninput_tensor = input_tensor.unsqueeze(0) # 增加 batch 维度\ninput = torch.autograd.Variable(input_tensor,\n    requires_grad=False)\n\noutput_logits = model(input) # 输出类别 logits\n```\n\n### 4. 特征提取（部分网络支持）\n\n某些网络支持分离特征提取和分类头：\n\n```python\noutput_features = model.features(input) # 提取特征图\noutput_logits = model.logits(output_features) # 计算 logits\n```","某工业质检团队正在开发自动化表面缺陷检测系统，急需对比多种主流卷积神经网络在特定数据集上的表现。\n\n### 没有 pretrained-models.pytorch 时\n- 研发人员需从零编写 ResNet、InceptionV4 等复杂网络结构代码，易出错且维护成本高。\n- 预训练权重散落在不同 GitHub 项目或 Caffe 转换脚本中，下载和集成过程繁琐。\n- 各模型对输入图片的尺寸、均值方差要求不同，需手动编写大量适配代码。\n- 每次更换基准模型都要重新调试整个推理 pipeline，严重拖慢实验迭代速度。\n\n### 使用 pretrained-models.pytorch 后\n- 直接调用接口即可加载 NASNet、SE-ResNet 等数十种 SOTA 模型的预训练参数。\n- 提供类似 torchvision 的统一 API，无需关心底层架构差异，代码简洁清晰。\n- 内置标准化的 TransformImage 接口，自动处理图像缩放与归一化，保证特征提取准确。\n- 可轻松替换 last_linear 层，将 ImageNet 预训练模型快速迁移到工业缺陷分类任务。\n\npretrained-models.pytorch 通过标准化接口和丰富模型库，显著缩短了从算法验证到工程落地的周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FCadene_pretrained-models.pytorch_190bbaf3.png","Cadene","Remi","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FCadene_79dfaf95.png",null,"@huggingface ","Paris","RemiCadene","http:\u002F\u002Fremicadene.com","https:\u002F\u002Fgithub.com\u002FCadene",[86,90],{"name":87,"color":88,"percentage":89},"Python","#3572A5",99.7,{"name":91,"color":92,"percentage":93},"Lua","#000080",0.3,9108,1819,"2026-04-05T06:46:44","BSD-3-Clause","未说明","可选 (支持 CUDA 或 CPU)",{"notes":101,"python":102,"dependencies":103},"建议使用 Anaconda 管理环境；模型文件默认下载至 $HOME\u002F.torch 目录，可通过 $TORCH_HOME 环境变量修改；项目发布于 2018 年，使用新版 PyTorch 时需注意 API 兼容性","3",[104],"torch",[13,14],[107,108,109,110,111,112],"imagenet","resnet","resnext","pretrained","pytorch","inception","2026-03-27T02:49:30.150509","2026-04-06T05:36:38.032059",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},3150,"预训练模型的输入预处理应该使用哪些均值和标准差？","应使用 ImageNet 数据集的标准统计值。均值为 [0.485, 0.456, 0.406]，标准差为 [0.229, 0.224, 0.225]。输入图像通常需先调整大小并归一化到 [0,1] 范围后再进行标准化处理。","https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fissues\u002F66",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},3151,"使用 PNASNet 时遇到 `TypeError: forward() missing 1 required positional argument: 'x_right'` 错误如何解决？","此错误是因为 `nn.Sequential` 容器要求子模块的 `forward` 方法仅接受一个参数，而 PNASNet 的部分层（如 `CellStem1`）需要两个参数。解决方法是不应简单地将模型包装在 `nn.Sequential` 中，而是直接使用完整模型或参考相关示例代码进行拆分处理。","https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fissues\u002F127",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},3152,"该仓库是否包含 Google Brain 发布的 NASNet 模型？","目前主仓库尚未直接集成 NASNet。维护者建议参考其提供的 TensorFlow 转 PyTorch 端口代码：https:\u002F\u002Fgithub.com\u002FCadene\u002Ftensorflow-model-zoo.torch\u002Ftree\u002Fmaster\u002Fnasnet，如有问题可联系维护者。","https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fissues\u002F13",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},3153,"InceptionV1 模型中是否存在 ReLU 激活函数未被使用的情况？","早期版本中部分 ReLU 输出看似未被使用，但验证集准确率正常。该结构问题已在后续版本中修复（Commit 87e9751），建议更新到最新版本以确保架构正确性。","https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fissues\u002F90",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},3154,"NASNet 实现中的 MaxPool 和 AvgPool 操作是否配置正确？","是的，原实现中存在 MaxPool 和 AvgPool 混淆的问题。维护者已根据 TensorFlow 官方实现进行了修正（Commit 247b037），请务必使用更新后的代码版本。","https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fissues\u002F15",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},3155,"ResNeXt 模型训练速度明显慢于 ResNet 是否正常？","尽管 FLOPs 相近，但因网络结构更复杂，PyTorch 框架下可能存在性能差异。维护者已针对此问题进行了优化（Commit 4417b9e），建议检查是否使用了最新版本的库以获得最佳训练速度。","https:\u002F\u002Fgithub.com\u002FCadene\u002Fpretrained-models.pytorch\u002Fissues\u002F8",[]]