[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-taehoonlee--tensornets":3,"tool-taehoonlee--tensornets":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":23,"env_os":94,"env_gpu":95,"env_ram":94,"env_deps":96,"category_tags":102,"github_topics":103,"view_count":23,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":124,"updated_at":125,"faqs":126,"releases":165},2762,"taehoonlee\u002Ftensornets","tensornets","High level network definitions with pre-trained weights in TensorFlow","tensornets 是一个基于 TensorFlow 构建的高层神经网络定义库，旨在让开发者轻松调用带有预训练权重的经典模型。它主要解决了在现有机器学习工作流中集成新模型时面临的代码冗余、版本兼容困难以及中间层访问复杂等痛点。\n\n这款工具非常适合需要在 TensorFlow 环境中快速验证想法的 AI 研究人员和工程师。与传统的类封装方式不同，tensornets 采用简洁的函数式接口，直接接收并返回张量（Tensor），能够无缝嵌入任何现有的 TensorFlow 流程。其核心亮点在于极高的代码可读性与可维护性：例如，它将原本需要两千多行代码实现的 Inception 系列模型精简至约五百行。此外，tensornets 提供了便捷的 API，让用户不仅能一键加载预训练权重进行推理，还能轻松提取任意中间层的特征输出，或保存与恢复模型参数。无论是复现论文结果还是部署生产环境，tensornets 都能提供轻量、透明且高效的解决方案。","# TensorNets [![Build Status](https:\u002F\u002Ftravis-ci.org\u002Ftaehoonlee\u002Ftensornets.svg?branch=master)](https:\u002F\u002Ftravis-ci.org\u002Ftaehoonlee\u002Ftensornets)\n\nHigh level network definitions with pre-trained weights in 
[TensorFlow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow) (tested with `2.1.0 >=` TF `>= 1.4.0`).\n\n## Guiding principles\n\n- **Applicability.** Many people already have their own ML workflows, and want to put a new model into their workflows. TensorNets can be easily plugged in because it is designed as simple functional interfaces without custom classes.\n- **Manageability.** Models are written in `tf.contrib.layers`, which is lightweight like PyTorch and Keras, and allows easy access to every weight and end-point. Also, it is easy to deploy and expand a collection of pre-processing and pre-trained weights.\n- **Readability.** With recent TensorFlow APIs, more factoring and less indenting are possible. For example, all the inception variants are implemented in about 500 lines of code in [TensorNets](tensornets\u002Finceptions.py), versus 2000+ lines in [official TensorFlow models](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v3.py).\n- **Reproducibility.** You can always reproduce the original results with [simple APIs](#utilities) including feature extractions. Furthermore, you don't need to worry about your TensorFlow version because compatibility with various releases of TensorFlow has been checked with [Travis](https:\u002F\u002Ftravis-ci.org\u002Ftaehoonlee\u002Ftensornets\u002Fbuilds).\n\n## Installation\n\nYou can install TensorNets from PyPI (`pip install tensornets`) or directly from GitHub (`pip install git+https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets.git`).\n\n## A quick example\n\nEach network (see [full list](#image-classification)) is not a custom class but a function that takes and returns `tf.Tensor` as its input and output. 
Here is an example of `ResNet50`:\n\n```python\nimport tensorflow as tf\n# import tensorflow.compat.v1 as tf  # for TF 2\nimport tensornets as nets\n# tf.disable_v2_behavior()  # for TF 2\n\ninputs = tf.placeholder(tf.float32, [None, 224, 224, 3])\nmodel = nets.ResNet50(inputs)\n\nassert isinstance(model, tf.Tensor)\n```\n\nYou can load an example image with `utils.load_img`, which returns a `np.ndarray` in NHWC format:\n\n```python\nimg = nets.utils.load_img('cat.png', target_size=256, crop_size=224)\nassert img.shape == (1, 224, 224, 3)\n```\n\nOnce your network is created, you can run it with regular TensorFlow APIs 😊 because all the networks in TensorNets always return `tf.Tensor`. Using pre-trained weights and pre-processing is as easy as calling [`pretrained()`](tensornets\u002Fpretrained.py) and [`preprocess()`](tensornets\u002Fpreprocess.py) to reproduce the original results:\n\n```python\nwith tf.Session() as sess:\n    img = model.preprocess(img)  # equivalent to img = nets.preprocess(model, img)\n    sess.run(model.pretrained())  # equivalent to nets.pretrained(model)\n    preds = sess.run(model, {inputs: img})\n```\n\nYou can see the most probable classes:\n\n```python\nprint(nets.utils.decode_predictions(preds, top=2)[0])\n[(u'n02124075', u'Egyptian_cat', 0.28067636), (u'n02127052', u'lynx', 0.16826575)]\n```\n\nYou can also easily obtain values of intermediate layers with `middles()` and `outputs()`:\n\n```python\nwith tf.Session() as sess:\n    img = model.preprocess(img)\n    sess.run(model.pretrained())\n    middles = sess.run(model.middles(), {inputs: img})\n    outputs = sess.run(model.outputs(), {inputs: img})\n\nmodel.print_middles()\nassert middles[0].shape == (1, 56, 56, 256)\nassert middles[-1].shape == (1, 7, 7, 2048)\n\nmodel.print_outputs()\nassert sum(sum((outputs[-1] - preds) ** 2)) \u003C 1e-8\n```\n\nWith `load()` and `save()`, your weight values can be saved and restored:\n\n```python\nwith tf.Session() as sess:\n    model.init()\n    # ... 
your training ...\n    model.save('test.npz')\n\nwith tf.Session() as sess:\n    model.load('test.npz')\n    # ... your deployment ...\n```\n\nTensorNets enables us to deploy well-known architectures and benchmark those results faster ⚡️. For more information, you can check out the lists of [utilities](#utilities), [examples](#examples), and [architectures](#performance).\n\n## Object detection example\n\nEach object detection model **can be coupled with any network in TensorNets** (see [performance](#object-detection)) and takes two arguments: a placeholder and a function acting as a stem layer. Here is an example of `YOLOv2` for PASCAL VOC:\n\n```python\nimport tensorflow as tf\nimport tensornets as nets\n\ninputs = tf.placeholder(tf.float32, [None, 416, 416, 3])\nmodel = nets.YOLOv2(inputs, nets.Darknet19)\n\nimg = nets.utils.load_img('cat.png')\n\nwith tf.Session() as sess:\n    sess.run(model.pretrained())\n    preds = sess.run(model, {inputs: model.preprocess(img)})\n    boxes = model.get_boxes(preds, img.shape[1:3])\n```\n\nLike other models, a detection model also returns `tf.Tensor` as its output. You can see the bounding box predictions `(x1, y1, x2, y2, score)` by using `model.get_boxes(model_output, original_img_shape)` and visualize the results:\n\n```python\nfrom tensornets.datasets import voc\nprint(\"%s: %s\" % (voc.classnames[7], boxes[7][0]))  # 7 is cat\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nbox = boxes[7][0]\nplt.imshow(img[0].astype(np.uint8))\nplt.gca().add_patch(plt.Rectangle(\n    (box[0], box[1]), box[2] - box[0], box[3] - box[1],\n    fill=False, edgecolor='r', linewidth=2))\nplt.show()\n```\n\nMore detection examples such as FasterRCNN on VOC2007 are [here](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets-examples\u002Fblob\u002Fmaster\u002Ftest_all_voc_models.ipynb) 😎. 
Note that:\n\n- APIs of detection models are slightly different:\n  * `YOLOv3`: `sess.run(model.preds, {inputs: img})`,\n  * `YOLOv2`: `sess.run(model, {inputs: img})`,\n  * `FasterRCNN`: `sess.run(model, {inputs: img, model.scales: scale})`,\n\n- `FasterRCNN` requires `roi_pooling`:\n  * `git clone https:\u002F\u002Fgithub.com\u002Fdeepsense-io\u002Froi-pooling && cd roi-pooling && vi roi_pooling\u002FMakefile` and edit according to [here](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Fissues\u002F13607#issuecomment-335530430),\n  * `python setup.py install`.\n\n## Utilities\n\nBesides `pretrained()` and `preprocess()`, the output `tf.Tensor` provides the following useful methods:\n\n- `logits`: returns the `tf.Tensor` logits (the values before the softmax),\n- `middles()` (=`get_middles()`): returns a list of all the representative `tf.Tensor` end-points,\n- `outputs()` (=`get_outputs()`): returns a list of all the `tf.Tensor` end-points,\n- `weights()` (=`get_weights()`): returns a list of all the `tf.Tensor` weight matrices,\n- `summary()` (=`print_summary()`): prints the numbers of layers, weight matrices, and parameters,\n- `print_middles()`: prints all the representative end-points,\n- `print_outputs()`: prints all the end-points,\n- `print_weights()`: prints all the weight matrices.\n\n\n\u003Cdetails>\n\u003Csummary>Example outputs of print methods are:\u003C\u002Fsummary>\n\n```\n>>> model.print_middles()\nScope: resnet50\nconv2\u002Fblock1\u002Fout:0 (?, 56, 56, 256)\nconv2\u002Fblock2\u002Fout:0 (?, 56, 56, 256)\nconv2\u002Fblock3\u002Fout:0 (?, 56, 56, 256)\nconv3\u002Fblock1\u002Fout:0 (?, 28, 28, 512)\nconv3\u002Fblock2\u002Fout:0 (?, 28, 28, 512)\nconv3\u002Fblock3\u002Fout:0 (?, 28, 28, 512)\nconv3\u002Fblock4\u002Fout:0 (?, 28, 28, 512)\nconv4\u002Fblock1\u002Fout:0 (?, 14, 14, 1024)\n...\n\n>>> model.print_outputs()\nScope: resnet50\nconv1\u002Fpad:0 (?, 230, 230, 3)\nconv1\u002Fconv\u002FBiasAdd:0 (?, 112, 112, 
64)\nconv1\u002Fbn\u002Fbatchnorm\u002Fadd_1:0 (?, 112, 112, 64)\nconv1\u002Frelu:0 (?, 112, 112, 64)\npool1\u002Fpad:0 (?, 114, 114, 64)\npool1\u002FMaxPool:0 (?, 56, 56, 64)\nconv2\u002Fblock1\u002F0\u002Fconv\u002FBiasAdd:0 (?, 56, 56, 256)\nconv2\u002Fblock1\u002F0\u002Fbn\u002Fbatchnorm\u002Fadd_1:0 (?, 56, 56, 256)\nconv2\u002Fblock1\u002F1\u002Fconv\u002FBiasAdd:0 (?, 56, 56, 64)\nconv2\u002Fblock1\u002F1\u002Fbn\u002Fbatchnorm\u002Fadd_1:0 (?, 56, 56, 64)\nconv2\u002Fblock1\u002F1\u002Frelu:0 (?, 56, 56, 64)\n...\n\n>>> model.print_weights()\nScope: resnet50\nconv1\u002Fconv\u002Fweights:0 (7, 7, 3, 64)\nconv1\u002Fconv\u002Fbiases:0 (64,)\nconv1\u002Fbn\u002Fbeta:0 (64,)\nconv1\u002Fbn\u002Fgamma:0 (64,)\nconv1\u002Fbn\u002Fmoving_mean:0 (64,)\nconv1\u002Fbn\u002Fmoving_variance:0 (64,)\nconv2\u002Fblock1\u002F0\u002Fconv\u002Fweights:0 (1, 1, 64, 256)\nconv2\u002Fblock1\u002F0\u002Fconv\u002Fbiases:0 (256,)\nconv2\u002Fblock1\u002F0\u002Fbn\u002Fbeta:0 (256,)\nconv2\u002Fblock1\u002F0\u002Fbn\u002Fgamma:0 (256,)\n...\n\n>>> model.summary()\nScope: resnet50\nTotal layers: 54\nTotal weights: 320\nTotal parameters: 25,636,712\n```\n\u003C\u002Fdetails>\n\n## Examples\n\n- Comparison of different networks:\n\n```python\ninputs = tf.placeholder(tf.float32, [None, 224, 224, 3])\nmodels = [\n    nets.MobileNet75(inputs),\n    nets.MobileNet100(inputs),\n    nets.SqueezeNet(inputs),\n]\n\nimg = utils.load_img('cat.png', target_size=256, crop_size=224)\nimgs = nets.preprocess(models, img)\n\nwith tf.Session() as sess:\n    nets.pretrained(models)\n    for (model, img) in zip(models, imgs):\n        preds = sess.run(model, {inputs: img})\n        print(utils.decode_predictions(preds, top=2)[0])\n```\n\n- Transfer learning:\n\n```python\ninputs = tf.placeholder(tf.float32, [None, 224, 224, 3])\noutputs = tf.placeholder(tf.float32, [None, 50])\nmodel = nets.DenseNet169(inputs, is_training=True, classes=50)\n\nloss = tf.losses.softmax_cross_entropy(outputs, 
model.logits)\ntrain = tf.train.AdamOptimizer(learning_rate=1e-5).minimize(loss)\n\nwith tf.Session() as sess:\n    nets.pretrained(model)\n    for (x, y) in your_NumPy_data:  # the NHWC and one-hot format\n        sess.run(train, {inputs: x, outputs: y})\n```\n\n- Using multi-GPU:\n\n```python\ninputs = tf.placeholder(tf.float32, [None, 224, 224, 3])\nmodels = []\n\nwith tf.device('gpu:0'):\n    models.append(nets.ResNeXt50(inputs))\n\nwith tf.device('gpu:1'):\n    models.append(nets.DenseNet201(inputs))\n\nfrom tensornets.preprocess import fb_preprocess\nimg = utils.load_img('cat.png', target_size=256, crop_size=224)\nimg = fb_preprocess(img)\n\nwith tf.Session() as sess:\n    nets.pretrained(models)\n    preds = sess.run(models, {inputs: img})\n    for pred in preds:\n        print(utils.decode_predictions(pred, top=2)[0])\n```\n\n## Performance\n\n### Image classification\n\n- The top-k accuracies were obtained with TensorNets on **ImageNet validation set** and may slightly differ from the original ones.\n  * Input: input size fed into models\n  * Top-1: single center crop, top-1 accuracy\n  * Top-5: single center crop, top-5 accuracy\n  * MAC: rounded the number of float operations by using [tf.profiler](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Fblob\u002Fmaster\u002Ftensorflow\u002Fcore\u002Fprofiler\u002Fg3doc\u002Fprofile_model_architecture.md)\n  * Size: rounded the number of parameters (w\u002F fully-connected layers)\n  * Stem: rounded the number of parameters (w\u002Fo fully-connected layers)\n- The computation times were measured on NVIDIA Tesla P100 (3584 cores, 16 GB global memory) with cuDNN 6.0 and CUDA 8.0.\n  * Speed: milliseconds for inferences of 100 images\n- The summary plot is generated by [this script](examples\u002Fgenerate_summary.py).\n\n|              | Input | Top-1       | Top-5       | MAC    | Size   | Stem   | Speed | References                                                                                  
                                                                                                                                                                                                                                                                                                                                                       |\n|--------------|-------|-------------|-------------|--------|--------|--------|-------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [ResNet50](tensornets\u002Fresnets.py#L85)             |  224  | 74.874      | 92.018      |  51.0M | 25.6M  | 23.6M  | 195.4 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v1.py) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnet.lua) \u003Cbr \u002F> [[caffe]](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks\u002Fblob\u002Fmaster\u002Fprototxt\u002FResNet-50-deploy.prototxt) [[keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fresnet50.py) |\n| [ResNet101](tensornets\u002Fresnets.py#L113)           |  224  | 76.420      | 92.786      |  88.9M | 44.7M  | 42.7M  | 311.7 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v1.py) 
[[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnet.lua) \u003Cbr \u002F> [[caffe]](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks\u002Fblob\u002Fmaster\u002Fprototxt\u002FResNet-101-deploy.prototxt) |\n| [ResNet152](tensornets\u002Fresnets.py#L141)           |  224  | 76.604      | 93.118      | 120.1M | 60.4M  | 58.4M  | 439.1 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v1.py) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnet.lua) \u003Cbr \u002F> [[caffe]](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks\u002Fblob\u002Fmaster\u002Fprototxt\u002FResNet-152-deploy.prototxt) |\n| [ResNet50v2](tensornets\u002Fresnets.py#L98)           |  299  | 75.960      | 93.034      |  51.0M | 25.6M  | 23.6M  | 209.7 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNet101v2](tensornets\u002Fresnets.py#L126)         |  299  | 77.234      | 93.816      |  88.9M | 44.7M  | 42.6M  | 326.2 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNet152v2](tensornets\u002Fresnets.py#L154)         |  299  | 78.032      | 94.162      | 120.1M | 60.4M  | 58.3M  
| 455.2 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNet200v2](tensornets\u002Fresnets.py#L169)         |  224  | 78.286      | 94.152      | 129.0M | 64.9M  | 62.9M  | 618.3 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNeXt50c32](tensornets\u002Fresnets.py#L184)        |  224  | 77.740      | 93.810      |  49.9M | 25.1M  | 23.0M  | 267.4 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnext.lua) |\n| [ResNeXt101c32](tensornets\u002Fresnets.py#L200)       |  224  | 78.730      | 94.294      |  88.1M | 44.3M  | 42.3M  | 427.9 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnext.lua) |\n| [ResNeXt101c64](tensornets\u002Fresnets.py#L216)       |  224  | 79.494      | 94.592      |   0.0M | 83.7M  | 81.6M  | 877.8 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431) [[torch-fb]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnext.lua) |\n| [WideResNet50](tensornets\u002Fresnets.py#L232)        |  224  | 78.018      | 93.934      | 137.6M | 69.0M  | 66.9M  | 358.1 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07146) 
[[torch]](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fwide-residual-networks\u002Fblob\u002Fmaster\u002Fpretrained\u002Fwide-resnet.lua) |\n| [Inception1](tensornets\u002Finceptions.py#L62)        |  224  | 66.840      | 87.676      |  14.0M | 7.0M   | 6.0M   | 165.1 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.4842) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v1.py) [[caffe-zoo]](https:\u002F\u002Fgithub.com\u002FBVLC\u002Fcaffe\u002Fblob\u002Fmaster\u002Fmodels\u002Fbvlc_googlenet\u002Fdeploy.prototxt) |\n| [Inception2](tensornets\u002Finceptions.py#L100)       |  224  | 74.680      | 92.156      |  22.3M | 11.2M  | 10.2M  | 134.3 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03167) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v2.py) |\n| [Inception3](tensornets\u002Finceptions.py#L137)       |  299  | 77.946      | 93.758      |  47.6M | 23.9M  | 21.8M  | 314.6 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.00567) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v3.py) [[keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Finception_v3.py) |\n| [Inception4](tensornets\u002Finceptions.py#L173)       |  299  | 80.120      | 94.978      |  85.2M | 42.7M  | 41.2M  | 582.1 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.07261) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v4.py) |\n| [InceptionResNet2](tensornets\u002Finceptions.py#L258) |  299  | 80.256      | 95.252      | 111.5M | 55.9M  | 54.3M  | 656.8 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.07261) 
[[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_resnet_v2.py) |\n| [NASNetAlarge](tensornets\u002Fnasnets.py#L101)        |  331  | 82.498      | 96.004      | 186.2M | 93.5M  | 89.5M  | 2081  | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.07012) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) |\n| [NASNetAmobile](tensornets\u002Fnasnets.py#L109)       |  224  | 74.366      | 91.854      |  15.3M | 7.7M   | 6.7M   | 165.8 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.07012) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) |\n| [PNASNetlarge](tensornets\u002Fnasnets.py#L148)        |  331  | 82.634      | 96.050      | 171.8M | 86.2M  | 81.9M  | 1978  | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00559) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) |\n| [VGG16](tensornets\u002Fvggs.py#L69)                   |  224  | 71.268      | 90.050      | 276.7M | 138.4M | 14.7M  | 348.4 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.1556) [[keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fvgg16.py) |\n| [VGG19](tensornets\u002Fvggs.py#L76)                   |  224  | 71.256      | 89.988      | 287.3M | 143.7M | 20.0M  | 399.8 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.1556) [[keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fvgg19.py) |\n| [DenseNet121](tensornets\u002Fdensenets.py#L64)        |  224  | 74.972      | 92.258      |  15.8M | 8.1M   | 7.0M   | 202.9 | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) [[torch]](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002FDenseNet\u002Fblob\u002Fmaster\u002Fmodels\u002Fdensenet.lua) |\n| [DenseNet169](tensornets\u002Fdensenets.py#L72)        |  224  | 76.176      | 93.176      |  28.0M | 14.3M  | 12.6M  | 219.1 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) [[torch]](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002FDenseNet\u002Fblob\u002Fmaster\u002Fmodels\u002Fdensenet.lua) |\n| [DenseNet201](tensornets\u002Fdensenets.py#L80)        |  224  | 77.320      | 93.620      |  39.6M | 20.2M  | 18.3M  | 272.0 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) [[torch]](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002FDenseNet\u002Fblob\u002Fmaster\u002Fmodels\u002Fdensenet.lua) |\n| [MobileNet25](tensornets\u002Fmobilenets.py#L277)      |  224  | 51.582      | 75.792      |   0.9M | 0.5M   | 0.2M   | 34.46 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet_v1.py) |\n| [MobileNet50](tensornets\u002Fmobilenets.py#L284)      |  224  | 64.292      | 85.624      |   2.6M | 1.3M   | 0.8M   | 52.46 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet_v1.py) |\n| [MobileNet75](tensornets\u002Fmobilenets.py#L291)      |  224  | 68.412      | 88.242      |   5.1M | 2.6M   | 1.8M   | 70.11 | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet_v1.py) |\n| [MobileNet100](tensornets\u002Fmobilenets.py#L298)     |  224  | 70.424      | 89.504      |   8.4M | 4.3M   | 3.2M   | 83.41 | 
[[paper]](https://arxiv.org/abs/1704.04861) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.py) |
| [MobileNet35v2](tensornets/mobilenets.py#L305) | 224 | 60.086 | 82.432 | 3.3M | 1.7M | 0.4M | 57.04 | [[paper]](https://arxiv.org/abs/1801.04381) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v2.py) |
| [MobileNet50v2](tensornets/mobilenets.py#L312) | 224 | 65.194 | 86.062 | 3.9M | 2.0M | 0.7M | 64.35 | [[paper]](https://arxiv.org/abs/1801.04381) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v2.py) |
| [MobileNet75v2](tensornets/mobilenets.py#L319) | 224 | 69.532 | 89.176 | 5.2M | 2.7M | 1.4M | 88.68 | [[paper]](https://arxiv.org/abs/1801.04381) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v2.py) |
| [MobileNet100v2](tensornets/mobilenets.py#L326) | 224 | 71.336 | 90.142 | 6.9M | 3.5M | 2.3M | 93.82 | [[paper]](https://arxiv.org/abs/1801.04381) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v2.py) |
| [MobileNet130v2](tensornets/mobilenets.py#L333) | 224 | 74.680 | 92.122 | 10.7M | 5.4M | 3.8M | 130.4 | [[paper]](https://arxiv.org/abs/1801.04381) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v2.py) |
| [MobileNet140v2](tensornets/mobilenets.py#L340) | 224 | 75.230 | 92.422 | 12.1M | 6.2M | 4.4M | 132.9 | [[paper]](https://arxiv.org/abs/1801.04381) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v2.py) |
| [75v3large](tensornets/mobilenets.py#L347) | 224 | 73.754 | 91.618 | 7.9M | 4.0M | 2.7M | 79.73 | [[paper]](https://arxiv.org/abs/1905.02244) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py) |
| [100v3large](tensornets/mobilenets.py#L355) | 224 | 75.790 | 92.840 | 27.3M | 5.5M | 4.2M | 94.71 | [[paper]](https://arxiv.org/abs/1905.02244) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py) |
| [100v3largemini](tensornets/mobilenets.py#L363) | 224 | 72.706 | 90.930 | 7.8M | 3.9M | 2.7M | 70.57 | [[paper]](https://arxiv.org/abs/1905.02244) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py) |
| [75v3small](tensornets/mobilenets.py#L371) | 224 | 66.138 | 86.534 | 4.1M | 2.1M | 1.0M | 37.78 | [[paper]](https://arxiv.org/abs/1905.02244) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py) |
| [100v3small](tensornets/mobilenets.py#L379) | 224 | 68.318 | 87.942 | 5.1M | 2.6M | 1.5M | 42.00 | [[paper]](https://arxiv.org/abs/1905.02244) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py) |
| [100v3smallmini](tensornets/mobilenets.py#L387) | 224 | 63.440 | 84.646 | 4.1M | 2.1M | 1.0M | 29.65 | [[paper]](https://arxiv.org/abs/1905.02244) [[tf-slim]](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py) |
| [EfficientNetB0](tensornets/efficientnets.py#L131) | 224 | 77.012 | 93.338 | 26.2M | 5.3M | 4.0M | 147.1 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [EfficientNetB1](tensornets/efficientnets.py#L139) | 240 | 79.040 | 94.284 | 15.4M | 7.9M | 6.6M | 217.3 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [EfficientNetB2](tensornets/efficientnets.py#L147) | 260 | 80.064 | 94.862 | 18.1M | 9.2M | 7.8M | 296.4 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [EfficientNetB3](tensornets/efficientnets.py#L155) | 300 | 81.384 | 95.586 | 24.2M | 12.3M | 10.8M | 482.7 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [EfficientNetB4](tensornets/efficientnets.py#L163) | 380 | 82.588 | 96.094 | 38.4M | 19.5M | 17.7M | 959.5 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [EfficientNetB5](tensornets/efficientnets.py#L171) | 456 | 83.496 | 96.590 | 60.4M | 30.6M | 28.5M | 1872 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [EfficientNetB6](tensornets/efficientnets.py#L179) | 528 | 83.772 | 96.762 | 85.5M | 43.3M | 41.0M | 3503 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [EfficientNetB7](tensornets/efficientnets.py#L187) | 600 | 84.088 | 96.740 | 131.9M | 66.7M | 64.1M | 6149 | [[paper]](https://arxiv.org/abs/1905.11946) [[tf-tpu]](https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py) |
| [SqueezeNet](tensornets/squeezenets.py#L46) | 224 | 54.434 | 78.040 | 2.5M | 1.2M | 0.7M | 71.43 | [[paper]](https://arxiv.org/abs/1602.07360) [[caffe]](https://github.com/DeepScale/SqueezeNet/blob/master/SqueezeNet_v1.1/train_val.prototxt) 
|

![summary](https://oss.gittoolsai.com/images/taehoonlee_tensornets_readme_5cdfe6c64d16.png)

### Object detection

- The object detection models can be coupled with any network, but mAPs could be measured only for the models with pre-trained weights. Note that:
  * `YOLOv3VOC` was trained by taehoonlee with [this recipe](https://github.com/pjreddie/darknet/blob/master/cfg/yolov3-voc.cfg) modified as `max_batches=70000, steps=40000,60000`,
  * `YOLOv2VOC` is equivalent to `YOLOv2(inputs, Darknet19)`,
  * `TinyYOLOv2VOC`: `TinyYOLOv2(inputs, TinyDarknet19)`,
  * `FasterRCNN_ZF_VOC`: `FasterRCNN(inputs, ZF)`,
  * `FasterRCNN_VGG16_VOC`: `FasterRCNN(inputs, VGG16, stem_out='conv5/3')`.
- The mAPs were obtained with TensorNets and may slightly differ from the original ones. The test input sizes were the numbers reported as the best in the papers:
  * `YOLOv3`, `YOLOv2`: 416x416
  * `FasterRCNN`: min\_shorter\_side=600, max\_longer\_side=1000
- The computation times were measured on an NVIDIA Tesla P100 (3584 cores, 16 GB global memory) with cuDNN 6.0 and CUDA 8.0.
  * Size: the rounded number of parameters
  * Speed: milliseconds for a single network inference of a 416x416 or 608x608 image
  * FPS: 1000 / speed

| PASCAL VOC2007 test | mAP | Size | Speed | FPS | References |
|---------------------|-----|------|-------|-----|------------|
| [YOLOv3VOC (416)](tensornets/references/yolos.py#L177) | 0.7423 | 62M | 24.09 | 41.51 | [[paper]](https://pjreddie.com/media/files/papers/YOLOv3.pdf) [[darknet]](https://pjreddie.com/darknet/yolo/) [[darkflow]](https://github.com/thtrieu/darkflow) |
| [YOLOv2VOC (416)](tensornets/references/yolos.py#L205) | 0.7320 | 51M | 14.75 | 67.80 | [[paper]](https://arxiv.org/abs/1612.08242) [[darknet]](https://pjreddie.com/darknet/yolov2/) [[darkflow]](https://github.com/thtrieu/darkflow) |
| [TinyYOLOv2VOC (416)](tensornets/references/yolos.py#L241) | 0.5303 | 16M | 6.534 | 153.0 | [[paper]](https://arxiv.org/abs/1612.08242) [[darknet]](https://pjreddie.com/darknet/yolov2/) [[darkflow]](https://github.com/thtrieu/darkflow) |
| [FasterRCNN\_ZF\_VOC](tensornets/references/rcnns.py#L150) | 0.4466 | 59M | 241.4 | 4.143 | [[paper]](https://arxiv.org/abs/1506.01497) [[caffe]](https://github.com/rbgirshick/py-faster-rcnn) [[roi-pooling]](https://github.com/deepsense-ai/roi-pooling) |
| [FasterRCNN\_VGG16\_VOC](tensornets/references/rcnns.py#L186) | 0.6872 | 137M | 300.7 | 3.325 | [[paper]](https://arxiv.org/abs/1506.01497) [[caffe]](https://github.com/rbgirshick/py-faster-rcnn) [[roi-pooling]](https://github.com/deepsense-ai/roi-pooling) |

| MS COCO val2014 | mAP | Size | Speed | FPS | References |
|-----------------|-----|------|-------|-----|------------|
| [YOLOv3COCO (608)](tensornets/references/yolos.py#L167) | 0.6016 | 62M | 60.66 | 16.49 | [[paper]](https://pjreddie.com/media/files/papers/YOLOv3.pdf) [[darknet]](https://pjreddie.com/darknet/yolo/) [[darkflow]](https://github.com/thtrieu/darkflow) |
| [YOLOv3COCO (416)](tensornets/references/yolos.py#L167) | 0.6028 | 62M | 40.23 | 24.85 | [[paper]](https://pjreddie.com/media/files/papers/YOLOv3.pdf) [[darknet]](https://pjreddie.com/darknet/yolo/) [[darkflow]](https://github.com/thtrieu/darkflow) |
| [YOLOv2COCO (608)](tensornets/references/yolos.py#L187) | 0.5189 | 51M | 45.88 | 21.80 | [[paper]](https://arxiv.org/abs/1612.08242) [[darknet]](https://pjreddie.com/darknet/yolov2/) [[darkflow]](https://github.com/thtrieu/darkflow) |
| [YOLOv2COCO (416)](tensornets/references/yolos.py#L187) | 0.4922 | 51M | 21.66 | 46.17 | [[paper]](https://arxiv.org/abs/1612.08242) [[darknet]](https://pjreddie.com/darknet/yolov2/) [[darkflow]](https://github.com/thtrieu/darkflow) |

## News 📰

- The six variants of MobileNetv3 are released, [12 Mar 2020](https://github.com/taehoonlee/tensornets/pull/58).
- The eight variants of EfficientNet are released, [28 Jan 2020](https://github.com/taehoonlee/tensornets/pull/56).
- TensorNets is now available on TF 2, [23 Jan 2020](https://github.com/taehoonlee/tensornets/pull/55).
- MS COCO utils are released, [9 Jul 2018](https://github.com/taehoonlee/tensornets/commit/4a34243891e6649b72b9c0b7114b8f3d51d1d779).
- PNASNetlarge is released, [12 May 2018](https://github.com/taehoonlee/tensornets/commit/e2e0f0f7791731d3b7dfa989cae569c15a22cdd6).
- The six variants of MobileNetv2 are released, [5 May 2018](https://github.com/taehoonlee/tensornets/commit/fb429b6637f943875249dff50f4bc6220d9d50bf).
- YOLOv3 for COCO and VOC are released, [4 April 
2018](https://github.com/taehoonlee/tensornets/commit/d8b2d8a54dc4b775a174035da63561028deb6624).
- Generic object detection models for YOLOv2 and FasterRCNN are released, [26 March 2018](https://github.com/taehoonlee/tensornets/commit/67915e659d2097a96c82ba7740b9e43a8c69858d).

## Future work 🔥

- Add training code.
- Add image classification models.
  * [PolyNet: A Pursuit of Structural Diversity in Very Deep Networks](https://arxiv.org/abs/1611.05725v2), CVPR 2017, Top-5 4.25%
  * [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507v2), CVPR 2018, Top-5 3.79%
  * [GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism](https://arxiv.org/abs/1811.06965), arXiv 2018, Top-5 3.0%
- Add object detection models (MaskRCNN, SSD).
- Add image segmentation models (FCN, UNet).
- Add image datasets (OpenImages).
- Add style transfer examples which can be coupled with any network in TensorNets.
- Add speech and language models with representative datasets (WaveNet, ByteNet).

# TensorNets [![Build Status](https://travis-ci.org/taehoonlee/tensornets.svg?branch=master)](https://travis-ci.org/taehoonlee/tensornets)

High-level network definitions with pre-trained weights in [TensorFlow](https://github.com/tensorflow/tensorflow) (tested with `2.1.0 >=` TF `>= 1.4.0`).

## Guiding principles

- **Applicability.** Many people already have their own machine learning workflows and want to plug a new model into them. TensorNets can be easily integrated because it is designed as simple functional interfaces without custom classes.
- **Manageability.** Models are written in `tf.contrib.layers`, which is lightweight like PyTorch and Keras, and allows easy access to every weight and output endpoint. It is also easy to deploy and expand the collection of preprocessings and pre-trained weights.
- **Readability.** With the latest TensorFlow APIs, the code can be more modular with fewer indentations. For example, all the Inception variants are implemented in about 500 lines in [TensorNets](tensornets/inceptions.py), compared to more than 2,000 lines in the [official TensorFlow models](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v3.py).
- **Reproducibility.** 
You can always reproduce the original results with the [simple APIs](#utilities), including feature extraction. Also, you don't need to worry about the TensorFlow version, because compatibility with different versions of TensorFlow has been checked with [Travis](https://travis-ci.org/taehoonlee/tensornets/builds).

## Installation

You can install TensorNets from PyPI (`pip install tensornets`) or directly from GitHub (`pip install git+https://github.com/taehoonlee/tensornets.git`).

## A quick example

Each network (see the [full list](#image-classification)) is not a custom class but a function that takes and returns `tf.Tensor` as its input and output. Here is an example of `ResNet50`:

```python
import tensorflow as tf
# import tensorflow.compat.v1 as tf  # for TF 2
import tensornets as nets
# tf.disable_v2_behavior()  # for TF 2

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
model = nets.ResNet50(inputs)

assert isinstance(model, tf.Tensor)
```

You can load an example image with `utils.load_img`, which returns an `np.ndarray` in NHWC format:

```python
img = nets.utils.load_img('cat.png', target_size=256, crop_size=224)
assert img.shape == (1, 224, 224, 3)
```

Once your network has been created, it can be run with the regular TensorFlow APIs 😊 because all the networks in TensorNets always return `tf.Tensor`. Using pre-trained weights and preprocessing is as easy as calling [`pretrained()`](tensornets/pretrained.py) and [`preprocess()`](tensornets/preprocess.py) to reproduce the original results:

```python
with tf.Session() as sess:
    img = model.preprocess(img)  # equivalent to img = nets.preprocess(model, img)
    sess.run(model.pretrained())  # equivalent to nets.pretrained(model)
    preds = sess.run(model, {inputs: img})
```

You can see the most probable classes:

```python
print(nets.utils.decode_predictions(preds, top=2)[0])
[(u'n02124075', u'Egyptian_cat', 0.28067636), (u'n02127052', u'lynx', 0.16826575)]
```

You can also easily obtain the values of intermediate layers with `middles()` and `outputs()`:

```python
with tf.Session() as sess:
    img = model.preprocess(img)
    sess.run(model.pretrained())
    middles = sess.run(model.middles(), {inputs: img})
    outputs = sess.run(model.outputs(), {inputs: img})

model.print_middles()
assert middles[0].shape == (1, 56, 56, 256)
assert middles[-1].shape == (1, 7, 7, 2048)

model.print_outputs()
assert sum(sum((outputs[-1] - preds) ** 2)) 
< 1e-8
```

With `load()` and `save()`, your weight values can be saved and restored:

```python
with tf.Session() as sess:
    model.init()
    # ... your training ...
    model.save('test.npz')

with tf.Session() as sess:
    model.load('test.npz')
    # ... your deployment ...
```

TensorNets enables us to deploy well-known architectures and benchmark them faster ⚡️. For more information, see the [utilities](#utilities), [examples](#examples), and the list of [architectures](#performance).

## Object detection example

Each object detection model **can be coupled with any network in TensorNets** (see [performance](#object-detection)) and takes two arguments: a placeholder and a function acting as a stem layer. Here is a `YOLOv2` example for PASCAL VOC:

```python
import tensorflow as tf
import tensornets as nets

inputs = tf.placeholder(tf.float32, [None, 416, 416, 3])
model = nets.YOLOv2(inputs, nets.Darknet19)

img = nets.utils.load_img('cat.png')

with tf.Session() as sess:
    sess.run(model.pretrained())
    preds = sess.run(model, {inputs: model.preprocess(img)})
    boxes = model.get_boxes(preds, img.shape[1:3])
```

Like the other models, a detection model also returns `tf.Tensor` as its output. You can see the bounding-box predictions `(x1, y1, x2, y2, score)` with `model.get_boxes(model_output, original_img_shape)` and visualize the results:

```python
from tensornets.datasets import voc
print("%s: %s" % (voc.classnames[7], boxes[7][0]))  # 7 is cat

import numpy as np
import matplotlib.pyplot as plt
box = boxes[7][0]
plt.imshow(img[0].astype(np.uint8))
plt.gca().add_patch(plt.Rectangle(
    (box[0], box[1]), box[2] - box[0], box[3] - box[1],
    fill=False, edgecolor='r', linewidth=2))
plt.show()
```

More detection examples, such as FasterRCNN on VOC2007, can be found [here](https://github.com/taehoonlee/tensornets-examples/blob/master/test_all_voc_models.ipynb) 😎. Note that:

- APIs of the detection models are slightly different:
  * `YOLOv3`: `sess.run(model.preds, {inputs: img})`,
  * `YOLOv2`: `sess.run(model, {inputs: img})`,
  * `FasterRCNN`: `sess.run(model, {inputs: img, model.scales: scale})`,

- `FasterRCNN` requires `roi_pooling`:
  * `git clone https://github.com/deepsense-io/roi-pooling && cd roi-pooling && vi roi_pooling/Makefile` 
and edit it as described [here](https://github.com/tensorflow/tensorflow/issues/13607#issuecomment-335530430),
  * `python setup.py install`.

## Utilities

Besides `pretrained()` and `preprocess()`, the output `tf.Tensor` provides the following useful methods:

- `logits`: returns the logits (the values before the softmax) as a `tf.Tensor`,
- `middles()` (=`get_middles()`): returns a list of all the representative `tf.Tensor` endpoints,
- `outputs()` (=`get_outputs()`): returns a list of all the `tf.Tensor` endpoints,
- `weights()` (=`get_weights()`): returns a list of all the `tf.Tensor` weight matrices,
- `summary()` (=`print_summary()`): prints the numbers of layers, weight matrices, and parameters,
- `print_middles()`: prints all the representative endpoints,
- `print_outputs()`: prints all the endpoints,
- `print_weights()`: prints all the weight matrices.

<details>
<summary>Example outputs of the print methods are:</summary>

```
>>> model.print_middles()
Scope: resnet50
conv2/block1/out:0 (?, 56, 56, 256)
conv2/block2/out:0 (?, 56, 56, 256)
conv2/block3/out:0 (?, 56, 56, 256)
conv3/block1/out:0 (?, 28, 28, 512)
conv3/block2/out:0 (?, 28, 28, 512)
conv3/block3/out:0 (?, 28, 28, 512)
conv3/block4/out:0 (?, 28, 28, 512)
conv4/block1/out:0 (?, 14, 14, 1024)
...

>>> model.print_outputs()
Scope: resnet50
conv1/pad:0 (?, 230, 230, 3)
conv1/conv/BiasAdd:0 (?, 112, 112, 64)
conv1/bn/batchnorm/add_1:0 (?, 112, 112, 64)
conv1/relu:0 (?, 112, 112, 64)
pool1/pad:0 (?, 114, 114, 64)
pool1/MaxPool:0 (?, 56, 56, 64)
conv2/block1/0/conv/BiasAdd:0 (?, 56, 56, 256)
conv2/block1/0/bn/batchnorm/add_1:0 (?, 56, 56, 256)
conv2/block1/1/conv/BiasAdd:0 (?, 56, 56, 64)
conv2/block1/1/bn/batchnorm/add_1:0 (?, 56, 56, 64)
conv2/block1/1/relu:0 (?, 56, 56, 64)
...

>>> model.print_weights()
Scope: resnet50
conv1/conv/weights:0 (7, 7, 3, 64)
conv1/conv/biases:0 (64,)
conv1/bn/beta:0 (64,)
conv1/bn/gamma:0 (64,)
conv1/bn/moving_mean:0 
(64,)
conv1/bn/moving_variance:0 (64,)
conv2/block1/0/conv/weights:0 (1, 1, 64, 256)
conv2/block1/0/conv/biases:0 (256,)
conv2/block1/0/bn/beta:0 (256,)
conv2/block1/0/bn/gamma:0 (256,)
...

>>> model.summary()
Scope: resnet50
Total layers: 54
Total weights: 320
Total parameters: 25,636,712
```
</details>

## Examples

- Comparison of different networks:

```python
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
models = [
    nets.MobileNet75(inputs),
    nets.MobileNet100(inputs),
    nets.SqueezeNet(inputs),
]

img = utils.load_img('cat.png', target_size=256, crop_size=224)
imgs = nets.preprocess(models, img)

with tf.Session() as sess:
    nets.pretrained(models)
    for (model, img) in zip(models, imgs):
        preds = sess.run(model, {inputs: img})
        print(utils.decode_predictions(preds, top=2)[0])
```

- Transfer learning:

```python
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
outputs = tf.placeholder(tf.float32, [None, 50])
model = nets.DenseNet169(inputs, is_training=True, classes=50)

loss = tf.losses.softmax_cross_entropy(outputs, model.logits)
train = tf.train.AdamOptimizer(learning_rate=1e-5).minimize(loss)

with tf.Session() as sess:
    nets.pretrained(model)
    for (x, y) in your_NumPy_data:  # the NHWC and one-hot format
        sess.run(train, {inputs: x, outputs: y})
```

- Using multiple GPUs:

```python
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
models = []

with tf.device('gpu:0'):
    models.append(nets.ResNeXt50(inputs))

with tf.device('gpu:1'):
    models.append(nets.DenseNet201(inputs))

from tensornets.preprocess import fb_preprocess
img = utils.load_img('cat.png', target_size=256, crop_size=224)
img = fb_preprocess(img)

with tf.Session() as sess:
    nets.pretrained(models)
    preds = sess.run(models, {inputs: img})
    for pred in preds:
        
print(utils.decode_predictions(pred, top=2)[0])
```

## Performance

### Image classification

- The top-k accuracies were obtained with TensorNets on the **ImageNet validation set** and may slightly differ from the original ones.
  * Input: the input size fed into the model
  * Top-1: single center crop, top-1 accuracy
  * Top-5: single center crop, top-5 accuracy
  * MAC: the rounded number of float operations, measured with [tf.profiler](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/profiler/g3doc/profile_model_architecture.md)
  * Size: the rounded number of parameters (including fully-connected layers)
  * Stem: the rounded number of parameters (excluding fully-connected layers)
- The computation times were measured on an NVIDIA Tesla P100 (3584 cores, 16 GB global memory) with cuDNN 6.0 and CUDA 8.0.
  * Speed: milliseconds for inferences of 100 images
- The summary plot is generated by [this script](examples/generate_summary.py).

|  | Input | Top-1 | Top-5 | MAC | Size | Stem | Speed | References |
|--|-------|-------|-------|-----|------|------|-------|------------|
| [ResNet50](tensornets/resnets.py#L85) | 224 | 74.874 | 92.018 | 51.0M | 25.6M | 23.6M | 195.4 | [[paper]](https://arxiv.org/abs/1512.03385) 
[[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v1.py) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnet.lua) \u003Cbr \u002F> [[Caffe]](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks\u002Fblob\u002Fmaster\u002Fprototxt\u002FResNet-50-deploy.prototxt) [[Keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fresnet50.py) |\n| [ResNet101](tensornets\u002Fresnets.py#L113)           |  224  | 76.420      | 92.786      |  88.9M | 44.7M  | 42.7M  | 311.7 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v1.py) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnet.lua) \u003Cbr \u002F> [[Caffe]](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks\u002Fblob\u002Fmaster\u002Fprototxt\u002FResNet-101-deploy.prototxt) |\n| [ResNet152](tensornets\u002Fresnets.py#L141)           |  224  | 76.604      | 93.118      | 120.1M | 60.4M  | 58.4M  | 439.1 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v1.py) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnet.lua) \u003Cbr \u002F> [[Caffe]](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks\u002Fblob\u002Fmaster\u002Fprototxt\u002FResNet-152-deploy.prototxt) |\n| [ResNet50v2](tensornets\u002Fresnets.py#L98)           |  299  | 75.960      | 93.034      |  51.0M | 25.6M  | 23.6M  | 209.7 | 
[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNet101v2](tensornets\u002Fresnets.py#L126)         |  299  | 77.234      | 93.816      |  88.9M | 44.7M  | 42.6M  | 326.2 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNet152v2](tensornets\u002Fresnets.py#L154)         |  299  | 78.032      | 94.162      | 120.1M | 60.4M  | 58.3M  | 455.2 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNet200v2](tensornets\u002Fresnets.py#L169)         |  224  | 78.286      | 94.152      | 129.0M | 64.9M  | 62.9M  | 618.3 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.05027) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fresnet_v2.py) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Ffb.resnet.torch\u002Fblob\u002Fmaster\u002Fmodels\u002Fpreresnet.lua) |\n| [ResNeXt50c32](tensornets\u002Fresnets.py#L184)        |  224  | 77.740      | 93.810      |  49.9M | 25.1M  | 23.0M  | 267.4 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431) 
[[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnext.lua) |\n| [ResNeXt101c32](tensornets\u002Fresnets.py#L200)       |  224  | 78.730      | 94.294      |  88.1M | 44.3M  | 42.3M  | 427.9 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnext.lua) |\n| [ResNeXt101c64](tensornets\u002Fresnets.py#L216)       |  224  | 79.494      | 94.592      |   0.0M | 83.7M  | 81.6M  | 877.8 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431) [[PyTorch-FB]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FResNeXt\u002Fblob\u002Fmaster\u002Fmodels\u002Fresnext.lua) |\n| [WideResNet50](tensornets\u002Fresnets.py#L232)        |  224  | 78.018      | 93.934      | 137.6M | 69.0M  | 66.9M  | 358.1 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07146) [[PyTorch]](https:\u002F\u002Fgithub.com\u002Fszagoruyko\u002Fwide-residual-networks\u002Fblob\u002Fmaster\u002Fpretrained\u002Fwide-resnet.lua) |\n| [Inception1](tensornets\u002Finceptions.py#L62)        |  224  | 66.840      | 87.676      |  14.0M | 7.0M   | 6.0M   | 165.1 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.4842) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v1.py) [[Caffe-Zoo]](https:\u002F\u002Fgithub.com\u002FBVLC\u002Fcaffe\u002Fblob\u002Fmaster\u002Fmodels\u002Fbvlc_googlenet\u002Fdeploy.prototxt) |\n| [Inception2](tensornets\u002Finceptions.py#L100)       |  224  | 74.680      | 92.156      |  22.3M | 11.2M  | 10.2M  | 134.3 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.03167) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v2.py) |\n| [Inception3](tensornets\u002Finceptions.py#L137)      
 |  299  | 77.946      | 93.758      |  47.6M | 23.9M  | 21.8M  | 314.6 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.00567) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v3.py) [[Keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Finception_v3.py) |\n| [Inception4](tensornets\u002Finceptions.py#L173)       |  299  | 80.120      | 94.978      |  85.2M | 42.7M  | 41.2M  | 582.1 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.07261) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_v4.py) |\n| [InceptionResNet2](tensornets\u002Finceptions.py#L258)|  299  | 80.256      | 95.252      | 111.5M | 55.9M  | 54.3M  | 656.8 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.07261) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Finception_resnet_v2.py) |\n| [NASNetAlarge](tensornets\u002Fnasnets.py#L101)        |  331  | 82.498      | 96.004      | 186.2M | 93.5M  | 89.5M  | 2081  | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.07012) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) |\n| [NASNetAmobile](tensornets\u002Fnasnets.py#L109)       |  224  | 74.366      | 91.854      |  15.3M | 7.7M   | 6.7M   | 165.8 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.07012) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) |\n| [PNASNetlarge](tensornets\u002Fnasnets.py#L148)        |  331  | 82.634      | 96.050      | 171.8M | 86.2M  | 81.9M  | 1978  | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00559) 
[[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) |\n| [VGG16](tensornets\u002Fvggs.py#L69)                   |  224  | 71.268      | 90.050      | 276.7M | 138.4M | 14.7M  | 348.4 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.1556) [[Keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fvgg16.py) |\n| [VGG19](tensornets\u002Fvggs.py#L76)                   |  224  | 71.256      | 89.988      | 287.3M | 143.7M | 20.0M  | 399.8 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.1556) [[Keras]](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fblob\u002Fmaster\u002Fkeras\u002Fapplications\u002Fvgg19.py) |\n| [DenseNet121](tensornets\u002Fdensenets.py#L64)        |  224  | 74.972      | 92.258      |  15.8M | 8.1M   | 7.0M   | 202.9 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) [[PyTorch]](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002FDenseNet\u002Fblob\u002Fmaster\u002Fmodels\u002Fdensenet.lua) |\n| [DenseNet169](tensornets\u002Fdensenets.py#L72)        |  224  | 76.176      | 93.176      |  28.0M | 14.3M  | 12.6M  | 219.1 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) [[PyTorch]](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002FDenseNet\u002Fblobmaster\u002Fmodels\u002Fdensenet.lua) |\n| [DenseNet201](tensornets\u002Fdensenets.py#L80)        |  224  | 77.320      | 93.620      |  39.6M | 20.2M  | 18.3M  | 272.0 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.06993) [[PyTorch]](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002FDenseNet\u002Fblob\u002Fmaster\u002Fmodels\u002Fdensenet.lua) |\n| [MobileNet25](tensornets\u002Fmobilenets.py#L277)      |  224  | 51.582      | 75.792      |   0.9M | 0.5M   | 0.2M   | 34.46 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861) 
[[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet_v1.py) |\n| [MobileNet50](tensornets\u002Fmobilenets.py#L284)      |  224  | 64.292      | 85.624      |   2.6M | 1.3M   | 0.8M   | 52.46 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet_v1.py) |\n| [MobileNet75](tensornets\u002Fmobilenets.py#L291)      |  224  | 68.412      | 88.242      |   5.1M | 2.6M   | 1.8M   | 70.11 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet_v1.py) |\n| [MobileNet100](tensornets\u002Fmobilenets.py#L298)     |  224  | 70.424      | 89.504      |   8.4M | 4.3M   | 3.2M   | 83.41 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet_v1.py) |\n| [MobileNet35v2](tensornets\u002Fmobilenets.py#L305)    |  224  | 60.086      | 82.432      |   3.3M | 1.7M   | 0.4M   | 57.04 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04381) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v2.py) |\n| [MobileNet50v2](tensornets\u002Fmobilenets.py#L312)    |  224  | 65.194      | 86.062      |   3.9M | 2.0M   | 0.7M   | 64.35 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04381) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v2.py) |\n| [MobileNet75v2](tensornets\u002Fmobilenets.py#L319)    |  224  | 69.532      | 89.176      |   5.2M | 2.7M   | 
1.4M   | 88.68 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04381) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v2.py) |\n| [MobileNet100v2](tensornets\u002Fmobilenets.py#L326)   |  224  | 71.336      | 90.142      |   6.9M | 3.5M   | 2.3M   | 93.82 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04381) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v2.py) |\n| [MobileNet130v2](tensornets\u002Fmobilenets.py#L333)   |  224  | 74.680      | 92.122      |  10.7M | 5.4M   | 3.8M   | 130.4 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04381) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v2.py) |\n| [MobileNet140v2](tensornets\u002Fmobilenets.py#L340)   |  224  | 75.230      | 92.422      |  12.1M | 6.2M   | 4.4M   | 132.9 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04381) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v2.py) |\n| [MobileNet75v3large](tensornets\u002Fmobilenets.py#L347) |  224  | 73.754      | 91.618      |   7.9M | 4.0M   | 2.7M   | 79.73 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v3.py) |\n| [MobileNet100v3large](tensornets\u002Fmobilenets.py#L355) |  224  | 75.790      | 92.840      |  27.3M | 5.5M   | 4.2M   | 94.71 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v3.py) |\n| 
[MobileNet100v3largemini](tensornets\u002Fmobilenets.py#L363) |  224  | 72.706      | 90.930      |   7.8M | 3.9M   | 2.7M   | 70.57 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v3.py) |\n| [MobileNet75v3small](tensornets\u002Fmobilenets.py#L371) |  224  | 66.138      | 86.534      |   4.1M | 2.1M   | 1.0M   | 37.78 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v3.py) |\n| [MobileNet100v3small](tensornets\u002Fmobilenets.py#L379) |  224  | 68.318      | 87.942      |   5.1M | 2.6M   | 1.5M   | 42.00 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v3.py) |\n| [MobileNet100v3smallmini](tensornets\u002Fmobilenets.py#L387) |  224  | 63.440      | 84.646      |   4.1M | 2.1M   | 1.0M   | 29.65 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) [[tf-slim]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fmobilenet\u002Fmobilenet_v3.py) |\n| [EfficientNetB0](tensornets\u002Fefficientnets.py#L131)|  224  | 77.012      | 93.338      |  26.2M | 5.3M   | 4.0M   | 147.1 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) [[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [EfficientNetB1](tensornets\u002Fefficientnets.py#L139)|  240  | 79.040      | 94.284      |  15.4M | 7.9M   | 6.6M   | 217.3 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) 
[[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [EfficientNetB2](tensornets\u002Fefficientnets.py#L147)|  260  | 80.064      | 94.862      |  18.1M | 9.2M   | 7.8M   | 296.4 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) [[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [EfficientNetB3](tensornets\u002Fefficientnets.py#L155)|  300  | 81.384      | 95.586      |  24.2M | 12.3M  | 10.8M  | 482.7 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) [[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [EfficientNetB4](tensornets\u002Fefficientnets.py#L163)|  380  | 82.588      | 96.094      |  38.4M | 19.5M  | 17.7M  | 959.5 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) [[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [EfficientNetB5](tensornets\u002Fefficientnets.py#L171)|  456  | 83.496      | 96.590      |  60.4M | 30.6M  | 28.5M  | 1872  | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) [[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [EfficientNetB6](tensornets\u002Fefficientnets.py#L179)|  528  | 83.772      | 96.762      |  85.5M | 43.3M  | 41.0M  | 3503  | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) [[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [EfficientNetB7](tensornets\u002Fefficientnets.py#L187)|  600  | 84.088      | 96.740      | 131.9M | 
66.7M  | 64.1M  | 6149  | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.11946) [[TF-TPU]](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Fblob\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet\u002Fefficientnet_model.py) |\n| [SqueezeNet](tensornets\u002Fsqueezenets.py#L46)       |  224  | 54.434      | 78.040      |   2.5M | 1.2M   | 0.7M   | 71.43 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.07360) [[Caffe]](https:\u002F\u002Fgithub.com\u002FDeepScale\u002FSqueezeNet\u002Fblob\u002Fmaster\u002FSqueezeNet_v1.1\u002Ftrain_val.prototxt) |\n\n![summary](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftaehoonlee_tensornets_readme_5cdfe6c64d16.png)\n\n\n\n### 目标检测\n\n- 目标检测模型可以与任何网络结合使用，但仅能对具有预训练权重的模型计算 mAP。请注意：\n  * `YOLOv3VOC` 是由 taehoonlee 使用 [此配置文件](https:\u002F\u002Fgithub.com\u002Fpjreddie\u002Fdarknet\u002Fblob\u002Fmaster\u002Fcfg\u002Fyolov3-voc.cfg) 训练的，该配置文件已修改为 `max_batches=70000, steps=40000,60000`。\n  * `YOLOv2VOC` 等价于 `YOLOv2(inputs, Darknet19)`。\n  * `TinyYOLOv2VOC`: `TinyYOLOv2(inputs, TinyDarknet19)`。\n  * `FasterRCNN_ZF_VOC`: `FasterRCNN(inputs, ZF)`。\n  * `FasterRCNN_VGG16_VOC`: `FasterRCNN(inputs, VGG16, stem_out='conv5\u002F3')`。\n- mAP 值是通过 TensorNets 获得的，可能与原始值略有不同。测试输入尺寸采用论文中报告的最佳数值：\n  * `YOLOv3`、`YOLOv2`: 416x416\n  * `FasterRCNN`: min\\_shorter\\_side=600, max\\_longer\\_side=1000\n- 计算时间是在 NVIDIA Tesla P100（3584 核，16 GB 全局内存）上测量的，使用 cuDNN 6.0 和 CUDA 8.0。\n  * 大小：四舍五入后的参数数量。\n  * 速度：仅针对单张 416x416 或 608x608 图像的网络推理耗时（毫秒）。\n  * FPS：1000 \u002F 速度。\n\n| PASCAL VOC2007 测试                                                    | mAP    | 大小   | 速度 |  FPS  | 参考文献 |\n|------------------------------------------------------------------------|--------|--------|-------|-------|------------|\n| [YOLOv3VOC (416)](tensornets\u002Freferences\u002Fyolos.py#L177)                 | 0.7423 | 62M    | 24.09 | 41.51 | [[论文]](https:\u002F\u002Fpjreddie.com\u002Fmedia\u002Ffiles\u002Fpapers\u002FYOLOv3.pdf) 
[[darknet]](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) [[darkflow]](https:\u002F\u002Fgithub.com\u002Fthtrieu\u002Fdarkflow) |\n| [YOLOv2VOC (416)](tensornets\u002Freferences\u002Fyolos.py#L205)                 | 0.7320 | 51M    | 14.75 | 67.80 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08242) [[darknet]](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov2\u002F) [[darkflow]](https:\u002F\u002Fgithub.com\u002Fthtrieu\u002Fdarkflow) |\n| [TinyYOLOv2VOC (416)](tensornets\u002Freferences\u002Fyolos.py#L241)             | 0.5303 | 16M    | 6.534 | 153.0 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08242) [[darknet]](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov2\u002F) [[darkflow]](https:\u002F\u002Fgithub.com\u002Fthtrieu\u002Fdarkflow) |\n| [FasterRCNN\\_ZF\\_VOC](tensornets\u002Freferences\u002Frcnns.py#L150)             | 0.4466 | 59M    | 241.4 | 4.143 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.01497) [[caffe]](https:\u002F\u002Fgithub.com\u002Frbgirshick\u002Fpy-faster-rcnn) [[roi-pooling]](https:\u002F\u002Fgithub.com\u002Fdeepsense-ai\u002Froi-pooling) |\n| [FasterRCNN\\_VGG16\\_VOC](tensornets\u002Freferences\u002Frcnns.py#L186)          | 0.6872 | 137M   | 300.7 | 3.325 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.01497) [[caffe]](https:\u002F\u002Fgithub.com\u002Frbgirshick\u002Fpy-faster-rcnn) [[roi-pooling]](https:\u002F\u002Fgithub.com\u002Fdeepsense-ai\u002Froi-pooling) |\n\n| MS COCO val2014                                                        | mAP    | 大小   | 速度 |  FPS  | 参考文献 |\n|------------------------------------------------------------------------|--------|--------|-------|-------|------------|\n| [YOLOv3COCO (608)](tensornets\u002Freferences\u002Fyolos.py#L167)                | 0.6016 | 62M    | 60.66 | 16.49 | [[论文]](https:\u002F\u002Fpjreddie.com\u002Fmedia\u002Ffiles\u002Fpapers\u002FYOLOv3.pdf) 
[[darknet]](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) [[darkflow]](https:\u002F\u002Fgithub.com\u002Fthtrieu\u002Fdarkflow) |\n| [YOLOv3COCO (416)](tensornets\u002Freferences\u002Fyolos.py#L167)                | 0.6028 | 62M    | 40.23 | 24.85 | [[论文]](https:\u002F\u002Fpjreddie.com\u002Fmedia\u002Ffiles\u002Fpapers\u002FYOLOv3.pdf) [[darknet]](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) [[darkflow]](https:\u002F\u002Fgithub.com\u002Fthtrieu\u002Fdarkflow) |\n| [YOLOv2COCO (608)](tensornets\u002Freferences\u002Fyolos.py#L187)                | 0.5189 | 51M    | 45.88 | 21.80 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08242) [[darknet]](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov2\u002F) [[darkflow]](https:\u002F\u002Fgithub.com\u002Fthtrieu\u002Fdarkflow) |\n| [YOLOv2COCO (416)](tensornets\u002Freferences\u002Fyolos.py#L187)                | 0.4922 | 51M    | 21.66 | 46.17 | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08242) [[darknet]](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov2\u002F) [[darkflow]](https:\u002F\u002Fgithub.com\u002Fthtrieu\u002Fdarkflow) |\n\n## 新闻 📰\n\n- MobileNetv3 的六种变体发布，[2020年3月12日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fpull\u002F58)。\n- EfficientNet 的八种变体发布，[2020年1月28日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fpull\u002F56)。\n- TensorNets 现已支持 TF 2，[2020年1月23日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fpull\u002F55)。\n- MS COCO 工具发布，[2018年7月9日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fcommit\u002F4a34243891e6649b72b9c0b7114b8f3d51d1d779)。\n- PNASNetlarge 发布，[2018年5月12日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fcommit\u002Fe2e0f0f7791731d3b7dfa989cae569c15a22cdd6)。\n- MobileNetv2 的六种变体发布，[2018年5月5日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fcommit\u002Ffb429b6637f943875249dff50f4bc6220d9d50bf)。\n- YOLOv3 用于 COCO 和 
VOC 的版本发布，[2018年4月4日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fcommit\u002Fd8b2d8a54dc4b775a174035da63561028deb6624)。\n- YOLOv2 和 FasterRCNN 的通用目标检测模型发布，[2018年3月26日](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fcommit\u002F67915e659d2097a96c82ba7740b9e43a8c69858d)。\n\n## 未来工作 🔥\n\n- 添加训练代码。\n- 添加图像分类模型。\n  * [PolyNet: 非常深的网络中结构多样性的探索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05725v2)，CVPR 2017，Top-5 4.25%。\n  * [挤压与激励网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01507v2)，CVPR 2018，Top-5 3.79%。\n  * [GPipe: 使用流水线并行高效训练巨型神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.06965)，arXiv 2018，Top-5 3.0%。\n- 添加目标检测模型（MaskRCNN、SSD）。\n- 添加图像分割模型（FCN、UNet）。\n- 添加图像数据集（OpenImages）。\n- 添加风格迁移示例，这些示例可以与 TensorNets 中的任何网络结合使用。\n- 添加语音和语言模型，并配备代表性数据集（WaveNet、ByteNet）。","# TensorNets 快速上手指南\n\nTensorNets 是一个基于 TensorFlow 的高级网络定义库，提供带有预训练权重的常用模型。其核心设计理念是**函数式接口**（而非自定义类），所有模型均直接返回 `tf.Tensor`，便于无缝集成到现有的机器学习工作流中，同时保持代码的高可读性和可复现性。\n\n## 环境准备\n\n*   **操作系统**：Linux, macOS, Windows\n*   **Python 版本**：建议 Python 3.6+\n*   **TensorFlow 版本**：支持 `1.4.0` 至 `2.1.0+`\n    *   若使用 TensorFlow 2.x，需兼容 v1 行为（见下方代码示例）。\n*   **前置依赖**：\n    *   `numpy`\n    *   `Pillow` (用于图像加载)\n    *   `matplotlib` (可选，用于可视化)\n\n## 安装步骤\n\n你可以通过 PyPI 或 GitHub 直接安装。国内用户建议使用清华源或阿里源加速下载。\n\n**方式一：通过 PyPI 安装（推荐）**\n\n```bash\npip install tensornets -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**方式二：从 GitHub 源码安装（获取最新版本）**\n\n```bash\npip install git+https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets.git -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\nTensorNets 中的每个网络都是一个函数，接收 `tf.Tensor` 作为输入并返回 `tf.Tensor` 作为输出。以下以 **ResNet50** 为例展示最简使用流程。\n\n### 1. 
构建模型与加载图像\n\n```python\nimport tensorflow as tf\n# 如果使用 TF 2.x，请取消下面两行的注释以兼容 v1 行为\n# import tensorflow.compat.v1 as tf\n# tf.disable_v2_behavior()\n\nimport tensornets as nets\n\n# 定义输入占位符 [Batch, Height, Width, Channels]\ninputs = tf.placeholder(tf.float32, [None, 224, 224, 3])\n\n# 实例化模型 (直接返回 tf.Tensor)\nmodel = nets.ResNet50(inputs)\n\n# 加载并预处理图像 (返回 NHWC 格式的 numpy 数组)\nimg = nets.utils.load_img('cat.png', target_size=256, crop_size=224)\n```\n\n### 2. 运行推理与加载预训练权重\n\n在 Session 中加载预训练权重并进行预测。`preprocess()` 和 `pretrained()` 方法确保了结果与原论文一致。\n\n```python\nwith tf.Session() as sess:\n    # 预处理图像以匹配模型要求\n    img_processed = model.preprocess(img)\n    \n    # 加载预训练权重\n    sess.run(model.pretrained())\n    \n    # 执行推理\n    preds = sess.run(model, {inputs: img_processed})\n\n# 解码预测结果 (输出概率最高的类别)\nprint(nets.utils.decode_predictions(preds, top=2)[0])\n# 示例输出：[(u'n02124075', u'Egyptian_cat', 0.28...), (u'n02127052', u'lynx', 0.16...)]\n```\n\n### 3. 提取中间层特征\n\n你可以轻松获取中间层（middles）或所有输出端点（outputs）的张量值，便于迁移学习或特征分析。\n\n```python\nwith tf.Session() as sess:\n    img_processed = model.preprocess(img)\n    sess.run(model.pretrained())\n    \n    # 获取代表性中间层输出\n    middles = sess.run(model.middles(), {inputs: img_processed})\n    \n    # 打印中间层结构信息\n    model.print_middles()\n    \n    # 验证形状 (例如第一个中间层)\n    print(\"First middle layer shape:\", middles[0].shape)\n```\n\n### 4. 保存与加载权重\n\n支持将训练后的权重保存为 `.npz` 文件以便后续部署。\n\n```python\n# 保存权重\nwith tf.Session() as sess:\n    model.init()\n    # ... 执行训练过程 ...\n    model.save('my_weights.npz')\n\n# 加载权重\nwith tf.Session() as sess:\n    model.load('my_weights.npz')\n    # ... 
执行部署或推理 ...\n```","某计算机视觉团队正在构建一个工业缺陷检测系统，需要快速集成 ResNet50 等预训练模型作为特征提取器，以便在少量样本上进行微调。\n\n### 没有 tensornets 时\n- **代码冗余严重**：复现官方 Inception 或 ResNet 架构需编写上千行嵌套代码，结构复杂且难以阅读维护。\n- **权重加载繁琐**：手动匹配预训练权重与模型层名称极易出错，常因版本不兼容导致加载失败。\n- **中间层获取困难**：若想提取特定卷积层的特征图，必须修改原模型类定义或重新构建计算图，流程断裂。\n- **部署灵活性差**：模型被封装在自定义类中，难以无缝插入团队现有的 TensorFlow 工作流。\n\n### 使用 tensornets 后\n- **架构即函数**：ResNet50 等模型仅需一行函数调用即可生成，代码量缩减至几百行，逻辑清晰易读。\n- **一键加载权重**：调用 `model.pretrained()` 即可自动对齐并加载预训练参数，完美复现原始论文结果。\n- **灵活提取特征**：通过 `model.middles()` 和 `model.outputs()` 可直接获取任意中间层张量，无需修改模型结构。\n- **原生无缝集成**：模型输入输出均为标准 `tf.Tensor`，能直接嵌入现有数据处理流水线，无额外适配成本。\n\ntensornets 通过将复杂的预训练模型转化为简洁的函数式接口，极大降低了高性能视觉模型的研发门槛与集成成本。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftaehoonlee_tensornets_f39452bc.png","taehoonlee","Taehoon Lee","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ftaehoonlee_64b99b08.jpg","Privacy Preserving Machine Learning 🇰🇷","@desilo","Seoul, South Korea","me@taehoonlee.com",null,"https:\u002F\u002Ftaehoonlee.com","https:\u002F\u002Fgithub.com\u002Ftaehoonlee",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,1000,179,"2026-04-02T08:36:01","MIT","未说明","非必需。若需性能测试或加速，文档提及测试环境为 NVIDIA Tesla P100 (16GB 显存)，搭配 CUDA 8.0 和 cuDNN 6.0。FasterRCNN 模型需要额外编译 roi_pooling 组件。",{"notes":97,"python":94,"dependencies":98},"该工具主要基于 TensorFlow 1.x 风格开发（使用 tf.contrib.layers），在 TensorFlow 2.x 中运行可能需要禁用 v2 行为（tf.disable_v2_behavior()）或使用兼容模式。安装 FasterRCNN 时需手动克隆并编译第三方 roi_pooling 库。预训练权重会在首次运行时自动下载。",[99,100,101],"tensorflow>=1.4.0, 
\u003C=2.1.0","numpy","matplotlib",[13,14,26],[104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123],"tensorflow","model","zoo","deep-learning","object-detection","yolo","yolov2","yolov3","faster-rcnn","resnet","inception","nasnet","pnasnet","vgg","densenet","mobilenet","mobilenetv2","mobilenetv3","efficientnet","squeezenet","2026-03-27T02:49:30.150509","2026-04-06T08:45:31.972829",[127,132,137,141,146,151,156,161],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},12773,"遇到 'NoneType' object is not callable 错误（特别是在 yolov2_box 调用时）该如何解决？","这通常是因为 Cython 扩展未正确编译。解决方法是：首先更新 setuptools（使用命令 `pip show setuptools` 查看版本），然后运行 `python setup.py build_ext --inplace` 重新构建扩展。注意，不同 Anaconda 环境中的 setuptools 版本可能不同，需确保在当前环境中执行。","https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fissues\u002F18",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},12774,"如何在自定义类别数量（如 545 类）上训练 YOLOv2 模型？","虽然官方曾提供基于 darknet 和 darkflow 的训练代码草稿，但发现收敛效果不佳（mAP 仅 5%）。维护者建议参考 PyTorch 实现的 YOLOv2 代码进行修改，或者等待库的后续更新以支持更稳定的自定义类别训练功能。目前直接使用内置训练代码可能无法获得理想结果。","https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fissues\u002F13",{"id":138,"question_zh":139,"answer_zh":140,"source_url":136},12775,"训练完成后如何保存权重并在推理时加载？","虽然 Issue 中提出了该问题，但维护者指出当前的训练代码处于草稿阶段且收敛存在问题，因此尚未完善标准的保存\u002F加载流程文档。建议关注仓库的后续更新或参考 darkflow\u002Fdarknet 的权重处理方式，待训练代码稳定后会有明确指引。",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},12776,"在使用 VGG16 加载预训练权重时，如果输入尺寸非默认值导致形状不匹配（shape mismatch）该怎么办？","当输入尺寸改变导致全连接层（fc）形状不匹配时，库的设计原则是抛出错误而不是静默忽略。维护者认为静默跳过中间层（如 fc6）会导致错误且不安全。用户需要确保输入尺寸与预训练权重匹配，或者自行修改网络结构以适应新尺寸，目前不支持通过参数自动忽略形状不匹配。","https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fissues\u002F34",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},12777,"YOLOv3 检测到的边界框比预期小很多，或者出现重复检测，这是什么原因？","这通常是非极大值抑制（NMS）函数的问题。维护者已确认并修复了 NMS 相关的逻辑错误。如果遇到此类现象，请确保升级到包含 NMS 
修复的最新版本代码。","https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fissues\u002F17",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},12778,"在 Windows 上使用 pip install tensornets 安装失败，报错涉及 build wheel 或非法参数值怎么办？","Windows 下安装失败通常是因为缺少编译依赖或构建后端配置问题。错误日志中出现的 'illegal value' 往往与底层线性代数库（如 LAPACK）或编译器环境有关。建议检查是否安装了正确的 Visual C++ Build Tools，并确保 Python 环境与 wheel 包兼容。如果源码安装失败，可尝试寻找预编译的 wheel 文件或检查项目是否指定了构建后端（pyproject.toml）。","https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fissues\u002F24",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},12779,"安装过程中遇到 pybluez 库编译失败（特别是在 Anaconda Python 2.7 环境下）如何解决？","pybluez 的安装问题与 tensornets 核心功能无关，它是特定于蓝牙功能的依赖项。如果在非 Linux 环境（如 Windows）或使用 Anaconda 时遇到 pybluez 编译错误，建议直接前往 pybluez 的官方仓库（https:\u002F\u002Fgithub.com\u002Fpybluez\u002Fpybluez）寻求帮助，或者在不使用蓝牙功能的情况下尝试绕过该依赖。","https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fissues\u002F37",{"id":162,"question_zh":163,"answer_zh":164,"source_url":136},12780,"是否支持在训练期间冻结某些层（例如只训练最后几层）？","类似于 Darknet 中的 `stopbackward` 功能，tensornets 当时的实现尚未完善对冻结层的支持。维护者正在重构训练代码，建议参考其他成熟的 PyTorch 或 TensorFlow 实现来手动实现梯度截断，或等待库的正式更新。",[166,171,176,181,186,191,196,201,206,211,216,221],{"id":167,"version":168,"summary_zh":169,"released_at":170},71433,"0.4.6","## 改进之处\n\n- 修复在未安装所需包时出现的错误 (#60)。\n\n## API 变更\n\n- 更新了 `init`、`load` 和 `save` 方法（详见 [示例](https:\u002F\u002Fgithub.com\u002Ftaehoonlee\u002Ftensornets\u002Fblob\u002Fmaster\u002Ftests\u002Fbasics_test.py#L225-L270)）。\n\n## 破坏性变更\n\n无。","2020-03-31T04:38:27",{"id":172,"version":173,"summary_zh":174,"released_at":175},71434,"0.4.5","## 改进之处\n\n无。\n\n## API 变更\n\n- 添加 `init`、`load` 和 `save` 方法。\n- 添加带有预训练权重的 MobileNetv3 (#58)。\n- 在 `middles`、`outputs` 和 `weights` 中添加 `names` 参数。\n\n## 破坏性变更\n\n无。","2020-03-13T06:01:09",{"id":177,"version":178,"summary_zh":179,"released_at":180},71435,"0.4.3","## 改进之处\n\n无。\n\n## API 变更\n\n- 添加带有预训练权重的 EfficientNet (#56)。\n\n## 
破坏性变更\n\n无。","2020-01-29T00:18:39",{"id":182,"version":183,"summary_zh":184,"released_at":185},71436,"0.4.2","## 需要改进的地方\n\n- 支持 TensorFlow 2 (#55)。\n- 禁用 TensorFlow 1.14 和 1.15 的弃用警告。\n- 单元测试和持续集成的改进。\n- 更新性能表格，使其符合最新的报告格式。\n\n## API 变更\n\n- 添加 `reduce_max` 和 `swish`。\n- 在 `set_args` 中添加 `weights_regularizer` 参数。\n\n## 不兼容的变更\n\n无。","2020-01-23T08:41:11",{"id":187,"version":188,"summary_zh":189,"released_at":190},71437,"0.4.1","## 改进点\n\n- 提升代码可读性。\n- 修复了 `pretrained_initializer` 在最新版 NumPy 下的 bug。\n- 改进单元测试和 CI 流程。\n- 为所有 ImageNet 模型添加评估代码（参见 #49）。\n\n## API 变更\n\n- 添加 `logits`，用于直接访问 logits 张量。\n- 添加 `middles()`，等同于 `get_middles()`。\n- 添加 `outputs()`，等同于 `get_outputs()`。\n- 添加 `weights()`，等同于 `get_weights()`。\n- 添加 `summary()`，等同于 `print_summary()`。\n\n## 破坏性变更\n\n无。","2019-10-13T11:23:52",{"id":192,"version":193,"summary_zh":194,"released_at":195},71438,"0.4.0","## 改进点\n\n- 可复现性改进。\n  - 修订 Inception 3 的预训练权重（top-5 准确率提升了 0.038%）。\n  - 在 Inception2 中添加缺失的 ReLU 激活函数（top-5 准确率提升了 0.468%）。\n- 修复 `variable_scope` 和 `name_scope` 中的 bug（参见 #43）。\n- 单元测试与持续集成改进。\n- 文档改进。\n\n## API 变更\n\n无。\n\n## 破坏性变更\n\n无。","2019-03-08T11:13:15",{"id":197,"version":198,"summary_zh":199,"released_at":200},71439,"0.3.6","这是一个小型版本更新，解决了 Windows 上的安装问题。","2018-11-10T12:38:16",{"id":202,"version":203,"summary_zh":204,"released_at":205},71440,"0.3.5","## 改进点\n\n- 可复现性改进。\n  - 在DenseNet中添加缺失的ReLU（Top-5准确率提升了2.58%-3.2%）。\n  - 修订ResNet中`pool1`的零填充方式（Top-5准确率提升了0.6%-1.16%）。\n- 单元测试\u002FCI改进。\n  - 如果仅修改了README文件，则禁用相关测试。\n- 新API：MS COCO工具库。\n\n## API变更\n\n- 添加MS COCO工具库。\n- 添加VOC数据集的训练数据加载器。\n- 添加YOLOv2的训练代码。\n\n## 破坏性变更\n\n无。","2018-09-01T11:13:07",{"id":207,"version":208,"summary_zh":209,"released_at":210},71441,"0.3.4","请注意，以下说明包含了 0.3.3 版本的变更。此外，由于 PyPI 冲突，0.3.2 版本并不存在。\n\n## 改进点\n\n- 修复了若干 bug。\n- 新增 API：图像分类和目标检测模型。\n- 单元测试与持续集成（CI）的改进。\n- 文档的优化。\n- 对 Float16 的支持改进（参见 #14）。\n- Python 3 兼容性的提升。\n\n## API 变更\n\n- 增加图像分类模型（`Darknet19`、`MobileNet35v2`、…、`MobileNet140v2` 和 `PNASNetlarge`）。\n- 
增加目标检测模型引用（`YOLOv3VOC` 和 `YOLOv3COCO`）。\n- 增加操作（`local_flatten` 和 `upsample`）。\n\n## 破坏性变更\n\n无。","2018-05-12T08:14:31",{"id":212,"version":213,"summary_zh":214,"released_at":215},71442,"0.3.1","这是一个小型版本更新，改进了依赖项导入，并解决了 PyPI 依赖冲突问题。","2018-03-27T06:08:40",{"id":217,"version":218,"summary_zh":219,"released_at":220},71443,"0.3.0","## 改进之处\r\n\r\n- 修复若干 bug。\r\n- 新增 API：目标检测模型与 PASCAL VOC 工具。\r\n- 单元测试与持续集成改进。\r\n- 文档改进。\r\n\r\n## API 变更\r\n\r\n- 添加通用目标检测模型（`YOLOv2`、`TinyYOLOv2` 和 `FasterRCNN`）。\r\n- 添加参考目标检测模型（`YOLOv2VOC`、`TinyYOLOv2VOC`、`FasterRCNN_ZF_VOC` 和 `FasterRCNN_VGG16_VOC`）。\r\n- 添加 PASCAL VOC 工具。\r\n- 添加操作（`stack`：`tf.stack`，`gather`：`tf.gather`，`srn`：一种空间局部响应归一化）。\r\n- 修订 `nets.pretrained`，使其执行各模型自身的 `pretrained`。\r\n\r\n## 破坏性变更\r\n\r\n无。","2018-03-27T04:55:55",{"id":222,"version":223,"summary_zh":82,"released_at":224},71444,"0.2.0","2018-02-16T04:50:30"]