[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-qubvel--segmentation_models":3,"tool-qubvel--segmentation_models":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":78,"owner_email":78,"owner_twitter":80,"owner_website":78,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":23,"env_os":91,"env_gpu":92,"env_ram":91,"env_deps":93,"category_tags":101,"github_topics":102,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":155},2325,"qubvel\u002Fsegmentation_models","segmentation_models","Segmentation models with pretrained backbones. Keras and TensorFlow Keras.","segmentation_models 是一个基于 Keras 和 TensorFlow 构建的 Python 库，专为图像分割任务设计。它旨在解决开发者在搭建深度学习模型时重复造轮子的痛点，让用户无需从零编写复杂的网络结构，仅需两行代码即可快速创建高性能的分割模型。\n\n该库非常适合人工智能开发者、研究人员以及需要处理医学影像、卫星地图或自动驾驶视觉数据的技术团队使用。其核心亮点在于提供了包括传奇架构 U-Net 在内的四种主流模型结构，并支持多达 25 种预训练骨干网络（如 ResNet、EfficientNet 等）。这些骨干网络均加载了 ImageNet 预训练权重，能显著加速模型收敛并提升最终精度。\n\n此外，segmentation_models 还内置了多种实用的分割专用损失函数（如 Jaccard、Dice、Focal Loss）和评估指标，灵活支持二分类及多分类场景。无论是进行简单的原型验证，还是部署复杂的生产级应用，它都能通过高度封装的 API 降低技术门槛，让使用者更专注于业务逻辑与数据本身。",".. 
raw:: html\n\n    \u003Cp align=\"center\">\n      \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqubvel_segmentation_models_readme_93d7f6ea6a21.png\">\n      \u003Cb>Python library with Neural Networks for Image Segmentation based on \u003Ca href=https:\u002F\u002Fwww.keras.io>Keras\u003C\u002Fa> and \u003Ca href=https:\u002F\u002Fwww.tensorflow.org>TensorFlow\u003C\u002Fa>.\n      \u003C\u002Fb>\n      \u003Cbr>\u003C\u002Fbr>\n\n      \u003Ca href=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fsegmentation-models\" alt=\"PyPI\">\n        \u003Cimg src=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fsegmentation-models.svg\" \u002F>\u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Fsegmentation-models.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest\" alt=\"Documentation\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqubvel_segmentation_models_readme_13d664e1afd7.png\" \u002F>\u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Ftravis-ci.com\u002Fqubvel\u002Fsegmentation_models\" alt=\"Build Status\">\n        \u003Cimg src=\"https:\u002F\u002Ftravis-ci.com\u002Fqubvel\u002Fsegmentation_models.svg?branch=master\" \u002F>\u003C\u002Fa>\n    \u003C\u002Fp>\n\n\n**The main features** of this library are:\n\n-  High level API (just two lines of code to create model for segmentation)\n-  **4** models architectures for binary and multi-class image segmentation\n   (including legendary **Unet**)\n-  **25** available backbones for each architecture\n-  All backbones have **pre-trained** weights for faster and better\n   convergence\n- Helpful segmentation losses (Jaccard, Dice, Focal) and metrics (IoU, F-score)\n\n**Important note**\n\n    Some models of version ``1.*`` are not compatible with previously trained models,\n    if you have such models and want to load them - roll back with:\n\n    $ pip install -U segmentation-models==0.2.1\n\nTable of Contents\n~~~~~~~~~~~~~~~~~\n - `Quick start`_\n - `Simple training pipeline`_\n - `Examples`_\n - `Models and Backbones`_\n - `Installation`_\n - `Documentation`_\n - `Change log`_\n - `Citing`_\n - `License`_\n \nQuick start\n~~~~~~~~~~~\nLibrary is build to work together with Keras and TensorFlow Keras frameworks\n\n.. code:: python\n\n    import segmentation_models as sm\n    # Segmentation Models: using `keras` framework.\n\nBy default it tries to import ``keras``, if it is not installed, it will try to start with ``tensorflow.keras`` framework.\nThere are several ways to choose framework:\n\n- Provide environment variable ``SM_FRAMEWORK=keras`` \u002F ``SM_FRAMEWORK=tf.keras`` before import ``segmentation_models``\n- Change framework ``sm.set_framework('keras')`` \u002F  ``sm.set_framework('tf.keras')``\n\nYou can also specify what kind of ``image_data_format`` to use, segmentation-models works with both: ``channels_last`` and ``channels_first``.\nThis can be useful for further model conversion to Nvidia TensorRT format or optimizing model for cpu\u002Fgpu computations.\n\n.. code:: python\n\n    import keras\n    # or from tensorflow import keras\n\n    keras.backend.set_image_data_format('channels_last')\n    # or keras.backend.set_image_data_format('channels_first')\n\nCreated segmentation model is just an instance of Keras Model, which can be build as easy as:\n\n.. 
code:: python\n    \n    model = sm.Unet()\n    \nDepending on the task, you can change the network architecture by choosing backbones with fewer or more parameters and use pretrained weights to initialize it:\n\n.. code:: python\n\n    model = sm.Unet('resnet34', encoder_weights='imagenet')\n\nChange number of output classes in the model (choose your case):\n\n.. code:: python\n    \n    # binary segmentation (these parameters are the default when you call Unet('resnet34'))\n    model = sm.Unet('resnet34', classes=1, activation='sigmoid')\n    \n.. code:: python\n    \n    # multiclass segmentation with non-overlapping class masks (your classes + background)\n    model = sm.Unet('resnet34', classes=3, activation='softmax')\n    \n.. code:: python\n    \n    # multiclass segmentation with independent overlapping\u002Fnon-overlapping class masks\n    model = sm.Unet('resnet34', classes=3, activation='sigmoid')\n    \n    \nChange input shape of the model:\n\n.. code:: python\n    \n    # if you set input channels not equal to 3, you have to set encoder_weights=None\n    # how to handle such a case with encoder_weights='imagenet' is described in the docs\n    model = sm.Unet('resnet34', input_shape=(None, None, 6), encoder_weights=None)\n   \nSimple training pipeline\n~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. code:: python\n\n    import segmentation_models as sm\n\n    BACKBONE = 'resnet34'\n    preprocess_input = sm.get_preprocessing(BACKBONE)\n\n    # load your data\n    x_train, y_train, x_val, y_val = load_data(...)\n\n    # preprocess input\n    x_train = preprocess_input(x_train)\n    x_val = preprocess_input(x_val)\n\n    # define model\n    model = sm.Unet(BACKBONE, encoder_weights='imagenet')\n    model.compile(\n        'Adam',\n        loss=sm.losses.bce_jaccard_loss,\n        metrics=[sm.metrics.iou_score],\n    )\n\n    # fit model\n    # if you use a data generator, use model.fit_generator(...) instead of model.fit(...)\n    # more about `fit_generator` here: https:\u002F\u002Fkeras.io\u002Fmodels\u002Fsequential\u002F#fit_generator\n    model.fit(\n       x=x_train,\n       y=y_train,\n       batch_size=16,\n       epochs=100,\n       validation_data=(x_val, y_val),\n    )\n\nSame manipulations can be done with ``Linknet``, ``PSPNet`` and ``FPN``. 
For more detailed information about models API and  use cases `Read the Docs \u003Chttps:\u002F\u002Fsegmentation-models.readthedocs.io\u002Fen\u002Flatest\u002F>`__.\n\nExamples\n~~~~~~~~\nModels training examples:\n - [Jupyter Notebook] Binary segmentation (`cars`) on CamVid dataset `here \u003Chttps:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fexamples\u002Fbinary%20segmentation%20(camvid).ipynb>`__.\n - [Jupyter Notebook] Multi-class segmentation (`cars`, `pedestrians`) on CamVid dataset `here \u003Chttps:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fexamples\u002Fmulticlass%20segmentation%20(camvid).ipynb>`__.\n\nModels and Backbones\n~~~~~~~~~~~~~~~~~~~~\n**Models**\n\n-  `Unet \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597>`__\n-  `FPN \u003Chttp:\u002F\u002Fpresentations.cocodataset.org\u002FCOCO17-Stuff-FAIR.pdf>`__\n-  `Linknet \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03718>`__\n-  `PSPNet \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01105>`__\n\n============= ==============\nUnet          Linknet\n============= ==============\n|unet_image|  |linknet_image|\n============= ==============\n\n============= ==============\nPSPNet        FPN\n============= ==============\n|psp_image|   |fpn_image|\n============= ==============\n\n.. _Unet: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Freadme\u002FLICENSE\n.. _Linknet: https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03718\n.. _PSPNet: https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01105\n.. _FPN: http:\u002F\u002Fpresentations.cocodataset.org\u002FCOCO17-Stuff-FAIR.pdf\n\n.. |unet_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Funet.png\n.. |linknet_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Flinknet.png\n.. |psp_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Fpspnet.png\n.. |fpn_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Ffpn.png\n\n**Backbones**\n\n.. table:: \n\n    =============  ===== \n    Type           Names\n    =============  =====\n    VGG            ``'vgg16' 'vgg19'``\n    ResNet         ``'resnet18' 'resnet34' 'resnet50' 'resnet101' 'resnet152'``\n    SE-ResNet      ``'seresnet18' 'seresnet34' 'seresnet50' 'seresnet101' 'seresnet152'``\n    ResNeXt        ``'resnext50' 'resnext101'``\n    SE-ResNeXt     ``'seresnext50' 'seresnext101'``\n    SENet154       ``'senet154'``\n    DenseNet       ``'densenet121' 'densenet169' 'densenet201'`` \n    Inception      ``'inceptionv3' 'inceptionresnetv2'``\n    MobileNet      ``'mobilenet' 'mobilenetv2'``\n    EfficientNet   ``'efficientnetb0' 'efficientnetb1' 'efficientnetb2' 'efficientnetb3' 'efficientnetb4' 'efficientnetb5' efficientnetb6' efficientnetb7'``\n    =============  =====\n\n.. epigraph::\n    All backbones have weights trained on 2012 ILSVRC ImageNet dataset (``encoder_weights='imagenet'``). \n\n\nInstallation\n~~~~~~~~~~~~\n\n**Requirements**\n\n1) python 3\n2) keras >= 2.2.0 or tensorflow >= 1.13\n3) keras-applications >= 1.0.7, \u003C=1.0.8\n4) image-classifiers == 1.0.*\n5) efficientnet == 1.0.*\n\n**PyPI stable package**\n\n.. code:: bash\n\n    $ pip install -U segmentation-models\n\n**PyPI latest package**\n\n.. 
code:: bash\n\n    $ pip install -U --pre segmentation-models\n\n**Source latest version**\n\n.. code:: bash\n\n    $ pip install git+https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\n    \nDocumentation\n~~~~~~~~~~~~~\nLatest **documentation** is avaliable on `Read the\nDocs \u003Chttps:\u002F\u002Fsegmentation-models.readthedocs.io\u002Fen\u002Flatest\u002F>`__\n\nChange Log\n~~~~~~~~~~\nTo see important changes between versions look at CHANGELOG.md_\n\nCiting\n~~~~~~~~\n\n.. code::\n\n    @misc{Yakubovskiy:2019,\n      Author = {Pavel Iakubovskii},\n      Title = {Segmentation Models},\n      Year = {2019},\n      Publisher = {GitHub},\n      Journal = {GitHub repository},\n      Howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models}}\n    } \n\nLicense\n~~~~~~~\nProject is distributed under `MIT Licence`_.\n\n.. _CHANGELOG.md: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002FCHANGELOG.md\n.. _`MIT Licence`: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002FLICENSE\n",".. raw:: html\n\n    \u003Cp align=\"center\">\n      \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqubvel_segmentation_models_readme_93d7f6ea6a21.png\">\n      \u003Cb>基于 \u003Ca href=https:\u002F\u002Fwww.keras.io>Keras\u003C\u002Fa> 和 \u003Ca href=https:\u002F\u002Fwww.tensorflow.org>TensorFlow\u003C\u002Fa> 的用于图像分割的神经网络 Python 库。\u003C\u002Fb>\n      \u003Cbr>\u003C\u002Fbr>\n\n      \u003Ca href=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fsegmentation-models\" alt=\"PyPI\">\n        \u003Cimg src=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fsegmentation-models.svg\" \u002F>\u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Fsegmentation-models.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest\" alt=\"Documentation\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqubvel_segmentation_models_readme_13d664e1afd7.png\" \u002F>\u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Ftravis-ci.com\u002Fqubvel\u002Fsegmentation_models\" alt=\"Build Status\">\n        \u003Cimg src=\"https:\u002F\u002Ftravis-ci.com\u002Fqubvel\u002Fsegmentation_models.svg?branch=master\" \u002F>\u003C\u002Fa>\n    \u003C\u002Fp>\n\n\n**该库的主要特性** 包括：\n\n- 高级 API（只需两行代码即可创建分割模型）\n- 适用于二分类和多分类图像分割的 **4** 种模型架构\n  （包括经典的 **Unet**）\n- 每种架构支持 **25** 种可用的骨干网络\n- 所有骨干网络均提供 **预训练** 权重，以加快收敛并提升性能\n- 有用的分割损失函数（Jaccard、Dice、Focal）和评估指标（IoU、F-score）\n\n**重要提示**\n\n    版本 ``1.*`` 中的部分模型与先前训练的模型不兼容。如果您拥有此类模型并希望加载它们，请回退到以下版本：\n\n    $ pip install -U segmentation-models==0.2.1\n\n目录\n~~~~~~~~~~~~~~~~~\n - `快速入门`_\n - `简单训练流程`_\n - `示例`_\n - `模型与骨干网络`_\n - `安装`_\n - `文档`_\n - `变更日志`_\n - `引用`_\n - `许可证`_\n\n快速入门\n~~~~~~~~~~~\n该库专为与 Keras 和 TensorFlow Keras 框架协同工作而设计。\n\n.. code:: python\n\n    import segmentation_models as sm\n    # Segmentation Models：使用 `keras` 框架。\n\n默认情况下，它会尝试导入 ``keras``；如果未安装，则会尝试使用 ``tensorflow.keras`` 框架。您可以通过以下方式选择框架：\n\n- 在导入 ``segmentation_models`` 之前设置环境变量 ``SM_FRAMEWORK=keras`` \u002F ``SM_FRAMEWORK=tf.keras``\n- 调用 ``sm.set_framework('keras')`` \u002F ``sm.set_framework('tf.keras')`` 更改框架\n\n您还可以指定使用的 ``image_data_format`` 类型，Segmentation Models 支持两种格式：``channels_last`` 和 ``channels_first``。\n这对于后续将模型转换为 Nvidia TensorRT 格式或针对 CPU\u002FGPU 计算优化模型非常有用。\n\n.. 
code:: python\n\n    import keras\n    # 或 from tensorflow import keras\n\n    keras.backend.set_image_data_format('channels_last')\n    # 或 keras.backend.set_image_data_format('channels_first')\n\n创建的分割模型只是 Keras Model 的一个实例，其构建方式非常简单：\n\n.. code:: python\n    \n    model = sm.Unet()\n    \n根据任务需求，您可以选择参数数量不同的骨干网络来改变网络架构，并使用预训练权重进行初始化：\n\n.. code:: python\n\n    model = sm.Unet('resnet34', encoder_weights='imagenet')\n\n更改模型的输出类别数（根据您的情况选择）：\n\n.. code:: python\n    \n    # 二分类分割（调用 Unet('resnet34') 时，默认参数即为此配置）\n    model = sm.Unet('resnet34', classes=1, activation='sigmoid')\n    \n.. code:: python\n    \n    # 多分类分割，类别掩码互不重叠（包含背景在内的所有类别）\n    model = sm.Unet('resnet34', classes=3, activation='softmax')\n    \n.. code:: python\n    \n    # 多分类分割，类别掩码可独立重叠\u002F不重叠\n    model = sm.Unet('resnet34', classes=3, activation='sigmoid')\n    \n    \n更改模型的输入形状：\n\n.. code:: python\n    \n    # 如果输入通道数不等于 3，必须将 encoder_weights 设置为 None\n    # 如何在 encoder_weights='imagenet' 的情况下处理这种情况，请参阅文档\n    model = Unet('resnet34', input_shape=(None, None, 6), encoder_weights=None)\n   \n简单训练流程\n~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. code:: python\n\n    import segmentation_models as sm\n\n    BACKBONE = 'resnet34'\n    preprocess_input = sm.get_preprocessing(BACKBONE)\n\n    # 加载您的数据\n    x_train, y_train, x_val, y_val = load_data(...)\n\n    # 预处理输入数据\n    x_train = preprocess_input(x_train)\n    x_val = preprocess_input(x_val)\n\n    # 定义模型\n    model = sm.Unet(BACKBONE, encoder_weights='imagenet')\n    model.compile(\n        'Adam',\n        loss=sm.losses.bce_jaccard_loss,\n        metrics=[sm.metrics.iou_score],\n    )\n\n    # 训练模型\n    # 如果使用数据生成器，请使用 model.fit_generator(...) 代替 model.fit(...)\n    # 关于 `fit_generator` 的更多信息请参见：https:\u002F\u002Fkeras.io\u002Fmodels\u002Fsequential\u002F#fit_generator\n    model.fit(\n       x=x_train,\n       y=y_train,\n       batch_size=16,\n       epochs=100,\n       validation_data=(x_val, y_val),\n    )\n\n同样的操作也可以应用于 ``Linknet``、``PSPNet`` 和 ``FPN``。有关模型 API 和应用场景的更多详细信息，请参阅 `Read the Docs \u003Chttps:\u002F\u002Fsegmentation-models.readthedocs.io\u002Fen\u002Flatest\u002F>`__。\n\n示例\n~~~~~~~~\n模型训练示例：\n - [Jupyter Notebook] CamVid 数据集上的二分类分割（车辆）`在此 \u003Chttps:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fexamples\u002Fbinary%20segmentation%20(camvid).ipynb>`__。\n - [Jupyter Notebook] CamVid 数据集上的多分类分割（车辆、行人）`在此 \u003Chttps:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fexamples\u002Fmulticlass%20segmentation%20(camvid).ipynb>`__。\n\n模型与骨干网络\n~~~~~~~~~~~~~~~~~~~~\n**模型**\n\n-  `Unet \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597>`__\n-  `FPN \u003Chttp:\u002F\u002Fpresentations.cocodataset.org\u002FCOCO17-Stuff-FAIR.pdf>`__\n-  `Linknet \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03718>`__\n-  `PSPNet \u003Chttps:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01105>`__\n\n============= ==============\nUnet          Linknet\n============= ==============\n|unet_image|  |linknet_image|\n============= ==============\n\n============= ==============\nPSPNet        FPN\n============= ==============\n|psp_image|   |fpn_image|\n============= ==============\n\n.. _Unet: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Freadme\u002FLICENSE\n.. _Linknet: https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03718\n.. _PSPNet: https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01105\n.. 
_FPN: http:\u002F\u002Fpresentations.cocodataset.org\u002FCOCO17-Stuff-FAIR.pdf\n\n.. |unet_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Funet.png\n.. |linknet_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Flinknet.png\n.. |psp_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Fpspnet.png\n.. |fpn_image| image:: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002Fimages\u002Ffpn.png\n\n**骨干网络**\n\n.. table::\n\n    =============  =====\n    类型           名称\n    =============  =====\n    VGG            ``'vgg16' 'vgg19'``\n    ResNet         ``'resnet18' 'resnet34' 'resnet50' 'resnet101' 'resnet152'``\n    SE-ResNet      ``'seresnet18' 'seresnet34' 'seresnet50' 'seresnet101' 'seresnet152'``\n    ResNeXt        ``'resnext50' 'resnext101'``\n    SE-ResNeXt     ``'seresnext50' 'seresnext101'``\n    SENet154       ``'senet154'``\n    DenseNet       ``'densenet121' 'densenet169' 'densenet201'`` \n    Inception      ``'inceptionv3' 'inceptionresnetv2'``\n    MobileNet      ``'mobilenet' 'mobilenetv2'``\n    EfficientNet   ``'efficientnetb0' 'efficientnetb1' 'efficientnetb2' 'efficientnetb3' 'efficientnetb4' 'efficientnetb5' 'efficientnetb6' 'efficientnetb7'``\n    =============  =====\n\n.. epigraph::\n    所有骨干网络均使用在 2012 年 ILSVRC ImageNet 数据集上预训练的权重（``encoder_weights='imagenet'``）。\n\n\n安装\n~~~~~~~\n\n**要求**\n\n1) python 3\n2) keras >= 2.2.0 或 tensorflow >= 1.13\n3) keras-applications >= 1.0.7, \u003C=1.0.8\n4) image-classifiers == 1.0.*\n5) efficientnet == 1.0.*\n\n**PyPI 稳定版包**\n\n.. code:: bash\n\n    $ pip install -U segmentation-models\n\n**PyPI 最新版本包**\n\n.. code:: bash\n\n    $ pip install -U --pre segmentation-models\n\n**源码最新版本**\n\n.. code:: bash\n\n    $ pip install git+https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\n    \n文档\n~~~~~~~\n最新的 **文档** 可以在 `Read the Docs \u003Chttps:\u002F\u002Fsegmentation-models.readthedocs.io\u002Fen\u002Flatest\u002F>`__ 上找到。\n\n变更日志\n~~~~~~~~~~\n要查看各版本之间的重大变更，请参阅 CHANGELOG.md_\n\n引用\n~~~~~~~~\n\n.. code::\n\n    @misc{Yakubovskiy:2019,\n      Author = {Pavel Iakubovskii},\n      Title = {Segmentation Models},\n      Year = {2019},\n      Publisher = {GitHub},\n      Journal = {GitHub repository},\n      Howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models}}\n    } \n\n许可\n~~~~~~~\n该项目采用 `MIT 许可证`_ 发布。\n\n.. _CHANGELOG.md: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002FCHANGELOG.md\n.. _`MIT 许可证`: https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fblob\u002Fmaster\u002FLICENSE","# segmentation_models 快速上手指南\n\n`segmentation_models` 是一个基于 Keras 和 TensorFlow 的 Python 库，提供了用于图像分割的神经网络模型。它支持多种主流架构（如 U-Net, FPN, PSPNet, Linknet）和 25 种预训练骨干网络，旨在通过极简的 API 帮助开发者快速构建高性能的分割模型。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: Python 3.x\n*   **核心依赖**:\n    *   `keras` >= 2.2.0 **或** `tensorflow` >= 1.13\n    *   `keras-applications` (版本需在 1.0.7 到 1.0.8 之间)\n    *   `image-classifiers` == 1.0.*\n    *   `efficientnet` == 1.0.*\n\n> **提示**：建议先安装好 `tensorflow` 或 `keras`，再安装本库以避免依赖冲突。\n\n## 安装步骤\n\n### 1. 使用 PyPI 安装（推荐）\n\n安装稳定版本：\n\n```bash\npip install -U segmentation-models\n```\n\n如果需要体验最新功能（预览版）：\n\n```bash\npip install -U --pre segmentation-models\n```\n\n### 2. 
使用国内镜像源加速（推荐中国开发者）\n\n为了获得更快的下载速度，建议使用清华或阿里镜像源进行安装：\n\n```bash\n# 使用清华镜像源安装稳定版\npip install -U segmentation-models -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 或使用阿里镜像源\npip install -U segmentation-models -i https:\u002F\u002Fmirrors.aliyun.com\u002Fpypi\u002Fsimple\u002F\n```\n\n### 3. 从源码安装\n\n如需安装最新的开发版本：\n\n```bash\npip install git+https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\n```\n\n## 基本使用\n\n该库默认会自动检测并导入 `keras`，如果未找到则尝试使用 `tensorflow.keras`。你也可以手动指定框架。\n\n### 1. 快速创建模型\n\n最简单的用法是直接使用默认的 U-Net 模型：\n\n```python\nimport segmentation_models as sm\n\n# 创建默认的 U-Net 模型\nmodel = sm.Unet()\n```\n\n### 2. 自定义骨干网络与预训练权重\n\n实际应用中，通常指定骨干网络（Backbone）并使用 ImageNet 预训练权重以加速收敛：\n\n```python\nimport segmentation_models as sm\n\n# 使用 ResNet34 作为骨干网络，加载 ImageNet 预训练权重\nmodel = sm.Unet('resnet34', encoder_weights='imagenet')\n```\n\n### 3. 配置输出类别与激活函数\n\n根据任务类型（二分类或多分类）调整输出层：\n\n```python\nimport segmentation_models as sm\n\n# 场景 A: 二分类分割 (默认)\n# classes=1, 激活函数为 sigmoid\nmodel_binary = sm.Unet('resnet34', classes=1, activation='sigmoid')\n\n# 场景 B: 多分类分割 (互斥类别，含背景)\n# classes=3 (例如：背景 + 2 个目标类), 激活函数为 softmax\nmodel_multi = sm.Unet('resnet34', classes=3, activation='softmax')\n\n# 场景 C: 多标签分割 (类别可重叠)\n# classes=3, 激活函数为 sigmoid\nmodel_multilabel = sm.Unet('resnet34', classes=3, activation='sigmoid')\n```\n\n### 4. 完整训练流程示例\n\n以下是一个包含数据预处理、模型编译和训练的标准流程：\n\n```python\nimport segmentation_models as sm\n\n# 1. 选择骨干网络并获取对应的预处理函数\nBACKBONE = 'resnet34'\npreprocess_input = sm.get_preprocessing(BACKBONE)\n\n# 2. 加载你的数据 (此处仅为示意，需替换为实际数据加载代码)\n# x_train, y_train, x_val, y_val = load_data(...)\n\n# 3. 数据预处理\n# x_train = preprocess_input(x_train)\n# x_val = preprocess_input(x_val)\n\n# 4. 定义模型\nmodel = sm.Unet(BACKBONE, encoder_weights='imagenet')\n\n# 5. 编译模型\n# 使用库提供的组合损失函数 (BCE + Jaccard) 和评估指标 (IoU)\nmodel.compile(\n    'Adam',\n    loss=sm.losses.bce_jaccard_loss,\n    metrics=[sm.metrics.iou_score],\n)\n\n# 6. 
训练模型\n# model.fit(\n#    x=x_train,\n#    y=y_train,\n#    batch_size=16,\n#    epochs=100,\n#    validation_data=(x_val, y_val),\n# )\n```\n\n> **注意**：除了 `Unet`，该库还支持 `Linknet`, `PSPNet` 和 `FPN` 架构，使用方法完全相同，只需将 `sm.Unet` 替换为相应的类名即可。","某医疗影像初创公司的算法团队正致力于开发一款自动识别肺部 CT 扫描中结节区域的辅助诊断系统，急需构建高精度的图像分割模型。\n\n### 没有 segmentation_models 时\n- **重复造轮子耗时久**：工程师需从零编写 U-Net 等复杂网络架构代码，仅搭建基础模型结构就耗费数天时间。\n- **训练收敛慢且效果差**：缺乏现成的预训练权重支持，模型只能随机初始化，导致在有限医疗数据上训练极难收敛，准确率低下。\n- **评估指标实现繁琐**：针对分割任务特有的 Dice 系数、IoU 等损失函数和评估指标，需手动推导公式并编写底层代码，易出错且调试困难。\n- **骨干网络切换成本高**：若想尝试 ResNet 等不同骨干网络以提升性能，需大幅重构代码，实验迭代周期被严重拉长。\n\n### 使用 segmentation_models 后\n- **极速构建模型**：仅需两行代码即可调用带有预训练权重的 U-Net 模型（如 `sm.Unet('resnet34', encoder_weights='imagenet')`），将建模时间从数天缩短至几分钟。\n- **迁移学习加速收敛**：直接加载 ImageNet 预训练权重作为特征提取器，显著提升了小样本医疗数据的训练速度与最终分割精度。\n- **开箱即用的专业指标**：库内内置了 Jaccard、Dice、Focal 等专用损失函数及评估指标，直接调用即可优化模型，无需关心底层数学实现。\n- **灵活架构探索**：通过简单修改参数即可在 25 种骨干网络和 4 种架构间自由切换，团队能高效进行对比实验，快速找到最优方案。\n\nsegmentation_models 通过提供“一行代码建模”的高层 API 和丰富的预训练资源，将研发重心从繁琐的基础设施搭建转移到了核心业务逻辑优化上。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqubvel_segmentation_models_93d7f6ea.png","qubvel","Pavel Iakubovskii","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fqubvel_5c987df0.png",null,"AI Engineer @ Praktika.ai | ex HuggingFace 🤗","qubvelx","https:\u002F\u002Fgithub.com\u002Fqubvel",[83],{"name":84,"color":85,"percentage":86},"Python","#3572A5",100,4921,1046,"2026-04-02T21:25:35","MIT","未说明","未说明 (基于 Keras\u002FTensorFlow，支持 CPU\u002FGPU 计算，具体取决于后端配置)",{"notes":94,"python":95,"dependencies":96},"该库默认尝试导入 'keras'，若未安装则使用 'tensorflow.keras'。可通过环境变量 SM_FRAMEWORK 或代码 sm.set_framework() 指定框架。支持 'channels_last' 和 'channels_first' 数据格式。所有骨干网络均提供在 ImageNet 数据集上预训练的权重。注意：版本 1.* 的部分模型与旧版不兼容，如需加载旧模型需回退至 0.2.1 版本。","3",[97,98,99,100],"keras>=2.2.0 或 tensorflow>=1.13","keras-applications>=1.0.7, \u003C=1.0.8","image-classifiers==1.0.*","efficientnet==1.0.*",[13,14],[103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122],"unet","fpn","segmentation","keras","pretrained","pre-trained","image-segmentation","linknet","pspnet","tensorflow","segmentation-models","resnet","resnext","efficientnet","densenet","keras-tensorflow","keras-models","tensorflow-keras","keras-examples","mobilenet","2026-03-27T02:49:30.150509","2026-04-06T05:17:47.108583",[126,131,136,141,146,151],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},10670,"如何为多分类任务设置类别权重（class_weights）？","如果遇到张量形状错误，请尝试安装修复了类别权重问题的特定分支版本。使用以下命令安装：\npip install git+https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models@fix\u002Flosses-class-weights\n安装后，新的 categorical_crossentropy 损失函数应能正常处理 class_weights 参数。","https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fissues\u002F109",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},10671,"为什么模型预测结果的分辨率变低（出现块状伪影）？","这通常是因为代码运行目录与库的源代码目录冲突导致的。请确保你的训练脚本（如 seg_train.py）和测试脚本不在克隆的 segmentation_models 仓库目录内运行。\n解决方案：将你的脚本移动到另一个独立的目录中运行，这样 Python 会调用已安装的库版本而不是本地源码中的旧版本。更新包到最新版本（特别是包含 fix-resize-image-layer-config 修复的版本）也能解决此问题。","https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fissues\u002F90",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},10672,"在 Google Colab 上遇到 ResourceExhaustedError (OOM) 怎么办？","当使用较大的骨干网络（如 ResNet50, ResNet101, ResNeXt）或 EfficientNet-B4\u002FB5\u002FB6 时，显存可能不足。解决方案包括：\n1. 减小 batch_size（例如小于 10 或 12）。\n2. 尝试更换较小的骨干网络（如 VGG16, VGG19, ResNet34），这些网络在相同条件下通常不会报错。\n3. 
确认未错误地启用多 GPU 模式（除非显式使用了 keras.utils.multi_gpu_model）。","https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fissues\u002F167",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},10673,"如何在解冻编码器后进行微调（Fine-tuning）？","标准的微调流程是先冻结编码器训练，然后解冻所有层继续训练。代码示例如下：\n1. 初始化模型并冻结编码器：\n   model = Unet(backbone_name='resnet34', encoder_weights='imagenet', encoder_freeze=True)\n   model.compile(optimizer=Adam(lr=1e-4), loss=..., metrics=[...])\n   model.fit_generator(..., epochs=2)\n2. 解冻编码器并重新编译模型：\n   from segmentation_models.utils import set_trainable\n   set_trainable(model)\n   # 注意：set_trainable 内部通常会处理重新编译，如果手动修改了 trainable 属性，需确保模型已重新编译。\n   model.fit_generator(..., epochs=100)","https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fissues\u002F140",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},10674,"如何在 Google Colab 中正确安装并使用 tf.keras 版本？","在 Google Colab 中，默认环境已包含 tf.keras。如果直接从源码克隆仓库导入失败，通常是因为依赖项（如 classification_models）冲突或缺失。\n建议不要直接克隆源码运行，而是通过 pip 安装兼容版本。如果必须使用源码，请确保安装了所有依赖项（requirements.txt），并注意 Colab 环境中可能需要调整 keras_applications 的使用以适配 tf.keras。维护者建议检查是否正确安装了 `classification_models` 依赖。","https:\u002F\u002Fgithub.com\u002Fqubvel\u002Fsegmentation_models\u002Fissues\u002F80",{"id":152,"question_zh":153,"answer_zh":154,"source_url":135},10675,"保存并重新加载模型后，预测结果不一致或报错怎么办？","这是一个已知问题，已在后续版本中修复。如果你遇到保存模型后加载预测失败的情况，请确保更新了 segmentation_models 包。\n你可以运行以下测试代码验证是否修复：\nimport os, keras, numpy as np, segmentation_models\nos.environ['CUDA_VISIBLE_DEVICES'] = ''\nmodel = segmentation_models.FPN(classes=1, activation='sigmoid')\nmodel.save('.\u002Ftest_model.h5')\nreloaded_model = keras.models.load_model('.\u002Ftest_model.h5')\npr_1 = model.predict(np.ones([1, 32, 32, 3]))\npr_2 = reloaded_model.predict(np.ones([1, 32, 32, 3]))\nprint(np.allclose(pr_1, pr_2)) # 应输出 True",[156,161,166,171],{"id":157,"version":158,"summary_zh":159,"released_at":160},71250,"1.0.1","#### Minor fixes\r\n - `FocalLoss` fixed `alpha` and positional arguments order\r\n - `PSPNet` changed last layer upsampling to `nearest - > bilinear`","2020-01-10T11:28:38",{"id":162,"version":163,"summary_zh":164,"released_at":165},71251,"v1.0.0","###### Areas of improvement\r\n - Support for `keras` and `tf.keras`\r\n - Losses as classes, base loss operations (sum of losses, multiplied loss)\r\n - NCHW and NHWC support\r\n - Removed pure tf operations to work with other keras backends\r\n - Reduced a number of custom objects for better models serialization and deserialization\r\n\r\n###### New featrues\r\n - New backbones: EfficentNetB[0-7] \r\n - New loss function: Focal loss \r\n - New metrics: Precision, Recall\r\n \r\n###### API changes\r\n - `get_preprocessing` moved from `sm.backbones.get_preprocessing` to `sm.get_preprocessing`","2019-10-15T08:45:33",{"id":167,"version":168,"summary_zh":169,"released_at":170},71252,"v1.0.0b1","* Support for keras and tf.keras\r\n* Focal loss; precision and recall metrics\r\n* New losses functionality: aggregation and multiplication by factor\r\n* NCHW and NHWC support\r\n* Removed pure `tf` operations to work with other keras backends\r\n* Reduced a number of custom objects for better models serialization and deserialization","2019-08-09T13:20:43",{"id":172,"version":173,"summary_zh":174,"released_at":175},71253,"v0.2.1","##### Areas of improvements \r\n\r\n - Added `set_regularization` function \r\n - Added `beta` argument to dice loss\r\n - Added `threshold` argument for metrics\r\n - Fixed `prerprocess_input` for mobilenets\r\n - Fixed missing 
parameter `interpolation` in `ResizeImage` layer config\r\n - Some minor improvements in docs, fixed typos","2019-05-23T14:28:34"]