[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-IBM--MAX-Image-Resolution-Enhancer":3,"tool-IBM--MAX-Image-Resolution-Enhancer":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":10,"env_os":97,"env_gpu":98,"env_ram":99,"env_deps":100,"category_tags":105,"github_topics":106,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":115,"updated_at":116,"faqs":117,"releases":148},847,"IBM\u002FMAX-Image-Resolution-Enhancer","MAX-Image-Resolution-Enhancer","Upscale an image by a factor of 4, while generating photo-realistic details.","MAX-Image-Resolution-Enhancer 是一款强大的开源图像超分辨率增强模型，核心功能是将低分辨率图片放大 4 倍，并智能生成照片级的逼真细节。MAX-Image-Resolution-Enhancer 有效解决了传统图像放大技术中常见的模糊、锯齿及细节丢失问题，让模糊的老照片或缩略图也能呈现清晰锐利的质感。\n\n技术上，MAX-Image-Resolution-Enhancer 基于 TensorFlow 框架构建，采用生成对抗网络（GAN）架构。模型在 60 万张 OpenImages V4 数据集上进行了深度训练，相比原始 SRGAN 实现，MAX-Image-Resolution-Enhancer 在保证结构相似性的同时，更侧重于优化人眼观感，输出结果更具真实感。项目不仅提供了预训练权重，还包含完整的 Docker 容器化部署代码及 API 接口，支持快速集成到 Web 服务中。\n\nMAX-Image-Resolution-Enhancer 非常适合开发者将其作为后端服务接入应用，也适合研究人员复现算法效果，或是设计师用于快速修复和提升图片素材质量。采用 Apache 2.0 开源协议，允许自由修改与分发，是处理图像高清化需求的实用且高效的解决方案。","MAX-Image-Resolution-Enhancer 是一款强大的开源图像超分辨率增强模型，核心功能是将低分辨率图片放大 4 倍，并智能生成照片级的逼真细节。MAX-Image-Resolution-Enhancer 
有效解决了传统图像放大技术中常见的模糊、锯齿及细节丢失问题，让模糊的老照片或缩略图也能呈现清晰锐利的质感。\n\n技术上，MAX-Image-Resolution-Enhancer 基于 TensorFlow 框架构建，采用生成对抗网络（GAN）架构。模型在 60 万张 OpenImages V4 数据集上进行了深度训练，相比原始 SRGAN 实现，MAX-Image-Resolution-Enhancer 在保证结构相似性的同时，更侧重于优化人眼观感，输出结果更具真实感。项目不仅提供了预训练权重，还包含完整的 Docker 容器化部署代码及 API 接口，支持快速集成到 Web 服务中。\n\nMAX-Image-Resolution-Enhancer 非常适合开发者将其作为后端服务接入应用，也适合研究人员复现算法效果，或是设计师用于快速修复和提升图片素材质量。采用 Apache 2.0 开源协议，允许自由修改与分发，是处理图像高清化需求的实用且高效的解决方案。","[![Build Status](https:\u002F\u002Ftravis-ci.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer.svg?branch=master)](https:\u002F\u002Ftravis-ci.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer) [![API demo](https:\u002F\u002Fimg.shields.io\u002Fwebsite\u002Fhttp\u002Fmax-image-resolution-enhancer.codait-prod-41208c73af8fca213512856c7a09db52-0000.us-east.containers.appdomain.cloud\u002Fswagger.json.svg?label=API%20demo&down_message=down&up_message=up)](http:\u002F\u002Fmax-image-resolution-enhancer.codait-prod-41208c73af8fca213512856c7a09db52-0000.us-east.containers.appdomain.cloud)\n\n[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIBM_MAX-Image-Resolution-Enhancer_readme_b1cb4264b499.png\" width=\"400px\">](http:\u002F\u002Fibm.biz\u002Fmax-to-ibm-cloud-tutorial)\n\n# IBM Developer Model Asset Exchange: Image Resolution Enhancer\n\nThis repository contains code to instantiate and deploy an image resolution enhancer. 
\nThis model is able to upscale a pixelated image by a factor of 4, while generating photo-realistic details.\n\nThe GAN is based on [this GitHub repository](https:\u002F\u002Fgithub.com\u002Fbrade31919\u002FSRGAN-tensorflow) and on [this research article](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1609.04802.pdf).\n\nThe model was trained on 600,000 images of the [OpenImages V4](https:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html) dataset, and the model files are hosted on\n[IBM Cloud Object Storage](https:\u002F\u002Fmax-cdn.cdn.appdomain.cloud\u002Fmax-image-resolution-enhancer\u002F1.0.0\u002Fassets.tar.gz).\nThe code in this repository deploys the model as a web service in a Docker container. This repository was developed\nas part of the [IBM Developer Model Asset Exchange](https:\u002F\u002Fdeveloper.ibm.com\u002Fexchanges\u002Fmodels\u002F) and the public API is powered by [IBM Cloud](https:\u002F\u002Fibm.biz\u002FBdz2XM).\n\n## Model Metadata\n| Domain | Application | Industry  | Framework | Training Data | Input Data Format |\n| ------------- | --------  | -------- | --------- | --------- | -------------- | \n| Vision | Super-Resolution | General | TensorFlow | [OpenImages V4](https:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html) | Image (RGB\u002FHWC) |\n\n## Benchmark\n\n| Set5 | Author's SRGAN | This SRGAN |\n| -------- | ------------------ | ----------- |\n| PSNR | 29.40 | 29.56 |\n| SSIM | 0.85 | 0.85 |\n\n| Set14 | Author's SRGAN | This SRGAN |\n| -------- | ------------------ | ----------- |\n| PSNR | 26.02 | 26.25 |\n| SSIM | 0.74 | 0.72 |\n\n| BSD100 | Author's SRGAN | This SRGAN |\n| -------- | ------------------ | ----------- |\n| PSNR | 25.16 | 24.4 |\n| SSIM | 0.67  |  0.67 |\n\nThe performance of this implementation was evaluated on three datasets: Set5, Set14, and BSD100.\nThe PSNR (peak signal to noise ratio) and SSIM (structural similarity index) metrics were evaluated, although 
the paper discusses\nthe MOS (mean opinion score) as the most favorable metric. In essence, the SRGAN implementation trades a better PSNR or SSIM score for a result more appealing to the human eye. This leads to a collection of output images with more crisp and realistic details.\n\n\n_NOTE: The SRGAN in the paper was trained on 350k ImageNet samples, whereas this SRGAN was trained on 600k [OpenImages V4](https:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html) pictures._\n\n## References\n\n* _C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, W. Shi_, [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1609.04802.pdf), ArXiv, 2017.\n* [SRGAN-tensorflow (model code source)](https:\u002F\u002Fgithub.com\u002Fbrade31919\u002FSRGAN-tensorflow)\n* [tensorflow-SRGAN](https:\u002F\u002Fgithub.com\u002Ftrevor-m\u002Ftensorflow-SRGAN)\n* [Deconvolution and Checkerboard Artefacts](https:\u002F\u002Fdistill.pub\u002F2016\u002Fdeconv-checkerboard\u002F)\n\n## Licenses\n\n| Component | License | Link  |\n| ------------- | --------  | -------- |\n| This repository | [Apache 2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | [LICENSE](https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer\u002Fblob\u002Fmaster\u002FLICENSE) |\n| Model Weights | [Apache 2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | [LICENSE](https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer\u002Fblob\u002Fmaster\u002FLICENSE) |\n| Model Code (3rd party) | [MIT](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) | [LICENSE](https:\u002F\u002Fgithub.com\u002Fbrade31919\u002FSRGAN-tensorflow\u002Fblob\u002Fmaster\u002FLICENSE.txt) |\n| Test samples | [CC BY 2.0](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby\u002F2.0\u002F) | [Asset 
README](samples\u002FREADME.md) |\n|  | [CC0](https:\u002F\u002Fcreativecommons.org\u002Fpublicdomain\u002Fzero\u002F1.0\u002F) | [Asset README](samples\u002FREADME.md) |\n\n## Pre-requisites:\n\n* `docker`: The [Docker](https:\u002F\u002Fwww.docker.com\u002F) command-line interface. Follow the [installation instructions](https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F) for your system.\n* The minimum recommended resources for this model is 8 GB Memory (see Troubleshooting) and 4 CPUs.\n* If you are on x86-64\u002FAMD64, your CPU must support [AVX](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FAdvanced_Vector_Extensions) at the minimum.\n\n# Deployment options\n\n* [Deploy from Quay](#deploy-from-quay)\n* [Deploy on Red Hat OpenShift](#deploy-on-red-hat-openshift)\n* [Deploy on Kubernetes](#deploy-on-kubernetes)\n* [Run Locally](#run-locally)\n\n## Deploy from Quay\n\nTo run the docker image, which automatically starts the model serving API, run:\n\n```\n$ docker run -it -p 5000:5000 quay.io\u002Fcodait\u002Fmax-image-resolution-enhancer\n```\n\nThis will pull a pre-built image from the Quay.io container registry (or use an existing image if already cached locally) and run it.\nIf you'd rather checkout and build the model locally you can follow the [run locally](#run-locally) steps below.\n\n## Deploy on Red Hat OpenShift\n\nYou can deploy the model-serving microservice on Red Hat OpenShift by following the instructions for the OpenShift web console or the OpenShift Container Platform CLI [in this tutorial](https:\u002F\u002Fdeveloper.ibm.com\u002Ftutorials\u002Fdeploy-a-model-asset-exchange-microservice-on-red-hat-openshift\u002F), specifying `quay.io\u002Fcodait\u002Fmax-image-resolution-enhancer` as the image name.\n\n## Deploy on Kubernetes\n\nYou can also deploy the model on Kubernetes using the latest docker image on Quay.\n\nOn your Kubernetes cluster, run the following commands:\n\n```\n$ kubectl apply -f 
https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer\u002Fraw\u002Fmaster\u002Fmax-image-resolution-enhancer.yaml\n```\n\nThe model will be available internally at port `5000`, but can also be accessed externally through the `NodePort`.\n\nA more elaborate tutorial on how to deploy this MAX model to production on [IBM Cloud](https:\u002F\u002Fibm.biz\u002FBdz2XM) can be found [here](http:\u002F\u002Fibm.biz\u002Fmax-to-ibm-cloud-tutorial).\n\n## Run Locally\n\n1. [Build the Model](#1-build-the-model)\n2. [Deploy the Model](#2-deploy-the-model)\n3. [Use the Model](#3-use-the-model)\n4. [Development](#4-development)\n5. [Cleanup](#5-cleanup)\n\n\n### 1. Build the Model\n\nClone this repository locally. In a terminal, run the following command:\n\n```\n$ git clone https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer.git\n```\n\nChange directory into the repository base folder:\n\n```\n$ cd max-image-resolution-enhancer\n```\n\nTo build the docker image locally, run: \n\n```\n$ docker build -t max-image-resolution-enhancer .\n```\n\nAll required model assets will be downloaded during the build process. _Note_ that currently this docker image is CPU only (we will add support for GPU images later).\n\n\n### 2. Deploy the Model\n\nTo run the docker image, which automatically starts the model serving API, run:\n\n```\n$ docker run -it -p 5000:5000 max-image-resolution-enhancer\n```\n\n### 3. Use the Model\n\nThe API server automatically generates an interactive Swagger documentation page. Go to `http:\u002F\u002Flocalhost:5000` to load it. 
From there you can explore the API and also create test requests.\n\nUse the `model\u002Fpredict` endpoint to load a test image (you can use one of the test images from the `samples\u002Ftest_examples\u002Flow_resolution` folder) in order to get a high resolution output image returned.\n\nThe ideal input image is a PNG file with a resolution between 100x100 and 500x500, preferably without any post-capture processing and flashy colors. The model is able to generate details from a pixelated image (low DPI), but is not able to correct a 'blurred' image.\n\n![input](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIBM_MAX-Image-Resolution-Enhancer_readme_19cf67411264.png)\n_Left: input image (128×80). Right: output image (512×320)_\n\n![Swagger UI screenshot](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIBM_MAX-Image-Resolution-Enhancer_readme_c9f9d5a97668.png)\n\nYou can also test it on the command line, for example:\n\n```\n$ curl -F \"image=@samples\u002Ftest_examples\u002Flow_resolution\u002Ffood.png\" -XPOST http:\u002F\u002Flocalhost:5000\u002Fmodel\u002Fpredict > food_high_res.png\n```\n\nThe above command will send the low resolution `food.png` file to the model, and save the high resolution output image to the `food_high_res.png` file in the root directory.\n\n### 4. Development\n\nTo run the Flask API app in debug mode, edit `config.py` to set `DEBUG = True` under the application settings. You will then need to rebuild the docker image (see [step 1](#1-build-the-model)).\n\nPlease remember to set `DEBUG = False` when running the model in production. \n\n### 5. Cleanup\n\nTo stop the Docker container, type `CTRL` + `C` in your terminal.\n\n# Troubleshooting\n- Calling the ```model\u002Fpredict``` endpoint kills the Docker container with the message ```Killed```\n> This is likely caused due to the default limitation of Docker's memory allocation to 2 GB. Navigate to the ```Preferences``` menu under the Docker Desktop application. 
Use the slider to increase the available memory to 8 GB and restart Docker Desktop.\n\n- The details in the output image are different than what may be expected and are sometimes not physically possible\n> This model generates details basically 'out of thin air'. Creating something out of nothing is not possible without making assumptions.\nThe network attempts to recognize elements in the low-resolution image from which it can infer what the reality (human eye | super-resolution) could have looked like. If a group of pixels strongly resembles an observation that is not related to the content of the image, it might lead to observing results that are not 'physically possible'. \n\n>For example: a white pixel in a low-resolution image might have been converted to a snowflake, although the original picture might have been taken in the desert. This example is imaginary and has not actually been observed.\n\n- Artefacts in the output image\n> Observing artefacts in some images is unfortunately inevitable as any neural network is subject to technical limitations and characteristics of the training data.\n\n> Keep in mind that the best results are achieved with the following:\n> * A PNG image\n> * An image that is sufficiently zoomed in. During the process, the network groups a block of pixels together. If the block contains more details than the network produces, the result will be spurious.\n> * An image taken under natural light, without filters, and with few bright or flashy colors. The neural network was not trained on heavily edited images.\n> * An image that has sufficiently high resolution to not confuse the network with multiple possibilities (e.g. 
a sole pixel in a very low-resolution image could represent an entire car, person, sandwich,..)\n> * The model is able to generate details from a pixelated image (low DPI), but is not able to correct a 'blurred' image.\n\n## Resources and Contributions\n   \nIf you are interested in contributing to the Model Asset Exchange project or have any queries, please follow the instructions [here](https:\u002F\u002Fgithub.com\u002FCODAIT\u002Fmax-central-repo).\n","[![Build Status](https:\u002F\u002Ftravis-ci.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer.svg?branch=master)](https:\u002F\u002Ftravis-ci.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer) [![API demo](https:\u002F\u002Fimg.shields.io\u002Fwebsite\u002Fhttp\u002Fmax-image-resolution-enhancer.codait-prod-41208c73af8fca213512856c7a09db52-0000.us-east.containers.appdomain.cloud\u002Fswagger.json.svg?label=API%20demo&down_message=down&up_message=up)](http:\u002F\u002Fmax-image-resolution-enhancer.codait-prod-41208c73af8fca213512856c7a09db52-0000.us-east.containers.appdomain.cloud)\n\n[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIBM_MAX-Image-Resolution-Enhancer_readme_b1cb4264b499.png\" width=\"400px\">](http:\u002F\u002Fibm.biz\u002Fmax-to-ibm-cloud-tutorial)\n\n# IBM Developer 模型资产交换：图像分辨率增强器\n\n此仓库包含用于实例化和部署图像分辨率增强器的代码。\n该模型能够将像素化图像放大 4 倍，同时生成照片级真实的细节。\n\nGAN（生成对抗网络）基于 [此 GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fbrade31919\u002FSRGAN-tensorflow) 和 [此研究文章](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1609.04802.pdf)。\n\n该模型在 [OpenImages V4](https:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html) 数据集的 600,000 张图像上进行了训练，模型文件托管在\n[IBM Cloud Object Storage](https:\u002F\u002Fmax-cdn.cdn.appdomain.cloud\u002Fmax-image-resolution-enhancer\u002F1.0.0\u002Fassets.tar.gz) 上。\n本仓库中的代码将模型作为 Web 服务部署在 Docker 容器中。本仓库是 [IBM Developer 模型资产交换](https:\u002F\u002Fdeveloper.ibm.com\u002Fexchanges\u002Fmodels\u002F) 项目的一部分，公共 API 由 [IBM 
Cloud](https:\u002F\u002Fibm.biz\u002FBdz2XM) 提供支持。\n\n## 模型元数据\n| 领域 | 应用 | 行业 | 框架 | 训练数据 | 输入数据格式 |\n| ------------- | --------  | -------- | --------- | --------- | -------------- | \n| 视觉 | 超分辨率 | 通用 | TensorFlow | [OpenImages V4](https:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html) | 图像 (RGB\u002FHWC) |\n\n## 基准测试\n\n| Set5 | 原作者 SRGAN | 本 SRGAN |\n| -------- | ------------------ | ----------- |\n| PSNR | 29.40 | 29.56 |\n| SSIM | 0.85 | 0.85 |\n\n| Set14 | 原作者 SRGAN | 本 SRGAN |\n| -------- | ------------------ | ----------- |\n| PSNR | 26.02 | 26.25 |\n| SSIM | 0.74 | 0.72 |\n\n| BSD100 | 原作者 SRGAN | 本 SRGAN |\n| -------- | ------------------ | ----------- |\n| PSNR | 25.16 | 24.4 |\n| SSIM | 0.67  |  0.67 |\n\n此实现的性能在三个数据集上进行了评估：Set5、Set14 和 BSD100。\n评估了 PSNR（峰值信噪比）和 SSIM（结构相似性指数）指标，尽管论文中将 MOS（平均意见得分）讨论为最有利的指标。\n本质上，SRGAN 实现是为了获得更符合人眼审美的结果，而在 PSNR 或 SSIM 分数上做出权衡。这导致输出一组具有更清晰和逼真细节的图像。\n\n_NOTE: 论文中的 SRGAN 是在 35 万张 ImageNet 样本上训练的，而本 SRGAN 是在 60 万张 [OpenImages V4](https:\u002F\u002Fstorage.googleapis.com\u002Fopenimages\u002Fweb\u002Findex.html) 图片上训练的。_\n\n## 参考文献\n\n* _C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, W. 
Shi_, [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1609.04802.pdf), ArXiv, 2017.\n* [SRGAN-tensorflow (模型代码源)](https:\u002F\u002Fgithub.com\u002Fbrade31919\u002FSRGAN-tensorflow)\n* [tensorflow-SRGAN](https:\u002F\u002Fgithub.com\u002Ftrevor-m\u002Ftensorflow-SRGAN)\n* [反卷积与棋盘格伪影](https:\u002F\u002Fdistill.pub\u002F2016\u002Fdeconv-checkerboard\u002F)\n\n## 许可证\n\n| 组件 | 许可证 | 链接  |\n| ------------- | --------  | -------- |\n| 本仓库 | [Apache 2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | [LICENSE](https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer\u002Fblob\u002Fmaster\u002FLICENSE) |\n| 模型权重 | [Apache 2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | [LICENSE](https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer\u002Fblob\u002Fmaster\u002FLICENSE) |\n| 模型代码 (第三方) | [MIT](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) | [LICENSE](https:\u002F\u002Fgithub.com\u002Fbrade31919\u002FSRGAN-tensorflow\u002Fblob\u002Fmaster\u002FLICENSE.txt) |\n| 测试样本 | [CC BY 2.0](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby\u002F2.0\u002F) | [资源 README](samples\u002FREADME.md) |\n|  | [CC0](https:\u002F\u002Fcreativecommons.org\u002Fpublicdomain\u002Fzero\u002F1.0\u002F) | [资源 README](samples\u002FREADME.md) |\n\n## 前置条件：\n\n* `docker`: [Docker](https:\u002F\u002Fwww.docker.com\u002F) 命令行界面。请遵循适用于您系统的 [安装说明](https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F)。\n* 此模型的最小推荐资源为 8 GB 内存（参见故障排除）和 4 个 CPU。\n* 如果您使用的是 x86-64\u002FAMD64，您的 CPU 必须至少支持 [AVX（高级向量扩展）](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FAdvanced_Vector_Extensions)。\n\n# 部署选项\n\n* [从 Quay 部署](#deploy-from-quay)\n* [在 Red Hat OpenShift 上部署](#deploy-on-red-hat-openshift)\n* [在 Kubernetes 上部署](#deploy-on-kubernetes)\n* [本地运行](#run-locally)\n\n## 从 Quay 部署\n\n要运行自动启动模型服务 API 的 Docker 镜像，请运行：\n\n```\n$ docker run -it -p 
5000:5000 quay.io\u002Fcodait\u002Fmax-image-resolution-enhancer\n```\n\n这将从 Quay.io 容器注册表拉取预构建的镜像（如果本地已缓存则使用现有镜像）并运行它。\n如果您更愿意检出并在本地构建模型，可以遵循下面的 [本地运行](#run-locally) 步骤。\n\n## 在 Red Hat OpenShift 上部署\n\n您可以通过遵循 OpenShift Web 控制台或 OpenShift Container Platform CLI 的说明，在此教程中指定 `quay.io\u002Fcodait\u002Fmax-image-resolution-enhancer` 作为镜像名称，从而在 Red Hat OpenShift 上部署模型服务微服务 [在此教程](https:\u002F\u002Fdeveloper.ibm.com\u002Ftutorials\u002Fdeploy-a-model-asset-exchange-microservice-on-red-hat-openshift\u002F)。\n\n## 在 Kubernetes 上部署\n\n您也可以使用 Quay 上的最新 Docker 镜像在 Kubernetes 上部署该模型。\n\n在您的 Kubernetes 集群上，运行以下命令：\n\n```\n$ kubectl apply -f https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer\u002Fraw\u002Fmaster\u002Fmax-image-resolution-enhancer.yaml\n```\n\n模型将在内部端口 `5000` 可用，但也可以通过 `NodePort` 从外部访问。\n\n有关如何将此 MAX 模型部署到 [IBM Cloud](https:\u002F\u002Fibm.biz\u002FBdz2XM) 生产环境的更详细教程，请查看 [此处](http:\u002F\u002Fibm.biz\u002Fmax-to-ibm-cloud-tutorial)。\n\n## 本地运行\n\n1. [构建模型](#1-build-the-model)\n2. [部署模型](#2-deploy-the-model)\n3. [使用模型](#3-use-the-model)\n4. [开发](#4-development)\n5. [清理](#5-cleanup)\n\n### 1. 构建模型\n\n在本地克隆此仓库。在终端中，运行以下命令：\n\n```\n$ git clone https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer.git\n```\n\n进入仓库根目录：\n\n```\n$ cd max-image-resolution-enhancer\n```\n\n要在本地构建 Docker (Docker 容器) 镜像，请运行：\n\n```\n$ docker build -t max-image-resolution-enhancer .\n```\n\n所有所需的模型资源将在构建过程中下载。**注意**，目前该 Docker 镜像仅支持 CPU (中央处理器)（我们稍后将添加对 GPU (图形处理器) 镜像的支持）。\n\n### 2. 部署模型\n\n要运行 Docker 镜像（它会自动启动模型服务 API (应用程序编程接口)），请运行：\n\n```\n$ docker run -it -p 5000:5000 max-image-resolution-enhancer\n```\n\n### 3. 
使用模型\n\nAPI (应用程序编程接口) 服务器会自动生成一个交互式的 Swagger 文档页面。访问 `http:\u002F\u002Flocalhost:5000` 加载它。在那里你可以探索 API 并创建测试请求。\n\n使用 `model\u002Fpredict` 端点 (Endpoint) 加载测试图片（你可以使用 `samples\u002Ftest_examples\u002Flow_resolution` 文件夹中的测试图片之一），以获取返回的高分辨率输出图片。\n\n理想的输入图片是分辨率为 100x100 到 500x500 之间的 PNG 文件，最好没有任何后期处理或鲜艳的色彩。该模型能够从像素化图像（低 DPI (每英寸点数)）生成细节，但无法修正“模糊”的图像。\n\n![input](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIBM_MAX-Image-Resolution-Enhancer_readme_19cf67411264.png)\n_Left: input image (128×80). Right: output image (512×320)_\n\n![Swagger UI screenshot](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIBM_MAX-Image-Resolution-Enhancer_readme_c9f9d5a97668.png)\n\n你也可以在命令行上测试，例如：\n\n```\n$ curl -F \"image=@samples\u002Ftest_examples\u002Flow_resolution\u002Ffood.png\" -XPOST http:\u002F\u002Flocalhost:5000\u002Fmodel\u002Fpredict > food_high_res.png\n```\n\n上述命令将把低分辨率的 `food.png` 文件发送给模型，并将高分辨率的输出图像保存到根目录下的 `food_high_res.png` 文件中。\n\n### 4. 开发\n\n要以调试模式 (Debug mode) 运行 Flask (Flask 框架) API 应用，请编辑 `config.py` 并在应用程序设置下设置 `DEBUG = True`。然后你需要重新构建 Docker 镜像（参见 [步骤 1](#1-build-the-model)）。\n\n请记住在生产环境中运行模型时将 `DEBUG` 设置为 `False`。\n\n### 5. 
清理\n\n要停止 Docker 容器，请在终端中输入 `CTRL` + `C`。\n\n# 故障排除\n- 调用 ```model\u002Fpredict``` 端点会导致 Docker 容器被终止，并显示消息 ```Killed```\n> 这可能是由于 Docker 内存分配默认限制为 2 GB 所致。导航至 Docker Desktop 应用程序下的 ```Preferences``` 菜单。使用滑块将可用内存增加到 8 GB 并重启 Docker Desktop。\n\n- 输出图像中的细节与预期不同，有时甚至不符合物理规律\n> 该模型基本上是从“无”中生成细节。在不做出假设的情况下，凭空创造是不可能的。\n网络试图识别低分辨率图像中的元素，从而推断现实（人眼 | 超分辨率 (Super-resolution)）可能看起来的样子。如果一组像素强烈类似于与图像内容无关的观察结果，则可能导致观察到不符合‘物理规律’的结果。 \n\n> 例如：低分辨率图像中的一个白色像素可能被转换为雪花，尽管原图可能是在沙漠中拍摄的。此示例是虚构的，实际上并未观察到。\n\n- 输出图像中的伪影\n> 不幸的是，在某些图像中观察到伪影是不可避免的，因为任何神经网络 (Neural Network) 都受到技术限制和训练数据特性的影响。\n\n> 请记住，要达到最佳效果，需满足以下条件：\n> * PNG 图像\n> * 充分放大的图像。在此过程中，网络会将一组像素组合在一起。如果块中包含的细节多于网络生成的细节，结果将是虚假的。\n> * 在自然光下拍摄、无滤镜且亮色或鲜艳色彩较少的图像。神经网络未针对重度编辑的图像进行训练。\n> * 具有足够高解析度的图像，以免让网络混淆多种可能性（例如，极低分辨率图像中的单个像素可能代表整辆车、人、三明治等……）\n> * 该模型能够从像素化图像（低 DPI (每英寸点数)）生成细节，但无法修正“模糊”的图像。\n\n## 资源和贡献\n   \n如果您有兴趣为 Model Asset Exchange 项目做出贡献或有任何问题，请遵循 [此处](https:\u002F\u002Fgithub.com\u002FCODAIT\u002Fmax-central-repo) 的说明。","# MAX-Image-Resolution-Enhancer 快速上手指南\n\nIBM 开源的图像分辨率增强工具，基于 SRGAN 模型，可将像素化图像放大 4 倍并生成逼真的细节。\n\n## 1. 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **Docker**: 已安装 Docker 命令行界面。\n*   **硬件资源**: 建议至少 8 GB 内存和 4 个 CPU 核心。\n*   **CPU 架构**: x86-64\u002FAMD64 架构需支持 [AVX](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FAdvanced_Vector_Extensions) 指令集。\n\n## 2. 安装步骤\n\n本工具以 Docker 容器形式部署，提供预构建镜像和本地构建两种方式。\n\n### 方式一：运行预构建镜像（推荐）\n\n直接从 Quay.io 拉取镜像并启动服务：\n\n```bash\n$ docker run -it -p 5000:5000 quay.io\u002Fcodait\u002Fmax-image-resolution-enhancer\n```\n\n### 方式二：本地构建与部署\n\n如果您希望从源码构建，请按以下步骤操作：\n\n1.  **克隆仓库**\n    ```bash\n    $ git clone https:\u002F\u002Fgithub.com\u002FIBM\u002Fmax-image-resolution-enhancer.git\n    $ cd max-image-resolution-enhancer\n    ```\n\n2.  **构建镜像**\n    ```bash\n    $ docker build -t max-image-resolution-enhancer .\n    ```\n    *注意：构建过程会自动下载所需的模型资产。当前镜像仅支持 CPU 运行。*\n\n3.  **启动服务**\n    ```bash\n    $ docker run -it -p 5000:5000 max-image-resolution-enhancer\n    ```\n\n## 3. 
基本使用\n\n服务启动后，API 服务器将自动运行。\n\n### 访问交互文档\n打开浏览器访问 `http:\u002F\u002Flocalhost:5000`，即可查看 Swagger 文档并进行测试请求。\n\n### 调用 API 接口\n使用 `model\u002Fpredict` 端点上传图片，获取高分辨率输出。\n\n**示例命令：**\n```bash\n$ curl -F \"image=@samples\u002Ftest_examples\u002Flow_resolution\u002Ffood.png\" -XPOST http:\u002F\u002Flocalhost:5000\u002Fmodel\u002Fpredict > food_high_res.png\n```\n上述命令会将低分辨率图片发送至模型，并将结果保存为 `food_high_res.png`。\n\n### 输入要求\n为了获得最佳效果，建议输入图片满足以下条件：\n*   **格式**: PNG 文件。\n*   **尺寸**: 分辨率介于 100x100 至 500x500 之间。\n*   **内容**: 避免过度后期处理或鲜艳色彩；模型适用于修复像素化（低 DPI）图像，无法修正模糊图像。","电商运营团队在整理历史库存商品档案时，急需将供应商提供的低像素老照片升级为符合现代网页标准的高清大图。\n\n### 没有 MAX-Image-Resolution-Enhancer 时\n- 直接拉伸低分辨率图片会导致严重锯齿和模糊，无法看清商品材质纹理。\n- 依赖设计师手动 PS 修复，处理一张高清图平均需耗时 15 分钟，效率极低。\n- 传统插值算法生成的图像缺乏真实感，显得廉价，影响用户购买欲望。\n- 批量处理时无法保证画质一致性，导致店铺页面风格杂乱。\n\n### 使用 MAX-Image-Resolution-Enhancer 后\n- MAX-Image-Resolution-Enhancer 能将输入图片无损放大 4 倍，并自动补全缺失的细节信息。\n- 通过 Docker 容器化部署接入现有系统，实现千张图片的自动化批量处理。\n- 基于 GAN 技术生成的照片级真实细节，让布料褶皱、金属光泽等特征清晰锐利。\n- 输出结果结构稳定，确保了所有上架商品图片在视觉呈现上的高度统一。\n\nMAX-Image-Resolution-Enhancer 不仅解决了画质瓶颈，更将图像处理从人工劳动转变为高效的技术流程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIBM_MAX-Image-Resolution-Enhancer_19cf6741.png","IBM","International Business Machines","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FIBM_251be740.jpg","Open Source @ IBM",null,"ibmdeveloper","https:\u002F\u002Fwww.ibm.com\u002Fopensource\u002F","https:\u002F\u002Fgithub.com\u002FIBM",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",97.9,{"name":90,"color":91,"percentage":92},"Dockerfile","#384d54",2.1,1040,159,"2026-04-01T14:18:55","Apache-2.0","未说明","不需要，当前镜像仅支持 CPU","8GB",{"notes":101,"python":97,"dependencies":102},"1. 当前 Docker 镜像仅支持 CPU 运行，暂不支持 GPU；2. x86-64\u002FAMD64 架构需支持 AVX 指令集；3. Docker 默认内存限制为 2GB，需在设置中调整为 8GB 以防止容器被杀；4. 输入图像建议为 PNG 格式，分辨率在 100x100 至 500x500 之间；5. 
模型文件会在构建时自动下载。",[103,104],"tensorflow","flask",[14,13,15],[107,108,109,110,111,112,113,114,103],"computer-vision","machine-learning","ai","neural-network","ibm","docker-image","codait","machine-learning-models","2026-03-27T02:49:30.150509","2026-04-06T06:44:02.504517",[118,123,128,133,138,143],{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},3644,"Docker 容器运行时出现 `std::bad_alloc` 内存溢出错误怎么办？","默认 2GB 内存不足以运行此模型。建议增加 Docker 容器的内存限制至 4GB 以上。此外，可以尝试调整 TensorFlow 配置，在 `max-image-resolution-enhancer\u002Fcore\u002Fsrgan_controller.py` 第 63 行附近设置 `config.gpu_options.allow_growth = True` 并将并行线程数设为 1。","https:\u002F\u002Fgithub.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer\u002Fissues\u002F28",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},3645,"在 Apple Silicon (M1\u002FM2) Mac 上运行报错如何解决？","目前 TensorFlow 库尚未支持 M1 芯片组 (arm64\u002FARMv8)。运行时会报 SSE4.1 指令集不可用错误。暂时无法直接在 M1 架构上运行此镜像。","https:\u002F\u002Fgithub.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer\u002Fissues\u002F43",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},3646,"Docker 构建时出现 `wget: unable to resolve host address` 错误？","这通常是防火墙或 DNS 配置问题。请检查 Docker 安装后的 DNS 设置（参考 Docker 官方 Linux 后安装指南），或尝试解决防火墙配置以允许解析该主机地址。","https:\u002F\u002Fgithub.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer\u002Fissues\u002F37",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},3647,"Docker 构建过程中 `wget` 无法解析主机地址，如何绕过网络问题？","可以修改 `Dockerfile` 第 19 行的 `ARG model_bucket` 变量，将其更改为直接访问存储的链接：`https:\u002F\u002Fs3.us.cloud-object-storage.appdomain.cloud\u002Fcodait-cos-max\u002Fmax-image-resolution-enhancer\u002F1.0.0`。","https:\u002F\u002Fgithub.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer\u002Fissues\u002F54",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},3648,"Docker 构建时提示 `sha512sum` 校验失败或文件不存在？","这是因为模型文件下载路径失效。请更新 `Dockerfile` 中的 `ARG model_bucket` 
链接，例如改为：`https:\u002F\u002Fcodait-cos-max.s3.us.cloud-object-storage.appdomain.cloud\u002Fmax-image-resolution-enhancer\u002F1.0.0`。","https:\u002F\u002Fgithub.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer\u002Fissues\u002F52",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},3649,"Docker 镜像构建失败，且遇到链接重定向问题？","该问题已在 PR #55 中修复。如果仍遇到问题，请确保使用正确的模型下载文本 URL，而不是嵌入链接。维护者确认问题已解决。","https:\u002F\u002Fgithub.com\u002FIBM\u002FMAX-Image-Resolution-Enhancer\u002Fissues\u002F50",[149,154,159],{"id":150,"version":151,"summary_zh":152,"released_at":153},103236,"v1.1.0","The 1.1.0 release follows some structural updates, a template rework, and a significant update to MAX-Base.","2019-06-18T16:35:53",{"id":155,"version":156,"summary_zh":157,"released_at":158},103237,"v1.0.1","Updated MAX base to 1.1.3.","2019-05-31T23:30:56",{"id":160,"version":161,"summary_zh":162,"released_at":163},103238,"v1.0.0","First release of this model","2019-03-29T18:47:00"]