[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mafiosnik777--enhancr":3,"tool-mafiosnik777--enhancr":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":78,"owner_location":79,"owner_email":78,"owner_twitter":78,"owner_website":78,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":102,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":117,"github_topics":118,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":139,"updated_at":140,"faqs":141,"releases":172},1370,"mafiosnik777\u002Fenhancr","enhancr","Video Frame Interpolation & Super Resolution using NVIDIA's TensorRT & Tencent's NCNN inference, beautifully crafted and packaged into a single app ","enhancr 是一款把“AI 补帧 + AI 超分”装进一个漂亮窗口的小软件。它能把 24 fps 的老片插值成 60 fps 的丝滑画面，也能把 720p 视频放大到 4K 并修复细节，全程只靠 AI 模型自动完成。  \n过去想实现同样效果，要么得折腾 Docker、WSL、PyTorch 环境，要么显卡只认 NVIDIA；enhancr 直接打包好 TensorRT（N 卡极速）和 NCNN（A 卡、Apple Silicon 也能跑），双击安装即可开用。  \n如果你是想给 vlog、动画或老电影提质的普通用户，或是需要批量产出高帧率高分辨率素材的视频设计师、影视后期，enhancr 都能省下大量配置时间。内置场景检测、实时预览、批量队列、自定义 ESRGAN 模型等功能，让“一键增强”既简单又可控。","\n# ![heading-icon](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_1a5deb3230b7.png)\n\n**enhancr** is an **elegant and easy to use** GUI for **Video Frame Interpolation** and **Video 
Upscaling** which takes advantage of artificial intelligence - built using **node.js** and **Electron**. It was created **to enhance the user experience** for anyone interested in enhancing video footage using artificial intelligence. The GUI was **designed to provide a stunning experience** powered by state-of-the-art technologies **without feeling clunky and outdated** like other alternatives.\n\n![gui-preview-image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_44262e9f3998.png)\n\nIt features blazing-fast **TensorRT** inference by NVIDIA, which can speed up AI processes **significantly**. **Pre-packaged, without the need to install Docker or WSL** (Windows Subsystem for Linux) - and **NCNN** inference by Tencent which is lightweight and runs on **NVIDIA**, **AMD** and even **Apple Silicon** - in contrast to the mammoth of an inference PyTorch is, which **only runs on NVIDIA GPUs**.\n\n# Features\n- Encodes video on the fly and reads frames from source video, without the need of extracting frames or loading into memory\n- Queue for batch processing\n- Live Preview integrated in the UI, without impact on performance\n- Allows chaining of interpolation, upscaling & restoration\n- Offers the possibility to trim videos before processing\n- Can load custom ESRGAN models in onnx & pth format and converts them automatically\n- Has Scene Detection built-in, to skip interpolation on scene change frames & mitigate artifacts\n- Color Themes for user customization\n- Discord Rich Presence, to show all your friends progress, current speed & what you're currently enhancing\n- Realtime Player (assuming you have a powerful enough GPU) with perfect support for audio, subtitles, fonts, attachments etc.\n- ... 
and much more\n\n# Installation\n\n**Release 0.9.9 features a free version 🎉**\nhttps:\u002F\u002Fdl.enhancr.app\u002Fsetup\u002Fenhancr-setup-free-0.9.9.exe\n\nTo ensure that you have the most recent version of the software and all necessary dependencies, we recommend downloading the installer from [Patreon](https:\u002F\u002Fwww.patreon.com\u002Fmafiosnik). \nPlease note that builds and an embeddable python environment for the **Pro** version **are not** provided through this repository.\n\n![installer](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_8a9d7751e73b.png)\n\n# Built-in engines\n\n## Interpolation\n\n>**RIFE (NCNN)** - [megvii-research](https:\u002F\u002Fgithub.com\u002Fmegvii-research)\u002F**[ECCV2022-RIFE](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FECCV2022-RIFE)** - powered by [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VapourSynth-RIFE-NCNN-Vulkan](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVapourSynth-RIFE-NCNN-Vulkan)**\n\n>**RIFE (TensorRT)** - [megvii-research](https:\u002F\u002Fgithub.com\u002Fmegvii-research)\u002F**[ECCV2022-RIFE](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FECCV2022-RIFE)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** & [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VSGAN-tensorrt-docker](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVSGAN-tensorrt-docker)**\n\n>**GMFSS - Union (PyTorch\u002FTensorRT)** - [98mxr](https:\u002F\u002Fgithub.com\u002F98mxr)\u002F**[GMFSS_Union](https:\u002F\u002Fgithub.com\u002F98mxr\u002FGMFSS_union)** - powered by [HolyWu](https:\u002F\u002Fgithub.com\u002FHolyWu)\u002F**[vs-gmfss_union](https:\u002F\u002Fgithub.com\u002FHolyWu\u002Fvs-gmfss_union)**\n\n>**GMFSS - Fortuna (PyTorch\u002FTensorRT)** - 
[98mxr](https:\u002F\u002Fgithub.com\u002F98mxr)\u002F**[GMFSS_Fortuna](https:\u002F\u002Fgithub.com\u002F98mxr\u002FGMFSS_Fortuna)** - powered by [HolyWu](https:\u002F\u002Fgithub.com\u002FHolyWu)\u002F**[vs-gmfss_fortuna](https:\u002F\u002Fgithub.com\u002FHolyWu\u002Fvs-gmfss_fortuna)**\n\n>**CAIN (NCNN)** - [myungsub](https:\u002F\u002Fgithub.com\u002Fmyungsub)\u002F**[CAIN](https:\u002F\u002Fgithub.com\u002Fmyungsub\u002FCAIN)** - powered by [mafiosnik](https:\u002F\u002Fgithub.com\u002Fmafiosnik777)\u002F**vsynth-cain-NCNN-vulkan** (unreleased)\n\n>**CAIN (DirectML)** - [myungsub](https:\u002F\u002Fgithub.com\u002Fmyungsub)\u002F**[CAIN](https:\u002F\u002Fgithub.com\u002Fmyungsub\u002FCAIN)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**CAIN (TensorRT)** - [myungsub](https:\u002F\u002Fgithub.com\u002Fmyungsub)\u002F**[CAIN](https:\u002F\u002Fgithub.com\u002Fmyungsub\u002FCAIN)** - powered by [HubertSotnowski](https:\u002F\u002Fgithub.com\u002FHubertSotnowski)\u002F**[cain-TensorRT](https:\u002F\u002Fgithub.com\u002FHubertSotnowski\u002Fcain-TensorRT)**\n\n\n## Upscaling\n\n>**ShuffleCUGAN (NCNN)** - [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VSGAN-tensorrt-docker](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVSGAN-tensorrt-docker)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**ShuffleCUGAN (TensorRT)** - [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VSGAN-tensorrt-docker](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVSGAN-tensorrt-docker)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**RealESRGAN (NCNN)** - 
[xinntao](https:\u002F\u002Fgithub.com\u002Fxinntao)\u002F**[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**RealESRGAN (DirectML)** - [xinntao](https:\u002F\u002Fgithub.com\u002Fxinntao)\u002F**[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**RealESRGAN (TensorRT)** - [xinntao](https:\u002F\u002Fgithub.com\u002Fxinntao)\u002F**[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**RealCUGAN (TensorRT)** - [bilibili](https:\u002F\u002Fgithub.com\u002Fbilibili)\u002F**[ailab\u002FReal-CUGAN](https:\u002F\u002Fgithub.com\u002Fbilibili\u002Failab\u002Ftree\u002Fmain\u002FReal-CUGAN)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**SwinIR (TensorRT)** - [JingyunLiang](https:\u002F\u002Fgithub.com\u002FJingyunLiang)\u002F**[SwinIR](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR)** - powered by [mafiosnik777](https:\u002F\u002Fgithub.com\u002Fmafiosnik777)\u002F**SwinIR-TensorRT** (unreleased)\n\n## Restoration\n\n>**DPIR (DirectML)** - [cszn](https:\u002F\u002Fgithub.com\u002Fcszn)\u002F**[DPIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FDPIR)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**DPIR (TensorRT)** - 
[cszn](https:\u002F\u002Fgithub.com\u002Fcszn)\u002F**[DPIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FDPIR)** - powered by [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)**\n\n>**SCUNet (TensorRT)** - [cszn](https:\u002F\u002Fgithub.com\u002Fcszn)\u002F**[SCUNet](https:\u002F\u002Fgithub.com\u002Fcszn\u002FSCUNet)** - powered by [mafiosnik777](https:\u002F\u002Fgithub.com\u002Fmafiosnik777)\u002F**SCUNet-TensorRT** (unreleased)\n\n# System Requirements\n\n\n\n#### Minimum:\n - Dual Core CPU with Hyperthreading enabled\n - Vulkan-capable graphics processor for inference with NCNN \u002F DirectX 12-capable graphics processor for inference with DirectML\n - Windows 10\n\n#### Recommended:\n\n-   Quad Core Intel Kaby Lake\u002FAMD Ryzen or newer with Hyperthreading enabled\n-   16 GB RAM\n-   NVIDIA 2000 Series (Turing) for TensorRT\n-   Windows 11\n\n\u003Csub>Sidenote: Starting with TensorRT 8.6, support for 2nd generation Kepler and Maxwell (900 Series and below) has been dropped. 
You will need at least a Pascal GPU (1000 series and up) and CUDA 12.0 + driver version >= 525.xx to run inference using TensorRT.\u003C\u002Fsub>\n\n# macOS and Linux Support\n\nThe GUI was created with cross-platform compatibility in mind and is compatible with both operating systems.\n**Our primary focus at the moment is ensuring a stable and fully functioning solution for Windows users, but support for Linux and macOS will be made available with the 1.0 update.**\n\n![enhancr-macos](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_81ccac3ba234.png)\n\nSupport for Apple Silicon is planned as well, ~~but I currently only have an Intel Macbook Pro available for testing~~ i'll get a Apple Silicon instance on Amazon AWS to implement this, in time for the 1.0 release.\n\n# Benchmarks\n\nInput size: 1920x1080 @ 2x\n\n|| RTX 2060S \u003Csup>1\u003C\u002Fsup> | RTX 3070 \u003Csup>2\u003C\u002Fsup>| RTX A4000 \u003Csup>3\u003C\u002Fsup> | RTX 3090 Ti \u003Csup>4\u003C\u002Fsup> | RTX 4090 \u003Csup>5\u003C\u002Fsup>\n|--|--|--|--|--|--|\n| RIFE \u002F rife-v4.6 (NCNN) | 53.78 fps | 64.08 fps | 80.56 fps | 86.24 fps | 136.13 fps |\n| RIFE \u002F rife-v4.6 (TensorRT) | 70.34 fps | 94.63 fps | 86.47 fps | 122.68 fps | 170.91 fps |\n| CAIN \u002F cvp-v6 (NCNN) | 9.42 fps | 10.56 fps | 13.42 fps | 17.36 fps | 44.87 fps |\n| CAIN \u002F cvp-v6 (TensorRT) | 45.41 fps | 63.84 fps | 81.23 fps | 112.87 fps | 183.46 fps |\n| GMFSS \u002F Up (PyTorch) | - | - | 4.32 fps | - | 16.35 fps |\n| GMFSS \u002F Union (PyTorch) | - | - | 3.68 fps | - | 13.93 fps |\n| GMFSS \u002F Union (TensorRT) | - | - | 6.79 fps | - | - |\n| RealESRGAN \u002F animevideov3 (TensorRT) | 7.64 fps | 9.10 fps | 8.49 fps | 18.66 fps | 38.67 fps |\n| RealCUGAN (TensorRT) | - | - | 5.96 fps | - | - |\n| SwinIR (PyTorch) | - | - | 0.43 fps | - | - |\n| DPIR \u002F Denoise (TensorRT) | 4.38 fps | 6.45 fps | 5.39 fps | 11.64 fps | 27.41 fps |\n\n\u003Csup>1\u003C\u002Fsup> 
\u003Csub>Ryzen 5 3600X - Gainward RTX 2060 Super @ Stock\u003C\u002Fsub>\n\n\u003Csup>2\u003C\u002Fsup> \u003Csub>Ryzen 7 3800X - Gigabyte RTX 3070 Eagle OC @ Stock\u003C\u002Fsub>\n\n\u003Csup>3\u003C\u002Fsup> \u003Csub>Ryzen 5 3600X - PNY RTX A4000 @ Stock \u003C\u002Fsub>\n\n\u003Csup>4\u003C\u002Fsup> \u003Csub>i9 12900KF - ASUS RTX 3090 Ti Strix OC @ ~2220MHz\u003C\u002Fsub>\n\n\u003Csup>5\u003C\u002Fsup> \u003Csub>Ryzen 9 5950X - ASUS RTX 4090 Strix OC - @ ~3100MHz with curve to achieve maximum performance\u003C\u002Fsub>\n\n# Troubleshooting and FAQ (Frequently Asked Questions)\n\nThis section has moved to the wiki: https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fwiki\n\nCheck it out to learn more about getting the most out of enhancr or how to fix various problems.\n\n# Inferences\n\n[TensorRT](https:\u002F\u002Fdeveloper.nvidia.com\u002Ftensorrt) is a highly optimized AI inference runtime for NVIDIA GPUs. It uses benchmarking to find the optimal kernel to use for your specific GPU, and there is an extra step to build an engine on the machine you are going to run the AI on. However, the resulting performance is also typically _much much_ better than any PyTorch or NCNN implementation.\n\n[NCNN](https:\u002F\u002Fgithub.com\u002FTencent\u002Fncnn) is a high-performance neural network inference computing framework optimized for mobile platforms. NCNN does not have any third party dependencies. It is cross-platform, and runs faster than all known open source frameworks on most major platforms. It supports NVIDIA, AMD, Intel Graphics and even Apple Silicon.\nNCNN is currently being used in many Tencent applications, such as QQ, Qzone, WeChat, Pitu and so on.\n\n# Supporting this project\n\nI would be grateful if you could show your support for this project by contributing on [Patreon](https:\u002F\u002Fwww.patreon.com\u002Fmafiosnik) or through a donation on [PayPal](https:\u002F\u002Fwww.paypal.com\u002Fpaypalme\u002Fmafiosnik). 
Your support will help to accelerate development and bring more updates to the project. Additionally, if you have the skills, you can also contribute by opening a pull request. Regardless of the form of support you choose to give, know that it is greatly appreciated.\n\n# Plans for the future\n\nI am continuously working to improve the codebase, including addressing any inconsistencies that may have arisen due to time constraints. Regular updates will be released, including new features, bug fixes, and the incorporation of new technologies and models as they become available. Thank you for your understanding and support.\n\n# Credits\n\n>Our player depends on [mpv](https:\u002F\u002Fgithub.com\u002Fmpv-player\u002Fmpv) and [ModernX](https:\u002F\u002Fgithub.com\u002Fcyl0\u002FModernX) for the OSC.\n\n>Thanks to [HubertSontowski](https:\u002F\u002Fgithub.com\u002FHubertSotnowski) and [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar) for helping out with implementing CAIN.\n\n# Join the discord\n\nTo interact with the community, share your results or to get help when encountering any problems visit our [discord](https:\u002F\u002Fdsc.gg\u002Fenhancr). 
Previews of upcoming versions are gonna be showcased on there as well.\n","# ![heading-icon](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_1a5deb3230b7.png)\n\n**enhancr** 是一款 **优雅且易于使用的 GUI 工具**，专为 **视频帧插值** 和 **视频超分** 而设计，充分利用了人工智能技术——该工具基于 **Node.js** 和 **Electron** 构建。它旨在 **提升用户体验**，为所有对利用人工智能增强视频素材感兴趣的人士提供便利。这款 GUI 采用 **最先进的技术** 打造而成，力求为用户带来惊艳的使用体验，同时又不会像其他替代方案那样显得笨重或过时。\n\n![gui-preview-image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_44262e9f3998.png)\n\n它搭载了 NVIDIA 高速的 **TensorRT 推理引擎**，能够大幅加速 AI 处理流程。此外，该工具 **已预先打包**，无需安装 Docker 或 WSL（适用于 Linux 的 Windows 子系统）——并且集成了腾讯的 **NCNN 推理引擎**，其轻量化设计使其能够在 **NVIDIA**、**AMD**，甚至 **Apple Silicon** 系统上运行——而相比之下，PyTorch 则是一个庞然大物般的推理框架，且只能在 NVIDIA GPU 上运行。\n\n# 功能特性\n- 可实时编码视频，并从源视频中逐帧读取数据，无需提取帧或将数据加载到内存中。\n- 提供批量处理队列功能。\n- UI 中内置实时预览功能，对性能无任何影响。\n- 支持插值、超分与修复的链式处理。\n- 允许在处理前对视频进行裁剪。\n- 可以加载自定义的 ESRGAN 模型，格式包括 ONNX 和 PTH，并自动完成模型转换。\n- 内置场景检测功能，可跳过场景变化时的插值操作，有效减少伪影的产生。\n- 提供多种颜色主题，方便用户自定义界面风格。\n- 支持 Discord Rich Presence（富状态展示），可向所有好友展示您的进度、当前速度以及正在处理的内容。\n- 实时播放器（前提是您的 GPU 性能足够强大），完美支持音频、字幕、字体、附件等多种内容。\n- ……以及更多强大的功能！\n\n# 安装指南\n\n**0.9.9 版本提供免费版 🎉**\nhttps:\u002F\u002Fdl.enhancr.app\u002Fsetup\u002Fenhancr-setup-free-0.9.9.exe\n\n为确保您使用的是最新版本的软件及所有必要的依赖项，我们建议您从 [Patreon](https:\u002F\u002Fwww.patreon.com\u002Fmafiosnik) 下载安装包。\n请注意，**Pro 版本** 的构建文件及可嵌入的 Python 环境并未通过本仓库提供。\n\n![installer](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_8a9d7751e73b.png)\n\n# 内置引擎\n\n## 插值\n\n>**RIFE（NCNN）** - [megvii-research](https:\u002F\u002Fgithub.com\u002Fmegvii-research)\u002F**[ECCV2022-RIFE](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FECCV2022-RIFE)** - 由 [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VapourSynth-RIFE-NCNN-Vulkan](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVapourSynth-RIFE-NCNN-Vulkan)** 提供支持。\n\n>**RIFE（TensorRT）** - 
[megvii-research](https:\u002F\u002Fgithub.com\u002Fmegvii-research)\u002F**[ECCV2022-RIFE](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FECCV2022-RIFE)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 与 [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VSGAN-tensorrt-docker](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVSGAN-tensorrt-docker)** 提供支持。\n\n>**GMFSS - Union（PyTorch\u002FTensorRT）** - [98mxr](https:\u002F\u002Fgithub.com\u002F98mxr)\u002F**[GMFSS_Union](https:\u002F\u002Fgithub.com\u002F98mxr\u002FGMFSS_union)** - 由 [HolyWu](https:\u002F\u002Fgithub.com\u002FHolyWu)\u002F**[vs-gmfss_union](https:\u002F\u002Fgithub.com\u002FHolyWu\u002Fvs-gmfss_union)** 提供支持。\n\n>**GMFSS - Fortuna（PyTorch\u002FTensorRT）** - [98mxr](https:\u002F\u002Fgithub.com\u002F98mxr)\u002F**[GMFSS_Fortuna](https:\u002F\u002Fgithub.com\u002F98mxr\u002FGMFSS_Fortuna)** - 由 [HolyWu](https:\u002F\u002Fgithub.com\u002FHolyWu)\u002F**[vs-gmfss_fortuna](https:\u002F\u002Fgithub.com\u002FHolyWu\u002Fvs-gmfss_fortuna)** 提供支持。\n\n>**CAIN（NCNN）** - [myungsub](https:\u002F\u002Fgithub.com\u002Fmyungsub)\u002F**[CAIN](https:\u002F\u002Fgithub.com\u002Fmyungsub\u002FCAIN)** - 由 [mafiosnik](https:\u002F\u002Fgithub.com\u002Fmafiosnik777)\u002F**vsynth-cain-NCNN-vulkan** 提供支持（尚未发布）。\n\n>**CAIN（DirectML）** - [myungsub](https:\u002F\u002Fgithub.com\u002Fmyungsub)\u002F**[CAIN](https:\u002F\u002Fgithub.com\u002Fmyungsub\u002FCAIN)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**CAIN（TensorRT）** - [myungsub](https:\u002F\u002Fgithub.com\u002Fmyungsub)\u002F**[CAIN](https:\u002F\u002Fgithub.com\u002Fmyungsub\u002FCAIN)** - 由 
[HubertSotnowski](https:\u002F\u002Fgithub.com\u002FHubertSotnowski)\u002F**[cain-TensorRT](https:\u002F\u002Fgithub.com\u002FHubertSotnowski\u002Fcain-TensorRT)** 提供支持。\n\n## 超分\n\n>**ShuffleCUGAN（NCNN）** - [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VSGAN-tensorrt-docker](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVSGAN-tensorrt-docker)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**ShuffleCUGAN（TensorRT）** - [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar)\u002F**[VSGAN-tensorrt-docker](https:\u002F\u002Fgithub.com\u002Fstyler00dollar\u002FVSGAN-tensorrt-docker)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**RealESRGAN（NCNN）** - [xinntao](https:\u002F\u002Fgithub.com\u002Fxinntao)\u002F**[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**RealESRGAN（DirectML）** - [xinntao](https:\u002F\u002Fgithub.com\u002Fxinntao)\u002F**[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**RealESRGAN（TensorRT）** - [xinntao](https:\u002F\u002Fgithub.com\u002Fxinntao)\u002F**[Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**RealCUGAN（TensorRT）** - 
[bilibili](https:\u002F\u002Fgithub.com\u002Fbilibili)\u002F**[ailab\u002FReal-CUGAN](https:\u002F\u002Fgithub.com\u002Fbilibili\u002Failab\u002Ftree\u002Fmain\u002FReal-CUGAN)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**SwinIR（TensorRT）** - [JingyunLiang](https:\u002F\u002Fgithub.com\u002FJingyunLiang)\u002F**[SwinIR](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR)** - 由 [mafiosnik777](https:\u002F\u002Fgithub.com\u002Fmafiosnik777)\u002F**SwinIR-TensorRT** 提供支持（尚未发布）。\n\n## 修复\n\n>**DPIR（DirectML）** - [cszn](https:\u002F\u002Fgithub.com\u002Fcszn)\u002F**[DPIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FDPIR)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**DPIR（TensorRT）** - [cszn](https:\u002F\u002Fgithub.com\u002Fcszn)\u002F**[DPIR](https:\u002F\u002Fgithub.com\u002Fcszn\u002FDPIR)** - 由 [AmusementClub](https:\u002F\u002Fgithub.com\u002FAmusementClub)\u002F**[vs-mlrt](https:\u002F\u002Fgithub.com\u002FAmusementClub\u002Fvs-mlrt)** 提供支持。\n\n>**SCUNet（TensorRT）** - [cszn](https:\u002F\u002Fgithub.com\u002Fcszn)\u002F**[SCUNet](https:\u002F\u002Fgithub.com\u002Fcszn\u002FSCUNet)** - 由 [mafiosnik777](https:\u002F\u002Fgithub.com\u002Fmafiosnik777)\u002F**SCUNet-TensorRT** 提供支持（尚未发布）。\n\n# 系统要求\n\n\n\n#### 最低配置：\n- 支持超线程技术的双核 CPU\n- 具备 Vulkan 功能的图形处理器，用于使用 NCNN 进行推理；或具备 DirectX 12 功能的图形处理器，用于使用 DirectML 进行推理\n- Windows 10\n\n#### 推荐配置：\n\n- 四核 Intel Kaby Lake\u002FAMD Ryzen 或更新型号，且支持超线程技术\n- 16 GB 内存\n- NVIDIA 2000 系列（Turing）GPU，适用于 TensorRT\n- Windows 11\n\n\u003Csub>附注：自 TensorRT 8.6 版本起，已不再支持第二代 Kepler 和 Maxwell（900 系列及以下）架构。若要使用 TensorRT 进行推理，您至少需要一台 Pascal 架构的 GPU（1000 系列及以上），并配备 CUDA 12.0 及以上版本、驱动程序版本 ≥ 525.xx。\u003C\u002Fsub>\n\n# macOS 与 Linux 支持\n\n本 GUI 在设计时充分考虑了跨平台兼容性，可同时兼容两种操作系统。\n\n**目前，我们的主要目标是为 Windows 
用户提供稳定且功能完备的解决方案；不过，Linux 和 macOS 的支持将在 1.0 版本更新中逐步推出。**\n\n![enhancr-macos](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_readme_81ccac3ba234.png)\n\n我们还计划支持 Apple Silicon，~~但目前我手中仅有 Intel Macbook Pro 用于测试~~ 我将尽快在 Amazon AWS 上搭建一套 Apple Silicon 实例，以便在 1.0 版本发布前完成相关部署。\n\n# 性能基准测试\n\n输入尺寸：1920x1080 @ 2倍分辨率\n\n|| RTX 2060S \u003Csup>1\u003C\u002Fsup> | RTX 3070 \u003Csup>2\u003C\u002Fsup>| RTX A4000 \u003Csup>3\u003C\u002Fsup> | RTX 3090 Ti \u003Csup>4\u003C\u002Fsup> | RTX 4090 \u003Csup>5\u003C\u002Fsup>\n|--|--|--|--|--|--|\n| RIFE \u002F rife-v4.6 (NCNN) | 53.78 fps | 64.08 fps | 80.56 fps | 86.24 fps | 136.13 fps |\n| RIFE \u002F rife-v4.6 (TensorRT) | 70.34 fps | 94.63 fps | 86.47 fps | 122.68 fps | 170.91 fps |\n| CAIN \u002F cvp-v6 (NCNN) | 9.42 fps | 10.56 fps | 13.42 fps | 17.36 fps | 44.87 fps |\n| CAIN \u002F cvp-v6 (TensorRT) | 45.41 fps | 63.84 fps | 81.23 fps | 112.87 fps | 183.46 fps |\n| GMFSS \u002F Up (PyTorch) | - | - | 4.32 fps | - | 16.35 fps |\n| GMFSS \u002F Union (PyTorch) | - | - | 3.68 fps | - | 13.93 fps |\n| GMFSS \u002F Union (TensorRT) | - | - | 6.79 fps | - | - |\n| RealESRGAN \u002F animevideov3 (TensorRT) | 7.64 fps | 9.10 fps | 8.49 fps | 18.66 fps | 38.67 fps |\n| RealCUGAN (TensorRT) | - | - | 5.96 fps | - | - |\n| SwinIR (PyTorch) | - | - | 0.43 fps | - | - |\n| DPIR \u002F Denoise (TensorRT) | 4.38 fps | 6.45 fps | 5.39 fps | 11.64 fps | 27.41 fps |\n\n\u003Csup>1\u003C\u002Fsup> \u003Csub>Ryzen 5 3600X - Gainward RTX 2060 Super @ 标准配置\u003C\u002Fsub>\n\n\u003Csup>2\u003C\u002Fsup> \u003Csub>Ryzen 7 3800X - Gigabyte RTX 3070 Eagle OC @ 标准配置\u003C\u002Fsub>\n\n\u003Csup>3\u003C\u002Fsup> \u003Csub>Ryzen 5 3600X - PNY RTX A4000 @ 标准配置\u003C\u002Fsub>\n\n\u003Csup>4\u003C\u002Fsup> \u003Csub>i9 12900KF - ASUS RTX 3090 Ti Strix OC @ ~2220MHz\u003C\u002Fsub>\n\n\u003Csup>5\u003C\u002Fsup> \u003Csub>Ryzen 9 5950X - ASUS RTX 4090 Strix OC - @ ~3100MHz，通过优化以实现最高性能\u003C\u002Fsub>\n\n# 
故障排除与常见问题解答\n\n本部分现已迁移至 wiki：https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fwiki\n\n欢迎前往查看，了解更多关于如何充分发挥 enhancr 的潜力，以及如何解决各类问题的详细信息。\n\n# 推理\n\n[TensorRT](https:\u002F\u002Fdeveloper.nvidia.com\u002Ftensorrt) 是一款专为 NVIDIA GPU 优化的高效 AI 推理运行时。它通过基准测试来为您的特定 GPU 寻找最优的内核，并需要在将要运行 AI 的机器上额外执行一步来构建引擎。不过，其最终性能通常会远超任何 PyTorch 或 NCNN 的实现方案。\n\n[NCNN](https:\u002F\u002Fgithub.com\u002FTencent\u002Fncnn) 是一款专为移动平台优化的高性能神经网络推理计算框架。NCNN 不依赖任何第三方库，支持跨平台，在大多数主流平台上均比所有已知的开源框架运行得更快。NCNN 支持 NVIDIA、AMD、Intel 显卡，甚至包括 Apple Silicon。\n目前，NCNN 已被广泛应用于腾讯旗下众多应用中，例如 QQ、Qzone、微信、Pitu 等。\n\n# 支持本项目\n\n如果您能通过 [Patreon](https:\u002F\u002Fwww.patreon.com\u002Fmafiosnik) 或通过 [PayPal](https:\u002F\u002Fwww.paypal.com\u002Fpaypalme\u002Fmafiosnik) 捐赠的方式支持本项目，我们将不胜感激。您的支持将有助于加速开发进程，并为项目带来更多更新与改进。此外，如果您具备相关技能，也可以通过提交拉取请求来贡献力量。无论您选择以何种形式给予支持，我们都深表感谢！\n\n# 未来规划\n\n我将持续致力于优化代码库，包括及时解决因时间限制而产生的各种不一致问题。我们将会定期发布新版本，包含全新功能、漏洞修复，以及在新技术和新模型不断涌现之际将其纳入项目中。感谢您的理解与支持。\n\n# 致谢\n\n>我们的播放器依赖于 [mpv](https:\u002F\u002Fgithub.com\u002Fmpv-player\u002Fmpv) 和 [ModernX](https:\u002F\u002Fgithub.com\u002Fcyl0\u002FModernX) 来实现 OSC 功能。\n\n>感谢 [HubertSontowski](https:\u002F\u002Fgithub.com\u002FHubertSotnowski) 和 [styler00dollar](https:\u002F\u002Fgithub.com\u002Fstyler00dollar) 在实现 CAIN 方面提供的帮助。\n\n# 加入 Discord 社区\n\n如需与社区互动、分享您的成果，或在遇到问题时寻求帮助，请访问我们的 [Discord](https:\u002F\u002Fdsc.gg\u002Fenhancr)。届时，我们也会在该平台上展示即将发布的版本预览。","# enhancr 快速上手指南（Windows 版）\n\n## 环境准备\n\n| 项目 | 最低要求 | 推荐配置 |\n|---|---|---|\n| 操作系统 | Windows 10 | Windows 11 |\n| CPU | 双核 + 超线程 | 四核 Intel Kaby Lake \u002F AMD Ryzen 及以上 |\n| 内存 | 8 GB | 16 GB |\n| GPU | 支持 Vulkan 或 DirectX 12 | NVIDIA RTX 2000 系列及以上（TensorRT 加速） |\n| 驱动 | Vulkan 1.2+ \u002F DX12 | NVIDIA 驱动 ≥ 525.xx + CUDA 12.0 |\n\n> 注意：TensorRT 8.6 起不再支持 GTX 900 系列及以下显卡，需 Pascal 架构（GTX 10 系列）或更新。\n\n## 安装步骤\n\n1. 下载安装包  \n   免费版 0.9.9（无需 Patreon）：  \n   [enhancr-setup-free-0.9.9.exe](https:\u002F\u002Fdl.enhancr.app\u002Fsetup\u002Fenhancr-setup-free-0.9.9.exe)\n\n2. 双击安装  \n   安装器已内置 Python 环境与全部依赖，无需额外安装 Docker \u002F WSL。\n\n3. 
首次启动  \n   桌面快捷方式 `enhancr` → 自动完成初始化 → 进入主界面。\n\n## 基本使用\n\n### 1. 单任务示例：4K 视频补帧 + 超分\n\n1. 点击 **Add Video** 选择源文件。  \n2. 右侧 **Pipeline** 依次勾选：  \n   - Interpolation → 选择 `RIFE (TensorRT)`  \n   - Upscaling → 选择 `RealESRGAN (TensorRT)`  \n3. 设置输出目录 → 点击 **Start**。  \n4. 实时预览窗口可查看进度，处理完自动保存。\n\n### 2. 批量任务\n\n1. 主界面 **Queue** → **Add Folder** 导入多个视频。  \n2. 统一设置参数后点击 **Start Queue**。  \n3. 处理顺序按队列自动执行，支持暂停 \u002F 继续。\n\n### 3. 自定义 ESRGAN 模型\n\n1. 将 `.onnx` 或 `.pth` 模型放入  \n   `C:\\Users\\\u003C用户名>\\enhancr\\models\\esrgan\\custom`  \n2. 重启软件，Upscaling 下拉框即可看到自定义模型，选中即用。\n\n### 4. 快速参数说明\n\n| 参数 | 建议值 | 说明 |\n|---|---|---|\n| Interpolation Factor | 2× \u002F 4× | 帧率倍增倍数 |\n| Upscaling Factor | 2× \u002F 4× | 分辨率放大倍数 |\n| Scene Detection | ON | 避免场景切换处产生伪影 |\n| Trim | 可选 | 仅处理指定时间段 |\n\n完成！处理后的视频将保存在你设定的输出目录。","一位独立纪录片创作者需要修复一段珍贵的 90 年代低分辨率、低帧率家庭录像，以便在现代高清设备上流畅播放并参展。\n\n### 没有 enhancr 时\n- **环境配置极其繁琐**：为了运行 AI 模型，必须手动安装 Python、配置复杂的依赖库，甚至被迫学习 Docker 或 WSL，对非程序员极不友好。\n- **硬件兼容性差**：若使用高性能的 PyTorch 引擎，仅限 NVIDIA 显卡；若用其他轻量引擎，又往往牺牲画质或速度，无法灵活切换。\n- **工作流断裂且低效**：处理前需手动提取视频帧为图片序列，处理完再重新编码合成，不仅占用大量磁盘空间，还容易在场景切换处产生画面撕裂伪影。\n- **缺乏实时反馈**：无法在长时间渲染过程中预览效果，只能等到最终文件生成后才发现参数设置错误，导致时间浪费。\n\n### 使用 enhancr 后\n- **开箱即用的优雅体验**：直接运行封装好的图形界面应用，无需配置任何代码环境或子系统，内置 NCNN 和 TensorRT 引擎，一键启动。\n- **跨平台与高性能兼得**：利用 NVIDIA TensorRT 实现极速推理，或在 AMD 及 Apple Silicon 设备上通过 NCNN 流畅运行，根据硬件自动优化。\n- **流式处理与智能修复**：直接读取源视频进行“边读边算”，无需提取中间帧；内置场景检测功能，自动跳过镜头切换帧，有效消除伪影。\n- **所见即所得的交互**：集成无损性能的实时预览窗口，支持裁剪、队列批处理及自定义模型加载，让用户在渲染前即可确认最终效果。\n\nenhancr 将原本高门槛的命令行 AI 
视频修复技术，转化为普通人也能轻松驾驭的可视化工作流，极大提升了影像复原的效率与质量。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmafiosnik777_enhancr_44262e9f.png","mafiosnik777","mafiosnik","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmafiosnik777_62bcc979.jpg",null,"Germany","https:\u002F\u002Fgithub.com\u002Fmafiosnik777",[82,86,90,94],{"name":83,"color":84,"percentage":85},"JavaScript","#f1e05a",50.7,{"name":87,"color":88,"percentage":89},"Python","#3572A5",31.3,{"name":91,"color":92,"percentage":93},"SCSS","#c6538c",12.7,{"name":95,"color":96,"percentage":97},"HTML","#e34c26",5.4,800,45,"2026-03-27T14:21:12","GPL-3.0",1,"Windows 10\u002F11, macOS, Linux","可选但强烈建议；NVIDIA Pascal（GTX 1000 系列）及以上、AMD、Intel Graphics 或 Apple Silicon；TensorRT 需 NVIDIA GPU 且驱动 ≥525.xx \u002F CUDA 12.0+；显存未说明","最低：未说明；推荐：16 GB",{"notes":107,"python":108,"dependencies":109},"Windows 安装包已集成所有依赖，无需额外安装 Docker 或 WSL；macOS 与 Linux 支持将在 1.0 版推出；Apple Silicon 支持正在开发；内置实时播放器需高性能 GPU 才能流畅运行","未说明（内置可嵌入 Python 环境，无需手动安装）",[110,111,112,113,114,115,116],"Node.js","Electron","TensorRT","NCNN","DirectML","mpv","ModernX",[35,14],[119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138],"artificial-intelligence","cain","dpir","frame-interpolation","ncnn","realesrgan","rife","super-resolution","tensorrt","upscaling","vapoursynth","video-processing","electron","esrgan","gmfss","realcugan","swinir","amd","intel","nvidia","2026-03-27T02:49:30.150509","2026-04-06T10:25:01.433016",[142,147,152,157,162,167],{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},6283,"为什么 4K 视频放大时报“Not enough memory resources”或速度极慢？","4K 分辨率对显存要求极高，通常会导致 VRAM 不足而报错。建议：1) 将分辨率降到 1080p 或更低再放大；2) 减少线程数（Threads）到 1；3) 使用 2x 而非 4x 放大倍率；4) 确认显卡剩余显存 ≥ 8 GB。","https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fissues\u002F22",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},6284,"竖屏视频放大后画面被旋转 90° 且画质很差，如何解决？","这是手机竖拍视频的常见现象：手机实际以横屏录制，再通过旋转标记让播放器竖屏显示。enhancr 目前不会自动读取旋转标记，因此输出为横屏。解决方法是先用 ffmpeg 
或播放器把视频旋转回竖屏，再进行放大；或等待后续版本可能增加的自动旋转功能。","https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fissues\u002F17",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},6285,"AMD \u002F Apple Silicon 显卡能否使用硬件编码？支持哪些格式？","已支持。在设置中选择对应硬件编码器即可：AMD 选 h264_amf \u002F hevc_amf，Apple Silicon 选 h264_videotoolbox \u002F hevc_videotoolbox。格式方面现已支持 H.264、HEVC，后续计划加入 AV1。","https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fissues\u002F2",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},6286,"笔记本外接独显无法被识别，只调用核显怎么办？","1) 在 Windows「图形设置」里把 enhancr.exe 指定为“高性能”并选择外接 GPU；2) 若仍失败，可在设置里手动填写 GPU ID（0 通常是核显，1 为外接卡）；3) 确认外接卡驱动正常，且 NVENC 会话未超限（GTX 10 系最多 2 路并发）。","https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fissues\u002F16",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},6287,"恢复视频时出现「Unable to find a suitable output format for '='」怎么办？","这是早期版本 ffmpeg 参数格式错误导致的。升级到最新版 enhancr（≥ 0.9.9）即可解决；如仍报错，可尝试删除临时目录 %LOCALAPPDATA%\\Temp\\enhancr 后重试。","https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fissues\u002F21",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},6288,"免费版与 Pro 版有什么区别？如何获取 Pro 版？","免费版已包含 Real-ESRGAN、CAIN、RIFE 等主流模型，支持 DirectML\u002FNCNN 推理；Pro 版额外解锁全部 TensorRT 模型，速度可提升 3–5 倍。Pro 版需订阅作者的 Patreon（链接见 README），订阅后即可下载 Pro 构建。","https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fissues\u002F19",[173,178,183,188,193,198,203,208,213],{"id":174,"version":175,"summary_zh":176,"released_at":177},115570,"0.9.9","**enhancr Pro 0.9.9 is available for Silver and Gold Patrons on Patreon now**\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Fenhancr-pro-0-9-84064916\r\n_Free version is available to download now as well._\r\n\r\n**Changes:**\r\n\r\n- Added fp16 i\u002Fo to RIFE\r\n- Replaced deprecated cupy.cuda.compile_with_cache with cupy.RawModule\r\n- Updated to Python 3.11, TensorRT 8.6.1, cudatoolkit 12.1, PyTorch 2.1.0 nightly & Vapoursynth R62\r\n- Implemented SwinIR 
(TensorRT)\r\n- Switched webm muxer back to ffmpeg\r\n- Filtering out generic VapourSynth error now, due to people not getting that it's not an error\r\n- Added RealESRGAN (DirectML)\r\n- Removed blur on Windows 10\r\n- Rewrote Queue Backend partially\r\n- Implemented CAIN (DirectML)\r\n- Added dynamic axes to RVPV2 -> extra step for exporting to onnx isn't necessary anymore\r\n- Added DPIR (DirectML)\r\n- Updated ThemeEngine to make UI a bit more consistent\r\n- Implemented SCUNet (TensorRT)\r\n- Small UI Changes overall\r\n\r\n**Bugfixes:**\r\n\r\n- Fixed Tiling for ShuffleCUGAN (NCNN)\r\n- Changed optShape calculation for DPIR\r\n- Fixed Subtitle Check when selecting mp4 output container\r\n- Re-enabled CUDNN tactics in TensorRT (thanks nvidia)\r\n- Fixed RIFE TensorRT engine build crashing\r\n- Scaled down Preview encoder (This fixes most \"[h264\u002Fh265\u002Fav1_nvenc @ xxxxxxxxxxxxxxx] No capable devices found\" errors. See FAQ in wiki for more infos.)\r\n- Fixed RVPV2 ONNX conversion\r\n- Added padding to NCNN for resolutions non divisible by 8, due to color channels not being put together properly after padding inside model\r\n- Fixed memory leak when reading media metadata that would occur when processing a lot of items in queue\r\n- Fixed duration formatting in Media Info\r\n- Discord Rich Presence properly resets to idle state when process is completed\u002Fcanceled now\r\n- Fixed custom pth ESRGAN models\r\n- Removed fp16 i\u002Fo from RVPV1 (TRT) to make it work again","2023-06-07T14:23:27",{"id":179,"version":180,"summary_zh":181,"released_at":182},115571,"0.9.8","**enhancr Pro 0.9.8 is available for Silver and Gold Patreons now**\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Frelease-enhancr-81864462\r\n_Free version is available to download now as well._\r\n\r\n**Changes:**\r\n- Preview gets encoded on hardware now instead of software (No more performance impact for live preview)\r\n- Added realtime player with perfect support for audio, 
subtitles, fonts, attachments etc.\r\n- Updated to TensorRT 8.6.0 (Starting with TensorRT 8.6, support for 2nd generation Kepler and Maxwell (900 Series and below) has been dropped. You will need at least a Pascal GPU (1000 series and up) and CUDA 12.0 + driver version >= 525.xx to run inference using TensorRT)\r\n- Added upscale frameskip (Skips inference on duplicate frames, granting a major speed boost on animation)\r\n- Switched muxer from FFmpeg to mkvmerge\r\n- Added GMFSS Fortuna (PyTorch + TensorRT), the best Frame Interpolation Model for Animation available right now\r\n- Updated FFmpeg to 6.0 from nightly, due to hardware AV1 being in mainline now\r\n- Implemented fp16 i\u002Fo (in simple terms this means reduced bandwidth is used up, thanks to half floating-point RGBH input instead of RGBS)\r\n- Removed waifu2x, because of subpar quality\r\n- Improved UI and UX consistency when a process finishes\r\n- Added ShuffleCUGAN (NCNN + TensorRT) (Over 2x speed* of ESRGAN Compact with CUGAN-like quality at half the VRAM requirements that CUGAN would normally need, made by @styler00dollar)\r\n- Removed old unnecessary audio and subtitle extractions -> Inference starts faster now\r\n- Inference uses half of available CPU threads now, due to better performance\r\n- While building an engine from onnx, trtexec gets properly killed now when cancelling the process\r\n- Various small UI changes\r\n\r\n*This comparison only applies to TensorRT, ShuffleCUGAN is around same speed as ESRGAN Compact in NCNN\r\n\r\n**Bugfixes:**\r\n- Fixed the key shortcut to export\r\n- Dragging around text in user inputs and terminal doesn't lock up the app anymore\r\n- Fixed \"height not divisible by 2\" when preview is enabled\r\n- Switching from FFmpeg to mkvmerge should have fixed the error that muxing takes forever in some cases\r\n- Fixed a rounding error for optShapes that would occur when converting to TensorRT Engine in Restoration Tab\r\n- Progress bar text doesn't overflow anymore 
into other UI elements when input filename is too long & has Chinese\u002FJapanese characters\r\n- Fixed Thumbnail Caching issues in Queue\r\n- Custom Model selection gets saved now and persists through app restarts like any other setting\r\n- Rendering correct thumbnail when toggling queue tab now\r\n- Fixed Scene Detection and Frameskip for GMFSS (TensorRT)\r\n- Button for clearing the queue is hidden now, while a process is running\r\n- Fixed UI breaking when pressing model tab on a specific page of settings","2023-04-24T20:04:49",{"id":184,"version":185,"summary_zh":186,"released_at":187},115572,"0.9.7","**Release 0.9.7 is available for Gold Patrons on Patreon now**\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Frelease-enhancr-79850513\r\n_Silver Patrons will get Access in 3 days._\r\n\r\nSorry for the long waiting time, I got hit really bad with covid and currently focusing on a new app\u002Fproject for you guys.\r\nNext update is gonna be huge, I promise.\r\n\r\n**Changes:**\r\n- Implemented RealCUGAN (TensorRT) (Disclaimer: You might need to use tiling for 1080p on 8GB cards, else you'll run out of memory)\r\n- Implemented SwinIR (PyTorch) (It's really slow, but the quality is definitely worth it | Thanks to Bubblemint for providing the model btw)\r\n- Updated waifu2x to use the faster ncnn mlrt runtime (30% speed increase)\r\n- Added Option to export as Frame Sequence\r\n- Removed menubar that would appear when pressing \"Alt\" key\r\n- Added window controls for project page on Windows 10\r\n- Increasing UI scale by 0.10\r\n- Added GIF Input support\r\n\r\n**Bugfixes:**\r\n- Fixed FFmpeg complaining about \"height not divisible by 2\" when sampled dimensions mismatch (This usually happens with DVD sources)\r\n- WebM export should work as expected now, due to re-encoding to Opus when muxing\r\n- Fixed Playing Videos\u002FOpening Output Folder via Queue context menu\r\n- Fixed Queue context menu glitching out when having other UI scale than 
default\r\n- Added trailing ? to muxer to make errors less likely on final muxing step\r\n- Fixed Subtitle & Input path check\r\n\r\n**Don't launch enhancr from the Installer by selecting the checkbox, haven't gotten around to fixing the main app retaining Admin Privileges from the Setup yet. OAuth breaks with this. \r\nIf you did do it, just restart the app to fix.**","2023-03-10T22:15:23",{"id":189,"version":190,"summary_zh":191,"released_at":192},115573,"0.9.6","**Release 0.9.6 is available for Gold Patrons on Patreon now**\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Frelease-enhancr-78147760\r\n_Silver Patrons will get Access in 3 days._\r\n\r\nThis might be the biggest update yet, enjoy!\r\n\r\n**Changes:**\r\n- Implemented GMFSS Union TensorRT (Disclaimer: Engine takes a long time to build ~ 20mins for 1080p)\r\n- Implemented CAIN RVPv2 model, trained by sudo (best CAIN model yet)\r\n- Using Mica Material instead of acrylic blur on Windows 11 now\r\n- Overhauled UI a bit\r\n- Implemented UI rescaling (finally)\r\n- Added DRM and Patreon OAuth\r\n- Updated ffmpeg\r\n- Set float32_matmul_precision to medium for small performance boost on GMFSS Torch inference\r\n- Updated TensorRT to 8.5.2 (Unfortunately you have to reconvert all your engines, because engines serialized in older versions of trt aren't compatible)\r\n- Implemented proper Threading\r\n- Removed pre-release Popup on Startup\r\n- Added Option to clear Queue\r\n- Implemented Asar for UI (should feel a lot snappier now and less startup time)\r\n- Added option to use custom 1x models for RealESRGAN in Restoration Tab\r\n\r\n**Bugfixes:**\r\n- Fixed RealESRGAN NCNN not working due to using onnxruntime instead of ncnn wrapper\r\n- Scene Detection is working properly on GMFSS now (slight skill issue with placement of frameskip and scdetect)\r\n- TensorRT inference should show up now on systems with both integrated and dedicated Graphics\r\n- Fixed Scene Change Sensitivity value not saving and 
persisting after software restart\r\n- Timer in Discord Rich presence doesn't reset every 2 seconds while in inference step anymore\r\n- Fixed wrong dimensions when trying to convert restoration models to engine that are not DPIR\r\n- Fixed stuck progressbar and fps\u002Feta counter when cancelling process manually","2023-02-02T23:35:09",{"id":194,"version":195,"summary_zh":196,"released_at":197},115574,"0.9.5","Release 0.9.5 is available for Gold Patrons on Patreon now\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Frelease-enhancr-77207698\r\nSilver Patrons will get Access in 3 days.\r\n\r\nI hope everyone had a great start into the new year!\r\nThis update took quite a bit of time, because I needed to find a way to make CuPy and Cuda Toolkit work in a portable way, since they're needed for GMFSS.\r\nSorry for that! I can assure you the results GMFSS provides are absolutely worth the waiting time, though.\r\nI can only say it was absolute pain, can't even count how often I recompiled Torch and fought with Debuggers and CUDA the last few weeks, only for it to work on my system but not on others when I sent it out to testers. 
\r\nAnyways, it's ready now for you guys to enjoy!\r\n\r\nChanges:\r\n- Implemented true batch processing (you can drag multiple files or directories now into enhancr)\r\n- Added GMFUpSS \u002F GMFSS Union, the best Video Interpolation AI for animated content currently out there\r\n- Added automatic conversion to ONNX for pth ESRGAN models (both are supported now)\r\n- Injecting environment hook now and added own cudatoolkit with cupy and torch, making implementing almost any Video AI possible for the future\r\n- Terminal doesn't get spammed with \"Frame: xxx\u002Fxxx (x.xx fps)\" anymore, neatly displays in one line\r\n- Changed default Scene Change Sensitivity to 0.180\r\n- Added \"buildOnly\" flag to TensorRT engine conversion, reducing engine build times a bit\r\n- On request of someone a default project name was added, to prevent creating projects with blank names\r\n\r\nBugfixes:\r\n- Fixed Settings not properly saving\r\n- Correct enhancr version is being rendered in log now\r\n- Fixed error when trying to mux files with data streams\r\n- Project files on the welcome page only get rendered if they still exist now\r\n- Fixed loading spinner not properly hiding after some errors\r\n- Fixed optShapes error that would make onnx impossible to convert when disabling fp16\r\n- Fixed Issue where new CAIN engines wouldn't convert because of a formatting error\r\n\r\nBy the way we have a wiki now, if you didn't see it yet: https:\u002F\u002Fgithub.com\u002Fmafiosnik777\u002Fenhancr\u002Fwiki","2023-01-13T23:40:17",{"id":199,"version":200,"summary_zh":201,"released_at":202},115575,"0.9.4","**Hotfix 0.9.4 is available for Gold Patrons on Patreon now**\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Fhotfix-enhancr-0-75911481\r\n_I won't count this towards early access because of the broken release yesterday, so 0.9.4 will be available to Silver Patrons in 2 days._\r\n\r\n**Changes:**\r\n- Added warning message if GPU driver doesn't support required NVENC API (Update 
your drivers to at least 522.25 or newer, if you want to use Hardware Encoding)\r\n\r\n**Bugfixes:**\r\n- Uncommented function that was necessary to detect if queue has finished, effectively making UI unusable after running once (Queue works again)\r\n- Project paths get cut off again if path is too long, preventing it from wrapping out of container\r\n- Discord Rich Presence resets properly when stopping queue manually now\r\n- Dimensions get stored in queue objects now -> No more weird issues with converting engine for wrong resolution with multiple queue items\r\n- Fixed problems with integrated GPUs (Dedicated GPU gets detected properly in UI now)\r\n- Fixed Issues with Hardware Encoding","2022-12-14T15:03:16",{"id":204,"version":205,"summary_zh":206,"released_at":207},115576,"0.9.3","**enhancr 0.9.3 is out on Patreon now for Gold Patrons!**\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Frelease-enhancr-75886925\r\n_Silver Patrons will get Access in 3 days._\r\n\r\n**Changes**:\r\n- Removed TencentARC\u002FAnimeSR Upscaler, due to poor speed & not very promising results\r\n- Added RealESRGAN (NCNN) for Intel & AMD Support\r\n- Added padding to resolutions that are non-divisible by 8\r\n- Added window border on Windows 10\r\n- Implemented Hardware Encoding for AV1 (RTX 4000 Series, AMD 7000, Intel Arc) and all other formats on other GPUs\r\n- Unsupported engines on non-NVIDIA GPUs get hidden now by default\r\n- Implemented Scene Change Sensitivity Setting\r\n- Lot of under the hood changes, thanks to @IcedShake\r\n\r\n**Bugfixes:**\r\n- Fixed Patreon & Discord Buttons having broken\u002Fno links\r\n- Preview works offline now (it pulled a dependency from network before)\r\n- Removed verbose output from trtexec in Upscaling\r\n- Fixed spaces escaping path (sigh.. 
yet again)\r\n- Discord Rich Presence resets properly now on completion\r\n- Removed solid background on Welcome Page\r\n- Fixed media header border-radius clipping off when using too long filename\r\n- Queue item titles don't wrap into new line anymore effectively breaking UI\r\n- Fixed inconsistent behaviour of mediainfo container\r\n- Too long filenames don't clip out of progress bar anymore","2022-12-13T23:05:44",{"id":209,"version":210,"summary_zh":211,"released_at":212},115577,"0.9.2","**enhancr 0.9.2 is out on Patreon now!**\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Frelease-enhancr-75477003\r\n\r\n**Changes:**\r\n- Implemented TencentARC\u002FAnimeSR Upscaler (will probably get removed in next update again, due to poor speed & not very promising results)\r\n- Updated CAIN (TensorRT) - Stacking channels instead of width now, resulting in ca. 13% speed boost + no more need for PyTorch as dependency\r\n- Overhauled Discord Rich Presence, displays live fps, percentage & engine now\r\n- Streams don't get merged anymore when an error occurs + added proper error message\r\n- Added Setting to change Cache location, where files are being temporarily stored while being processed\r\n- Implemented Frameskip for Interpolation, which grants another performance improvement, especially on eastern Animation, due to containing a lot of static frames, which can be skipped by the AI now\r\n\r\n**Bugfixes:**\r\n- Fixed calling binaries from installation path with spaces (e.g. user folders with sur- and last name)\r\n- Recompiled CAIN (NCNN) filter, with proper paths -> CVPv6 (NCNN) works again\r\n- Switched from ffms2 to lsmash, to mitigate some files not being able to be read\r\n- Fixed installer not being able to add registry keys on updates\r\n\r\n(This is the last update that will be released for Silver & Gold Patrons simultaneously, 3-Day Early Access for Gold Patrons takes effect 
now)","2022-12-04T03:21:32",{"id":214,"version":215,"summary_zh":216,"released_at":217},115578,"0.9.1","Download:\r\nhttps:\u002F\u002Fwww.patreon.com\u002Fposts\u002Fhotfix-enhancr-0-75172174","2022-11-28T11:22:49"]