[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-xahidbuffon--FUnIE-GAN":3,"similar-xahidbuffon--FUnIE-GAN":60},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":20,"owner_twitter":21,"owner_website":22,"owner_url":23,"languages":24,"stars":29,"forks":30,"last_commit_at":31,"license":32,"difficulty_score":33,"env_os":34,"env_gpu":35,"env_ram":34,"env_deps":36,"category_tags":41,"github_topics":44,"view_count":33,"oss_zip_url":20,"oss_zip_packed_at":20,"status":55,"created_at":56,"updated_at":57,"faqs":58,"releases":59},862,"xahidbuffon\u002FFUnIE-GAN","FUnIE-GAN","Fast underwater image enhancement for Improved Visual Perception. #TensorFlow #PyTorch #RAL2020","FUnIE-GAN 是一款专注于水下图像增强的开源 AI 工具，旨在通过深度学习技术显著提升水下环境的视觉感知能力。在水下拍摄中，光线衰减和散射常导致图像模糊、对比度低及色彩失真，严重影响后续的目标检测与姿态估计任务。FUnIE-GAN 利用生成对抗网络（GAN）架构，有效恢复图像细节与色彩，同时兼顾处理速度。\n\n该项目提供 TensorFlow 和 PyTorch 双框架实现，并包含基于 UIQM、SSIM 等指标的质量分析模块。其独特优势在于推理速度极快，支持在 Jetson AGX Xavier 等单板计算机上实现实时运行（48+ FPS），非常适合水下机器人部署场景。无论是从事计算机视觉研究的研究人员，还是致力于水下系统开发的工程师，都能借助 FUnIE-GAN 快速构建高效的水下视觉解决方案。相关论文已发表于 IEEE RA-L 2020，代码与数据集资源齐全，便于复现与扩展。","TensorFlow and PyTorch implementations of the paper *[Fast Underwater Image Enhancement for Improved Visual Perception (RA-L 2020)](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9001231)* and other GAN-based models.\n\n![funie-fig](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxahidbuffon_FUnIE-GAN_readme_26c831ebcac2.jpg)\n\n### Resources\n- Training pipelines for **[FUnIE-GAN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9001231)** and **[UGAN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8460552) ([original repo](https:\u002F\u002Fgithub.com\u002Fcameronfabbri\u002FUnderwater-Color-Correction))**  on [TensorFlow (Keras)](\u002FTF-Keras\u002F) and [PyTorch](\u002FPyTorch\u002F) \n- Modules for image quality analysis based on **UIQM**, **SSIM**, and **PSNR** ([see Evaluation](\u002FEvaluation\u002F))\n\n| Enhanced underwater imagery | Improved detection and pose estimation  | \n|:--------------------|:--------------------|\n| ![det-enh](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxahidbuffon_FUnIE-GAN_readme_8bd9e50ed099.gif) | ![det-gif](\u002Fdata\u002Fgif2.gif)     |\n\n\n### FUnIE-GAN Features\n- Provides competitive performance for underwater image enhancement\n- Offers real-time inference on single-board computers\n\t- 48+ FPS on Jetson AGX Xavier, 25+ FPS on Jetson TX2\n\t- 148+ FPS on Nvidia GTX 1080 \n- Suitable for underwater robotic deployments for enhanced vision \n\n\n### FUnIE-GAN Pointers\n- Paper: https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9001231\n- Preprint: https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.09766.pdf\n- Datasets: http:\u002F\u002Firvlab.cs.umn.edu\u002Fresources\u002Feuvp-dataset\n- Bibliography entry for citation:\n\t```\n\t@article{islam2019fast,\n\t    title={Fast Underwater Image Enhancement for Improved Visual Perception},\n\t    author={Islam, Md Jahidul and Xia, Youya and Sattar, Junaed},\n\t    journal={IEEE Robotics and Automation Letters (RA-L)},\n\t    volume={5},\n\t    number={2},\n\t    pages={3227--3234},\n\t    year={2020},\n\t    publisher={IEEE}\n\t}\n\t```\n\n### Underwater Image Enhancement: Recent Research and Resources \n#### 2019\n| Paper  | Theme | Code   | Data 
### FUnIE-GAN Pointers
- Paper: https://ieeexplore.ieee.org/document/9001231
- Preprint: https://arxiv.org/pdf/1903.09766.pdf
- Datasets: http://irvlab.cs.umn.edu/resources/euvp-dataset
- Bibliography entry for citation:
	```
	@article{islam2019fast,
	    title={Fast Underwater Image Enhancement for Improved Visual Perception},
	    author={Islam, Md Jahidul and Xia, Youya and Sattar, Junaed},
	    journal={IEEE Robotics and Automation Letters (RA-L)},
	    volume={5},
	    number={2},
	    pages={3227--3234},
	    year={2020},
	    publisher={IEEE}
	}
	```

### Underwater Image Enhancement: Recent Research and Resources
#### 2019
| Paper | Theme | Code | Data |
|:------------------------|:---------------------|:---------------------|:---------------------|
| [Multiscale Dense-GAN](https://ieeexplore.ieee.org/abstract/document/8730425) | Residual multiscale dense block as generator | | |
| [Fusion-GAN](https://arxiv.org/abs/1906.06819) | FGAN-based model, loss function formulation | | [U45](https://github.com/IPNUISTlegal/underwater-test-dataset-U45-) |
| [UDAE](https://arxiv.org/abs/1905.09000) | U-Net denoising autoencoder | | |
| [VDSR](https://ieeexplore.ieee.org/abstract/document/8763933) | ResNet-based model, loss function formulation | | |
| [JWCDN](https://arxiv.org/abs/1907.05595) | Joint wavelength compensation and dehazing | | |
| [AWMD-Cycle-GAN](https://www.mdpi.com/2077-1312/7/7/200) | Adaptive weighting for multi-discriminator training | | |
| [WAug Encoder-Decoder](http://openaccess.thecvf.com/content_CVPRW_2019/html/AAMVEM/Jamadandi_Exemplar-based_Underwater_Image_Enhancement_Augmented_by_Wavelet_Corrected_Transforms_CVPRW_2019_paper.html) | Encoder-decoder module with wavelet pooling and unpooling | [GitHub](https://github.com/AdarshMJ/Underwater-Image-Enhancement-via-Style-Transfer) | |
| [Water-Net](https://arxiv.org/abs/1901.05495) | Dataset and benchmark | [GitHub](https://github.com/Li-Chongyi/Water-Net_Code) | [UIEB](https://li-chongyi.github.io/proj_benchmark.html) |

#### 2017-18
| Paper | Theme | Code | Data |
|:------------------------|:---------------------|:---------------------|:---------------------|
| [UGAN](https://ieeexplore.ieee.org/document/8460552) | Several GAN-based models, dataset formulation | [GitHub](https://github.com/cameronfabbri/Underwater-Color-Correction) | [Uw-imagenet](http://irvlab.cs.umn.edu/resources/) |
| [Underwater-GAN](https://link.springer.com/chapter/10.1007/978-3-030-05792-3_7) | Loss function formulation, cGAN-based model | | |
| [LAB-MSR](https://www.sciencedirect.com/science/article/pii/S0925231217305246) | Multi-scale Retinex-based framework | | |
| [Water-GAN](https://ieeexplore.ieee.org/abstract/document/7995024) | Data generation from in-air image and depth pairings | [GitHub](https://github.com/kskin/WaterGAN) | [MHL, Field data](https://github.com/kskin/WaterGAN) |
| [UIE-Net](https://ieeexplore.ieee.org/abstract/document/8296508) | CNN-based model for color correction and haze removal | | |

#### Non-deep Models
- [Sea-Thru](http://openaccess.thecvf.com/content_CVPR_2019/papers/Akkaynak_Sea-Thru_A_Method_for_Removing_Water_From_Underwater_Images_CVPR_2019_paper.pdf) ([project page](https://www.deryaakkaynak.com/sea-thru))
- [Haze-line-aware Color Restoration](https://arxiv.org/pdf/1811.01343.pdf) ([code](https://github.com/danaberman/underwater-hl))
- [Local Color Mapping Combined with Color Transfer](https://ieeexplore.ieee.org/abstract/document/8659313) ([code](https://github.com/rprotasiuk/underwater_enhancement))
- [Real-time Model-based Image Color Correction for Underwater Robots](https://arxiv.org/abs/1904.06437) ([code](https://github.com/dartmouthrobotics/underwater_color_enhance))
- [All-In-One Underwater Image Enhancement using Domain-Adversarial Learning](http://openaccess.thecvf.com/content_CVPRW_2019/papers/UG2+%20Prize%20Challenge/Uplavikar_All-in-One_Underwater_Image_Enhancement_Using_Domain-Adversarial_Learning_CVPRW_2019_paper.pdf) ([code](https://github.com/TAMU-VITA/All-In-One-Underwater-Image-Enhancement-using-Domain-Adversarial-Learning))
- [Difference Backtracking Deblurring Method for Underwater Images](https://link.springer.com/article/10.1007/s11042-019-7420-z)
- [Guided Trigonometric Bilateral Filter and Fast Automatic Color correction](https://ieeexplore.ieee.org/abstract/document/6738704)
- [Red-channel Underwater Image Restoration](https://www.sciencedirect.com/science/article/pii/S1047320314001874) ([code](https://github.com/agaldran/UnderWater))

#### Reviews, Metrics, and Benchmarks
- [Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions](https://arxiv.org/abs/1901.05320)
- [Human-Visual-System-Inspired Underwater Image Quality Measures](https://ieeexplore.ieee.org/abstract/document/7305804)
- [An Underwater Image Enhancement Benchmark Dataset and Beyond](https://arxiv.org/abs/1901.05495)
- [An Experimental-based Review of Image Enhancement and Restoration Methods](https://arxiv.org/abs/1907.03246) ([code](https://github.com/wangyanckxx/Single-Underwater-Image-Enhancement-and-Color-Restoration))
- [Diving Deeper into Underwater Image Enhancement: A Survey](https://arxiv.org/abs/1907.07863)
- [A Revised Underwater Image Formation Model](http://openaccess.thecvf.com/content_cvpr_2018/papers/Akkaynak_A_Revised_Underwater_CVPR_2018_paper.pdf)
# FUnIE-GAN Quickstart Guide

FUnIE-GAN is an open-source tool for underwater image enhancement built on GAN models, with both TensorFlow (Keras) and PyTorch implementations. It is designed to improve underwater visual perception, is suitable for underwater-robot deployments, and supports real-time inference on single-board computers.

## Environment Setup

Before starting, make sure your system meets the following requirements:

- **Operating system**: Linux or Windows (Linux recommended for best performance)
- **Language**: Python 3.x
- **Deep-learning framework**: one of the following, depending on your needs
  - TensorFlow + Keras
  - PyTorch
- **Hardware acceleration** (optional but recommended):
  - An NVIDIA GPU (e.g., GTX 1080, Jetson AGX Xavier, Jetson TX2) for real-time inference (roughly 48+ to 148+ FPS depending on the device)
  - CUDA and cuDNN configured

## Installation

1. **Clone the repository**

   Download the project code locally:
   ```bash
   git clone https://github.com/xahidbuffon/FUnIE-GAN.git
   cd FUnIE-GAN
   ```

2. **Pick a framework directory**

   Enter the directory matching your chosen deep-learning framework:
   - TensorFlow users:
     ```bash
     cd TF-Keras/
     ```
   - PyTorch users:
     ```bash
     cd PyTorch/
     ```

3. **Install dependencies**

   Install the required Python packages in that directory (a `requirements.txt` is typically provided):
   ```bash
   pip install -r requirements.txt
   ```

## Basic Usage

### Data preparation
Download the official dataset for training or testing:
- **EUVP Dataset**: http://irvlab.cs.umn.edu/resources/euvp-dataset

### Training and inference
The project provides training pipelines for FUnIE-GAN and UGAN. Refer to the scripts in the corresponding framework directory for training and inference.

- **Train a model**: run a training script to load the dataset and optimize iteratively.
- **Enhance images**: run a trained model over underwater images (a minimal sketch follows).
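As an illustration of the enhancement step, here is a minimal PyTorch sketch under stated assumptions: `generator` stands in for whatever generator class the repo's `PyTorch/` directory actually defines, and the [-1, 1] pixel scaling and 256×256 size are common GAN conventions rather than documented details of this repo.

```python
# Minimal single-image enhancement sketch (PyTorch).
# `generator` is any loaded FUnIE-GAN-style generator; the [-1, 1]
# pixel scaling and 256x256 size are assumptions for illustration.
import numpy as np
import torch
from PIL import Image

@torch.no_grad()
def enhance(generator, in_path, out_path, size=256, device="cuda"):
    generator = generator.to(device).eval()
    img = Image.open(in_path).convert("RGB").resize((size, size))
    x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 127.5 - 1.0)
    x = x.permute(2, 0, 1).unsqueeze(0).to(device)   # HWC -> NCHW batch
    y = generator(x).squeeze(0).cpu()                # enhanced frame
    y = ((y.permute(1, 2, 0).numpy() + 1.0) * 127.5).clip(0, 255)
    Image.fromarray(y.astype(np.uint8)).save(out_path)
```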
### Evaluation
The project ships quality-analysis modules based on the following metrics for assessing enhanced images (a stand-in sketch for the paired metrics follows):
- **UIQM** (Underwater Image Quality Measure)
- **SSIM** (Structural Similarity Index)
- **PSNR** (Peak Signal-to-Noise Ratio)

The evaluation code lives under `/Evaluation/`.
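For a quick paired-reference check, the sketch below computes SSIM and PSNR with scikit-image (assuming scikit-image ≥ 0.19 for the `channel_axis` argument). It stands in for, and does not reproduce, the repo's own `/Evaluation/` scripts; UIQM is omitted because scikit-image has no builtin for it.

```python
# Stand-in paired-reference scoring with scikit-image (>= 0.19).
# Not the repo's /Evaluation/ code; UIQM has no scikit-image builtin.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_pair(enhanced_path, reference_path):
    enh = np.asarray(Image.open(enhanced_path).convert("RGB"))
    ref = np.asarray(Image.open(reference_path).convert("RGB"))
    psnr = peak_signal_noise_ratio(ref, enh, data_range=255)
    ssim = structural_similarity(ref, enh, channel_axis=-1, data_range=255)
    return psnr, ssim
```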
## Use Case: Subsea Pipeline Inspection

A marine-engineering team is deploying autonomous underwater robots to inspect subsea pipelines, relying on the vision system to identify cracks and foreign objects in real time.

### Without FUnIE-GAN
- Raw video is degraded by scattering in the water column, with a heavy blue-green color cast and blurred details.
- Traditional dehazing algorithms are computationally expensive, adding enough latency that results cannot be fed back to the control system in real time.
- Downstream defect-detection models struggle to extract features from the poor input, keeping the miss rate high.
- The embedded hardware has limited compute and cannot run large general-purpose enhancement networks.

### With FUnIE-GAN
- FUnIE-GAN quickly corrects color and boosts contrast, recovering the true texture and damage marks on the pipeline surface.
- Inference runs at 25+ FPS on single-board computers such as the Jetson TX2, meeting real-time underwater navigation and control requirements.
- Higher-quality preprocessed images raise the detector's confidence and substantially reduce false positives and misses.
- The lightweight, efficient architecture fits the edge-compute constraints of underwater robots.

FUnIE-GAN delivers real-time, high-quality underwater image enhancement at very low latency, enabling robots to operate precisely in complex environments.

## Repository Info
- Repo: https://github.com/xahidbuffon/FUnIE-GAN (Python)
- Author: Md Jahidul Islam (xahidbuffon), Assistant Professor at ECE, University of Florida; Ph.D. from CSE, University of Minnesota, Twin Cities (http://xahidbuffon.github.io/)
- Topics: image-enhancement, ugan, funie-gan, underwater-images, underwater-robotics, visual-perception, uiqm, ssim, psnr