[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-zhihongz--awesome-low-light-image-enhancement":3,"tool-zhihongz--awesome-low-light-image-enhancement":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",147882,2,"2026-04-09T11:32:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":78,"difficulty_score":89,"env_os":90,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":95,"github_topics":96,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":107,"updated_at":108,"faqs":109,"releases":140},5969,"zhihongz\u002Fawesome-low-light-image-enhancement","awesome-low-light-image-enhancement","This is a resource list for low light image enhancement","awesome-low-light-image-enhancement 是一个专为低光照图像增强领域打造的开源资源合集。它致力于解决在夜间监控、自动驾驶、显微成像等场景中，因光线不足导致的图像噪点多、细节丢失及色彩失真等难题。通过系统性地整理该领域的核心资产，它为技术探索者提供了一站式的查阅入口。\n\n这份清单涵盖了从基础到前沿的丰富内容，包括 SID、LOL、ExDARK 等多个权威数据集，以及基于深度学习、直方图均衡化和 Retinex 
理论等多种主流算法的代码实现与学术论文。此外，它还收录了相关的评估指标和综述文章，帮助用户全面对比不同方法的优劣。无论是正在寻找基准数据的研究人员，还是希望快速复现算法的开发者，亦或是需要评估技术可行性的工程师，都能从中获得极具价值的参考。该项目不仅降低了进入该领域的门槛，更通过开放的社区协作模式，持续推动低光照视觉技术的创新与发展。","# Awesome Low Light Image Enhancement\n\n**This is a resource list for low light image enhancement, including datasets, methods\u002Fcodes\u002Fpapers, metrics and so on.**\n\nLooking forward to your sharing! You can share your ideas and suggestions in the [issue](https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues) or directly open a [pull request](https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fpulls).\n\n\n## Introduction\n\nLow light imaging and low light image enhancement have wide applications in our daily life and in many scientific research fields, like night surveillance, automated driving, fluorescence microscopy, high speed imaging and so on. However, there is still a long way to go in dealing with these tasks, considering the great challenges in low photon counts, low SNR, complicated noise models, etc. Here, we collect a list of resources related to low light image enhancement, including datasets, methods\u002Fcodes\u002Fpapers, metrics, and so on. 
We hope this list can help the development of new methods and solutions for low light tasks.\n\n\n\n## Table of Contents\n\n- [Highlights](#highlights)\n- [Datasets](#datasets)\n- [Review and Benchmark](#review-and-benchmark)\n- [Methods](#methods)\n  * [Learning-based methods](#learning-based-methods)\n  * [HE-based methods](#he-based-methods)\n  * [Retinex-based methods](#retinex-based-methods)\n  * [Other methods](#other-methods)\n- [Related Works](#related-works)\n- [Metrics](#metrics)\n- [More Reference](#more-reference)\n\n\n\n## Highlights\n\n:high_brightness: \u003Cfont color='red'> **News!** \u003C\u002Ffont>\n\n\n\n\n## Datasets\n\n|              Dataset              |                         Brief intro                          |                           Website                            |\n| :-------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |\n| [SID](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FChen_Learning_to_See_CVPR_2018_paper.html) | Learning to see in the dark; \u003Cbr \u002F> contains 5094 raw short-exposure images, each with a corresponding long-exposure reference image (illuminance level: outdoor scene 0.2 lux - 5 lux; indoor scene: 0.03 lux - 0.3 lux) |  [link](https:\u002F\u002Fcchen156.github.io\u002FSID.html)  |\n| [ExDARK](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS1077314218304296)  | A collection of 7,363 low-light images from very low-light environments to twilight (i.e., 10 different conditions) with 12 object classes (similar to PASCAL VOC) annotated at both the image class level and with local object bounding boxes. 
| [github](https:\u002F\u002Fgithub.com\u002Fcs-chan\u002FExclusively-Dark-Image-Dataset) |\n|    [LOL](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04560)   |     Deep Retinex Decomposition for Low-Light Enhancement     |      [link](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website)      |\n|  [SICE](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8259342\u002F)   | A large-scale multi-exposure image dataset, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images | [github](https:\u002F\u002Fgithub.com\u002Fcsjcai\u002FSICE)     |\n| [MIT-Adobe FiveK](http:\u002F\u002Fpeople.csail.mit.edu\u002Fvladb\u002Fphotoadjust\u002Fdb_imageadjust.pdf) | Learning Photographic Global Tonal Adjustment; \u003Cbr \u002F> a dataset consisting of 5,000 photographs, with both the original RAW images straight from the camera and adjusted versions by 5 trained photographers|  [link](https:\u002F\u002Fdata.csail.mit.edu\u002Fgraphics\u002Ffivek) |\n|  [DID](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FFu_Dancing_in_the_Dark_A_Benchmark_towards_General_Low-light_Video_ICCV_2023_paper.pdf)  |  A high-quality low-light video dataset with multiple exposures and cameras  | [link](https:\u002F\u002Fgithub.com\u002Fciki000\u002FDID#dancing-in-the-dark-a-benchmark-towards-general-low-light-video-enhancement) |\n|               DPED                | DSLR-quality photos on mobile devices with deep convolutional networks |          [link](http:\u002F\u002Fpeople.ee.ethz.ch\u002F~ihnatova)          |\n|           VIP-LowLight            |  Eight Natural Images Captured in Very Low-Light Conditions  | [link](https:\u002F\u002Fuwaterloo.ca\u002Fvision-image-processing-lab\u002Fresearch-demos\u002Fvip-lowlight-dataset) |\n|              ReNOIR               | RENOIR - A Dataset for Real Low-Light Image Noise Reduction  | 
[link](http:\u002F\u002Fadrianbarburesearch.blogspot.com\u002Fp\u002Frenoir-dataset.html) |\n|   [LLIV-Phone](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.10729)   |  The images and videos are taken by various phones' cameras under diverse illumination conditions and scenes | [link](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QS4FgT5aTQNYy-eHZ_A89rLoZgx_iysR\u002Fview?usp=sharing)\n|   [TM-DIED](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fvonikakis\u002Fdatasets\u002Ftm-died?authuser=0) | 222 JPEG photos constituting some of the most challenging cases for image enhancement and tone-mapping algorithms | [link](https:\u002F\u002Fwww.google.com\u002Furl?q=https%3A%2F%2Fwww.flickr.com%2Fgp%2F73847677%40N02%2FGRn3G6&sa=D&sntz=1&usg=AOvVaw3mOxOzBNN3OY1jKiRfVN7C) |\n| [DRV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FChen_Seeing_Motion_in_the_Dark_ICCV_2019_paper.html) | 202 paired raw low-light image dataset | [link](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FSeeing-Motion-in-the-Dark)\n|   [LIME](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7782813)   | A small number of unpaired images for testing. | [link](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F0BwVzAzXoqrSXb3prWUV1YzBjZzg\u002Fview)  |\n|   [VV - Phos](https:\u002F\u002Frobotics.pme.duth.gr\u002Fresearch\u002Fphos\u002F) | A color image database of 15 scenes captured under different illumination conditions |   [link](http:\u002F\u002Frobotics.pme.duth.gr\u002Fphos2.html)        |\n|         The 500px Dataset         |    Exposure: A White-Box Photo Post-Processing Framework     |                              -                               |\n| The Extended Yale Face Database B | The extended Yale Face Database B contains 16128 images of 28 human subjects under 9 poses and 64 illumination conditions. 
| [link](http:\u002F\u002Fvision.ucsd.edu\u002F~iskwak\u002FExtYaleDatabase\u002FExtYaleB.html) |\n|    the nighttime image dataset    | A dataset which contains source images in bad visibility and their enhanced images processed by different enhancement algorithms |              [link](http:\u002F\u002Fmlg.idm.pku.edu.cn\u002F)              |\n|              VE-LOL               | A large-scale low-light image dataset serving both low\u002Fhigh-level vision with diversified scenes and contents as well as complex degradation in real scenarios, called Vision Enhancement in the LOw-Light condition (VE-LOL). |   [link](https:\u002F\u002Fflyywh.github.io\u002FIJCV2021LowLight_VELOL\u002F)   |\n|           SDSD           | Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment |       [github](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSDSD)       |\n|                MID                | Matching in the Dark: A Dataset for Matching Image Pairs of Low-light Scenes |    [link](https:\u002F\u002Fwenzhengchina.github.io\u002Fprojects\u002Fmid\u002F)     |\n|       DeepHDRVideo        | HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset |  [link](https:\u002F\u002Fguanyingc.github.io\u002FDeepHDRVideo-Dataset\u002F)   |\n|               LLVIP               | LLVIP: A visible-infrared paired dataset for low-light vision |         [link](https:\u002F\u002Fbupt-ai-cz.github.io\u002FLLVIP\u002F)          |\n|             RELLISUR              |  RELLISUR: A Real Low-Light Image Super-Resolution Dataset   |             [link](https:\u002F\u002Fvap.aau.dk\u002Frellisur\u002F)             |\n|               LSRW                | R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network; \u003Cbr \u002F>3170 paired images using the Nikon camera and 2480 paired images using the Huawei mobile phone. 
|    [github](https:\u002F\u002Fgithub.com\u002Fabcdef2000\u002FR2RNet#dataset)    |\n|                MCR                | Mono-colored raw Paired dataset; \u003Cbr \u002F>a dataset of colored raw and monochrome raw image pairs, captured with the same exposure setting. Each image has a resolution of 1280×1024. Totally 498 different scenes, each scene has 1 corresponding RGB and Monochrome ground truth and 8 different exposure color Raw inputs. | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1_GWW1P1kjVBMFfN9AuaFq29w-kQ31ncd\u002Fview?usp=sharing) [Baidu Netdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1b3cmUenebeDT_8HdLGa9dQ?from=init&pwd=22cv) |\n|    Raw Image Low-Light Object     |                              -                               |    [link](https:\u002F\u002Fwiki.qut.edu.au\u002Fdisplay\u002Fcyphy\u002FDatasets)    |\n|             LRAICE                |   A Learning-to-Rank Approach for Image Color Enhancement    |                              -                               |\n|            LOM dataset            |  A paired low-light & over-exposure & normal-light multi-view dataset (for NeRF under low-light conditions) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1orgKEGApjwCm6G8xaupwHKxMbT2s9IAG\u002Fview?usp=sharing) [Baidu Netdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BGfstg2IpN0JZBlVaMG-eQ?pwd=ve1t) |\n\n\n## Review and Benchmark\n\n| Year | Pub       | Paper                                                        | Link                                                         | Note       |\n| :--: | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------- |\n| 2021 | IJCV      | Benchmarking Low-Light Image Enhancement and Beyond          | [pdf](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007%2Fs11263-020-01418-8) |            |\n| 2021 | IEEE PAMI | Low-Light Image and 
Video Enhancement Using Deep Learning: A Survey | [pdf](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTPAMI.2021.3126387)            |            |\n| 2022 | ArXiv     | Low-Light Image and Video Enhancement: A Comprehensive Survey and Beyond | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10772) [code](https:\u002F\u002Fgithub.com\u002Fshenzheng2000\u002Fllie_survey) |            |\n| 2023 | ArXiv     | DarkVision: A Benchmark for Low-Light Image\u002FVideo Perception | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06269)                      | DarkVision |\n| 2023 | Signal Process. | A comprehensive experiment-based review of low-light image enhancement methods and benchmarking low-light image quality assessment  | [pdf](https:\u002F\u002Flinkinghub.elsevier.com\u002Fretrieve\u002Fpii\u002FS0165168422003607) |            |\n\n\n## Methods\n\n### Learning-based methods\n\n| Year | Pub                     | Paper                                                        | Link                                                         | Note                 |\n| ---- | ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------- |\n| 2017 | ArXiv                   | MSR-net:Low-light Image Enhancement Using Deep Convolutional Network | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.02488v1.pdf)                | MSR-net              |\n| 2017 | ECCV                    | Deep Burst Denoising                                         | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F1712.05790)                       |                      |\n| 2017 | VCIP                    | LLCNN: A convolutional neural network for low-light image enhancement | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8305143) [dataset](http:\u002F\u002Fdecsai.ugr.es\u002Fcvg\u002Fdbimagenes\u002F) | LLCNN                |\n| 2017 | Pattern 
Recognit.       | LLNet: A deep autoencoder approach to natural low-light image enhancement | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.patcog.2016.06.008)          | LLNet                |\n| 2017 | ACM Trans. Graph.       | Deep bilateral learning for real-time image enhancement      | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.02880) [web](https:\u002F\u002Fgroups.csail.mit.edu\u002Fgraphics\u002Fhdrnet\u002F) [code](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fhdrnet) | HDRNet               |\n| 2017 | ICCV                    | DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02470)                      |                      |\n| 2018 | BMVC                    | Deep Retinex Decomposition for Low-Light Enhancement         | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04560) [web](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website\u002F) [code](https:\u002F\u002Fgithub.com\u002Fweichen582\u002FRetinexNet) | Retinex-Net          |\n| 2018 | BMVC                    | MBLLEN: Low-light Image\u002FVideo Enhancement Using CNNs         | [pdf](http:\u002F\u002Fbmvc2018.org\u002Fcontents\u002Fpapers\u002F0700.pdf) [web](http:\u002F\u002Fphi-ai.org\u002Fproject\u002FMBLLEN\u002Fdefault.htm) [code](https:\u002F\u002Fgithub.com\u002FLvfeifan\u002FMBLLEN) | MBLLEN               |\n| 2018 | Pattern Recognit. Lett. 
| LightenNet: A Convolutional Neural Network for weakly illuminated image enhancement | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.patrec.2018.01.010)          | LightenNet           |\n| 2018 | CVPR                    | Learning to See in the Dark                                  | [pdf](https:\u002F\u002Fcchen156.github.io\u002Fpaper\u002F18CVPR_SID.pdf) [web](https:\u002F\u002Fcchen156.github.io\u002FSID.html) [code](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FLearning-to-See-in-the-Dark.git) [dataset](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FLearning-to-See-in-the-Dark) |                      |\n| 2018 | IEEE TIP                | Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images | [pdf](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTIP.2018.2794218) [code](https:\u002F\u002Fgithub.com\u002Fcsjcai\u002FSICE) | SICE                 |\n| 2018 | ACM TOG                 | Exposure: A White-Box Photo Post-Processing Framework        | [pdf](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3181974) [code](https:\u002F\u002Fgithub.com\u002Fyuanming-hu\u002Fexposure) |                      |\n| 2018 | FG conference           | GLADNet: Low-Light Enhancement Network with Global Awareness | [pdf](https:\u002F\u002Fgithub.com\u002Fdaooshee\u002Ffgworkshop18Gladnet\u002Fblob\u002Fmaster\u002Fwwj_fg2018.pdf)  [web](https:\u002F\u002Fdaooshee.github.io\u002Ffgworkshop18Gladnet\u002F) [code](https:\u002F\u002Fgithub.com\u002Fweichen582\u002FGLADNet) [dataset](https:\u002F\u002Fdaooshee.github.io\u002Ffgworkshop18Gladnet\u002F) | GLADNet              |\n| 2019 | IEEE TIP                | DeepISP: Towards Learning an End-to-End Image Processing Pipeline | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.06724)                      | DeepISP              |\n| 2019 | IEEE TIP                | Low-Light Image Enhancement via a Deep Hybrid Network        | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8692732)          |      
                |\n| 2019 | IEEE TIP                | EnlightenGAN: Deep Light Enhancement without Paired Supervision | [code](https:\u002F\u002Fgithub.com\u002FTAMU-VITA\u002FEnlightenGAN) [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.06972) | EnlightenGAN         |\n| 2019 | ACM MM                  | Kindling the Darkness: A Practical Low-light Image Enhancer  | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F1905.04161) [code](https:\u002F\u002Fgithub.com\u002Fzhangyhuaee\u002FKinD) [code+](https:\u002F\u002Fgithub.com\u002Fzhangyhuaee\u002FKinD_plus) | KinD                 |\n| 2019 | IEEE Access             | A Pipeline Neural Network for Low-Light Image Enhancement    | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8607964\u002F)         |                      |\n| 2019 | Neurocomputing          | Learning Digital Camera Pipeline for Extreme Low-Light Imaging | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.05939)                      |                      |\n| 2019 | CVPR                    | Underexposed Photo Enhancement Using Deep Illumination Estimation | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8953588) [code](https:\u002F\u002Fgithub.com\u002Fwangruixing\u002FDeepUPE) | DeepUPE              |\n| 2019 | ICCV                    | Enhancing Low Light Videos by Exploring High Sensitivity Camera Noise | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9011000)          |                      |\n| 2019 | ICIP                    | Enhancement of Weakly Illuminated Images by Deep Fusion Networks | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803041)          |                      |\n| 2019 | ICCP                    | A Bit Too Much? 
High Speed Imaging from Sparse Photon Counts | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8747325\u002F)         |                      |\n| 2019 | ICIP                    | Llrnet: A Multiscale Subband Learning Approach for Low Light Image Restoration | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803765)          | Llrnet               |\n| 2019 | ICIP                    | Low-Lightgan: Low-Light Enhancement Via Advanced Generative Adversarial Network With Task-Driven Training | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803328)          | Low-Lightgan         |\n| 2019 | ICME                    | RDGAN: Retinex Decomposition Based Adversarial Learning for Low-Light Enhancement | [code](https:\u002F\u002Fgithub.com\u002FWangJY06\u002FRDGAN\u002F) [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8785047) | RDGAN                |\n| 2019 | ICMEW                   | Low-Light Image Enhancement with Attention and Multi-level Feature Fusion | [pdf](http:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8794872) |                      |\n| 2019 | PRCV                    | An Effective Network with ConvLSTM for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fdoi.org\u002F10.1007\u002F978-3-030-31723-2_19)          |                      |\n| 2019 | VISIGRAPP               | End-to-End Denoising of Dark Burst Images Using Recurrent Fully Convolutional Networks | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.07483v1)                    |                      |\n| 2020 | CVPR                    | Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FGuo_Zero-Reference_Deep_Curve_Estimation_for_Low-Light_Image_Enhancement_CVPR_2020_paper.pdf) [web](https:\u002F\u002Fli-chongyi.github.io\u002FProj_Zero-DCE.html) 
[code](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FZero-DCE) | Zero-DCE             |\n| 2020 | CVPR                    | Learning to Restore Low-Light Images via Decomposition-and-Enhancement | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9156446)          |                      |\n| 2020 | CVPR                    | From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement | [pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FYang_From_Fidelity_to_Perceptual_Quality_A_Semi-Supervised_Approach_for_Low-Light_CVPR_2020_paper.pdf) [web](https:\u002F\u002Fgithub.com\u002Fflyywh\u002FCVPR-2020-Semi-Low-Light) [slides](https:\u002F\u002Fgithub.com\u002Fflyywh\u002FCVPR-2020-Semi-Low-Light\u002Fblob\u002Fmaster) | DRBN                 |\n| 2020 | CVPR                    | DeepLPF: Deep Local Parametric Filters for Image Enhancement | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.13985) [code](https:\u002F\u002Fgithub.com\u002Fsjmoran\u002FDeepLPF) | DeepLPF              |\n| 2020 | IEEE PAMI               | Learning Image-adaptive 3D Lookup Tables for High Performance Photo Enhancement in Real-time | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9206076) [code](https:\u002F\u002Fgithub.com\u002FHuiZeng\u002FImage-Adaptive-3DLUT) | Image-Adaptive-3DLUT |\n| 2020 | IET Image Proc.         
| Learning an Adaptive Model for Extreme Low-Light Raw Image Processing | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10447.pdf) [code](https:\u002F\u002Fgithub.com\u002F505030475\u002FExtremeLowLight) |                      |\n| 2020 | ArXiv                   | Visual Perception Model for Rapid and Adaptive Low-light Image Enhancement | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F2005.07343) [code](https:\u002F\u002Fgithub.com\u002FMDLW\u002FLow-Light-Image-Enhancement) |                      |\n| 2020 | ArXiv                   | Self-supervised Image Enhancement Network: Training with Low Light Images Only | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.11300) [code](https:\u002F\u002Fgithub.com\u002Fhitzhangyu\u002FSelf-supervised-Image-Enhancement-Network-Training-With-Low-Light-Images-Only) |                      |\n| 2020 | ICPR                    | Unsupervised Real-world Low-light Image Enhancement with Decoupled Networks | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F2005.02818)                       |                      |\n| 2021 | IJCV                    | Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset | [pdf](https:\u002F\u002Flink.springer.com\u002F10.1007\u002Fs11263-021-01466-8) [code](https:\u002F\u002Fgithub.com\u002Fyu-li\u002FAGLLNet) |                      |\n| 2021 | CVPR                    | Retinex-Inspired Unrolling with Cooperative Prior Architecture Search for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_Retinex-Inspired_Unrolling_With_Cooperative_Prior_Architecture_Search_for_Low-Light_Image_CVPR_2021_paper.pdf) [web](http:\u002F\u002Fdutmedia.org\u002FRUAS\u002F) [code](https:\u002F\u002Fgithub.com\u002Fdut-media-lab\u002FRUAS) | RUAS                 |\n| 2021 | CVPR                    | Deep Denoising of Flash and No-Flash Pairs for Photography in Low-Light Environments | 
[pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FXia_Deep_Denoising_of_Flash_and_No-Flash_Pairs_for_Photography_in_CVPR_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Flikesum\u002FdeepFnF) |                      |\n| 2021 | CVPR                    | Extreme Low-Light Environment-Driven Image Denoising over Permanently Shadowed Lunar Regions with a Physical Noise Model | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FMoseley_Extreme_Low-Light_Environment-Driven_Image_Denoising_Over_Permanently_Shadowed_Lunar_Regions_CVPR_2021_paper.pdf) | HORUS                |\n| 2021 | CVPR                    | Learning Temporal Consistency for Low Light Video Enhancement from Single Images | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_Learning_Temporal_Consistency_for_Low_Light_Video_Enhancement_From_Single_CVPR_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fzkawfanx\u002FStableLLVE) |                      |\n| 2021 | CVPR                    | Nighttime Visibility Enhancement by Increasing the Dynamic Range and Suppression of Light Effects | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FSharma_Nighttime_Visibility_Enhancement_by_Increasing_the_Dynamic_Range_and_Suppression_CVPR_2021_paper.pdf) |                      |\n| 2021 | ICCV                    | Seeing Dynamic Scene in the Dark: A High-Quality Video Dataset with Mechatronic Alignment | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWang_Seeing_Dynamic_Scene_in_the_Dark_A_High-Quality_Video_Dataset_ICCV_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSDSD) | SDSD                 |\n| 2021 | ICCV                    | HDR Video Reconstruction: A Coarse-to-Fine Network and a Real-World Benchmark Dataset | 
[pdf](https://openaccess.thecvf.com/content/ICCV2021/papers/Chen_HDR_Video_Reconstruction_A_Coarse-To-Fine_Network_and_a_Real-World_Benchmark_ICCV_2021_paper.pdf) [web](https://guanyingc.github.io/DeepHDRVideo) [code](https://github.com/guanyingc/DeepHDRVideo) | DeepHDRVideo |
| 2021 | ICCV | Matching in the Dark: A Dataset for Matching Image Pairs of Low-Light Scenes | [pdf](https://openaccess.thecvf.com/content/ICCV2021/papers/Song_Matching_in_the_Dark_A_Dataset_for_Matching_Image_Pairs_ICCV_2021_paper.pdf) [web](https://wenzhengchina.github.io/projects/mid/) [code](https://github.com/Wenzhengchina/Matching-in-the-Dark) | MID |
| 2021 | ICCV | Adaptive Unfolding Total Variation Network for Low-Light Image Enhancement | [pdf](https://openaccess.thecvf.com/content/ICCV2021/papers/Zheng_Adaptive_Unfolding_Total_Variation_Network_for_Low-Light_Image_Enhancement_ICCV_2021_paper.pdf) [code](https://github.com/CharlieZCJ/UTVNet/tree/5e76495bf371371a7fc63a521fb6dd9de35ee241) | UTVNet |
| 2021 | ICCVW | LLVIP: A Visible-Infrared Paired Dataset for Low-Light Vision | [pdf](https://openaccess.thecvf.com/content/ICCV2021W/RLQ/papers/Jia_LLVIP_A_Visible-Infrared_Paired_Dataset_for_Low-Light_Vision_ICCVW_2021_paper.pdf) [code](https://github.com/bupt-ai-cz/LLVIP) [web](https://bupt-ai-cz.github.io/LLVIP/) | LLVIP |
| 2021 | JVCIR | R2RNet: Low-Light Image Enhancement via Real-Low to Real-Normal Network | [pdf](http://arxiv.org/abs/2106.14501) [code](https://github.com/abcdef2000/R2RNet) | R2RNet |
| 2022 | CVPR | Toward Fast, Flexible, and Robust Low-Light Image Enhancement | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Ma_Toward_Fast_Flexible_and_Robust_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [code](https://github.com/vis-opt-group/SCI) | SCI |
| 2022 | CVPR | Deep Color Consistent Network for Low-Light Image Enhancement | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Deep_Color_Consistent_Network_for_Low-Light_Image_Enhancement_CVPR_2022_paper.html) | DCC-Net |
| 2022 | CVPR | URetinex-Net: Retinex-Based Deep Unfolding Network for Low-Light Image Enhancement | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Wu_URetinex-Net_Retinex-Based_Deep_Unfolding_Network_for_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [code](https://github.com/AndersonYong/URetinex-Net) | URetinex-Net |
| 2022 | CVPR | Day-to-Night Image Synthesis for Training Nighttime Neural ISPs | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Punnappurath_Day-to-Night_Image_Synthesis_for_Training_Nighttime_Neural_ISPs_CVPR_2022_paper.html) [code](https://github.com/SamsungLabs/day-to-night) | |
| 2022 | CVPR | SNR-Aware Low-Light Image Enhancement | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Xu_SNR-Aware_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [code](https://github.com/dvlab-research/SNR-Aware-Low-Light-Enhance) | |
| 2022 | CVPR | Dancing Under the Stars: Video Denoising in Starlight | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Monakhova_Dancing_Under_the_Stars_Video_Denoising_in_Starlight_CVPR_2022_paper.html) | |
| 2022 | CVPR | Abandoning the Bayer-Filter To See in the Dark | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Abandoning_the_Bayer-Filter_To_See_in_the_Dark_CVPR_2022_paper.html) [code](https://github.com/TCL-AILab/Abandon_Bayer-Filter_See_in_the_Dark) | |
| 2022 | ECCV | Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression | [pdf](https://arxiv.org/pdf/2207.10564.pdf) [code](https://github.com/jinyeying/night-enhancement) | |
| 2022 | ECCV | Deep Fourier-Based Exposure Correction Network with Spatial-Frequency Interaction | [pdf](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136790159.pdf) [code](https://github.com/KevinJ-Huang/FECNet) | |
| 2022 | ECCV | LEDNet: Joint Low-Light Enhancement and Deblurring in the Dark | [pdf](https://link.springer.com/chapter/10.1007/978-3-031-20068-7_33) [code](https://github.com/sczhou/LEDNet) | LEDNet |
| 2022 | AAAI | Low-Light Image Enhancement with Normalizing Flow | [pdf](https://arxiv.org/pdf/2109.05923.pdf) [code](https://github.com/wyf0912/LLFlow) [web](https://wyf0912.github.io/LLFlow/) | LLFlow |
| 2022 | AAAI | Semantically contrastive learning for low-light image enhancement | [pdf](https://ojs.aaai.org/index.php/AAAI/article/view/20046) [code](https://github.com/LingLIx/SCL-LLE) [web](https://dongl-group.github.io/project_pages/SCL-LLE.html) | SCL-LLE |
| 2022 | AAAI | DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep Inconsistency Prior | [pdf](https://doi.org/10.1609/aaai.v36i1.19995) | DarkVisionNet |
| 2022 | ACM MM | ChebyLighter: Optimal Curve Estimation for Low-Light Image Enhancement | [pdf](https://dl.acm.org/doi/10.1145/3503161.3548135) [code](https://github.com/eeerpjw/ChebyLighter) | ChebyLighter |
| 2022 | BMVC | You only need 90K parameters to adapt light: a light weight transformer for image enhancement and exposure correction | [pdf](https://bmvc2022.mpi-inf.mpg.de/0238.pdf) [code](https://github.com/cuiziteng/Illumination-Adaptive-Transformer) | IAT |
| 2022 | IJCV | Low-Light Image Enhancement via Breaking down the Darkness | [pdf](http://dx.doi.org/10.1007/s11263-022-01667-9) [code](https://github.com/mingcv/Bread) | Bread |
| 2022 | Neurocomputing | Low-Light Image Enhancement with Knowledge Distillation | [pdf](https://doi.org/10.1016/j.neucom.2022.10.083) | |
| 2022 | Neurocomputing | LSR: Lightening Super-Resolution Deep Network for Low-Light Image Enhancement | [pdf](https://doi.org/10.1016/j.neucom.2022.07.058) | LSR |
| 2022 | Pattern Recognit. | Brain-like Retinex: A Biologically Plausible Retinex Algorithm for Low Light Image Enhancement | [pdf](http://dx.doi.org/10.1016/j.patcog.2022.109195) | |
| 2022 | Pattern Recognit. | LAE-Net: A Locally-Adaptive Embedding Network for Low-Light Image Enhancement | [pdf](http://dx.doi.org/10.1016/j.patcog.2022.109039) | LAE-Net |
| 2022 | Knowl-Based Syst | LE-GAN: Unsupervised Low-Light Image Enhancement Network Using Attention Module and Identity Invariant Loss | [pdf](https://www.sciencedirect.com/science/article/pii/S0950705121011151) | LE-GAN |
| 2022 | Opt. Lasers Eng. | Infrared and Low-Light Visible Image Fusion Based on Hybrid Multiscale Decomposition and Adaptive Light Adjustment | [pdf](https://doi.org/10.1016/j.optlaseng.2022.107268) | |
| 2022 | Applied Soft Computing | A predictive intelligence approach for low-light enhancement | [pdf](https://linkinghub.elsevier.com/retrieve/pii/S1568494622004173) | |
| 2022 | IEEE TMM | Purifying Low-light Images via Near-Infrared Enlightened Image | [pdf](https://ieeexplore.ieee.org/document/9999306/) | |
| 2022 | IEEE TNNLS | DRLIE: Flexible Low-Light Image Enhancement via Disentangled Representations | [pdf](https://ieeexplore.ieee.org/document/9833451/) | |
| 2022 | IEEE TCSVT | EFINet: Restoration for Low-Light Images via Enhancement-Fusion Iterative Network | [pdf](https://ieeexplore.ieee.org/document/9849123/) [code](https://github.com/kyrie111/EFINet) | EFINet |
| 2023 | Information Fusion | A Mutually Boosting Dual Sensor Computational Camera for High Quality Dark Videography | [pdf](https://doi.org/10.1016/j.inffus.2023.01.013) [code](https://github.com/jarrycyx/dual-channel-low-light-video-public) | DCMAN |
| 2023 | Pattern Recognit. | TreEnhance: A tree search method for low-light image enhancement | [pdf](https://www.sciencedirect.com/science/article/pii/S0031320322007282) [code](https://github.com/OcraM17/TreEnhance) | TreEnhance |
| 2023 | AAAI | Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method | [pdf](https://arxiv.org/abs/2212.11548) [code](https://github.com/TaoWangzj/LLFormer) [web](https://taowangzj.github.io/projects/LLFormer/) | |
| 2023 | AAAI | Low-Light Video Enhancement with Synthetic Event Guidance | [pdf](https://ojs.aaai.org/index.php/AAAI/article/view/25257) | |
| 2023 | AAAI | Polarization-Aware Low-Light Image Enhancement | [pdf](https://ojs.aaai.org/index.php/AAAI/article/view/25486) [code](https://github.com/fourson/Polarization-Aware-Low-Light-Image-Enhancement) | |
| 2023 | CVPR | DNF: Decouple and feedback network for seeing in the dark | [pdf](https://openaccess.thecvf.com/content/CVPR2023/html/Jin_DNF_Decouple_and_Feedback_Network_for_Seeing_in_the_Dark_CVPR_2023_paper.html) [code](https://github.com/srameo/dnf) | DNF |
| 2023 | CVPR | Learning a simple low-light image enhancer from paired low-light instances | [pdf](http://openaccess.thecvf.com/content/CVPR2023/html/Fu_Learning_a_Simple_Low-Light_Image_Enhancer_From_Paired_Low-Light_Instances_CVPR_2023_paper.html) [code](https://github.com/zhenqifu/pairlie) | PairLIE |
| 2023 | CVPR | Learning semantic-aware knowledge guidance for low-light image enhancement | [pdf](http://openaccess.thecvf.com/content/CVPR2023/html/Wu_Learning_Semantic-Aware_Knowledge_Guidance_for_Low-Light_Image_Enhancement_CVPR_2023_paper.html) [code](https://github.com/langmanbusi/semantic-aware-low-light-image-enhancement) | SKF |
| 2023 | CVPR | Low-light image enhancement via structure modeling and guidance | [pdf](https://openaccess.thecvf.com/content/CVPR2023/html/Xu_Low-Light_Image_Enhancement_via_Structure_Modeling_and_Guidance_CVPR_2023_paper.html) | |
| 2023 | CVPR | Physics-guided ISO-Dependent sensor noise modeling for extreme low-light photography | [pdf](https://openaccess.thecvf.com/content/CVPR2023/html/Cao_Physics-Guided_ISO-Dependent_Sensor_Noise_Modeling_for_Extreme_Low-Light_Photography_CVPR_2023_paper.html) [code](https://github.com/happycaoyue/LLD) | LLD |
| 2023 | CVPR | Visibility constrained wide-band illumination spectrum design for seeing-in-the-dark | [pdf](http://openaccess.thecvf.com/content/CVPR2023/html/Niu_Visibility_Constrained_Wide-Band_Illumination_Spectrum_Design_for_Seeing-in-the-Dark_CVPR_2023_paper.html) [code](https://github.com/myniuuu/vcsd) | VCSD |
| 2023 | IEEE TMM | Glow in the Dark: Low-Light Image Enhancement with External Memory | [pdf](https://ieeexplore.ieee.org/document/10177254/) [code](https://github.com/Lineves7/EMNet) | EMNet |
| 2023 | Mach. Vision Appl. | LDNet: low-light image enhancement with joint lighting and denoising | [pdf](https://link.springer.com/10.1007/s00138-022-01365-z) | LDNet |
| 2023 | IEEE TPAMI | Learning With Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision | [pdf](https://ieeexplore.ieee.org/document/9914672/) [code](https://github.com/vis-opt-group/ruas) | RUAS |
| 2023 | IEEE TIP | TSDN: Two-Stage Raw Denoising in the Dark | [pdf](https://ieeexplore.ieee.org/abstract/document/10168136/) | TSDN |
| 2023 | IEEE TIP | Unsupervised Low-Light Video Enhancement with Spatial-Temporal Co-attention Transformer | [pdf](https://ieeexplore.ieee.org/document/10210621/) | LightenFormer |
| 2023 | IEEE TCYB | Deep Perceptual Image Enhancement Network for Exposure Restoration | [pdf](https://ieeexplore.ieee.org/document/9693338/) | DPIENet |
| 2023 | SIGGRAPH ASIA | Low-light Image Enhancement with Wavelet-based Diffusion Models | [pdf](https://arxiv.org/pdf/2306.00306.pdf) [code](https://github.com/JianghaiSCU/Diffusion-Low-Light) | DiffLL |
| 2023 | ACM MM | CLE Diffusion: Controllable Light Enhancement Diffusion Model | [pdf](https://arxiv.org/abs/2308.06725) [code](https://github.com/YuyangYin/CLEDiffusion) [web](https://yuyangyin.github.io/CLEDiffusion/) | CLE Diffusion |
| 2023 | ACM MM | FourLLIE: Boosting Low-Light Image Enhancement by Fourier Frequency Information | [pdf](https://arxiv.org/abs/2308.03033) [code](https://github.com/wangchx67/FourLLIE) | FourLLIE |
| 2023 | Pattern Recognit. | A reflectance re-weighted Retinex model for non-uniform and low-light image enhancement | [pdf](https://linkinghub.elsevier.com/retrieve/pii/S0031320323005216) | |
| 2023 | Pattern Recognit. | SurroundNet: Towards effective low-light image enhancement | [pdf](https://linkinghub.elsevier.com/retrieve/pii/S0031320323003035) [code](https://github.com/ouc-ocean-group/surroundnet) | SurroundNet |
| 2023 | ICCV | Coherent event guided low-light video enhancement | [pdf](https://openaccess.thecvf.com/content/ICCV2023/papers/Liang_Coherent_Event_Guided_Low-Light_Video_Enhancement_ICCV_2023_paper.pdf) [code](https://github.com/sherrycattt/EvLowLight) [web](https://sherrycattt.github.io/EvLowLight/) | EvLowLight |
| 2023 | ICCV | Dancing in the dark: A benchmark towards general low-light video enhancement | [pdf](https://openaccess.thecvf.com/content/ICCV2023/papers/Fu_Dancing_in_the_Dark_A_Benchmark_towards_General_Low-light_Video_ICCV_2023_paper.pdf) [code](https://github.com/ciki000/DID#dancing-in-the-dark-a-benchmark-towards-general-low-light-video-enhancement) | DID |
| 2023 | ICCV | Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model | [pdf](https://arxiv.org/pdf/2308.13164.pdf) | Diff-Retinex |
| 2023 | ICCV | Empowering low-light image enhancer through customized learnable priors | [pdf](http://export.arxiv.org/pdf/2309.01958) [code](https://github.com/zheng980629/CUE) | CUE |
| 2023 | ICCV | ExposureDiffusion: Learning to expose for low-light image enhancement | [pdf](https://arxiv.org/pdf/2307.07710.pdf) [code](https://github.com/wyf0912/ExposureDiffusion) | ExposureDiffusion |
| 2023 | ICCV | Implicit neural representation for cooperative low-light image enhancement | [pdf](https://openaccess.thecvf.com/content/ICCV2023/papers/Yang_Implicit_Neural_Representation_for_Cooperative_Low-light_Image_Enhancement_ICCV_2023_paper.pdf) [code](https://github.com/Ysz2022/NeRCo) | NeRCo |
| 2023 | ICCV | Low-light image enhancement with illumination-aware gamma correction and complete image modelling network | [pdf](https://arxiv.org/pdf/2308.08220.pdf) | COMO-ViT |
| 2023 | ICCV | Low-light image enhancement with multi-stage residue quantization and brightness-aware attention | [pdf](https://openaccess.thecvf.com/content/ICCV2023/papers/Liu_Low-Light_Image_Enhancement_with_Multi-Stage_Residue_Quantization_and_Brightness-Aware_Attention_ICCV_2023_paper.pdf) [code](https://github.com/LiuYunlong99/RQ-LLIE) | RQ-LLIE |
| 2023 | ICCV | Retinexformer: One-stage retinex-based transformer for low-light image enhancement | [pdf](https://arxiv.org/abs/2303.06705) [code](https://github.com/caiyuanhao1998/Retinexformer) | Retinexformer |
| 2023 | ICCV | Lighting up NeRF via unsupervised decomposition and enhancement | [pdf](https://arxiv.org/abs/2307.10664) [code](https://github.com/onpix/LLNeRF) | LLNeRF |
| 2023 | ICCV | Generalized Lightness Adaptation with Channel Selective Normalization | [pdf](https://openaccess.thecvf.com/content/ICCV2023/papers/Yao_Generalized_Lightness_Adaptation_with_Channel_Selective_Normalization_ICCV_2023_paper.pdf) [code](https://github.com/mdyao/CSNorm) | CS-Norm |
| 2023 | PRICAI | Bootstrap diffusion model curve estimation for high resolution low-light image enhancement | [pdf](https://link.springer.com/chapter/10.1007/978-981-99-7025-4_6) | BDCE |
| 2023 | Appl. Mach. Learn. | LoLi-IEA: low-light image enhancement algorithm | [pdf](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12675/1267512/LoLi-IEA-low-light-image-enhancement-algorithm/10.1117/12.2677422.short) [code](https://github.com/xingyumex/LoLi-IEA) | LoLi-IEA |
| 2024 | IEEE Sens. Lett. | Integrating Graph Convolution Into a Deep Multilayer Framework for Low-Light Image Enhancement | [pdf](https://ieeexplore.ieee.org/document/10478172/) [code](https://github.com/santoshpanda1995/LightweightGCN-Model) | |
| 2024 | IEEE TIP | AnlightenDiff: Anchoring Diffusion Probabilistic Model on Low Light Image Enhancement | [pdf](https://ieeexplore.ieee.org/document/10740586) [code](https://github.com/allanchan339/AnlightenDiff) | AnlightenDiff |
| 2024 | IEEE TCE | Back Projection Generative Strategy for Low and Normal Light Image Pairs with Enhanced Statistical Fidelity and Diversity | [pdf](https://ieeexplore.ieee.org/document/10794693) [code](https://github.com/allanchan339/N2LDiff-BP) | N2LDiff-BP |
| 2025 | ICLR (Spotlight) | Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model | [pdf](https://arxiv.org/pdf/2311.11638) [code](https://github.com/ChunmingHe/Reti-Diff) | Reti-Diff |
| 2025 | Digital Signal Process. | CDAN: Convolutional dense attention-guided network for low-light image enhancement | [pdf](https://www.sciencedirect.com/science/article/abs/pii/S1051200424004275) [code](https://github.com/SinaRaoufi/CDAN) | CDAN |
| 2025 | CVPR | HVI: A New Color Space for Low-light Image Enhancement | [pdf](https://arxiv.org/abs/2502.20272) [code](https://github.com/Fediory/HVI-CIDNet) | HVI-CIDNet |
| 2025 | ICIP | RT-X Net: RGB-Thermal cross attention network for Low-Light Image Enhancement | [pdf](https://arxiv.org/abs/2505.24705) [code](https://github.com/jhakrraman/rt-xnet) [web](https://sites.google.com/view/rt-xnet/home) | RT-X Net |
| 2025 | IJCV | Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for Low-Light Image Enhancement | [pdf](https://link.springer.com/article/10.1007/s11263-025-02551-y) [code](https://github.com/TAMI-UIB/Nonlocal-Retinex-Deep-Unfolding-Low-Light-Enhancement) | |


### HE-based methods

| Year | Pub      | Paper | Link | Note  |
| :--: | -------- | ----- | ---- | ----- |
| 1990 | IEEE TCE | Contrast limited adaptive histogram equalization: speed and effectiveness | [pdf](https://ieeexplore.ieee.org/document/109340) | CLAHE |
| 2007 | IEEE TCE | Brightness Preserving Dynamic Histogram Equalization for Image Contrast Enhancement | [pdf](https://ieeexplore.ieee.org/document/4429280) [code](codes/bpdhe.m) | BPDHE |
| 2007 | IEEE TCE | A Dynamic Histogram Equalization for Image Contrast Enhancement | [pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4266947) | DHE |
| 2007 | IEEE TCE | Fast Image/Video Contrast Enhancement Based on Weighted Thresholded Histogram Equalization | [pdf](https://ieeexplore.ieee.org/document/4266969) | WTHE |
| 2011 | IEEE TIP | Contextual and Variational Contrast Enhancement | [pdf](http://ieeexplore.ieee.org/abstract/document/5773086/) | CVC |
| 2013 | IEEE TIP | Contrast enhancement based on layered difference representation of 2D histograms | [pdf](http://mcl.korea.ac.kr/projects/LDR/2013_tip_cwlee_final_hq.pdf) [web](http://mcl.korea.ac.kr/cwlee_tip2013/) | LDR |
| 2013 | ICASSP | High efficient contrast enhancement using parametric approximation | [pdf](http://150.162.46.34:8080/icassp2013/pdfs/0002444.pdf) | POHE |


- **See also: [link](https://github.com/elliestath/HelpTest/blob/c7e269239a9d67bffc60f44ff1cae70d20770748/docs/Image%20Preprocessing.md)**


### Retinex-based methods

| Year | Pub  | Paper | Link | Note  |
| ---- | ---- | ----- | ---- | ----- |
| 1997 | IEEE TIP | Properties and performance of a center/surround retinex | [pdf](https://ieeexplore.ieee.org/document/557356) | SSR |
| 1997 | IEEE TIP | A multiscale retinex for bridging the gap between color images and the human observation of scenes | [pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=597272) [code1](http://www.ipol.im/pub/art/2014/107/) [code2](https://github.com/upcAutoLang/MSRCR-Restoration) | MSRCR |
| 2013 | SITIS | Adaptive Multiscale Retinex for Image Contrast Enhancement | [code](codes/amsr.m) [pdf](https://doi.ieeecomputersociety.org/10.1109/SITIS.2013.19) | AMSR |
| 2013 | IEEE TIP | Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images | [pdf](https://ieeexplore.ieee.org/document/6512558) [web](https://shuhangwang.wordpress.com/2015/12/14/naturalness-preserved-enhancement-algorithm-for-non-uniform-illumination-images/) [code](https://www.dropbox.com/s/096l3uy9vowgs4r/Code.rar) | NPE |
| 2015 | IEEE TIP | A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation | [pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7229296) [code](codes/PM_SIRE.zip) | SRIE |
| 2016 | CVPR | A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation | [pdf](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Fu_A_Weighted_Variational_CVPR_2016_paper.pdf) [code](codes/WV_SIRE.zip) | SRIE |
| 2016 | Signal Processing | A fusion-based enhancing method for weakly illuminated images | [pdf](https://doi.org/10.1016/j.sigpro.2016.05.031) [code](codes/MF.rar) | MF |
| 2017 | IEEE TIP | LIME: Low-Light Image Enhancement via Illumination Map Estimation | [pdf](http://ieeexplore.ieee.org/document/7782813/) [code1](https://github.com/Sy-Zhang/LIME) [code2](https://github.com/estija/LIME) [code3](https://github.com/pvnieo/Low-light-Image-Enhancement) | LIME |
| 2017 | ICCV | A Joint Intrinsic-Extrinsic Prior Model for Retinex | [pdf](http://caibolun.github.io/papers/JieP.pdf) [web](http://caibolun.github.io/JieP/) [code](https://github.com/caibolun/JieP/) | JieP |
| 2018 | IEEE TIP | Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model | [pdf](https://www.microsoft.com/en-us/research/uploads/prod/2018/04/2018-TIP-Structure-Revealing-Low-Light-Image-Enhancement-Via-Robust-Retinex-Model.pdf) [code1](https://github.com/martinli0822/Low-light-image-enhancement) [code2](codes/robustRetinex.m) | |
| 2018 | BMVC | Deep Retinex Decomposition for Low-Light Enhancement | [pdf](https://arxiv.org/abs/1808.04560) [web](https://daooshee.github.io/BMVC2018website/) [code](https://github.com/weichen582/RetinexNet) | Retinex-Net |
| 2018 | Symmetry | A Smart System for Low-Light Image Enhancement with Color Constancy and Detail Manipulation in Complex Light Environments | [pdf](https://www.mdpi.com/2073-8994/10/12/718/pdf) | |
| 2019 | Symmetry | Fractional-Order Fusion Model for Low-Light Image Enhancement | [pdf](https://www.mdpi.com/2073-8994/11/4/574/pdf) | |
| 2019 | ICIP | A Hybrid L2-LP Variational Model For Single Low-Light Image Enhancement With Bright Channel Prior | [pdf](https://ieeexplore.ieee.org/document/8803197) | |
| 2019 | IET Image Proc. | Low light image enhancement based on non-uniform illumination prior model | [pdf](https://ieeexplore.ieee.org/document/8911585) | NIPM |
| 2019 | Comput. Graphics Forum | Dual illumination estimation for robust exposure correction | [pdf](https://arxiv.org/pdf/1910.13688.pdf) [code](https://github.com/pvnieo/Low-light-Image-Enhancement) | |
| 2019 | ICME | RDGAN: Retinex Decomposition Based Adversarial Learning for Low-Light Enhancement | [code](https://github.com/WangJY06/RDGAN/) [pdf](https://ieeexplore.ieee.org/document/8785047) | RDGAN |
| 2020 | ic-ETITE | A comparative analysis of illumination estimation based Image Enhancement techniques | [pdf](https://ieeexplore.ieee.org/document/9077919) | |
| 2020 | IEEE TIP | LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model | [pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9056796) | LR3M |
| 2021 | CVPR | Retinex-Inspired Unrolling with Cooperative Prior Architecture Search for Low-Light Image Enhancement | [pdf](https://openaccess.thecvf.com/content/CVPR2021/papers/Liu_Retinex-Inspired_Unrolling_With_Cooperative_Prior_Architecture_Search_for_Low-Light_Image_CVPR_2021_paper.pdf) [web](http://dutmedia.org/RUAS/) [code](https://github.com/dut-media-lab/RUAS) | RUAS |
| 2022 | CVPR | URetinex-Net: Retinex-Based Deep Unfolding Network for Low-Light Image Enhancement | [pdf](https://openaccess.thecvf.com/content/CVPR2022/html/Wu_URetinex-Net_Retinex-Based_Deep_Unfolding_Network_for_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [code](https://github.com/AndersonYong/URetinex-Net) | URetinex-Net |
| 2022 | Pattern Recognit. | Brain-like Retinex: A Biologically Plausible Retinex Algorithm for Low Light Image Enhancement | [pdf](http://dx.doi.org/10.1016/j.patcog.2022.109195) | |
| 2023 | Pattern Recognit. | A reflectance re-weighted Retinex model for non-uniform and low-light image enhancement | [pdf](https://linkinghub.elsevier.com/retrieve/pii/S0031320323005216) | |
| 2023 | Vis Comput | Illumination estimation for nature preserving low-light image enhancement | [pdf](https://link.springer.com/article/10.1007/s00371-023-02770-9) | NPLIE |
| 2023 | ICCV | Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model | [pdf](https://arxiv.org/pdf/2308.13164.pdf) | Diff-Retinex |
| 2023 | ICCV | Retinexformer: One-stage retinex-based transformer for low-light image enhancement | [pdf](https://arxiv.org/abs/2303.06705) [code](https://github.com/caiyuanhao1998/Retinexformer) | Retinexformer |
| 2025 | ICCV | GT-Mean Loss: A Simple Yet Effective Solution for Brightness Mismatch in Low-Light Image Enhancement | [pdf](https://arxiv.org/abs/2507.20148) [code](https://github.com/jingxiLiao/GT-mean-loss) | GT-Mean loss |
| 2025 | ICLR (Spotlight) | Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model | [pdf](https://arxiv.org/pdf/2311.11638) [code](https://github.com/ChunmingHe/Reti-Diff) | Reti-Diff |
| 2025 | ICIP | RT-X Net: RGB-Thermal cross attention network for Low-Light Image Enhancement | [pdf](https://arxiv.org/abs/2505.24705) [code](https://github.com/jhakrraman/rt-xnet) [web](https://sites.google.com/view/rt-xnet/home) | RT-X Net |
| 2025 | ICIP | A Retinex-Based Variational Model with A Nonlocal Gradient-Type Constraint for Low-Light Image Enhancement | [pdf](https://ieeexplore.ieee.org/abstract/document/11084565) | |
| 2025 | IJCV | Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for Low-Light Image Enhancement | [pdf](https://link.springer.com/article/10.1007/s11263-025-02551-y) [code](https://github.com/TAMI-UIB/Nonlocal-Retinex-Deep-Unfolding-Low-Light-Enhancement) | |


### Other methods

| Year | Pub             | Paper | Link | Note  |
| :--: | --------------- | ----- | ---- | ----- |
| 2008 | IET Image Proc. | Fast centre-surround contrast modification | [pdf](https://ieeexplore.ieee.org/document/4455541) | |
| 2011 | ICME | Fast efficient algorithm for enhancement of low lighting video | [pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6012107) [code](codes/XuanDong-Method.m) | |
| 2017 | ICCVW | A New Low-Light Image Enhancement Algorithm Using Camera Response Model | [pdf](http://ieeexplore.ieee.org/document/8265567/) [code](https://github.com/baidut/OpenCE/blob/master/ours/Ying_2017_ICCV.m) | |
| 2017 | ArXiv | A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement | [pdf](http://arxiv.org/abs/1711.00591) [code](https://github.com/baidut/BIMEF) | BIMEF |
| 2017 | ICCAIP | A New Image Contrast Enhancement Algorithm Using Exposure Fusion Framework | [pdf](https://link.springer.com/chapter/10.1007%2F978-3-319-64698-5_4) [web](https://baidut.github.io/OpenCE/caip2017.html) [code1](https://github.com/baidut/OpenCE/blob/master/ours/Ying_2017_CAIP.m) [code2](https://github.com/AndyHuang1995/Image-Contrast-Enhancement) | |
| 2019 | IEEE TIP | Low-Light Image Enhancement via the Absorption Light Scattering Model | [pdf](https://doi.org/10.1109/TIP.2019.2922106) | ALSM |
| 2019 | ICIP | Fast Image Enhancement Based on Maximum and Guided Filters | [pdf](https://ieeexplore.ieee.org/document/8803591) | |
| 2025 | IJCV | A Traditional Approach for Color Constancy and Color Assimilation Illusions with Its Applications to Low-Light Image Enhancement | [pdf](https://link.springer.com/article/10.1007/s11263-025-02595-0) | |


## Related Works

| Year | Pub      | Paper | Link | Note        | Tag              |
| :--: | -------- | ----- | ---- | ----------- | ---------------- |
| 2012 | IST | Improving the robustness in feature detection by local contrast enhancement | [dataset](https://sites.google.com/site/vonikakis/datasets) | | |
| 2015 | ACM ToG | Automatic Photo Adjustment Using Deep Neural Networks | [web](https://sites.google.com/site/homepagezhichengyan/home/dl_img_adjust) [code](https://github.com/stephenyan1984/dl-image-enhance/wiki) [pdf](https://arxiv.org/abs/1412.7725v2) | | |
| 2018 | CVPR | Distort-and-Recover: Color Enhancement using Deep Reinforcement Learning | [code](https://sites.google.com/view/distort-and-recover/) [pdf](https://doi.org/10.1109/CVPR.2018.00621) | | |
| 2021 | TMM | Recurrent exposure generation for low-light face detection | [pdf](https://arxiv.org/abs/2007.10963) [code](https://github.com/sherrycattt/REGDet) | REGDet | face detection |
| 2021 | CVPR | HLA-Face: Joint High-Low Adaptation for Low Light Face Detection | [web](https://daooshee.github.io/HLA-Face-Website/) [pdf](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_HLA-Face_Joint_High-Low_Adaptation_for_Low_Light_Face_Detection_CVPR_2021_paper.pdf) [code](https://github.com/daooshee/HLA-Face-Code) | HLA-Face | face detection |
| 2021 | ICCV | Multitask AET With Orthogonal Tangent Regularity for Dark Object Detection | [pdf](https://openaccess.thecvf.com/content/ICCV2021/papers/Cui_Multitask_AET_With_Orthogonal_Tangent_Regularity_for_Dark_Object_Detection_ICCV_2021_paper.pdf) [code](https://github.com/cuiziteng/ICCV_MAET) | MAET | object detection |
| 2021 | ICCV | Photon-Net: Photon-Starved Scene Inference using Single Photon Cameras | [pdf](https://openaccess.thecvf.com/content/ICCV2021/papers/Goyal_Photon-Starved_Scene_Inference_Using_Single_Photon_Cameras_ICCV_2021_paper.pdf) [code](https://github.com/bhavyagoyal/spclowlight/) [video](https://www.youtube.com/watch?v=r1YvHnGbi6k) | Photon-Net | single photon |
| 2021 | ICCVW | Single-Stage Face Detection under Extremely Low-Light Conditions | [pdf](https://openaccess.thecvf.com/content/ICCV2021W/RLQ/papers/Yu_Single-Stage_Face_Detection_Under_Extremely_Low-Light_Conditions_ICCVW_2021_paper.pdf) | | face detection |
| 2021 | ICCVW | DeLiEve-Net: Deblurring Low-Light Images with Light Streaks and Local Events | [pdf](https://www.semanticscholar.org/paper/DeLiEve-Net%3A-Deblurring-Low-light-Images-with-Light-Zhou-Teng/105bf9ccbc749d976ab1f4b455d379f30b1d6508) | DeLiEve-Net | event camera |
| 2022 | ArXiv | An Efficient Low-Light Restoration Transformer for Dark Light Field Images | | LRT | light field |
| 2022 | ICCP | Robust Scene Inference under Noise-Blur Dual Corruptions | [pdf](https://arxiv.org/abs/2207.11643) [code](https://github.com/bhavyagoyal/noiseblurdual) 
[web](https:\u002F\u002Fwisionlab.com\u002Fproject\u002Fnoiseblurdual\u002F) | Noise-Blur Dual  | object detection\n| 2023 | ICCV     | FeatEnHancer: Enhancing Hierarchical Features for Object Detection and Beyond Under Low-Light Vision | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FHashmi_FeatEnHancer_Enhancing_Hierarchical_Features_for_Object_Detection_and_Beyond_Under_ICCV_2023_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002FkhurramHashmi\u002FFeatEnHancer) [Web](https:\u002F\u002Fkhurramhashmi.github.io\u002FFeatEnHancer\u002F)| FeatEnHancer        | object detection and semantic segmentation |\n| 2023 | IEEE TIP | INFWIDE: Image and feature space wiener deconvolution network for non-blind image deblurring in low-light conditions | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10047966) [code](https:\u002F\u002Fgithub.com\u002Fzhihongz\u002FINFWIDE) | INFWIDE  | deblurring |\n| 2024 |  AAAI    | Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09093) [code](https:\u002F\u002Fgithub.com\u002Fcuiziteng\u002FAleth-NeRF) [web](https:\u002F\u002Fcuiziteng.github.io\u002FAleth_NeRF_web\u002F) |   Aleth-NeRF   | NeRF |\n\n\n## Metrics\n\n| Metric               |  Abbr  | Full-\u002FNon-Reference        | Link             | \n| :------------------: | ------ | ------------------------   | ---------------- |\n| Peak Signal to Noise Ratio | PSNR | Full-Reference |- |\n| Structural Similarity Index Measure | SSIM| Full-Reference | - |\n| Learned Perceptual Image Patch Similarity | LPIPS | Full-Reference | [code](https:\u002F\u002Fgithub.com\u002Frichzhang\u002FPerceptualSimilarity) |\n| Lightness Order Error | LOE |  Non-Reference | [paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F6512558) |\n| Natural Image Quality Evaluator | NIQE  | Non-Reference | 
[paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F6353522)|\n| Mean Square Error | MSE | Full-Reference | - |\n| Mean Absolute Error | MAE | Full-Reference | - |\n| Smartphone Photography Attribute and Quality | SPAQ | Non-Reference | [code](https:\u002F\u002Fgithub.com\u002Fh4nwei\u002FSPAQ) |\n| Neural Image Assessment | NIMA | Non-Reference | [pytorch](https:\u002F\u002Fgithub.com\u002Fkentsyx\u002FNeural-IMage-Assessment) [tensorflow](https:\u002F\u002Fgithub.com\u002Ftitu1994\u002Fneural-image-assessment)|\n| Multi-scale Image Quality Transformer | MUSIQ | Non-Reference | [code](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fmusiq) |\n\n\n\n## More reference\n\n- https:\u002F\u002Fgithub.com\u002Fbaidut\u002FOpenCE\n- https:\u002F\u002Fgithub.com\u002Ftiandaoxiaowu\u002Fimage-enhancement-about-Retinex\n- https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FLighting-the-Darkness-in-the-Deep-Learning-Era-Open\n\n","# 优秀的低光图像增强\n\n**这是一个关于低光图像增强的资源列表，包括数据集、方法\u002F代码\u002F论文、评估指标等。**\n\n期待您的分享！您可以在[issue](https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues)中提出您的想法和建议，或直接提交[pull request](https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fpulls)。\n\n\n## 简介\n\n低光成像和低光图像增强在我们的日常生活中以及不同的科学研究领域有着广泛的应用，例如夜间监控、自动驾驶、荧光显微镜成像、高速成像等。然而，由于低光条件下光子数少、信噪比低、噪声模型复杂等诸多挑战，这些任务仍有很长的路要走。在此，我们整理了一份与低光图像增强相关的资源清单，涵盖数据集、方法\u002F代码\u002F论文、评估指标等内容。希望这份资源能够为开发新的方法和解决方案提供帮助，推动低光任务的研究进展。\n\n\n\n## 目录\n\n- [亮点](#highlights)\n- [数据集](#datasets)\n- [综述与基准测试](#review-and-benchmark)\n- [方法](#methods)\n  * [基于学习的方法](#learning-based-methods)\n  * [基于直方图均衡化的方法](#he-based-methods)\n  * [基于Retinex的方法](#retinex-based-methods)\n  * [其他方法](#other-methods)\n- [相关工作](#related-works)\n- [评估指标](#metrics)\n- [更多参考](#more-reference)\n\n\n\n## 亮点\n\n:high_brightness: \u003Cfont color='red'> **新闻！** \u003C\u002Ffont>\n\n\n\n\n## 数据集\n\n|              数据集              | 
                        简介                          |                           网站                            |\n| :-------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |\n| [SID](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FChen_Learning_to_See_CVPR_2018_paper.html) | 学习在黑暗中看清；\u003Cbr \u002F>包含5094张原始短曝光图像，每张都有一张对应的长曝光参考图像（光照水平：室外场景0.2 lux - 5 lux；室内场景：0.03 lux - 0.3 lux） |  [链接](https:\u002F\u002Fcchen156.github.io\u002FSID.html)  |\n| [ExDARK](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS1077314218304296)  | 一个包含7,363张低光照图像的数据集，涵盖了从极低光照环境到黄昏的各种条件（共10种），并标注了12个物体类别（类似于PASCAL VOC），包括图像级别和局部物体边界框。 | [github](https:\u002F\u002Fgithub.com\u002Fcs-chan\u002FExclusively-Dark-Image-Dataset) |\n|    [LOL](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04560)   |     用于低光照增强的深度视网膜分解     |      [链接](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website)      |\n|  [SICE](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8259342\u002F)   | 一个大规模多曝光图像数据集，包含589组精心挑选的高分辨率多曝光序列，共计4,413张图像 | [github](https:\u002F\u002Fgithub.com\u002Fcsjcai\u002FSICE)     |\n| [MIT-Adobe FiveK](http:\u002F\u002Fpeople.csail.mit.edu\u002Fvladb\u002Fphotoadjust\u002Fdb_imageadjust.pdf) | 学习摄影全局色调调整；\u003Cbr \u002F>该数据集由5,000张照片组成，既有相机直出的原始RAW图像，也有5位专业摄影师调整后的版本 |  [链接](https:\u002F\u002Fdata.csail.mit.edu\u002Fgraphics\u002Ffivek) |\n|  [DID](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FFu_Dancing_in_the_Dark_A_Benchmark_towards_General_Low-light_Video_ICCV_2023_paper.pdf)  |  一个高质量的多曝光、多摄像头低光照视频数据集  | [链接](https:\u002F\u002Fgithub.com\u002Fciki000\u002FDID#dancing-in-the-dark-a-benchmark-towards-general-low-light-video-enhancement) |\n|               DPED                | 使用深度卷积网络在移动设备上生成单反相机质量的照片 |          
[链接](http:\u002F\u002Fpeople.ee.ethz.ch\u002F~ihnatova)          |\n|           VIP-LowLight            |  八张在极低光照条件下拍摄的自然图像  | [链接](https:\u002F\u002Fuwaterloo.ca\u002Fvision-image-processing-lab\u002Fresearch-demos\u002Fvip-lowlight-dataset) |\n|              ReNOIR               | RENOIR - 一个用于真实低光照图像降噪的数据集 | [链接](http:\u002F\u002Fadrianbarburesearch.blogspot.com\u002Fp\u002Frenoir-dataset.html) |\n|   [LLIV-Phone](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.10729)   |  图像和视频由各种手机摄像头在不同光照条件和场景下拍摄 | [链接](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QS4FgT5aTQNYy-eHZ_A89rLoZgx_iysR\u002Fview?usp=sharing)\n|   [TM-DIED](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fvonikakis\u002Fdatasets\u002Ftm-died?authuser=0) | 222张JPEG照片，构成了图像增强和色调映射算法中最具挑战性的案例之一 | [链接](https:\u002F\u002Fwww.google.com\u002Furl?q=https%3A%2F%2Fwww.flickr.com%2Fgp%2F73847677%40N02%2FGRn3G6&sa=D&sntz=1&usg=AOvVaw3mOxOzBNN3OY1jKiRfVN7C) |\n| [DRV](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FChen_Seeing_Motion_in_the_Dark_ICCV_2019_paper.html) | 202对配对的低光照原始图像数据集 | [链接](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FSeeing-Motion-in-the-Dark)\n|   [LIME](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7782813)   | 一小部分未配对的测试图像。 | [链接](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F0BwVzAzXoqrSXb3prWUV1YzBjZzg\u002Fview)  |\n|   [VV - Phos](https:\u002F\u002Frobotics.pme.duth.gr\u002Fresearch\u002Fphos\u002F) | 一个包含15个场景的彩色图像数据库，在不同光照条件下拍摄 |   [链接](http:\u002F\u002Frobotics.pme.duth.gr\u002Fphos2.html)        |\n|         The 500px Dataset         |    曝光：一个白盒式照片后期处理框架     |                              -                               |\n| 扩展耶鲁人脸数据库B | 扩展耶鲁人脸数据库B包含28个人在9种姿态和64种光照条件下的16128张图像。 | [链接](http:\u002F\u002Fvision.ucsd.edu\u002F~iskwak\u002FExtYaleDatabase\u002FExtYaleB.html) |\n|    夜间图像数据集    | 一个包含能见度较差的源图像及其通过不同增强算法处理后的增强图像的数据集 |              [链接](http:\u002F\u002Fmlg.idm.pku.edu.cn\u002F)           
   |\n|              VE-LOL               | 一个大规模低光照图像数据集，服务于低\u002F高阶视觉任务，涵盖多样化的场景和内容，并模拟真实场景中的复杂退化现象，称为低光照条件下的视觉增强（VE-LOL）。 |   [链接](https:\u002F\u002Fflyywh.github.io\u002FIJCV2021LowLight_VELOL\u002F)   |\n|           SDSD           | 在黑暗中观察动态场景：具有机电对齐的高质量视频数据集 |       [github](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSDSD)       |\n|                MID                | 黑暗中的匹配：一个用于低光照场景图像对匹配的数据集 |    [链接](https:\u002F\u002Fwenzhengchina.github.io\u002Fprojects\u002Fmid\u002F)     |\n|       DeepHDRVideo        | HDR视频重建：一种由粗到细的网络及其实景基准数据集 |  [链接](https:\u002F\u002Fguanyingc.github.io\u002FDeepHDRVideo-Dataset\u002F)   |\n|               LLVIP               | LLVIP：一个可见光-红外配对的低光照视觉数据集 |         [链接](https:\u002F\u002Fbupt-ai-cz.github.io\u002FLLVIP\u002F)          |\n|             RELLISUR              |  RELLISUR：一个真实的低光照图像超分辨率数据集   |             [链接](https:\u002F\u002Fvap.aau.dk\u002Frellisur\u002F)             |\n|               LSRW                | R2RNet：通过真实低到真实正常光照网络进行低光照图像增强；\u003Cbr \u002F>使用尼康相机的3170对配对图像，以及使用华为手机的2480对配对图像。 |    [github](https:\u002F\u002Fgithub.com\u002Fabcdef2000\u002FR2RNet#dataset)    |\n|                MCR                | 单色原始配对数据集；\u003Cbr \u002F>一组彩色与单色原始图像对，采用相同的曝光设置拍摄。每张图像分辨率为1280×1024。共有498个不同场景，每个场景对应1组RGB和单色真值图像，以及8种不同曝光的原始色彩输入。 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1_GWW1P1kjVBMFfN9AuaFq29w-kQ31ncd\u002Fview?usp=sharing) [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1b3cmUenebeDT_8HdLGa9dQ?from=init&pwd=22cv) |\n|    原始图像低光照对象     |                              -                               |    [链接](https:\u002F\u002Fwiki.qut.edu.au\u002Fdisplay\u002Fcyphy\u002FDatasets)    |\n|             LRAICE                |   一种用于图像色彩增强的排序学习方法    |                              -                               |\n|            LOM数据集            |  一对低光照、过曝和正常光照的多视角数据集（用于低光照条件下的NeRF） | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1orgKEGApjwCm6G8xaupwHKxMbT2s9IAG\u002Fview?usp=sharing) [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BGfstg2IpN0JZBlVaMG-eQ?pwd=ve1t) |\n\n## 综述与基准测试\n\n| 年份 | 出版物       | 论文                                                        | 链接                                                         | 备注       |\n| :--: | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------- |\n| 2021 | IJCV      | 低光照图像增强及其扩展领域的基准测试                          | [pdf](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007%2Fs11263-020-01418-8) |            |\n| 2021 | IEEE PAMI | 基于深度学习的低光照图像与视频增强：综述                    | [pdf](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTPAMI.2021.3126387)            |            |\n| 2022 | ArXiv     | 低光照图像与视频增强：全面综述及未来展望                      | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10772) [代码](https:\u002F\u002Fgithub.com\u002Fshenzheng2000\u002Fllie_survey) |            |\n| 2023 | ArXiv     | DarkVision：低光照图像\u002F视频感知任务的基准测试                  | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06269)                      | DarkVision |\n| 2023 | Signal Process. 
| 基于实验的低光照图像增强方法综合评述及低光照图像质量评估基准测试 | [pdf](https:\u002F\u002Flinkinghub.elsevier.com\u002Fretrieve\u002Fpii\u002FS0165168422003607) |            |\n\n\n## 方法\n\n### 基于学习的方法\n\n| Year | Pub                     | Paper                                                        | Link                                                         | Note                 |\n| ---- | ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------- |\n| 2017 | ArXiv                   | MSR-net:Low-light Image Enhancement Using Deep Convolutional Network | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.02488v1.pdf)                | MSR-net              |\n| 2017 | ECCV                    | Deep Burst Denoising                                         | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F1712.05790)                       |                      |\n| 2017 | VCIP                    | LLCNN: A convolutional neural network for low-light image enhancement | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8305143) [dataset](http:\u002F\u002Fdecsai.ugr.es\u002Fcvg\u002Fdbimagenes\u002F) | LLCNN                |\n| 2017 | Pattern Recognit.       | LLNet: A deep autoencoder approach to natural low-light image enhancement | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.patcog.2016.06.008)          | LLNet                |\n| 2017 | ACM Trans. Graph.       
| Deep bilateral learning for real-time image enhancement      | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.02880) [web](https:\u002F\u002Fgroups.csail.mit.edu\u002Fgraphics\u002Fhdrnet\u002F) [code](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fhdrnet) | HDRNet               |\n| 2017 | ICCV                    | DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02470)                      |                      |\n| 2018 | BMVC                    | Deep Retinex Decomposition for Low-Light Enhancement         | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04560) [web](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website\u002F) [code](https:\u002F\u002Fgithub.com\u002Fweichen582\u002FRetinexNet) | Retinex-Net          |\n| 2018 | BMVC                    | MBLLEN: Low-light Image\u002FVideo Enhancement Using CNNs         | [pdf](http:\u002F\u002Fbmvc2018.org\u002Fcontents\u002Fpapers\u002F0700.pdf) [web](http:\u002F\u002Fphi-ai.org\u002Fproject\u002FMBLLEN\u002Fdefault.htm) [code](https:\u002F\u002Fgithub.com\u002FLvfeifan\u002FMBLLEN) | MBLLEN               |\n| 2018 | Pattern Recognit. Lett. 
| LightenNet: A Convolutional Neural Network for weakly illuminated image enhancement | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.patrec.2018.01.010)          | LightenNet           |\n| 2018 | CVPR                    | Learning to See in the Dark                                  | [pdf](https:\u002F\u002Fcchen156.github.io\u002Fpaper\u002F18CVPR_SID.pdf) [web](https:\u002F\u002Fcchen156.github.io\u002FSID.html) [code](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FLearning-to-See-in-the-Dark.git) [dataset](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FLearning-to-See-in-the-Dark) |                      |\n| 2018 | IEEE TIP                | Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images | [pdf](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTIP.2018.2794218) [code](https:\u002F\u002Fgithub.com\u002Fcsjcai\u002FSICE) | SICE                 |\n| 2018 | ACM TOG                 | Exposure: A White-Box Photo Post-Processing Framework        | [pdf](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3181974) [code](https:\u002F\u002Fgithub.com\u002Fyuanming-hu\u002Fexposure) |                      |\n| 2018 | FG conference           | GLADNet: Low-Light Enhancement Network with Global Awareness | [pdf](https:\u002F\u002Fgithub.com\u002Fdaooshee\u002Ffgworkshop18Gladnet\u002Fblob\u002Fmaster\u002Fwwj_fg2018.pdf)  [web](https:\u002F\u002Fdaooshee.github.io\u002Ffgworkshop18Gladnet\u002F) [code](https:\u002F\u002Fgithub.com\u002Fweichen582\u002FGLADNet) [dataset](https:\u002F\u002Fdaooshee.github.io\u002Ffgworkshop18Gladnet\u002F) | GLADNet              |\n| 2019 | IEEE TIP                | DeepISP: Towards Learning an End-to-End Image Processing Pipeline | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.06724)                      | DeepISP              |\n| 2019 | IEEE TIP                | Low-Light Image Enhancement via a Deep Hybrid Network        | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8692732)          |      
                |\n| 2019 | IEEE TIP                | EnlightenGAN: Deep Light Enhancement without Paired Supervision | [code](https:\u002F\u002Fgithub.com\u002FTAMU-VITA\u002FEnlightenGAN) [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.06972) | EnlightenGAN         |\n| 2019 | ACM MM                  | Kindling the Darkness: A Practical Low-light Image Enhancer  | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F1905.04161) [code](https:\u002F\u002Fgithub.com\u002Fzhangyhuaee\u002FKinD) [code+](https:\u002F\u002Fgithub.com\u002Fzhangyhuaee\u002FKinD_plus) | KinD                 |\n| 2019 | IEEE Access             | A Pipeline Neural Network for Low-Light Image Enhancement    | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8607964\u002F)         |                      |\n| 2019 | Neurocomputing          | Learning Digital Camera Pipeline for Extreme Low-Light Imaging | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.05939)                      |                      |\n| 2019 | CVPR                    | Underexposed Photo Enhancement Using Deep Illumination Estimation | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8953588) [code](https:\u002F\u002Fgithub.com\u002Fwangruixing\u002FDeepUPE) | DeepUPE              |\n| 2019 | ICCV                    | Enhancing Low Light Videos by Exploring High Sensitivity Camera Noise | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9011000)          |                      |\n| 2019 | ICIP                    | Enhancement of Weakly Illuminated Images by Deep Fusion Networks | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803041)          |                      |\n| 2019 | ICCP                    | A Bit Too Much? 
High Speed Imaging from Sparse Photon Counts | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8747325\u002F)         |                      |\n| 2019 | ICIP                    | Llrnet: A Multiscale Subband Learning Approach for Low Light Image Restoration | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803765)          | Llrnet               |\n| 2019 | ICIP                    | Low-Lightgan: Low-Light Enhancement Via Advanced Generative Adversarial Network With Task-Driven Training | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803328)          | Low-Lightgan         |\n| 2019 | ICME                    | RDGAN: Retinex Decomposition Based Adversarial Learning for Low-Light Enhancement | [code](https:\u002F\u002Fgithub.com\u002FWangJY06\u002FRDGAN\u002F) [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8785047) | RDGAN                |\n| 2019 | ICMEW                   | Low-Light Image Enhancement with Attention and Multi-level Feature Fusion | [pdf](http:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8794872) |                      |\n| 2019 | PRCV                    | An Effective Network with ConvLSTM for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fdoi.org\u002F10.1007\u002F978-3-030-31723-2_19)          |                      |\n| 2019 | VISIGRAPP               | End-to-End Denoising of Dark Burst Images Using Recurrent Fully Convolutional Networks | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.07483v1)                    |                      |\n| 2020 | CVPR                    | Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FGuo_Zero-Reference_Deep_Curve_Estimation_for_Low-Light_Image_Enhancement_CVPR_2020_paper.pdf) [web](https:\u002F\u002Fli-chongyi.github.io\u002FProj_Zero-DCE.html) 
[code](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FZero-DCE) | Zero-DCE             |\n| 2020 | CVPR                    | Learning to Restore Low-Light Images via Decomposition-and-Enhancement | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9156446)          |                      |\n| 2020 | CVPR                    | From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement | [pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FYang_From_Fidelity_to_Perceptual_Quality_A_Semi-Supervised_Approach_for_Low-Light_CVPR_2020_paper.pdf) [web](https:\u002F\u002Fgithub.com\u002Fflyywh\u002FCVPR-2020-Semi-Low-Light) [slides](https:\u002F\u002Fgithub.com\u002Fflyywh\u002FCVPR-2020-Semi-Low-Light\u002Fblob\u002Fmaster) | DRBN                 |\n| 2020 | CVPR                    | DeepLPF: Deep Local Parametric Filters for Image Enhancement | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.13985) [code](https:\u002F\u002Fgithub.com\u002Fsjmoran\u002FDeepLPF) | DeepLPF              |\n| 2020 | IEEE PAMI               | Learning Image-adaptive 3D Lookup Tables for High Performance Photo Enhancement in Real-time | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9206076) [code](https:\u002F\u002Fgithub.com\u002FHuiZeng\u002FImage-Adaptive-3DLUT) | Image-Adaptive-3DLUT |\n| 2020 | IET Image Proc.         
| Learning an Adaptive Model for Extreme Low-Light Raw Image Processing | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10447.pdf) [code](https:\u002F\u002Fgithub.com\u002F505030475\u002FExtremeLowLight) |                      |\n| 2020 | ArXiv                   | Visual Perception Model for Rapid and Adaptive Low-light Image Enhancement | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F2005.07343) [code](https:\u002F\u002Fgithub.com\u002FMDLW\u002FLow-Light-Image-Enhancement) |                      |\n| 2020 | ArXiv                   | Self-supervised Image Enhancement Network: Training with Low Light Images Only | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.11300) [code](https:\u002F\u002Fgithub.com\u002Fhitzhangyu\u002FSelf-supervised-Image-Enhancement-Network-Training-With-Low-Light-Images-Only) |                      |\n| 2020 | ICPR                    | Unsupervised Real-world Low-light Image Enhancement with Decoupled Networks | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F2005.02818)                       |                      |\n| 2021 | IJCV                    | Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset | [pdf](https:\u002F\u002Flink.springer.com\u002F10.1007\u002Fs11263-021-01466-8) [code](https:\u002F\u002Fgithub.com\u002Fyu-li\u002FAGLLNet) |                      |\n| 2021 | CVPR                    | Retinex-Inspired Unrolling with Cooperative Prior Architecture Search for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_Retinex-Inspired_Unrolling_With_Cooperative_Prior_Architecture_Search_for_Low-Light_Image_CVPR_2021_paper.pdf) [web](http:\u002F\u002Fdutmedia.org\u002FRUAS\u002F) [code](https:\u002F\u002Fgithub.com\u002Fdut-media-lab\u002FRUAS) | RUAS                 |\n| 2021 | CVPR                    | Deep Denoising of Flash and No-Flash Pairs for Photography in Low-Light Environments | 
[pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FXia_Deep_Denoising_of_Flash_and_No-Flash_Pairs_for_Photography_in_CVPR_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Flikesum\u002FdeepFnF) |                      |\n| 2021 | CVPR                    | Extreme Low-Light Environment-Driven Image Denoising over Permanently Shadowed Lunar Regions with a Physical Noise Model | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FMoseley_Extreme_Low-Light_Environment-Driven_Image_Denoising_Over_Permanently_Shadowed_Lunar_Regions_CVPR_2021_paper.pdf) | HORUS                |\n| 2021 | CVPR                    | Learning Temporal Consistency for Low Light Video Enhancement from Single Images | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_Learning_Temporal_Consistency_for_Low_Light_Video_Enhancement_From_Single_CVPR_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fzkawfanx\u002FStableLLVE) |                      |\n| 2021 | CVPR                    | Nighttime Visibility Enhancement by Increasing the Dynamic Range and Suppression of Light Effects | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FSharma_Nighttime_Visibility_Enhancement_by_Increasing_the_Dynamic_Range_and_Suppression_CVPR_2021_paper.pdf) |                      |\n| 2021 | ICCV                    | Seeing Dynamic Scene in the Dark: A High-Quality Video Dataset with Mechatronic Alignment | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWang_Seeing_Dynamic_Scene_in_the_Dark_A_High-Quality_Video_Dataset_ICCV_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSDSD) | SDSD                 |\n| 2021 | ICCV                    | HDR Video Reconstruction: A Coarse-to-Fine Network and a Real-World Benchmark Dataset | 
[pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FChen_HDR_Video_Reconstruction_A_Coarse-To-Fine_Network_and_a_Real-World_Benchmark_ICCV_2021_paper.pdf) [web](https:\u002F\u002Fguanyingc.github.io\u002FDeepHDRVideo) [code](https:\u002F\u002Fgithub.com\u002Fguanyingc\u002FDeepHDRVideo) | DeepHDRVideo         |\n| 2021 | ICCV                    | Matching in the Dark: A Dataset for Matching Image Pairs of Low-Light Scenes | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FSong_Matching_in_the_Dark_A_Dataset_for_Matching_Image_Pairs_ICCV_2021_paper.pdf) [web](https:\u002F\u002Fwenzhengchina.github.io\u002Fprojects\u002Fmid\u002F) [code](https:\u002F\u002Fgithub.com\u002FWenzhengchina\u002FMatching-in-the-Dark) | MID                  |\n| 2021 | ICCV                    | Adaptive Unfolding Total Variation Network for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZheng_Adaptive_Unfolding_Total_Variation_Network_for_Low-Light_Image_Enhancement_ICCV_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002FCharlieZCJ\u002FUTVNet\u002Ftree\u002F5e76495bf371371a7fc63a521fb6dd9de35ee241) | UTVNet               |\n| 2021 | ICCVW                   | LLVIP: A Visible-Infrared Paired Dataset for Low-Light Vision | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021W\u002FRLQ\u002Fpapers\u002FJia_LLVIP_A_Visible-Infrared_Paired_Dataset_for_Low-Light_Vision_ICCVW_2021_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fbupt-ai-cz\u002FLLVIP) [web](https:\u002F\u002Fbupt-ai-cz.github.io\u002FLLVIP\u002F) | LLVIP                |\n| 2021 | JVCIR                   | R2RNet: Low-Light Image Enhancement via Real-Low to Real-Normal Network | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F2106.14501) [code](https:\u002F\u002Fgithub.com\u002Fabcdef2000\u002FR2RNet) | R2RNet               |\n| 2022 | CVPR              
      | Toward Fast, Flexible, and Robust Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FMa_Toward_Fast_Flexible_and_Robust_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [code](https:\u002F\u002Fgithub.com\u002Fvis-opt-group\u002FSCI) | SCI          |\n| 2022 | CVPR                    | Deep Color Consistent Network for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhang_Deep_Color_Consistent_Network_for_Low-Light_Image_Enhancement_CVPR_2022_paper.html) | DCC-Net              |\n| 2022 | CVPR                    | URetinex-Net: Retinex-Based Deep Unfolding Network for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FWu_URetinex-Net_Retinex-Based_Deep_Unfolding_Network_for_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [code](https:\u002F\u002Fgithub.com\u002FAndersonYong\u002FURetinex-Net) | URetinex-Net         |\n| 2022 | CVPR                    | Day-to-Night Image Synthesis for Training Nighttime Neural ISPs | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FPunnappurath_Day-to-Night_Image_Synthesis_for_Training_Nighttime_Neural_ISPs_CVPR_2022_paper.html) [code](https:\u002F\u002Fgithub.com\u002FSamsungLabs\u002Fday-to-night) |                      |\n| 2022 | CVPR                    | SNR-Aware Low-Light Image Enhancement                        | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FXu_SNR-Aware_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [code](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSNR-Aware-Low-Light-Enhance) |               |\n| 2022 | CVPR                    | Dancing Under the Stars: Video Denoising in Starlight        | 
[pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FMonakhova_Dancing_Under_the_Stars_Video_Denoising_in_Starlight_CVPR_2022_paper.html) |          |\n| 2022 | CVPR                    | Abandoning the Bayer-Filter To See in the Dark               | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FDong_Abandoning_the_Bayer-Filter_To_See_in_the_Dark_CVPR_2022_paper.html) [code](https:\u002F\u002Fgithub.com\u002FTCL-AILab\u002FAbandon_Bayer-Filter_See_in_the_Dark) |   |\n| 2022 | ECCV                    | Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.10564.pdf) [code](https:\u002F\u002Fgithub.com\u002Fjinyeying\u002Fnight-enhancement) |                      |\n| 2022 | ECCV                    | Deep Fourier-Based Exposure Correction Network with Spatial-Frequency Interaction | [pdf](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136790159.pdf) [code](https:\u002F\u002Fgithub.com\u002FKevinJ-Huang\u002FFECNet) |                      |\n| 2022 | ECCV                    | LEDNet: Joint Low-Light Enhancement and Deblurring in the Dark | [pdf](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-20068-7_33) [code](https:\u002F\u002Fgithub.com\u002Fsczhou\u002FLEDNet) | LEDNet               |\n| 2022 | AAAI                    | Low-Light Image Enhancement with Normalizing Flow            | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.05923.pdf) [code](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FLLFlow) [web](https:\u002F\u002Fwyf0912.github.io\u002FLLFlow\u002F) | LLFlow               |\n| 2022 | AAAI                    | Semantically contrastive learning for low-light image enhancement | [pdf](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F20046) 
[code](https:\u002F\u002Fgithub.com\u002FLingLIx\u002FSCL-LLE) [web](https:\u002F\u002Fdongl-group.github.io\u002Fproject_pages\u002FSCL-LLE.html) |        SCL-LLE              |\n| 2022 | AAAI                    | DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep Inconsistency Prior | [pdf](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v36i1.19995)              | DarkVisionNet        |\n| 2022 | ACM MM                  | ChebyLighter: Optimal Curve Estimation for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3503161.3548135) [code](https:\u002F\u002Fgithub.com\u002Feeerpjw\u002FChebyLighter)        | ChebyLighter         |\n| 2022 | BMVC                    | You only need 90K parameters to adapt light: a light weight transformer for image enhancement and exposure correction | [pdf](https:\u002F\u002Fbmvc2022.mpi-inf.mpg.de\u002F0238.pdf) [code](https:\u002F\u002Fgithub.com\u002Fcuiziteng\u002FIllumination-Adaptive-Transformer) | IAT  |\n| 2022 | IJCV                    | Low-Light Image Enhancement via Breaking down the Darkness   | [pdf](http:\u002F\u002Fdx.doi.org\u002F10.1007\u002Fs11263-022-01667-9) [code](https:\u002F\u002Fgithub.com\u002Fmingcv\u002FBread) | Bread                |\n| 2022 | Neurocomputing          | Low-Light Image Enhancement with Knowledge Distillation      | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.neucom.2022.10.083)          |                      |\n| 2022 | Neurocomputing          | LSR: Lightening Super-Resolution Deep Network for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.neucom.2022.07.058)          | LSR                  |\n| 2022 | Pattern Recognit.       | Brain-like Retinex: A Biologically Plausible Retinex Algorithm for Low Light Image Enhancement | [pdf](http:\u002F\u002Fdx.doi.org\u002F10.1016\u002Fj.patcog.2022.109195)        |                      |\n| 2022 | Pattern Recognit.       
| LAE-Net: A Locally-Adaptive Embedding Network for Low-Light Image Enhancement | [pdf](http:\u002F\u002Fdx.doi.org\u002F10.1016\u002Fj.patcog.2022.109039)        | LAE-Net              |\n| 2022 | Knowl-Based Syst        | LE-GAN: Unsupervised Low-Light Image Enhancement Network Using Attention Module and Identity Invariant Loss | [pdf](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0950705121011151) | LE-GAN               |\n| 2022 | Opt. Lasers Eng.        | Infrared and Low-Light Visible Image Fusion Based on Hybrid Multiscale Decomposition and Adaptive Light Adjustment | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.optlaseng.2022.107268)       |                      |\n| 2022 | Applied Soft Computing  | A predictive intelligence approach for low-light enhancement | [pdf](https:\u002F\u002Flinkinghub.elsevier.com\u002Fretrieve\u002Fpii\u002FS1568494622004173) |                      |\n| 2022 | IEEE TMM  | Purifying Low-light Images via Near-Infrared Enlightened Image | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9999306\u002F) |                      |\n| 2022 | IEEE TNNLS  | DRLIE: Flexible Low-Light Image Enhancement via Disentangled Representations | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9833451\u002F) |                      |\n| 2022 | IEEE TCSVT  | EFINet: Restoration for Low-Light Images via Enhancement-Fusion Iterative Network | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9849123\u002F) [code](https:\u002F\u002Fgithub.com\u002Fkyrie111\u002FEFINet) |      EFINet      |\n| 2023 | Information Fusion      | A Mutually Boosting Dual Sensor Computational Camera for High Quality Dark Videography | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.inffus.2023.01.013) [code](https:\u002F\u002Fgithub.com\u002Fjarrycyx\u002Fdual-channel-low-light-video-public) | DCMAN                |\n| 2023 | Pattern Recognit.      
| TreEnhance: A tree search method for low-light image enhancement | [pdf](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0031320322007282?via%3Dihub) [code](https:\u002F\u002Fgithub.com\u002FOcraM17\u002FTreEnhance) | TreEnhance                |\n| 2023 | AAAI                    | Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.11548) [code](https:\u002F\u002Fgithub.com\u002FTaoWangzj\u002FLLFormer) [web](https:\u002F\u002Ftaowangzj.github.io\u002Fprojects\u002FLLFormer\u002F) |         |\n| 2023 | AAAI                    | Low-Light Video Enhancement with Synthetic Event Guidance | [pdf](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F25257) |                      |\n| 2023 | AAAI                    | Polarization-Aware Low-Light Image Enhancement | [pdf](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F25486) [code](https:\u002F\u002Fgithub.com\u002Ffourson\u002FPolarization-Aware-Low-Light-Image-Enhancement) |                      |\n| 2023 | CVPR                    | DNF: Decouple and feedback network for seeing in the dark | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FJin_DNF_Decouple_and_Feedback_Network_for_Seeing_in_the_Dark_CVPR_2023_paper.html) [code](https:\u002F\u002Fgithub.com\u002Fsrameo\u002Fdnf) | DNF |\n| 2023 | CVPR                    | Learning a simple low-light image enhancer from paired low-light instances | [pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FFu_Learning_a_Simple_Low-Light_Image_Enhancer_From_Paired_Low-Light_Instances_CVPR_2023_paper.html) [code](https:\u002F\u002Fgithub.com\u002Fzhenqifu\u002Fpairlie) | PairLIE |\n| 2023 | CVPR                    | Learning semantic-aware knowledge guidance for low-light image enhancement | 
[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FWu_Learning_Semantic-Aware_Knowledge_Guidance_for_Low-Light_Image_Enhancement_CVPR_2023_paper.html) [code](https:\u002F\u002Fgithub.com\u002Flangmanbusi\u002Fsemantic-aware-low-light-image-enhancement) |  SKF |\n| 2023 | CVPR                    | Low-light image enhancement via structure modeling and guidance | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FXu_Low-Light_Image_Enhancement_via_Structure_Modeling_and_Guidance_CVPR_2023_paper.html) | |\n| 2023 | CVPR                    | Physics-guided ISO-Dependent sensor noise modeling for extreme low-light photography | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FCao_Physics-Guided_ISO-Dependent_Sensor_Noise_Modeling_for_Extreme_Low-Light_Photography_CVPR_2023_paper.html) [code](https:\u002F\u002Fgithub.com\u002Fhappycaoyue\u002FLLD) | LLD |\n| 2023 | CVPR                    | Visibility constrained wide-band illumination spectrum design for seeing-in-the-dark | [pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FNiu_Visibility_Constrained_Wide-Band_Illumination_Spectrum_Design_for_Seeing-in-the-Dark_CVPR_2023_paper.html) [code](https:\u002F\u002Fgithub.com\u002Fmyniuuu\u002Fvcsd)| VCSD |\n| 2023 | IEEE TMM    | Glow in the Dark: Low-Light Image Enhancement with External Memory | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10177254\u002F) [code](https:\u002F\u002Fgithub.com\u002FLineves7\u002FEMNet) |   EMNet   |\n| 2023 | Mach. Vision Appl.  
| LDNet: low-light image enhancement with joint lighting and denoising | [pdf](https:\u002F\u002Flink.springer.com\u002F10.1007\u002Fs00138-022-01365-z) | LDNet   |\n| 2023 | IEEE TPAMI | Learning With Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9914672\u002F) [code](https:\u002F\u002Fgithub.com\u002Fvis-opt-group\u002Fruas) | RUAS   |\n| 2023 | IEEE TIP | TSDN: Two-Stage Raw Denoising in the Dark | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10168136\u002F) | TSDN   |\n| 2023 | IEEE TIP | Unsupervised Low-Light Video Enhancement with Spatial-Temporal Co-attention Transformer | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10210621\u002F)  | LightenFormer  |\n| 2023 | IEEE TCYB| Deep Perceptual Image Enhancement Network for Exposure Restoration | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9693338\u002F)  | DPIENet  |\n| 2023 | SIGGRAPH ASIA | Low-light Image Enhancement with Wavelet-based Diffusion Models | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.00306.pdf) [code](https:\u002F\u002Fgithub.com\u002FJianghaiSCU\u002FDiffusion-Low-Light) | DiffLL  |\n| 2023 | ACM MM | CLE Diffusion: Controllable Light Enhancement Diffusion Model | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.06725) [code](https:\u002F\u002Fgithub.com\u002FYuyangYin\u002FCLEDiffusion) [web](https:\u002F\u002Fyuyangyin.github.io\u002FCLEDiffusion\u002F) | CLE Diffusion  |\n| 2023 | ACM MM | FourLLIE: Boosting Low-Light Image Enhancement by Fourier Frequency Information | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03033) [code](https:\u002F\u002Fgithub.com\u002Fwangchx67\u002FFourLLIE) | FourLLIE  |\n| 2023 | Pattern Recognit. 
| A reflectance re-weighted Retinex model for non-uniform and low-light image enhancement | [pdf](https:\u002F\u002Flinkinghub.elsevier.com\u002Fretrieve\u002Fpii\u002FS0031320323005216) |   |\n| 2023 | Pattern Recognit. | SurroundNet: Towards effective low-light image enhancement | [pdf](https:\u002F\u002Flinkinghub.elsevier.com\u002Fretrieve\u002Fpii\u002FS0031320323003035)  [code](https:\u002F\u002Fgithub.com\u002Fouc-ocean-group\u002Fsurroundnet)| SurroundNet  |\n| 2023 | ICCV  | Coherent event guided low-light video enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FLiang_Coherent_Event_Guided_Low-Light_Video_Enhancement_ICCV_2023_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fsherrycattt\u002FEvLowLight) [web](https:\u002F\u002Fsherrycattt.github.io\u002FEvLowLight\u002F) |      EvLowLight      |\n| 2023 | ICCV  | Dancing in the dark: A benchmark towards general low-light video enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FFu_Dancing_in_the_Dark_A_Benchmark_towards_General_Low-light_Video_ICCV_2023_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fciki000\u002FDID#dancing-in-the-dark-a-benchmark-towards-general-low-light-video-enhancement) |      DID      |\n| 2023 | ICCV  | Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.13164.pdf) |      Diff-Retinex      |\n| 2023 | ICCV  | Empowering low-light image enhancer through customized learnable priors | [pdf](http:\u002F\u002Fexport.arxiv.org\u002Fpdf\u002F2309.01958) [code](https:\u002F\u002Fgithub.com\u002Fzheng980629\u002FCUE) |      CUE      |\n| 2023 | ICCV  | ExposureDiffusion: Learning to expose for low-light image enhancement | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2307.07710.pdf) [code](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FExposureDiffusion) |     ExposureDiffusion     |\n| 
2023 | ICCV  | Implicit neural representation for cooperative low-light image enhancement | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FYang_Implicit_Neural_Representation_for_Cooperative_Low-light_Image_Enhancement_ICCV_2023_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002FYsz2022\u002FNeRCo) |   NeRCo   |\n| 2023 | ICCV  | Low-light image enhancement with illumination-aware gamma correction and complete image modelling network | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.08220.pdf)  |      COMO-ViT    |\n| 2023 | ICCV  | Low-light image enhancement with multi-stage residue quantization and brightness-aware attention | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FLiu_Low-Light_Image_Enhancement_with_Multi-Stage_Residue_Quantization_and_Brightness-Aware_Attention_ICCV_2023_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002FLiuYunlong99\u002FRQ-LLIE) |   RQ-LLIE    |\n| 2023 | ICCV  | Retinexformer: One-stage retinex-based transformer for low-light image enhancement | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06705) [code](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) |   Retinexformer    |\n| 2023 | ICCV  | Lighting up NeRF via unsupervised decomposition and enhancement | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.10664) [code](https:\u002F\u002Fgithub.com\u002Fonpix\u002FLLNeRF) |   LLNeRF    |\n| 2023 | ICCV  | Generalized Lightness Adaptation with Channel Selective Normalization | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FYao_Generalized_Lightness_Adaptation_with_Channel_Selective_Normalization_ICCV_2023_paper.pdf) [code](https:\u002F\u002Fgithub.com\u002Fmdyao\u002FCSNorm) |   CS-Norm    |\n| 2023 | PRICAI | Bootstrap diffusion model curve estimation for high resolution low-light image enhancement | 
[pdf](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-981-99-7025-4_6) |   BDCE    |\n| 2023 | Appl. Mach. Learn. | LoLi-IEA: low-light image enhancement algorithm | [pdf](https:\u002F\u002Fwww.spiedigitallibrary.org\u002Fconference-proceedings-of-spie\u002F12675\u002F1267512\u002FLoLi-IEA-low-light-image-enhancement-algorithm\u002F10.1117\u002F12.2677422.short#_=_) [code](https:\u002F\u002Fgithub.com\u002Fxingyumex\u002FLoLi-IEA) |   LoLi-IEA    |\n| 2024 | IEEE Sens. Lett. | Integrating Graph Convolution Into a Deep Multilayer Framework for Low-Light Image Enhancement | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10478172\u002F) [code](https:\u002F\u002Fgithub.com\u002Fsantoshpanda1995\u002FLightweightGCN-Model) |       |\n| 2024 | IEEE TIP | AnlightenDiff: Anchoring Diffusion Probabilistic Model on Low Light Image Enhancement | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10740586?source=authoralert) [code](https:\u002F\u002Fgithub.com\u002Fallanchan339\u002FAnlightenDiff) | AnlightenDiff    |\n| 2024 | IEEE TCE | Back Projection Generative Strategy for Low and Normal Light Image Pairs with Enhanced Statistical Fidelity and Diversity | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10794693) [code](https:\u002F\u002Fgithub.com\u002Fallanchan339\u002FN2LDiff-BP) | N2LDiff-BP  |\n| 2025 | ICLR (Spotlight) | Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.11638) [code](https:\u002F\u002Fgithub.com\u002FChunmingHe\u002FReti-Diff) |   Reti-Diff   |\n| 2025 | Digital Signal Process. 
| CDAN: Convolutional dense attention-guided network for low-light image enhancement | [pdf](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS1051200424004275) [code](https:\u002F\u002Fgithub.com\u002FSinaRaoufi\u002FCDAN) |   CDAN    |\n| 2025 | CVPR | HVI: A New Color Space for Low-light Image Enhancement | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20272) [code](https:\u002F\u002Fgithub.com\u002FFediory\u002FHVI-CIDNet) |  HVI-CIDNet    |\n| 2025 | ICIP |  RT-X Net: RGB-Thermal cross attention network for Low-Light Image Enhancement | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24705) [code](https:\u002F\u002Fgithub.com\u002Fjhakrraman\u002Frt-xnet) [web](https:\u002F\u002Fsites.google.com\u002Fview\u002Frt-xnet\u002Fhome) |   RT-X Net   |\n| 2025 | IJCV |  Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for Low-Light Image Enhancement | [pdf](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-025-02551-y) [code](https:\u002F\u002Fgithub.com\u002FTAMI-UIB\u002FNonlocal-Retinex-Deep-Unfolding-Low-Light-Enhancement) |       |\n\n### 基于直方图的方法\n\n| 年份 | 出版物      | 论文                                                        | 链接                                                         | 备注  |\n| :--: | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----- |\n| 1990 | IEEE TCE | 对比度受限自适应直方图均衡化：速度与效果 | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F109340?arnumber=109340) | CLAHE |\n| 2007 | IEEE TCE | 用于图像对比度增强的亮度保持动态直方图均衡化 | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4429280) [代码](codes\u002Fbpdhe.m) | BPDHE |\n| 2007 | IEEE TCE | 一种用于图像对比度增强的动态直方图均衡化 | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=4266947) | DHE   |\n| 2007 | IEEE TCE | 基于加权阈值直方图均衡化的快速图像\u002F视频对比度增强 | 
[pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4266969?arnumber=4266969) | WTHE  |\n| 2011 | IEEE TIP | 基于上下文和变分的对比度增强              | [pdf](http:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F5773086\u002F) | CVC   |\n| 2013 | IEEE TIP | 基于二维直方图分层差分表示的对比度增强 | [pdf](http:\u002F\u002Fmcl.korea.ac.kr\u002Fprojects\u002FLDR\u002F2013_tip_cwlee_final_hq.pdf)  [网页](http:\u002F\u002Fmcl.korea.ac.kr\u002Fcwlee_tip2013\u002F) | LDR   |\n| 2013 | ICASSP   | 使用参数化近似的高效对比度增强 | [pdf](http:\u002F\u002F150.162.46.34:8080\u002Ficassp2013\u002Fpdfs\u002F0002444.pdf) | POHE  |\n\n\n- **另请参阅：[链接](https:\u002F\u002Fgithub.com\u002Felliestath\u002FHelpTest\u002Fblob\u002Fc7e269239a9d67bffc60f44ff1cae70d20770748\u002Fdocs\u002FImage%20Preprocessing.md)**\n\n\n\n### 基于Retinex的方法\n\n| 年份 | 发表期刊\u002F会议 | 论文标题                                                        | 链接                                                         | 备注  |\n| ---- | ---------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----- |\n| 1997 | IEEE TIP               | 中心-环绕式视网膜模型的特性与性能                              | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F557356)           | SSR   |\n| 1997 | IEEE TIP               | 一种多尺度视网膜模型：弥合彩色图像与人类场景观察之间的差距    | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=597272) [代码1](http:\u002F\u002Fwww.ipol.im\u002Fpub\u002Fart\u002F2014\u002F107\u002F) [代码2](https:\u002F\u002Fgithub.com\u002FupcAutoLang\u002FMSRCR-Restoration) | MSRCR |\n| 2013 | SITIS                  | 用于图像对比度增强的自适应多尺度视网膜模型                     | [代码](codes\u002Famsr.m) [pdf](https:\u002F\u002Fdoi.ieeecomputersociety.org\u002F10.1109\u002FSITIS.2013.19) | AMSR  |\n| 2013 | IEEE TIP               | 适用于非均匀光照图像的自然性保持增强算法                      | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F6512558) 
[网页](https:\u002F\u002Fshuhangwang.wordpress.com\u002F2015\u002F12\u002F14\u002Fnaturalness-preserved-enhancement-algorithm-for-non-uniform-illumination-images\u002F) [代码](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F096l3uy9vowgs4r\u002FCode.rar) | NPE   |\n| 2015 | IEEE TIP               | 同时估计光照和反射率的图像增强概率方法                        | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=7229296) [代码](codes\u002FPM_SIRE.zip) | SRIE  |\n| 2016 | CVPR                   | 用于同时估计反射率和光照的加权变分模型                         | [pdf](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FFu_A_Weighted_Variational_CVPR_2016_paper.pdf)  [代码](codes\u002FWV_SIRE.zip) | SRIE  |\n| 2016 | Signal Processing      | 一种基于融合的弱光图像增强方法                                 | [pdf](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.sigpro.2016.05.031) [代码](codes\u002FMF.rar) | MF    |\n| 2017 | IEEE TIP               | LIME：基于光照图估计的低光照图像增强                          | [pdf](http:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7782813\u002F) [代码1](https:\u002F\u002Fgithub.com\u002FSy-Zhang\u002FLIME) [代码2](https:\u002F\u002Fgithub.com\u002Festija\u002FLIME) [代码3](https:\u002F\u002Fgithub.com\u002Fpvnieo\u002FLow-light-Image-Enhancement) | LIME  |\n| 2017 | ICCV                   | 视网膜模型的联合内外参数先验模型                               | [pdf](http:\u002F\u002Fcaibolun.github.io\u002Fpapers\u002FJieP.pdf) [网页](http:\u002F\u002Fcaibolun.github.io\u002FJieP\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fcaibolun\u002FJieP\u002F) | JieP  |\n| 2018 | IEEE TIP               | 基于鲁棒视网膜模型的结构揭示型低光照图像增强                  | [pdf](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fuploads\u002Fprod\u002F2018\u002F04\u002F2018-TIP-Structure-Revealing-Low-Light-Image-Enhancement-Via-Robust-Retinex-Model.pdf) [代码1](https:\u002F\u002Fgithub.com\u002Fmartinli0822\u002FLow-light-image-enhancement)  [代码2](codes\u002FrobustRetinex.m) |       |\n| 
2018 | BMVC                    | 用于低光照增强的深度视网膜分解                                 | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04560) [网页](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fweichen582\u002FRetinexNet) | Retinex-Net          |\n| 2018 | Symmetry               | 在复杂光照环境下具有颜色恒常性和细节操控功能的低光照图像智能增强系统 | [pdf](https:\u002F\u002Fwww.mdpi.com\u002F2073-8994\u002F10\u002F12\u002F718\u002Fpdf)          |       |\n| 2019 | Symmetry               | 用于低光照图像增强的分数阶融合模型                            | [pdf](https:\u002F\u002Fwww.mdpi.com\u002F2073-8994\u002F11\u002F4\u002F574\u002Fpdf)           |       |\n| 2019 | ICIP                   | 一种结合亮通道先验的混合L2−LP变分模型，用于单张低光照图像增强  | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803197)          |       |\n| 2019 | IET Image Proc.        | 基于非均匀光照先验模型的低光照图像增强                        | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8911585)          | NIPM  |\n| 2019 | Comput. 
Graphics Forum | 双重光照估计用于鲁棒曝光校正                                    | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.13688.pdf) [代码](https:\u002F\u002Fgithub.com\u002Fpvnieo\u002FLow-light-Image-Enhancement) |       |\n| 2019 | ICME                    | RDGAN：基于视网膜分解的对抗学习用于低光照增强                 | [代码](https:\u002F\u002Fgithub.com\u002FWangJY06\u002FRDGAN\u002F) [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8785047) | RDGAN                |\n| 2020 | ic-ETITE               | 基于光照估计的图像增强技术比较分析                             | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9077919)          |       |\n| 2020 | IEEE TIP               | LR3M：基于低秩正则化视网膜模型的鲁棒低光照增强                | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9056796) | LR3M  |\n| 2021 | CVPR                    | 受视网膜启发的展开网络结合协同先验架构搜索用于低光照图像增强  | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_Retinex-Inspired_Unrolling_With_Cooperative_Prior_Architecture_Search_for_Low-Light_Image_CVPR_2021_paper.pdf) [网页](http:\u002F\u002Fdutmedia.org\u002FRUAS\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fdut-media-lab\u002FRUAS) | RUAS   |\n| 2022 | CVPR                    | URetinex-Net：基于视网膜的深度展开网络用于低光照图像增强        | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FWu_URetinex-Net_Retinex-Based_Deep_Unfolding_Network_for_Low-Light_Image_Enhancement_CVPR_2022_paper.html) [代码](https:\u002F\u002Fgithub.com\u002FAndersonYong\u002FURetinex-Net) | URetinex-Net         |\n| 2022 | Pattern Recognit.       | 类脑视网膜：一种生物合理的低光照图像增强视网膜算法            | [pdf](http:\u002F\u002Fdx.doi.org\u002F10.1016\u002Fj.patcog.2022.109195)        |                      |\n| 2023 | Pattern Recognit. 
| 一种反射率重加权的视网膜模型，用于非均匀及低光照图像增强        | [pdf](https:\u002F\u002Flinkinghub.elsevier.com\u002Fretrieve\u002Fpii\u002FS0031320323005216) |   |\n| 2023 | Vis Comput             | 用于自然性保护的低光照图像增强中的光照估计                    | [pdf](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs00371-023-02770-9) | NPLIE |\n| 2023 | ICCV  | Diff-Retinex：用生成扩散模型重新思考低光照图像增强              | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.13164.pdf) |      Diff-Retinex      |\n| 2023 | ICCV  | Retinexformer：用于低光照图像增强的一阶段视网膜基Transformer   | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06705) [代码](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) |   Retinexformer    |\n| 2025 | ICCV | GT-Mean Loss：一种简单而有效的解决低光照图像增强中亮度不匹配的方法 | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.20148) [代码](https:\u002F\u002Fgithub.com\u002FjingxiLiao\u002FGT-mean-loss) |   GT-Mean loss   |\n| 2025 | ICLR (Spotlight) | Reti-Diff：基于视网膜的潜在扩散模型用于光照退化图像修复       | [pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.11638) [代码](https:\u002F\u002Fgithub.com\u002FChunmingHe\u002FReti-Diff) |   Reti-Diff   |\n| 2025 | ICIP |  RT-X Net：用于低光照图像增强的RGB-热红外交叉注意力网络         | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24705) [代码](https:\u002F\u002Fgithub.com\u002Fjhakrraman\u002Frt-xnet) [网页](https:\u002F\u002Fsites.google.com\u002Fview\u002Frt-xnet\u002Fhome) |   RT-X Net   |\n| 2025 | ICIP | 一种带有非局部梯度型约束的视网膜基变分模型，用于低光照图像增强 | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F11084565) |       |\n| 2025 | IJCV |  非局部视网膜基变分模型及其深度展开孪生体，用于低光照图像增强  | [pdf](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-025-02551-y) [代码](https:\u002F\u002Fgithub.com\u002FTAMI-UIB\u002FNonlocal-Retinex-Deep-Unfolding-Low-Light-Enhancement) |       |\n\n### 其他方法\n\n| 年份 | 出版物             | 论文                                                        | 链接                                                         | 备注  |\n| :--: | 
--------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----- |\n| 2008 | IET Image Proc. | 快速中心-周围对比度增强方法                                   | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4455541)          |       |\n| 2011 | ICME            | 用于低光照视频增强的快速高效算法                             | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=6012107) [代码](codes\u002FXuanDong-Method.m) |       |\n| 2017 | ICCVW           | 基于相机响应模型的新型低光照图像增强算法                     | [pdf](http:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8265567\u002F) [代码](https:\u002F\u002Fgithub.com\u002Fbaidut\u002FOpenCE\u002Fblob\u002Fmaster\u002Fours\u002FYing_2017_ICCV.m) |       |\n| 2017 | ArXiv           | 一种受生物启发的多曝光融合框架，用于低光照图像增强           | [pdf](http:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00591) [代码](https:\u002F\u002Fgithub.com\u002Fbaidut\u002FBIMEF) | BIMEF |\n| 2017 | ICCAIP          | 基于曝光融合框架的新型图像对比度增强算法                     | [pdf](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007%2F978-3-319-64698-5_4) [网页](https:\u002F\u002Fbaidut.github.io\u002FOpenCE\u002Fcaip2017.html) [代码1](https:\u002F\u002Fgithub.com\u002Fbaidut\u002FOpenCE\u002Fblob\u002Fmaster\u002Fours\u002FYing_2017_CAIP.m) [代码2](https:\u002F\u002Fgithub.com\u002FAndyHuang1995\u002FImage-Contrast-Enhancement) |       |\n| 2019 | IEEE TIP        | 基于吸收与散射光模型的低光照图像增强                       | [pdf](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTIP.2019.2922106)              | ALSM  |\n| 2019 | ICIP            | 基于最大值滤波器和引导滤波器的快速图像增强                   | [pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803591)          |       |\n| 2025 | IJCV | 一种用于颜色恒常性和颜色同化错觉的传统方法及其在低光照图像增强中的应用 | [pdf](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-025-02595-0) | |\n\n\n## 相关工作\n\n| 年份 | 出版物      | 论文                                              
          | 链接                                                         | 注释        | 标签              |\n| :--: | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------- | ---------------- |\n| 2012 | IST      | 通过局部对比度增强提高特征检测的鲁棒性                      | [数据集](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fvonikakis\u002Fdatasets)  |             |                  |\n| 2015 | ACM ToG  | 使用深度神经网络自动调整照片                               | [网页](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fhomepagezhichengyan\u002Fhome\u002Fdl_img_adjust) [代码](https:\u002F\u002Fgithub.com\u002Fstephenyan1984\u002Fdl-image-enhance\u002Fwiki) [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F1412.7725v2) |             |                  |\n| 2018 | CVPR     | 扭曲与恢复：利用深度强化学习进行色彩增强                    | [代码](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdistort-and-recover\u002F) [pdf](https:\u002F\u002Fdoi.org\u002F10.1109\u002FCVPR.2018.00621) |             |                  |\n| 2021 | TMM      | 用于低光照下人脸检测的循环曝光生成                          | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.10963) [代码](https:\u002F\u002Fgithub.com\u002Fsherrycattt\u002FREGDet) | REGDet      | 人脸检测   |\n| 2021 | CVPR     | HLA-Face：低光照下人脸检测的高低适应联合方法                 | [网页](https:\u002F\u002Fdaooshee.github.io\u002FHLA-Face-Website\u002F) [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_HLA-Face_Joint_High-Low_Adaptation_for_Low_Light_Face_Detection_CVPR_2021_paper.pdf) [代码](https:\u002F\u002Fgithub.com\u002Fdaooshee\u002FHLA-Face-Code) | HLA-Face    | 人脸检测   |\n| 2021 | ICCV     | 具有正交切线正则化的多任务AET，用于暗目标检测                | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FCui_Multitask_AET_With_Orthogonal_Tangent_Regularity_for_Dark_Object_Detection_ICCV_2021_paper.pdf) 
[代码](https:\u002F\u002Fgithub.com\u002Fcuiziteng\u002FICCV_MAET) | MAET        | 目标检测 |\n| 2021 | ICCV     | Photon-Net：使用单光子相机对光子匮乏场景进行推理             | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FGoyal_Photon-Starved_Scene_Inference_Using_Single_Photon_Cameras_ICCV_2021_paper.pdf) [代码](https:\u002F\u002Fgithub.com\u002Fbhavyagoyal\u002Fspclowlight\u002F) [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r1YvHnGbi6k) | Photon-Net  | 单光子    |\n| 2021 | ICCVW    | 极低光照条件下的单阶段人脸检测                              | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021W\u002FRLQ\u002Fpapers\u002FYu_Single-Stage_Face_Detection_Under_Extremely_Low-Light_Conditions_ICCVW_2021_paper.pdf) |             | 人脸检测   |\n| 2021 | ICCVW    | DeLiEve-Net：利用光条纹和局部事件去模糊低光照图像            | [pdf](https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002FDeLiEve-Net%3A-Deblurring-Low-light-Images-with-Light-Zhou-Teng\u002F105bf9ccbc749d976ab1f4b455d379f30b1d6508) | DeLiEve-Net | 事件相机     |\n| 2022 | ArXiv    | 一种高效的低光照复原Transformer，适用于暗光场图像            |                                                              | LRT         | 光场      |\n| 2022 | ICCP| 在噪声-模糊双重干扰下的鲁棒场景推理                         | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.11643) [代码](https:\u002F\u002Fgithub.com\u002Fbhavyagoyal\u002Fnoiseblurdual) [网页](https:\u002F\u002Fwisionlab.com\u002Fproject\u002Fnoiseblurdual\u002F) | 噪声-模糊双重 | 目标检测 |\n| 2023 | ICCV     | FeatEnHancer：在低光照条件下提升层次化特征，用于目标检测及其他任务 | [pdf](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FHashmi_FeatEnHancer_Enhancing_Hierarchical_Features_for_Object_Detection_and_Beyond_Under_ICCV_2023_paper.pdf) [代码](https:\u002F\u002Fgithub.com\u002FkhurramHashmi\u002FFeatEnHancer) [网页](https:\u002F\u002Fkhurramhashmi.github.io\u002FFeatEnHancer\u002F) | FeatEnHancer        | 目标检测和语义分割 |\n| 2023 | IEEE TIP | INFWIDE：用于低光照条件下非盲去模糊的图像及特征空间维纳反卷积网络 | 
[pdf](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10047966) [代码](https:\u002F\u002Fgithub.com\u002Fzhihongz\u002FINFWIDE) | INFWIDE | 去模糊 |\n| 2024 | AAAI | Aleth-NeRF：具有遮蔽场假设的光照自适应NeRF | [pdf](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.09093) [代码](https:\u002F\u002Fgithub.com\u002Fcuiziteng\u002FAleth-NeRF) [网页](https:\u002F\u002Fcuiziteng.github.io\u002FAleth_NeRF_web\u002F) | Aleth-NeRF | NeRF |\n\n## 评价指标\n\n| 指标 | 缩写 | 全参考\u002F无参考 | 链接 |\n| :----: | ---- | ---- | ---- |\n| 峰值信噪比 | PSNR | 全参考 | - |\n| 结构相似性指数 | SSIM | 全参考 | - |\n| 学习感知图像块相似性 | LPIPS | 全参考 | [代码](https:\u002F\u002Fgithub.com\u002Frichzhang\u002FPerceptualSimilarity) |\n| 明度顺序误差 | LOE | 无参考 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F6512558) |\n| 自然图像质量评估器 | NIQE | 无参考 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F6353522) |\n| 均方误差 | MSE | 全参考 | - |\n| 平均绝对误差 | MAE | 全参考 | - |\n| 智能手机摄影属性与质量 | SPAQ | 无参考 | [代码](https:\u002F\u002Fgithub.com\u002Fh4nwei\u002FSPAQ) |\n| 神经网络图像评估 | NIMA | 无参考 | [PyTorch](https:\u002F\u002Fgithub.com\u002Fkentsyx\u002FNeural-IMage-Assessment) [TensorFlow](https:\u002F\u002Fgithub.com\u002Ftitu1994\u002Fneural-image-assessment) |\n| 多尺度图像质量Transformer | MUSIQ | 无参考 | [代码](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fmusiq) |\n\n## 更多参考\n\n- https:\u002F\u002Fgithub.com\u002Fbaidut\u002FOpenCE\n- https:\u002F\u002Fgithub.com\u002Ftiandaoxiaowu\u002Fimage-enhancement-about-Retinex\n- https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FLighting-the-Darkness-in-the-Deep-Learning-Era-Open","# Awesome Low-Light Image Enhancement 快速上手指南\n\n`awesome-low-light-image-enhancement` 并非一个单一的可执行软件或 Python 包，而是一个**资源汇总列表（Awesome List）**，收录了低光照图像增强领域的数据集、论文、代码库、评测指标及综述文章。\n\n本指南将指导开发者如何利用该列表快速找到适合的工具、数据集，并运行典型的开源算法（以列表中经典的 **RetinexNet** 或 **SID** 为例）。\n\n## 
环境准备\n\n在开始使用列表中的具体算法前，请确保您的开发环境满足以下通用要求（大多数基于深度学习的方法均适用）：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS (部分代码可能需调整)，Windows (建议使用 WSL2)。\n*   **Python 版本**: 3.6 - 3.9 (具体视选定算法的 `requirements.txt` 而定)。\n*   **深度学习框架**: PyTorch 或 TensorFlow (列表中两种框架的项目均有，需根据目标项目安装)。\n*   **硬件加速**: 推荐使用 NVIDIA GPU (CUDA 10.0+) 以加速训练和推理过程。\n*   **前置依赖**:\n    *   Git\n    *   pip 或 conda\n    *   OpenCV (`opencv-python`)\n    *   图像处理库 (`Pillow`, `numpy`, `scipy`)\n\n> **国内加速建议**：\n> *   **PyPI 源**: 使用清华源或阿里源加速 Python 包安装。\n>     ```bash\n>     pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple \u003Cpackage_name>\n>     ```\n> *   **Git 克隆**: 若访问 GitHub 缓慢，可使用镜像站或配置 SSH。\n> *   **模型权重\u002F数据集**: 列表中部分数据集提供百度网盘链接（如 MCR, LOM dataset），国内用户可直接下载。\n\n## 安装步骤\n\n由于这是一个资源列表，您不需要安装 \"awesome-low-light-image-enhancement\" 本身。您需要做的是：**从列表中选择一个具体的项目（Method）并克隆其代码库**。\n\n以下以列表中经典的 **Deep Retinex Decomposition (RetinexNet)** 为例演示安装流程：\n\n1.  **克隆项目代码**\n    从该列表指向的原始仓库获取代码（此处以该论文官方的 TensorFlow 实现为例）：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fweichen582\u002FRetinexNet.git\n    cd RetinexNet\n    ```\n\n2.  **创建虚拟环境 (推荐)**\n    ```bash\n    conda create -n retinex python=3.7\n    conda activate retinex\n    ```\n\n3.  **安装依赖**\n    根据项目内的 `requirements.txt` 安装依赖。若无该文件，通常需安装基础视觉库：\n    ```bash\n    # 使用国内镜像源加速\n    pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple tensorflow-gpu==1.15.0 opencv-python numpy scipy pillow\n    ```\n    *(注：具体框架版本请参照所选项目的原始 README，较新项目多基于 PyTorch)*\n\n4.  **准备数据**\n    从本资源列表的 **Datasets** 章节下载数据集（如 **LOL** 数据集）。\n    *   访问 [LOL Dataset Link](https:\u002F\u002Fdaooshee.github.io\u002FBMVC2018website) 下载。\n    *   将解压后的数据放入项目指定的目录（通常为 `data\u002F` 或 `dataset\u002F`）。\n\n## 基本使用\n\n安装完成后，即可运行脚本进行低光照图像增强。不同项目的命令略有差异，以下为通用操作逻辑：\n\n### 1. 
测试\u002F推理 (Inference)\n使用预训练模型对单张图片或文件夹进行增强。\n\n```bash\n# 示例命令：运行测试脚本，指定输入目录和输出路径\npython test.py --input_dir .\u002Fdata\u002F --output_dir .\u002Fresults\u002F --model_path .\u002Fpretrained_model\u002F\n```\n\n*   `--input_dir`: 待处理的低光照图片所在目录。\n*   `--model_path`: 下载的预训练权重文件路径。\n*   `--output_dir`: 增强后图片的保存路径。\n\n### 2. 训练模型 (Training)\n如果您希望使用特定数据集（如 **SID** 或 **VE-LOL**）重新训练模型：\n\n```bash\n# 示例命令：启动训练\npython train.py --data_dir .\u002Fdata\u002FLOL\u002F --checkpoint_dir .\u002Fcheckpoints\u002F --epochs 100\n```\n\n### 3. 探索更多工具\n回到 `awesome-low-light-image-enhancement` 列表：\n*   **查找最新 SOTA**: 查看 **Methods** -> **Learning-based methods** 表格中 2023-2024 年的论文链接。\n*   **获取数据**: 在 **Datasets** 表格中寻找适合您场景的数据集（如夜间监控选 **ExDARK**，视频增强选 **DID** 或 **SDSD**）。\n*   **对比评测**: 参考 **Review and Benchmark** 部分的综述论文，了解各算法在 PSNR\u002FSSIM 等指标上的表现。\n\n> **提示**: 每个具体项目的详细参数和用法，请务必查阅该项目仓库下的 `README.md` 文件，因为本列表仅作为索引入口。","某安防监控团队正在处理夜间城市街道的原始监控录像，试图从极暗画面中识别可疑车辆与行人特征。\n\n### 没有 awesome-low-light-image-enhancement 时\n- 开发人员需耗费数周时间在海量的学术论文和代码库中盲目搜索，难以区分哪些低光增强算法适合当前的低信噪比场景。\n- 缺乏统一的评测标准（Metrics）和权威数据集（如 SID 或 LOL），导致无法量化对比不同模型的修复效果，只能凭肉眼主观判断。\n- 自行复现经典论文（如 Retinex 或基于学习的方法）时，常因缺少官方代码或预处理脚本而陷入环境配置泥潭，项目进度严重滞后。\n- 面对复杂的噪声模型和极低光子计数问题，团队只能使用传统的直方图均衡化，导致画面出现严重的色彩失真和噪点放大。\n\n### 使用 awesome-low-light-image-enhancement 后\n- 团队直接利用该资源清单中的\"Methods\"分类，快速锁定了针对极低照度优化的 SOTA 模型代码，将算法选型时间从数周缩短至半天。\n- 借助清单整理的专用数据集（如 ExDARK）和标准化评估指标，迅速建立了客观的模型测试基准，精准筛选出最适合夜间监控的解决方案。\n- 通过清单提供的成熟代码链接和复现指南，避免了重复造轮子，开发人员能直接将精力集中在业务逻辑适配而非底层算法调试上。\n- 应用清单推荐的先进深度学习方案后，原本漆黑一片的监控画面被清晰还原，车辆牌照与人脸细节得以保留，且噪点控制优异。\n\nawesome-low-light-image-enhancement 通过一站式整合数据、算法与评估体系，将低光图像增强的研发门槛大幅降低，让团队能专注于解决实际的视觉感知难题。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzhihongz_awesome-low-light-image-enhancement_c1cd1b2f.png","zhihongz","Zhihong Zhang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzhihongz_013f439b.png","Ph.D. 
| Research Engineer.","Tsinghua University","Beijing, China",null,"zhihongz.github.io","https:\u002F\u002Fgithub.com\u002Fzhihongz",[82],{"name":83,"color":84,"percentage":85},"MATLAB","#e16737",100,1798,239,"2026-04-08T12:23:01",5,"","未说明",{"notes":93,"python":91,"dependencies":94},"该仓库是一个资源列表（Awesome List），汇集了低光图像增强相关的数据集、论文、代码链接和基准测试，本身不是一个可直接运行的单一软件工具。因此，README 中未包含具体的操作系统、硬件配置或依赖库要求。具体的运行环境需求需参考列表中各个具体方法（Methods）对应的原始代码仓库。",[],[15,14],[97,98,99,100,101,102,103,104,105,106],"low-light","image-enhancement","low-light-image-enhancement","computer-vision","image-processing","retinex","histogram-equalization","deep-learning","image-enhancment","contrast-enhancement","2026-03-27T02:49:30.150509","2026-04-10T03:07:33.508289",[110,115,120,125,130,135],{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},27063,"如何提交新的低光图像增强论文或数据集到该仓库？","您可以直接在 GitHub 上创建一个新的 Issue，提供论文的标题、发表会议\u002F期刊、项目名称、论文链接（如 arXiv）以及代码仓库链接。如果是相关的数据集，也请一并提供下载链接（如 Google Drive 或百度网盘）及提取码。维护者审核后会将其添加到列表中。","https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues\u002F35",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},27064,"运行 AMSR 算法代码时出现图像亮度反转怎么办？","该仓库中的 AMSR 代码是从互联网收集的，维护者使用 'memorial.hdr' 测试时也发现了类似问题，无法保证该特定代码实现的完全正确性。如果您发现错误并知道如何修复，欢迎提交 Pull Request (PR) 来修正代码。","https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues\u002F6",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},27065,"MSR (Multi-Scale Retinex) 算法如何输出光照图 (Light Map)？","在 multiscaleRetinex 函数中，光照图通常不是直接作为主要输出返回的。根据 Retinex 理论，输出图像 OUT 通常是输入图像 I 与估计的光照图 T 的比值或对数差。若要获取光照图，通常需要修改源码，在 SSlog 或相关计算步骤中将中间变量 T（即光照分量）单独保存或返回。具体实现需查看 `multiscaleRetinex.m` 内部逻辑，通常在单尺度 Retinex (SSR) 计算后得到的模糊图像即为光照图。","https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues\u002F9",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},27066,"发现仓库中某篇论文的链接失效或错误，该如何处理？","您可以在仓库中提一个 Issue，明确指出哪篇论文（例如 
\"TSDN\"）的链接有误，并提供正确的 URL 地址（如 IEEE Xplore 或其他官方链接）。维护者在确认后会尽快修正该链接。","https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues\u002F20",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},27067,"提交的论文如果被接收，会被归类到哪个部分？","论文会根据其技术特点被归类到相应的方法类别中。例如，基于 Retinex 理论的论文（如 RT-X Net, Reti-Diff）会被添加到 \"Retinex-based methods\" 部分；涉及视频增强、NeRF 或特定损失函数的论文也会被标注并放入对应的分类表中。您可以在提交时建议具体的分类。","https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues\u002F33",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},27068,"除了论文代码，仓库是否收录相关的低光数据集？","是的，仓库也收录与低光图像增强相关的数据集。例如，针对 NeRF 的低光条件多视图数据集（如 LOM 数据集，包含低光、过曝和正常光的多视图数据），作者可以通过 Issue 提供下载链接（Google Drive 或百度网盘），维护者会将其添加到资源列表中。","https:\u002F\u002Fgithub.com\u002Fzhihongz\u002Fawesome-low-light-image-enhancement\u002Fissues\u002F22",[]]