# Awesome-Denoise

One-paper, one-short-contribution summaries of the latest image/burst/video denoising papers, with code links and citation counts, published in top conferences and journals.

Awesome-Denoise is an open-source collection of resources on image, burst, and video denoising. It systematically organizes recent denoising papers from top venues, with a code link and citation count for each, so researchers can quickly locate high-quality, reproducible algorithms in a flood of literature. Papers are classified along three dimensions: color space (RGB/Raw), image kind (single/burst/video), and noise model (for example additive Gaussian or real camera noise). Beyond supervised methods, the list also covers self-supervised approaches such as Noise2Noise and Noise2Void, and documents authoritative benchmark datasets such as SIDD and SID. It is aimed at computer-vision researchers, algorithm engineers, and students who want to follow the latest progress or find a working denoising baseline.

Three main factors divide these papers into categories.
Raw-domain denoising papers sometimes use an ISP to convert to the sRGB domain; the Both tag covers this case.
Video denoising papers sometimes degrade to burst or even single-image denoising; the Video tag covers this case.

* Color Space
  * RGB
  * Raw
  * Both

* Image Kind
  * Single
  * Burst
  * Video

* Noise Model
  * AWGN (additive white Gaussian noise)
  * PG (Poisson-Gaussian noise)
  * GAN (GAN-based noise model)
  * Real (real noise from camera or DSLR devices)
  * Prior
    * Low rank
    * Sparsity
    * Self-similarity

## Benchmark datasets

* SIDD, CVPR 2018, citation 256
  * [A High-Quality Denoising Dataset for Smartphone Cameras](https://openaccess.thecvf.com/content_cvpr_2018/papers/Abdelhamed_A_High-Quality_Denoising_CVPR_2018_paper.pdf)
  * [Matlab](https://github.com/AbdoKamel/sidd-ground-truth-image-estimation)
* RENOIR, JVCIR 2018, citation 106
  * [RENOIR: A dataset for real low-light image noise reduction](https://arxiv.org/pdf/1409.8230.pdf)
  * [dataset link (broken)](http://adrianbarburesearch.blogspot.com/p/renoir-dataset.html)
* PolyU, arxiv 2018, citation 108
  * [Real-world Noisy Image Denoising: A New Benchmark](https://arxiv.org/pdf/1804.02603.pdf)
  * [Matlab](https://github.com/csjunxu/PolyU-Real-World-Noisy-Images-Dataset)
* SID, CVPR 2018, citation 595
  * [Learning to See in the Dark](https://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Learning_to_See_CVPR_2018_paper.pdf)
  * [Tensorflow](https://github.com/cchen156/Learning-to-See-in-the-Dark)
* DND, CVPR 2017, citation 296
  * [Benchmarking Denoising Algorithms with Real Photographs](https://openaccess.thecvf.com/content_cvpr_2017/papers/Plotz_Benchmarking_Denoising_Algorithms_CVPR_2017_paper.pdf)
  * [homepage](https://noise.visinf.tu-darmstadt.de/)
* NaM, CVPR 2016, citation 148
  * [A Holistic Approach to Cross-Channel Image Noise Modeling and its Application to Image Denoising](https://openaccess.thecvf.com/content_cvpr_2016/papers/Nam_A_Holistic_Approach_CVPR_2016_paper.pdf)

# Self-supervised denoising

## Video denoising

+ [Unsupervised Deep Video Denoising](http://openaccess.thecvf.com/content/ICCV2021/html/Sheth_Unsupervised_Deep_Video_Denoising_ICCV_2021_paper.html)
  + ICCV 2021, UDVD
+ [Recurrent Self-Supervised Video Denoising with Denser Receptive Field](https://arxiv.org/pdf/2308.03608.pdf)
  + ACM MM 2023, [code](https://github.com/Wang-XIaoDingdd/RDRF)

## Image denoising

|Index|Year|Pub|Title|Cite|
|:---:|:---:|:---:|:---:|:---:|
|1|2018|ICML|[Noise2Noise: Learning image restoration without clean data](https://arxiv.org/pdf/1803.04189.pdf)|1236|
|2|2019|CVPR|[Noise2Void: Learning denoising from single noisy images](http://openaccess.thecvf.com/content_CVPR_2019/html/Krull_Noise2Void_-_Learning_Denoising_From_Single_Noisy_Images_CVPR_2019_paper.html)|748|
|3|2019|ICML|[Noise2Self: Blind denoising by self-supervision](http://proceedings.mlr.press/v97/batson19a.html)|441|
|4|2019|NeurIPS|[High-quality self-supervised deep image denoising](https://proceedings.neurips.cc/paper/8920-high-quality-self-supervised-deep-image-denoising)|247|
|5|2019|arxiv|[Unsupervised image noise modeling with self-consistent GAN](https://arxiv.org/pdf/1906.05762.pdf)|13|
|6|2020|Frontiers in Computer Science|[Probabilistic Noise2Void: Unsupervised content-aware denoising](https://www.frontiersin.org/articles/10.3389/fcomp.2020.00005/full)|119|
|7|2020|TIP|[Noisy-as-clean: Learning self-supervised denoising from corrupted image](https://arxiv.org/pdf/1906.06878.pdf)|112|
|8|2020|CVPR|[Self2Self with dropout: Learning self-supervised denoising from single image](http://openaccess.thecvf.com/content_CVPR_2020/html/Quan_Self2Self_With_Dropout_Learning_Self-Supervised_Denoising_From_Single_Image_CVPR_2020_paper.html)|201|
|9|2020|CVPR|[Noisier2Noise: Learning to denoise from unpaired noisy data](http://openaccess.thecvf.com/content_CVPR_2020/html/Moran_Noisier2Noise_Learning_to_Denoise_From_Unpaired_Noisy_Data_CVPR_2020_paper.html)|125|
|10|2020|NeurIPS|[Noise2Same: Optimizing a self-supervised bound for image denoising](https://proceedings.neurips.cc/paper/2020/hash/ea6b2efbdd4255a9f1b3bbc6399b58f4-Abstract.html)|57|
|11|2021|NeurIPS|[Noise2Score: Tweedie's approach to self-supervised image denoising without clean images](https://proceedings.neurips.cc/paper/2021/hash/077b83af57538aa183971a2fe0971ec1-Abstract.html)|32|
|12|2021|CVPR|[Neighbor2Neighbor: Self-supervised denoising from single noisy images](http://openaccess.thecvf.com/content/CVPR2021/html/Huang_Neighbor2Neighbor_Self-Supervised_Denoising_From_Single_Noisy_Images_CVPR_2021_paper.html)|135|
|13|2021|CVPR|[Recorrupted-to-Recorrupted: Unsupervised deep learning for image denoising](http://openaccess.thecvf.com/content/CVPR2021/html/Pang_Recorrupted-to-Recorrupted_Unsupervised_Deep_Learning_for_Image_Denoising_CVPR_2021_paper.html)|85|
|14|2022|TIP|Neighbor2Neighbor: A Self-Supervised Framework for Deep Image Denoising|7|
|15|2022|CVPR|[AP-BSN: Self-supervised denoising for real-world images via asymmetric PD and blind-spot network](http://openaccess.thecvf.com/content/CVPR2022/html/Lee_AP-BSN_Self-Supervised_Denoising_for_Real-World_Images_via_Asymmetric_PD_and_CVPR_2022_paper.html)|27|
|16|2022|CVPR|[CVF-SID: Cyclic multi-variate function for self-supervised image denoising by disentangling noise from image](http://openaccess.thecvf.com/content/CVPR2022/html/Neshatavar_CVF-SID_Cyclic_Multi-Variate_Function_for_Self-Supervised_Image_Denoising_by_Disentangling_CVPR_2022_paper.html)|20|
|17|2022|CVPR|[Self-supervised deep image restoration via adaptive stochastic gradient Langevin dynamics](http://openaccess.thecvf.com/content/CVPR2022/html/Wang_Self-Supervised_Deep_Image_Restoration_via_Adaptive_Stochastic_Gradient_Langevin_Dynamics_CVPR_2022_paper.html)|7|
|18|2022|CVPR|[Noise distribution adaptive self-supervised image denoising using Tweedie distribution and score matching](http://openaccess.thecvf.com/content/CVPR2022/html/Kim_Noise_Distribution_Adaptive_Self-Supervised_Image_Denoising_Using_Tweedie_Distribution_and_CVPR_2022_paper.html)|5|
|19|2022|CVPR|[Blind2Unblind: Self-supervised image denoising with visible blind spots](http://openaccess.thecvf.com/content/CVPR2022/html/Wang_Blind2Unblind_Self-Supervised_Image_Denoising_With_Visible_Blind_Spots_CVPR_2022_paper.html)|29|
|20|2022|CVPR|[IDR: Self-supervised image denoising via iterative data refinement](http://openaccess.thecvf.com/content/CVPR2022/html/Zhang_IDR_Self-Supervised_Image_Denoising_via_Iterative_Data_Refinement_CVPR_2022_paper.html)|22|
|21|2023|CVPR|[Spatially Adaptive Self-Supervised Learning for Real-World Image Denoising](http://openaccess.thecvf.com/content/CVPR2023/html/Li_Spatially_Adaptive_Self-Supervised_Learning_for_Real-World_Image_Denoising_CVPR_2023_paper.html)|1|
|22|2023|CVPR|[LG-BPN: Local and Global Blind-Patch Network for Self-Supervised Real-World Denoising](http://openaccess.thecvf.com/content/CVPR2023/html/Wang_LG-BPN_Local_and_Global_Blind-Patch_Network_for_Self-Supervised_Real-World_Denoising_CVPR_2023_paper.html)|0|
|23|2023|CVPR|[Zero-Shot Noise2Noise: Efficient Image Denoising Without Any Data](https://openaccess.thecvf.com/content/CVPR2023/html/Mansour_Zero-Shot_Noise2Noise_Efficient_Image_Denoising_Without_Any_Data_CVPR_2023_paper.html)|1|
|24|2023|CVPR|[Patch-Craft Self-Supervised Training for Correlated Image Denoising](https://openaccess.thecvf.com/content/CVPR2023/html/Vaksman_Patch-Craft_Self-Supervised_Training_for_Correlated_Image_Denoising_CVPR_2023_paper.html)|-|
|25|2023|arxiv|[Unleashing the Power of Self-Supervised Image Denoising: A Comprehensive Review](https://arxiv.org/pdf/2308.00247.pdf)|-|
|26|2023|ICCV|[Random Sub-Samples Generation for Self-Supervised Real Image Denoising](https://arxiv.org/pdf/2307.16825.pdf)|-|
|27|2023|ICCV|[Score Priors Guided Deep Variational Inference for Unsupervised Real-World Single Image Denoising](https://arxiv.org/pdf/2308.04682.pdf)|-|
|28|2023|ICCV|[Unsupervised Image Denoising in Real-World Scenarios via Self-Collaboration Parallel Generative Adversarial Branches](https://arxiv.org/pdf/2308.06776.pdf)|-|
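Many entries in the table above build on the blind-spot principle popularized by Noise2Void: each pixel is estimated only from its neighbors, never from itself, so zero-mean, pixel-independent noise cannot simply be copied through. A toy numpy sketch of that idea, using a hypothetical neighbor-mean estimator rather than a trained network (this is an illustration, not any listed paper's method):

```python
import numpy as np

def blind_spot_mean(img: np.ndarray) -> np.ndarray:
    """Estimate each pixel as the mean of its 8 neighbours (edges replicated).

    The centre pixel itself is never used: that is the 'blind spot', so the
    estimate cannot reproduce pixel-independent noise at that location.
    """
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w), dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the blind spot
            acc += p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return acc / 8.0

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # smooth synthetic image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)    # AWGN, sigma = 0.1

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((blind_spot_mean(noisy) - clean) ** 2))
print(mse_noisy, mse_denoised)  # neighbour-only estimate has lower MSE
```

On a smooth image the neighbor mean keeps the signal almost unchanged while averaging the noise over 8 independent samples, which is why the blind-spot output has a much lower error than the noisy input.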
Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWei_A_Physics-Based_Noise_Formation_Model_for_Extreme_Low-Light_Raw_Denoising_CVPR_2020_paper.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002FVandermode\u002FELD)|50|\n|CVPR|[Supervised Raw Video Denoising With a Benchmark Dataset on Dynamic Scenes](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FYue_Supervised_Raw_Video_Denoising_With_a_Benchmark_Dataset_on_Dynamic_CVPR_2020_paper.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fcao-cong\u002FRViDeNet)|26|Both|Video|Real|\n|CVPR|[Transfer Learning From Synthetic to Real-Noise Denoising With Adaptive Instance Normalization](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FKim_Transfer_Learning_From_Synthetic_to_Real-Noise_Denoising_With_Adaptive_Instance_CVPR_2020_paper.pdf)|-|60|\n|CVPR|[Self2Self With Dropout: Learning Self-Supervised Denoising From Single Image](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FQuan_Self2Self_With_Dropout_Learning_Self-Supervised_Denoising_From_Single_Image_CVPR_2020_paper.pdf)|-|73|\n|CVPR|[Noisier2Noise: Learning to Denoise From Unpaired Noisy Data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FMoran_Noisier2Noise_Learning_to_Denoise_From_Unpaired_Noisy_Data_CVPR_2020_paper.pdf)|-|40|\n|CVPR|[Joint Demosaicing and Denoising With Self Guidance](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Joint_Demosaicing_and_Denoising_With_Self_Guidance_CVPR_2020_paper.pdf)|-|26|\n|CVPR|[FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FTassano_FastDVDnet_Towards_Real-Time_Deep_Video_Denoising_Without_Flow_Estimation_CVPR_2020_paper.pdf)|-|72|RGB|Video|AWGN|\n|CVPR|[CycleISP: Real Image Restoration via Improved Data 
Synthesis](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZamir_CycleISP_Real_Image_Restoration_via_Improved_Data_Synthesis_CVPR_2020_paper.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fswz30\u002FCycleISP)|93|\n|CVPR|[Basis Prediction Networks for Effective Burst Denoising With Large Kernels](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FXia_Basis_Prediction_Networks_for_Effective_Burst_Denoising_With_Large_Kernels_CVPR_2020_paper.pdf)|-|18|\n|CVPR|[Superkernel Neural Architecture Search for Image Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw31\u002FMozejko_Superkernel_Neural_Architecture_Search_for_Image_Denoising_CVPRW_2020_paper.pdf)|-|5|\n|ECCV|[Spatial-Adaptive Network for Single Image Denoising](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123750171.pdf)|-|34|\n|ECCV|[A Decoupled Learning Scheme for Real-world Burst Denoising from Raw Images](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123700154.pdf)|-|3|\n|ECCV|[Burst Denoising via Temporally Shifted Wavelet Transforms](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123580239.pdf)|-|0|\n|ECCV|[Unpaired Learning of Deep Image Denoising](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123490341.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002FXHWXD\u002FDBSN)|24|\n|ECCV|[Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123550035.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002FzsyOAOA\u002FDANet)|39|\n|ECCV|[Learning Camera-Aware Noise 
Models](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123690341.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Farcchang1236\u002FCA-NoiseGAN)|9|\n|ECCV|[Practical Deep Raw Image Denoising on Mobile Devices](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123510001.pdf)|[MegEngine](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPMRID)|15|Raw|Single|PG|\n|ECCV|[Reconstructing the Noise Manifold for Image Denoising](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123540596.pdf)|-|2|\n|NN|[Deep Learning on Image Denoising : An Overview](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.13171.pdf)|-|247|\n|WACV|[Identifying recurring patterns with deep neural networks for natural image denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fpapers\u002FXia_Identifying_Recurring_Patterns_with_Deep_Neural_Networks_for_Natural_Image_WACV_2020_paper.pdf)|-|11|\n|ICASSP|[Attention Mechanism Enhanced Kernel Prediction Networks for Denoising of Burst Images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.08313.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fz-bingo\u002FAttention-Mechanism-Enhanced-KPN)|4|\n|Arxiv|[Low-light Image Restoration with Short- and Long-exposure Raw Pairs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00199.pdf)|-|6|\n\n## 2019  \n\n|Pub|Title|Code|Cite|\n|:---:|:---:|:---:|:---:|\n|TIP|[Optimal combination of image denoisers](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.06712.pdf)|-|13|\n|TIP|[High ISO JPEG Image Denoising by Deep Fusion of Collaborative and Convolutional Filtering](https:\u002F\u002Fsci-hub.se\u002Fhttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8684332\u002F)|-|6|\n|TIP|[Texture variation adaptive image denoising with nonlocal PCA](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.11282.pdf)|-|11|\n|TIP|[Color Image and Multispectral Image 
Denoising Using Block Diagonal Representation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.03954.pdf)|-|7|\n|TIP|Tchebichef and Adaptive Steerable-Based Total Variation Model for Image Denoising|-|23|\n|TIP|[Iterative Joint Image Demosaicking and Denoising Using a Residual Denoising Network](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.06403.pdf)|-|55|\n|TIP|Content-Adaptive Noise Estimation for Color Images with Cross-Channel Noise Modeling|-|4|\n|TPAMI|[Real-world Image Denoising with Deep Boosting](https:\u002F\u002Fsci-hub.se\u002Fhttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8733117\u002F)|[Tensorflow](https:\u002F\u002Fgithub.com\u002Fngchc\u002FdeepBoosting)|29|\n|JVCIR|Vst-net: Variance-stabilizing transformation inspired network for poisson denoising|[Matlab](https:\u002F\u002Fgithub.com\u002Fyqx7150\u002FVST-Net)|14|\n|NIPS|[Variational Denoising Network: Toward Blind Noise Modeling and Removal](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8446-variational-denoising-network-toward-blind-noise-modeling-and-removal.pdf)|-|110|\n|NIPS|[High-Quality Self-Supervised Deep Image Denoising](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8920-high-quality-self-supervised-deep-image-denoising.pdf)|-|138|\n|ICML|[Noise2Self: Blind Denoising by Self-Supervision](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.11365.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fczbiohub\u002Fnoise2self)|244|\n|ICML|[Plug-and-play methods provably converge with properly trained denoisers](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.05406.pdf)|-|125|\n|CVPR|[Unsupervised Domain Adaptation for ToF Data Denoising with Adversarial Learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FAgresti_Unsupervised_Domain_Adaptation_for_ToF_Data_Denoising_With_Adversarial_Learning_CVPR_2019_paper.pdf)|-|26|\n|CVPR|[Robust Subspace Clustering with Independent and Piecewise Identically Distributed Noise 
Modeling](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLi_Robust_Subspace_Clustering_With_Independent_and_Piecewise_Identically_Distributed_Noise_CVPR_2019_paper.pdf)|-|15|\n|CVPR|[Toward convolutional blind denoising of real photographs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FGuo_Toward_Convolutional_Blind_Denoising_of_Real_Photographs_CVPR_2019_paper.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002FGuoShi28\u002FCBDNet)|458|\n|CVPR|[FOCNet: A Fractional Optimal Control Network for Image Denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FJia_FOCNet_A_Fractional_Optimal_Control_Network_for_Image_Denoising_CVPR_2019_paper.pdf)|-|62|\n|CVPR|[Noise2void-learning denoising from single noisy images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKrull_Noise2Void_-_Learning_Denoising_From_Single_Noisy_Images_CVPR_2019_paper.pdf)|-|406|\n|CVPR|[Unprocessing images for learned raw denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FBrooks_Unprocessing_Images_for_Learned_Raw_Denoising_CVPR_2019_paper.pdf)|-|186|\n|CVPR|[Training deep learning based image denoisers from undersampled measurements without ground truth and without image prior](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhussip_Training_Deep_Learning_Based_Image_Denoisers_From_Undersampled_Measurements_Without_CVPR_2019_paper.pdf)|-|28|\n|CVPR|[Model-blind video denoising via frame-to-frame training](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FEhret_Model-Blind_Video_Denoising_via_Frame-To-Frame_Training_CVPR_2019_paper.pdf)|[other](https:\u002F\u002Fgithub.com\u002Ftehret\u002Fblind-denoising)|44|\n|ICCV|[Self-Guided Network for Fast Image 
Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FGu_Self-Guided_Network_for_Fast_Image_Denoising_ICCV_2019_paper.pdf)|-|78|\n|ICCV|[Noise flow: Noise modeling with conditional normalizing flows](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FAbdelhamed_Noise_Flow_Noise_Modeling_With_Conditional_Normalizing_Flows_ICCV_2019_paper.pdf)|-|74|\n|ICCV|[Joint Demosaicking and Denoising by Fine-Tuning of Bursts of Raw Images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FEhret_Joint_Demosaicking_and_Denoising_by_Fine-Tuning_of_Bursts_of_Raw_ICCV_2019_paper.pdf)|-|34|\n|ICCV|[Fully Convolutional Pixel Adaptive Image Denoiser](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FCha_Fully_Convolutional_Pixel_Adaptive_Image_Denoiser_ICCV_2019_paper.pdf)|[Keras](https:\u002F\u002Fgithub.com\u002Fcsm9493\u002FFC-AIDE-Keras)|27|\n|ICCV|[Enhancing Low Light Videos by Exploring High Sensitivity Camera Noise](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWang_Enhancing_Low_Light_Videos_by_Exploring_High_Sensitivity_Camera_Noise_ICCV_2019_paper.pdf)|-|14|\n|ICCV|[CIIDefence: Defeating Adversarial Attacks by Fusing Class-Specific Image Inpainting and Image Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FGupta_CIIDefence_Defeating_Adversarial_Attacks_by_Fusing_Class-Specific_Image_Inpainting_and_ICCV_2019_paper.pdf)|-|21|\n|ICCV|[Real Image Denoising with Feature Attention](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.07396.pdf)|-|192|\n|CVPRW|[GRDN:Grouped Residual Dense Network for Real Image Denoising and GAN-based Real-world Noise Modeling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FKim_GRDNGrouped_Residual_Dense_Network_for_Real_Image_Denoising_and_GAN-Based_CVPRW_2019_paper.pdf)|-|65|\n|CVPRW|[Learning raw image 
denoising with bayer pattern unification and bayer preserving augmentation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FLiu_Learning_Raw_Image_Denoising_With_Bayer_Pattern_Unification_and_Bayer_CVPRW_2019_paper.pdf)|-|29|\n|CVPRW|[Deep iterative down-up CNN for image denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FYu_Deep_Iterative_Down-Up_CNN_for_Image_Denoising_CVPRW_2019_paper.pdf)|-|69|\n|CVPRW|[Densely Connected Hierarchical Network for Image Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FPark_Densely_Connected_Hierarchical_Network_for_Image_Denoising_CVPRW_2019_paper.pdf)|-|55|\n|CVPRW|[ViDeNN: Deep Blind Video Denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FClaus_ViDeNN_Deep_Blind_Video_Denoising_CVPRW_2019_paper.pdf)|-|42|\n|CVPRW|[Real Photographs Denoising With Noise Domain Adaptation and Attentive Generative Adversarial Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FLin_Real_Photographs_Denoising_With_Noise_Domain_Adaptation_and_Attentive_Generative_CVPRW_2019_paper.pdf)|-|15|\n|CVPRW|[Learning Deep Image Priors for Blind Image Denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FHou_Learning_Deep_Image_Priors_for_Blind_Image_Denoising_CVPRW_2019_paper.pdf)|-|4|\n|ICIP|[DVDnet: A fast network for deep video denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.11890.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fm-tassano\u002Fdvdnet)|45|RGB|Video|AWGN|\n|ICIP|[Multi-kernel prediction networks for denoising of burst images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.05392.pdf)|-|17|\n|ICIP|A non-local cnn for video denoising|-|31|\n|AAAI|Adaptation Strategies for Applying AWGN-based Denoiser to Realistic Noise|-|4|\n|arxiv|[When 
AWGN-based Denoiser Meets Real Noises](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.03485.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fyzhouas\u002FPD-Denoising-pytorch)|29|\n|arxiv|[Generating training data for denoising real rgb images via camera pipeline simulation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.08825.pdf)|-|19|\n|arxiv|[Learning Deformable Kernels for Image and Video Denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.06903.pdf)|-|24|\n|arxiv|[Gan2gan: Generative noise learning for blind image denoising with single noisy images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.10488.pdf)|-|12|\n\n## 2018  \n\n|Pub|Title|Code|Cite|\n|:---:|:---:|:---:|:---:|\n|TIP|Weighted Tensor Rank-1 Decomposition for Nonlocal Image Denoising|-|19|\n|TIP|Towards Optimal Denoising of Image Contrast|-|8|\n|TIP|[Time-of-Flight Range Measurement in Low- sensing Environment : Noise Analysis and Complex-domain Non-local Denoising](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FMihail_Georgiev4\u002Fpublication\u002F323233188_Time-of-Flight_Range_Measurement_in_Low-Sensing_Environment_Noise_Analysis_and_Complex-Domain_Non-Local_Denoising\u002Flinks\u002F5b2373750f7e9b0e374893a7\u002FTime-of-Flight-Range-Measurement-in-Low-Sensing-Environment-Noise-Analysis-and-Complex-Domain-Non-Local-Denoising.pdf)|-|10|\n|TIP|[Statistical Nearest Neighbors for Image Denoising](https:\u002F\u002Fresearch.nvidia.com\u002Fsites\u002Fdefault\u002Ffiles\u002Fpubs\u002F2018-09_Statistical-Nearest-Neighbors\u002FStatistical%20Nearest%20Neighbors%20for%20Image%20Denoising.pdf)|-|29|\n|TIP|[Joint Denoising \u002F Compression of Image Contours via Shape Prior and Context Tree](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.00268.pdf)|-|5|\n|TIP|[Image Restoration by Iterative Denoising and Backward Projections](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.06647.pdf)|-|110|\n|TIP|Corrupted reference image quality assessment of denoised images|-|11|\n|TIP|[FFDNet: Toward 
a fast and flexible solution for CNN-based image denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.04026.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002Fcszn\u002FFFDNet)|1103|\n|TIP|[External prior guided internal prior learning for real-world noisy image denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.04505.pdf)|-|92|\n|TIP|[Class-aware fully convolutional Gaussian and Poisson denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.06562.pdf)|[Tensorflow](https:\u002F\u002Fgithub.com\u002FTalRemez\u002Fdeep_class_aware_denoising)|54|\n|TIP|[VIDOSAT: High-dimensional sparsifying transform learning for online video denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.00947.pdf)|-|23|\n|TIP|[Effective and fast estimation for image sensor noise via constrained weighted least squares](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FJiantao_Zhou\u002Fpublication\u002F323563338_Effective_and_Fast_Estimation_for_Image_Sensor_Noise_Via_Constrained_Weighted_Least_Squares\u002Flinks\u002F5acdcaa6a6fdcc87840afac1\u002FEffective-and-Fast-Estimation-for-Image-Sensor-Noise-Via-Constrained-Weighted-Least-Squares.pdf)|-|20|\n|ToG|Denoising with kernel prediction and asymmetric loss functions|-|106|\n|TMM|Gradient prior-aided cnn denoiser with separable convolution-based optimization of feature dimension|-|22|\n|NIPS|[Training deep learning based denoisers without ground truth data](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7587-training-deep-learning-based-denoisers-without-ground-truth-data.pdf)|-|75|\n|ICML|[Noise2Noise: Learning Image Restoration without Clean Data](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.04189.pdf)|-|758|\n|CVPR|[Burst denoising with kernel prediction networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMildenhall_Burst_Denoising_With_CVPR_2018_paper.pdf)|-|224|\n|CVPR|[Image Blind Denoising With Generative Adversarial Network Based Noise 
Modeling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Image_Blind_Denoising_CVPR_2018_paper.pdf)|-|352|\n|CVPR|[Universal Denoising Networks: A Novel CNN Architecture for Image Denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLefkimmiatis_Universal_Denoising_Networks_CVPR_2018_paper.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002Fcig-skoltech\u002FUDNet)|209|\n|ECCV|[Deep burst denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FClement_Godard_Deep_Burst_Denoising_ECCV_2018_paper.pdf)|-|74|\n|ECCV|[Deep boosting for image denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FChang_Chen_Deep_Boosting_for_ECCV_2018_paper.pdf)|-|50|\n|ECCV|[A trilateral weighted sparse coding scheme for real-world image denoising](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FXU_JUN_A_Trilateral_Weighted_ECCV_2018_paper.pdf)|-|180|\n|ECCV|[Deep image demosaicking using a cascade of convolutional residual denoising networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FFilippos_Kokkinos_Deep_Image_Demosaicking_ECCV_2018_paper.pdf)|-|68|\n|IJCAI|[Connecting image denoising and high-level vision tasks via deep learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.01826.pdf)|-|70|\n|IJCAI|[When image denoising meets high-level vision tasks: A deep learning approach](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.04284.pdf)|-|160|\n|JVCIR|[RENOIR - A dataset for real low-light image noise reduction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1409.8230.pdf)|-|106|\n|TCI|[Convolutional neural networks for noniterative reconstruction of compressively sensed images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.04669.pdf)|-|83|\n|ACCV|[Dn-resnet: Efficient deep residual network for image 
denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.06766.pdf)|-|22|\n|ICIP|[Image Denoising for Image Retrieval by Cascading a Deep Quality Assessment Network](http:\u002F\u002Fwww.ee.iisc.ac.in\u002Fnew\u002Fpeople\u002Ffaculty\u002Fsoma.biswas\u002FPapers\u002Fbiju_icip2018.pdf)|-|9|\n|arxiv|[Correction by projection: Denoising images with generative adversarial networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.04477.pdf)|-|47|\n|arxiv|[Non-local video denoising by CNN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.12758.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Faxeldavy\u002Fvnlnet)|31|\n|arxiv|[Iterative residual network for deep joint image demosaicking and denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.06403.pdf)|-|9|\n|arxiv|[Fully convolutional pixel adaptive image denoiser](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07569.pdf)|-|27|\n|arxiv|[Fast, trainable, multiscale denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.06130.pdf)|-|6|\n|arxiv|[Deep learning for image denoising: a survey](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.05052.pdf)|-|90|\n\n## 2017  \n\n|Pub|Title|Code|Cite|\n|:---:|:---:|:---:|:---:|\n|TIP|[Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1608.03981.pdf)|-|4387|\n|TIP|Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise|-|29|\n|TIP|Reweighted Low-Rank Matrix Analysis with Structural Smoothness for Image Denoising|-|40|\n|TIP|Category-specific object image denoising|-|31|\n|TIP|[Affine Non-Local Means Image Denoising](https:\u002F\u002Frepositori.upf.edu\u002Fbitstream\u002Fhandle\u002F10230\u002F37095\u002Fballester_trans26_affi.pdf?sequence=1&isAllowed=y)|-|39|\n|CVPR|[Image Denoising via CNNs: An Adversarial Approach](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017_workshops\u002Fw12\u002Fpapers\u002FDivakar_Image_Denoising_via_CVPR_2017_paper.pdf)|-|71|\n|CVPR|[Non-local color 
image denoising with convolutional neural networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLefkimmiatis_Non-Local_Color_Image_CVPR_2017_paper.pdf)|-|274|\n|CVPR|[Learning Deep CNN Denoiser Prior for Image Restoration](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FZhang_Learning_Deep_CNN_CVPR_2017_paper.pdf)|-|1277|\n|ICCV|[Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FMeinhardt_Learning_Proximal_Operators_ICCV_2017_paper.pdf)|-|246|\n|ICCV|[Multi-channel Weighted Nuclear Norm Minimization for Real Color Image Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FXu_Multi-Channel_Weighted_Nuclear_ICCV_2017_paper.pdf)|-|230|\n|ICCV|[Joint Adaptive Sparsity and Low-Rankness on the Fly: An Online Tensor Reconstruction Scheme for Video Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FWen_Joint_Adaptive_Sparsity_ICCV_2017_paper.pdf)|-|40|\n|ICCV|[Blob Reconstruction Using Unilateral Second Order Gaussian Kernels with Application to High-ISO Long-Exposure Image Denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FWang_Blob_Reconstruction_Using_ICCV_2017_paper.pdf)|-|10|\n|ICIP|[Image denoising using group sparsity residual and external nonlocal self-similarity prior](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1701.00723.pdf)|-|7|\n|arxiv|[Block-matching convolutional neural network for image denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.00524.pdf)|-|50|\n|arxiv|[Learning pixel-distribution prior with wider convolution for image denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.09135.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002Fcswin\u002FWIN)|19|\n|arxiv|[Chaining identity mapping modules for image 
denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1712.02933.pdf)|-|12|\n|ICTAI|[Dilated deep residual network for image denoising](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.05473.pdf)|-|73|\n\n## before 2017  \n\n|Year|Publication|Title|Code|Citation|\n|:---:|:---:|:---:|:---:|:---:|\n|2016|CVPR|[Deep Gaussian conditional random field network: A model-based deep network for discriminative denoising](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FVemulapalli_Deep_Gaussian_Conditional_CVPR_2016_paper.pdf)|-|68|\n|2016|CVPR|[From Noise Modeling to Blind Image Denoising](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FZhu_From_Noise_Modeling_CVPR_2016_paper.pdf)|-|67|\n|2016|TIP|Patch-based video denoising with optical flow estimation|-|99|\n|2016|ToG|Deep joint demosaicking and denoising|-|336|\n|2016|ICASSP|Fast depth image denoising and enhancement using a deep convolutional network|-|62|\n|2015|ICCV|[An efficient statistical method for image noise level estimation](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FChen_An_Efficient_Statistical_ICCV_2015_paper.pdf)|-|184|\n|2015|TIP|Image-specific prior adaptation for denoising|-|19|\n|2015|IPOL|[The noise clinic: a blind image denoising algorithm](http:\u002F\u002Fwww.ipol.im\u002Fpub\u002Fart\u002F2015\u002F125\u002Farticle_lr.pdf)|-|112|\n|2014|TIP|Practical signal-dependent noise parameter estimation from a single noisy image|-|86|\n|2014|-|[Photon, Poisson Noise](http:\u002F\u002Fpeople.csail.mit.edu\u002Fhasinoff\u002Fpubs\u002Fhasinoff-photon-2011-preprint.pdf)|-|107|\n|2012|CVPR|[Image denoising: Can plain neural networks compete with BM3D?](https:\u002F\u002Fhcburger.com\u002Ffiles\u002Fneuraldenoising.pdf)|-|1246|\n|2012|ICIP|The dominance of Poisson noise in color digital cameras|-|29|\n|2009|SP|[Clipped noisy images: Heteroskedastic modeling and practical 
denoising](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FAlessandro_Foi\u002Fpublication\u002F220227880_Clipped_noisy_images_Heteroskedastic_modeling_and_practical_denoising\u002Flinks\u002F5b7d594c299bf1d5a71c4b11\u002FClipped-noisy-images-Heteroskedastic-modeling-and-practical-denoising.pdf)|-|129|\n|2008|TIP|[Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F194121585.pdf)|Matlab|723|\n|2007|TIP|[Image denoising by sparse 3-D transform-domain collaborative filtering](http:\u002F\u002Fweb.eecs.utk.edu\u002F~hqi\u002Fece692\u002Freferences\u002Fnoise-BM3D-tip07.pdf)|-|7357|\n|2007|TPAMI|[Automatic estimation and removal of noise from a single image](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.228.3525&rep=rep1&type=pdf)|-|599|\n|2005|CVPR|[A non-local algorithm for image denoising](http:\u002F\u002Faudio.rightmark.org\u002Flukin\u002Fmsu\u002FNonLocal.pdf)|-|7477|\n|2019|Books|CMOS: Circuit Design, Layout, and Simulation: Fourth Edition|-|5390|\n|2018|Books|Denoising of photographic images and video: fundamentals, open challenges and new trends|-|14|\n","# 令人惊叹的去噪\n\n为了更好地理解这些论文，我们可以从三个主要方面将其划分为不同的类别。  \n有时，原始域去噪论文会使用一些ISP处理将图像转换为sRGB域，因此使用“Both”来涵盖这种情况。  \n同样，视频去噪论文有时也会退化为成簇图像去噪，甚至单张图像去噪，因此始终使用“Video”标签来覆盖这类情况。\n\n* 颜色空间\n  * RGB\n  * 原始域\n  * 两者皆有\n\n* 图像类型\n  * 单张\n  * 成簇图像\n  * 视频\n\n* 噪声模型  \n  * AWGN（加性高斯白噪声模型）  \n  * PG（泊松-高斯噪声模型）  \n  * GAN（基于生成对抗网络的噪声模型）  \n  * Real（相机或单反设备中的真实噪声模型）  \n  * 先验知识\n    * 低秩\n    * 稀疏性\n    * 自相似性\n\n## 基准数据集  \n\n* SIDD，CVPR 2018，引用次数256\n  * [面向智能手机相机的高质量去噪数据集](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAbdelhamed_A_High-Quality_Denoising_CVPR_2018_paper.pdf)\n  * [Matlab代码](https:\u002F\u002Fgithub.com\u002FAbdoKamel\u002Fsidd-ground-truth-image-estimation)\n* RENOIR，JVCIR 2018，引用次数106\n  * 
[RENOIR——用于真实低光照图像降噪的数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1409.8230.pdf)\n  * [数据集链接已失效](http:\u002F\u002Fadrianbarburesearch.blogspot.com\u002Fp\u002Frenoir-dataset.html)\n* PolyU，arXiv 2018，引用次数108\n  * [真实世界噪声图像去噪：一个新的基准](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.02603.pdf)\n  * [Matlab代码](https:\u002F\u002Fgithub.com\u002Fcsjunxu\u002FPolyU-Real-World-Noisy-Images-Dataset)\n* SID，CVPR 2018，引用次数595\n  * [学习在黑暗中看清](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Learning_to_See_CVPR_2018_paper.pdf)\n  * [TensorFlow代码](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FLearning-to-See-in-the-Dark)\n* DND，CVPR 2017，引用次数296\n  * [基于真实照片的去噪算法基准测试](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FPlotz_Benchmarking_Denoising_Algorithms_CVPR_2017_paper.pdf)\n  * [主页](https:\u002F\u002Fnoise.visinf.tu-darmstadt.de\u002F)\n* Nam，CVPR 2016，引用次数148\n  * [跨通道图像噪声建模的整体方法及其在图像去噪中的应用](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FNam_A_Holistic_Approach_CVPR_2016_paper.pdf)\n\n# 自监督去噪\n视频去噪\n+ [无监督深度视频去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fhtml\u002FSheth_Unsupervised_Deep_Video_Denoising_ICCV_2021_paper.html)\n  + ICCV 2021, UDVD\n+ [具有更密集感受野的循环自监督视频去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.03608.pdf)\n  + ACM MM 2023, 
[代码](https:\u002F\u002Fgithub.com\u002FWang-XIaoDingdd\u002FRDRF)\n\n图像去噪\n\n|序号|年份|期刊\u002F会议|标题|引用次数|\n|:---:|:---:|:---:|:---:|:---:|\n|1|2018|ICML|[Noise2Noise：无需干净数据即可学习图像修复](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.04189.pdf)|1236|\n|2|2019|CVPR|[Noise2void：从单张噪声图像中学习去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FKrull_Noise2Void_-_Learning_Denoising_From_Single_Noisy_Images_CVPR_2019_paper.html)|748|\n|3|2019|ICML|[Noise2self：通过自监督进行盲去噪](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fbatson19a.html)|441|\n|4|2019|NeurIPS|[高质量自监督深度图像去噪](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F8920-high-quality-self-supervised-deep-image-denoising)|247|\n|5|2019|arxiv|[使用自一致GAN进行无监督图像噪声建模](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.05762.pdf)|13|\n|6|2020|Frontiers in Computer Science|[概率性Noise2void：无监督的内容感知去噪](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffcomp.2020.00005\u002Ffull)|119|\n|7|2020|TIP|[Noisy-as-clean：从损坏图像中学习自监督去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06878.pdf)|112|\n|8|2020|CVPR|[带有丢弃的Self2self：从单张图像中学习自监督去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FQuan_Self2Self_With_Dropout_Learning_Self-Supervised_Denoising_From_Single_Image_CVPR_2020_paper.html)|201|\n|9|2020|CVPR|[Noisier2noise：从不成对的噪声数据中学习去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FMoran_Noisier2Noise_Learning_to_Denoise_From_Unpaired_Noisy_Data_CVPR_2020_paper.html)|125|\n|10|2020|NeurIPS|[Noise2Same：优化图像去噪的自监督界](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fea6b2efbdd4255a9f1b3bbc6399b58f4-Abstract.html)|57|\n|11|2021|NeurIPS|[Noise2score：利用特威迪方法实现无清洁图像的自监督图像去噪](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F077b83af57538aa183971a2fe0971ec1-Abstract.html)|32|\n|12|2021|CVPR|[Neighbor2neighbor：从单张噪声图像中进行自监督去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR
2021\u002Fhtml\u002FHuang_Neighbor2Neighbor_Self-Supervised_Denoising_From_Single_Noisy_Images_CVPR_2021_paper.html)|135|\n|13|2021|CVPR|[Recorrupted-to-recorrupted：用于图像去噪的无监督深度学习](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FPang_Recorrupted-to-Recorrupted_Unsupervised_Deep_Learning_for_Image_Denoising_CVPR_2021_paper.html)|85|\n|14|2022|TIP|Neighbor2Neighbor：一种用于深度图像去噪的自监督框架|7|\n|15|2022|CVPR|[Ap-bsn：通过非对称PD和盲点网络实现真实世界图像的自监督去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FLee_AP-BSN_Self-Supervised_Denoising_for_Real-World_Images_via_Asymmetric_PD_and_CVPR_2022_paper.html)|27|\n|16|2022|CVPR|[CVF-SID：通过解缠噪声与图像的循环多元函数实现自监督图像去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FNeshatavar_CVF-SID_Cyclic_Multi-Variate_Function_for_Self-Supervised_Image_Denoising_by_Disentangling_CVPR_2022_paper.html)|20|\n|17|2022|CVPR|[通过自适应随机梯度朗之万动力学实现自监督深度图像修复](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FWang_Self-Supervised_Deep_Image_Restoration_via_Adaptive_Stochastic_Gradient_Langevin_Dynamics_CVPR_2022_paper.html)|7|\n|18|2022|CVPR|[利用特威迪分布和分数匹配进行噪声分布自适应的自监督图像去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FKim_Noise_Distribution_Adaptive_Self-Supervised_Image_Denoising_Using_Tweedie_Distribution_and_CVPR_2022_paper.html)|5|\n|19|2022|CVPR|[Blind2unblind：带有可见盲区的自监督图像去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FWang_Blind2Unblind_Self-Supervised_Image_Denoising_With_Visible_Blind_Spots_CVPR_2022_paper.html)|29|\n|20|2022|CVPR|[IDR：通过迭代数据精炼实现自监督图像去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fhtml\u002FZhang_IDR_Self-Supervised_Image_Denoising_via_Iterative_Data_Refinement_CVPR_2022_paper.html)|22|\n|21|2023|CVPR|[用于真实世界图像去噪的空间自适应自监督学习](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FLi_Spatially_Adaptive_
Self-Supervised_Learning_for_Real-World_Image_Denoising_CVPR_2023_paper.html)|1|\n|22|2023|CVPR|[LG-BPN：用于自监督真实世界去噪的局部与全局盲补丁网络](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FWang_LG-BPN_Local_and_Global_Blind-Patch_Network_for_Self-Supervised_Real-World_Denoising_CVPR_2023_paper.html)|0|\n|23|2023|CVPR|[零样本Noise2Noise：无需任何数据的高效图像去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FMansour_Zero-Shot_Noise2Noise_Efficient_Image_Denoising_Without_Any_Data_CVPR_2023_paper.html)|1|\n|24|2023|CVPR|[用于相关图像去噪的Patch-Craft自监督训练](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FVaksman_Patch-Craft_Self-Supervised_Training_for_Correlated_Image_Denoising_CVPR_2023_paper.html)|-|\n|25|2023|arxiv|[释放自监督图像去噪的力量：综合综述](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.00247.pdf)|-|\n|26|2023|ICCV|[用于自监督真实图像去噪的随机子样本生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2307.16825.pdf)|-|\n|27|2023|ICCV|[由分数先验引导的无监督真实世界单张图像深度变分推断去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.04682.pdf)|-|\n|28|2023|ICCV|[通过自我协作的并行生成对抗分支实现在真实场景中的无监督图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.06776.pdf)|-|\n\n# 按年份\n\n## 
2020年\n\n|期刊|标题|代码|引用次数|\n|:---:|:---:|:---:|:---:|\n|TIP|[Noisy-As-Clean：从损坏图像中学习自监督去噪](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F20TIP_NAC.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fcsjunxu\u002FNoisy-As-Clean-TIP2020)|47|\n|TIP|[带有高斯噪声水平学习的盲通用贝叶斯图像去噪](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=9024220)|-|43|\n|TIP|[用于图像和视频去噪的可变形卷积核学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.06903.pdf)|-|24|\n|TIP|用于图像和视频去噪的空间及时空像素聚合学习|-|10|\n|TIP|[深度图卷积图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.08448.pdf)|-|64|\n|TIP|[NLH：一种用于真实世界图像去噪的盲像素级非局部方法](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.06834.pdf)|-|34|\n|TIP|[通过序列集成学习进行图像去噪](https:\u002F\u002Fcpb-us-w2.wpmucdn.com\u002Fblog.nus.edu.sg\u002Fdist\u002F8\u002F10877\u002Ffiles\u002F2020\u002F03\u002FTIP2020_ensemble.pdf)|-|13|\n|TIP|[通过深度学习连接图像去噪与高层视觉任务](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.01826.pdf)|-|70|\n|CVPR|[面向图像去噪的内存高效分层神经架构搜索](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_Memory-Efficient_Hierarchical_Neural_Architecture_Search_for_Image_Denoising_CVPR_2020_paper.pdf)|-|33|\n|CVPR|[用于极端低光照Raw图像去噪的基于物理的噪声形成模型](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWei_A_Physics-Based_Noise_Formation_Model_for_Extreme_Low-Light_Raw_Denoising_CVPR_2020_paper.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002FVandermode\u002FELD)|50|\n|CVPR|[利用动态场景基准数据集进行有监督的Raw视频去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FYue_Supervised_Raw_Video_Denoising_With_a_Benchmark_Dataset_on_Dynamic_CVPR_2020_paper.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fcao-cong\u002FRViDeNet)|26|两者|视频|真实|\n|CVPR|[使用自适应实例归一化从合成到真实噪声去噪的迁移学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FKim_Transfer_Learning_From_Synthetic_to_Real-Noise_Denoising_With_Adaptive_Instance_CVPR_2020_paper.pdf)|-|60|\n|CVPR|[带丢弃的Self2Self：从单张图像中学习自监督去噪](https:\u002F\u002
Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FQuan_Self2Self_With_Dropout_Learning_Self-Supervised_Denoising_From_Single_Image_CVPR_2020_paper.pdf)|-|73|\n|CVPR|[Noisier2Noise：从未配对的噪声数据中学习去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FMoran_Noisier2Noise_Learning_to_Denoise_From_Unpaired_Noisy_Data_CVPR_2020_paper.pdf)|-|40|\n|CVPR|[带有自我引导的联合去马赛克与去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Joint_Demosaicing_and_Denoising_With_Self_Guidance_CVPR_2020_paper.pdf)|-|26|\n|CVPR|[FastDVDnet：无需光流估计的实时深度视频去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FTassano_FastDVDnet_Towards_Real-Time_Deep_Video_Denoising_Without_Flow_Estimation_CVPR_2020_paper.pdf)|-|72|RGB|视频|AWGN|\n|CVPR|[CycleISP：通过改进的数据合成实现真实图像恢复](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZamir_CycleISP_Real_Image_Restoration_via_Improved_Data_Synthesis_CVPR_2020_paper.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fswz30\u002FCycleISP)|93|\n|CVPR|[用于大卷积核有效Burst去噪的基础预测网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FXia_Basis_Prediction_Networks_for_Effective_Burst_Denoising_With_Large_Kernels_CVPR_2020_paper.pdf)|-|18|\n|CVPR|[用于图像去噪的超核神经架构搜索](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw31\u002FMozejko_Superkernel_Neural_Architecture_Search_for_Image_Denoising_CVPRW_2020_paper.pdf)|-|5|\n|ECCV|[用于单张图像去噪的空间自适应网络](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123750171.pdf)|-|34|\n|ECCV|[从Raw图像中进行真实世界Burst去噪的解耦学习方案](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123700154.pdf)|-|3|\n|ECCV|[通过时移小波变换进行Burst去噪](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123580239.pdf)|-|0|\n|ECCV|[深度图像去噪的未配对学习](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u00
2Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123490341.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002FXHWXD\u002FDBSN)|24|\n|ECCV|[双对抗网络：迈向真实世界的去噪与噪声生成](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123550035.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002FzsyOAOA\u002FDANet)|39|\n|ECCV|[学习相机感知噪声模型](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123690341.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Farcchang1236\u002FCA-NoiseGAN)|9|\n|ECCV|[移动设备上的实用深度Raw图像去噪](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123510001.pdf)|[MegEngine](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPMRID)|15|Raw|单张|PG|\n|ECCV|[为图像去噪重建噪声流形](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123540596.pdf)|-|2|\n|NN|[图像去噪中的深度学习：综述](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.13171.pdf)|-|247|\n|WACV|[利用深度神经网络识别自然图像去噪中的重复模式](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fpapers\u002FXia_Identifying_Recurring_Patterns_with_Deep_Neural_Networks_for_Natural_Image_WACV_2020_paper.pdf)|-|11|\n|ICASSP|[注意力机制增强的内核预测网络用于Burst图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.08313.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Fz-bingo\u002FAttention-Mechanism-Enhanced-KPN)|4|\n|Arxiv|[利用短曝光和长曝光Raw图像对进行低光照图像恢复](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00199.pdf)|-|6|\n\n## 2019年\n\n|期刊|标题|代码|引用数|\n|:---:|:---:|:---:|:---:|\n|TIP|[图像去噪器的最优组合](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.06712.pdf)|-|13|\n|TIP|[基于协同与卷积滤波深度融合的高ISO 
JPEG图像去噪](https:\u002F\u002Fsci-hub.se\u002Fhttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8684332\u002F)|-|6|\n|TIP|[基于非局部主成分分析的纹理变化自适应图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.11282.pdf)|-|11|\n|TIP|[利用块对角表示进行彩色图像和多光谱图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.03954.pdf)|-|7|\n|TIP|基于切比雪夫和自适应可定向总变差模型的图像去噪|-|23|\n|TIP|[利用残差去噪网络进行迭代联合图像去马赛克与去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.06403.pdf)|-|55|\n|TIP|具有跨通道噪声建模的彩色图像内容自适应噪声估计|-|4|\n|TPAMI|[基于深度提升的真实世界图像去噪](https:\u002F\u002Fsci-hub.se\u002Fhttps:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8733117\u002F)|[TensorFlow](https:\u002F\u002Fgithub.com\u002Fngchc\u002FdeepBoosting)|29|\n|JVCIR|Vst-net：受方差稳定变换启发的泊松去噪网络|[Matlab](https:\u002F\u002Fgithub.com\u002Fyqx7150\u002FVST-Net)|14|\n|NIPS|[变分去噪网络：迈向盲噪声建模与去除](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8446-variational-denoising-network-toward-blind-noise-modeling-and-removal.pdf)|-|110|\n|NIPS|[高质量自监督深度图像去噪](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8920-high-quality-self-supervised-deep-image-denoising.pdf)|-|138|\n|ICML|[Noise2Self：基于自监督的盲去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.11365.pdf)|[PyTorch](https:\u002F\u002Fgithub.com\u002Fczbiohub\u002Fnoise2self)|244|\n|ICML|[插拔式方法在训练得当的去噪器下可证明收敛](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.05406.pdf)|-|125|\n|CVPR|[利用对抗学习实现ToF数据去噪的无监督域适应](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FAgresti_Unsupervised_Domain_Adaptation_for_ToF_Data_Denoising_With_Adversarial_Learning_CVPR_2019_paper.pdf)|-|26|\n|CVPR|[具有独立且分段同分布噪声建模的鲁棒子空间聚类](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLi_Robust_Subspace_Clustering_With_Independent_and_Piecewise_Identically_Distributed_Noise_CVPR_2019_paper.pdf)|-|15|\n|CVPR|[迈向真实照片的卷积盲去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FGuo_Toward_Convolutional_Blind_Denoising_of_Real_Photographs_CVPR_2019
_paper.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002FGuoShi28\u002FCBDNet)|458|\n|CVPR|[FOCNet：用于图像去噪的分数阶最优控制网络](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FJia_FOCNet_A_Fractional_Optimal_Control_Network_for_Image_Denoising_CVPR_2019_paper.pdf)|-|62|\n|CVPR|[Noise2void——从单张噪声图像中学习去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKrull_Noise2Void_-_Learning_Denoising_From_Single_Noisy_Images_CVPR_2019_paper.pdf)|-|406|\n|CVPR|[为学习原始图像去噪而对图像进行“反处理”](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FBrooks_Unprocessing_Images_for_Learned_Raw_Denoising_CVPR_2019_paper.pdf)|-|186|\n|CVPR|[无需真值和图像先验，仅从欠采样测量中训练基于深度学习的图像去噪器](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhussip_Training_Deep_Learning_Based_Image_Denoisers_From_Undersampled_Measurements_Without_CVPR_2019_paper.pdf)|-|28|\n|CVPR|[通过帧间训练实现模型无关的视频去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FEhret_Model-Blind_Video_Denoising_via_Frame-To-Frame_Training_CVPR_2019_paper.pdf)|[其他](https:\u002F\u002Fgithub.com\u002Ftehret\u002Fblind-denoising)|44|\n|ICCV|[用于快速图像去噪的自引导网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FGu_Self-Guided_Network_for_Fast_Image_Denoising_ICCV_2019_paper.pdf)|-|78|\n|ICCV|[噪声流：基于条件归一化流的噪声建模](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FAbdelhamed_Noise_Flow_Noise_Modeling_With_Conditional_Normalizing_Flows_ICCV_2019_paper.pdf)|-|74|\n|ICCV|[通过微调原始图像序列实现联合去马赛克与去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FEhret_Joint_Demosaicking_and_Denoising_by_Fine-Tuning_of_Bursts_of_Raw_ICCV_2019_paper.pdf)|-|34|\n|ICCV|[全卷积像素自适应图像去噪器](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FCha_Fully_Convolutional_Pixel_Adaptive_Image_Denoiser_ICCV_2019_paper.pdf)|[Keras](https:\u002F\u002Fgithub.com\u002Fcs
m9493\u002FFC-AIDE-Keras)|27|\n|ICCV|[通过探索高感光度相机噪声来增强低光照视频](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWang_Enhancing_Low_Light_Videos_by_Exploring_High_Sensitivity_Camera_Noise_ICCV_2019_paper.pdf)|-|14|\n|ICCV|[CIIDefence：通过融合特定类别图像修复与图像去噪来抵御对抗攻击](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FGupta_CIIDefence_Defeating_Adversarial_Attacks_by_Fusing_Class-Specific_Image_Inpainting_and_ICCV_2019_paper.pdf)|-|21|\n|ICCV|[带有特征注意力的真实图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.07396.pdf)|-|192|\n|CVPRW|[GRDN：用于真实图像去噪及GAN驱动的真实噪声建模的分组残差密集网络](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FKim_GRDNGrouped_Residual_Dense_Network_for_Real_Image_Denoising_and_GAN-Based_CVPRW_2019_paper.pdf)|-|65|\n|CVPRW|[通过拜耳模式统一和拜耳保持增强来学习原始图像去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FLiu_Learning_Raw_Image_Denoising_With_Bayer_Pattern_Unification_and_Bayer_CVPRW_2019_paper.pdf)|-|29|\n|CVPRW|[用于图像去噪的深度迭代下采样-上采样CNN](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FYu_Deep_Iterative_Down-Up_CNN_for_Image_Denoising_CVPRW_2019_paper.pdf)|-|69|\n|CVPRW|[用于图像去噪的密集连接层次网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FPark_Densely_Connected_Hierarchical_Network_for_Image_Denoising_CVPRW_2019_paper.pdf)|-|55|\n|CVPRW|[ViDeNN：深度盲视频去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FClaus_ViDeNN_Deep_Blind_Video_Denoising_CVPRW_2019_paper.pdf)|-|42|\n|CVPRW|[通过噪声域适应和注意力生成对抗网络对真实照片进行去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FLin_Real_Photographs_Denoising_With_Noise_Domain_Adaptation_and_Attentive_Generative_CVPRW_2019_paper.pdf)|-|15|\n|CVPRW|[为盲图像去噪学习深度图像先验](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FNTIRE\u002FHou_
Learning_Deep_Image_Priors_for_Blind_Image_Denoising_CVPRW_2019_paper.pdf)|-|4|\n|ICIP|[DVDnet：用于深度视频去噪的快速网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.11890.pdf)|[PyTorch](https:\u002F\u002Fgithub.com\u002Fm-tassano\u002Fdvdnet)|45|RGB|视频|AWGN|\n|ICIP|[用于批量图像去噪的多核预测网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.05392.pdf)|-|17|\n|ICIP|用于视频去噪的非局部CNN|-|31|\n|AAAI|将基于AWGN的去噪器应用于现实噪声时的适应策略|-|4|\n|arxiv|[当基于AWGN的去噪器遇到真实噪声时](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.03485.pdf)|[PyTorch](https:\u002F\u002Fgithub.com\u002Fyzhouas\u002FPD-Denoising-pytorch)|29|\n|arxiv|[通过相机管线仿真生成用于真实RGB图像去噪的训练数据](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.08825.pdf)|-|19|\n|arxiv|[学习用于图像和视频去噪的可变形内核](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.06903.pdf)|-|24|\n|arxiv|[Gan2gan：利用单张噪声图像进行盲图像去噪的生成式噪声学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.10488.pdf)|-|12|\n\n## 2018  \n\n|期刊|标题|代码|引用|\n|:---:|:---:|:---:|:---:|\n|TIP|用于非局部图像去噪的加权张量秩1分解|-|19|\n|TIP|迈向图像对比度的最佳去噪|-|8|\n|TIP|[低感知环境下的飞行时间测距：噪声分析与复数域非局部去噪](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FMihail_Georgiev4\u002Fpublication\u002F323233188_Time-of-Flight_Range_Measurement_in_Low-Sensing_Environment_Noise_Analysis_and_Complex-Domain_Non-Local_Denoising\u002Flinks\u002F5b2373750f7e9b0e374893a7\u002FTime-of-Flight-Range-Measurement-in-Low-Sensing-Environment-Noise-Analysis-and-Complex-Domain-Non-Local-Denoising.pdf)|-|10|\n|TIP|[用于图像去噪的统计近邻](https:\u002F\u002Fresearch.nvidia.com\u002Fsites\u002Fdefault\u002Ffiles\u002Fpubs\u002F2018-09_Statistical-Nearest-Neighbors\u002FStatistical%20Nearest%20Neighbors%20for%20Image%20Denoising.pdf)|-|29|\n|TIP|[通过形状先验和上下文树进行图像轮廓的联合去噪\u002F压缩](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.00268.pdf)|-|5|\n|TIP|[通过迭代去噪和反向投影进行图像恢复](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.06647.pdf)|-|110|\n|TIP|去噪后图像的损坏参考图像质量评估|-|11|\n|TIP|[FFDNet：面向基于CNN的图像去噪的快速灵活解决方案](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.04026.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002Fcszn\u
002FFFDNet)|1103|\n|TIP|[外部先验引导的内部先验学习用于真实世界噪声图像的去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.04505.pdf)|-|92|\n|TIP|[类感知全卷积高斯和泊松去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.06562.pdf)|[Tensorflow](https:\u002F\u002Fgithub.com\u002FTalRemez\u002Fdeep_class_aware_denoising)|54|\n|TIP|[VIDOSAT：用于在线视频去噪的高维稀疏变换学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.00947.pdf)|-|23|\n|TIP|[通过约束加权最小二乘法对图像传感器噪声进行有效且快速的估计](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FJiantao_Zhou\u002Fpublication\u002F323563338_Effective_and_Fast_Estimation_for_Image_Sensor_Noise_Via_Constrained_Weighted_Least_Squares\u002Flinks\u002F5acdcaa6a6fdcc87840afac1\u002FEffective-and-Fast-Estimation-for-Image-Sensor-Noise-Via-Constrained-Weighted-Least-Squares.pdf)|-|20|\n|ToG|使用核预测和非对称损失函数进行去噪|-|106|\n|TMM|基于梯度先验辅助的CNN去噪器，采用可分离卷积优化特征维度|-|22|\n|NIPS|[无需真实标签数据训练基于深度学习的去噪器](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7587-training-deep-learning-based-denoisers-without-ground-truth-data.pdf)|-|75|\n|ICML|[Noise2Noise：无需干净数据学习图像修复](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.04189.pdf)|-|758|\n|CVPR|[利用核预测网络进行突发图像去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMildenhall_Burst_Denoising_With_CVPR_2018_paper.pdf)|-|224|\n|CVPR|[基于生成对抗网络噪声建模的图像盲去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Image_Blind_Denoising_CVPR_2018_paper.pdf)|-|352|\n|CVPR|[通用去噪网络：一种用于图像去噪的新型CNN架构](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLefkimmiatis_Universal_Denoising_Networks_CVPR_2018_paper.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002Fcig-skoltech\u002FUDNet)|209|\n|ECCV|[深度突发去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FClement_Godard_Deep_Burst_Denoising_ECCV_2018_paper.pdf)|-|74|\n|ECCV|[用于图像去噪的深度提升](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FChang_Chen_Deep_Boosting_for_ECCV_2018_paper.pdf)|-|50|\n|ECCV|[一种用
于真实世界图像去噪的三边加权稀疏编码方案](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FXU_JUN_A_Trilateral_Weighted_ECCV_2018_paper.pdf)|-|180|\n|ECCV|[使用卷积残差去噪网络级联进行深度图像去马赛克](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FFilippos_Kokkinos_Deep_Image_Demosaicking_ECCV_2018_paper.pdf)|-|68|\n|IJCAI|[通过深度学习将图像去噪与高层视觉任务连接起来](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.01826.pdf)|-|70|\n|IJCAI|[当图像去噪与高层视觉任务相遇时：一种深度学习方法](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.04284.pdf)|-|160|\n|JVCIR|[RENOIR——一个用于真实低光照图像降噪的数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1409.8230.pdf)|-|106|\n|TCI|[用于无迭代重建压缩感知图像的卷积神经网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.04669.pdf)|-|83|\n|ACCV|[Dn-resnet：高效的深度残差网络用于图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.06766.pdf)|-|22|\n|ICIP|[通过级联深度质量评估网络实现用于图像检索的图像去噪](http:\u002F\u002Fwww.ee.iisc.ac.in\u002Fnew\u002Fpeople\u002Ffaculty\u002Fsoma.biswas\u002FPapers\u002Fbiju_icip2018.pdf)|-|9|\n|arxiv|[投影校正：利用生成对抗网络进行图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.04477.pdf)|-|47|\n|arxiv|[基于CNN的非局部视频去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.12758.pdf)|[Pytorch](https:\u002F\u002Fgithub.com\u002Faxeldavy\u002Fvnlnet)|31|\n|arxiv|[用于深度联合图像去马赛克和去噪的迭代残差网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.06403.pdf)|-|9|\n|arxiv|[全卷积像素自适应图像去噪器](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07569.pdf)|-|27|\n|arxiv|[快速、可训练的多尺度去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.06130.pdf)|-|6|\n|arxiv|[用于图像去噪的深度学习：综述](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.05052.pdf)|-|90|\n\n## 2017年  
\n\n|出版物|标题|代码|引用|\n|:---:|:---:|:---:|:---:|\n|TIP|[超越高斯去噪器：用于图像去噪的深度CNN残差学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1608.03981.pdf)|-|4387|\n|TIP|通过图像传感器噪声的泊松混合建模改进去噪|-|29|\n|TIP|结合结构平滑性的重加权低秩矩阵分析用于图像去噪|-|40|\n|TIP|特定类别目标图像去噪|-|31|\n|TIP|[仿射非局部均值图像去噪](https:\u002F\u002Frepositori.upf.edu\u002Fbitstream\u002Fhandle\u002F10230\u002F37095\u002Fballester_trans26_affi.pdf?sequence=1&isAllowed=y)|-|39|\n|CVPR|[基于CNN的图像去噪：一种对抗性方法](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017_workshops\u002Fw12\u002Fpapers\u002FDivakar_Image_Denoising_via_CVPR_2017_paper.pdf)|-|71|\n|CVPR|[使用卷积神经网络的非局部彩色图像去噪](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLefkimmiatis_Non-Local_Color_Image_CVPR_2017_paper.pdf)|-|274|\n|CVPR|[为图像恢复学习深度CNN去噪先验](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FZhang_Learning_Deep_CNN_CVPR_2017_paper.pdf)|-|1277|\n|ICCV|[学习邻近算子：利用去噪网络正则化逆成像问题](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FMeinhardt_Learning_Proximal_Operators_ICCV_2017_paper.pdf)|-|246|\n|ICCV|[用于真实彩色图像去噪的多通道加权核范数最小化](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FXu_Multi-Channel_Weighted_Nuclear_ICCV_2017_paper.pdf)|-|230|\n|ICCV|[在线联合自适应稀疏性和低秩性：用于视频去噪的在线张量重建方案](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FWen_Joint_Adaptive_Sparsity_ICCV_2017_paper.pdf)|-|40|\n|ICCV|[使用单侧二阶高斯核进行斑点重建及其在高ISO长曝光图像去噪中的应用](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FWang_Blob_Reconstruction_Using_ICCV_2017_paper.pdf)|-|10|\n|ICIP|[利用组稀疏残差和外部非局部自相似先验进行图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1701.00723.pdf)|-|7|\n|arxiv|[基于块匹配的卷积神经网络用于图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.00524.pdf)|-|50|\n|arxiv|[利用更宽的卷积学习像素分布先验进行图像去噪](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.09135.pdf)|[Matlab](https:\u002F\u002Fgithub.com\u002Fcswin\u002FWIN)|19|\n|arxiv|[用于图像去噪的恒等映射模块串联](https:\u0
02F\u002Farxiv.org\u002Fpdf\u002F1712.02933.pdf)|-|12|\n|ICTAI|[用于图像去噪的空洞深度残差网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.05473.pdf)|-|73|\n\n## 2017年之前  \n\n|年份|出版物|标题|代码|引用|\n|:---:|:---:|:---:|:---:|:---:|\n|2016|CVPR|[深度高斯条件随机场网络：一种基于模型的深度网络用于判别式去噪](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FVemulapalli_Deep_Gaussian_Conditional_CVPR_2016_paper.pdf)|-|68|\n|2016|CVPR|[从噪声建模到盲图像去噪](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FZhu_From_Noise_Modeling_CVPR_2016_paper.pdf)|-|67|\n|2016|TIP|基于光流估计的分块视频去噪|-|99|\n|2016|ToG|深度联合去马赛克与去噪|-|336|\n|2016|ICASSP|利用深度卷积网络快速进行深度图像去噪与增强|-|62|\n|2015|ICCV|[一种高效的图像噪声水平估计统计方法](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FChen_An_Efficient_Statistical_ICCV_2015_paper.pdf)|-|184|\n|2015|TIP|针对去噪的图像特异性先验自适应|-|19|\n|2015|IPOL|[噪声诊所：一种盲图像去噪算法](http:\u002F\u002Fwww.ipol.im\u002Fpub\u002Fart\u002F2015\u002F125\u002Farticle_lr.pdf)|-|112|\n|2014|TIP|从单幅噪声图像中进行实用的信号相关噪声参数估计|-|86|\n|2014|-|[光子、泊松噪声](http:\u002F\u002Fpeople.csail.mit.edu\u002Fhasinoff\u002Fpubs\u002Fhasinoff-photon-2011-preprint.pdf)|-|107|\n|2012|CVPR|[图像去噪：普通神经网络能否与BM3D竞争？](https:\u002F\u002Fhcburger.com\u002Ffiles\u002Fneuraldenoising.pdf)|-|1246|\n|2012|ICIP|彩色数码相机中泊松噪声的主导地位|-|29|\n|2009|SP|[截断的噪声图像：异方差建模与实用去噪](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FAlessandro_Foi\u002Fpublication\u002F220227880_Clipped_noisy_images_Heteroskedastic_modeling_and_practical_denoising\u002Flinks\u002F5b7d594c299bf1d5a71c4b11\u002FClipped-noisy-images-Heteroskedastic-modeling-and-practical-denoising.pdf)|-|129|\n|2008|TIP|[单幅原始数据的实用泊松-高斯噪声建模与拟合](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F194121585.pdf)|Matlab|723|\n|2007|TIP|[通过稀疏三维变换域协同滤波进行图像去噪](http:\u002F\u002Fweb.eecs.utk.edu\u002F~hqi\u002Fece692\u002Freferences\u002Fnoise-BM3D-tip07.pdf)|-|7357|\n|2007|TPAMI|[自动估计并去除单幅图像中的噪声](http:\u002F\u002Fciteseerx.ist.psu.edu
\u002Fviewdoc\u002Fdownload?doi=10.1.1.228.3525&rep=rep1&type=pdf)|-|599|\n|2005|CVPR|[一种用于图像去噪的非局部算法](http:\u002F\u002Faudio.rightmark.org\u002Flukin\u002Fmsu\u002FNonLocal.pdf)|-|7477|\n|2019|书籍|CMOS：电路设计、版图与仿真：第四版|-|5390|\n|2018|书籍|摄影图像与视频去噪：基础、开放挑战及新趋势|-|14|","# Awesome-Denoise 快速上手指南\n\nAwesome-Denoise 是一个汇总了图像与视频去噪领域前沿论文、数据集及代码资源的开源列表。它涵盖了自监督学习、真实噪声建模、不同色彩空间（RGB\u002FRaw）及多种噪声类型（高斯、泊松 - 高斯、真实相机噪声等）的研究成果。\n\n本指南将帮助你快速了解该项目的核心资源分类，并引导你获取相关基准数据集和复现经典算法。\n\n## 环境准备\n\n由于 Awesome-Denoise 本身是一个资源索引库（Awesome List），而非单一的独立软件包，因此“环境准备”主要指运行列表中推荐的具体算法代码所需的通用深度学习环境。大多数现代去噪算法基于 PyTorch 或 TensorFlow。\n\n### 系统要求\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+), macOS, 或 Windows (WSL2 推荐)\n*   **GPU**: 支持 CUDA 的 NVIDIA 显卡 (推荐显存 ≥ 8GB，用于训练或处理高分辨率视频)\n*   **Python**: 3.7 或更高版本\n\n### 前置依赖\n建议创建一个独立的虚拟环境以避免依赖冲突：\n\n```bash\npython -m venv denoise_env\nsource denoise_env\u002Fbin\u002Factivate  # Linux\u002FmacOS\n# 或\ndenoise_env\\Scripts\\activate     # Windows\n```\n\n安装通用的深度学习基础库（以 PyTorch 为例，国内开发者推荐使用清华源加速）：\n\n```bash\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\npip install opencv-python numpy matplotlib scipy\n```\n\n## 安装步骤\n\nAwesome-Denoise 项目本身无需通过 `pip` 安装，只需克隆仓库即可获取完整的论文列表、数据集链接和对应代码库索引。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fcaojunxu\u002FAwesome-Denoise.git\ncd Awesome-Denoise\n```\n\n> **提示**：如果 GitHub 连接缓慢，可使用国内镜像加速：\n> ```bash\n> git clone https:\u002F\u002Fgitee.com\u002Fmirrors\u002FAwesome-Denoise.git\n> ```\n> *(注：若 Gitee 无同步镜像，请尝试配置 Git 代理或使用上述标准命令)*\n\n克隆完成后，你可以在目录中查阅 `README.md`，根据需求查找特定算法（如 `Noise2Void`, `FastDVDnet`, `CycleISP` 等）的官方代码仓库链接。\n\n## 基本使用\n\n使用流程通常为：**选择算法 -> 克隆具体代码库 -> 准备数据集 -> 运行推理\u002F训练**。以下以列表中经典的自监督去噪算法 **Noise2Void** 和其常用的基准数据集 **SIDD** 为例演示基本流程。\n\n### 1. 
获取基准数据集 (SIDD)\nSIDD (Smartphone Image Denoising Dataset) 是评估真实手机噪声去噪效果的核心数据集。\n\n*   **论文与数据主页**: [A High-Quality Denoising Dataset for Smartphone Cameras](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAbdelhamed_A_High-Quality_Denoising_CVPR_2018_paper.pdf)\n*   **Ground Truth 估计工具**: [Matlab Code](https:\u002F\u002Fgithub.com\u002FAbdoKamel\u002Fsidd-ground-truth-image-estimation)\n\n下载数据后，通常目录结构如下：\n```text\ndatasets\u002F\n└── SIDD\u002F\n    ├── train\u002F\n    │   ├── noisy\u002F\n    │   └── gt\u002F\n    └── val\u002F\n```\n\n### 2. 运行示例算法 (以 Noise2Void 为例)\n假设你从列表中找到了 Noise2Void 的官方实现（通常托管在作者的个人 GitHub 上），克隆并运行推理的典型命令如下：\n\n```bash\n# 克隆具体算法仓库 (示例地址，请以 README 中最新链接为准)\ngit clone https:\u002F\u002Fgithub.com\u002Fjuglab\u002Fn2v.git\ncd n2v\n\n# 安装该算法特定依赖\npip install -r requirements.txt\n\n# 运行单张图像去噪示例 (Python 脚本)\n# 注意：具体参数需参考该仓库的文档，此处为通用示意\npython examples\u002Fdemo_denoising.py \\\n    --input_path ..\u002Fdatasets\u002FSIDD\u002Fval\u002Fnoisy\u002Fimage_001.png \\\n    --output_path .\u002Fresults\u002Fdenoised_image_001.png \\\n    --model_type n2v\n```\n\n### 3. 
资源分类检索指南\n在 `README.md` 中，你可以利用以下标签快速定位适合你场景的工具：\n\n*   **按色彩空间**:\n    *   `RGB`: 适用于常规 sRGB 图像去噪。\n    *   `Raw`: 适用于相机原始数据去噪（通常结合 ISP 流程）。\n    *   `Both`: 同时支持两种域。\n*   **按图像类型**:\n    *   `Single`: 单帧图像去噪。\n    *   `Burst`: 连拍序列去噪。\n    *   `Video`: 视频序列去噪（利用时域信息）。\n*   **按噪声模型**:\n    *   `AWGN`: 加性高斯白噪声（合成数据常用）。\n    *   `Real`: 真实相机\u002F单反噪声（最具挑战性，推荐关注 `SIDD`, `DND`, `PolyU` 数据集相关论文）。\n    *   `Self-supervised`: 无干净真值标签的训练方法（如 `Noise2Noise`, `Blind2Unblind`）。\n\n通过查阅列表中对应的论文链接和代码仓库，你可以深入复现 2016 年至 2023 年的各类 SOTA（State-of-the-Art）去噪模型。","某计算机视觉团队正在为一款夜间安防监控摄像头开发去噪算法，急需在缺乏干净参考图的情况下提升低光照视频画质。\n\n### 没有 Awesome-Denoise 时\n- **文献检索如大海捞针**：团队成员需手动在 arXiv、CVPR 等各大会议网站逐个搜索\"self-supervised denoising\"或\"video denoising\"，耗时数周仍难以覆盖最新成果。\n- **代码复现门槛极高**：找到的论文往往缺少官方代码链接，或仓库已失效（如 RENOIR 数据集链接断裂），导致无法验证算法效果。\n- **场景匹配困难**：难以快速区分哪些模型适用于“真实相机噪声（Real）”而非简单的高斯噪声（AWGN），更不清楚哪些视频去噪方法可退化为单帧图像处理。\n- **基准测试混乱**：面对 SIDD、SID、DND 等多个数据集，缺乏统一的引用数据和适用场景说明，导致选型决策依赖主观猜测。\n\n### 使用 Awesome-Denoise 后\n- **一站式获取前沿方案**：直接通过分类标签（如 Video + Real + Self-supervised）锁定 ICCV 2021 的 UDVD 或 ACM MM 2023 的 RDRF 等最新论文，将调研时间从数周缩短至几小时。\n- **代码与数据即取即用**：每个条目均附带有效的 GitHub 代码库和数据集下载指引，甚至标注了 TensorFlow 或 Matlab 实现版本，大幅降低复现成本。\n- **精准匹配业务需求**：利用颜色空间（Raw\u002FRGB）和噪声模型（GAN\u002FReal）的分类维度，迅速排除仅适用于合成噪声的模型，锁定针对真实监控噪点的算法。\n- **权威基准辅助决策**：参考列表中清晰的引用次数（如 SID 高达 595 次）和发表 venue，快速评估算法成熟度，避免在实验阶段踩坑。\n\nAwesome-Denoise 将原本碎片化、高门槛的去噪技术调研过程，转化为高效、精准的工程选型流程，让研发团队能专注于算法落地而非资料搜集。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FoneTaken_Awesome-Denoise_a384217f.png","oneTaken",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FoneTaken_f29fc40e.jpg","Game Pattern Hard！","Megvii Research","Beijing","https:\u002F\u002Fgithub.com\u002FoneTaken",501,56,"2026-03-17T11:26:29","MIT",5,"","未说明",{"notes":87,"python":85,"dependencies":88},"Awesome-Denoise 是一个去噪论文和资源的汇总列表（Awesome List），并非一个独立的、可直接运行的软件工具或代码库。README 中列出了多个不同的研究项目（如 Noise2Void, FastDVDnet 等），每个项目都有各自独立的代码仓库和环境需求。用户需根据具体想要复现的论文，前往其对应的 GitHub 
链接查看具体的运行环境要求。部分链接提供了 PyTorch 或 TensorFlow 的实现参考。",[],[35,90,14],"插件",[92,93,94,95,96,97,98,99,100],"denoising","awesome","papers","deep-learning","code","pytorch","tensorflow","matlab","restoration","2026-03-27T02:49:30.150509","2026-04-06T15:04:42.816662",[],[]]