[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jiwei0921--SOD-CNNs-based-code-summary-":3,"tool-jiwei0921--SOD-CNNs-based-code-summary-":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,2,"2026-04-10T11:39:34",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[52,15,13,14],"语言模型",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,61],"视频",{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":78,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":79,"languages":76,"stars":80,"forks":81,"last_commit_at":82,"license":76,"difficulty_score":83,"env_os":84,"env_gpu":85,"env_ram":85,"env_deps":86,"category_tags":89,"github_topics":90,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":94,"updated_at":95,"faqs":96,"releases":132},6491,"jiwei0921\u002FSOD-CNNs-based-code-summary-","SOD-CNNs-based-code-summary-","The summary of code and paper for salient object detection with deep learning.","SOD-CNNs-based-code-summary-是一个专注于深度学习显著性目标检测（SOD）的开源资源库，旨在为研究者和开发者提供该领域最新的论文与代码汇总。显著性目标检测致力于让计算机像人眼一样快速识别图像或视频中最引人注目的区域，但在技术快速迭代的背景下，追踪从传统 2D RGB 到复杂的 3D 深度、4D 光场及视频 SOD 等多模态进展颇具挑战。\n\n该项目系统性地整理了涵盖上述所有主流方向的前沿成果，不仅提供了详细的论文列表，还附带了对应的开源代码链接，极大地降低了复现算法和对比实验的门槛。其独特亮点在于更新极为及时，近期已收录了包括 AAAI 2025、PAMI 等顶级会议期刊的最新研究，甚至涵盖了伪装目标检测等相关任务。此外，仓库还提供了数据集下载指引、评估指标说明以及性能排行榜，形成了一站式的学习与研究闭环。\n\n无论是希望快速入门的学生、需要追踪最新技术动态的科研人员，还是正在寻找基线模型进行二次开发的工程师，都能从中获得极具价值的参考。通过这份持续维护的清单，用户可以高效地把握深度学习时代显著性检测技术的发展脉络，加速科研与创新进程。","# SOD CNNs-based Read List       \n\nIn this repository, we mainly focus on deep learning based saliency methods (**2D RGB, 3D RGB-D\u002FT, Video SOD and 4D Light Field**) and provide a summary (**Code and Paper**). We hope this repo can help you to better understand saliency detection in the deep learning era.        \n\n--------------------------------------------------------------------------------------\n :heavy_exclamation_mark:  **2D SOD**: Add two AAAI'25 papers, one PAMI paper.                 \n :heavy_exclamation_mark:  **3D SOD**: Add two AAAA'25 papers, one ACM MM'24 paper.    \n :heavy_exclamation_mark:  **LF SOD**: Add two IEEE TCSVT papers, one arXiv'24 paper.   \n :heavy_exclamation_mark:  **Video SOD** :  Add one NeurIPS24 paper. \n \n [Camouflaged Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10274) is a closely-related task of SOD, with paper summary of [this link](https:\u002F\u002Fgithub.com\u002FChunmingHe\u002Fawesome-concealed-object-segmentation).\n\n:running: **We will keep updating it.** :running:    \n--------------------------------------------------------------------------------------\n\n\n------\n \n\n## Content:\n\n1. [An overview of the Paper List](#overall)\n2. [2D RGB Saliency Detection](#2DSOD) \n3. [3D RGB-D\u002FT Saliency Detection](#3DSOD) \n4. [4D Light Field Saliency Detection](#4DSOD) \n5. [Video Saliency Detection](#VSOD) \n6. [Survery and earlier Methods](#survey) \n7. [The SOD dataset download](#data) \n8. [Evaluation Metrics](#eval) \n9. [SOD Leaderboard](#leaderboard)\n\n\n\n------\n\n\u003Ca name=\"overall\">\u003C\u002Fa>   \n# Overall \n![avatar](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjiwei0921_SOD-CNNs-based-code-summary-_readme_d86585555f7c.jpg)\n    \n\u003Ca name=\"2DSOD\">\u003C\u002Fa> \n# 2D RGB Saliency Detection \u003Ca id=\"2D RGB Saliency Detection\" class=\"anchor\" href=\"2D RGB Saliency Detection\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa>    \n\n## 2025      \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n:triangular_flag_on_post: 01 | **PAMI** | Conditional Diffusion Models for Camouflaged and Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10834569)\u002F[Code](https:\u002F\u002Fgithub.com\u002FRapisurazurite\u002FCamoDiffusion)    \n:triangular_flag_on_post: 02 | **AAAI** | MSV-PCT: Multi-Sparse-View Enhanced Transformer Framework for Salient Object Detection in Point Clouds | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F32892)\u002FCode      \n:triangular_flag_on_post: 03 | **AAAI** | Exploring Salient Object Detection with Adder Neural Networks | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F33028)\u002FCode  \n\n## 2024      \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **WACV** | Unsupervised and semi-supervised co-salient object detection via segmentation frequency statistics | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06654.pdf)\u002FCode   \n02 | **WACV** | 3SD: Self-Supervised Saliency Detection With No Labels | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2024\u002Fpapers\u002FYasarla_3SD_Self-Supervised_Saliency_Detection_With_No_Labels_WACV_2024_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frajeevyasarla\u002F3SD)    \n03 | **WACV** | Learning Saliency From Fixations | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2024\u002Fpapers\u002FDjilali_Learning_Saliency_From_Fixations_WACV_2024_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FYasserdahouML\u002FSalTR)     \n04 | **WACV** | Salient Object Detection for Images Taken by People With Vision Impairments | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2024\u002Fpapers\u002FReynolds_Salient_Object_Detection_for_Images_Taken_by_People_With_Vision_WACV_2024_paper.pdf)\u002F[Code](https:\u002F\u002Fvizwiz.org\u002Ftasks-and-datasets\u002Fsalient-object-detection\u002F)     \n05 | **WACV** | Defense Against Adversarial Cloud Attack on Remote Sensing Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06654.pdf)\u002FCode  \n06 | **ICASSP** | Zero-Shot Co-salient Object Detection Framework | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.05499)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fhkxiao\u002Fzs-cosod)  \n07 | **CVPR** | VSCode: General Visual Salient and Camouflaged Object Detection with 2D Prompt Learning | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.15011.pdf)\u002FCode\n08 | **AAAI** | WeakPCSOD: Overcoming the Bias of Box Annotations for Weakly Supervised Point Cloud Salient Object Detection | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F28403)\u002FCode\n09 | **AAAI** | SeqRank: Sequential Ranking of Salient Objects | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F27964)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fguanhuankang\u002FSeqRank) \n10 | **AAAI** | Finding Visual Saliency in Continuous Spike Stream | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F28610)\u002F[Code](https:\u002F\u002Fgithub.com\u002FBIT-Vision\u002FSVS) \n11 | **CVPR** | COSALPURE: Learning Concept from Group Images for Robust Co-Saliency | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.18554.pdf)\u002F[Code](https:\u002F\u002Fv1len.github.io\u002FCosalPure\u002F) \n12 | **IJCAI** | Unified Unsupervised Salient Object Detection via Knowledge Transfer | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.14759)\u002F[Code](https:\u002F\u002Fgithub.com\u002FI2-Multimedia-Lab\u002FA2S-v3) \n13 | **TII** | MINet: Multi-scale Interactive Network for Real-time Salient Object Detection of Strip Steel Surface Defects | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.16096)\u002F[Code](https:\u002F\u002Fgithub.com\u002FKunye-Shen\u002FMINet) \n14 | **ICML** | Size-invariance Matters: Rethinking Metrics and Losses for Imbalanced Multi-object Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.09782)\u002F[Code](https:\u002F\u002Fgithub.com\u002FFerry-Li\u002FSI-SOD) \n15 | **ICML** | Spider: A Unified Framework for Context-dependent Concept Segmentation | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.01002)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FSpider-UniCDSeg) \n16 | **ICML** | Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset | Paper\u002F[Code](https:\u002F\u002Fgithub.com\u002FLiamLian0727\u002FUSIS10K) \n17 | **CVPR** | Domain Separation Graph Neural Networks for Saliency Object Ranking | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FWu_Domain_Separation_Graph_Neural_Networks_for_Saliency_Object_Ranking_CVPR_2024_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FWu-ZJ\u002FDSGNN) \n18 | **CVPR** | Advancing Saliency Ranking with Human Fixations: Dataset Models and Benchmarks | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FDeng_Advancing_Saliency_Ranking_with_Human_Fixations_Dataset_Models_and_Benchmarks_CVPR_2024_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FEricDengbowen\u002FQAGNet) \n19 | **CVPR** | Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FLiu_Task-Adaptive_Saliency_Guidance_for_Exemplar-free_Class_Incremental_Learning_CVPR_2024_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fscok30\u002Ftass) \n20 | **CVPR** | Unsupervised Salient Instance Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FTian_Unsupervised_Salient_Instance_Detection_CVPR_2024_paper.pdf)\u002FCode\n21 | **CVPR** | DiffSal: Joint Audio and Video Learning for Diffusion Saliency Prediction | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FXiong_DiffSal_Joint_Audio_and_Video_Learning_for_Diffusion_Saliency_Prediction_CVPR_2024_paper.pdf)\u002F[Code](https:\u002F\u002Fjunwenxiong.github.io\u002FDiffSal) \n22 | **TMM** | ADMNet: Attention-guided Densely Multi-scale Network for Lightweight Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10555313)\u002F[Code](https:\u002F\u002Fgithub.com\u002FKunye-Shen\u002FADMNet)\n23 | **ACMMM** | Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2408.04326)\u002F[Code](https:\u002F\u002Fgithub.com\u002FBellyBeauty\u002FMDSAM)\n24 | **ACMMM** | Instance-Level Panoramic Audio-Visual Saliency Detection and Ranking | [Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0Q9zTGHOda)\u002FCode\n25 | **ECCV** | CONDA: Condensed Deep Association Learning for Co-Salient Object Detection | [Paper](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F06695.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fdragonlee258079\u002FCONDA)\n26 | **ECCV** | Self-supervised co-salient object detection via feature correspondences at multiple scales | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.11107)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsourachakra\u002FSCoSPARC)\n27 | **ECCV** | SHINE: Saliency-aware HIerarchical NEgative Ranking for Compositional Temporal Grounding | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2407.05118)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzxccade\u002FSHINE)  \n28 | **ECCV** | DSMix: Distortion-Induced Saliency Map Based Pre-training for No-Reference Image Quality Assessment | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2407.03886)\u002F[Code](https:\u002F\u002Fgithub.com\u002FI2-Multimedia-Lab\u002FDSMix)\n29 | **ECCV** | Salience-Based Adaptive Masking: Revisiting Token Dynamics for Enhanced Pre-training | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.08327)\u002FCode\n30 | **ECCV** | Data Augmentation via Latent Diffusion for Saliency Prediction | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.07307)\u002F[Code](https:\u002F\u002Fgithub.com\u002FIVRL\u002FAugSal)\n31 | **PAMI** | Divide-and-Conquer: Confluent Triple-Flow Network for RGB-T Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10778650\u002Fauthors#authors)\u002F[Code](https:\u002F\u002Fgithub.com\u002FCSer-Tang-hao\u002FConTriNet_RGBT-SOD)\n\n\n## 2023      \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | LeNo: Adversarial Robust Salient Object Detection Networks with Learnable Noise | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15392)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fssecv\u002FLeNo)  \n02 | **AAAI** | Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.06493.pdf)\u002FCode  \n03 | **AAAI** | Memory-aided Contrastive Consensus Learning for Co-salient Object Detection | [Paper](https:\u002F\u002Fscholar.google.com\u002Fcitations?view_op=list_works&hl=en&hl=en&user=TZRzWOsAAAAJ)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZhengPeng7\u002FMCCL#)   \n04 | **TNNLS** | Multi-Projection Fusion and Refinement Network for Salient Object Detection in 360◦ Omnidirectional Image | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.12378.pdf)\u002F[Code](https:\u002F\u002Frmcong.github.io\u002Fproj_MPFRNet.html)  \n05 | **IEEE TIP** | Boosting Broader Receptive Fields for Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10006743)\u002F[Code](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FBBRF-TIP)   \n06 | **IEEE TPAMI** | Co-Salient Object Detection with Co-Representation Purification | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.07670.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZZY816\u002FCoRP)   \n07 | **CVPR** | Texture-guided Saliency Distilling for Unsupervised Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.05921.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fmoothes\u002FA2S-v2)   \n08 | **CVPR** | Discriminative Co-Saliency and Background Mining Transformer for Co-Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.00514.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fdragonlee258079\u002FDMT)   \n09 | **CVPR** | Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.11502.pdf)\u002F[Code](https:\u002F\u002Fayankumarbhunia.github.io\u002FSketch2Saliency\u002F)   \n10 | **CVPR** | Boosting Low-Data Instance Segmentation by Unsupervised Pre-training with Saliency Prompt | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.01171.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flifuguan\u002Fsaliency_prompt)   \n11 | **CVPR** | Pixels, Regions, and Objects: Multiple Enhancement for Salient Object Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FWang_Pixels_Regions_and_Objects_Multiple_Enhancement_for_Salient_Object_Detection_CVPR_2023_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyiwangtz\u002FMENet)   \n12 | **CVPR** | Co-Salient Object Detection with Uncertainty-aware Group Exchange-Masking | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FWu_Co-Salient_Object_Detection_With_Uncertainty-Aware_Group_Exchange-Masking_CVPR_2023_paper.pdf)\u002FCode  \n13 | **CVPR** | Modeling the Distributional Uncertainty for Salient Object Detection Models | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FTian_Modeling_the_Distributional_Uncertainty_for_Salient_Object_Detection_Models_CVPR_2023_paper.pdf)\u002F[Code](https:\u002F\u002Fnpucvr.github.io\u002FDistributional_uncer\u002F)  \n14 | **ACM MM** | Recurrent Multi-scale Transformer for High-Resolution Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.03826.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FDrowsyMon\u002FRMFormer)  \n15 | **TOMM** | PAV-SOD: A New Task Towards Panoramic Audiovisual Saliency Detection | [Paper](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-1RcARcbz4pACFzkjXcp6MP8R9CGScqI\u002Fview)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJun-Pu\u002FPAV-SOD)  \n16 | **ICCV** | Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FWang_Counterfactual-based_Saliency_Map_Towards_Visual_Contrastive_Explanations_for_Neural_Networks_ICCV_2023_paper.pdf)\u002FCode\n17 | **ACM MM** | Distortion-aware Transformer in 360° Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03359)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyjzhao19981027\u002FDATFormer\u002F) \n18 | **ACM MM** | Towards End-to-End Unsupervised Saliency Detection with Self-Supervised Top-Down Context | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3581783.3612212)\u002FCode \n19 | **ACM MM** | Partitioned Saliency Ranking with Dense Pyramid Transformers | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.00236.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fssecv\u002FPSR) \n20 | **ACM MM** | Co-Salient Object Detection with Semantic-Level Consensus Extraction and Dispersion | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07753v1)\u002FCode  \n21 | **TMM** | Towards Complete and Detail-Preserved Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10287608)\u002F[Code](https:\u002F\u002Fgithub.com\u002FBarCodeReader\u002FSelfReformer) \n22 | **arXiv** | Unified-modal Salient Object Detection via Adaptive Prompt Learning | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.16835.pdf)\u002FCode\n23 | **arXiv** | All in One: RGB, RGB-D, and RGB-T Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.14746.pdf)\u002FCode\n24 | **NeurIPS** | What Do Deep Saliency Models Learn about Visual Attention? | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09679)\u002F[Code](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.09679.pdf)  \n25 | **PAMI** | CADC++: Advanced Consensus-Aware Dynamic Convolution for Co-Salient Object Detection | [Paper](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftp\u002F5555\u002F01\u002F10339864\u002F1SBL7kZYYyA)\u002FCode  \n26 | **IEEE TIP** | USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10102831)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLinHong-HIT\u002FUSOD10K)  \n27 | **IEEE TMM** | Spectrum-driven Mixed-frequency Network for Hyperspectral Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10313066\u002F)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flaprf\u002FSMN)  \n28 | **IEEE TIP** | Rethinking Object Saliency Ranking: A Novel Whole-flow Processing Paradigm | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.03226.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMengkeSong\u002FSaliency-Ranking-Paradigm) \n\n\n\n\n\n\n## 2022       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-604.YanP.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FKinpzz\u002FUDASOD-UPL)  \n02 | **AAAI** | A Causal Debiasing Framework for Unsupervised Salient Object Detection | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-108.LinX.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJaiharish-passion07\u002FAI_Project)  \n03 | **AAAI** | Energy-Based Generative Cooperative Saliency Prediction | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-1516.ZhangJ.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FSalCoopNets)  \n04 | **AAAI** | Weakly-Supervised Salient Object Detection Using Point Supervison | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-461.GaoS.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fshuyonggao\u002FPSOD)    \n05 | **AAAI** | TRACER: Extreme Attention Guided Salient Object Tracing Network | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.07380.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FKarel911\u002FTRACER)  \n06 | **AAAI** | I can find you! Boundary-guided Separated Attention Network for Camouflaged Object Detection | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-6565.ZhuH.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FWolfberryCoke\u002FBSA-Net)  \n07 | **WACV** | Recursive Contour-Saliency Blending Network for Accurate Salient Object Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2022\u002Fpapers\u002FKe_Recursive_Contour-Saliency_Blending_Network_for_Accurate_Salient_Object_Detection_WACV_2022_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FBarCodeReader\u002FRCSB-PyTorch)  \n08 | **IEEE TPAMI** | PoolNet+: Exploring the Potential of Pooling for Salient Object Detection | [Paper](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21PAMI-PoolNet.pdf)\u002F[Code](http:\u002F\u002Fmmcheng.net\u002Fpoolnet\u002F)  \n09 | **IEEE TPAMI** | A Highly Efficient Model to Study the Semantics of Salient Object Detection | [Paper](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21PAMI-Sal100K.pdf)\u002F[Code](https:\u002F\u002Fmmcheng.net\u002Fsod100k\u002F)  \n10 | **IEEE TGRS** | Lightweight Salient Object Detection in Optical Remote Sensing Images via Feature Correlation | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.08049)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMathLee\u002FCorrNet)  \n11 | **TOMM** | Disentangle Saliency Detection into Cascaded Detail Modeling and Body Filling | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.04112.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FKingJamesSong\u002FDisentangleSaliency)    \n12 | **TMM** | Noise-Sensitive Adversarial Learning for Weakly Supervised Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9716868\u002Fauthors#authors)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwuweia123\u002FIEEE-TMM-NSALWSS) \n13 | **ArXiv** | Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04895.pdf)\u002FCode \n14 | **ArXiv** | A Unified Transformer Framework for Group-based Segmentation: Co-Segmentation, Co-Saliency Detection and Video Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04708.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsuyukun666\u002FUFO) \n15 | **IEEE TCyb** | Adjacent Context Coordination Network for Salient Object Detection in Optical Remote Sensing Images | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.13664.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMathLee\u002FACCoNet) \n16 | **IEEE TCyb** | Edge-guided Recurrent Positioning Network for Salient Object Detection in Optical Remote Sensing Images | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9756846)\u002F[Code](https:\u002F\u002Fgithub.com\u002FKunye-Shen\u002FERPNet) \n17 | **IEEE TCSVT** | Progressive Dual-attention Residual Network for Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9745960)\u002FCode \n18 | **IEEE TCyb** | Global-and-Local Collaborative Learning for Co-Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.08917.pdf)\u002F[Code](https:\u002F\u002Frmcong.github.io\u002Fproj_GLNet.html) \n19 | **ArXiv** | An Energy-Based Prior for Generative Saliency | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.08803.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FEBMGSOD) \n20 | **IEEE TIP** | EDN: Salient Object Detection via Extremely-Downsampled Network | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.13093.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyuhuan-wu\u002FEDN) \n21 | **IEEE TPAMI** | Salient Object Detection via Integrity Learning | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.07663.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fmczhuge\u002FICON) \n22 | **IEEE TCSVT** | TCNet:Co-salient Object Detection via Parallel Interaction of Transformers and CNNs | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9968016)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhangqiao970914\u002FTCNet)   \n23 | **ArXiv** | Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.03650)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fmoothes\u002FA2S-USOD)  \n24 | **CVPR** | Zoom In and Out: A Mixed-scale Triplet Network for Camouflaged Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.02688.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flartpang\u002FZoomNet)  \n25 | **CVPR** | Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.09258.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Ftsingqguo\u002Fjadena) \n26 | **CVPR** | Democracy Does Matter: Comprehensive Feature Mining for Co-salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.05787.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsiyueyu\u002FDCFM) \n27 | **CVPR** | Pyramid Grafting Network for One-Stage High Resolution Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05041.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FPGNet) \n28 | **CVPR** | Deep Saliency Prior for Reducing Visual Distraction | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FAberman_Deep_Saliency_Prior_for_Reducing_Visual_Distraction_CVPR_2022_paper.pdf)\u002F[Code](https:\u002F\u002Fdeep-saliency-prior.github.io\u002F) \n29 | **CVPR** | Multi-Source Uncertainty Mining for Deep Unsupervised Saliency Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FWang_Multi-Source_Uncertainty_Mining_for_Deep_Unsupervised_Saliency_Detection_CVPR_2022_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyifanw90\u002FUMNet)   \n30 | **CVPR** | Bi-Directional Object-Context Prioritization Learning for Saliency Ranking | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FTian_Bi-Directional_Object-Context_Prioritization_Learning_for_Saliency_Ranking_CVPR_2022_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FGrassBro\u002FOCOR) \n31 | **CVPR** | Does text attract attention on e-commerce images: A novel saliency prediction dataset and method | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FJiang_Does_Text_Attract_Attention_on_E-Commerce_Images_A_Novel_Saliency_CVPR_2022_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fleafy-lee\u002FE-commercial-dataset)  \n32 | **CVPRW** | Pyramidal Attention for Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.06788.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Ftanveer-hussain) \n33 | **CVPRW** | Unsupervised Salient Object Detection with Spectral Cluster Voting | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022W\u002FL3D-IVU\u002Fpapers\u002FShin_Unsupervised_Salient_Object_Detection_With_Spectral_Cluster_Voting_CVPRW_2022_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FNoelShin\u002Fselfmask) \n34 | **ECCV** | KD-SCFNet: Towards More Accurate and Efficient Salient Object Detection via Knowledge Distillation | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.02178.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FzhangjinCV\u002FKD-SCFNet) \n35 | **ECCV** | Salient Object Detection for Point Clouds | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.11889.pdf)\u002F[Code](https:\u002F\u002Fgit.openi.org.cn\u002FOpenPointCloud\u002FPCSOD) \n36 | **PR** | BiconNet: An Edge-preserved Connectivity-based Approach for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.00334.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZyun-Y\u002FBiconNets) \n37 | **IEEE TCyb** | DNA: Deeply-supervised Nonlinear Aggregation for Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9345433)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyun-liu\u002FDNA) \n38 | **ACMM** | Synthetic Data Supervised Salient Object Detection | [Paper](http:\u002F\u002Fwww.digitalimaginggroup.ca\u002Fmembers\u002FShuo\u002FACM_Multimedia_2022_final_version.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwuzhenyubuaa\u002FSODGAN) \n39 | **IEEE TCSVT** | A Weakly Supervised Learning Framework for Salient Object Detection via Hybrid Labels | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.02957.pdf)\u002F[Code](https:\u002F\u002Frmcong.github.io\u002Fproj_Hybrid-Label-SOD.html) \n40 | **CVPRW** | Unsupervised Salient Object Detection with Spectral Cluster Voting | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.12614.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FNoelShin\u002Fselfmask) \n41 | **IEEE TMM** | View-aware Salient Object Detection for 360° Omnidirectional Image | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.13222.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJanySunny\u002FODI-SOD) \n42 | **ACCV** | Revisiting Image Pyramid Structure for High Resolution Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.09475)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fplemeri\u002FInSPyReNet) \n43 | **ECCV** | Saliency Hierarchy Modeling via Generative Kernels for Salient Object Detection | [Paper](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136880564.pdf)\u002FCode\n44 | **IEEE TIP** | Salient Object Detection via Dynamic Scale Routing | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.13821.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwuzhenyubuaa\u002FDPNet)\n45 | **NeurIPS** | MOVE: Unsupervised Movable Object Segmentation and Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.07920.pdf)\u002FCode\n46 | **PR** | BiconNet: An Edge-preserved Connectivity-based Approach for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.00334)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZyun-Y\u002FBiconNets)  \n\n  \n\n\n\n## 2021       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | Structure-Consistent Weakly Supervised Salient Object Detection with Local Saliency Coherence | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.04404.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsiyueyu\u002FSCWSSOD\u002Ftree\u002Ff8650567cbbc8df5bf6edc32a633c47a885574cd)\n02 | **AAAI** | Pyramidal Feature Shrinking for Salient Object Detection | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FAAAI-1322.MaM.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FPFSNet) \n03 | **AAAI** | Locate Globally, Segment Locally: A Progressive Architecture with Knowledge Review Network for Salient Object Detection | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FAAAI-4841.XuB.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fbradleybin\u002FLocate-Globally-Segment-locally-A-Progressive-Architecture-With-Knowledge-Review-Network-for-SOD)  \n04 | **AAAI** | Multi-Scale Graph Fusion for Co-Saliency Detection | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F16951)\u002FCode\n05 | **AAAI** | Generating Diversified Comments via Reader-Aware Topic Modeling and Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.06856.pdf)\u002FCode\n06 | **ICIP** | Multiscale IoU: A Metric for Evaluation of Salient Object Detection with Fine Structures | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14572.pdf)\u002FCode\n07 | **TCSVT** | Weakly-Supervised Saliency Detection via Salient Object Subitizing | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.00932.pdf)\u002FCode \n08 | **TIP** | SAMNet: Stereoscopically Attentive Multi-scale Network for Lightweight Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9381668)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyun-liu\u002FFastSaliency) \n09 | **IJCAI** | C2FNet: Context-aware Cross-level Fusion Network for Camouflaged Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.12555.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fthograce\u002FC2FNet)\n10 | **CVPR** | Railroad is not a Train: Saliency as Pseudo-pixel Supervision for Weakly Supervised Semantic Segmentation | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLee_Railroad_Is_Not_a_Train_Saliency_As_Pseudo-Pixel_Supervision_for_CVPR_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fhalbielee\u002FEPS)\n11 | **CVPR** | Prototype-Guided Saliency Feature Learning for Person Search | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FKim_Prototype-Guided_Saliency_Feature_Learning_for_Person_Search_CVPR_2021_paper.pdf)\u002FCode\n12 | **CVPR** | Mesh Saliency: An Independent Perceptual Measure or A Derivative of Image Saliency? | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FSong_Mesh_Saliency_An_Independent_Perceptual_Measure_or_a_Derivative_of_CVPR_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frsong\u002FMIMO-GAN)\n13 | **CVPR** | Weakly-Supervised Instance Segmentation via Class-Agnostic Learning With Salient Images | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_Weakly-Supervised_Instance_Segmentation_via_Class-Agnostic_Learning_With_Salient_Images_CVPR_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FBoxCaseg)\n14 | **CVPR** | DeepACG: Co-Saliency Detection via Semantic-Aware Contrast Gromov-Wasserstein Distance | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_DeepACG_Co-Saliency_Detection_via_Semantic-Aware_Contrast_Gromov-Wasserstein_Distance_CVPR_2021_paper.pdf)\u002FCode\n15 | **CVPR** | Black-Box Explanation of Object Detectors via Saliency Maps | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FPetsiuk_Black-Box_Explanation_of_Object_Detectors_via_Saliency_Maps_CVPR_2021_paper.pdf)\u002FCode\n16 | **CVPR** | From Semantic Categories to Fixations: A Novel Weakly-Supervised Visual-Auditory Saliency Detection Approach | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_From_Semantic_Categories_to_Fixations_A_Novel_Weakly-Supervised_Visual-Auditory_Saliency_CVPR_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fguotaowang\u002FSTANet)\n17 | **CVPR** | CAMERAS: Enhanced Resolution and Sanity Preserving Class Activation Mapping for Image Saliency | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FJalwana_CAMERAS_Enhanced_Resolution_and_Sanity_Preserving_Class_Activation_Mapping_for_CVPR_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FVisMIL\u002FCAMERAS)\n18 | **CVPR** | Saliency-Guided Image Translation | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FJiang_Saliency-Guided_Image_Translation_CVPR_2021_paper.pdf)\u002FCode\n19 | **CVPR** | Group Collaborative Learning for Co-Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.01108.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Ffanq15\u002FGCoNet)\n20 | **CVPR** | Uncertainty-aware Joint Salient Object and Camouflaged Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.02628.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FJoint_COD_SOD)\n21 | **ACMM** | Auto-MSFNet: Search Multi-scale Fusion Network for Salient Object Detection | [Paper](https:\u002F\u002Fgithub.com\u002FLiuTingWed\u002FAuto-MSFNet)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLiuTingWed\u002FAuto-MSFNet) \n22 | **IEEE TIP** | Decomposition and Completion Network for Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9479697\u002Ffigures#figures)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwuzhe71\u002FDCN) \n23 | **ICCV** | Visual Saliency Transformer | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.12099.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FVST#visual-saliency-transformer-vst) \n24 | **ICCV** | Disentangled High Quality Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03551.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fluckybird1994\u002FHQSOD) \n25 | **ICCV** | iNAS: Integral NAS for Device-Aware Salient Object Detection | [Paper](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21ICCV-iNAS.pdf)\u002F[Code](https:\u002F\u002Fmmcheng.net\u002Finas\u002F) \n26 | **ICCV** | Scene Context-Aware Salient Object Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FSiris_Scene_Context-Aware_Salient_Object_Detection_ICCV_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FSirisAvishek\u002FScene_Context_Aware_Saliency) \n27 | **ICCV** | MFNet: Multi-Filter Directive Network for Weakly Supervised Salient Object Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FPiao_MFNet_Multi-Filter_Directive_Network_for_Weakly_Supervised_Salient_Object_Detection_ICCV_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FMFNet) \n28 | **ICCV** | Salient Object Ranking with Position-Preserved Attention | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FFang_Salient_Object_Ranking_With_Position-Preserved_Attention_ICCV_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FEricFH\u002FSOR) \n29 | **ICCV** | Summarize and Search: Learning Consensus-aware Dynamic Convolution for Co-Saliency Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_Summarize_and_Search_Learning_Consensus-Aware_Dynamic_Convolution_for_Co-Saliency_Detection_ICCV_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FCADC) \n30 | **IEEE TIP** | Salient Object Detection with Purificatory Mechanism and Structural Similarity Loss | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.08393.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJinming-Su\u002FPurNet) \n31 | **ACMM** | Complementary Trilateral Decoder for Fast and Accurate Salient Object Detection | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475494)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhaozhirui\u002FCTDNet)\n32 | **NeurIPS** | Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction | [Paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F8289889263db4a40463e3f358bb7c7a1-Paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FEBMGSOD)   \n33 | **NeurIPS** | Discovering Dynamic Salient Regions for Spatio-Temporal Graph Neural Networks | [Paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F398410ece9d7343091093a2a7f8ee381-Paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fbit-ml\u002FDyReg-GNN) \n34 | **IEEE TIP** | Progressive Self-Guided Loss for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.02412.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fysyscool\u002FPSGLoss) \n35 | **IEEE TMM** | Dense Attention-guided Cascaded Network for Salient Object Detection of Strip Steel Surface Defects | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9632537)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzxforchid\u002FDACNet) \n36 | **IEEE TIP** | Rethinking the U-Shape Structure for Salient Object Detection | [Paper](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21TIP-CII.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzal0302\u002FCII) \n\n\n\n\n## 2020       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | Progressive Feature Polishing Network for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.05942.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fchenquan-cq\u002FPFPN)       \n02 | **AAAI** | Global Context-Aware Progressive Aggregation Network for Salient Object Detection | [Paper](https:\u002F\u002Fgithub.com\u002FJosephChenHub\u002FGCPANet\u002Fblob\u002Fmaster\u002FGCPANet.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJosephChenHub\u002FGCPANet)     \n03 | **AAAI** | F3Net: Fusion, Feedback and Focus for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.11445.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fweijun88\u002FF3Net)    \n04 | **AAAI** | Multi-spectral Salient Object Detection by Adversarial Domain Adaptation | [Paper](https:\u002F\u002Fcse.sc.edu\u002F~songwang\u002Fdocument\u002Faaai20b.pdf)\u002F[Code](https:\u002F\u002Ftsllb.github.io\u002FMultiSOD.html) \n05 | **AAAI** | Multi-Type Self-Attention Guided Degraded Saliency Detection | [Paper](https:\u002F\u002Fcse.sc.edu\u002F~songwang\u002Fdocument\u002Faaai20a.pdf)\u002FCode \n06 | **CVPR** | Weakly-Supervised Salient Object Detection via Scribble Annotations | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_Weakly-Supervised_Salient_Object_Detection_via_Scribble_Annotations_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FScribble_Saliency)  \n07 | **CVPR** | Taking a Deeper Look at the Co-salient Object Detection | [Paper](http:\u002F\u002Fdpfan.net\u002Fwp-content\u002Fuploads\u002FCoSalBenchmark_CVPR2020.pdf)\u002F[Code](http:\u002F\u002Fdpfan.net\u002FCoSOD3K\u002F)  \n08 | **CVPR** | Multi-scale Interactive Network for Salient Object Detection | [Paper](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gUYu0hO_8Xc5jgpzetuOVFDrqeSOiKZN\u002Fview?usp=sharing)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flartpang\u002FMINet)  \n09 | **CVPR** | Interactive Two-Stream Decoder for Accurate and Fast Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhou_Interactive_Two-Stream_Decoder_for_Accurate_and_Fast_Saliency_Detection_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fmoothes\u002FITSD-pytorch)  \n10 | **CVPR** | Label Decoupling Framework for Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWei_Label_Decoupling_Framework_for_Salient_Object_Detection_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fweijun88\u002FLDF)  \n11 | **CVPR** | Adaptive Graph Convolutional Network with Attention Graph Clustering for Co-saliency Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_Adaptive_Graph_Convolutional_Network_With_Attention_Graph_Clustering_for_Co-Saliency_CVPR_2020_paper.pdf)\u002FCode\n12 | **ECCV** | Highly Efficient Salient Object Detection with 100K Parameters | [Paper](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F20EccvSal100k.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMCG-NKU\u002FSal100K)\n13 | **ECCV** | n-Reference Transfer Learning for Saliency Prediction | [Paper](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123530494.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fluoyan407\u002Fn-reference)   \n14 | **ECCV** | Gradient-Induced Co-Saliency Detection | [Paper](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123570443.pdf)\u002F[Code](http:\u002F\u002Fzhaozhang.net\u002Fcoca.html)   \n13 | **ECCV** | Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection | [Paper](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123620341.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FNoise-aware-ABP-Saliency)  \n15 | **ECCV** | Suppress and Balance: A Simple Gated Network for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.08074.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FGateNet-RGB-Saliency) \n16 | **IEEE TIP** | Dynamic Feature Integration for Simultaneous Detection of Salient Object, Edge and Skeleton | [Paper](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F20TIP-DFI.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fbackseason\u002FDFI)\n17 | **IEEE TIP** | CAGNet: Content-Aware Guidance for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.13168)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMehrdad-Noori\u002FCAGNet)\n18 | **IEEE TCYB** | Lightweight Salient Object Detection via Hierarchical Visual Perception Learning | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9285193)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyun-liu\u002FFastSaliency)\n19 | **NeurIPS** | CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.04887.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frmcong\u002FCoADNet_NeurIPS20)\n20 | **NeurIPS** | Few-Cost Salient Object Detection with Adversarial-Paced Learning | [Paper](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F8fc687aa152e8199fe9e73304d407bca-Paper.pdf)\u002F[Code](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F8fc687aa152e8199fe9e73304d407bca-Supplemental.zip)\n21 | **NeurIPS** | ICNet: Intra-saliency Correlation Network for Co-Saliency Detection | [Paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fd961e9f236177d65d21100592edb0769-Paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fblanclist\u002FICNet)\n\n\n  \n\n\n\n\n\n## 2019       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | AFNet: Attentive Feedback Network for Boundary-aware Salient Object Detection | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1n-dRVC4sLWCmhhD5bnVXqg)\u002F[Code](https:\u002F\u002Fgithub.com\u002FArcherFMY\u002FAFNet)  \n02 | **CVPR** | BASNet: Boundary Aware Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FQin_BASNet_Boundary-Aware_Salient_Object_Detection_CVPR_2019_paper.html)\u002F[Code](https:\u002F\u002Fgithub.com\u002FNathanUA\u002FBASNet) \n03 | **CVPR** | CPD: Cascaded Partial Decoder for Accurate and Fast Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.08739.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwuzhe71\u002FCPD-CVPR2019)\n04 | **CVPR** | Multi-source weak supervision for saliency detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.00566.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002Fmws)\n05 | **CVPR** | MLMSNet:A Mutual Learning Method for Salient Object Detection with intertwined Multi-Supervision | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1EUxabfnEi_l5-ghUI3_qVQ)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJosephineRabbit\u002FMLMSNet)\n06 | **CVPR** | CapSal: Leveraging Captioning to Boost Semantics for Salient Object Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhang_CapSal_Leveraging_Captioning_to_Boost_Semantics_for_Salient_Object_Detection_CVPR_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhangludl\u002Fcode-and-dataset-for-CapSal)\n07 | **CVPR** | PoolNet: A Simple Pooling-Based Design for Real-Time Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09569.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fbackseason\u002FPoolNet) \n08 | **CVPR** | An Iterative and Cooperative Top-down and Bottom-up Inference Network for Salient Object Detection | [Paper](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F19cvprIterativeSOD.pdf)\u002FCode\n09 | **CVPR** | Pyramid Feature Attention Network for Saliency detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.00179.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FCaitinZhao\u002Fcvpr2019_Pyramid-Feature-Attention-Network-for-Saliency-detection)\n10 | **AAAI** | Deep Embedding Features for Salient Object Detection | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1HfyavmYB2NYUMe8CSe2qCw)\u002FCode\n11 | **ICIP** | Salient Object Detection Via Deep Hierarchical Context Aggregation And Multi-Layer Supervision | [Paper](https:\u002F\u002Fgithub.com\u002FZhangC2\u002FSaliency-DHCA-ML_S)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZhangC2\u002FSaliency-DHCA-ML_S)\n12 | **IEEE TCSVT** | AADF-Net: Aggregating Attentional Dilated Features for Salient Object | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8836095)\u002F[Code](https:\u002F\u002Fgithub.com\u002FgithubBingoChen\u002FAADF-Net)\n13 | **IEEE TCyb** | ROSA: Robust Salient Object Detection against Adversarial Attacks | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.03434.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flhaof\u002FROSA-Robust-Salient-Object-Detection-Against-Adversarial-Attacks)\n14 | **arXiv** | DSAL-GAN: DENOISING BASED SALIENCY PREDICTION WITH GENERATIVE ADVERSARIAL NETWORKS | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.01215.pdf)\u002FCode\n15 | **arXiv** | SAC-Net: Spatial Attenuation Context for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.10152.pdf)\u002FCode\n16 | **arXiv** | SE2Net: Siamese Edge-Enhancement Network for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.00048.pdf)\u002FCode\n17 | **arXiv** | Region Refinement Network for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.11443.pdf)\u002FCode\n18 | **arXiv** | Contour Loss: Boundary-Aware Learning for Salient Object Segmentation | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.01975.pdf)\u002FCode\n19 | **arXiv** | OGNet: Salient Object Detection with Output-guided Attention Module | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.07449.pdf)\u002FCode\n20 | **arXiv** | Edge-guided Non-local Fully Convolutional Network for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.02460.pdf)\u002FCode\n21 | **ICCV** | FLoss:Optimizing the F-measure for Threshold-free Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FZhao_Optimizing_the_F-Measure_for_Threshold-Free_Salient_Object_Detection_ICCV_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzeakey\u002Ficcv2019-fmeasure)\n22  | **ICCV** | Stacked Cross Refinement Network for Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWu_Stacked_Cross_Refinement_Network_for_Edge-Aware_Salient_Object_Detection_ICCV_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwuzhe71\u002FSCRN)\n23 | **ICCV** | Selectivity or Invariance: Boundary-aware Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.10066.pdf)\u002FCode\n24 | **ICCV** | HRSOD:Towards High-Resolution Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.07274.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyi94code\u002FHRSOD)\n25 | **ICCV** | EGNet:Edge Guidance Network for Salient Object Detection | [Paper](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F19ICCV_EGNetSOD.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJXingZhao\u002FEGNet)\n26 | **ICCV** | Structured Modeling of Joint Deep Feature and Prediction Refinement for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.04366.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fxuyingyue\u002FDeepUnifiedCRF_iccv19)\n27 | **ICCV** | Employing Deep Part-Object Relationships for Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLiu_Employing_Deep_Part-Object_Relationships_for_Salient_Object_Detection_ICCV_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fliuyi1989\u002FTSPOANet)  \n28 | **NeurIPS** | Deep Robust Unsupervised Saliency Prediction With Self-Supervision | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.13055.pdf)\u002F[Code](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F10GlmenXR7nEJyRlmPHouvHP-g9KfUW1F\u002Fview)   \n29 | **CVPR** | Salient Object Detection With Pyramid Attention and Salient Edges | [Paper](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F332751907_Salient_Object_Detection_With_Pyramid_Attention_and_Salient_Edges)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FPAGE-Net)      \n\n    \n\n## 2018\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | A Bi-Directional Message Passing Model for Salient Object Detection | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1akKVVipD8vIIv0XFrWND5Q)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhangludl\u002FA-bi-directional-message-passing-model-for-salient-object-detection)  \n02 | **CVPR** | PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection | [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1708.06433)\u002F[Code](https:\u002F\u002Fgithub.com\u002FUgness\u002FPiCANet-Implementation)\n03 | **CVPR** | PAGR: Progressive Attention Guided Recurrent Network for Salient Object Detection | [Paper](https:\u002F\u002Fgithub.com\u002Fzhangxiaoning666\u002FPAGR)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyangbinb\u002FSalMetric\u002Ftree\u002Fmaster\u002FPAGRN)\n04 | **CVPR** | Learning to promote saliency detectors | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1QvDmqruH8oU51_GrgsuXoA)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002Flps)\n05 | **CVPR** | Detect Globally, Refine Locally: A Novel Approach to Saliency Detection | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ydLI0koPfndehqMOAwrK_Q)\u002F[Code](https:\u002F\u002Fgithub.com\u002FTiantianWang\u002FCVPR18_detect_globally_refine_locally)\n06 | **CVPR** | Salient Object Detection Driven by Fixation Prediction | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Salient_Object_Detection_CVPR_2018_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FASNet)\n07 | **IJCAI** | R3Net: Recurrent Residual Refinement Network for Saliency Detection | [Paper](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2018\u002F0095.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzijundeng\u002FR3Net)\n08 | **IJCAI** | LFR: Salient Object Detection by Lossless Feature Reflection | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1DAyPHe_z0LJpKK8DxKF2dg)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal\u002Fblob\u002Fmaster\u002FIIAU2018.md)\n09 | **ECCV** | Contour Knowledge Transfer for Salient Object Detection | [Paper](http:\u002F\u002Flink-springer-com-s.vpn.whu.edu.cn:9440\u002Fcontent\u002Fpdf\u002F10.1007\u002F978-3-030-01267-0_22.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flixin666\u002FC2SNet)\n10 | **ECCV** | Reverse Attention for Salient Object Detection | [Paper](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.09940)\u002F[Code](https:\u002F\u002Fgithub.com\u002FShuhanChen\u002FRAS_ECCV18)\n11 | **IEEE TIP** | An unsupervised game-theoretic approach to saliency detection | [Paper](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1U1O4oFK6ZALSghPjJv_5nA)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002Fuga)\n12 | **arXiv** | Agile Amulet: Real-Time Salient Object Detection with Contextual Attention | [Paper](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.06960)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal\u002Fblob\u002Fmaster\u002FIIAU2018.md)\n13 | **arXiv** | HyperFusion-Net: Densely Reflective Fusion for Salient Object Detection | [Paper](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.05142)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal\u002Fblob\u002Fmaster\u002FIIAU2018.md)\n14 | **arXiv** | (TBOS)Three Birds One Stone: A Unified Framework for Salient Object Segmentation, Edge Detection and Skeleton Extraction | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.09860.pdf)\u002FCode\n15 | **CVPR** | Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.10910)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fkris-singh\u002FDeep-Unsupervised-Saliency-Detection)\n\n\n    \n \n## 2017\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | DSS: Deeply Supervised Salient Object Detection with Short Connections | [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04849)\u002F[Code](https:\u002F\u002Fgithub.com\u002FAndrew-Qibin\u002FDSS)\n02 | **CVPR** | Non-Local Deep Features for Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLuo_Non-Local_Deep_Features_CVPR_2017_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhimingluo\u002FNLDF)\n03 | **CVPR** | Learning to Detect Salient Objects with Image-level Supervision | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FWang_Learning_to_Detect_CVPR_2017_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fscott89\u002FWSS)\n04 | **CVPR** | SalGAN: visual saliency prediction with adversarial networks | [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1701.01081)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n05 | **ICCV** | A Stagewise Refinement Model for Detecting Salient Objects in Images | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FWang_A_Stagewise_Refinement_ICCV_2017_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n06 | **ICCV** | Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Amulet_Aggregating_Multi-Level_ICCV_2017_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n07 | **ICCV** | Learning Uncertain Convolutional Features for Accurate Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Learning_Uncertain_Convolutional_ICCV_2017_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n08 | **ICCV** | Supervision by Fusion: Towards Unsupervised Learning of Deep Salient Object Detector  | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Supervision_by_Fusion_ICCV_2017_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhangyuygss\u002FSVFSal.caffe)\n\n \n  \n\n## 2016\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | DHSNet: Deep hierarchical saliency network for salient object detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FLiu_DHSNet_Deep_Hierarchical_CVPR_2016_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FGuanWenlong\u002FDHSNet-PyTorch)\n02 | **CVPR** | ELD: Deep Saliency with Encoded Low level Distance Map and High Level Features | [Paper](http:\u002F\u002Fwww.arxiv.org\u002Fpdf\u002F1604.05495v1.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fgylee1103\u002FSaliencyELD)\n03 | **ECCV** | RFCN: Saliency detection with recurrent fully convolutional networks | [Paper](http:\u002F\u002F202.118.75.4\u002Flu\u002FPaper\u002FECCV2016\u002F0865.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002FRFCN)\n\n\n\u003Ca name=\"3DSOD\">\u003C\u002Fa>\n# 3D RGB-D\u002FT Saliency Detection \u003Ca id=\"3D RGB-D Saliency Detection\" class=\"anchor\" href=\"3D RGB-D Saliency Detection\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n\n## 2025       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n:triangular_flag_on_post: 01 | **AAAI** | DiMSOD: A Diffusion-Based Framework for Multi-Modal Salient Object Detection | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F33096)\u002FCode  \n:triangular_flag_on_post: 02 | **AAAI** | SMR-Net: Semantic-Guided Mutually Reinforcing Network for Cross-Modal Image Fusion and Salient Object Detection | [Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F32933)\u002FCode    \n\n## 2024       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **ICASSP** | A Saliency Enhanced Feature Fusion based multiscale RGB-D Salient Object Detection Network | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.11914.pdf)\u002FCode  \n02 | **IJCV** | Cross-Modal Fusion and Progressive Decoding Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-024-02020-y)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fhu-xh\u002FCPNet)  \n03 | **TMM** | UniTR: A Unified TRansformer-based Framework for Co-object and Multi-modal Saliency Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10444934)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fruohaoguo\u002FUniTR) \n04 | **IEEE TIP** | Quality-aware Selective Fusion Network for V-D-T Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.07655)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLx-Bao\u002FQSFNet) \n05 | **IEEE TCSVT** | Learning Adaptive Fusion Bank for Multi-modal Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.01127)\u002F[Code](https:\u002F\u002Fgithub.com\u002FAngknpng\u002FLAFB) \n06 | **IEEE TMM** | Alignment-Free RGBT Salient Object Detection: Semantics-guided Asymmetric Correlation Network and A Unified Benchmark | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.00917)\u002F[Code](https:\u002F\u002Fgithub.com\u002FAngknpng\u002FSACNet) \n07 | **ACMMM** | Backdoor Attacks on Bimodal Salient Object Detection with RGB-Thermal Data | [Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=fBeeQlkIM8)\u002FCode\n08 | **ECCV** | CoLA: Conditional Dropout and Language-driven Robust Dual-modal Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2407.06780)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fssecv\u002FCoLA)\n\n\n## 2023      \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **TCSVT** | HRTransNet: HRFormer-Driven Two-Modality Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.03036.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FHRTransNet)  \n02 | **IEEE TIP** | CAVER: Cross-Modal View-Mixed Transformer for Bi-Modal Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10015667)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flartpang\u002FCAVER) \n03 | **IEEE TIP** | LSNet: Lightweight Spatial Boosting Network for Detecting Salient Objects in RGB-Thermal Images | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10042233)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzyrant\u002FLSNet) \n04 | **ICME** | Scribble-Supervised RGB-T Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.09733.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FRGBTScribble-ICME2023) \n05 | **TCSVT** | Mutual Information Regularization for Weakly-supervised RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.03630.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fbaneitixiaomai\u002FMIRV) \n06 | **Information Fusion** | An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09999)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwdhudiekou\u002FIRFS) \n07 | **IEEE TMM** | CATNet: A Cascaded and Aggregated Transformer Network For RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10179145)\u002F[Code](https:\u002F\u002Fgithub.com\u002FROC-Star\u002FCATNet\u002F) \n08 | **ACM MM** | Point-aware Interaction and CNN-induced Refinement Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.08930.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frmcong\u002FPICR-Net_ACMMM23) \n09 | **IEEE TIP** | Depth Injection Framework for RGBD Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10258039)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZakeiswo\u002FDIF) \n10 | **ACM MM** | Modality Profile - A New Critical Aspect to be Considered When Generating RGB-D Salient Object Detection Training Set | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3581783.3611985)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXueHaoWang-Beijing\u002FModalityProfile_MM23\u002F)\n11 | **ACM MM** | Saliency Prototype for RGB-D and RGB-T Salient Object Detection | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3581783.3612466)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZZ2490\u002FSPNet)  \n12 | **NeurIPS** | DVSOD: RGB-D Video Salient Object Detection | [Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hm1Ih3uLII)\u002F[Code](https:\u002F\u002Fgithub.com\u002FDVSOD\u002FDVSOD-Baseline)  \n\n\n## 2022       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVMJ** | Specificity-preserving RGB-D Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.08162)\u002F[Code](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FSPNet?utm_source=catalyzex.com)   \n02 | **AAAI** | Self-Supervised Pretraining for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-4882.ZhaoX.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FSSLSOD)   \n03 | **IEEE TPAMI** | MobileSal: Extremely Efficient RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21PAMI_MobileSal.pdf)\u002F[Code](https:\u002F\u002Fmmcheng.net\u002Fmobilesal\u002F)   \n04 | **IEEE TIP** | Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.00100.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FRobert-xiaoqiang\u002FDS-Net)   \n05 | **IEEE TIP** | Learning Discriminative Cross-modality Features for RGB-D Saliency Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9678058)\u002FCode  \n06 | **IEEE TIP** | Weakly Supervised RGB-D Salient Object Detection with Prediction Consistency Training and Active Scribble Boosting | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9720104)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXuYunqiu\u002FscribbleRGB-DSOD)\n07 | **ICLR** | Promoting Saliency From Depth: Deep Unsupervised RGB-D Saliency Detection | [Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BZnnMbt0pW)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FDSU)  \n08 | **ArXiv** | DFTR: Depth-supervised Hierarchical Feature Fusion Transformer for Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.06429.pdf)\u002FCode  \n09 | **ArXiv** | GroupTransNet: Group Transformer Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.10785.pdf)\u002FCode  \n10 | **ArXiv** | CAVER: Cross-Modal View-Mixed Transformer for Bi-Modal Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.02363.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flartpang\u002FCAVER)\n11 | **PR** | Encoder Deep Interleaved Network with Multi-scale Aggregation for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0031320322001479)\u002FCode  \n12 | **CVPRW** | Pyramidal Attention for Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.06788.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Ftanveer-hussain) \n13 | **TMM** | Depth-induced Gap-reducing Network for RGB-D Salient Object Detection: An Interaction, Guidance and Refinement Approach | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9769984)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fssecv\u002FDIGR-Net) \n14 | **TMM** | C2DFNet: Criss-Cross Dynamic Filter Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9813422)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FC2DFNet) \n15 | **ArXiv** | Dual Swin-Transformer based Mutual Interactive Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.03105.pdf)\u002FCode  \n16 | **IEEE TCSVT** | Cross-Collaborative Fusion-Encoder Network for Robust RGB-Thermal Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9801871)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fgbliao\u002FCCFENet)  \n17 | **IEEE TIP** | Learning Implicit Class Knowledge for RGB-D Co-Salient Object Detection with Transformers | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9810116)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FCTNet)  \n18 | **ACMM** | Depth-inspired Label Mining for Unsupervised RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3503161.3548037?casa_token=9IfKDOr4970AAAAA:yWl9tbPTwlCtnXJE7-Vuj7rHxxBPi39zLVoeb1rgFwZEDVNdeK3Y8SYO0gkyT98kCKd2nhtI1Et2190)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fyoungtboy\u002FDLM)  \n19 | **3DV** | Robust RGB-D Fusion for Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.01762.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FZongwei97\u002FRFnet)  \n20 | **ArXiv** | Depth Quality-Inspired Feature Manipulation for Efficient RGB-D and Video Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.03918.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzwbx\u002FDFM-Net)  \n21 | **ECCV** | SPSN: Superpixel Prototype Sampling Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.07898.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FHydragon516\u002FSPSN)  \n22 | **ECCV** | MVSalNet:Multi-View Augmentation for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136890268.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FHeart-eartH\u002FMVSalNet)  \n23 | **IJCV** | Learnable Depth-Sensitive Attention for Deep RGB-D Saliency Detection with Multi-modal Fusion Architecture Search | [Paper](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-022-01646-0)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsunpeng1996\u002FDSA2F)   \n24 | **IEEE TNNLS** | 3-D Convolutional Neural Networks for RGB-D Salient Object Detection and Beyond | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9889257)\u002F[Code](https:\u002F\u002Fgithub.com\u002FQianChen98\u002FRD3D)   \n25 | **IEEE TIP** | Improving RGB-D Salient Object Detection via Modality-aware Decoder | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9894275?casa_token=x6Stwtpf_igAAAAA:_ivL1dWDAHq29mTPgl4ctDVhwf6qbonXaQZ5t1PFqGwvDzVk4w28lEbwVt-9yQJ15C4zuI7TaFQ)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMengkeSong\u002FMaD)   \n26 | **IEEE TIP** | CIR-Net: Cross-modality interaction and refinement for RGB-D salient object detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02843)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frmcong\u002FCIRNet_TIP2022)   \n27 | **IEEE TCSVT** | HRTransNet: HRFormer-Driven Two-Modality Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9869666?casa_token=tYGCtPgo5kkAAAAA:WWYviL3djEpBBRvds_DtYaAfdqnV5Qvdq7DaS4b6Dk9lQc9beLj4hQ9T8fLNpYeU9ku71v96abg)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FHRTransNet) \n28 | **IEEE TMM** | Does Thermal Really Always Matter for RGB-T Salient Object Detection? | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.04266.pdf)\u002F[Code](https:\u002F\u002Frmcong.github.io\u002Fproj_TNet.html) \n29 | **IEEE TCSVT** | Modality-Induced Transfer-Fusion Network for RGB-D and RGB-T Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9925217?casa_token=gFFqPMx0N7sAAAAA:1DpXKX-b2jvTF1Zwcf-gtJkyj0ZW-lxbRcJb60rO0BiLFJqTbpg7Sl0VGhe2Ku62Rqtg2AfFyfY)\u002FCode  \n30 | **IEEE TIP** | Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04895.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FMMFT) \n31 | **IJCV** | Delving into Calibrated Depth for Accurate RGB-D Salient Object Detection | [Paper](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-022-01734-1)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FHiBo-UA) \n32 | **IEEE TCSVT** | MoADNet: Mobile Asymmetric Dual-Stream Networks for Real-Time and Lightweight RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9789193)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fkingkung2016\u002FMoADNet) \n\n\n\n\n## 2021       \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | Deep RGB-D Saliency Detection with Depth-Sensitive Attention and Automatic Multi-Modal Fusion | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.11832.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsunpeng1996\u002FDSA2F)   \n02 | **CVPR** | Calibrated RGB-D Saliency Object Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FJi_Calibrated_RGB-D_Salient_Object_Detection_CVPR_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FDCF)  \n03 | **AAAI** | RGB-D Salient Object Detection via 3D Convolutional Neural Networks | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.10241.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FPPOLYpubki\u002FRD3D)\n04 | **IEEE TIP** | Hierarchical Alternate Interaction Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9371407)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMathLee\u002FHAINet)\n05 | **IEEE TIP** | CDNet: Complementary Depth Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9366409)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fblanclist\u002FCDNet)\n06 | **IEEE TIP** | RGB-D Salient Object Detection with Ubiquitous Target Awareness | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.03425.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FUTA)\n07 | **ICME** | BTS-Net: Bi-directional Transfer-and-Selection Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.01784.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzwbx\u002FBTS-Net)\n08 | **ACMM** | Depth Quality-Inspired Feature Manipulation for Efficient RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.01779.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzwbx\u002FDFM-Net)  \n09 | **ACMM** | TriTransNet RGB-D Salient Object Detection with a Triplet Transformer Embedding Network | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03990.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FTriTransNet-RGB-D-Salient-Object-Detection-with-a-Triplet-Transformer-Embedding-Network)\n10 | **ICCV** | RGB-D Saliency Detection via Cascaded Mutual Information Minimization | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.07246.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002Fcascaded_rgbd_sod)\n11 | **ICCV** | Specificity-preserving RGB-D Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.08162.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FSPNet)\n12 | **ACMM** | Cross-modality Discrepant Interaction Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.01971.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002F1437539743\u002FCDINet-ACM-MM21)\n13 | **IEEE TIP** | Dynamic Selective Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9605221\u002Fauthors#authors)\u002F[Code](https:\u002F\u002Fgithub.com\u002FBrook-Wen\u002FDSNet)\n14 | **IJCV** | CNN-based RGB-D Salient Object Detection: Learn, Select and Fuse | [Paper](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-021-01452-0)\u002FCode\n15 | **NeurIPS** | Joint Semantic Mining for Weakly Supervised RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F642e92efb79421734881b53e1e1b18b6-Paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FJSM)   \n16 | **IEEE TMM** | CCAFNet: Crossflow and Cross-scale Adaptive Fusion Network for Detecting Salient Objects in RGB-D Images | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9424966)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzyrant\u002FCCAFNet)   \n17 | **IEEE TETCI** | APNet: Adversarial-Learning-Assistance and Perceived Importance Fusion Network for All-Day RGB-T Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9583676)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzyrant\u002FAPNet)   \n\n\n\n\n## 2020\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **IEEE TIP** | ICNet: Information Conversion Network for RGB-D Based Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9024241\u002Fauthors)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMathLee\u002FICNet-for-RGBD-SOD)  \n02 | **CVPR** | JL-DCF: Joint Learning and Densely-Cooperative Fusion Framework for RGB-D Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FFu_JL-DCF_Joint_Learning_and_Densely-Cooperative_Fusion_Framework_for_RGB-D_Salient_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fkerenfu\u002FJLDCF)  \n03 | **CVPR** | UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_UC-Net_Uncertainty_Inspired_RGB-D_Saliency_Detection_via_Conditional_Variational_Autoencoders_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FUCNet)  \n04 | **CVPR** | A2dele: Adaptive and Attentive Depth Distiller for Efficient RGB-D Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FPiao_A2dele_Adaptive_and_Attentive_Depth_Distiller_for_Efficient_RGB-D_Salient_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FCVPR2020-A2dele)  \n05 | **CVPR** | Select, Supplement and Focus for RGB-D Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_Select_Supplement_and_Focus_for_RGB-D_Saliency_Detection_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FCVPR_SSF-RGBD)   \n06 | **CVPR** | Learning Selective Self-Mutual Attention for RGB-D Saliency Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Learning_Selective_Self-Mutual_Attention_for_RGB-D_Saliency_Detection_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FS2MA)   \n07 | **ECCV** | Accurate RGB-D Salient Object Detection via Collaborative Learning | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.11782.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FCoNet)\n08 | **ECCV** | Cross-Modal Weighting Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.04901.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMathLee\u002FCMWNet)\n09 | **ECCV** | BBS-Net: RGB-D Salient Object Detection with a Bifurcated Backbone Strategy Network | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.02713.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzyjwuyan\u002FBBS-Net)\n10 | **ECCV** | Hierarchical Dynamic Filtering Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.06227.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flartpang\u002FHDFNet)\n11 | **ECCV** | Progressively Guided Alternate Refinement Network for RGB-D Salient Object Detection | [Paper](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123530511.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FShuhanChen\u002FPGAR_ECCV20)\n12 | **ECCV** | RGB-D Salient Object Detection with Cross-Modality Modulation and Selection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07051.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FcmMS-ECCV20)\n13 | **ECCV** | Cascade Graph Neural Networks for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.03087.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLA30\u002FCas-Gnn)   \n14 | **ECCV** | A Single Stream Network for Robust and Real-time RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.06811.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FDANet-RGBD-Saliency)  \n15 | **ECCV** | Asymmetric Two-Stream Architecture for Accurate RGB-D Saliency Detection | [Paper](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123730375.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsxfduter\u002FASTA)   \n16 | **ACMM** | Is Depth Really Necessary for Salient Object Detection? | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.00269.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJiaweiZhao-git\u002FDASNet)\n17 | **ACMM** | MMNet: Multi-Stage and Multi-Scale Fusion Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394171.3413523)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fgbliao\u002FMMNet)\n18 | **ACMM** | Feature Reintegration over Differential Treatment: A Top-down and Adaptive Fusion Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394171.3413969)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjack-admiral\u002FACM-MM-FRDT)\n19 | **IEEE TIP** | RGBD Salient Object Detection via Disentangled Cross-Modal Fusion | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9165931)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fhaochen593\u002FDisen_Fuse_TIP2020)\n20 | **IEEE TIP** | Improved Saliency Detection in RGB-D Images Using Two-Phase Depth Estimation and Selective Deep Fusion | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8976428)\u002FCode\n21 | **IEEE TIP** | Depth Potentiality-Aware Gated Attention Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.08608.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJosephChenHub\u002FDPANet)\n22 | **IEEE TNNLS** | D3Net:Rethinking RGB-D Salient Object Detection: Models, Datasets, and Large-Scale Benchmarks | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.06781.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FDengPingFan\u002FD3NetBenchmark)\n23 | **IEEE TCSVT** | Revisiting Feature Fusion for RGB-T Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9161021)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fnexiakele\u002FRevisiting-Feature-Fusion-for-RGB-T-Salient-Object-Detection)\n\n\n\n\n## 2019\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **ICCV** | DMRA: Depth-induced Multi-scale Recurrent Attention Network for Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FPiao_Depth-Induced_Multi-Scale_Recurrent_Attention_Network_for_Saliency_Detection_ICCV_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FDMRA_RGBD-SOD)\n02 | **CVPR** | CPFP: Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection | [Paper](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F19cvprRrbdSOD.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJXingZhao\u002FContrastPrior)\n03 | **IEEE TIP** | Three-stream Attention-aware Network for RGB-D Salient Object Detection | [Paper](http:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8603756\u002F)\u002FCode\n04 | **IEEE PR** | Multi-modal fusion network with multi-scale multi-path and cross-modal interactions for RGB-D salient object detection | [Paper](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0031320318303054)\u002FCode\n05 | **IEEE Access** | AFNet: Adaptive Fusion for RGB-D Salient Object Detection | [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1901.01369?context=cs.CV)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLucia-Ningning\u002FAdaptive_Fusion_RGBD_Saliency_Detection)\n06 | **IEEE TIP** | RGB-T Salient Object Detection via Fusing Multi-Level CNN Features | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8935533)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fnexiakele\u002FRGB-T-Salient-Object-Detection-via-Fusing-Multi-level-CNN-Features)\n07 | **IEEE TMM** | RGB-T image saliency detection via collaborative graph learning | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8744296)\u002FCode\n\n   \n\n## 2018\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | PCA: Progressively Complementarity-aware Fusion Network for RGB-D Salient Object Detection | [Paper](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F329741351_Progressively_Complementarity-Aware_Fusion_Network_for_RGB-D_Salient_Object_Detection)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fhaochen593\u002FPCA-Fuse_RGBD_CVPR18)\n02 | **IEEE TIP** | Co-saliency detection for RGBD images based on multi-constraint feature matching and cross label propagation | [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1710.05172)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frmcong\u002FResults-for-2018TIP-RGBD-Co-saliency)\n03 | **ICME** | PDNet: Prior-Model Guided Depth-enhanced Network for Salient Object Detection | [Paper](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.08636)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fcai199626\u002FPDNet)\n\n  \n\n## 2017\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **ICCV** | Learning RGB-D Salient Object Detection using background enclosure, depth contrast, and top-down features | [Paper](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.03607)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fsshige\u002Frgbd-saliency)\n02 | **IEEE TIP** | DF: RGBD Salient Object Detection via Deep Fusion | [Paper](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1607.03333)\u002F[Code](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Y-PqAjuH9xREBjfl7H45HA)\n03 | **IEEE TCyb** | CTMF: Cnns-based rgb-d saliency detection via cross-view transfer and multiview fusion | [Paper](http:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F6221036\u002F6352949\u002F08091125.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fhaochen593\u002FPCA-Fuse_RGBD_CVPR18)\n\n  \n## Traditional methods\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **MTA** | RGBD co-saliency detection via multiple kernel boosting and fusion | [Paper](http:\u002F\u002Fwww.onacademic.com\u002Fdetail\u002Fjournal_1000040179260010_4758.html)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fivpshu\u002FRGBD-co-saliency-detection-via-multiple-kernel-boosting-and-fusion)\n02 | **ICCV17** | An Innovative Salient Object Detection Using Center-Dark Channel Prior | [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1710.04071v4)\u002F[Code](https:\u002F\u002Fgithub.com\u002FChunbiaoZhu\u002FACVR2017)\n03 | **IEEE SPL** | Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion | [Paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F1710.05174)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frmcong\u002FCode-for-DCMC-method)\n04 | **IEEE SPL** | RGBD Co-saliency Detection via Bagging-Based Clustering | [Paper](http:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=7582474)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fivpshu\u002FRGBD-co-saliency-detection-via-bagging-based-clustering)\n05 | **CVPR** | Exploiting Global Priors for RGB-D Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_workshops_2015\u002FW14\u002Fhtml\u002FRen_Exploiting_Global_Priors_2015_CVPR_paper.html)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJianqiangRen\u002FGlobal_Priors_RGBD_Saliency_Detection)\n\n\n\n\u003Ca name=\"4DSOD\">\u003C\u002Fa>\n# 4D Light Field Saliency Detection  \u003Ca id=\"4D Light Field Saliency Detection\" class=\"anchor\" href=\"4D Light Field Saliency Detection\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **TOMM** | MCA: Saliency Detection on Light Field: A Multi-Cue Approach | [Paper](http:\u002F\u002Fwww.linliang.net\u002Fwp-content\u002Fuploads\u002F2017\u002F07\u002FACMTOM_Saliency.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fpencilzhang\u002FHFUT-Lytro-dataset)\n02 | **IJCAI** | DILF: Saliency Detection with a Deeper Investigation of Light Field | [Paper](http:\u002F\u002Fpdfs.semanticscholar.org\u002F4b17\u002Ffca1d67862e1fbffaf9ac64a1a73e0f20904.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fpencilzhang\u002Flightfieldsaliency_ijcai15)\n03 | **CVPR** | WSC: A Weighted Sparse Coding Framework for Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fpapers\u002FLi_A_Weighted_Sparse_2015_CVPR_paper.pdf)\u002FCode\n04 | **IEEE PAMI** | Saliency Detection on Light-Field | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=7570181)\u002F[Code](https:\u002F\u002Fdownload.csdn.net\u002Fdownload\u002Fdeepvl\u002F8076323?fps=1&locationNum=9)\n05 | **ICCV** | Deep Learning for Light Field Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWang_Deep_Learning_for_Light_Field_Saliency_Detection_ICCV_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FICCV2019_Deeplightfield_Saliency)\n06 | **NeurIPS** | Memory-oriented Decoder for Light Field Salient Object Detection | [Paper](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8376-memory-oriented-decoder-for-light-field-salient-object-detection.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FMoLF)\n07 | **AAAI** | Exploit and Replace: An Asymmetrical Two-Stream Architecture for Versatile Light Field Saliency Detection | [Paper](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1uPkpB51MRMm_Zmvh1M2Z3nc3D8r32MR9\u002Fview?usp=drivesdk)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FAAAI2020-Exploit-and-Replace-Light-Field-Saliency)\n08 | **IEEE TCSVT** | A Multi-Task Collaborative Network for Light Field Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9153018)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhangqiudan\u002FMTCNet-Lightfield)  \n09 | **ArXiv** | DUT-LFSaliency: Versatile Dataset and Light Field-to-RGB Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.15124.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FDUTLF-V2)   \n10 | **ArXiv** | Learning Synergistic Attention for Light Field Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.13916.pdf)\u002FCode\n11 | **ArXiv** | CMA-Net: A Cascaded Mutual Attention Network for Light Field Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.00949.pdf)\u002FCode\n12 | **IEEE TCyB** | PANet: Patch-Aware Network for Light Field Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9517032)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjyydlut\u002FIEEE-TCYB-PANet)\n13 | **ACMM21** | Occlusion-aware Bi-directional Guided Network for Light Field Salient Object Detection | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475312?casa_token=wbPMsKJlIUgAAAAA:YVsFNQb65PB4D6FGlMwtYtYi5nR4YCE1tJw_7frdEMm_exQIDw5dFzjIW0AjmwqlO1XEOEbz-g)\u002F[Code](https:\u002F\u002Fgithub.com\u002FTimsty1\u002FOBGNet)\n14 | **ICCV21** | Light Field Saliency Detection with Dual Local Graph Learning and Reciprocative Guidance | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.00698.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwangbo-zhao\u002F2021ICCV-DLGLRG)\n15| **CVPR22** | Learning from Pixel-Level Noisy Label : A New Perspective for Light Field Saliency Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.13456.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOLobbCode\u002FNoiseLF)  \n16| **NC** | MEANet: Multi-modal edge-aware network for light field salient object detection | [Paper](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0925231222003502)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fjiangyao-scu\u002FMEANet)\n17| **IEEE TIP** | Exploring Spatial Correlation for Light Field Saliency Detection: Expansion from a Single View | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9894273?casa_token=1mIHAJs5QB4AAAAA:vvhqsmbsJWjL9qGTjvOUWngBkgn9BJGkPY6M91tm2Tp-mhswCbmhtIU7cr5R6qT4vCqsU9L57kw)\u002F[Code]()\n18| **IEEE TIP** | Geometry Auxiliary Salient ObjectDetection for Light Fields via Graph Neural Networks | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9527158)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzhangqiudan\u002FGeoSOD-Lightfield)  \n19| **ACMM** | LFBCNet: Light Field Boundary-aware and Cascaded Interaction Network for Salient Object Detection | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3503161.3548275?casa_token=ifuWtYwl-roAAAAA:aSGUDEbp5YTrX7fxS0r7gEWq_kYKhOFom0VQ_6topWxvgArBopbmlvcAn7kXkjpo6jf9LEWX4vgivgU)\u002FCode  \n20| **IEEE TIP** | Weakly-Supervised Salient Object Detection on Light Fields | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9900489\u002Fauthors#authors)\u002FCode\n21| **IEEE TPAMI** | A Thorough Benchmark and a New Model for Light Field Saliency Detection | [Paper](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftp\u002F5555\u002F01\u002F10012539\u002F1JNmt6JGKu4)\u002F[Code](https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenDatasets)  \n22| **ICME23** | Guided Focal Stack Refinement Network for Light Field Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.05260.pdf)\u002FCode  \n23| **IEEE TCSVT** | LFTransNet: Light Field Salient Object Detection via a Learnable Weight Descriptor | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10138590?casa_token=rJeI2PnLzwQAAAAA:nnJc89z7hCRfJH3C-GtVjybe1HL11dZVoWOxzZ45d4Jn623BW4ZM9bS8DdyBiuvW-2zeyW7fdYJgkQ)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FLFTransNet)  \n24| **IEEE TCSVT** | Light Field Salient Object Detection with Sparse Views via Complementary and Discriminative Interaction Network | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10168184)\u002F[Code](https:\u002F\u002Fgithub.com\u002FGilbertRC\u002FLFSOD-CDINet)  \n:triangular_flag_on_post: 25| **arXiv** | LF Tracy: A Unified Single-Pipeline Approach for Salient Object Detection in Light Field Cameras | [Paper](https:\u002F\u002Fbrowse.arxiv.org\u002Fabs\u002F2401.16712)\u002F[Code](https:\u002F\u002Fgithub.com\u002FFeiBryantkit\u002FLF-Tracy)  \n\n\n\u003Ca name=\"VSOD\">\u003C\u002Fa>      \n# Video Salient Object Detection  \u003Ca id=\"Video Salient Object Detection\" class=\"anchor\" href=\"Video Salient Object Detection\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n\n## 2024  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n:triangular_flag_on_post: 01 | **AAAI** | A Motion-aware Spatio-temporal Graph for Video Salient Object Ranking | [Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=VUBtAcQN44)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzyf-815\u002FVSOR\u002Ftree\u002Fmain)  \n\n## 2023  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | Panoramic Video Salient Object Detection with Ambisonic Audio Guidance | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.14419.pdf)\u002FCode  \n\n## 2022  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | You Only Infer Once: Cross-Modal Meta-Transfer for Referring Video Object Segmentation | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-1100.LiD.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FSparklins\u002FYOFO)   \n02 | **AAAI** | Siamese Network with Interactive Transformer for Video Object Segmentation | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-702.LanM.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLANMNG\u002FSITVOS)   \n03 | **AAAI** | Iteratively Selecting an Easy Reference Frame Makes Unsupervised Video Object Segmentation Easier | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-11964.LeeY.pdf)\u002FCode   \n04 | **AAAI** | Reliable Propagation-Correction Modulation for Video Object Segmentation | [Paper](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-4288.XuX.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FJerryX1110\u002FRPCMVOS)     \n05 | **WACV** | Video Salient Object Detection via Contrastive Features and Attention Modules | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.02368.pdf)\u002FCode  \n06 | **ICIP** | Depth-Cooperated Trimodal Network for Video Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.06060.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fluyukang\u002FDCTNet)  \n07 | **ArXiv** | Learning Video Salient Object Detection Progressively from Unlabeled Videos | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02008)\u002FCode  \n08 | **ArXiv** | Rethinking Video Salient Object Ranking | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.17257)\u002FCode  \n09 | **ACMM** | Weakly Supervised Video Salient Object Detection via Point Supervision | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.07269.pdf)\u002FCode  \n10 | **ECCV** | Hierarchical Feature Alignment Network for Unsupervised Video Object Segmentation | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.08485)\u002F[Code](https:\u002F\u002Fgithub.com\u002FNUST-Machine-Intelligence-Laboratory\u002FHFAN)  \n11 | **ECCV** | XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.07115.pdf)\u002F[Code](https:\u002F\u002Fhkchengrex.github.io\u002FXMem\u002F)  \n12 | **ACMM** | Bidirectionally Learning Dense Spatio-temporal Feature Propagation Network for Unsupervised Video Object Segmentation | [Paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3503161.3548039?casa_token=xbckiU4No2wAAAAA:hpKejtoDLTyeTRtCNao2PHacfpfR7HRV38JOieDNbF-C67SAKaXTTswqs_yC8DDp7at-rUkYyc1N5I0)\u002FCode  \n13 | **NeurIPS** | Semi-Supervised Video Salient Object Detection Based on Uncertainty-Guided Pseudo Labels | [Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BOQr80FBX_)\u002F[Code](https:\u002F\u002Fgithub.com\u002FLanezzz\u002FUGPL)   \n\n\n## 2021  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | Weakly Supervised Video Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.02391.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwangbo-zhao\u002FWSVSOD)     \n02 | **ArXiv** | Video Salient Object Detection via Adaptive Local-Global Refinement | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14360.pdf)\u002FCode    \n03 | **ICIP** | Guidance and Teaching Network for Video Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.10110.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FGewelsJI\u002FGTNet)    \n04 | **ACMM** | Multi-Source Fusion and Automatic Predictor Selection for Zero-Shot Video Object Segmentation | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.05076.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FMulti-Source-APS-ZVOS)   \n05 | **ICCV** | Dynamic Context-Sensitive Filtering Network for Video Salient Object Detection | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_Dynamic_Context-Sensitive_Filtering_Network_for_Video_Salient_Object_Detection_ICCV_2021_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FRoudgers\u002FDCFNet)   \n06 | **ICCV** | Full-Duplex Strategy for Video Object Segmentation | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03151.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FGewelsJI\u002FFSNet) \n07 | **ICCV** | Deep Transport Network for Unsupervised Video Object Segmentation | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_Deep_Transport_Network_for_Unsupervised_Video_Object_Segmentation_ICCV_2021_paper.pdf)\u002FCode\n08 | **IEEE TIP** | Exploring Rich and Efficient Spatial Temporal Interactions for Real Time Video Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9390381)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fguotaowang\u002FSTVS)\n\n\n## 2020  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | STAViS: Spatio-Temporal AudioVisual Saliency Network | [Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FTsiami_STAViS_Spatio-Temporal_AudioVisual_Saliency_Network_CVPR_2020_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fatsiami\u002FSTAViS)  \n02 | **ECCV** | Unified Image and Video Saliency Modeling | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.05477.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Frdroste\u002Funisal)    \n03 | **ECCV** | Measuring the importance of temporal features in video saliency | [Paper](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123730664.pdf)\u002FCode  \n04 | **ECCV** | TENet: Triple Excitation Network for Video Salient Object Detection | [Paper](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123500205.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FOliverRensu\u002FTENet-Triple-Excitation-Network-for-Video-Salient-Object-Detection) \n05 | **IEEE TIP** | Learning Long-term Structural Dependencies for Video Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9199537)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fbowangscut\u002FLSD_GCN-for-VSOD)  \n06 | **IEEE Access** | Cross Complementary Fusion Network for Video Salient Object Detection | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=9250449)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fzi-yang-w\u002FCCNet) \n07 | **AAAI** | Pyramid Constrained Self-Attention Network for Fast Video Salient Object Detection | [Paper](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F20AAAI-PCSA.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fguyuchao\u002FPyramidCSA)   \n\n## 2019  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **ICCV** | Motion Guided Attention for Video Salient Object Detection | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.07061)\u002F[Code](https:\u002F\u002Fgithub.com\u002Flhaof\u002FMotion-Guided-Attention)  \n02 | **ICCV** | Semi-Supervised Video Salient Object Detection Using Pseudo-Labels | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FYan_Semi-Supervised_Video_Salient_Object_Detection_Using_Pseudo-Labels_ICCV_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FKinpzz\u002FRCRNet-Pytorch)   \n03 | **ICCV** | Temporally-Aggregating Spatial Encoder-Decoder Network for Video Saliency Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMin_TASED-Net_Temporally-Aggregating_Spatial_Encoder-Decoder_Network_for_Video_Saliency_Detection_ICCV_2019_paper.html)\u002F[Code](https:\u002F\u002Fgithub.com\u002FMichiganCOG\u002FTASED-Net)   \n04 | **ICCV** | RANet：Ranking attention Network for Fast Video Object Segmentation | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.06647)\u002F[Code](https:\u002F\u002Fgithub.com\u002FStorife\u002FRANet)   \n05 | **CVPR** | Shifting More Attention to Video Salient Objection Detection | [Paper](https:\u002F\u002Fgithub.com\u002FDengPingFan\u002FDAVSOD\u002Fblob\u002Fmaster\u002F%5B2019%5D%5BCVPR%5D%5BOral%5D【SSAV】【DAVSOD】Shifting%20More%20Attention%20to%20Video%20Salient%20Object%20Detection.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002FDengPingFan\u002FDAVSOD)   \n06 | **CVPR** | Learning Unsupervised Video Object Segmentation through Visual Attention | [Paper](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F332751903_Learning_Unsupervised_Video_Object_Segmentation_Through_Visual_Attention)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FAGS)   \n07 | **CVPR** | See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLu_See_More_Know_More_Unsupervised_Video_Object_Segmentation_With_Co-Attention_CVPR_2019_paper.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fcarrierlxk\u002FCOSNet)  \n08 | **IEEE TIP** | Improving Robust Video Saliency Detection based on Long-term Spatial-Temporal Information | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8811767)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fguotaowang\u002FTIP_LSTI)  \n\n\n\n## 2018  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **ECCV** | Pyramid Dilated Deeper CoonvLSTM for Video Salient Object Detection | [Paper](https:\u002F\u002Fgithub.com\u002Fshenjianbing\u002FPDBConvLSTM\u002Fblob\u002Fmaster\u002FPyramid%20Dilated%20Deeper%20CoonvLSTM%20for%20Video%20Salient%20Object%20Detection.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fshenjianbing\u002FPDB-ConvLSTM)\n02 | **ECCV** | DeepVS: A Deep Learning Based Video Saliency Prediction Approach | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLai_Jiang_DeepVS_A_Deep_ECCV_2018_paper.html)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fremega\u002FOMCNN_2CLSTM)\n03 | **CVPR** | Revisiting Video Saliency: A Large-scale Benchmark and a New Model | [Paper](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FDHF1K\u002Fblob\u002Fmaster\u002F(pami19)DynamicSaliency.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FDHF1K)  \n04 | **CVPR** | Flow Guided Recurrent Neural Encoder for Video Salient Object Detection | [Paper](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002FCameraReady\u002F1226.pdf)\u002FCode  \n05 | **IEEE TIP** | Video Salient Object Detection via Fully Convolutional Networks | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.00871.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FViSalientObject)\n\n## 2017  \n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | **IEEE TIP** | Learning to Detect Video Saliency with HEVC Features | [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7742914\u002F)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fremega\u002FCompressd_Domain_SaliencyPrediction)\n\n\n\u003Ca name=\"survey\">\u003C\u002Fa>  \n# Earlier Methods  \u003Ca id=\"Earlier Methods\" class=\"anchor\" href=\"Earlier Methods\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n\n**No.** | **Pub.** | **Title** | **Links** \n:-: | :-: | :-  | :-: \n01 | IEEE TIP15 | Salient object detection: A benchmark | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1501.02741.pdf)\u002FCode\n02 | IEEE TCSVT18 | Review of visual saliency detectionwith comprehensive information | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.03391.pdf)\u002FCode\n03 | ACM TIST18 | A review of co-saliency detection algorithms: Fundamentals, applications, and challenges | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1604.07090.pdf)\u002FCode\n04 | IEEE TSP18 | Advanced deep-learning techniques for salient and category-specific object detection: A survey| [Paper](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8253582)\u002FCode\n05 | IJCV18 | Attentive systems: A survey | [Paper](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-017-1042-6)\u002FProject\n06 | ECCV18 | Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground | [Paper](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F18ECCV-SOCBenchmark.pdf)\u002F[Code](http:\u002F\u002Fdpfan.net\u002Fsocbenchmark\u002F)\n07 | CVM18 | Salient object detection: A survey | [Paper](https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1007\u002Fs41095-019-0149-9.pdf)\u002FCode\n08 | IEEE TNNLS19 | Salient Object detection with deep learning: Areview | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.05511.pdf)\u002FCode\n09 | arXiv19 |  Salient Object Detection in the Deep Learning Era-An In-Depth Survey | [Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09146.pdf)\u002F[Code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FSODsurvey) \n10 | CVM21 | RGB-D Salient Object Detection: A Survey | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.00230)\u002F[Code](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FRGBD-SODsurvey)\n\nThe part of the collection is thanks to [Deng-Ping Fan](http:\u002F\u002Fdpfan.net) and [Tao Zhou](https:\u002F\u002Fgithub.com\u002Ftaozh2017).\n\n* Salient Object Detection in the Deep Learning Era: An In-Depth Survey. [paper link](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09146.pdf).\n* This is a paper list published by another author. [here](https:\u002F\u002Fgithub.com\u002FArcherFMY\u002FPaper_Reading_List\u002Fblob\u002Fmaster\u002FImage-01-Salient-Object-Detection.md)\n* RGB-D Salient Object Detection: A Survey. [project link](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FRGBD-SODsurvey).\n\n\n\u003Ca name=\"data\">\u003C\u002Fa>  \n# The SOD dataset download    \u003Ca id=\"The SOD dataset download\" class=\"anchor\" href=\"The SOD dataset download\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n* 2D SOD datasets [download1](https:\u002F\u002Fgithub.com\u002FTinyGrass\u002FSODdataset) or [download2](https:\u002F\u002Fgithub.com\u002FArcherFMY\u002Fsal_eval_toolbox), [download3](https:\u002F\u002Fgithub.com\u002Fmagic428\u002Fawesome-segmentation-saliency-dataset).\n* 3D SOD datasets [download](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FRGBD-SOD-datasets).  \n* 4D SOD datasets [download](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FMoLF).\n* Video SOD datasets [download](http:\u002F\u002Fdpfan.net\u002FDAVSOD\u002F).\n\n\u003Ca name=\"eval\">\u003C\u002Fa>\n# Evaluation Metrics  \u003Ca id=\"Evaluation Metrics\" class=\"anchor\" href=\"Evaluation Metrics\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n* Saliency maps evaluation.      \nThis link near all evaluation metrics for salient object detection including E-measure, S-measure, F-measure, MAE scores and PR curves or bar metrics.\nYou can found in [here](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSaliency-Evaluation-Toolbox).        \n\n* Saliency Dataset evaluation.       \nThis repo can compute the ratio of obj.area and obj.contrast on binary saliency dataset. This Toolbox contains two evaluation metrics, including obj(object).area and obj.contrast.     \nYou can found in [here](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSaliency-dataset-evaluation).      \n\n\u003Ca name=\"leaderboard\">\u003C\u002Fa>\n# Comparison with state-of-the-arts  \u003Ca id=\"Comparison with state-of-the-arts\" class=\"anchor\" href=\"Comparison with state-of-the-arts\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n* [Here](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsalient-object-detection-on-duts-te) includes the performance comparison of almost all 2D salient object detection algorithms. \n* [Here](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Frgb-d-salient-object-detection) includes the performance comparison of almost all 3D RGB-D salient object detection algorithms. \n\n\n### AI Conference Deadlines\n[Realted AI Conference deadline](https:\u002F\u002Faideadlin.es\u002F?sub=ML,CV,NLP,RO,SP,DM)     \n[Realted AI Conference Accepted Rate](https:\u002F\u002Fgithub.com\u002Flixin4ever\u002FConference-Acceptance-Rate)\n","# 基于CNN的显著性检测论文列表       \n\n本仓库主要聚焦于基于深度学习的显著性检测方法（**2D RGB、3D RGB-D\u002FT、视频显著性检测及4D光场显著性检测**），并提供相关代码和论文的汇总。我们希望这个仓库能够帮助您更好地理解深度学习时代的显著性检测技术。        \n\n--------------------------------------------------------------------------------------\n :heavy_exclamation_mark:  **2D SOD**：新增两篇AAAI 2025论文，以及一篇PAMI论文。                 \n :heavy_exclamation_mark:  **3D SOD**：新增两篇AAAI 2025论文，以及一篇ACM MM 2024论文。    \n :heavy_exclamation_mark:  **LF SOD**：新增两篇IEEE TCSVT论文，以及一篇arXiv 2024预印本论文。   \n :heavy_exclamation_mark:  **视频SOD**：新增一篇NeurIPS 2024论文。 \n \n [伪装目标检测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10274) 是与显著性检测密切相关的任务，其论文摘要可参见[此链接](https:\u002F\u002Fgithub.com\u002FChunmingHe\u002Fawesome-concealed-object-segmentation)。\n\n:running: **我们将持续更新本仓库。** :running:    \n--------------------------------------------------------------------------------------\n\n\n------\n \n\n## 内容：\n\n1. [论文列表概览](#overall)\n2. [2D RGB显著性检测](#2DSOD) \n3. [3D RGB-D\u002FT显著性检测](#3DSOD) \n4. [4D光场显著性检测](#4DSOD) \n5. [视频显著性检测](#VSOD) \n6. [综述及早期方法](#survey) \n7. [SOD数据集下载](#data) \n8. [评估指标](#eval) \n9. [SOD排行榜](#leaderboard)\n\n\n\n------\n\n\u003Ca name=\"overall\">\u003C\u002Fa>   \n# 总体 \n![头像](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjiwei0921_SOD-CNNs-based-code-summary-_readme_d86585555f7c.jpg)\n    \n\u003Ca name=\"2DSOD\">\u003C\u002Fa> \n# 2D RGB显著性检测 \u003Ca id=\"2D RGB显著性检测\" class=\"anchor\" href=\"#2D RGB显著性检测\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa>    \n\n## 2025      \n**序号** | **发表期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n:triangular_flag_on_post: 01 | **PAMI** | 用于伪装与显著目标检测的条件扩散模型 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10834569)\u002F[代码](https:\u002F\u002Fgithub.com\u002FRapisurazurite\u002FCamoDiffusion)    \n:triangular_flag_on_post: 02 | **AAAI** | MSV-PCT：用于点云中显著目标检测的多稀疏视角增强Transformer框架 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F32892)\u002F代码      \n:triangular_flag_on_post: 03 | **AAAI** | 利用加法神经网络探索显著目标检测 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F33028)\u002F代码\n\n## 2024      \n**序号** | **会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **WACV** | 基于分割频率统计的无监督和半监督共显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06654.pdf)\u002F代码   \n02 | **WACV** | 3SD：无需标签的自监督显著性检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2024\u002Fpapers\u002FYasarla_3SD_Self-Supervised_Saliency_Detection_With_No_Labels_WACV_2024_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frajeevyasarla\u002F3SD)    \n03 | **WACV** | 从注视点学习显著性 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2024\u002Fpapers\u002FDjilali_Learning_Saliency_From_Fixations_WACV_2024_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FYasserdahouML\u002FSalTR)     \n04 | **WACV** | 针对视障人士拍摄图像的显著目标检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2024\u002Fpapers\u002FReynolds_Salient_Object_Detection_for_Images_Taken_by_People_With_Vision_WACV_2024_paper.pdf)\u002F[代码](https:\u002F\u002Fvizwiz.org\u002Ftasks-and-datasets\u002Fsalient-object-detection\u002F)     \n05 | **WACV** | 面向遥感显著目标检测的对抗云攻击防御 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06654.pdf)\u002F代码  \n06 | **ICASSP** | 零样本共显著目标检测框架 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.05499)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fhkxiao\u002Fzs-cosod)  \n07 | **CVPR** | VSCode：基于2D提示学习的通用视觉显著与伪装目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.15011.pdf)\u002F代码\n08 | **AAAI** | WeakPCSOD：克服框标注偏差的弱监督点云显著目标检测 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F28403)\u002F代码\n09 | **AAAI** | SeqRank：显著对象的序列排序 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F27964)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fguanhuankang\u002FSeqRank) \n10 | **AAAI** | 在连续尖峰流中寻找视觉显著性 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F28610)\u002F[代码](https:\u002F\u002Fgithub.com\u002FBIT-Vision\u002FSVS) \n11 | **CVPR** | COSALPURE：从群体图像中学习概念以实现鲁棒的共显著性 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.18554.pdf)\u002F[代码](https:\u002F\u002Fv1len.github.io\u002FCosalPure\u002F) \n12 | **IJCAI** | 基于知识迁移的统一无监督显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.14759)\u002F[代码](https:\u002F\u002Fgithub.com\u002FI2-Multimedia-Lab\u002FA2S-v3) \n13 | **TII** | MINet：用于带钢表面缺陷实时显著目标检测的多尺度交互网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.16096)\u002F[代码](https:\u002F\u002Fgithub.com\u002FKunye-Shen\u002FMINet) \n14 | **ICML** | 尺寸不变性至关重要：重新思考不平衡多目标显著目标检测的指标与损失 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.09782)\u002F[代码](https:\u002F\u002Fgithub.com\u002FFerry-Li\u002FSI-SOD) \n15 | **ICML** | Spider：面向上下文依赖概念分割的统一框架 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.01002)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FSpider-UniCDSeg) \n16 | **ICML** | 潜入水下：Segment Anything Model引导下的水下显著实例分割及大规模数据集 | 论文\u002F[代码](https:\u002F\u002Fgithub.com\u002FLiamLian0727\u002FUSIS10K) \n17 | **CVPR** | 用于显著目标排序的领域分离图神经网络 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FWu_Domain_Separation_Graph_Neural_Networks_for_Saliency_Object_Ranking_CVPR_2024_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FWu-ZJ\u002FDSGNN) \n18 | **CVPR** | 利用人眼注视推进显著性排序：数据集、模型与基准 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FDeng_Advancing_Saliency_Ranking_with_Human_Fixations_Dataset_Models_and_Benchmarks_CVPR_2024_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FEricDengbowen\u002FQAGNet) \n19 | **CVPR** | 面向无示例类别增量学习的任务自适应显著性指导 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FLiu_Task-Adaptive_Saliency_Guidance_for_Exemplar-free_Class_Incremental_Learning_CVPR_2024_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fscok30\u002Ftass) \n20 | **CVPR** | 无监督显著实例检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FTian_Unsupervised_Salient_Instance_Detection_CVPR_2024_paper.pdf)\u002F代码\n21 | **CVPR** | DiffSal：面向扩散显著性预测的音视频联合学习 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FXiong_DiffSal_Joint_Audio_and_Video_Learning_for_Diffusion_Saliency_Prediction_CVPR_2024_paper.pdf)\u002F[代码](https:\u002F\u002Fjunwenxiong.github.io\u002FDiffSal) \n22 | **TMM** | ADMNet：注意力引导的密集多尺度网络用于轻量级显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10555313)\u002F[代码](https:\u002F\u002Fgithub.com\u002FKunye-Shen\u002FADMNet)\n23 | **ACMMM** | 多尺度且细节增强的Segment Anything Model用于显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2408.04326)\u002F[代码](https:\u002F\u002Fgithub.com\u002FBellyBeauty\u002FMDSAM)\n24 | **ACMMM** | 实例级全景视听显著性检测与排序 | [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=0Q9zTGHOda)\u002F代码\n25 | **ECCV** | CONDA：用于共显著目标检测的凝聚型深度关联学习 | [论文](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F06695.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fdragonlee258079\u002FCONDA)\n26 | **ECCV** | 基于多尺度特征对应关系的自监督共显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.11107)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsourachakra\u002FSCoSPARC)\n27 | **ECCV** | SHINE：面向组合式时序定位的显著性感知分层负采样排序 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2407.05118)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzxccade\u002FSHINE)  \n28 | **ECCV** | DSMix：基于畸变诱导显著图的无参考图像质量评估预训练 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2407.03886)\u002F[代码](https:\u002F\u002Fgithub.com\u002FI2-Multimedia-Lab\u002FDSMix)\n29 | **ECCV** | 基于显著性的自适应掩码：重访标记动态以增强预训练 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.08327)\u002F代码\n30 | **ECCV** | 基于潜在扩散的数据增强用于显著性预测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.07307)\u002F[代码](https:\u002F\u002Fgithub.com\u002FIVRL\u002FAugSal)\n31 | **PAMI** | 分而治之：用于RGB-T显著目标检测的融合三流网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10778650\u002Fauthors#authors)\u002F[代码](https:\u002F\u002Fgithub.com\u002FCSer-Tang-hao\u002FConTriNet_RGBT-SOD)\n\n## 2023      \n**序号** | **会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | LeNo：具有可学习噪声的对抗鲁棒显著性目标检测网络 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15392)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fssecv\u002FLeNo)  \n02 | **AAAI** | 像素即一切：用于显著性目标检测的对抗轨迹集成主动学习 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.06493.pdf)\u002F代码  \n03 | **AAAI** | 用于协同显著性目标检测的记忆辅助对比一致性学习 | [论文](https:\u002F\u002Fscholar.google.com\u002Fcitations?view_op=list_works&hl=en&user=TZRzWOsAAAAJ)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZhengPeng7\u002FMCCL#)   \n04 | **TNNLS** | 用于360°全方位图像中显著性目标检测的多投影融合与精炼网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.12378.pdf)\u002F[代码](https:\u002F\u002Frmcong.github.io\u002Fproj_MPFRNet.html)  \n05 | **IEEE TIP** | 提升更广的感受野以增强显著性目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10006743)\u002F[代码](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FBBRF-TIP)   \n06 | **IEEE TPAMI** | 基于协同表示净化的协同显著性目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.07670.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZZY816\u002FCoRP)   \n07 | **CVPR** | 无监督显著性目标检测中的纹理引导显著性蒸馏 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.05921.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fmoothes\u002FA2S-v2)   \n08 | **CVPR** | 用于协同显著性目标检测的判别式协同显著性和背景挖掘Transformer | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.00514.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fdragonlee258079\u002FDMT)   \n09 | **CVPR** | Sketch2Saliency：从人类草图中学习检测显著性目标 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.11502.pdf)\u002F[代码](https:\u002F\u002Fayankumarbhunia.github.io\u002FSketch2Saliency\u002F)   \n10 | **CVPR** | 通过基于显著性的无监督预训练提升低数据量实例分割性能 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.01171.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flifuguan\u002Fsaliency_prompt)   \n11 | **CVPR** | 像素、区域与物体：显著性目标检测的多重增强 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FWang_Pixels_Regions_and_Objects_Multiple_Enhancement_for_Salient_Object_Detection_CVPR_2023_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyiwangtz\u002FMENet)   \n12 | **CVPR** | 基于不确定性感知的群体交换掩码的协同显著性目标检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FWu_Co-Salient_Object_Detection_With_Uncertainty-Aware_Group_Exchange-Masking_CVPR_2023_paper.pdf)\u002F代码  \n13 | **CVPR** | 显著性目标检测模型的分布不确定性建模 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FTian_Modeling_the_Distributional_Uncertainty_for_Salient_Object_Detection_Models_CVPR_2023_paper.pdf)\u002F[代码](https:\u002F\u002Fnpucvr.github.io\u002FDistributional_uncer\u002F)  \n14 | **ACM MM** | 用于高分辨率显著性目标检测的循环多尺度Transformer | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.03826.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FDrowsyMon\u002FRMFormer)  \n15 | **TOMM** | PAV-SOD：面向全景视听显著性检测的新任务 | [论文](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-1RcARcbz4pACFzkjXcp6MP8R9CGScqI\u002Fview)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJun-Pu\u002FPAV-SOD)  \n16 | **ICCV** | 基于反事实的显著性图：迈向神经网络的视觉对比解释 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FWang_Counterfactual-based_Saliency_Map_Towards_Visual_Contrastive_Explanations_for_Neural_Networks_ICCV_2023_paper.pdf)\u002F代码\n17 | **ACM MM** | 360°显著性目标检测中的畸变感知Transformer | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03359)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyjzhao19981027\u002FDATFormer\u002F) \n18 | **ACM MM** | 基于自监督自顶向下上下文的端到端无监督显著性检测 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3581783.3612212)\u002F代码 \n19 | **ACM MM** | 基于密集金字塔Transformer的分区式显著性排序 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.00236.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fssecv\u002FPSR) \n20 | **ACM MM** | 基于语义级共识提取与分散的协同显著性目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07753v1)\u002F代码  \n21 | **TMM** | 向完整且细节保留的显著性目标检测迈进 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10287608)\u002F[代码](https:\u002F\u002Fgithub.com\u002FBarCodeReader\u002FSelfReformer) \n22 | **arXiv** | 基于适应性提示学习的统一模态显著性目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.16835.pdf)\u002F代码\n23 | **arXiv** | 一体化：RGB、RGB-D和RGB-T显著性目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.14746.pdf)\u002F代码\n24 | **NeurIPS** | 深度显著性模型学到了关于视觉注意的什么？ | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09679)\u002F[代码](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.09679.pdf)  \n25 | **PAMI** | CADC++：用于协同显著性目标检测的先进共识感知动态卷积 | [论文](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftp\u002F5555\u002F01\u002F10339864\u002F1SBL7kZYYyA)\u002F代码  \n26 | **IEEE TIP** | USOD10K：一个新的水下显著性目标检测基准数据集 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10102831)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLinHong-HIT\u002FUSOD10K)  \n27 | **IEEE TMM** | 基于光谱驱动的混合频率网络用于高光谱显著性目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10313066\u002F)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flaprf\u002FSMN)  \n28 | **IEEE TIP** | 重新思考目标显著性排序：一种全新的全流程处理范式 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.03226.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMengkeSong\u002FSaliency-Ranking-Paradigm)\n\n## 2022       \n**序号** | **会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | 基于不确定性感知伪标签学习的无监督域适应显著目标检测 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-604.YanP.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FKinpzz\u002FUDASOD-UPL)  \n02 | **AAAI** | 用于无监督显著目标检测的因果去偏框架 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-108.LinX.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJaiharish-passion07\u002FAI_Project)  \n03 | **AAAI** | 基于能量模型的生成式协同显著性预测 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-1516.ZhangJ.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FSalCoopNets)  \n04 | **AAAI** | 基于点监督的弱监督显著目标检测 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-461.GaoS.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fshuyonggao\u002FPSOD)    \n05 | **AAAI** | TRACER：极端注意力引导的显著目标追踪网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.07380.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FKarel911\u002FTRACER)  \n06 | **AAAI** | 我能找到你！基于边界引导的分离注意力网络用于伪装目标检测 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-6565.ZhuH.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FWolfberryCoke\u002FBSA-Net)  \n07 | **WACV** | 用于精确显著目标检测的递归轮廓-显著性融合网络 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2022\u002Fpapers\u002FKe_Recursive_Contour-Saliency_Blending_Network_for_Accurate_Salient_Object_Detection_WACV_2022_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FBarCodeReader\u002FRCSB-PyTorch)  \n08 | **IEEE TPAMI** | PoolNet+：探索池化在显著目标检测中的潜力 | [论文](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21PAMI-PoolNet.pdf)\u002F[代码](http:\u002F\u002Fmmcheng.net\u002Fpoolnet\u002F)  \n09 | **IEEE TPAMI** | 一种高效模型用于研究显著目标检测的语义 | [论文](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21PAMI-Sal100K.pdf)\u002F[代码](https:\u002F\u002Fmmcheng.net\u002Fsod100k\u002F)  \n10 | **IEEE TGRS** | 基于特征相关性的光学遥感图像轻量级显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.08049)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMathLee\u002FCorrNet)  \n11 | **TOMM** | 将显著性检测解耦为级联细节建模与主体填充 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.04112.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FKingJamesSong\u002FDisentangleSaliency)    \n12 | **TMM** | 针对弱监督显著目标检测的噪声敏感对抗学习 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9716868\u002Fauthors#authors)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwuweia123\u002FIEEE-TMM-NSALWSS) \n13 | **ArXiv** | 显著目标检测、深度估计和轮廓提取的联合学习 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04895.pdf)\u002F代码 \n14 | **ArXiv** | 用于分组分割的统一Transformer框架：共同分割、共同显著性检测和视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04708.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsuyukun666\u002FUFO) \n15 | **IEEE TCyb** | 用于光学遥感图像中显著目标检测的邻近上下文协调网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.13664.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMathLee\u002FACCoNet) \n16 | **IEEE TCyb** | 用于光学遥感图像中显著目标检测的边缘引导循环定位网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9756846)\u002F[代码](https:\u002F\u002Fgithub.com\u002FKunye-Shen\u002FERPNet) \n17 | **IEEE TCSVT** | 用于显著目标检测的渐进式双注意力残差网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9745960)\u002F代码 \n18 | **IEEE TCyb** | 用于共同显著性检测的全局与局部协同学习 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.08917.pdf)\u002F[代码](https:\u002F\u002Frmcong.github.io\u002Fproj_GLNet.html) \n19 | **ArXiv** | 用于生成式显著性检测的能量基先验 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.08803.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FEBMGSOD) \n20 | **IEEE TIP** | EDN：通过极低采样率网络进行显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.13093.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyuhuan-wu\u002FEDN) \n21 | **IEEE TPAMI** | 通过完整性学习进行显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.07663.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fmczhuge\u002FICON) \n22 | **IEEE TCSVT** | TCNet：通过Transformer与CNN并行交互进行共同显著性检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9968016)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhangqiao970914\u002FTCNet)   \n23 | **ArXiv** | 激活到显著性：为无监督显著目标检测构建高质量标签 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.03650)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fmoothes\u002FA2S-USOD)  \n24 | **CVPR** | 放大与缩小：用于伪装目标检测的混合尺度三元组网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.02688.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flartpang\u002FZoomNet)  \n25 | **CVPR** | 你能发现变色龙吗？针对共同显著性检测的对抗性伪装图像 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.09258.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Ftsingqguo\u002Fjadena) \n26 | **CVPR** | 民主很重要：用于共同显著性检测的全面特征挖掘 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.05787.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsiyueyu\u002FDCFM) \n27 | **CVPR** | 用于单阶段高分辨率显著性检测的金字塔嫁接网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.05041.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FPGNet) \n28 | **CVPR** | 用于减少视觉干扰的深度显著性先验 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FAberman_Deep_Saliency_Prior_for_Reducing_Visual_Distraction_CVPR_2022_paper.pdf)\u002F[代码](https:\u002F\u002Fdeep-saliency-prior.github.io\u002F) \n29 | **CVPR** | 用于深度无监督显著性检测的多源不确定性挖掘 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FWang_Multi-Source_Uncertainty_Mining_for_Deep_Unsupervised_Saliency_Detection_CVPR_2022_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyifanw90\u002FUMNet)   \n30 | **CVPR** | 用于显著性排序的双向对象-上下文优先级学习 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FTian_Bi-Directional_Object-Context_Prioritization_Learning_for_Saliency_Ranking_CVPR_2022_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FGrassBro\u002FOCOR) \n31 | **CVPR** | 文本是否会在电商图像上吸引注意力：一个新的显著性预测数据集和方法 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FJiang_Does_Text_Attract_Attention_on_E-Commerce_Images_A_Novel_Saliency_CVPR_2022_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fleafy-lee\u002FE-commercial-dataset)  \n32 | **CVPRW** | 用于显著性检测的金字塔式注意力 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.06788.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Ftanveer-hussain) \n33 | **CVPRW** | 带有谱聚类投票的无监督显著目标检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022W\u002FL3D-IVU\u002Fpapers\u002FShin_Unsupervised_Salient_Object_Detection_With_Spectral_Cluster_Voting_CVPRW_2022_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FNoelShin\u002Fselfmask) \n34 | **ECCV** | KD-SCFNet：通过知识蒸馏实现更准确高效的显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.02178.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FzhangjinCV\u002FKD-SCFNet) \n35 | **ECCV** | 点云中的显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.11889.pdf)\u002F[代码](https:\u002F\u002Fgit.openi.org.cn\u002FOpenPointCloud\u002FPCSOD) \n36 | **PR** | BiconNet：一种保留边缘的连通性导向显著目标检测方法 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.00334.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZyun-Y\u002FBiconNets) \n37 | **IEEE TCyb** | DNA：用于显著目标检测的深度监督非线性聚合 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9345433)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyun-liu\u002FDNA) \n38 | **ACMM** | 合成数据监督下的显著目标检测 | [论文](http:\u002F\u002Fwww.digitalimaginggroup.ca\u002Fmembers\u002FShuo\u002FACM_Multimedia_2022_final_version.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwuzhenyubuaa\u002FSODGAN) \n39 | **IEEE TCSVT** | 用于显著目标检测的混合标签弱监督学习框架 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.02957.pdf)\u002F[代码](https:\u002F\u002Frmcong.github.io\u002Fproj_Hybrid-Label-SOD.html) \n40 | **CVPRW** | 带有谱聚类投票的无监督显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.12614.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FNoelShin\u002Fselfmask) \n41 | **IEEE TMM** | 适用于360°全方位图像的视图感知显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.13222.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJanySunny\u002FODI-SOD) \n42 | **ACCV** | 重新审视图像金字塔结构以用于高分辨率显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.09475)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fplemeri\u002FInSPyReNet) \n43 | **ECCV** | 通过生成式核进行显著性层次建模以用于显著目标检测 | [论文](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136880564.pdf)\u002F代码\n44 | **IEEE TIP** | 通过动态尺度路由进行显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.13821.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwuzhenyubuaa\u002FDPNet)\n45 | **NeurIPS** | MOVE：无监督可移动物体分割与检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.07920.pdf)\u002F代码\n46 | **PR** | BiconNet：一种保留边缘的连通性导向显著目标检测方法 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.00334)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZyun-Y\u002FBiconNets)\n\n\n\n## 2021       \n**序号** | **会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | 基于局部显著性一致性的结构一致性弱监督显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.04404.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsiyueyu\u002FSCWSSOD\u002Ftree\u002Ff8650567cbbc8df5bf6edc32a633c47a885574cd)\n02 | **AAAI** | 用于显著目标检测的金字塔特征收缩 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FAAAI-1322.MaM.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FPFSNet) \n03 | **AAAI** | 全局定位，局部分割：一种带有知识回顾网络的渐进式架构用于显著目标检测 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FAAAI-4841.XuB.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fbradleybin\u002FLocate-Globally-Segment-locally-A-Progressive-Architecture-With-Knowledge-Review-Network-for-SOD)  \n04 | **AAAI** | 多尺度图融合用于协同显著性检测 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F16951)\u002F代码\n05 | **AAAI** | 通过读者感知的主题建模和显著性检测生成多样化评论 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.06856.pdf)\u002F代码\n06 | **ICIP** | 多尺度IoU：一种用于评估具有精细结构的显著目标检测的指标 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14572.pdf)\u002F代码\n07 | **TCSVT** | 基于显著目标瞬间识别的弱监督显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.00932.pdf)\u002F代码 \n08 | **TIP** | SAMNet：用于轻量级显著目标检测的立体注意多尺度网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9381668)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyun-liu\u002FFastSaliency) \n09 | **IJCAI** | C2FNet：用于伪装目标检测的上下文感知跨层融合网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.12555.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fthograce\u002FC2FNet)\n10 | **CVPR** | 铁路不是火车：将显著性作为伪像素监督用于弱监督语义分割 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLee_Railroad_Is_Not_a_Train_Saliency_As_Pseudo-Pixel_Supervision_for_CVPR_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fhalbielee\u002FEPS)\n11 | **CVPR** | 用于人员搜索的原型引导显著性特征学习 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FKim_Prototype-Guided_Saliency_Feature_Learning_for_Person_Search_CVPR_2021_paper.pdf)\u002F代码\n12 | **CVPR** | 网格显著性：独立的感知度量还是图像显著性的衍生品？ | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FSong_Mesh_Saliency_An_Independent_Perceptual_Measure_or_a_Derivative_of_CVPR_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frsong\u002FMIMO-GAN)\n13 | **CVPR** | 基于显著图像的类别无关学习的弱监督实例分割 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_Weakly-Supervised_Instance_Segmentation_via_Class-Agnostic_Learning_With_Salient_Images_CVPR_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FBoxCaseg)\n14 | **CVPR** | DeepACG：基于语义感知对比Gromov-Wasserstein距离的协同显著性检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_DeepACG_Co-Saliency_Detection_via_Semantic-Aware_Contrast_Gromov-Wasserstein_Distance_CVPR_2021_paper.pdf)\u002F代码\n15 | **CVPR** | 基于显著性图的对象检测器黑盒解释 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FPetsiuk_Black-Box_Explanation_of_Object_Detectors_via_Saliency_Maps_CVPR_2021_paper.pdf)\u002F代码\n16 | **CVPR** | 从语义类别到注视点：一种新颖的弱监督视觉-听觉显著性检测方法 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_From_Semantic_Categories_to_Fixations_A_Novel_Weakly-Supervised_Visual-Auditory_Saliency_CVPR_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fguotaowang\u002FSTANet)\n17 | **CVPR** | CAMERAS：用于图像显著性检测的增强分辨率与保真度类激活映射 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FJalwana_CAMERAS_Enhanced_Resolution_and_Sanity_Preserving_Class_Activation_Mapping_for_CVPR_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FVisMIL\u002FCAMERAS)\n18 | **CVPR** | 显著性引导的图像转换 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FJiang_Saliency-Guided_Image_Translation_CVPR_2021_paper.pdf)\u002F代码\n19 | **CVPR** | 用于协同显著目标检测的群体协作学习 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.01108.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Ffanq15\u002FGCoNet)\n20 | **CVPR** | 不确定性感知的显著目标与伪装目标联合检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.02628.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FJoint_COD_SOD)\n21 | **ACMM** | Auto-MSFNet：用于显著目标检测的多尺度融合网络搜索 | [论文](https:\u002F\u002Fgithub.com\u002FLiuTingWed\u002FAuto-MSFNet)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLiuTingWed\u002FAuto-MSFNet) \n22 | **IEEE TIP** | 用于显著目标检测的分解与补全网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9479697\u002Ffigures#figures)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwuzhe71\u002FDCN) \n23 | **ICCV** | 视觉显著性Transformer | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.12099.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FVST#visual-saliency-transformer-vst) \n24 | **ICCV** | 解耦高质量显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03551.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fluckybird1994\u002FHQSOD) \n25 | **ICCV** | iNAS：面向设备的显著目标检测的整数型NAS | [论文](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21ICCV-iNAS.pdf)\u002F[代码](https:\u002F\u002Fmmcheng.net\u002Finas\u002F) \n26 | **ICCV** | 场景上下文感知的显著目标检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FSiris_Scene_Context-Aware_Salient_Object_Detection_ICCV_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FSirisAvishek\u002FScene_Context_Aware_Saliency) \n27 | **ICCV** | MFNet：用于弱监督显著目标检测的多滤波器指令网络 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FPiao_MFNet_Multi-Filter_Directive_Network_for_Weakly_Supervised_Salient_Object_Detection_ICCV_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FMFNet) \n28 | **ICCV** | 带位置保留注意力的显著对象排序 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FFang_Salient_Object_Ranking_With_Position-Preserved_Attention_ICCV_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FEricFH\u002FSOR) \n29 | **ICCV** | 总结与搜索：学习共识感知的动态卷积用于协同显著性检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_Summarize_and_Search_Learning_Consensus-Aware_Dynamic_Convolution_for_Co-Saliency_Detection_ICCV_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FCADC) \n30 | **IEEE TIP** | 带净化机制和结构相似性损失的显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.08393.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJinming-Su\u002FPurNet) \n31 | **ACMM** | 补充三边解码器用于快速准确的显著目标检测 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475494)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhaozhirui\u002FCTDNet)\n32 | **NeurIPS** | 基于能量模型潜在空间的学习型生成视觉Transformer用于显著性预测 | [论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F8289889263db4a40463e3f358bb7c7a1-Paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FEBMGSOD)   \n33 | **NeurIPS** | 为时空图神经网络发现动态显著区域 | [论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F398410ece9d7343091093a2a7f8ee381-Paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fbit-ml\u002FDyReg-GNN) \n34 | **IEEE TIP** | 用于显著目标检测的渐进式自引导损失 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.02412.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fysyscool\u002FPSGLoss) \n35 | **IEEE TMM** | 密集注意力引导的级联网络用于带钢表面缺陷的显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9632537)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzxforchid\u002FDACNet) \n36 | **IEEE TIP** | 重新思考用于显著目标检测的U型结构 | [论文](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21TIP-CII.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzal0302\u002FCII)\n\n## 2020年       \n**序号** | **会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | 用于显著性目标检测的渐进式特征精炼网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.05942.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fchenquan-cq\u002FPFPN)       \n02 | **AAAI** | 用于显著性目标检测的全局上下文感知渐进聚合网络 | [论文](https:\u002F\u002Fgithub.com\u002FJosephChenHub\u002FGCPANet\u002Fblob\u002Fmaster\u002FGCPANet.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJosephChenHub\u002FGCPANet)     \n03 | **AAAI** | F3Net：融合、反馈与聚焦用于显著性目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.11445.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fweijun88\u002FF3Net)    \n04 | **AAAI** | 基于对抗域适应的多光谱显著性目标检测 | [论文](https:\u002F\u002Fcse.sc.edu\u002F~songwang\u002Fdocument\u002Faaai20b.pdf)\u002F[代码](https:\u002F\u002Ftsllb.github.io\u002FMultiSOD.html) \n05 | **AAAI** | 多类型自注意力引导的退化显著性检测 | [论文](https:\u002F\u002Fcse.sc.edu\u002F~songwang\u002Fdocument\u002Faaai20a.pdf)\u002F代码 \n06 | **CVPR** | 基于涂鸦标注的弱监督显著性目标检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_Weakly-Supervised_Salient_Object_Detection_via_Scribble_Annotations_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FScribble_Saliency)  \n07 | **CVPR** | 深入探讨协同显著性目标检测 | [论文](http:\u002F\u002Fdpfan.net\u002Fwp-content\u002Fuploads\u002FCoSalBenchmark_CVPR2020.pdf)\u002F[代码](http:\u002F\u002Fdpfan.net\u002FCoSOD3K\u002F)  \n08 | **CVPR** | 用于显著性目标检测的多尺度交互网络 | [论文](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gUYu0hO_8Xc5jgpzetuOVFDrqeSOiKZN\u002Fview?usp=sharing)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flartpang\u002FMINet)  \n09 | **CVPR** | 用于准确且快速显著性检测的交互式双流解码器 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhou_Interactive_Two-Stream_Decoder_for_Accurate_and_Fast_Saliency_Detection_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fmoothes\u002FITSD-pytorch)  \n10 | **CVPR** | 显著性目标检测的标签解耦框架 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWei_Label_Decoupling_Framework_for_Salient_Object_Detection_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fweijun88\u002FLDF)  \n11 | **CVPR** | 带有注意力图聚类的自适应图卷积网络用于协同显著性检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_Adaptive_Graph_Convolutional_Network_With_Attention_Graph_Clustering_for_Co-Saliency_CVPR_2020_paper.pdf)\u002F代码\n12 | **ECCV** | 参数量仅为10万的高效显著性目标检测 | [论文](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F20EccvSal100k.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMCG-NKU\u002FSal100K)\n13 | **ECCV** | n参考迁移学习用于显著性预测 | [论文](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123530494.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fluoyan407\u002Fn-reference)   \n14 | **ECCV** | 梯度诱导的协同显著性检测 | [论文](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123570443.pdf)\u002F[代码](http:\u002F\u002Fzhaozhang.net\u002Fcoca.html)   \n13 | **ECCV** | 基于交替反向传播从噪声标签中学习噪声感知编码器-解码器用于显著性检测 | [论文](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123620341.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FNoise-aware-ABP-Saliency)  \n15 | **ECCV** | 抑制与平衡：一种用于显著性目标检测的简单门控网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.08074.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FGateNet-RGB-Saliency) \n16 | **IEEE TIP** | 用于同时检测显著性目标、边缘和骨架的动态特征集成 | [论文](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F20TIP-DFI.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fbackseason\u002FDFI)\n17 | **IEEE TIP** | CAGNet：面向显著性目标检测的内容感知引导 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.13168)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMehrdad-Noori\u002FCAGNet)\n18 | **IEEE TCYB** | 基于层次化视觉感知学习的轻量级显著性目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9285193)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyun-liu\u002FFastSaliency)\n19 | **NeurIPS** | CoADNet：用于协同显著性目标检测的协作式聚合与分发网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.04887.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frmcong\u002FCoADNet_NeurIPS20)\n20 | **NeurIPS** | 基于对抗式节奏学习的低成本显著性目标检测 | [论文](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F8fc687aa152e8199fe9e73304d407bca-Paper.pdf)\u002F[代码](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F8fc687aa152e8199fe9e73304d407bca-Supplemental.zip)\n21 | **NeurIPS** | ICNet：用于协同显著性检测的内部显著性相关性网络 | [论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fd961e9f236177d65d21100592edb0769-Paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fblanclist\u002FICNet)\n\n## 2019       \n**序号** | **会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | AFNet：面向边界感知显著目标检测的注意力反馈网络 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1n-dRVC4sLWCmhhD5bnVXqg)\u002F[代码](https:\u002F\u002Fgithub.com\u002FArcherFMY\u002FAFNet)  \n02 | **CVPR** | BASNet：边界感知显著目标检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FQin_BASNet_Boundary-Aware_Salient_Object_Detection_CVPR_2019_paper.html)\u002F[代码](https:\u002F\u002Fgithub.com\u002FNathanUA\u002FBASNet) \n03 | **CVPR** | CPD：用于准确快速显著目标检测的级联部分解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.08739.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwuzhe71\u002FCPD-CVPR2019)\n04 | **CVPR** | 多源弱监督下的显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.00566.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002Fmws)\n05 | **CVPR** | MLMSNet：一种基于交织多监督的显著目标检测互学习方法 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1EUxabfnEi_l5-ghUI3_qVQ)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJosephineRabbit\u002FMLMSNet)\n06 | **CVPR** | CapSal：利用字幕生成增强显著目标检测语义 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhang_CapSal_Leveraging_Captioning_to_Boost_Semantics_for_Salient_Object_Detection_CVPR_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhangludl\u002Fcode-and-dataset-for-CapSal)\n07 | **CVPR** | PoolNet：一种基于简单池化的实时显著目标检测设计 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09569.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fbackseason\u002FPoolNet) \n08 | **CVPR** | 用于显著目标检测的迭代式协同自顶向下与自底向上推理网络 | [论文](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F19cvprIterativeSOD.pdf)\u002F代码\n09 | **CVPR** | 用于显著性检测的金字塔特征注意力网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.00179.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FCaitinZhao\u002Fcvpr2019_Pyramid-Feature-Attention-Network-for-Saliency-detection)\n10 | **AAAI** | 用于显著目标检测的深度嵌入特征 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1HfyavmYB2NYUMe8CSe2qCw)\u002F代码\n11 | **ICIP** | 基于深度层次化上下文聚合与多层监督的显著目标检测 | [论文](https:\u002F\u002Fgithub.com\u002FZhangC2\u002FSaliency-DHCA-ML_S)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZhangC2\u002FSaliency-DHCA-ML_S)\n12 | **IEEE TCSVT** | AADF-Net：用于显著目标检测的注意力扩张特征聚合 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8836095)\u002F[代码](https:\u002F\u002Fgithub.com\u002FgithubBingoChen\u002FAADF-Net)\n13 | **IEEE TCyb** | ROSA：对抗攻击下的鲁棒显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.03434.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flhaof\u002FROSA-Robust-Salient-Object-Detection-Against-Adversarial-Attacks)\n14 | **arXiv** | DSAL-GAN：基于去噪的生成对抗网络显著性预测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.01215.pdf)\u002F代码\n15 | **arXiv** | SAC-Net：用于显著目标检测的空间衰减上下文 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.10152.pdf)\u002F代码\n16 | **arXiv** | SE2Net：用于显著目标检测的双目边缘增强网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.00048.pdf)\u002F代码\n17 | **arXiv** | 用于显著目标检测的区域细化网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.11443.pdf)\u002F代码\n18 | **arXiv** | 轮廓损失：面向显著目标分割的边界感知学习 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.01975.pdf)\u002F代码\n19 | **arXiv** | OGNet：带有输出引导注意力模块的显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.07449.pdf)\u002F代码\n20 | **arXiv** | 边缘引导的非局部全卷积网络用于显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.02460.pdf)\u002F代码\n21 | **ICCV** | FLoss：优化无阈值显著目标检测的F度量 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FZhao_Optimizing_the_F-Measure_for_Threshold-Free_Salient_Object_Detection_ICCV_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzeakey\u002Ficcv2019-fmeasure)\n22 | **ICCV** | 用于显著目标检测的堆叠交叉细化网络 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWu_Stacked_Cross_Refinement_Network_for_Edge-Aware_Salient_Object_Detection_ICCV_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwuzhe71\u002FSCRN)\n23 | **ICCV** | 选择性还是不变性：边界感知显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.10066.pdf)\u002F代码\n24 | **ICCV** | HRSOD：迈向高分辨率显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1908.07274.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyi94code\u002FHRSOD)\n25 | **ICCV** | EGNet：用于显著目标检测的边缘引导网络 | [论文](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F19ICCV_EGNetSOD.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJXingZhao\u002FEGNet)\n26 | **ICCV** | 显著目标检测中联合深度特征与预测精炼的结构化建模 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.04366.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fxuyingyue\u002FDeepUnifiedCRF_iccv19)\n27 | **ICCV** | 利用深度部件-目标关系进行显著目标检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLiu_Employing_Deep_Part-Object_Relationships_for_Salient_Object_Detection_ICCV_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fliuyi1989\u002FTSPOANet)  \n28 | **NeurIPS** | 基于自监督的深度鲁棒无监督显著性预测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.13055.pdf)\u002F[代码](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F10GlmenXR7nEJyRlmPHouvHP-g9KfUW1F\u002Fview)   \n29 | **CVPR** | 基于金字塔注意力和显著边缘的显著目标检测 | [论文](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F332751907_Salient_Object_Detection_With_Pyramid_Attention_and_Salient_Edges)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FPAGE-Net)\n\n## 2018年\n**序号** | **发表会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | 用于显著性目标检测的双向消息传递模型 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1akKVVipD8vIIv0XFrWND5Q)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhangludl\u002FA-bi-directional-message-passing-model-for-salient-object-detection)  \n02 | **CVPR** | PiCANet：学习像素级上下文注意力进行显著性检测 | [论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1708.06433)\u002F[代码](https:\u002F\u002Fgithub.com\u002FUgness\u002FPiCANet-Implementation)\n03 | **CVPR** | PAGR：渐进式注意力引导循环网络用于显著性目标检测 | [论文](https:\u002F\u002Fgithub.com\u002Fzhangxiaoning666\u002FPAGR)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyangbinb\u002FSalMetric\u002Ftree\u002Fmaster\u002FPAGRN)\n04 | **CVPR** | 学习提升显著性检测器性能 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1QvDmqruH8oU51_GrgsuXoA)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002Flps)\n05 | **CVPR** | 全局检测，局部精修：一种新颖的显著性检测方法 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ydLI0koPfndehqMOAwrK_Q)\u002F[代码](https:\u002F\u002Fgithub.com\u002FTiantianWang\u002FCVPR18_detect_globally_refine_locally)\n06 | **CVPR** | 基于注视点预测驱动的显著性目标检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Salient_Object_Detection_CVPR_2018_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FASNet)\n07 | **IJCAI** | R3Net：用于显著性检测的递归残差精修网络 | [论文](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2018\u002F0095.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzijundeng\u002FR3Net)\n08 | **IJCAI** | LFR：无损特征反射的显著性目标检测 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1DAyPHe_z0LJpKK8DxKF2dg)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal\u002Fblob\u002Fmaster\u002FIIAU2018.md)\n09 | **ECCV** | 显著性目标检测中的轮廓知识迁移 | [论文](http:\u002F\u002Flink-springer-com-s.vpn.whu.edu.cn:9440\u002Fcontent\u002Fpdf\u002F10.1007\u002F978-3-030-01267-0_22.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flixin666\u002FC2SNet)\n10 | **ECCV** | 用于显著性目标检测的反向注意力 | [论文](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.09940)\u002F[代码](https:\u002F\u002Fgithub.com\u002FShuhanChen\u002FRAS_ECCV18)\n11 | **IEEE TIP** | 一种无监督的博弈论方法用于显著性检测 | [论文](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1U1O4oFK6ZALSghPjJv_5nA)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002Fuga)\n12 | **arXiv** | 敏捷护身符：基于上下文注意力的实时显著性目标检测 | [论文](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.06960)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal\u002Fblob\u002Fmaster\u002FIIAU2018.md)\n13 | **arXiv** | HyperFusion-Net：用于显著性目标检测的密集反射融合 | [论文](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.05142)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal\u002Fblob\u002Fmaster\u002FIIAU2018.md)\n14 | **arXiv** | (TBOS)一石三鸟：显著性目标分割、边缘检测和骨架提取的统一框架 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.09860.pdf)\u002F代码\n15 | **CVPR** | 深度无监督显著性检测：多噪声标签视角 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.10910)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fkris-singh\u002FDeep-Unsupervised-Saliency-Detection)\n\n\n    \n \n## 2017年\n**序号** | **发表会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | DSS：具有短连接的深度监督显著性目标检测 | [论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04849)\u002F[代码](https:\u002F\u002Fgithub.com\u002FAndrew-Qibin\u002FDSS)\n02 | **CVPR** | 用于显著性目标检测的非局部深层特征 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLuo_Non-Local_Deep_Features_CVPR_2017_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhimingluo\u002FNLDF)\n03 | **CVPR** | 在图像级别监督下学习检测显著性目标 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FWang_Learning_to_Detect_CVPR_2017_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fscott89\u002FWSS)\n04 | **CVPR** | SalGAN：利用对抗网络进行视觉显著性预测 | [论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1701.01081)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n05 | **ICCV** | 用于检测图像中显著性目标的分阶段精修模型 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FWang_A_Stagewise_Refinement_ICCV_2017_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n06 | **ICCV** | Amulet：聚合多层级卷积特征用于显著性目标检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Amulet_Aggregating_Multi-Level_ICCV_2017_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n07 | **ICCV** | 学习不确定卷积特征以实现精确的显著性检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Learning_Uncertain_Convolutional_ICCV_2017_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPchank\u002Fcaffe-sal)\n08 | **ICCV** | 融合监督：迈向深度显著性目标检测器的无监督学习 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Supervision_by_Fusion_ICCV_2017_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhangyuygss\u002FSVFSal.caffe)\n\n \n\n## 2016年\n**序号** | **发表会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | DHSNet：用于显著性目标检测的深度层次化显著性网络 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FLiu_DHSNet_Deep_Hierarchical_CVPR_2016_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FGuanWenlong\u002FDHSNet-PyTorch)\n02 | **CVPR** | ELD：结合编码的低层距离图与高层特征的深度显著性 | [论文](http:\u002F\u002Fwww.arxiv.org\u002Fpdf\u002F1604.05495v1.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fgylee1103\u002FSaliencyELD)\n03 | **ECCV** | RFCN：利用循环全卷积网络进行显著性检测 | [论文](http:\u002F\u002F202.118.75.4\u002Flu\u002FPaper\u002FECCV2016\u002F0865.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzengxianyu\u002FRFCN)\n\n\n\u003Ca name=\"3DSOD\">\u003C\u002Fa>\n# 3D RGB-D\u002FT 显著性检测 \u003Ca id=\"3D RGB-D 显著性检测\" class=\"anchor\" href=\"#3D RGB-D 显著性检测\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n\n## 2025年       \n**序号** | **发表会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n:triangular_flag_on_post: 01 | **AAAI** | DiMSOD：基于扩散的多模态显著性目标检测框架 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F33096)\u002F代码  \n:triangular_flag_on_post: 02 | **AAAI** | SMR-Net：语义引导的互增强网络用于跨模态图像融合和显著性目标检测 | [论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F32933)\u002F代码\n\n## 2024       \n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **ICASSP** | 基于显著性增强特征融合的多尺度RGB-D显著目标检测网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.11914.pdf)\u002F代码  \n02 | **IJCV** | 用于RGB-D显著目标检测的跨模态融合与渐进解码网络 | [论文](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-024-02020-y)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fhu-xh\u002FCPNet)  \n03 | **TMM** | UniTR：基于统一Transformer框架的共目标与多模态显著性检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10444934)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fruohaoguo\u002FUniTR) \n04 | **IEEE TIP** | 面向V-D-T显著目标检测的质量感知选择性融合网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.07655)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLx-Bao\u002FQSFNet) \n05 | **IEEE TCSVT** | 用于多模态显著目标检测的学习型自适应融合库 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.01127)\u002F[代码](https:\u002F\u002Fgithub.com\u002FAngknpng\u002FLAFB) \n06 | **IEEE TMM** | 无对齐RGBT显著目标检测：语义引导的非对称相关网络及统一基准 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.00917)\u002F[代码](https:\u002F\u002Fgithub.com\u002FAngknpng\u002FSACNet) \n07 | **ACMMM** | 基于RGB-热数据的双模态显著目标检测后门攻击 | [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=fBeeQlkIM8)\u002F代码\n08 | **ECCV** | CoLA：条件丢弃与语言驱动的鲁棒双模态显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2407.06780)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fssecv\u002FCoLA)\n\n\n## 2023      \n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **TCSVT** | HRTransNet：基于HRFormer的双模态显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.03036.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FHRTransNet)  \n02 | **IEEE TIP** | CAVER：用于双模态显著目标检测的跨模态视图混合Transformer | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10015667)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flartpang\u002FCAVER) \n03 | **IEEE TIP** | LSNet：用于RGB-热图像中显著目标检测的轻量级空间增强网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10042233)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzyrant\u002FLSNet) \n04 | **ICME** | 手绘标注监督下的RGB-T显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.09733.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FRGBTScribble-ICME2023) \n05 | **TCSVT** | 用于弱监督RGB-D显著目标检测的互信息正则化 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.03630.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fbaneitixiaomai\u002FMIRV) \n06 | **Information Fusion** | 一种交互式强化范式的红外-可见光图像联合融合与显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09999)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwdhudiekou\u002FIRFS) \n07 | **IEEE TMM** | CATNet：用于RGB-D显著目标检测的级联聚合Transformer网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10179145)\u002F[代码](https:\u002F\u002Fgithub.com\u002FROC-Star\u002FCATNet\u002F) \n08 | **ACM MM** | 用于RGB-D显著目标检测的点感知交互与CNN诱导精炼网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.08930.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frmcong\u002FPICR-Net_ACMMM23) \n09 | **IEEE TIP** | 用于RGBD显著目标检测的深度注入框架 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10258039)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZakeiswo\u002FDIF) \n10 | **ACM MM** | 模态特征——生成RGB-D显著目标检测训练集时需考虑的新关键因素 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3581783.3611985)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXueHaoWang-Beijing\u002FModalityProfile_MM23\u002F)\n11 | **ACM MM** | RGB-D和RGB-T显著目标检测的显著性原型 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3581783.3612466)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZZ2490\u002FSPNet)  \n12 | **NeurIPS** | DVSOD：RGB-D视频中的显著目标检测 | [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hm1Ih3uLII)\u002F[代码](https:\u002F\u002Fgithub.com\u002FDVSOD\u002FDVSOD-Baseline)\n\n## 2022       \n**序号** | **期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVMJ** | 保持特异性的RGB-D显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.08162)\u002F[代码](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FSPNet?utm_source=catalyzex.com)   \n02 | **AAAI** | 针对RGB-D显著目标检测的自监督预训练 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-4882.ZhaoX.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FSSLSOD)   \n03 | **IEEE TPAMI** | MobileSal：极高效的RGB-D显著目标检测 | [论文](https:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F21PAMI_MobileSal.pdf)\u002F[代码](https:\u002F\u002Fmmcheng.net\u002Fmobilesal\u002F)   \n04 | **IEEE TIP** | 利用未标注的RGB图像提升RGB-D显著性检测性能 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.00100.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FRobert-xiaoqiang\u002FDS-Net)   \n05 | **IEEE TIP** | 学习用于RGB-D显著性检测的判别性跨模态特征 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9678058)\u002F代码  \n06 | **IEEE TIP** | 基于预测一致性训练和主动涂鸦增强的弱监督RGB-D显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9720104)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXuYunqiu\u002FscribbleRGB-DSOD)\n07 | **ICLR** | 从深度中促进显著性：深度无监督RGB-D显著性检测 | [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BZnnMbt0pW)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FDSU)  \n08 | **ArXiv** | DFTR：用于显著目标检测的深度监督分层特征融合Transformer | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.06429.pdf)\u002F代码  \n09 | **ArXiv** | GroupTransNet：用于RGB-D显著目标检测的组变换网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.10785.pdf)\u002F代码  \n10 | **ArXiv** | CAVER：用于双模态显著目标检测的跨模态视图混合Transformer | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.02363.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flartpang\u002FCAVER)\n11 | **PR** | 具有多尺度聚合的编码器深度交织网络，用于RGB-D显著目标检测 | [论文](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0031320322001479)\u002F代码  \n12 | **CVPRW** | 用于显著性检测的金字塔注意力机制 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.06788.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Ftanveer-hussain) \n13 | **TMM** | 用于RGB-D显著目标检测的深度诱导差距缩小网络：一种交互、引导与精炼方法 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9769984)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fssecv\u002FDIGR-Net) \n14 | **TMM** | C2DFNet：用于RGB-D显著目标检测的交叉动态滤波器网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9813422)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FC2DFNet) \n15 | **ArXiv** | 基于双Swin Transformer的互交互网络，用于RGB-D显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.03105.pdf)\u002F代码  \n16 | **IEEE TCSVT** | 用于鲁棒RGB-热显著目标检测的跨协作融合编码器网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9801871)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fgbliao\u002FCCFENet)  \n17 | **IEEE TIP** | 使用Transformer学习隐式类别知识，用于RGB-D协同显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9810116)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FCTNet)  \n18 | **ACMM** | 受深度启发的无监督RGB-D显著目标检测中的标签挖掘 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3503161.3548037?casa_token=9IfKDOr4970AAAAA:yWl9tbPTwlCtnXJE7-Vuj7rHxxBPi39zLVoeb1rgFwZEDVNdeK3Y8SYO0gkyT98kCKd2nhtI1Et2190)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fyoungtboy\u002FDLM)  \n19 | **3DV** | 用于显著性检测的鲁棒RGB-D融合 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.01762.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FZongwei97\u002FRFnet)  \n20 | **ArXiv** | 受深度质量启发的特征操作，用于高效RGB-D和视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.03918.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzwbx\u002FDFM-Net)  \n21 | **ECCV** | SPSN：用于RGB-D显著目标检测的超像素原型采样网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.07898.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FHydragon516\u002FSPSN)  \n22 | **ECCV** | MVSalNet：多视角增强用于RGB-D显著目标检测 | [论文](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136890268.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FHeart-eartH\u002FMVSalNet)  \n23 | **IJCV** | 可学习的深度敏感注意力，结合多模态融合架构搜索，用于深度RGB-D显著性检测 | [论文](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-022-01646-0)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsunpeng1996\u002FDSA2F)   \n24 | **IEEE TNNLS** | 用于RGB-D显著目标检测及更广泛应用的三维卷积神经网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9889257)\u002F[代码](https:\u002F\u002Fgithub.com\u002FQianChen98\u002FRD3D)   \n25 | **IEEE TIP** | 通过模态感知解码器改进RGB-D显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9894275?casa_token=x6Stwtpf_igAAAAA:_ivL1dWDAHq29mTPgl4ctDVhwf6qbonXaQZ5t1PFqGwvDzVk4w28lEbwVt-9yQJ15C4zuI7TaFQ)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMengkeSong\u002FMaD)   \n26 | **IEEE TIP** | CIR-Net：用于RGB-D显著目标检测的跨模态交互与精炼 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02843)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frmcong\u002FCIRNet_TIP2022)   \n27 | **IEEE TCSVT** | HRTransNet：HRFormer驱动的双模态显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9869666?casa_token=tYGCtPgo5kkAAAAA:WWYviL3djEpBBRvds_DtYaAfdqnV5Qvdq7DaS4b6Dk9lQc9lQc9beLj4hQ9T8fLNpYeU9ku71v96abg)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FHRTransNet) \n28 | **IEEE TMM** | 热成像在RGB-T显著目标检测中真的总是重要吗？ | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.04266.pdf)\u002F[代码](https:\u002F\u002Frmcong.github.io\u002Fproj_TNet.html) \n29 | **IEEE TCSVT** | 用于RGB-D和RGB-T显著目标检测的模态诱导迁移融合网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9925217?casa_token=gFFqPMx0N7sAAAAA:1DpXKX-b2jvTF1Zwcf-gtJkyj0ZW-lxbRcJb60rO0BiLFJqTbpg7Sl0VGhe2Ku62Rqtg2AfFyfY)\u002F代码  \n30 | **IEEE TIP** | 显著目标检测、深度估计和轮廓提取的联合学习 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.04895.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FMMFT) \n31 | **IJCV** | 深入研究校准深度以实现精确的RGB-D显著目标检测 | [论文](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-022-01734-1)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FHiBo-UA) \n32 | **IEEE TCSVT** | MoADNet：用于实时轻量级RGB-D显著目标检测的移动非对称双流网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9789193)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fkingkung2016\u002FMoADNet)\n\n## 2021年       \n**序号** | **会议\u002F期刊** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | 基于深度敏感注意力机制和自动多模态融合的深度RGB-D显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.11832.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsunpeng1996\u002FDSA2F)   \n02 | **CVPR** | 校准型RGB-D显著目标检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FJi_Calibrated_RGB-D_Salient_Object_Detection_CVPR_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FDCF)  \n03 | **AAAI** | 基于3D卷积神经网络的RGB-D显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.10241.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FPPOLYpubki\u002FRD3D)\n04 | **IEEE TIP** | 用于RGB-D显著目标检测的层次化交替交互网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9371407)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMathLee\u002FHAINet)\n05 | **IEEE TIP** | CDNet：用于RGB-D显著目标检测的互补深度网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9366409)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fblanclist\u002FCDNet)\n06 | **IEEE TIP** | 具有普遍目标感知的RGB-D显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.03425.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FiCVTEAM\u002FUTA)\n07 | **ICME** | BTS-Net：用于RGB-D显著目标检测的双向迁移与选择网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.01784.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzwbx\u002FBTS-Net)\n08 | **ACMM** | 受深度质量启发的特征操控用于高效RGB-D显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.01779.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzwbx\u002FDFM-Net)  \n09 | **ACMM** | TriTransNet RGB-D显著目标检测，采用三元Transformer嵌入网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03990.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FTriTransNet-RGB-D-Salient-Object-Detection-with-a-Triplet-Transformer-Embedding-Network)\n10 | **ICCV** | 基于级联互信息最小化的RGB-D显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.07246.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002Fcascaded_rgbd_sod)\n11 | **ICCV** | 保留特异性的RGB-D显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.08162.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FSPNet)\n12 | **ACMM** | 用于RGB-D显著目标检测的跨模态差异交互网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.01971.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002F1437539743\u002FCDINet-ACM-MM21)\n13 | **IEEE TIP** | 用于RGB-D显著目标检测的动态选择网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9605221\u002Fauthors#authors)\u002F[代码](https:\u002F\u002Fgithub.com\u002FBrook-Wen\u002FDSNet)\n14 | **IJCV** | 基于CNN的RGB-D显著目标检测：学习、选择与融合 | [论文](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-021-01452-0)\u002F代码\n15 | **NeurIPS** | 用于弱监督RGB-D显著目标检测的联合语义挖掘 | [论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F642e92efb79421734881b53e1e1b18b6-Paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FJSM)   \n16 | **IEEE TMM** | CCAFNet：用于检测RGB-D图像中显著目标的交叉流与跨尺度自适应融合网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9424966)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzyrant\u002FCCAFNet)   \n17 | **IEEE TETCI** | APNet：用于全天候RGB-T显著目标检测的对抗学习辅助与感知重要性融合网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9583676)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzyrant\u002FAPNet)\n\n## 2020年\n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **IEEE TIP** | ICNet：基于RGB-D的显著性目标检测信息转换网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9024241\u002Fauthors)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMathLee\u002FICNet-for-RGBD-SOD)  \n02 | **CVPR** | JL-DCF：用于RGB-D显著性目标检测的联合学习与密集协作融合框架 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FFu_JL-DCF_Joint_Learning_and_Densely-Cooperative_Fusion_Framework_for_RGB-D_Salient_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fkerenfu\u002FJLDCF)  \n03 | **CVPR** | UC-Net：基于条件变分自编码器的不确定性启发式RGB-D显著性检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_UC-Net_Uncertainty_Inspired_RGB-D_Saliency_Detection_via_Conditional_Variational_Autoencoders_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJingZhang617\u002FUCNet)  \n04 | **CVPR** | A2dele：用于高效RGB-D显著性目标检测的自适应注意力深度蒸馏器 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FPiao_A2dele_Adaptive_and_Attentive_Depth_Distiller_for_Efficient_RGB-D_Salient_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FCVPR2020-A2dele)  \n05 | **CVPR** | 选择、补充与聚焦：面向RGB-D显著性检测的方法 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_Select_Supplement_and_Focus_for_RGB-D_Saliency_Detection_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FCVPR_SSF-RGBD)   \n06 | **CVPR** | 学习选择性自互注意机制用于RGB-D显著性检测 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Learning_Selective_Self-Mutual_Attention_for_RGB-D_Saliency_Detection_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fnnizhang\u002FS2MA)   \n07 | **ECCV** | 基于协同学习的精确RGB-D显著性目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.11782.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FCoNet)\n08 | **ECCV** | 用于RGB-D显著性目标检测的跨模态加权网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.04901.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMathLee\u002FCMWNet)\n09 | **ECCV** | BBS-Net：采用分叉骨干策略的RGB-D显著性目标检测网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.02713.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzyjwuyan\u002FBBS-Net)\n10 | **ECCV** | 用于RGB-D显著性目标检测的层次化动态滤波网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.06227.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flartpang\u002FHDFNet)\n11 | **ECCV** | 用于RGB-D显著性目标检测的渐进引导交替精炼网络 | [论文](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123530511.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FShuhanChen\u002FPGAR_ECCV20)\n12 | **ECCV** | 基于跨模态调制与选择的RGB-D显著性目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07051.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FcmMS-ECCV20)\n13 | **ECCV** | 用于RGB-D显著性目标检测的级联图神经网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.03087.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLA30\u002FCas-Gnn)   \n14 | **ECCV** | 用于鲁棒实时RGB-D显著性目标检测的单流网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.06811.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FDANet-RGBD-Saliency)  \n15 | **ECCV** | 用于精确RGB-D显著性检测的非对称双流架构 | [论文](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123730375.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsxfduter\u002FASTA)   \n16 | **ACMM** | 显著性目标检测真的需要深度信息吗？ | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.00269.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJiaweiZhao-git\u002FDASNet)\n17 | **ACMM** | MMNet：用于RGB-D显著性目标检测的多阶段多尺度融合网络 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394171.3413523)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fgbliao\u002FMMNet)\n18 | **ACMM** | 差异化处理下的特征再整合：一种自顶向下且适应性的融合网络用于RGB-D显著性目标检测 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394171.3413969)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjack-admiral\u002FACM-MM-FRDT)\n19 | **IEEE TIP** | 基于解耦跨模态融合的RGBD显著性目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9165931)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fhaochen593\u002FDisen_Fuse_TIP2020)\n20 | **IEEE TIP** | 利用两阶段深度估计和选择性深度融合提升RGB-D图像中的显著性检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8976428)\u002F代码\n21 | **IEEE TIP** | 针对RGB-D显著性目标检测的深度潜力感知门控注意力网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.08608.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJosephChenHub\u002FDPANet)\n22 | **IEEE TNNLS** | D3Net：重新思考RGB-D显著性目标检测——模型、数据集与大规模基准测试 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.06781.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FDengPingFan\u002FD3NetBenchmark)\n23 | **IEEE TCSVT** | 再探RGB-T显著性目标检测中的特征融合 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9161021)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fnexiakele\u002FRevisiting-Feature-Fusion-for-RGB-T-Salient-Object-Detection)\n\n\n\n\n## 2019年\n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **ICCV** | DMRA：深度诱导的多尺度循环注意力网络用于显著性检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FPiao_Depth-Induced_Multi-Scale_Recurrent_Attention_Network_for_Saliency_Detection_ICCV_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FDMRA_RGBD-SOD)\n02 | **CVPR** | CPFP：对比先验与流体金字塔融合用于RGBD显著性目标检测 | [论文](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F19cvprRrbdSOD.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJXingZhao\u002FContrastPrior)\n03 | **IEEE TIP** | 用于RGB-D显著性目标检测的三流注意力感知网络 | [论文](http:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8603756\u002F)\u002F代码\n04 | **IEEE PR** | 具有多尺度多路径及跨模态交互的多模态融合网络，用于RGB-D显著性目标检测 | [论文](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0031320318303054)\u002F代码\n05 | **IEEE Access** | AFNet：用于RGB-D显著性目标检测的适应性融合网络 | [论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1901.01369?context=cs.CV)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLucia-Ningning\u002FAdaptive_Fusion_RGBD_Saliency_Detection)\n06 | **IEEE TIP** | 通过融合多层CNN特征进行RGB-T显著性目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8935533)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fnexiakele\u002FRGB-T-Salient-Object-Detection-via-Fusing-Multi-level-CNN-Features)\n07 | **IEEE TMM** | 通过协作图学习进行RGB-T图像显著性检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=8744296)\u002F代码\n\n## 2018年\n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | PCA：用于RGB-D显著性目标检测的渐进式互补感知融合网络 | [论文](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F329741351_Progressively_Complementarity-Aware_Fusion_Network_for_RGB-D_Salient_Object_Detection)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fhaochen593\u002FPCA-Fuse_RGBD_CVPR18)\n02 | **IEEE TIP** | 基于多约束特征匹配和跨标签传播的RGBD图像协同显著性检测 | [论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1710.05172)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frmcong\u002FResults-for-2018TIP-RGBD-Co-saliency)\n03 | **ICME** | PDNet：先验模型引导的深度增强显著性目标检测网络 | [论文](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.08636)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fcai199626\u002FPDNet)\n\n  \n\n## 2017年\n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **ICCV** | 利用背景包围、深度对比和自顶向下特征学习RGB-D显著性目标检测 | [论文](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.03607)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fsshige\u002Frgbd-saliency)\n02 | **IEEE TIP** | DF：基于深度融合的RGB-D显著性目标检测 | [论文](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1607.03333)\u002F[代码](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Y-PqAjuH9xREBjfl7H45HA)\n03 | **IEEE TCyb** | CTMF：基于跨视图迁移和多视角融合的CNN驱动RGB-D显著性检测 | [论文](http:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F6221036\u002F6352949\u002F08091125.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fhaochen593\u002FPCA-Fuse_RGBD_CVPR18)\n\n  \n## 传统方法\n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **MTA** | 基于多核提升与融合的RGB-D协同显著性检测 | [论文](http:\u002F\u002Fwww.onacademic.com\u002Fdetail\u002Fjournal_1000040179260010_4758.html)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fivpshu\u002FRGBD-co-saliency-detection-via-multiple-kernel-boosting-and-fusion)\n02 | **ICCV17** | 基于中心暗通道先验的创新显著性目标检测 | [论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1710.04071v4)\u002F[代码](https:\u002F\u002Fgithub.com\u002FChunbiaoZhu\u002FACVR2017)\n03 | **IEEE SPL** | 基于深度置信度分析和多线索融合的立体图像显著性检测 | [论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F1710.05174)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frmcong\u002FCode-for-DCMC-method)\n04 | **IEEE SPL** | 基于自助法聚类的RGB-D协同显著性检测 | [论文](http:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=7582474)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fivpshu\u002FRGBD-co-saliency-detection-via-bagging-based-clustering)\n05 | **CVPR** | 利用全局先验进行RGB-D显著性检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_workshops_2015\u002FW14\u002Fhtml\u002FRen_Exploiting_Global_Priors_2015_CVPR_paper.html)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJianqiangRen\u002FGlobal_Priors_RGBD_Saliency_Detection)\n\n\n\n\u003Ca name=\"4DSOD\">\u003C\u002Fa>\n\n# 4D光场显著性检测  \u003Ca id=\"4D Light Field Saliency Detection\" class=\"anchor\" href=\"4D Light Field Saliency Detection\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **TOMM** | MCA：基于多线索的光场显著性检测 | [论文](http:\u002F\u002Fwww.linliang.net\u002Fwp-content\u002Fuploads\u002F2017\u002F07\u002FACMTOM_Saliency.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fpencilzhang\u002FHFUT-Lytro-dataset)\n02 | **IJCAI** | DILF：深入探究光场的显著性检测 | [论文](http:\u002F\u002Fpdfs.semanticscholar.org\u002F4b17\u002Ffca1d67862e1fbffaf9ac64a1a73e0f20904.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fpencilzhang\u002Flightfieldsaliency_ijcai15)\n03 | **CVPR** | WSC：用于显著性检测的加权稀疏编码框架 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fpapers\u002FLi_A_Weighted_Sparse_2015_CVPR_paper.pdf)\u002F代码\n04 | **IEEE PAMI** | 光场上的显著性检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=7570181)\u002F[代码](https:\u002F\u002Fdownload.csdn.net\u002Fdownload\u002Fdeepvl\u002F8076323?fps=1&locationNum=9)\n05 | **ICCV** | 基于深度学习的光场显著性检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWang_Deep_Learning_for_Light_Field_Saliency_Detection_ICCV_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FICCV2019_Deeplightfield_Saliency)\n06 | **NeurIPS** | 面向记忆的解码器用于光场显著目标检测 | [论文](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8376-memory-oriented-decoder-for-light-field-salient-object-detection.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FMoLF)\n07 | **AAAI** | 利用与替换：一种非对称双流架构用于多功能光场显著性检测 | [论文](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1uPkpB51MRMm_Zmvh1M2Z3nc3D8r32MR9\u002Fview?usp=drivesdk)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FAAAI2020-Exploit-and-Replace-Light-Field-Saliency)\n08 | **IEEE TCSVT** | 用于光场显著目标检测的多任务协同网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9153018)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhangqiudan\u002FMTCNet-Lightfield)  \n09 | **ArXiv** | DUT-LFSaliency：多功能数据集及光场到RGB的显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.15124.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOIPLab-DUT\u002FDUTLF-V2)   \n10 | **ArXiv** | 学习协同注意力用于光场显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.13916.pdf)\u002F代码\n11 | **ArXiv** | CMA-Net：用于光场显著目标检测的级联互注意网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.00949.pdf)\u002F代码\n12 | **IEEE TCyB** | PANet：用于光场显著目标检测的补丁感知网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9517032)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjyydlut\u002FIEEE-TCYB-PANet)\n13 | **ACMM21** | 用于光场显著目标检测的遮挡感知双向引导网络 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475312?casa_token=wbPMsKJlIUgAAAAA:YVsFNQb65PB4D6FGlMwtYtYi5nR4YCE1tJw_7frdEMm_exQIDw5dFzjIW0AjmwqlO1XEOEbz-g)\u002F[代码](https:\u002F\u002Fgithub.com\u002FTimsty1\u002FOBGNet)\n14 | **ICCV21** | 基于双重局部图学习和互惠引导的光场显著性检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.00698.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwangbo-zhao\u002F2021ICCV-DLGLRG)\n15| **CVPR22** | 从像素级噪声标签中学习：光场显著性检测的新视角 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.13456.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOLobbCode\u002FNoiseLF)  \n16| **NC** | MEANet：用于光场显著目标检测的多模态边缘感知网络 | [论文](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0925231222003502)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fjiangyao-scu\u002FMEANet)\n17| **IEEE TIP** | 探索空间相关性用于光场显著性检测：从单视图扩展 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9894273?casa_token=1mIHAJs5QB4AAAAA:vvhqsmbsJWjL9qGTjvOUWngBkgn9BJGkPY6M91tm2Tp-mhswCbmhtIU7cr5R6qT4vCqsU9L57kw)\u002F[代码]()\n18| **IEEE TIP** | 基于图神经网络的光场几何辅助显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9527158)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzhangqiudan\u002FGeoSOD-Lightfield)  \n19| **ACMM** | LFBCNet：用于显著目标检测的光场边界感知级联交互网络 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3503161.3548275?casa_token=ifuWtYwl-roAAAAA:aSGUDEbp5YTrX7fxS0r7gEWq_kYKhOFom0VQ_6topWxvgArBopbmlvcAn7kXkjpo6jf9LEWX4vgivgU)\u002F代码  \n20| **IEEE TIP** | 光场上的弱监督显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9900489\u002Fauthors#authors)\u002F代码\n21| **IEEE TPAMI** | 光场显著性检测的全面基准测试与新模型 | [论文](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftp\u002F5555\u002F01\u002F10012539\u002F1JNmt6JGKu4)\u002F[代码](https:\u002F\u002Fopeni.pcl.ac.cn\u002FOpenDatasets)  \n22| **ICME23** | 用于光场显著目标检测的引导焦点堆栈精炼网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.05260.pdf)\u002F代码  \n23| **IEEE TCSVT** | LFTransNet：通过可学习权重描述符进行光场显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10138590?casa_token=rJeI2PnLzwQAAAAA:nnJc89z7hCRfJH3C-GtVjybe1HL11dZVoWOxzZ45d4Jn623BW4ZM9bS8DdyBiuvW-2zeyW7fdYJgkQ)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fliuzywen\u002FLFTransNet)  \n24| **IEEE TCSVT** | 基于互补与判别交互网络的稀疏视图光场显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10168184)\u002F[代码](https:\u002F\u002Fgithub.com\u002FGilbertRC\u002FLFSOD-CDINet)  \n:triangular_flag_on_post: 25| **arXiv** | LF Tracy：一种统一的单管道方法用于光场相机中的显著目标检测 | [论文](https:\u002F\u002Fbrowse.arxiv.org\u002Fabs\u002F2401.16712)\u002F[代码](https:\u002F\u002Fgithub.com\u002FFeiBryantkit\u002FLF-Tracy)  \n\n\n\u003Ca name=\"VSOD\">\u003C\u002Fa>      \n# 视频显著目标检测  \u003Ca id=\"Video Salient Object Detection\" class=\"anchor\" href=\"Video Salient Object Detection\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n\n## 2024  \n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n:triangular_flag_on_post: 01 | **AAAI** | 一种运动感知的时空图用于视频显著目标排序 | [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=VUBtAcQN44)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzyf-815\u002FVSOR\u002Ftree\u002Fmain)  \n\n## 2023  \n**序号** | **发表期刊\u002F会议** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | 全景视频显著目标检测与Ambisonic音频引导 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.14419.pdf)\u002F代码\n\n## 2022  \n**序号** | **发表** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **AAAI** | 只推理一次：用于引用视频目标分割的跨模态元迁移学习 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-1100.LiD.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FSparklins\u002FYOFO)   \n02 | **AAAI** | 基于交互式Transformer的孪生网络用于视频目标分割 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-702.LanM.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLANMNG\u002FSITVOS)   \n03 | **AAAI** | 迭代选择一个简单的参考帧使无监督视频目标分割更容易 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-11964.LeeY.pdf)\u002F代码   \n04 | **AAAI** | 用于视频目标分割的可靠传播-校正调制 | [论文](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-4288.XuX.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FJerryX1110\u002FRPCMVOS)     \n05 | **WACV** | 基于对比特征和注意力模块的视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.02368.pdf)\u002F代码  \n06 | **ICIP** | 用于视频显著目标检测的深度协同三模态网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.06060.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fluyukang\u002FDCTNet)  \n07 | **ArXiv** | 从无标签视频中逐步学习视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02008)\u002F代码  \n08 | **ArXiv** | 重新思考视频显著目标排序 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.17257)\u002F代码  \n09 | **ACMM** | 基于点监督的弱监督视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.07269.pdf)\u002F代码  \n10 | **ECCV** | 用于无监督视频目标分割的层次化特征对齐网络 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.08485)\u002F[代码](https:\u002F\u002Fgithub.com\u002FNUST-Machine-Intelligence-Laboratory\u002FHFAN)  \n11 | **ECCV** | XMem：基于阿特金森-希夫林记忆模型的长期视频目标分割 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.07115.pdf)\u002F[代码](https:\u002F\u002Fhkchengrex.github.io\u002FXMem\u002F)  \n12 | **ACMM** | 用于无监督视频目标分割的双向密集时空特征传播网络 | [论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3503161.3548039?casa_token=xbckiU4No2wAAAAA:hpKejtoDLTyeTRtCNao2PHacfpfR7HRV38JOieDNbF-C67SAKaXTTswqs_yC8DDp7at-rUkYyc1N5I0)\u002F代码  \n13 | **NeurIPS** | 基于不确定性引导伪标签的半监督视频显著目标检测 | [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=BOQr80FBX_)\u002F[代码](https:\u002F\u002Fgithub.com\u002FLanezzz\u002FUGPL)   \n\n\n## 2021  \n**序号** | **发表** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | 弱监督视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.02391.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwangbo-zhao\u002FWSVSOD)     \n02 | **ArXiv** | 基于自适应局部-全局精炼的视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.14360.pdf)\u002F代码    \n03 | **ICIP** | 用于视频显著目标检测的引导与教学网络 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.10110.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FGewelsJI\u002FGTNet)    \n04 | **ACMM** | 多源融合与自动预测器选择用于零样本视频目标分割 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.05076.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FXiaoqi-Zhao-DLUT\u002FMulti-Source-APS-ZVOS)   \n05 | **ICCV** | 用于视频显著目标检测的动态上下文敏感滤波网络 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_Dynamic_Context-Sensitive_Filtering_Network_for_Video_Salient_Object_Detection_ICCV_2021_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FRoudgers\u002FDCFNet)   \n06 | **ICCV** | 用于视频目标分割的全双工策略 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03151.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FGewelsJI\u002FFSNet) \n07 | **ICCV** | 用于无监督视频目标分割的深度传输网络 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_Deep_Transport_Network_for_Unsupervised_Video_Object_Segmentation_ICCV_2021_paper.pdf)\u002F代码\n08 | **IEEE TIP** | 探索丰富高效的时空交互以实现实时视频显著目标检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9390381)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fguotaowang\u002FSTVS)\n\n\n## 2020  \n**序号** | **发表** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **CVPR** | STAViS：时空音视显著性网络 | [论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FTsiami_STAViS_Spatio-Temporal_AudioVisual_Saliency_Network_CVPR_2020_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fatsiami\u002FSTAViS)  \n02 | **ECCV** | 统一的图像和视频显著性建模 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.05477.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Frdroste\u002Funisal)    \n03 | **ECCV** | 测量视频显著性中时间特征的重要性 | [论文](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123730664.pdf)\u002F代码  \n04 | **ECCV** | TENet：用于视频显著目标检测的三重激励网络 | [论文](http:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123500205.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FOliverRensu\u002FTENet-Triple-Excitation-Network-for-Video-Salient-Object-Detection) \n05 | **IEEE TIP** | 学习视频显著目标检测中的长期结构依赖关系 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9199537)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fbowangscut\u002FLSD_GCN-for-VSOD)  \n06 | **IEEE Access** | 用于视频显著目标检测的交叉互补融合网络 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=9250449)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fzi-yang-w\u002FCCNet) \n07 | **AAAI** | 用于快速视频显著目标检测的金字塔约束自注意力网络 | [论文](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F20AAAI-PCSA.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fguyuchao\u002FPyramidCSA)\n\n## 2019  \n**序号** | **发表** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **ICCV** | 基于运动引导注意力的视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.07061)\u002F[代码](https:\u002F\u002Fgithub.com\u002Flhaof\u002FMotion-Guided-Attention)  \n02 | **ICCV** | 使用伪标签的半监督视频显著目标检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FYan_Semi-Supervised_Video_Salient_Object_Detection_Using_Pseudo-Labels_ICCV_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FKinpzz\u002FRCRNet-Pytorch)   \n03 | **ICCV** | 用于视频显著性检测的时序聚合空间编码器-解码器网络 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMin_TASED-Net_Temporally-Aggregating_Spatial_Encoder-Decoder_Network_for_Video_Saliency_Detection_ICCV_2019_paper.html)\u002F[代码](https:\u002F\u002Fgithub.com\u002FMichiganCOG\u002FTASED-Net)   \n04 | **ICCV** | RANet：快速视频目标分割的排序注意力网络 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.06647)\u002F[代码](https:\u002F\u002Fgithub.com\u002FStorife\u002FRANet)   \n05 | **CVPR** | 将更多注意力转向视频显著目标检测 | [论文](https:\u002F\u002Fgithub.com\u002FDengPingFan\u002FDAVSOD\u002Fblob\u002Fmaster\u002F%5B2019%5D%5BCVPR%5D%5BOral%5D【SSAV】【DAVSOD】Shifting%20More%20Attention%20to%20Video%20Salient%20Object%20Detection.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002FDengPingFan\u002FDAVSOD)   \n06 | **CVPR** | 通过视觉注意力学习无监督视频目标分割 | [论文](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F332751903_Learning_Unsupervised_Video_Object_Segmentation_Through_Visual_Attention)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FAGS)   \n07 | **CVPR** | 看得更多，了解得更多：基于协同注意力暹罗网络的无监督视频目标分割 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLu_See_More_Know_More_Unsupervised_Video_Object_Segmentation_With_Co-Attention_CVPR_2019_paper.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fcarrierlxk\u002FCOSNet)  \n08 | **IEEE TIP** | 基于长期时空信息的鲁棒视频显著性检测改进 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8811767)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fguotaowang\u002FTIP_LSTI)  \n\n\n\n## 2018  \n**序号** | **发表** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **ECCV** | 金字塔扩张深层CoonvLSTM用于视频显著目标检测 | [论文](https:\u002F\u002Fgithub.com\u002Fshenjianbing\u002FPDBConvLSTM\u002Fblob\u002Fmaster\u002FPyramid%20Dilated%20Deeper%20CoonvLSTM%20for%20Video%20Salient%20Object%20Detection.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fshenjianbing\u002FPDB-ConvLSTM)\n02 | **ECCV** | DeepVS：一种基于深度学习的视频显著性预测方法 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLai_Jiang_DeepVS_A_Deep_ECCV_2018_paper.html)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fremega\u002FOMCNN_2CLSTM)\n03 | **CVPR** | 重访视频显著性：大规模基准与新模型 | [论文](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FDHF1K\u002Fblob\u002Fmaster\u002F(pami19)DynamicSaliency.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FDHF1K)  \n04 | **CVPR** | 流引导循环神经网络编码器用于视频显著目标检测 | [论文](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002FCameraReady\u002F1226.pdf)\u002F代码  \n05 | **IEEE TIP** | 基于全卷积网络的视频显著目标检测 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.00871.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FViSalientObject)\n\n## 2017  \n**序号** | **发表** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | **IEEE TIP** | 利用HEVC特征学习视频显著性检测 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F7742914\u002F)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fremega\u002FCompressd_Domain_SaliencyPrediction)\n\n\n\u003Ca name=\"survey\">\u003C\u002Fa>  \n# 早期方法  \u003Ca id=\"Earlier Methods\" class=\"anchor\" href=\"#Earlier Methods\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n\n**序号** | **发表** | **标题** | **链接** \n:-: | :-: | :-  | :-: \n01 | IEEE TIP15 | 显著目标检测：一个基准 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1501.02741.pdf)\u002F代码\n02 | IEEE TCSVT18 | 综合信息下的视觉显著性检测综述 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.03391.pdf)\u002F代码\n03 | ACM TIST18 | 共同显著性检测算法综述：基础、应用与挑战 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1604.07090.pdf)\u002F代码\n04 | IEEE TSP18 | 显著及类别特定目标检测的先进深度学习技术：综述 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8253582)\u002F代码\n05 | IJCV18 | 注意力系统：综述 | [论文](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-017-1042-6)\u002F项目\n06 | ECCV18 | 杂乱中的显著对象：将显著目标检测推向前沿 | [论文](http:\u002F\u002Fmftp.mmcheng.net\u002FPapers\u002F18ECCV-SOCBenchmark.pdf)\u002F[代码](http:\u002F\u002Fdpfan.net\u002Fsocbenchmark\u002F)\n07 | CVM18 | 显著目标检测：综述 | [论文](https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1007\u002Fs41095-019-0149-9.pdf)\u002F代码\n08 | IEEE TNNLS19 | 深度学习下的显著目标检测：综述 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.05511.pdf)\u002F代码\n09 | arXiv19 | 深度学习时代的显著目标检测——深入综述 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09146.pdf)\u002F[代码](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FSODsurvey) \n10 | CVM21 | RGB-D显著目标检测：综述 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.00230)\u002F[代码](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FRGBD-SODsurvey)\n\n本部分内容感谢 [Deng-Ping Fan](http:\u002F\u002Fdpfan.net) 和 [Tao Zhou](https:\u002F\u002Fgithub.com\u002Ftaozh2017)。\n\n* 深度学习时代的显著目标检测：深入综述。[论文链接](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09146.pdf)。\n* 这是另一位作者发布的论文列表。[这里](https:\u002F\u002Fgithub.com\u002FArcherFMY\u002FPaper_Reading_List\u002Fblob\u002Fmaster\u002FImage-01-Salient-Object-Detection.md)\n* RGB-D显著目标检测：综述。[项目链接](https:\u002F\u002Fgithub.com\u002Ftaozh2017\u002FRGBD-SODsurvey)。\n\n\n\u003Ca name=\"data\">\u003C\u002Fa>  \n# SOD数据集下载    \u003Ca id=\"The SOD dataset download\" class=\"anchor\" href=\"#The SOD dataset download\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n* 2D SOD数据集 [下载1](https:\u002F\u002Fgithub.com\u002FTinyGrass\u002FSODdataset) 或 [下载2](https:\u002F\u002Fgithub.com\u002FArcherFMY\u002Fsal_eval_toolbox), [下载3](https:\u002F\u002Fgithub.com\u002Fmagic428\u002Fawesome-segmentation-saliency-dataset)。\n* 3D SOD数据集 [下载](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FRGBD-SOD-datasets)。  \n* 4D SOD数据集 [下载](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FMoLF)。\n* 视频SOD数据集 [下载](http:\u002F\u002Fdpfan.net\u002FDAVSOD\u002F)。\n\n\u003Ca name=\"eval\">\u003C\u002Fa>\n\n# 评估指标  \u003Ca id=\"Evaluation Metrics\" class=\"anchor\" href=\"Evaluation Metrics\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n* 显着性图评估。      \n此链接提供了显着目标检测的所有评估指标，包括 E-measure、S-measure、F-measure、MAE 分数以及 PR 曲线或柱状图指标。\n您可以在 [这里](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSaliency-Evaluation-Toolbox) 找到。\n\n* 显着性数据集评估。       \n该仓库可以计算二值显着性数据集上目标区域和目标对比度的比率。该工具箱包含两个评估指标，分别是 obj(object).area 和 obj.contrast。     \n您可以在 [这里](https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSaliency-dataset-evaluation) 找到。\n\n\u003Ca name=\"leaderboard\">\u003C\u002Fa>\n# 与最先进方法的比较  \u003Ca id=\"Comparison with state-of-the-arts\" class=\"anchor\" href=\"Comparison with state-of-the-arts\" aria-hidden=\"true\">\u003Cspan class=\"octicon octicon-link\">\u003C\u002Fspan>\u003C\u002Fa> \n* [这里](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsalient-object-detection-on-duts-te) 包含几乎所有 2D 显着目标检测算法的性能对比。\n* [这里](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Frgb-d-salient-object-detection) 包含几乎所有 3D RGB-D 显着目标检测算法的性能对比。\n\n\n### AI会议截止日期\n[相关AI会议截止日期](https:\u002F\u002Faideadlin.es\u002F?sub=ML,CV,NLP,RO,SP,DM)     \n[相关AI会议接受率](https:\u002F\u002Fgithub.com\u002Flixin4ever\u002FConference-Acceptance-Rate)","# SOD-CNNs-based-code-summary 快速上手指南\n\n本仓库是一个基于深度学习的显著性目标检测（Saliency Object Detection, SOD）资源汇总列表，涵盖 2D RGB、3D RGB-D\u002FT、视频及光场显著性检测等方向。它主要提供相关论文的链接和代码库地址，旨在帮助开发者快速定位最新的研究成果和开源实现。\n\n> **注意**：本仓库本身不包含统一的推理代码或模型权重，而是指向各个独立论文项目的代码库。以下指南将指导您如何浏览列表并运行具体的算法代码。\n\n## 环境准备\n\n由于列表中包含了从 2023 年到 2025 年的多种不同架构（如 Transformer、Diffusion Models、CNNs 等），不同项目对环境的要求略有差异。建议准备以下通用基础环境：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS\n*   **Python**: 3.8 或更高版本\n*   **深度学习框架**: PyTorch 1.9+ (绝大多数项目基于 PyTorch)\n*   **硬件要求**: 支持 CUDA 的 NVIDIA GPU (显存建议 8GB 以上，部分大模型需 16GB+)\n*   **前置依赖**:\n    *   Git\n    *   pip 或 conda (推荐使用 conda 管理虚拟环境)\n\n## 安装步骤\n\n由于这是一个资源索引库，您不需要安装\"SOD-CNNs-based-code-summary\"本身，而是需要克隆您感兴趣的具体项目代码。以下是通用的操作流程：\n\n### 1. 克隆具体项目代码\n在 README 的表格中找到您需要的论文（例如 2024 年 CVPR 的 `3SD` 或 2025 年 AAAI 的 `MSV-PCT`），点击 **Code** 链接进入其 GitHub 页面，然后执行：\n\n```bash\ngit clone \u003C目标项目的 GitHub 地址>\ncd \u003C目标项目文件夹名称>\n```\n\n### 2. 创建虚拟环境并安装依赖\n大多数项目都提供了 `requirements.txt` 文件。建议使用以下命令安装依赖（国内用户可使用清华源加速）：\n\n```bash\n# 创建虚拟环境 (以 pytorch_sod 为例)\nconda create -n pytorch_sod python=3.8 -y\nconda activate pytorch_sod\n\n# 安装 PyTorch (根据CUDA版本选择，此处为示例，请访问 pytorch.org 获取最新命令)\n# 国内加速镜像示例 (清华源)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 安装项目特定依赖\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. 下载数据集\n显著性检测任务通常需要特定的数据集（如 DUT-OMRON, ECSSD, PASCAL-S 等）。\n*   查看具体项目 README 中的 \"Data Preparation\" 章节。\n*   部分项目在 `README` 中提供了百度网盘或 Google Drive 链接，请下载后放置到项目指定的 `data\u002F` 或 `dataset\u002F` 目录下。\n\n## 基本使用\n\n每个项目的具体运行命令可能不同，但通常遵循“训练”和“测试\u002F推理”两种模式。以下以典型的 SOD 项目结构为例：\n\n### 1. 单张图片推理 (Testing\u002FInference)\n大多数项目提供预训练模型用于直接测试。\n\n```bash\n# 典型命令格式 (请替换为具体项目的脚本名和参数)\npython test.py --model_path checkpoints\u002Fbest_model.pth --img_path data\u002Ftest_images\u002Fexample.jpg --save_dir results\u002F\n```\n\n*   `--model_path`: 指向下载的预训练权重文件 (.pth)。\n*   `--img_path`: 输入图片路径。\n*   `--save_dir`: 生成的显著性图保存路径。\n\n### 2. 模型训练 (Training)\n如果您希望复现论文结果或在自定义数据上训练：\n\n```bash\n# 典型命令格式\npython train.py --config configs\u002Fconfig.yaml --gpu_ids 0 --batch_size 16\n```\n\n*   `--config`: 配置文件，包含网络结构和超参数。\n*   `--gpu_ids`: 指定使用的 GPU 编号。\n\n### 3. 评估指标 (Evaluation)\n生成预测图后，通常需要使用官方提供的评估脚本来计算 MAE, F-measure, S-measure 等指标：\n\n```bash\npython eval.py --pred_dir results\u002F --gt_dir data\u002Fgt_masks\u002F\n```\n\n---\n**提示**：对于列表中带有 :triangular_flag_on_post: 标记的最新论文（如 2025 年 AAAI\u002FPAMI 文章），请优先查看其原仓库的 `README.md`，因为新提出的架构可能需要特殊的依赖或数据处理流程。","某计算机视觉实验室的研究团队正致力于开发一套适用于视障人士辅助眼镜的实时显著性检测系统，需要快速锁定并复现最适合移动端部署的最新算法。\n\n### 没有 SOD-CNNs-based-code-summary- 时\n- **文献检索如大海捞针**：研究人员需在 arXiv、IEEE、CVPR 等多个分散平台手动搜索\"Salient Object Detection\"，难以区分哪些是纯理论综述，哪些包含可运行代码。\n- **技术选型盲目低效**：面对 2D RGB、3D RGB-D 及光场（Light Field）等多种输入模态，缺乏统一对比视角，容易选错不适合眼镜摄像头的算法架构。\n- **复现成本极高**：找到论文后常发现官方代码缺失或链接失效，尤其是针对“视障人群图像”等特定场景的算法，往往浪费数周时间清洗数据却跑不通基线。\n- **前沿动态滞后**：难以及时捕捉到如 AAAI'25 或 NeurIPS'24 上关于无监督学习或抗对抗攻击的最新突破，导致研发起点落后于业界水平。\n\n### 使用 SOD-CNNs-based-code-summary- 后\n- **一站式资源聚合**：直接在该仓库中按模态（如 2D\u002F3D\u002F视频）筛选，瞬间获取包含论文与对应 GitHub 代码链接的完整清单，省去跨站搜索时间。\n- **精准匹配应用场景**：通过目录快速定位到\"WACV 2024 关于视障人士拍摄的图像显著性检测”专题，直接复用经过验证的数据集和基准模型。\n- **复现路径清晰通畅**：利用整理好的代码链接和评估指标（Evaluation Metrics）说明，团队在两天内即可成功跑通基线模型并开始在自有数据上调优。\n- **紧跟学术最前沿**：借助持续更新的标记（如新增的 AAAI'25 论文），团队立即引入了最新的稀疏视图增强 Transformer 框架，显著提升了系统在复杂环境下的鲁棒性。\n\nSOD-CNNs-based-code-summary- 将原本需要数周的文献调研与代码验证周期压缩至几天，让研发团队能专注于核心算法的创新而非基础资源的搜集。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjiwei0921_SOD-CNNs-based-code-summary-_2a8d932c.png","jiwei0921","wei ji","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjiwei0921_120affd1.png",null,"Yale University","New Haven","https:\u002F\u002Fgithub.com\u002Fjiwei0921",895,150,"2026-04-10T11:24:51",4,"","未说明",{"notes":87,"python":85,"dependencies":88},"该仓库是一个基于深度学习的显著性检测（SOD）论文和代码汇总列表，涵盖了 2D、3D、视频及光场等多个方向。它本身不是一个单一的独立软件工具，因此 README 中未提供统一的运行环境、依赖库或硬件需求。具体的环境配置需参考列表中各个独立项目（如 CamoDiffusion, 3SD, MINet 等）对应的代码仓库链接及其各自的 README 文件。",[],[15],[91,92,93],"saliency-detection","salient-object-detection","paper-list","2026-03-27T02:49:30.150509","2026-04-11T10:02:44.035898",[97,102,107,112,117,122,127],{"id":98,"question_zh":99,"answer_zh":100,"source_url":101},29373,"显著性检测中的 max-F、F-measure 和 mean F-measure 有什么区别？应该对比哪个指标？","通常论文中若未明确说明是 meanF 还是 maxF，建议参照最新的显著性检测论文来评估结果的大致水平。一般而言，希望 F-measure（无论是平均还是最大）越高越好，同时 MAE（平均绝对误差）越低越好。如果模型创新性很强，某些指标稍低也是可以接受的。在对比时，最好确认对方论文的具体计算方式，或者同时报告 meanF 和 maxF 以便全面比较。","https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSOD-CNNs-based-code-summary-\u002Fissues\u002F2",{"id":103,"question_zh":104,"answer_zh":105,"source_url":106},29374,"如何获取 3D (STEREO) 或 RGBD 数据集？原始尺寸图像在哪里下载？","对于 RGBD 数据集，可以参考 ContrastPrior 项目使用的数据（链接：https:\u002F\u002Fgithub.com\u002FJXingZhao\u002FContrastPrior）。关于原始尺寸图像，部分提供的数据集链接可能仅包含缩放后（如 256*256）的图像，如果需要原始尺寸，建议查看相关论文的数据集部分或访问专门的 RGBD-SOD 数据集仓库（如 https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FRGBD-SOD-datasets）获取更完整的信息。","https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSOD-CNNs-based-code-summary-\u002Fissues\u002F1",{"id":108,"question_zh":109,"answer_zh":110,"source_url":111},29375,"论文中提到的 DF 数据集下载链接失效了怎么办？","DF 数据集的下载链接可能已被作者关闭或失效。如果遇到这种情况，建议直接联系原论文作者询问最新下载地址，或者在相关的学术论坛和社区中搜索是否有镜像资源。维护者确认该链接确实可能已不可用。","https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSOD-CNNs-based-code-summary-\u002Fissues\u002F3",{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},29376,"新手入门显著性检测（SOD）领域应该如何阅读论文？需要关注代码吗？","1. 阅读顺序：建议先从 2020-2021 年的最新文章开始读，读完这些会对整个领域有基本了解，然后再粗读早期（如 2016-2020）的文章以梳理发展脉络。\n2. 阅读重点：可以参考知乎上关于“如何高效阅读论文”的博客获取技巧。\n3. 代码关注：建议从最新工作中选取几个具有代表性的方法，尝试复现和使用其代码，这有助于深入理解。\n4. 图标含义：列表中带红旗图标的论文代表该领域最新出现的文章。","https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSOD-CNNs-based-code-summary-\u002Fissues\u002F9",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},29377,"不同训练集设置（如仅用 DUT 训练 vs 混合 NJU2K+NLPR+DUT 训练）下的 DUT 测试结果对比是否公平？","不同训练设置下的模型测试结果确实存在差异，主要原因是数据集间的域偏移（domain shift）。例如，DUT 数据集多捕获于生活场景，包含低光、低对比度环境，而 NJUD 包含较多网络图片。目前的工作通常采用两种训练设置分别测试。为了公平比较，建议参考最新的校准方法论文（如 CVPR21 的 'Calibrated RGB-D Salient Object Detection' 或 TPAMI21 的 'Uncertainty Inspired RGB-D Saliency Detection'），并参考标准的训练集划分方案（见 https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FRGBD-SOD-datasets）。","https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSOD-CNNs-based-code-summary-\u002Fissues\u002F6",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},29378,"如何在汇总表中添加已发表的最新论文和代码？","如果您有发表在顶级会议或期刊（如 TPAMI）上的最新论文并希望被收录到汇总表中，可以通过提交 Issue 的方式申请。请在 Issue 中提供论文的官方链接（如 IEEE Xplore）以及开源代码仓库地址（GitHub 链接）。维护者在核实后会将其添加到总结表格中。","https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSOD-CNNs-based-code-summary-\u002Fissues\u002F17",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},29379,"找不到某篇特定论文（如 GCPANet）的 PDF 文件怎么办？","许多论文的 PDF 文件通常直接包含在作者发布的代码仓库文件夹内。例如，GCPANet 的论文可以在其 GitHub 代码库的根目录或特定文件夹中找到（路径示例：https:\u002F\u002Fgithub.com\u002FJosephChenHub\u002FGCPANet\u002Fblob\u002Fmaster\u002FGCPANet.pdf）。建议在查找论文时，优先访问其对应的代码仓库。","https:\u002F\u002Fgithub.com\u002Fjiwei0921\u002FSOD-CNNs-based-code-summary-\u002Fissues\u002F4",[]]