[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-zziz--pwc":3,"tool-zziz--pwc":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":80,"stars":82,"forks":83,"last_commit_at":84,"license":80,"difficulty_score":85,"env_os":86,"env_gpu":87,"env_ram":87,"env_deps":88,"category_tags":91,"github_topics":92,"view_count":85,"oss_zip_url":80,"oss_zip_packed_at":80,"status":17,"created_at":101,"updated_at":102,"faqs":103,"releases":104},3158,"zziz\u002Fpwc","pwc","This repository is no longer maintained.","pwc 是一个专注于机器学习领域的开源项目清单，旨在将顶尖学术论文与其对应的代码实现进行系统化整理。它主要解决了科研与工程实践中“论文难复现、代码难寻找”的痛点，通过按年份和会议（如 CVPR、NIPS、ICML 等）分类，让使用者能快速定位到带有官方或社区实现代码的高质量研究。\n\n该项目特别适合人工智能研究人员、算法工程师以及计算机视觉领域的开发者使用。对于希望跟进最新技术动态、复现经典模型或寻找项目灵感的专业人士而言，pwc 提供了一条从理论到实践的高效路径。其独特的亮点在于不仅列出了论文标题和会议来源，还直接关联了 GitHub 代码仓库，并直观展示了项目的星标数量，帮助用户迅速识别出社区关注度高、质量可靠的开源作品。尽管原仓库已停止维护，但其整理的历年经典文献与代码映射关系，至今仍是探索深度学习发展历程的宝贵资源库。","\u003Cdiv align=\"left\">\n\u003Ch1>\n    \u003Cimg alt=\"HEADER\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzziz_pwc_readme_33b2b51ad9ea.jpg\" width=\"900\" height=\"300\">\u003C\u002Fimg>\n\u003C\u002Fh1>\n\n| [2018](#2018) | [2017](#2017) | [2016](#2016) | [2015](#2015) | [2014](#2014) | [2013](#2013) | 2012 | 2011 | 2010 | 2009 | 2008 | [![Tweet](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttp\u002Fshields.io.svg?style=social)](https:\u002F\u002Ftwitter.com\u002Fintent\u002Ftweet?text=Papers%20with%20code.%20Sorted%20by%20stars.%20Updated%20weekly.%20&url=https:\u002F\u002Fgithub.com\u002Fzziz\u002Fpwc&via=fvzaur&hashtags=machinelearning,paper,code,github) | [Suggestions](https:\u002F\u002Fgithub.com\u002Fzziz\u002Fpwc\u002Fissues\u002F1) |    \n|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|\n\nThis work is in continuous progress; we are adding new PWC entries every day! Tweet me [@fvzaur](https:\u002F\u002Ftwitter.com\u002Ffvzaur)   \nUse [this](https:\u002F\u002Fgithub.com\u002Fzziz\u002Fpwc\u002Fissues\u002F11) thread to request that your favorite conference be added to our watchlist and the PWC list.   \n#### Weekly update pushed! 
\n\n## 2018\n| Title | Conf | Code | Stars |\n|:--------|:--------:|:--------:|:--------:|\n| [Video-to-Video Synthesis](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.06601) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fvid2vid) | 5578 | \n| [Deep Image Prior](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FUlyanov_Deep_Image_Prior_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FDmitryUlyanov\u002Fdeep-image-prior) | 3736 | \n| [StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChoi_StarGAN_Unified_Generative_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyunjey\u002FStarGAN) | 3405 | \n| [Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYao_Feng_Joint_3D_Face_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FYadiraF\u002FPRNet) | 2434 | \n| [Learning to See in the Dark](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Learning_to_See_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcchen156\u002FLearning-to-See-in-the-Dark) | 2326 | \n| [Glow: Generative Flow with Invertible 1x1 Convolutions](http:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03039v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fglow) | 2088 | \n| [Squeeze-and-Excitation Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhujie-frank\u002FSENet) | 1477 | \n| [Efficient Neural Architecture Search via Parameters Sharing](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fpham18a.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FENAS-pytorch) | 1382 | \n| [Multimodal Unsupervised Image-to-image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXun_Huang_Multimodal_Unsupervised_Image-to-image_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FMUNIT) | 1296 | \n| [Non-Local Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Non-Local_Neural_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fvideo-nonlocal-net) | 992 | \n| [Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHara_Can_Spatiotemporal_3D_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkenshohara\u002F3D-ResNets-PyTorch) | 924 | \n| [Single-Shot Refinement Neural Network for Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Single-Shot_Refinement_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsfzhang15\u002FRefineDet) | 875 | \n| [Image Generation From Scene Graphs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FJohnson_Image_Generation_From_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fsg2im) | 851 | \n| [GANimation: Anatomically-aware Facial Animation from a Single Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAlbert_Pumarola_Anatomically_Coherent_Facial_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Falbertpumarola\u002FGANimation) | 772 | \n| [Simple Baselines for Human Pose Estimation and Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FBin_Xiao_Simple_Baselines_for_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fhuman-pose-estimation.pytorch) | 752 | \n| [Visualizing the Loss Landscape of Neural Nets](http:\u002F\u002Farxiv.org\u002Fabs\u002F1712.09913v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ftomgoldstein\u002Floss-landscape) | 724 | \n| [Detect-and-Track: Efficient Pose Estimation in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FGirdhar_Detect-and-Track_Efficient_Pose_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDetectAndTrack) | 650 | \n| [Relation Networks for Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHu_Relation_Networks_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmsracver\u002FRelation-Networks-for-Object-Detection) | 635 | \n| [Generative Image Inpainting With Contextual Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYu_Generative_Image_Inpainting_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJiahuiYu\u002Fgenerative_inpainting) | 609 | \n| [PointCNN](http:\u002F\u002Farxiv.org\u002Fabs\u002F1801.07791v3) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fyangyanli\u002FPointCNN) | 607 | \n| [Look at Boundary: A Boundary-Aware Face Alignment Algorithm](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_Look_at_Boundary_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwywu\u002FLAB) | 575 | \n| Pelee: A Real-Time Object Detection System on Mobile Devices | NIPS | [code](https:\u002F\u002Fgithub.com\u002FRobert-JunWang\u002FPelee) | 548 | \n| [Distractor-aware Siamese Networks for Visual Object Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002Ffoolwood\u002FDaSiamRPN) | 545 | \n| [Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fathalye18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fanishathalye\u002Fobfuscated-gradients) | 535 | \n| [Which Training Methods for GANs do actually Converge?](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fmescheder18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FLMescheder\u002FGAN_stability) | 520 | \n| [End-to-End Recovery of Human Shape and Pose](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKanazawa_End-to-End_Recovery_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fakanazawa\u002Fhmr) | 502 | \n| [Taskonomy: Disentangling Task Transfer Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZamir_Taskonomy_Disentangling_Task_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FStanfordVL\u002Ftaskonomy) | 502 | \n| [Cascaded Pyramid Network for Multi-Person Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Cascaded_Pyramid_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchenyilun95\u002Ftf-cpn) | 497 | \n| [Neural 3D Mesh Renderer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKato_Neural_3D_Mesh_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhiroharu-kato\u002Fneural_renderer) | 489 | \n| [Zero-Shot Recognition via Semantic Embeddings and Knowledge Graphs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Zero-Shot_Recognition_via_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJudyYe\u002Fzero-shot-gcn) | 489 | \n| [In-Place Activated BatchNorm for Memory-Optimized Training of 
DNNs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBulo_In-Place_Activated_BatchNorm_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmapillary\u002Finplace_abn) | 485 | \n| [The Unreasonable Effectiveness of Deep Features as a Perceptual Metric](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_The_Unreasonable_Effectiveness_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frichzhang\u002FPerceptualSimilarity) | 447 | \n| [Frustum PointNets for 3D Object Detection From RGB-D Data](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQi_Frustum_PointNets_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcharlesq34\u002Ffrustum-pointnets) | 434 | \n| [The Lovász-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBerman_The_LovaSz-Softmax_Loss_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbermanmaxim\u002FLovaszSoftmax) | 416 | \n| [ICNet for Real-Time Semantic Segmentation on High-Resolution Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHengshuang_Zhao_ICNet_for_Real-Time_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fhszhao\u002FICNet) | 415 | \n| [PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSun_PWC-Net_CNNs_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FPWC-Net) | 398 | \n| [Efficient Interactive Annotation of Segmentation Datasets With Polygon-RNN++](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAcuna_Efficient_Interactive_Annotation_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Ffidler-lab\u002Fpolyrnn-pp-pytorch) | 397 | \n| [Gibson Env: Real-World Perception for Embodied Agents](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FXia_Gibson_Env_Real-World_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FStanfordVL\u002FGibsonEnv) | 385 | \n| [Acquisition of Localization Confidence for Accurate Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FBorui_Jiang_Acquisition_of_Localization_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fvacancy\u002FPreciseRoIPooling) | 384 | \n| [Noise2Noise: Learning Image Restoration without Clean Data](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flehtinen18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fyu4u\u002Fnoise2noise) | 370 | \n| [GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQi_GeoNet_Geometric_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyzcjtr\u002FGeoNet) | 359 | \n| [GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYin_GeoNet_Unsupervised_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyzcjtr\u002FGeoNet) | 359 | \n| [A Style-Aware Content Loss for Real-time HD Style Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FArtsiom_Sanakoyeu_A_Style-aware_Content_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FCompVis\u002Fadaptive-style-transfer) | 349 | \n| [Soccer on Your Tabletop](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FRematas_Soccer_on_Your_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkrematas\u002Fsoccerontable) | 338 | 
\n| [Pyramid Stereo Matching Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChang_Pyramid_Stereo_Matching_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJiaRenChang\u002FPSMNet) | 335 | \n| [Neural Baby Talk](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLu_Neural_Baby_Talk_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiasenlu\u002FNeuralBabyTalk) | 332 | \n| [License Plate Detection and Recognition in Unconstrained Scenarios](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSergio_Silva_License_Plate_Detection_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsergiomsilva\u002Falpr-unconstrained) | 326 | \n| [Supervision-by-Registration: An Unsupervised Approach to Improve the Precision of Facial Landmark Detectors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FDong_Supervision-by-Registration_An_Unsupervised_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsupervision-by-registration) | 326 | \n| [Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FNanyang_Wang_Pixel2Mesh_Generating_3D_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fnywang16\u002FPixel2Mesh) | 323 | \n| [Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMascharka_Transparency_by_Design_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdavidmascharka\u002Ftbd-nets) | 317 | \n| [Fast End-to-End Trainable Guided Filter](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_Fast_End-to-End_Trainable_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fwuhuikai\u002FDeepGuidedFilter) | 312 | \n| [Deep Clustering for Unsupervised Learning of Visual Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FMathilde_Caron_Deep_Clustering_for_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeepcluster) | 302 | \n| [Deep Photo Enhancer: Unpaired Learning for Image Enhancement From Photographs With GANs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Deep_Photo_Enhancer_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnothinglo\u002FDeep-Photo-Enhancer) | 294 | \n| [Neural Relational Inference for Interacting Systems](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fkipf18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fethanfetaya\u002FNRI) | 289 | \n| [Adversarially Regularized Autoencoders](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fzhao18b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjakezhaojb\u002FARAE) | 282 | \n| [Learning to Adapt Structured Output Space for Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTsai_Learning_to_Adapt_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwasidennis\u002FAdaptSegNet) | 280 | \n| [Convolutional Neural Networks With Alternately Updated Clique](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYang_Convolutional_Neural_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fiboing\u002FCliqueNet) | 272 | \n| [Learning to Segment Every Thing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHu_Learning_to_Segment_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fseg_every_thing) | 269 | \n| [Supervising Unsupervised Learning](http:\u002F\u002Farxiv.org\u002Fabs\u002F1709.05262v2) 
| NIPS | [code](https:\u002F\u002Fgithub.com\u002Fquinnliu\u002FmachineLearning) | 262 | \n| [LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHui_LiteFlowNet_A_Lightweight_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftwhui\u002FLiteFlowNet) | 261 | \n| [Bilinear Attention Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.07932v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjnhwkim\u002Fban-vqa) | 258 | \n| [ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSachin_Mehta_ESPNet_Efficient_Spatial_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsacmehta\u002FESPNet) | 254 | \n| [An intriguing failing of convolutional neural networks and the CoordConv solution](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03247) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmkocabas\u002FCoordConv-pytorch) | 249 | \n| [End-to-End Learning of Motion Representation for Video Understanding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFan_End-to-End_Learning_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLijieFan\u002Ftvnet) | 238 | \n| [Image Super-Resolution Using Very Deep Residual Channel Attention Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYulun_Zhang_Image_Super-Resolution_Using_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fyulunzhang\u002FRCAN) | 234 | \n| [Iterative Visual Reasoning Beyond Convolutions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Iterative_Visual_Reasoning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fendernewton\u002Fiter-reason) | 228 | \n| [Semi-Parametric Image 
Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQi_Semi-Parametric_Image_Synthesis_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxjqicuhk\u002FSIMS) | 226 | \n| [Compressed Video Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_Compressed_Video_Action_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchaoyuaw\u002Fpytorch-coviar) | 225 | \n| [Style Aggregated Network for Facial Landmark Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FDong_Style_Aggregated_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FD-X-Y\u002FSAN) | 223 | \n| [Pose-Robust Face Recognition via Deep Residual Equivariant Mapping](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FCao_Pose-Robust_Face_Recognition_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpenincillin\u002FDREAM) | 220 | \n| [Multi-Content GAN for Few-Shot Font Style Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAzadi_Multi-Content_GAN_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fazadis\u002FMC-GAN) | 218 | \n| [GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fyou18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FJiaxuanYou\u002Fgraph-generation) | 214 | \n| [Referring Relationships](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKrishna_Referring_Relationships_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FStanfordVL\u002FReferringRelationships) | 210 | \n| [MoCoGAN: Decomposing Motion and Content for Video Generation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTulyakov_MoCoGAN_Decomposing_Motion_CVPR_2018_paper.pdf) | 
CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsergeytulyakov\u002Fmocogan) | 205 | \n| [Latent Alignment and Variational Attention](http:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03756v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fharvardnlp\u002Fvar-attn) | 204 | \n| [LayoutNet: Reconstructing the 3D Room Layout From a Single RGB Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZou_LayoutNet_Reconstructing_the_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzouchuhang\u002FLayoutNet) | 202 | \n| [Large-Scale Point Cloud Semantic Segmentation With Superpoint Graphs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLandrieu_Large-Scale_Point_Cloud_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Floicland\u002Fsuperpoint_graph) | 197 | \n| [An End-to-End TextSpotter With Explicit Alignment and Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHe_An_End-to-End_TextSpotter_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftonghe90\u002Ftextspotter) | 195 | \n| [DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKupyn_DeblurGAN_Blind_Motion_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FRaphaelMeudec\u002Fdeblur-gan) | 189 | \n| [SPLATNet: Sparse Lattice Networks for Point Cloud Processing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSu_SPLATNet_Sparse_Lattice_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fsplatnet) | 188 | \n| [Attentive Generative Adversarial Network for Raindrop Removal From a Single Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQian_Attentive_Generative_Adversarial_CVPR_2018_paper.pdf) | CVPR | 
[code](https://github.com/rui1996/DeRaindrop) | 186 |
| [Single View Stereo Matching](http://openaccess.thecvf.com/content_cvpr_2018/papers/Luo_Single_View_Stereo_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lawy623/SVS) | 182 |
| [MegaDepth: Learning Single-View Depth Prediction From Internet Photos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_MegaDepth_Learning_Single-View_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lixx2938/MegaDepth) | 181 |
| [ECO: Efficient Convolutional Network for Online Video Understanding](http://openaccess.thecvf.com/content_ECCV_2018/html/Mohammadreza_Zolfaghari_ECO_Efficient_Convolutional_ECCV_2018_paper.html) | ECCV | [code](https://github.com/mzolfaghari/ECO-efficient-video-understanding) | 180 |
| [Unsupervised Feature Learning via Non-Parametric Instance Discrimination](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Unsupervised_Feature_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhirongw/lemniscate.pytorch) | 180 |
| [ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lin_ST-GAN_Spatial_Transformer_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/chenhsuanlin/spatial-transformer-GAN) | 179 |
| [Video Based Reconstruction of 3D People Models](http://openaccess.thecvf.com/content_cvpr_2018/papers/Alldieck_Video_Based_Reconstruction_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/thmoa/videoavatars) | 179 |
| [Social GAN: Socially Acceptable Trajectories With Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gupta_Social_GAN_Socially_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/agrimgupta92/sgan) | 178 |
| [Learning Category-Specific Mesh Reconstruction from Image Collections](http://openaccess.thecvf.com/content_ECCV_2018/html/Angjoo_Kanazawa_Learning_Category-Specific_Mesh_ECCV_2018_paper.html) | ECCV | [code](https://github.com/akanazawa/cmr) | 176 |
| [Realistic Evaluation of Deep Semi-Supervised Learning Algorithms](http://arxiv.org/abs/1804.09170v2) | NIPS | [code](https://github.com/brain-research/realistic-ssl-evaluation) | 175 |
| [BSN: Boundary Sensitive Network for Temporal Action Proposal Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Tianwei_Lin_BSN_Boundary_Sensitive_ECCV_2018_paper.html) | ECCV | [code](https://github.com/wzmsltw/BSN-boundary-sensitive-network) | 175 |
| [Group Normalization](http://openaccess.thecvf.com/content_ECCV_2018/html/Yuxin_Wu_Group_Normalization_ECCV_2018_paper.html) | ECCV | [code](https://github.com/shaohua0116/Group-Normalization-Tensorflow) | 175 |
| [Real-Time Seamless Single Shot 6D Object Pose Prediction](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tekin_Real-Time_Seamless_Single_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Microsoft/singleshotpose) | 174 |
| [MVSNet: Depth Inference for Unstructured Multi-view Stereo](http://openaccess.thecvf.com/content_ECCV_2018/html/Yao_Yao_MVSNet_Depth_Inference_ECCV_2018_paper.html) | ECCV | [code](https://github.com/YoYo000/MVSNet) | 174 |
| [Neural Motifs: Scene Graph Parsing With Global Context](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zellers_Neural_Motifs_Scene_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/rowanz/neural-motifs) | 171 |
| [Learning a Single Convolutional Super-Resolution Network for Multiple Degradations](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Learning_a_Single_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/cszn/SRMD) | 169 |
| [Optimizing Video Object Detection via a Scale-Time Lattice](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Optimizing_Video_Object_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hellock/scale-time-lattice) | 168 |
| [MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network](http://openaccess.thecvf.com/content_ECCV_2018/html/Muhammed_Kocabas_MultiPoseNet_Fast_Multi-Person_ECCV_2018_paper.html) | ECCV | [code](https://github.com/salihkaragoz/pose-residual-network-pytorch) | 167 |
| [Unsupervised Cross-Dataset Person Re-Identification by Transfer Learning of Spatial-Temporal Patterns](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lv_Unsupervised_Cross-Dataset_Person_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ahangchen/TFusion) | 166 |
| [Weakly Supervised Instance Segmentation Using Class Peak Response](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Weakly_Supervised_Instance_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ZhouYanzhao/PRM) | 166 |
| [PlaneNet: Piece-Wise Planar Reconstruction From a Single RGB Image](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_PlaneNet_Piece-Wise_Planar_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/art-programmer/PlaneNet) | 164 |
| [Residual Dense Network for Image Super-Resolution](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Residual_Dense_Network_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yulunzhang/RDN) | 163 |
| [Embodied Question Answering](http://openaccess.thecvf.com/content_cvpr_2018/papers/Das_Embodied_Question_Answering_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/facebookresearch/EmbodiedQA) | 162 |
| [Evolved Policy Gradients](http://arxiv.org/abs/1802.04821v2) | NIPS | [code](https://github.com/openai/EPG) | 160 |
| [Camera Style Adaptation for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhong_Camera_Style_Adaptation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhunzhong07/CamStyle) | 159 |
| [Weakly and Semi Supervised Human Body Part Parsing via Pose-Guided Knowledge Transfer](http://openaccess.thecvf.com/content_cvpr_2018/papers/Fang_Weakly_and_Semi_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/MVIG-SJTU/WSHP) | 159 |
| [Scale-Recurrent Network for Deep Image Deblurring](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tao_Scale-Recurrent_Network_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jiangsutx/SRN-Deblur) | 159 |
| [Unsupervised Learning of Monocular Depth Estimation and Visual Odometry With Deep Feature Reconstruction](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhan_Unsupervised_Learning_of_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Huangying-Zhan/Depth-VO-Feat) | 158 |
| [Relational recurrent neural networks](https://arxiv.org/abs/1806.01822) | NIPS | [code](https://github.com/L0SG/relational-rnn-pytorch) | 157 |
| [Densely Connected Pyramid Dehazing Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Densely_Connected_Pyramid_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hezhangsprinter/DCPDN) | 155 |
| [Image Inpainting for Irregular Holes Using Partial Convolutions](http://openaccess.thecvf.com/content_ECCV_2018/html/Guilin_Liu_Image_Inpainting_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/naoto0804/pytorch-inpainting-with-partial-conv) | 153 |
| [SO-Net: Self-Organizing Network for Point Cloud Analysis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_SO-Net_Self-Organizing_Network_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lijx10/SO-Net) | 152 |
| [Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Pix3D_Dataset_and_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xingyuansun/pix3d) | 152 |
| [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_ShuffleNet_An_Extremely_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/camel007/Caffe-ShuffleNet) | 152 |
| [DenseASPP for Semantic Segmentation in Street Scenes](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yang_DenseASPP_for_Semantic_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/DeepMotionAIResearch/DenseASPP) | 151 |
| [Facelet-Bank for Fast Portrait Manipulation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Facelet-Bank_for_Fast_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yingcong/Facelet_Bank) | 150 |
| [Self-Imitation Learning](http://proceedings.mlr.press/v80/oh18b.html) | ICML | [code](https://github.com/junhyukoh/self-imitation-learning) | 145 |
| [Graph R-CNN for Scene Graph Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Jianwei_Yang_Graph_R-CNN_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/jwyang/graph-rcnn.pytorch) | 144 |
| [A Closer Look at Spatiotemporal Convolutions for Action Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tran_A_Closer_Look_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/irhumshafkat/R2Plus1D-PyTorch) | 143 |
| [Cross-Domain Weakly-Supervised Object Detection Through Progressive Domain Adaptation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Inoue_Cross-Domain_Weakly-Supervised_Object_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/naoto0804/cross-domain-detection) | 143 |
| [Quantized Densely Connected U-Nets for Efficient Landmark Localization](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhiqiang_Tang_Quantized_Densely_Connected_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zhiqiangdon/CU-Net) | 143 |
| [Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining](http://openaccess.thecvf.com/content_ECCV_2018/html/Xia_Li_Recurrent_Squeeze-and-Excitation_Context_ECCV_2018_paper.html) | ECCV | [code](https://github.com/XiaLiPKU/RESCAN) | 142 |
| [Two-Stream Convolutional Networks for Dynamic Texture Synthesis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tesfaldet_Two-Stream_Convolutional_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ryersonvisionlab/two-stream-dyntex-synth) | 141 |
| [Integral Human Pose Regression](http://openaccess.thecvf.com/content_ECCV_2018/html/Xiao_Sun_Integral_Human_Pose_ECCV_2018_paper.html) | ECCV | [code](https://github.com/JimmySuen/integral-human-pose) | 141 |
| [Adaptive Affinity Fields for Semantic Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Jyh-Jing_Hwang_Adaptive_Affinity_Field_ECCV_2018_paper.html) | ECCV | [code](https://github.com/twke18/Adaptive_Affinity_Fields) | 141 |
| [LSTM Pose Machines](http://openaccess.thecvf.com/content_cvpr_2018/papers/Luo_LSTM_Pose_Machines_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lawy623/LSTM_Pose_Machines) | 141 |
| [Structure Inference Net: Object Detection Using Scene-Level Context and Instance-Level Relationships](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Structure_Inference_Net_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/choasup/SIN) | 140 |
| [Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Recovering_Realistic_Texture_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xinntao/CVPR18-SFTGAN) | 139 |
| [Image-Image Domain Adaptation With Preserved Self-Similarity and Domain-Dissimilarity for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Deng_Image-Image_Domain_Adaptation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Simon4Yan/Learning-via-Translation) | 137 |
| [Learning to Compare: Relation Network for Few-Shot Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sung_Learning_to_Compare_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lzrobots/LearningToCompare_ZSL) | 135 |
| [CosFace: Large Margin Cosine Loss for Deep Face Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_CosFace_Large_Margin_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yule-li/CosFace) | 135 |
| [Deep Depth Completion of a Single RGB-D Image](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Deep_Depth_Completion_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yindaz/DeepCompletionRelease) | 134 |
| [Deep Back-Projection Networks for Super-Resolution](http://openaccess.thecvf.com/content_cvpr_2018/papers/Haris_Deep_Back-Projection_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/alterzero/DBPN-Pytorch) | 132 |
| [Context Embedding Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kim_Context_Embedding_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/thunlp/CANE) | 131 |
| [Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kendall_Multi-Task_Learning_Using_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/alexgkendall/multitaskvision) | 131 |
| [Perturbative Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Juefei-Xu_Perturbative_Neural_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/juefeix/pnn.pytorch) | 130 |
| [Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis](http://proceedings.mlr.press/v80/wang18h.html) | ICML | [code](https://github.com/syang1993/gst-tacotron) | 129 |
| [Fast and Accurate Online Video Object Segmentation via Tracking Parts](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cheng_Fast_and_Accurate_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/JingchunCheng/FAVOS) | 129 |
| [Nonlinear 3D Face Morphable Model](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tran_Nonlinear_3D_Face_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tranluan/Nonlinear_Face_3DMM) | 128 |
| [BodyNet: Volumetric Inference of 3D Human Body Shapes](http://openaccess.thecvf.com/content_ECCV_2018/html/Gul_Varol_BodyNet_Volumetric_Inference_ECCV_2018_paper.html) | ECCV | [code](https://github.com/gulvarol/bodynet) | 126 |
| [3D-CODED: 3D Correspondences by Deep Deformation](http://openaccess.thecvf.com/content_ECCV_2018/html/Thibault_Groueix_Shape_correspondences_from_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ThibaultGROUEIX/3D-CODED) | 125 |
| [DeepMVS: Learning Multi-View Stereopsis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_DeepMVS_Learning_Multi-View_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/phuang17/DeepMVS) | 125 |
| [Hierarchical Imitation and Reinforcement Learning](http://proceedings.mlr.press/v80/le18a.html) | ICML | [code](https://github.com/hoangminhle/hierarchical_IL_RL) | 124 |
| [Domain Adaptive Faster R-CNN for Object Detection in the Wild](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Domain_Adaptive_Faster_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yuhuayc/da-faster-rcnn) | 123 |
| [L4: Practical loss-based stepsize adaptation for deep learning](http://arxiv.org/abs/1802.05074v4) | NIPS | [code](https://github.com/martius-lab/l4-optimizer) | 123 |
| [A Generative Adversarial Approach for Zero-Shot Learning From Noisy Texts](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhu_A_Generative_Adversarial_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/EthanZhu90/ZSL_GAN_CVPR18) | 122 |
| [Recurrent Relational Networks](http://arxiv.org/abs/1711.08028v2) | NIPS | [code](https://github.com/rasmusbergpalm/recurrent-relational-networks) | 121 |
| [Gated Path Planning Networks](http://proceedings.mlr.press/v80/lee18c.html) | ICML | [code](https://github.com/lileee/gated-path-planning-networks) | 121 |
| [PSANet: Point-wise Spatial Attention Network for Scene Parsing](http://openaccess.thecvf.com/content_ECCV_2018/html/Hengshuang_Zhao_PSANet_Point-wise_Spatial_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hszhao/PSANet) | 121 |
| [Rethinking Feature Distribution for Loss Functions in Image Classification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wan_Rethinking_Feature_Distribution_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/WeitaoVan/L-GM-loss) | 120 |
| [Density-Aware Single Image De-Raining Using a Multi-Stream Dense Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Density-Aware_Single_Image_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hezhangsprinter/DID-MDN) | 118 |
| [FOTS: Fast Oriented Text Spotting With a Unified Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_FOTS_Fast_Oriented_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jiangxiluning/FOTS.PyTorch) | 118 |
| [ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes](http://openaccess.thecvf.com/content_ECCV_2018/html/Taihong_Xiao_ELEGANT_Exchanging_Latent_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Prinsphield/ELEGANT) | 117 |
| [PU-Net: Point Cloud Upsampling Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_PU-Net_Point_Cloud_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yulequan/PU-Net) | 117 |
| [PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Mallya_PackNet_Adding_Multiple_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/arunmallya/packnet) | 117 |
| [Long-term Tracking in the Wild: a Benchmark](http://openaccess.thecvf.com/content_ECCV_2018/html/Efstratios_Gavves_Long-term_Tracking_in_ECCV_2018_paper.html) | ECCV | [code](https://github.com/oxuva/long-term-tracking-benchmark) | 116 |
| [Factoring Shape, Pose, and Layout From the 2D Image of a 3D Scene](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tulsiani_Factoring_Shape_Pose_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/shubhtuls/factored3d) | 114 |
| [Repulsion Loss: Detecting Pedestrians in a Crowd](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Repulsion_Loss_Detecting_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/bailvwangzi/repulsion_loss_ssd) | 113 |
| [Unsupervised Attention-guided Image-to-Image Translation](https://arxiv.org/abs/1806.02311) | NIPS | [code](https://github.com/AlamiMejjati/Unsupervised-Attention-guided-Image-to-Image-Translation) | 110 |
| [Attention-based Deep Multiple Instance Learning](http://proceedings.mlr.press/v80/ilse18a.html) | ICML | [code](https://github.com/AMLab-Amsterdam/AttentionDeepMIL) | 109 |
| [Learning Blind Video Temporal Consistency](http://openaccess.thecvf.com/content_ECCV_2018/html/Wei-Sheng_Lai_Real-Time_Blind_Video_ECCV_2018_paper.html) | ECCV | [code](https://github.com/phoenix104104/fast_blind_video_consistency) | 109 |
| [Noisy Natural Gradient as Variational Inference](http://proceedings.mlr.press/v80/zhang18l.html) | ICML | [code](https://github.com/wlwkgus/NoisyNaturalGradient) | 108 |
| [End-to-End Weakly-Supervised Semantic Alignment](http://openaccess.thecvf.com/content_cvpr_2018/papers/Rocco_End-to-End_Weakly-Supervised_Semantic_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ignacio-rocco/weakalign) | 106 |
| [Decoupled Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Decoupled_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/wy1iu/DCNets) | 105 |
| [LiDAR-Video Driving Dataset: Learning Driving Policies Effectively](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_LiDAR-Video_Driving_Dataset_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/driving-behavior/DBNet) | 104 |
| [MAttNet: Modular Attention Network for Referring Expression Comprehension](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_MAttNet_Modular_Attention_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lichengunc/MAttNet) | 104 |
| [LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Dongqing_Zhang_Optimized_Quantization_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Microsoft/LQ-Nets) | 103 |
| [FSRNet: End-to-End Learning Face Super-Resolution With Facial Priors](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_FSRNet_End-to-End_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tyshiwo/FSRNet) | 100 |
| [Deep Mutual Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Deep_Mutual_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/YingZhangDUT/Deep-Mutual-Learning) | 100 |
| [Macro-Micro Adversarial Network for Human Parsing](http://openaccess.thecvf.com/content_ECCV_2018/html/Yawei_Luo_Macro-Micro_Adversarial_Network_ECCV_2018_paper.html) | ECCV | [code](https://github.com/RoyalVane/MMAN) | 98 |
| [ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans](http://openaccess.thecvf.com/content_cvpr_2018/papers/Dai_ScanComplete_Large-Scale_Scene_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/angeladai/ScanComplete) | 97 |
| [Learning Depth From Monocular Videos Using Direct Methods](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_Depth_From_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/MightyChaos/LKVOLearner) | 97 |
| [VITON: An Image-Based Virtual Try-On Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Han_VITON_An_Image-Based_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xthan/VITON) | 95 |
| [Cascade R-CNN: Delving Into High Quality Object Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cai_Cascade_R-CNN_Delving_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/guoruoqian/cascade-rcnn_Pytorch) | 93 |
| [Learning Human-Object Interactions by Graph Parsing Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Siyuan_Qi_Learning_Human-Object_Interactions_ECCV_2018_paper.html) | ECCV | [code](https://github.com/SiyuanQi/gpnn) | 93 |
| [Future Frame Prediction for Anomaly Detection – A New Baseline](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Future_Frame_Prediction_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/StevenLiuWen/ano_pred_cvpr2018) | 92 |
| [Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence](http://openaccess.thecvf.com/content_ECCV_2018/html/Shao-Hua_Sun_Multi-view_to_Novel_ECCV_2018_paper.html) | ECCV | [code](https://github.com/shaohua0116/Multiview2Novelview) | 92 |
| [Tell Me Where to Look: Guided Attention Inference Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Tell_Me_Where_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/alokwhitewolf/Guided-Attention-Inference-Network) | 91 |
| [Neural Kinematic Networks for Unsupervised Motion Retargetting](http://openaccess.thecvf.com/content_cvpr_2018/papers/Villegas_Neural_Kinematic_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/rubenvillegas/cvpr2018nkn) | 90 |
| [Learning SO(3) Equivariant Representations with Spherical CNNs](http://openaccess.thecvf.com/content_ECCV_2018/html/Carlos_Esteves_Learning_SO3_Equivariant_ECCV_2018_paper.html) | ECCV | [code](https://github.com/daniilidis-group/spherical-cnn) | 89 |
| [One-Shot Unsupervised Cross Domain Translation](http://arxiv.org/abs/1806.06029v1) | NIPS | [code](https://github.com/sagiebenaim/OneShotTranslation) | 89 |
| [Synthesizing Images of Humans in Unseen Poses](http://openaccess.thecvf.com/content_cvpr_2018/papers/Balakrishnan_Synthesizing_Images_of_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/balakg/posewarp-cvpr2018) | 88 |
| [Depth-aware CNN for RGB-D Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Weiyue_Wang_Depth-aware_CNN_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/laughtervv/DepthAwareCNN) | 88 |
| [Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights](http://openaccess.thecvf.com/content_ECCV_2018/html/Arun_Mallya_Piggyback_Adapting_a_ECCV_2018_paper.html) | ECCV | [code](https://github.com/arunmallya/piggyback) | 88 |
| [Knowledge Aided Consistency for Weakly Supervised Phrase Grounding](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Knowledge_Aided_Consistency_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kanchen-usc/KAC-Net) | 87 |
| [CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_CSRNet_Dilated_Convolutional_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/leeyeehoo/CSRNet-pytorch) | 87 |
| [Neural Arithmetic Logic Units](http://arxiv.org/abs/1808.00508v1) | NIPS | [code](https://github.com/llSourcell/Neural_Arithmetic_Logic_Units) | 87 |
| [A PID Controller Approach for Stochastic Optimization of Deep Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/An_A_PID_Controller_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tensorboy/PIDOptimizer) | 87 |
| [VITAL: VIsual Tracking via Adversarial Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Song_VITAL_VIsual_Tracking_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ybsong00/Vital_release) | 86 |
| [Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Learning_Spatial-Temporal_Regularized_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lifeng9472/STRCF) | 86 |
| [Recurrent Pixel Embedding for Instance Grouping](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kong_Recurrent_Pixel_Embedding_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/aimerykong/Recurrent-Pixel-Embedding-for-Instance-Grouping) | 85 |
| [SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_SGPN_Similarity_Group_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/laughtervv/SGPN) | 84 |
| [Multi-Scale Location-Aware Kernel Representation for Object Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Multi-Scale_Location-Aware_Kernel_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Hwang64/MLKP) | 84 |
| [Repeatability Is Not Enough: Learning Affine Regions via Discriminability](http://openaccess.thecvf.com/content_ECCV_2018/html/Dmytro_Mishkin_Repeatability_Is_Not_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ducha-aiki/affnet) | 84 |
| [“Zero-Shot” Super-Resolution Using Deep Internal Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shocher_Zero-Shot_Super-Resolution_Using_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/assafshocher/ZSSR) | 84 |
| [DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency](http://openaccess.thecvf.com/content_ECCV_2018/html/Yuliang_Zou_DF-Net_Unsupervised_Joint_ECCV_2018_paper.html) | ECCV | [code](https://github.com/vt-vl-lab/DF-Net) | 82 |
| [Multi-View Consistency as Supervisory Signal for Learning Shape and Pose Prediction](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tulsiani_Multi-View_Consistency_as_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/shubhtuls/mvcSnP) | 80 |
| [Factorizable Net: An Efficient Subgraph-based Framework for Scene Graph Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Yikang_LI_Factorizable_Net_An_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yikang-li/FactorizableNet) | 78 |
| [Generalizing A Person Retrieval Model Hetero- and Homogeneously](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhun_Zhong_Generalizing_A_Person_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zhunzhong07/HHL) | 78 |
| [Crafting a Toolchain for Image Restoration by Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_Crafting_a_Toolchain_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yuke93/RL-Restore) | 77 |
| [Pairwise Confusion for Fine-Grained Visual Classification](http://openaccess.thecvf.com/content_ECCV_2018/html/Abhimanyu_Dubey_Improving_Fine-Grained_Visual_ECCV_2018_paper.html) | ECCV | [code](https://github.com/abhimanyudubey/confusion) | 77 |
| [Learning to Reweight Examples for Robust Deep Learning](http://proceedings.mlr.press/v80/ren18a.html) | ICML | [code](https://github.com/danieltan07/learning-to-reweight-examples) | 76 |
| [Improving Generalization via Scalable Neighborhood Component Analysis](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhirong_Wu_Improving_Embedding_Generalization_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Microsoft/snca.pytorch) | 76 |
| [SparseMAP: Differentiable Sparse Structured Inference](http://proceedings.mlr.press/v80/niculae18a.html) | ICML | [code](https://github.com/vene/sparsemap) | 75 |
| [PDE-Net: Learning PDEs from Data](http://proceedings.mlr.press/v80/long18a.html) | ICML | [code](https://github.com/ZichaoLong/PDE-Net) | 75 |
| [Pose-Normalized Image Generation for Person Re-identification](http://openaccess.thecvf.com/content_ECCV_2018/html/Xuelin_Qian_Pose-Normalized_Image_Generation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/naiq/PN_GAN) | 75 |
| [Disentangled Person Image Generation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ma_Disentangled_Person_Image_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/charliememory/Disentangled-Person-Image-Generation) | 75 |
| [Learning to Navigate for Fine-grained Classification](http://openaccess.thecvf.com/content_ECCV_2018/html/Ze_Yang_Learning_to_Navigate_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yangze0930/NTS-Net) | 74 |
| [Superpixel Sampling Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Varun_Jampani_Superpixel_Sampling_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/NVlabs/ssn_superpixels) | 74 |
| [Shift-Net: Image Inpainting via Deep Feature Rearrangement](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Zhaoyi-Yan/Shift-Net_pytorch) | 74 |
| [3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Angela_Dai_3DMV_Joint_3D-Multi-View_ECCV_2018_paper.html) | ECCV | [code](https://github.com/angeladai/3DMV) | 74 |
| [Ordinal Depth Supervision for 3D Human Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Pavlakos_Ordinal_Depth_Supervision_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/geopavlakos/ordinal-pose3d) | 74 |
| [Path-Level Network Transformation for Efficient Architecture Search](http://proceedings.mlr.press/v80/cai18a.html) | ICML | [code](https://github.com/han-cai/PathLevel-EAS) | 73 |
| [Diverse Image-to-Image Translation via Disentangled Representations](http://openaccess.thecvf.com/content_ECCV_2018/html/Hsin-Ying_Lee_Diverse_Image-to-Image_Translation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/taki0112/DRIT-Tensorflow) | 72 |
| [Visual Feature Attribution Using Wasserstein GANs](http://openaccess.thecvf.com/content_cvpr_2018/papers/Baumgartner_Visual_Feature_Attribution_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/orobix/Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch) | 72 |
| [Real-World Anomaly Detection in Surveillance Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sultani_Real-World_Anomaly_Detection_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/WaqasSultani/AnomalyDetectionCVPR2018) | 72 |
| [Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Self-Supervised_Adversarial_Hashing_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lelan-li/SSAH) | 72 |
| [Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image](http://openaccess.thecvf.com/content_ECCV_2018/html/Siyuan_Huang_Monocular_Scene_Parsing_ECCV_2018_paper.html) | ECCV | [code](https://github.com/thusiyuan/holistic_scene_parsing) | 72 |
| [Learning to Find Good Correspondences](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yi_Learning_to_Find_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/vcg-uvic/learned-correspondence-release) | 72 |
| [Learning Less Is More - 6D Camera Localization via 3D Surface Regression](http://openaccess.thecvf.com/content_cvpr_2018/papers/Brachmann_Learning_Less_Is_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/vislearn/LessMore) | 72 |
| [Object Level Visual Reasoning in Videos](http://openaccess.thecvf.com/content_ECCV_2018/html/Fabien_Baradel_Object_Level_Visual_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fabienbaradel/object_level_visual_reasoning) | 71 |
| [Weakly-Supervised Semantic Segmentation Network With Deep Seeded Region Growing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Weakly-Supervised_Semantic_Segmentation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/speedinghzl/DSRG) | 71 |
| [Avatar-Net: Multi-Scale Zero-Shot Style Transfer by Feature Decoration](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sheng_Avatar-Net_Multi-Scale_Zero-Shot_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/LucasSheng/avatar-net) | 71 |
| [Fast and Accurate Single Image Super-Resolution via Information Distillation Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hui_Fast_and_Accurate_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Zheng222/IDN-Caffe) | 71 |
| [Regularizing RNNs for Caption Generation by Reconstructing the Past With the Present](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Regularizing_RNNs_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/chenxinpeng/ARNet) | 70 |
| [Multi-Shot Pedestrian Re-Identification via Sequential Decision Making](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Multi-Shot_Pedestrian_Re-Identification_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/TuSimple/rl-multishot-reid) | 70 |
| [PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Uy_PointNetVLAD_Deep_Point_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/mikacuy/pointnetvlad) | 69 |
| [Progressive Neural Architecture Search](http://openaccess.thecvf.com/content_ECCV_2018/html/Chenxi_Liu_Progressive_Neural_Architecture_ECCV_2018_paper.html) | ECCV |
[code](https:\u002F\u002Fgithub.com\u002Ftitu1994\u002Fprogressive-neural-architecture-search) | 68 | \n| [Generative Neural Machine Translation](http:\u002F\u002Farxiv.org\u002Fabs\u002F1806.05138v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FZhenYangIACAS\u002FNMT_GAN) | 68 | \n| [Learning Latent Super-Events to Detect Multiple Activities in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPiergiovanni_Learning_Latent_Super-Events_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpiergiaj\u002Fsuper-events-cvpr18) | 67 | \n| [Generate to Adapt: Aligning Domains Using Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSankaranarayanan_Generate_to_Adapt_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyogeshbalaji\u002FGenerate_To_Adapt) | 67 | \n| [Adversarial Feature Augmentation for Unsupervised Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FVolpi_Adversarial_Feature_Augmentation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fricvolpi\u002Fadversarial-feature-augmentation) | 67 | \n| [Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Learning_Attentions_Residual_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffoolwood\u002FRASNet) | 67 | \n| [Pointwise Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHua_Pointwise_Convolutional_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fscenenn\u002Fpointwise) | 67 | \n| [Optimizing the Latent Space of Generative Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fbojanowski18a.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002Ftneumann\u002Fminimal_glo) | 66 | \n| [Part-Aligned Bilinear Representations for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYumin_Suh_Part-Aligned_Bilinear_Representations_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fyuminsuh\u002Fpart_bilinear_reid) | 64 | \n| [Geometry-Aware Learning of Maps for Camera Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBrahmbhatt_Geometry-Aware_Learning_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsamarth-robo\u002FMapNet) | 63 | \n| [Fighting Fake News: Image Splice Detection via Learned Self-Consistency](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FJacob_Huh_Fighting_Fake_News_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fminyoungg\u002Fselfconsistency) | 62 | \n| [Isolating Sources of Disentanglement in Variational Autoencoders](http:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04942v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frtqichen\u002Fbeta-tcvae) | 62 | \n| [Neural Program Synthesis from Diverse Demonstration Videos](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsun18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fshaohua0116\u002Fdemo2program) | 62 | \n| [Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZhaoyang_Lv_Learning_Rigidity_in_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Flearningrigidity) | 61 | \n| [Rotation-Sensitive Regression for Oriented Scene Text Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLiao_Rotation-Sensitive_Regression_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FMhLiao\u002FRRD) | 61 | \n| [Human 
Semantic Parsing for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKalayeh_Human_Semantic_Parsing_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Femrahbasaran\u002FSPReID) | 61 | \n| [Unsupervised Discovery of Object Landmarks as Structural Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Unsupervised_Discovery_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYutingZhang\u002Flmdis-rep) | 61 | \n| [IQA: Visual Question Answering in Interactive Environments](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FGordon_IQA_Visual_Question_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanielgordon10\u002Fthor-iqa-cvpr-2018) | 60 | \n| [Hierarchical Long-term Video Prediction without Supervision](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fwichers18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbrain-research\u002Flong-term-video-prediction-without-supervision) | 60 | \n| [Unsupervised Domain Adaptation for 3D Keypoint Estimation via View Consistency](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXingyi_Zhou_Unsupervised_Domain_Adaptation_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002F3DKeypoints-DA) | 60 | \n| [Exploit the Unknown Gradually: One-Shot Video-Based Person Re-Identification by Stepwise Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_Exploit_the_Unknown_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYu-Wu\u002FExploit-Unknown-Gradually) | 59 | \n| [Neural Style Transfer via Meta Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FShen_Neural_Style_Transfer_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FFalongShen\u002Fstyletransfer) 
| 59 | \n| [Frame-Recurrent Video Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSajjadi_Frame-Recurrent_Video_Super-Resolution_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmsmsajjadi\u002FFRVSR) | 58 | \n| [PlaneMatch: Patch Coplanarity Prediction for Robust RGB-D Reconstruction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYifei_Shi_PlaneMatch_Patch_Coplanarity_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fyifeishi\u002FPlaneMatch) | 57 | \n| [CBAM: Convolutional Block Attention Module](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSanghyun_Woo_Convolutional_Block_Attention_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FYoungkl0726\u002FConvolutional-Block-Attention-Module) | 57 | \n| [Decorrelated Batch Normalization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHuang_Decorrelated_Batch_Normalization_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fumich-vl\u002FDecorrelatedBN) | 57 | \n| [Learning Conditioned Graph Structures for Interpretable Visual Question Answering](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Faimbrain\u002Fvqa-project) | 57 | \n| [Hierarchical Bilinear Pooling for Fine-Grained Visual Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FChaojian_Yu_Hierarchical_Bilinear_Pooling_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FChaojianYu\u002FHierarchical-Bilinear-Pooling) | 57 | \n| [Leveraging Unlabeled Data for Crowd Counting by Learning to Rank](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLiu_Leveraging_Unlabeled_Data_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxialeiliu\u002FCrowdCountingCVPR18) | 56 | \n| [Deep Marching Cubes: Learning Explicit Surface 
Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLiao_Deep_Marching_Cubes_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyiyiliao\u002Fdeep_marching_cubes) | 56 | \n| [Learning From Synthetic Data: Addressing Domain Shift for Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSankaranarayanan_Learning_From_Synthetic_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fswamiviv\u002FLSD-seg) | 56 | \n| [LF-Net: Learning Local Features from Images](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09662) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fvcg-uvic\u002Flf-net-release) | 55 | \n| [Semi-supervised Adversarial Learning to Generate Photorealistic Face Images of New Identities from 3D Morphable Model](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FBaris_Gecer_Semi-supervised_Adversarial_Learning_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fbarisgecer\u002Ffacegan) | 55 | \n| [Discriminability Objective for Training Descriptive Captions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLuo_Discriminability_Objective_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002FDiscCaptioning) | 54 | \n| [BlockDrop: Dynamic Inference Paths in Residual Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_BlockDrop_Dynamic_Inference_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FTushar-N\u002Fblockdrop) | 54 | \n| [Conditional Probability Models for Deep Image Compression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMentzer_Conditional_Probability_Models_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffab-jul\u002Fimgcomp-cvpr) | 54 | \n| [Jointly Optimize Data Augmentation and Network 
Training: Adversarial Data Augmentation in Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPeng_Jointly_Optimize_Data_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhiqiangdon\u002Fpose-adv-aug) | 54 | \n| [Learning towards Minimum Hyperspherical Energy](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09298v4) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwy1iu\u002FMHE) | 54 | \n| [DeepVS: A Deep Learning Based Video Saliency Prediction Approach](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLai_Jiang_DeepVS_A_Deep_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fremega\u002FOMCNN_2CLSTM) | 53 | \n| [Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FWei_Liu_Learning_Efficient_Single-stage_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fliuwei16\u002FALFNet) | 52 | \n| [Learning Pixel-Level Semantic Affinity With Image-Level Supervision for Weakly Supervised Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAhn_Learning_Pixel-Level_Semantic_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiwoon-ahn\u002Fpsa) | 52 | \n| [Wasserstein Introspective Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLee_Wasserstein_Introspective_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkjunelee\u002FWINN) | 51 | \n| [SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_SketchyGAN_Towards_Diverse_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwchen342\u002FSketchyGAN) | 51 | \n| [Self-produced Guidance for Weakly-supervised Object 
Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiaolin_Zhang_Self-produced_Guidance_for_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fxiaomengyc\u002FSPG) | 51 | \n| [Measuring abstract reasoning in neural networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsantoro18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fabstract-reasoning-matrices) | 51 | \n| [A Unified Feature Disentangler for Multi-Domain Image Translation and Manipulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.01361) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FXenderLiu\u002FUFDN) | 51 | \n| [RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPaschalidou_RayNet_Learning_Volumetric_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpaschalidoud\u002Fraynet) | 51 | \n| [Coloring with Words: Guiding Image Colorization Through Text-based Palette Generation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHyojin_Bahng_Coloring_with_Words_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fawesome-davian\u002FText2Colors) | 50 | \n| [Efficient end-to-end learning for quantizable representations](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fjeong18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmaestrojeong\u002FDeep-Hash-Table-ICML18) | 50 | \n| [Visual Question Generation as Dual Task of Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Visual_Question_Generation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyikang-li\u002FiQAN) | 50 | \n| [Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fkhan18a.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002Femtiyaz\u002Fvadam) | 49 | \n| [Surface Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKostrikov_Surface_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiangzhongshi\u002FSurfaceNetworks) | 48 | \n| [Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fwu18h.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FSandbox3aster\u002FDeep-K-Means-pytorch) | 48 | \n| [Stacked Cross Attention for Image-Text Matching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FKuang-Huei_Lee_Stacked_Cross_Attention_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fkuanghuei\u002FSCAN) | 48 | \n| [Actor and Observer: Joint Modeling of First and Third-Person Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSigurdsson_Actor_and_Observer_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgsig\u002Factor-observer) | 48 | \n| [Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FJiang_Super_SloMo_High_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FTheFairBear\u002FSuper-SlowMo) | 47 | \n| [Learning-based Video Motion Magnification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FTae-Hyun_Oh_Learning-based_Video_Motion_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002F12dmodel\u002Fdeep_motion_mag) | 47 | \n| [Pose Partition Networks for Multi-Person Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXuecheng_Nie_Pose_Partition_Networks_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002FNieXC\u002Fpytorch-ppn) | 47 | \n| [Neural Autoregressive Flows](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fhuang18d.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FCW-Huang\u002FNAF) | 47 | \n| [Weakly- and Semi-Supervised Panoptic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAnurag_Arnab_Weakly-_and_Semi-Supervised_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fqizhuli\u002FWeakly-Supervised-Panoptic-Segmentation) | 46 | \n| [Video Re-localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYang_Feng_Video_Re-localization_via_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffengyang0317\u002Fvideo_reloc) | 46 | \n| [Real-time 'Actor-Critic' Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FBoyu_Chen_Real-time_Actor-Critic_Tracking_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fbychen515\u002FACT) | 46 | \n| [Black-box Adversarial Attacks with Limited Queries and Information](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Filyas18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flabsix\u002Flimited-blackbox-attacks) | 46 | \n| [Hyperbolic Entailment Cones for Learning Hierarchical Embeddings](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fganea18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdalab\u002Fhyperbolic_cones) | 46 | \n| [Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FXu_Structured_Attention_Guided_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanxuhk\u002FStructuredAttentionDepthEstimation) | 46 | \n| [Differentiable Compositional Kernel Learning for Gaussian Processes](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsun18e.html) | 
ICML | [code](https:\u002F\u002Fgithub.com\u002Fssydasheng\u002FNeural-Kernel-Network) | 45 | \n| [Visualizing and Understanding Atari Agents](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fgreydanus18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fgreydanus\u002Fvisualize_atari) | 45 | \n| [Image Manipulation with Perceptual Discriminators](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FDiana_Sungatullina_Image_Manipulation_with_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fegorzakharov\u002FPerceptualGAN) | 45 | \n| [Learning Intrinsic Image Decomposition From Watching the World](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Learning_Intrinsic_Image_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flixx2938\u002Funsupervised-learning-intrinsic-images) | 45 | \n| [Overcoming Catastrophic Forgetting with Hard Attention to the Task](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fserra18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjoansj\u002Fhat) | 44 | \n| [Learning Pose Specific Representations by Predicting Different Views](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPoier_Learning_Pose_Specific_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpoier\u002FPreView) | 44 | \n| [Zero-Shot Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAnkan_Bansal_Zero-Shot_Object_Detection_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsalman-h-khan\u002FZSD_Release) | 43 | \n| [Mean Field Multi-Agent Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fyang18d.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmlii\u002Fmfrl) | 43 | \n| [Partial Adversarial Domain 
Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZhangjie_Cao_Partial_Adversarial_Domain_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FPADA) | 43 | \n| [Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXuecheng_Nie_Mutual_Learning_to_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FNieXC\u002Fpytorch-mula) | 43 | \n| [Robust Classification With Convolutional Prototype Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYang_Robust_Classification_With_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYangHM\u002FConvolutional-Prototype-Learning) | 43 | \n| [SimplE Embedding for Link Prediction in Knowledge Graphs](http:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04868v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FMehran-k\u002FSimplE) | 42 | \n| [PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fwang18b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FYunbo426\u002Fpredrnn-pp) | 42 | \n| [Learning to Blend Photos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FWei-Chih_Hung_Learning_to_Blend_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fhfslyc\u002FLearnToBlend) | 42 | \n| [Mask-Guided Contrastive Attention Model for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSong_Mask-Guided_Contrastive_Attention_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdevelopfeng\u002FMGCAM) | 41 | \n| [Link Prediction Based on Graph Neural Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09691v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmuhanzhang\u002FSEAL) | 41 | \n| 
[Generalisation in humans and deep neural networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1808.08750v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frgeirhos\u002Fgeneralisation-humans-DNNs) | 41 | \n| [Towards Binary-Valued Gates for Robust LSTM Training](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fli18c.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fzhuohan123\u002Fg2-lstm) | 41 | \n| [Multi-scale Residual Network for Image Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FJuncheng_Li_Multi-scale_Residual_Network_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FMIVRC\u002FMSRN-PyTorch) | 41 | \n| [Fully Motion-Aware Network for Video Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FShiyao_Wang_Fully_Motion-Aware_Network_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fwangshy31\u002FMANet_for_Video_Object_Detection) | 41 | \n| [Interpretable Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Interpretable_Convolutional_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fseongjunyun\u002FCNN-with-Dual-Local-and-Global-Attention) | 40 | \n| [Generative Adversarial Perturbations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPoursaeed_Generative_Adversarial_Perturbations_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FOmidPoursaeed\u002FGenerative_Adversarial_Perturbations) | 40 | \n| [The Sound of Pixels](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHang_Zhao_The_Sound_of_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Froudimit\u002FMUSIC_dataset) | 40 | \n| [Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root 
Normalization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Towards_Faster_Training_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiangtaoxie\u002Ffast-MPN-COV) | 40 | \n| [Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FRamprasaath_Ramasamy_Selvaraju_Choose_Your_Neuron_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Framprs\u002Fneuron-importance-zsl) | 40 | \n| [Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09987) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FEdwardSmith1884\u002FMulti-View-Silhouette-and-Depth-Decomposition-for-High-Resolution-3D-Object-Representation) | 40 | \n| [Learning Warped Guidance for Blind Face Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiaoming_Li_Learning_Warped_Guidance_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fcsxmli2016\u002FGFRNet) | 39 | \n| [Adversarial Complementary Learning for Weakly Supervised Object Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Adversarial_Complementary_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxiaomengyc\u002FACoL) | 39 | \n| [Learning Semantic Representations for Unsupervised Domain Adaptation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fxie18c.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FMid-Push\u002FMoving-Semantic-Transfer-Network) | 39 | \n| [Neural Architecture Search with Bayesian Optimisation and Optimal Transport](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.07191) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fkirthevasank\u002Fnasbot) | 39 | \n| [Mutual Information Neural 
Estimation](http://proceedings.mlr.press/v80/belghazi18a.html) | ICML | [code](https://github.com/MasanoriYamada/Mine_pytorch) | 39 |
| [NetGAN: Generating Graphs via Random Walks](http://proceedings.mlr.press/v80/bojchevski18a.html) | ICML | [code](https://github.com/danielzuegner/netgan) | 39 |
| [Learning to Evaluate Image Captioning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cui_Learning_to_Evaluate_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/richardaecn/cvpr18-caption-eval) | 38 |
| [Hyperbolic Neural Networks](http://arxiv.org/abs/1805.09112v2) | NIPS | [code](https://github.com/dalab/hyperbolic_nn) | 37 |
| [Unsupervised Geometry-Aware Representation for 3D Human Pose Estimation](http://openaccess.thecvf.com/content_ECCV_2018/html/Helge_Rhodin_Unsupervised_Geometry-Aware_Representation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hrhodin/UnsupervisedGeometryAwareRepresentationLearning) | 37 |
| [Adversarially Learned One-Class Classifier for Novelty Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sabokrou_Adversarially_Learned_One-Class_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/khalooei/ALOCC-CVPR2018) | 37 |
| [Disentangling by Factorising](http://proceedings.mlr.press/v80/kim18b.html) | ICML | [code](https://github.com/1Konny/FactorVAE) | 37 |
| [Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples](http://proceedings.mlr.press/v80/weiss18a.html) | ICML | [code](https://github.com/tech-srl/lstar_extraction) | 37 |
| [Tangent Convolutions for Dense Prediction in 3D](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tatarchenko_Tangent_Convolutions_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tatarchm/tangent_conv) | 37 |
| [Few-Shot Image Recognition by Predicting Parameters From Activations](http://openaccess.thecvf.com/content_cvpr_2018/papers/Qiao_Few-Shot_Image_Recognition_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/joe-siyuan-qiao/FewShot-CVPR) | 37 |
| [Real-Time Monocular Depth Estimation Using Synthetic Data With Domain Adaptation via Image Style Transfer](http://openaccess.thecvf.com/content_cvpr_2018/papers/Atapour-Abarghouei_Real-Time_Monocular_Depth_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/atapour/monocularDepth-Inference) | 37 |
| [Generalizing to Unseen Domains via Adversarial Data Augmentation](http://arxiv.org/abs/1805.12018v1) | NIPS | [code](https://github.com/ricvolpi/generalize-unseen-domains) | 36 |
| [SeGAN: Segmenting and Generating the Invisible](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ehsani_SeGAN_Segmenting_and_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ehsanik/SeGAN) | 36 |
| [Graphical Generative Adversarial Networks](http://arxiv.org/abs/1804.03429v1) | NIPS | [code](https://github.com/zhenxuan00/graphical-gan) | 36 |
| [PieAPP: Perceptual Image-Error Assessment Through Pairwise Preference](http://openaccess.thecvf.com/content_cvpr_2018/papers/Prashnani_PieAPP_Perceptual_Image-Error_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/prashnani/PerceptualImageError) | 36 |
| [Gated Fusion Network for Single Image Dehazing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ren_Gated_Fusion_Network_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/rwenqi/GFN-dehazing) | 35 |
| [Neural Code Comprehension: A Learnable Representation of Code Semantics](http://arxiv.org/abs/1806.07336v2) | NIPS | [code](https://github.com/spcl/ncc) | 35 |
| [Eye In-Painting With Exemplar Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Dolhansky_Eye_In-Painting_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhangqianhui/Exemplar-GAN-Eye-Inpainting-Tensorflow) | 35 |
| [Deep One-Class Classification](http://proceedings.mlr.press/v80/ruff18a.html) | ICML | [code](https://github.com/lukasruff/Deep-SVDD) | 34 |
| [Deep Regression Tracking with Shrinkage Loss](http://openaccess.thecvf.com/content_ECCV_2018/html/Xiankai_Lu_Deep_Regression_Tracking_ECCV_2018_paper.html) | ECCV | [code](https://github.com/chaoma99/DSLT) | 34 |
| [Deflecting Adversarial Attacks With Pixel Deflection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Prakash_Deflecting_Adversarial_Attacks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/iamaaditya/pixel-deflection) | 34 |
| [Learning Visual Question Answering by Bootstrapping Hard Attention](http://openaccess.thecvf.com/content_ECCV_2018/html/Mateusz_Malinowski_Learning_Visual_Question_ECCV_2018_paper.html) | ECCV | [code](https://github.com/gnouhp/PyTorch-AdaHAN) | 33 |
| [Human-Centric Indoor Scene Synthesis Using Stochastic Grammar](http://openaccess.thecvf.com/content_cvpr_2018/papers/Qi_Human-Centric_Indoor_Scene_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/SiyuanQi/human-centric-scene-synthesis) | 33 |
| [Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering](http://openaccess.thecvf.com/content_cvpr_2018/papers/Nguyen_Improved_Fusion_of_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/cvlab-tohoku/Dense-CoAttention-Network) | 33 |
| [CleanNet: Transfer Learning for Scalable Image Classifier Training With Label Noise](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lee_CleanNet_Transfer_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kuanghuei/clean-net) | 33 |
| [Speaker-Follower Models for Vision-and-Language Navigation](https://arxiv.org/abs/1806.02724) | NIPS | [code](https://github.com/ronghanghu/speaker_follower) | 33 |
| [Improving Shape Deformation in Unsupervised Image-to-Image Translation](http://openaccess.thecvf.com/content_ECCV_2018/html/Aaron_Gokaslan_Improving_Shape_Deformation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/brownvc/ganimorph) | 33 |
| [Learning Single-View 3D Reconstruction with Limited Pose Supervision](http://openaccess.thecvf.com/content_ECCV_2018/html/Guandao_Yang_A_Unified_Framework_ECCV_2018_paper.html) | ECCV | [code](https://github.com/stevenygd/3d-recon) | 33 |
| [3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data](https://arxiv.org/abs/1807.02547) | NIPS | [code](https://github.com/mariogeiger/se3cnn) | 33 |
| [Adversarial Logit Pairing](http://arxiv.org/abs/1803.06373v1) | NIPS | [code](https://github.com/labsix/adversarial-logit-pairing-analysis) | 32 |
| [Attention in Convolutional LSTM for Gesture Recognition](https://nips.cc/Conferences/2018/Schedule?showEvent=11207) | NIPS | [code](https://github.com/GuangmingZhu/AttentionConvLSTM) | 32 |
| [Graph-Cut RANSAC](http://openaccess.thecvf.com/content_cvpr_2018/papers/Barath_Graph-Cut_RANSAC_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/danini/graph-cut-ransac) | 32 |
| [Neural Guided Constraint Logic Programming for Program Synthesis](http://arxiv.org/abs/1809.02840v2) | NIPS | [code](https://github.com/xuexue/neuralkanren) | 32 |
| [Learning Dynamic Memory Networks for Object Tracking](http://openaccess.thecvf.com/content_ECCV_2018/html/Tianyu_Yang_Learning_Dynamic_Memory_ECCV_2018_paper.html) | ECCV | [code](https://github.com/skyoung/MemTrack) | 32 |
| [GeoDesc: Learning Local Descriptors by Integrating Geometry Constraints](http://openaccess.thecvf.com/content_ECCV_2018/html/Zixin_Luo_Learning_Local_Descriptors_ECCV_2018_paper.html) | ECCV | [code](https://github.com/lzx551402/geodesc) | 32 |
| [A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks](nan) | NIPS | [code](https://github.com/pokaxpoka/deep_Mahalanobis_detector) | 32 |
| [Flow-Grounded Spatial-Temporal Video Prediction from Still Images](http://openaccess.thecvf.com/content_ECCV_2018/html/Yijun_Li_Flow-Grounded_Spatial-Temporal_Video_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction) | 32 |
| [Bidirectional Feature Pyramid Network with Recurrent Attention Residual Modules for Shadow Detection](http://openaccess.thecvf.com/content_ECCV_2018/html/Lei_Zhu_Bi-directional_Feature_Pyramid_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zijundeng/BDRAR) | 32 |
| [On the Robustness of Semantic Segmentation Models to Adversarial Attacks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Arnab_On_the_Robustness_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hmph/adversarial-attacks) | 31 |
| [Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cui_Large_Scale_Fine-Grained_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/richardaecn/cvpr18-inaturalist-transfer) | 31 |
| [SketchyScene: Richly-Annotated Scene Sketches](http://openaccess.thecvf.com/content_ECCV_2018/html/Changqing_Zou_SketchyScene_Richly-Annotated_Scene_ECCV_2018_paper.html) | ECCV | [code](https://github.com/SketchyScene/SketchyScene) | 31 |
| [Deep Randomized Ensembles for Metric Learning](http://openaccess.thecvf.com/content_ECCV_2018/html/Hong_Xuan_Randomized_Ensemble_Embeddings_ECCV_2018_paper.html) | ECCV | [code](https://github.com/littleredxh/DREML) | 30 |
| [Deep High Dynamic Range Imaging with Large Foreground Motions](http://openaccess.thecvf.com/content_ECCV_2018/html/Shangzhe_Wu_Deep_High_Dynamic_ECCV_2018_paper.html) | ECCV | [code](https://github.com/elliottwu/DeepHDR) | 30 |
| [Revisiting Video Saliency: A Large-Scale Benchmark and a New Model](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Revisiting_Video_Saliency_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/wenguanwang/DHF1K) | 30 |
| [Blazingly Fast Video Object Segmentation With Pixel-Wise Metric Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Blazingly_Fast_Video_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yuhuayc/fast-vos) | 30 |
| [Deep Model-Based 6D Pose Refinement in RGB](http://openaccess.thecvf.com/content_ECCV_2018/html/Fabian_Manhardt_Deep_Model-Based_6D_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fabi92/eccv18-rgb_pose_refinement) | 30 |
| [TOM-Net: Learning Transparent Object Matting From a Single Image](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_TOM-Net_Learning_Transparent_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/guanyingc/TOM-Net) | 30 |
| [Quaternion Convolutional Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Xuanyu_Zhu_Quaternion_Convolutional_Neural_ECCV_2018_paper.html) | ECCV | [code](https://github.com/TParcollet/Quaternion-Convolutional-Neural-Networks-for-End-to-End-Automatic-Speech-Recognition) | 30 |
| [Densely Connected Attention Propagation for Reading Comprehension](https://nips.cc/Conferences/2018/Schedule?showEvent=11481) | NIPS | [code](https://github.com/vanzytay/NIPS2018_DECAPROP) | 30 |
| [A Trilateral Weighted Sparse Coding Scheme for Real-World Image Denoising](http://openaccess.thecvf.com/content_ECCV_2018/html/XU_JUN_A_Trilateral_Weighted_ECCV_2018_paper.html) | ECCV | [code](https://github.com/csjunxu/TWSC-ECCV2018) | 30 |
| [Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings](http://proceedings.mlr.press/v80/co-reyes18a.html) | ICML | [code](https://github.com/wyndwarrior/Sectar) | 29 |
| [Video Rain Streak Removal by Multiscale Convolutional Sparse Coding](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Video_Rain_Streak_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/MinghanLi/MS-CSC-Rain-Streak-Removal) | 29 |
| [Recurrent Scene Parsing With Perspective Understanding in the Loop](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kong_Recurrent_Scene_Parsing_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/aimerykong/Recurrent-Scene-Parsing-with-Perspective-Understanding-in-the-loop) | 29 |
| [Single Shot Scene Text Retrieval](http://openaccess.thecvf.com/content_ECCV_2018/html/Lluis_Gomez_Single_Shot_Scene_ECCV_2018_paper.html) | ECCV | [code](https://github.com/lluisgomez/single-shot-str) | 29 |
| [Toward Characteristic-Preserving Image-based Virtual Try-On Network](http://openaccess.thecvf.com/content_ECCV_2018/html/Bochao_Wang_Toward_Characteristic-Preserving_Image-based_ECCV_2018_paper.html) | ECCV | [code](https://github.com/sergeywong/cp-vton) | 29 |
| [Explainable Neural Computation via Stack Neural Module Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Ronghang_Hu_Explainable_Neural_Computation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ronghanghu/snmn) | 29 |
| [Exploring Disentangled Feature Representation Beyond Face Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Exploring_Disentangled_Feature_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/sciencefans/D2AE-Face-Generator) | 29 |
| [Controllable Video Generation With Sparse Trajectories](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hao_Controllable_Video_Generation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zekunhao1995/ControllableVideoGen) | 28 |
| [Layer-structured 3D Scene Inference via View Synthesis](http://openaccess.thecvf.com/content_ECCV_2018/html/Shubham_Tulsiani_Layer-structured_3D_Scene_ECCV_2018_paper.html) | ECCV | [code](https://github.com/google/layered-scene-inference) | 28 |
| [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Liang-Chieh_Chen_Encoder-Decoder_with_Atrous_ECCV_2018_paper.html) | ECCV | [code](https://github.com/qixuxiang/deeplabv3plus) | 28 |
| [PiCANet: Learning Pixel-Wise Contextual Attention for Saliency Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_PiCANet_Learning_Pixel-Wise_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Ugness/PiCANet-Implementation) | 28 |
| [Learning Rich Features for Image Manipulation Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Learning_Rich_Features_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/LarryJiang134/Image_manipulation_detection) | 27 |
| [Fast Video Object Segmentation by Reference-Guided Mask Propagation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Oh_Fast_Video_Object_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/seoungwugoh/RGMP) | 27 |
| [3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration](http://openaccess.thecvf.com/content_ECCV_2018/html/Zi_Jian_Yew_3DFeat-Net_Weakly_Supervised_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yewzijian/3DFeatNet) | 27 |
| [Who Let the Dogs Out? Modeling Dog Behavior From Visual Data](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ehsani_Who_Let_the_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ehsanik/dogTorch) | 27 |
| [EC-Net: an Edge-aware Point set Consolidation Network](http://openaccess.thecvf.com/content_ECCV_2018/html/Lequan_Yu_EC-Net_an_Edge-aware_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yulequan/EC-Net) | 27 |
| [Interpretable Intuitive Physics Model](http://openaccess.thecvf.com/content_ECCV_2018/html/Tian_Ye_Interpretable_Intuitive_Physics_ECCV_2018_paper.html) | ECCV | [code](https://github.com/tianye95/interpretable-intuitive-physics-model) | 27 |
| [Learning a Discriminative Feature Network for Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_Learning_a_Discriminative_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lxtGH/dfn_seg) | 26 |
| [Partial Transfer Learning With Selective Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_Partial_Transfer_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/thuml/SAN) | 26 |
| [Cross-Modal Deep Variational Hand Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Spurr_Cross-Modal_Deep_Variational_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/spurra/vae-hands-3d) | 26 |
| [Between-Class Learning for Image Classification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tokozume_Between-Class_Learning_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/mil-tokyo/bc_learning_image) | 26 |
| [AON: Towards Arbitrarily-Oriented Text Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cheng_AON_Towards_Arbitrarily-Oriented_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/huizhang0110/AON) | 26 |
| [Conditional Image-to-Image Translation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lin_Conditional_Image-to-Image_Translation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/znxlwm/pytorch-Conditional-image-to-image-translation) | 25 |
| [Learning Convolutional Networks for Content-Weighted Image Compression](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Learning_Convolutional_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/limuhit/ImageCompression) | 25 |
| [Diversity Regularized Spatiotemporal Attention for Video-Based Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Diversity_Regularized_Spatiotemporal_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ShuangLI59/Diversity-Regularized-Spatiotemporal-Attention) | 25 |
| [Dynamic Multimodal Instance Segmentation Guided by Natural Language Queries](http://openaccess.thecvf.com/content_ECCV_2018/html/Edgar_Margffoy-Tuay_Dynamic_Multimodal_Instance_ECCV_2018_paper.html) | ECCV | [code](https://github.com/BCV-Uniandes/query-objseg) | 25 |
| [CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Batsos_CBMV_A_Coalesced_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kbatsos/CBMV) | 25 |
| [Deep Texture Manifold for Ground Terrain Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xue_Deep_Texture_Manifold_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jiaxue1993/Deep-Encoding-Pooling-Network-DEP-) | 25 |
| [Audio-Visual Event Localization in Unconstrained Videos](http://openaccess.thecvf.com/content_ECCV_2018/html/Yapeng_Tian_Audio-Visual_Event_Localization_ECCV_2018_paper.html) | ECCV | [code](https://github.com/YapengTian/AVE-ECCV18) | 25 |
| [First Order Generative Adversarial Networks](http://proceedings.mlr.press/v80/seward18a.html) | ICML | [code](https://github.com/zalandoresearch/first_order_gan) | 25 |
| [Visual Coreference Resolution in Visual Dialog using Neural Module Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Satwik_Kottur_Visual_Coreference_Resolution_ECCV_2018_paper.html) | ECCV | [code](https://github.com/facebookresearch/corefnmn) | 25 |
| [SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Faraone_SYQ_Learning_Symmetric_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/julianfaraone/SYQ) | 24 |
| [Deep Reinforcement Learning of Marked Temporal Point Processes](http://arxiv.org/abs/1805.09360v1) | NIPS | [code](https://github.com/Networks-Learning/tpprl) | 24 |
| [Explicit Inductive Bias for Transfer Learning with Convolutional Networks](http://proceedings.mlr.press/v80/li18a.html) | ICML | [code](https://github.com/holyseven/TransferLearningClassification) | 24 |
| [LEGO: Learning Edge With Geometry All at Once by Watching Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yang_LEGO_Learning_Edge_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhenheny/LEGO) | 24 |
| [Verisimilar Image Synthesis for Accurate Detection and Recognition of Texts in Scenes](http://openaccess.thecvf.com/content_ECCV_2018/html/Fangneng_Zhan_Verisimilar_Image_Synthesis_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fnzhan/Verisimilar-Image-Synthesis-for-Accurate-Detection-and-Recognition-of-Texts-in-Scenes) | 24 |
| [Multi-Agent Diverse Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ghosh_Multi-Agent_Diverse_Generative_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/arnabgho/MADGAN) | 23 |
| [Face Aging With Identity-Preserved Conditional Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Face_Aging_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/dawei6875797/Face-Aging-with-Identity-Preserved-Conditional-Generative-Adversarial-Networks) | 23 |
| [Learning to Separate Object Sounds by Watching Unlabeled Video](http://openaccess.thecvf.com/content_ECCV_2018/html/Ruohan_Gao_Learning_to_Separate_ECCV_2018_paper.html) | ECCV | [code](https://github.com/rhgao/separating-object-sounds) | 23 |
| [Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search](http://proceedings.mlr.press/v80/suganuma18a.html) | ICML | [code](https://github.com/sg-nm/Evolutionary-Autoencoders) | 23 |
| [To Trust Or Not To Trust A Classifier](http://arxiv.org/abs/1805.11783v1) | NIPS | [code](https://github.com/google/TrustScore) | 23 |
| [Im2Flow: Motion Hallucination From Static Images for Action Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gao_Im2Flow_Motion_Hallucination_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/rhgao/Im2Flow) | 22 |
| [ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jianzhangcs/ISTA-Net) | 22 |
| [Hallucinated-IQA: No-Reference Image Quality Assessment via Adversarial Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lin_Hallucinated-IQA_No-Reference_Image_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kwanyeelin/HIQA) | 22 |
| [Anonymous Walk Embeddings](http://proceedings.mlr.press/v80/ivanov18a.html) | ICML | [code](https://github.com/nd7141/AWE) | 22 |
| [Learning to Multitask](http://arxiv.org/abs/1805.07541v1) | NIPS | [code](https://github.com/jfutoma/MGP-RNN) | 22 |
| [CondenseNet: An Efficient DenseNet Using Learned Group Convolutions](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_CondenseNet_An_Efficient_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/markdtw/condensenet-tensorflow) | 22 |
| [HashGAN: Deep Learning to Hash With Pair Conditional Wasserstein GAN](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_HashGAN_Deep_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/thuml/HashGAN) | 22 |
| [Hierarchical Relational Networks for Group Activity Recognition and Retrieval](http://openaccess.thecvf.com/content_ECCV_2018/html/Mostafa_Ibrahim_Hierarchical_Relational_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/mostafa-saad/hierarchical-relational-network) | 22 |
| [Collaborative and Adversarial Network for Unsupervised Domain Adaptation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Collaborative_and_Adversarial_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/mahfuj9346449/iCAN) | 22 |
| [Geometry-Aware Scene Text Detection With Instance Transformation Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Geometry-Aware_Scene_Text_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zlmzju/itn) | 22 |
| [Learning to Promote Saliency Detectors](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zeng_Learning_to_Promote_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zengxianyu/lps) | 21 |
| [CSGNet: Neural Shape Parser for Constructive Solid Geometry](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sharma_CSGNet_Neural_Shape_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Hippogriff/CSGNet) | 21 |
| [Local Spectral Graph Convolution for Point Set Feature Learning](http://openaccess.thecvf.com/content_ECCV_2018/html/Chu_Wang_Local_Spectral_Graph_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fate3439/LocalSpecGCN) | 21 |
| [HiDDeN: Hiding Data with Deep Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Jiren_Zhu_HiDDeN_Hiding_Data_ECCV_2018_paper.html) | ECCV | [code](https://github.com/jirenz/HiDDeN) | 21 |
| [GraphBit: Bitwise Interaction Mining via Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Duan_GraphBit_Bitwise_Interaction_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/duanyq14/GraphBit) | 20 |
| [Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Stacked_Conditional_Generative_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/DeepInsight-PCALab/ST-CGAN) | 20 |
| [Fully-Convolutional Point Networks for Large-Scale Point Clouds](http://openaccess.thecvf.com/content_ECCV_2018/html/Dario_Rethage_Fully-Convolutional_Point_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/drethage/fully-convolutional-point-network) | 20 |
| [Learning Superpixels With Segmentation-Aware Affinity Loss](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tu_Learning_Superpixels_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/wctu/SEAL) | 20 |
| [Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Zero-Shot_Visual_Recognition_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zjuchenlong/sp-aen.cvpr18) | 20 |
| [Crowd Counting With Deep Negative Correlation Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shi_Crowd_Counting_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/shizenglin/Deep-NCL) | 20 |
| [Dimensionality-Driven Learning with Noisy Labels](http://proceedings.mlr.press/v80/ma18d.html) | ICML | [code](https://github.com/xingjunm/dimensionality-driven-learning) | 20 |
| [Objects that Sound](http://openaccess.thecvf.com/content_ECCV_2018/html/Relja_Arandjelovic_Objects_that_Sound_ECCV_2018_paper.html) | ECCV | [code](https://github.com/rohitrango/objects-that-sound) | 20 |
| [Deep Expander Networks: Efficient Deep Networks from Graph Theory](http://openaccess.thecvf.com/content_ECCV_2018/html/Ameya_Prabhu_Deep_Expander_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/DrImpossible/Deep-Expander-Networks) | 19 |
| [Low-Shot Learning With Large-Scale Diffusion](http://openaccess.thecvf.com/content_cvpr_2018/papers/Douze_Low-Shot_Learning_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/facebookresearch/low-shot-with-diffusion) | 19 |
| [Low-Shot Learning With Imprinted Weights](http://openaccess.thecvf.com/content_cvpr_2018/papers/Qi_Low-Shot_Learning_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/YU1ut/imprinted-weights) | 19 |
| [Cross-Domain Self-Supervised Multi-Task Feature Learning Using Synthetic Imagery](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ren_Cross-Domain_Self-Supervised_Multi-Task_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jason718/game-feature-learning) | 19 |
| [Learning Descriptor Networks for 3D Shape Synthesis and Analysis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xie_Learning_Descriptor_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jianwen-xie/3DDescriptorNet) | 19 |
| [Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders](http://openaccess.thecvf.com/content_ECCV_2018/html/Ananya_Harsh_Jha_Disentangling_Factors_of_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ananyahjha93/cycle-consistent-vae) | 19 |
| [CTAP: Complementary Temporal Action Proposal Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Jiyang_Gao_CTAP_Complementary_Temporal_ECCV_2018_paper.html) | ECCV | [code](https://github.com/jiyanggao/CTAP) | 18 |
| [DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors](http://arxiv.org/abs/1805.07445v3) | NIPS | [code](https://github.com/dojoteef/dvae) | 18 |
| [Conditional Image-Text Embedding Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Bryan_Plummer_Conditional_Image-Text_Embedding_ECCV_2018_paper.html) | ECCV | [code](https://github.com/BryanPlummer/cite) | 18 |
| [EPINET: A Fully-Convolutional Neural Network Using Epipolar Geometry for Depth From Light Field Images](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shin_EPINET_A_Fully-Convolutional_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/chshin10/epinet) | 18 |
| [Glimpse Clouds: Human Activity Recognition From Unstructured Feature Points](http://openaccess.thecvf.com/content_cvpr_2018/papers/Baradel_Glimpse_Clouds_Human_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/fabienbaradel/glimpse_clouds) | 18 |
| [Bayesian Optimization of Combinatorial Structures](http://proceedings.mlr.press/v80/baptista18a.html) | ICML | [code](https://github.com/baptistar/BOCS) | 18 |
| [FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Verma_FeaStNet_Feature-Steered_Graph_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/nitika-verma/FeaStNet) | 18 |
| [Learning Type-Aware Embeddings for Fashion Compatibility](http://openaccess.thecvf.com/content_ECCV_2018/html/Mariya_Vasileva_Learning_Type-Aware_Embeddings_ECCV_2018_paper.html) | ECCV | [code](https://github.com/mvasil/fashion-compatibility) | 17 |
| [Sliced Wasserstein Distance for Learning Gaussian Mixture Models](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kolouri_Sliced_Wasserstein_Distance_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/skolouri/swgmm) | 17 |
| [Revisiting Deep Intrinsic Image Decompositions](http://openaccess.thecvf.com/content_cvpr_2018/papers/Fan_Revisiting_Deep_Intrinsic_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/fqnchina/IntrinsicImage) | 17 |
| [A Spectral Approach to Gradient Estimation for Implicit Distributions](http://proceedings.mlr.press/v80/shi18a.html) | ICML | [code](https://github.com/thjashin/spectral-stein-grad) | 17 |
| [Hierarchical Novelty Detection for Visual Object Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lee_Hierarchical_Novelty_Detection_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kibok90/cvpr2018-hnd) | 17 |
| [Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies](http://openaccess.thecvf.com/content_cvpr_2018/papers/Joo_Total_Capture_A_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Myzhencai/Total-Capture) | 17 |
| [Learning Generative ConvNets via Multi-Grid Modeling and Sampling](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gao_Learning_Generative_ConvNets_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ruiqigao/Multigrid_learning) | 17 |
| [Learning 3D Shape Completion From Laser Scan Data With Weak Supervision](http://openaccess.thecvf.com/content_cvpr_2018/papers/Stutz_Learning_3D_Shape_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/davidstutz/cvpr2018-shape-completion) | 17 |
| [Triplet Loss in Siamese Network for Object Tracking](http://openaccess.thecvf.com/content_ECCV_2018/html/Xingping_Dong_Triplet_Loss_with_ECCV_2018_paper.html) | ECCV | [code](https://github.com/shenjianbing/TripletTracking) | 17 |
| [Adversarial Attack on Graph Structured Data](http://proceedings.mlr.press/v80/dai18b.html) | ICML | [code](https://github.com/Hanjun-Dai/graph_adversarial_attack) | 17 |
| [Arbitrary Style Transfer With Deep Feature Reshuffle](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gu_Arbitrary_Style_Transfer_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/msracver/Style-Feature-Reshuffle) | 17 |
| [Visual Question Reasoning on General Dependency Tree](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_Visual_Question_Reasoning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/bezorro/ACMN-Pytorch) | 17 |
| [Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition](http://openaccess.thecvf.com/content_ECCV_2018/html/Huang_Predicting_Gaze_in_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hyf015/egocentric-gaze-prediction) | 16 |
| [Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks](https://arxiv.org/abs/1802.04034) | NIPS | [code](https://github.com/ytsmiling/lmt) | 16 |
| [Coded Sparse Matrix Multiplication](http://proceedings.mlr.press/v80/wang18e.html) | ICML | [code](https://github.com/ksopyla/CudaDotProd) | 16 |
| [Weakly-Supervised Action Segmentation With Iterative Soft Boundary Assignment](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ding_Weakly-Supervised_Action_Segmentation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Zephyr-D/TCFPN-ISBA) | 16 |
| [Recovering 3D Planes from a Single Image via Convolutional Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Fengting_Yang_Recovering_3D_Planes_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fuy34/planerecover) | 16 |
| [SegStereo: Exploiting Semantic Information for Disparity Estimation](http://openaccess.thecvf.com/content_ECCV_2018/html/Guorun_Yang_SegStereo_Exploiting_Semantic_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yangguorun/SegStereo) | 16 |
| [Functional Gradient Boosting based on Residual Network Perception](http://proceedings.mlr.press/v80/nitanda18a.html) | ICML | [code](https://github.com/anitan0925/ResFGB) | 16 |
| [NAG: Network for Adversary Generation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Mopuri_NAG_Network_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/val-iisc/nag) | 16 |
| [Generative Probabilistic Novelty Detection with Adversarial Autoencoders](http://arxiv.org/abs/1807.02588v1) | NIPS | [code](https://github.com/podgorskiy/GPND) | 16 |
| [Hashing as Tie-Aware Learning to Rank](http://openaccess.thecvf.com/content_cvpr_2018/papers/He_Hashing_as_Tie-Aware_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kunhe/TALR) | 15 |
| [Pose Proposal Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Sekii_Pose_Proposal_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/salihkaragoz/MultiPerson-pose-estimation) | 15 |
| [Convolutional Sequence to Sequence Model for Human 
Dynamics](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Convolutional_Sequence_to_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchaneyddtt\u002FConvolutional-Sequence-to-Sequence-Model-for-Human-Dynamics) | 15 | \n| [Joint Pose and Expression Modeling for Facial Expression Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Joint_Pose_and_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FFFZhang1231\u002FFacial-expression-recognition) | 15 | \n| [Grounding Referring Expressions in Images by Variational Context](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Grounding_Referring_Expressions_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyuleiniu\u002Fvc) | 15 | \n| [Rethinking the Form of Latent States in Image Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FBo_Dai_Rethinking_the_Form_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fdoubledaibo\u002F2dcaption_eccv2018) | 15 | \n| [Open Set Domain Adaptation by Backpropagation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FKuniaki_Saito_Adversarial_Open_Set_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FYU1ut\u002Fopenset-DA) | 15 | \n| [Neural Sign Language Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FCamgoz_Neural_Sign_Language_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fneccam\u002Fnslt) | 15 | \n| [SpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYifan_Xu_SpiderCNN_Deep_Learning_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fxyf513\u002FSpiderCNN) | 15 | \n| [Efficient Neural Audio 
Synthesis](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fkalchbrenner18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Ffedden\u002FTensorFlow-Efficient-Neural-Audio-Synthesis) | 15 | \n| [Deep Learning Under Privileged Information Using Heteroscedastic Dropout](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLambert_Deep_Learning_Under_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjohnwlambert\u002Fdlupi-heteroscedastic-dropout) | 14 | \n| [Image Transformer](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fparmar18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fssingal05\u002FImageTransformer) | 14 | \n| [Learning to Understand Image Blur](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Learning_to_Understand_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLotuslisa\u002FUnderstand_Image_Blur) | 14 | \n| [Learning and Using the Arrow of Time](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWei_Learning_and_Using_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdonglaiw\u002FAoT_TCAM) | 14 | \n| [Action Sets: Weakly Supervised Action Segmentation Without Ordering Constraints](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FRichard_Action_Sets_Weakly_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falexanderrichard\u002Faction-sets) | 14 | \n| [Learning to Forecast and Refine Residual Motion for Image-to-Video Generation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLong_Zhao_Learning_to_Forecast_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fgaryzhao\u002FFRGAN) | 14 | \n| [Multi-Scale Weighted Nuclear Norm Image Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYair_Multi-Scale_Weighted_Nuclear_CVPR_2018_paper.pdf) | 
CVPR | [code](https:\u002F\u002Fgithub.com\u002FnoamyairTC\u002FMSWNNM) | 14 | \n| [Synthesizing Robust Adversarial Examples](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fathalye18b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fprabhant\u002Fsynthesizing-robust-adversarial-examples) | 13 | \n| [Fine-Grained Visual Categorization using Meta-Learning Optimization with Sample Selection of Auxiliary Data](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYabin_Zhang_Fine-Grained_Visual_Categorization_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FYabinZhang1994\u002FMetaFGNet) | 13 | \n| [Assessing Generative Models via Precision and Recall](http:\u002F\u002Farxiv.org\u002Fabs\u002F1806.00035v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmsmsajjadi\u002Fprecision-recall-distributions) | 13 | \n| [Deep Diffeomorphic Transformer Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FDetlefsen_Deep_Diffeomorphic_Transformer_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FSkafteNicki\u002Fddtn) | 13 | \n| [Learning by Asking Questions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMisra_Learning_by_Asking_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyanghoonkim\u002Fquestion_generation) | 13 | \n| [Towards Human-Machine Cooperation: Self-Supervised Sample Mining for Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Towards_Human-Machine_Cooperation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyanxp\u002FSSM) | 13 | \n| [Variational Autoencoders for Deforming 3D Mesh Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTan_Variational_Autoencoders_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Faldehydecho\u002FMesh-VAE) | 13 | \n| [Min-Entropy 
Latent Model for Weakly Supervised Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWan_Min-Entropy_Latent_Model_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FWinfrand\u002FMELM) | 13 | \n| [Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAnderson_Bottom-Up_and_Top-Down_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FWentong-DST\u002Fup-down-captioner) | 13 | \n| [Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flee18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fyoonholee\u002FMT-net) | 13 | \n| [Learning a Discriminative Filter Bank Within a CNN for Fine-Grained Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Learning_a_Discriminative_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhubeihubei\u002FDFL-CNN-pytorch) | 13 | \n| [Finding Influential Training Samples for Gradient Boosted Decision Trees](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsharchilev18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbsharchilev\u002Finfluence_boosting) | 13 | \n| [Gesture Recognition: Focus on the Hands](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNarayana_Gesture_Recognition_Focus_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbeckabec\u002FHandDetection) | 12 | \n| [Cross-View Image Synthesis Using Conditional GANs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FRegmi_Cross-View_Image_Synthesis_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkregmi\u002Fcross-view-image-synthesis) | 12 | \n| [Joint Optimization Framework for Learning With Noisy 
Labels](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTanaka_Joint_Optimization_Framework_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FDaikiTanaka-UT\u002FJointOptimization) | 12 | \n| [Future Person Localization in First-Person Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYagi_Future_Person_Localization_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftakumayagi\u002Ffpl) | 12 | \n| [AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZheng_Shou_AutoLoc_Weakly-supervised_Temporal_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fzhengshou\u002FAutoLoc) | 12 | \n| [Learning Transferable Architectures for Scalable Image Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZoph_Learning_Transferable_Architectures_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Faussetg\u002Fnasnet.pytorch) | 12 | \n| [Clipped Action Policy Gradient](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Ffujita18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fpfnet-research\u002Fcapg) | 12 | \n| [Mix and Match Networks: Encoder-Decoder Alignment for Zero-Pair Image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Mix_and_Match_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyaxingwang\u002FMix-and-match-networks) | 12 | \n| [Decouple Learning for Parameterized Image Operators](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FQingnan_Fan_Learning_to_Learn_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffqnchina\u002FDecoupleLearning) | 12 | \n| [Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data for Future 
Prediction](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fqi18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FSiyuanQi\u002Fgeneralized-earley-parser) | 12 | \n| [Adaptive Skip Intervals: Temporal Abstraction for Recurrent Dynamical Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04768) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fneitzal\u002Fadaptive-skip-intervals) | 12 | \n| [AMNet: Memorability Estimation With Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFajtl_AMNet_Memorability_Estimation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fok1zjf\u002FAMNet) | 12 | \n| [Adversarial Time-to-Event Modeling](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fchapfuwa18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fpaidamoyo\u002Fadversarial_time_to_event) | 12 | \n| [Reversible Recurrent Neural Networks](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fgan3sh500\u002Frevrnn) | 12 | \n| [Human Pose Estimation With Parsing Induced Learner](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNie_Human_Pose_Estimation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FNieXC\u002Fpytorch-pil) | 11 | \n| [ShapeStacks: Learning Vision-Based Physical Intuition for Generalised Object Stacking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FOliver_Groth_ShapeStacks_Learning_Vision-Based_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fogroth\u002Fshapestacks) | 11 | \n| [A Joint Sequence Fusion Model for Video Question Answering and Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYoungjae_Yu_A_Joint_Sequence_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fyj-yu\u002Flsmdc) | 11 | \n| [Learning Face Age Progression: A Pyramid Architecture of 
GANs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYang_Learning_Face_Age_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fajithvallabai\u002FPyramid-Architecture-of-GANs) | 11 | \n| [Robust Physical-World Attacks on Deep Learning Visual Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FEykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fevtimovi\u002Frobust_physical_perturbations) | 11 | \n| [High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fpearce18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FTeaPearce\u002FDeep_Learning_Prediction_Intervals) | 11 | \n| [Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Famit18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fron-amit\u002Fmeta-learning-adjusting-priors) | 11 | \n| [Multimodal Explanations: Justifying Decisions and Pointing to the Evidence](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPark_Multimodal_Explanations_Justifying_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FSeth-Park\u002FMultimodalExplanations) | 11 | \n| [Accelerating Natural Gradient with Higher-Order Invariance](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsong18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fhigher_order_invariance) | 11 | \n| [Hierarchical Multi-Label Classification Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fwehrmann18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fomoju\u002FreceiptdID) | 11 | \n| [Convolutional Image Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAneja_Convolutional_Image_Captioning_CVPR_2018_paper.pdf) | 
CVPR | [code](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FcaptionGeneration.torch) | 11 | \n| [Boosting Domain Adaptation by Discovering Latent Domains](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMancini_Boosting_Domain_Adaptation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmancinimassimiliano\u002Flatent_domains_DA) | 11 | \n| [Logo Synthesis and Manipulation With Clustered Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSage_Logo_Synthesis_and_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falex-sage\u002Flogo-gen) | 10 | \n| [PacGAN: The power of two samples in generative adversarial networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1712.04086v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffjxmlzn\u002FPacGAN) | 10 | \n| [Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLong_Attention_Clusters_Purely_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpomonam\u002FAttentionCluster) | 10 | \n| [End-to-End Incremental Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FFrancisco_M._Castro_End-to-End_Incremental_Learning_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffmcp\u002FEndToEndIncrementalLearning) | 10 | \n| [Multi-Oriented Scene Text Detection via Corner Localization and Region Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLyu_Multi-Oriented_Scene_Text_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJK-Rao\u002FCorner_Segmentation_TextDetection) | 10 | \n| [On GANs and GMMs](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.12462v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Feitanrich\u002Fgans-n-gmms) | 10 | \n| [Salient 
Object Detection Driven by Fixation Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Salient_Object_Detection_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FASNet) | 9 | \n| [Semantic Video Segmentation by Gated Recurrent Flow Propagation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNilsson_Semantic_Video_Segmentation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FD-Nilsson\u002FGRFP) | 9 | \n| [Constraint-Aware Deep Neural Network Compression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FChangan_Chen_Constraints_Matter_in_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FChanganVR\u002FConstraintAwareCompression) | 9 | \n| [Statistically-motivated Second-order Pooling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FKaicheng_Yu_Statistically-motivated_Second-order_Pooling_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fkcyu2014\u002Fsmsop) | 9 | \n| [Excitation Backprop for RNNs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBargal_Excitation_Backprop_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsbargal\u002FCaffe-ExcitationBP-RNNs) | 9 | \n| [Analyzing Uncertainty in Neural Machine Translation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fott18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fanalyzing-uncertainty-nmt) | 9 | \n| [Learning Dynamics of Linear Denoising Autoencoders](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fpretorius18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Farnupretorius\u002Flindaedynamics_icml2018) | 9 | \n| [Saliency Detection in 360° 
Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZiheng_Zhang_Saliency_Detection_in_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fxuyanyu-shh\u002FSaliency-detection-in-360-video) | 9 | \n| [Density Adaptive Point Set Registration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLawin_Density_Adaptive_Point_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffelja633\u002FDARE) | 9 | \n| [Decoupled Parallel Backpropagation with Convergence Guarantee](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fhuo18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fslowbull\u002FDDG) | 9 | \n| [Classification from Pairwise Similarity and Unlabeled Data](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fbao18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flevelfour\u002FSU_Classification) | 9 | \n| [oi-VAE: Output Interpretable VAEs for Nonlinear Group Factor Analysis](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fainsworth18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fsamuela\u002Foi-vae) | 9 | \n| [Modeling Sparse Deviations for Compressed Sensing using Generative Models](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fdhar18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fsparse_gen) | 9 | \n| [Pixels, Voxels, and Views: A Study of Shape Representations for Single View 3D Object Shape Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FShin_Pixels_Voxels_and_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdaeyun\u002Fobject-shapes-cvpr18) | 9 | \n| [Towards Open-Set Identity Preserving Face Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBao_Towards_Open-Set_Identity_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fchloeguoqing\u002FTowards-Open-Set-Identity-Preserving-Face-Synthesis) | 9 | \n| [Five-Point Fundamental Matrix Estimation for Uncalibrated Cameras](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBarath_Five-Point_Fundamental_Matrix_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanini\u002Ffive-point-fundamental) | 8 | \n| [BourGAN: Generative Networks with Metric Embeddings](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.07674) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fa554b554\u002FBourGAN) | 8 | \n| [Fast Information-theoretic Bayesian Optimisation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fru18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Frubinxin\u002FFITBO) | 8 | \n| [Deep Variational Reinforcement Learning for POMDPs](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Figl18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Foxwhirl\u002FDeep-Variational-Reinforcement-Learning) | 8 | \n| [Specular-to-Diffuse Translation for Multi-View Reconstruction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FShihao_Wu_Specular-to-Diffuse_Translation_for_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fwsh312\u002FS2Dnet) | 8 | \n| [Dynamic Conditional Networks for Few-Shot Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FFang_Zhao_Dynamic_Conditional_Networks_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FZhaoJ9014\u002FDynamic-Conditional-Networks-for-Few-Shot-Learning.pytorch) | 8 | \n| [Learning Facial Action Units From Web Images With Scalable Weakly Supervised Clustering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhao_Learning_Facial_Action_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzkl20061823\u002FWSC) | 8 | \n| [High-Resolution Image Synthesis and 
Semantic Manipulation With Conditional GANs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_High-Resolution_Image_Synthesis_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchenxli\u002FHigh-Resolution-Image-Synthesis-and-Semantic-Manipulation-with-Conditional-GANsl-) | 8 | \n| [Deep Defense: Training DNNs with Improved Adversarial Robustness](http:\u002F\u002Farxiv.org\u002Fabs\u002F1803.00404v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FZiangYan\u002Fdeepdefense.pytorch) | 8 | \n| [Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fchen18g.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fchentingpc\u002Fkdcode-lm) | 8 | \n| [Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-based Modeling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHiroaki_Santo_Light_Structure_from_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fhiroaki-santo\u002Flight-structure-from-pin-motion) | 7 | \n| [Non-metric Similarity Graphs for Maximum Inner Product Search](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fstanis-morozov\u002Fip-nsw) | 7 | \n| [Towards Realistic Predictors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FPei_Wang_Towards_Realistic_Predictors_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fpeiwang062\u002Ftowards-realistic-predictors) | 7 | \n| [Deep Non-Blind Deconvolution via Generalized Low-Rank Approximation](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frwenqi\u002FNBD-GLRA) | 7 | \n| [Don’t Just Assume  Look and Answer: Overcoming Priors for Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAgrawal_Dont_Just_Assume_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002FAishwaryaAgrawal\u002FGVQA) | 7 | \n| [Learning Dual Convolutional Neural Networks for Low-Level Vision](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPan_Learning_Dual_Convolutional_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgalad-loth\u002FDualCNN-TF) | 7 | \n| [The Mirage of Action-Dependent Baselines in Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Ftucker18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbrain-research\u002Fmirage-rl) | 7 | \n| [DVQA: Understanding Data Visualizations via Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKafle_DVQA_Understanding_Data_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkushalkafle\u002FDVQA_dataset) | 7 | \n| [A Two-Step Disentanglement Method](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHadad_A_Two-Step_Disentanglement_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnaamahadad\u002FA-Two-Step-Disentanglement-Method) | 7 | \n| [Detecting and Correcting for Label Shift with Black Box Predictors](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flipton18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Flabel-shift) | 7 | \n| [Conditional Prior Networks for Optical Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYanchao_Yang_Conditional_Prior_Networks_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FYanchaoYang\u002FConditional-Prior-Networks) | 7 | \n| [Generative Adversarial Learning Towards Fast Weakly Supervised Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FShen_Generative_Adversarial_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fshenyunhang\u002FGAL-fWSD) | 7 | \n| [Adversarial Learning 
with Local Coordinate Coding](http://proceedings.mlr.press/v80/cao18a.html) | ICML | [code](https://github.com/guoyongcs/LCCGAN) | 7 |
| [Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kuen_Stochastic_Downsampling_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xternalz/SDPoint) | 7 |
| [AttnGAN: Fine-Grained Text to Image Generation With Attentional Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_AttnGAN_Fine-Grained_Text_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Wentong-DST/attn-gan) | 7 |
| [Learning to Explain: An Information-Theoretic Perspective on Model Interpretation](http://proceedings.mlr.press/v80/chen18j.html) | ICML | [code](https://github.com/nickvosk/acl2015-dataset-learning-to-explain-entity-relationships) | 7 |
| [Banach Wasserstein GAN](http://arxiv.org/abs/1806.06621v1) | NIPS | [code](https://github.com/adler-j/bwgan) | 7 |
| [Gradually Updated Neural Networks for Large-Scale Image Recognition](http://proceedings.mlr.press/v80/qiao18b.html) | ICML | [code](https://github.com/joe-siyuan-qiao/GUNN) | 7 |
| [Learning Steady-States of Iterative Algorithms over Graphs](http://proceedings.mlr.press/v80/dai18a.html) | ICML | [code](https://github.com/Hanjun-Dai/steady_state_embedding) | 7 |
| [Progressive Attention Guided Recurrent Network for Salient Object Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Progressive_Attention_Guided_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhangxiaoning666/PAGR) | 7 |
| [Zoom and Learn: Generalizing Deep Stereo Matching to Novel Domains](http://openaccess.thecvf.com/content_cvpr_2018/papers/Pang_Zoom_and_Learn_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Artifineuro/zole) | 6 |
| [Unsupervised holistic image generation from key local patches](http://openaccess.thecvf.com/content_ECCV_2018/html/Donghoon_Lee_Unsupervised_holistic_image_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hellbell/KeyPatchGan) | 6 |
| [Inner Space Preserving Generative Pose Machine](http://openaccess.thecvf.com/content_ECCV_2018/html/Shuangjun_Liu_Inner_Space_Preserving_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ostadabbas/isp-gpm) | 6 |
| [Bilevel Programming for Hyperparameter Optimization and Meta-Learning](http://proceedings.mlr.press/v80/franceschi18a.html) | ICML | [code](https://github.com/prolearner/hyper-representation) | 6 |
| [Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Optical_Flow_Guided_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kitsune999/Optical-Flow-Guided-Feature) | 6 |
| [Breaking the Activation Function Bottleneck through Adaptive Parameterization](https://arxiv.org/abs/1805.08574) | NIPS | [code](https://github.com/flennerhag/alstm) | 6 |
| [Ultra Large-Scale Feature Selection using Count-Sketches](http://proceedings.mlr.press/v80/aghazadeh18a.html) | ICML | [code](https://github.com/rdspring1/MISSION) | 6 |
| [Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Dynamic_Scene_Deblurring_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhjwustc/cvpr18_rnn_deblur_matcaffe) | 6 |
| [Orthogonally Decoupled Variational Gaussian Processes](http://arxiv.org/abs/1809.08820v1) | NIPS | [code](https://github.com/hughsalimbeni/orth_decoupled_var_gps) | 6 |
| [Batch Bayesian Optimization via Multi-objective Acquisition Ensemble for Automated Analog Circuit Design](http://proceedings.mlr.press/v80/lyu18a.html) | ICML | [code](https://github.com/Alaya-in-Matrix/MACE) | 6 |
| [A Modulation Module for Multi-task Learning with Applications in Image Retrieval](http://openaccess.thecvf.com/content_ECCV_2018/html/Xiangyun_Zhao_A_Modulation_Module_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Zhaoxiangyun/Multi-Task-Modulation-Module) | 6 |
| [A Memory Network Approach for Story-Based Temporal Summarization of 360° Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lee_A_Memory_Network_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/sangho-vision/PFMN) | 6 |
| [Towards Effective Low-Bitwidth Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhuang_Towards_Effective_Low-Bitwidth_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/nowgood/QuantizeCNNModel) | 5 |
| [Disentangling Factors of Variation by Mixing Them](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Disentangling_Factors_of_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/HuQyang/Disentangling-Factors-of-Variation-by-Mixing-Them) | 5 |
| [Weakly-supervised Video Summarization using Variational Encoder-Decoder and Web Prior](http://openaccess.thecvf.com/content_ECCV_2018/html/Sijia_Cai_Weakly-supervised_Video_Summarization_ECCV_2018_paper.html) | ECCV | [code](https://github.com/cssjcai/vesd) | 5 |
| [Learning Longer-term Dependencies in RNNs with Auxiliary Losses](http://proceedings.mlr.press/v80/trinh18a.html) | ICML | [code](https://github.com/belepi93/rnn-auxiliary-loss) | 5 |
| [Contour Knowledge Transfer for Salient Object Detection](http://openaccess.thecvf.com/content_ECCV_2018/html/Xin_Li_Contour_Knowledge_Transfer_ECCV_2018_paper.html) | ECCV | [code](https://github.com/lixin666/C2SNet) | 5 |
| [HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning](http://openaccess.thecvf.com/content_ECCV_2018/html/Thomas_Robert_HybridNet_Classification_and_ECCV_2018_paper.html) | ECCV | [code](https://github.com/dakshitagrawal97/HybridNet) | 5 |
| [Sidekick Policy Learning for Active Visual Exploration](http://openaccess.thecvf.com/content_ECCV_2018/html/Santhosh_Kumar_Ramakrishnan_Sidekick_Policy_Learning_ECCV_2018_paper.html) | ECCV | [code](https://github.com/srama2512/sidekicks) | 5 |
| [Learning to Localize Sound Source in Visual Scenes](http://openaccess.thecvf.com/content_cvpr_2018/papers/Senocak_Learning_to_Localize_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ardasnck/learning_to_localize_sound) | 5 |
| [Neural Architecture Optimization](http://arxiv.org/abs/1808.07233v3) | NIPS | [code](https://github.com/dicarlolab/archconvnets) | 5 |
| [COLA: Decentralized Linear Learning](nan) | NIPS | [code](https://github.com/epfml/cola) | 5 |
| [Diverse and Coherent Paragraph Generation from Images](http://openaccess.thecvf.com/content_ECCV_2018/html/Moitreya_Chatterjee_Diverse_and_Coherent_ECCV_2018_paper.html) | ECCV | [code](https://github.com/metro-smiles/CapG_RevG_Code) | 5 |
| [DRACO: Byzantine-resilient Distributed Training via Redundant Gradients](http://proceedings.mlr.press/v80/chen18l.html) | ICML | [code](https://github.com/hwang595/Draco) | 5 |
| [Inter and Intra Topic Structure Learning with Word Embeddings](http://proceedings.mlr.press/v80/zhao18a.html) | ICML | [code](https://github.com/ethanhezhao/WEDTM) | 5 |
| [Estimating the Success of Unsupervised Image to Image Translation](http://openaccess.thecvf.com/content_ECCV_2018/html/Lior_Wolf_Estimating_the_Success_ECCV_2018_paper.html) | ECCV | [code](https://github.com/sagiebenaim/gan_bound) | 5 |
| [Dynamic-Structured Semantic Propagation Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liang_Dynamic-Structured_Semantic_Propagation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/limberc/DSSPN) | 5 |
| [The Description Length of Deep Learning models](https://arxiv.org/abs/1802.07044) | NIPS | [code](https://github.com/leonardblier/descriptionlengthdeeplearning) | 5 |
| [Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving](http://openaccess.thecvf.com/content_ECCV_2018/html/Peiliang_LI_Stereo_Vision-based_Semantic_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zhanghanduo/stereo_semantic_mapping) | 5 |
| [Blind Justice: Fairness with Encrypted Sensitive Attributes](http://proceedings.mlr.press/v80/kilbertus18a.html) | ICML | [code](https://github.com/nikikilbertus/blind-justice) | 5 |
| [Transfer Learning via Learning to Transfer](http://proceedings.mlr.press/v80/wei18a.html) | ICML | [code](https://github.com/QuebecAI/webcam-transfer-learning-v1) | 5 |
| [Deepcode: Feedback Codes via Deep Learning](http://arxiv.org/abs/1807.00801v1) | NIPS | [code](https://github.com/hyejikim1/Deepcode) | 4 |
| [Configurable Markov Decision Processes](http://proceedings.mlr.press/v80/metelli18a.html) | ICML | [code](https://github.com/albertometelli/Configurable-Markov-Decision-Processes-ICML-2018) | 4 |
| [A Framework for Evaluating 6-DOF Object Trackers](http://openaccess.thecvf.com/content_ECCV_2018/html/Mathieu_Garon_A_Framework_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/lvsn/6DOF_tracking_evaluation) | 4 |
| [Differentially Private Database Release via Kernel Mean Embeddings](http://proceedings.mlr.press/v80/balog18a.html) | ICML | [code](https://github.com/matejbalog/RKHS-private-database) | 4 |
| [Recognizing Human Actions as the Evolution of Pose Estimation Maps](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Recognizing_Human_Actions_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/nkliuyifang/Skeleton-based-Human-Action-Recognition) | 4 |
| [Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images](http://openaccess.thecvf.com/content_cvpr_2018/papers/Orekondy_Connecting_Pixels_to_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tribhuvanesh/visual_redactions) | 4 |
| [DeLS-3D: Deep Localization and Segmentation With a 3D Semantic Map](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_DeLS-3D_Deep_Localization_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/pengwangucla/DeLS-3D) | 4 |
| [Geolocation Estimation of Photos using a Hierarchical Model and Scene Classification](http://openaccess.thecvf.com/content_ECCV_2018/html/Eric_Muller-Budack_Geolocation_Estimation_of_ECCV_2018_paper.html) | ECCV | [code](https://github.com/TIBHannover/GeoEstimation) | 4 |
| [Tracking Emerges by Colorizing Videos](http://openaccess.thecvf.com/content_ECCV_2018/html/Carl_Vondrick_Self-supervised_Tracking_by_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Oh-Yoojin/Tracking-Emerges-by-Colorizing-Videos) | 4 |
| [Diverse Conditional Image Generation by Stochastic Regression with Latent Drop-Out Codes](http://openaccess.thecvf.com/content_ECCV_2018/html/Yang_He_Diverse_Conditional_Image_ECCV_2018_paper.html) | ECCV | [code](https://github.com/SSAW14/Image_Generation_with_Latent_Code) | 4 |
| [Inference Suboptimality in Variational Autoencoders](http://proceedings.mlr.press/v80/cremer18a.html) | ICML | [code](https://github.com/lxuechen/inference-suboptimality) | 4 |
| [Black Box FDR](http://proceedings.mlr.press/v80/tansey18a.html) | ICML | [code](https://github.com/tansey/bb-fdr) | 4 |
| [Feedback-Prop: Convolutional Neural Network Inference Under Partial Evidence](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Feedback-Prop_Convolutional_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/uvavision/feedbackprop) | 4 |
| [Quadrature-based features for kernel approximation](http://arxiv.org/abs/1802.03832v3) | NIPS | [code](https://github.com/quffka/quffka) | 4 |
| [Joint Representation and Truncated Inference Learning for Correlation Filter based Tracking](http://openaccess.thecvf.com/content_ECCV_2018/html/Yingjie_Yao_Joint_Representation_and_ECCV_2018_paper.html) | ECCV | [code](https://github.com/tourmaline612/RTINet) | 4 |
| [Transferable Adversarial Perturbations](http://openaccess.thecvf.com/content_ECCV_2018/html/Bruce_Hou_Transferable_Adversarial_Perturbations_ECCV_2018_paper.html) | ECCV | [code](https://github.com/vinayprabhu/Gainsboro-box-attacks-) | 4 |
| [Single Image Water Hazard Detection using FCN with Reflection Attention Units](http://openaccess.thecvf.com/content_ECCV_2018/html/Xiaofeng_Han_Single_Image_Water_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Cow911/SingleImageWaterHazardDetectionWithRAU) | 4 |
| [Multimodal Generative Models for Scalable Weakly-Supervised Learning](http://arxiv.org/abs/1802.05335v2) | NIPS | [code](https://github.com/mhw32/multimodal-vae-public) | 4 |
| [Importance Weighted Transfer of Samples in Reinforcement Learning](http://proceedings.mlr.press/v80/tirinzoni18a.html) | ICML | [code](https://github.com/AndreaTirinzoni/iw-transfer-rl) | 3 |
| [Feature Generating Networks for Zero-Shot Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xian_Feature_Generating_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/akku1506/Feature-Generating-Networks-for-ZSL) | 3 |
| [DICOD: Distributed Convolutional Coordinate Descent for Convolutional Sparse Coding](http://proceedings.mlr.press/v80/moreau18a.html) | ICML | [code](https://github.com/tomMoral/Dicod) | 3 |
| [CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces](nan) | NIPS | [code](https://github.com/maple-research-lab/CapProNet) | 3 |
| [Bidirectional Retrieval Made Simple](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wehrmann_Bidirectional_Retrieval_Made_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jwehrmann/chain-vse) | 3 |
| [Multilingual Anchoring: Interactive Topic Modeling and Alignment Across Languages](nan) | NIPS | [code](https://github.com/forest-snow/mtanchor_demo) | 3 |
| [A Hybrid l1-l0 Layer Decomposition Model for Tone Mapping](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liang_A_Hybrid_l1-l0_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/csjunxu/L1L0_TM-CVPR2018) | 3 |
| [Spatially-Adaptive Filter Units for Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tabernik_Spatially-Adaptive_Filter_Units_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/skokec/DAU-ConvNet) | 3 |
| [Learning to Branch](http://proceedings.mlr.press/v80/balcan18a.html) | ICML | [code](https://github.com/StoneyJackson/github-workflow-activity) | 3 |
| [Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives](nan) | NIPS | [code](https://github.com/IBM/Contrastive-Explanation-Method) | 3 |
| [Lifelong Learning via Progressive Distillation and Retrospection](http://openaccess.thecvf.com/content_ECCV_2018/html/Saihui_Hou_Progressive_Lifelong_Learning_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hshustc/ECCV18_Lifelong_Learning) | 3 |
| [CLEAR: Cumulative LEARning for One-Shot One-Class Image Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kozerawski_CLEAR_Cumulative_LEARning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/JKozerawski/CLEAR-osoc) | 3 |
| [Not to Cry Wolf: Distantly Supervised Multitask Learning in Critical Care](http://proceedings.mlr.press/v80/schwab18a.html) | ICML | [code](https://github.com/d909b/DSMT-Nets) | 3 |
| [Learning Answer Embeddings for Visual Question Answering](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Learning_Answer_Embeddings_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hexiang-hu/answer_embedding) | 3 |
| [Information Constraints on Auto-Encoding Variational Bayes](http://arxiv.org/abs/1805.08672v2) | NIPS | [code](https://github.com/romain-lopez/HCV) | 3 |
| [Parallel Bayesian Network Structure Learning](http://proceedings.mlr.press/v80/gao18b.html) | ICML | [code](https://github.com/bign8/PyStruct) | 3 |
| [Ring Loss: Convex Feature Normalization for Face Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zheng_Ring_Loss_Convex_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/vsatyakumar/Ring-Loss-Keras) | 3 |
| [Teaching Categories to Human Learners With Visual Explanations](http://openaccess.thecvf.com/content_cvpr_2018/papers/Aodha_Teaching_Categories_to_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/macaodha/explain_teach) | 3 |
| [Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization](http://proceedings.mlr.press/v80/zhang18g.html) | ICML | [code](https://github.com/zhangjiong724/spectral-RNN) | 3 |
| [Deep Burst Denoising](http://openaccess.thecvf.com/content_ECCV_2018/html/Clement_Godard_Deep_Burst_Denoising_ECCV_2018_paper.html) | ECCV | [code](https://github.com/mrharicot/deep_burst_denoising) | 3 |
| [Convergent Tree Backup and Retrace with Function Approximation](http://proceedings.mlr.press/v80/touati18a.html) | ICML | [code](https://github.com/ahmed-touati/convergent-off-policy) | 3 |
| [Gaze Prediction in Dynamic 360° Immersive Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Gaze_Prediction_in_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xuyanyu-shh/VR-EyeTracking) | 3 |
| [Statistical Recurrent Models on Manifold valued Data](http://arxiv.org/abs/1805.11204v1) | NIPS | [code](https://github.com/zhenxingjian/SPD-SRU) | 3 |
| [End-to-End Flow Correlation Tracking With Spatial-Temporal Attention](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhu_End-to-End_Flow_Correlation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhengzhugithub/FlowTrack) | 3 |

<div align="right">
<b><a href="#----">↥ back to top</a></b>
</div>

## 2017
| Title | Conf | Code | Stars |
|:--------|:--------:|:--------:|:--------:|
| [Bridging the Gap Between Value and Policy Based Reinforcement Learning](http://papers.nips.cc/paper/6870-bridging-the-gap-between-value-and-policy-based-reinforcement-learning.pdf) | NIPS | [code](https://github.com/tensorflow/models) | 46593 |
| [REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models](http://papers.nips.cc/paper/6856-rebar-low-variance-unbiased-gradient-estimates-for-discrete-latent-variable-models.pdf) | NIPS | [code](https://github.com/tensorflow/models) | 46593 |
| [Focal Loss for Dense Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Lin_Focal_Loss_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/facebookresearch/Detectron) | 18356 |
| [Mask R-CNN](http://openaccess.thecvf.com/content_iccv_2017/html/He_Mask_R-CNN_ICCV_2017_paper.html) | ICCV | [code](https://github.com/matterport/Mask_RCNN) | 9493 |
| [Deep Photo Style Transfer](http://openaccess.thecvf.com/content_cvpr_2017/html/Luan_Deep_Photo_Style_CVPR_2017_paper.html) | CVPR | [code](https://github.com/luanfujun/deep-photo-styletransfer) | 8655 |
| [LightGBM: A Highly Efficient Gradient Boosting Decision Tree](http://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf) | NIPS | [code](https://github.com/Microsoft/LightGBM) | 7536 |
| [Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation](http://papers.nips.cc/paper/7112-scalable-trust-region-method-for-deep-reinforcement-learning-using-kronecker-factored-approximation.pdf) | NIPS | [code](https://github.com/openai/baselines) | 6449 |
| [Attention is All you Need](http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf) | NIPS | [code](https://github.com/tensorflow/tensor2tensor) | 6288 |
| [Large Pose 3D Face Reconstruction From a Single Image via Direct Volumetric CNN Regression](http://openaccess.thecvf.com/content_iccv_2017/html/Jackson_Large_Pose_3D_ICCV_2017_paper.html) | ICCV | [code](https://github.com/AaronJackson/vrn) | 3354 |
| [Densely Connected Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Huang_Densely_Connected_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https://github.com/liuzhuang13/DenseNet) | 3130 |
| [A Unified Approach to Interpreting Model Predictions](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf) | NIPS | [code](https://github.com/slundberg/shap) | 3122 |
| [Deformable Convolutional Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Dai_Deformable_Convolutional_Networks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/msracver/Deformable-ConvNets) | 2165 |
| [ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games](http://papers.nips.cc/paper/6859-elf-an-extensive-lightweight-and-flexible-research-platform-for-real-time-strategy-games.pdf) | NIPS | [code](https://github.com/facebookresearch/ELF) | 1823 |
| [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Qi_PointNet_Deep_Learning_CVPR_2017_paper.html) | CVPR | [code](https://github.com/charlesq34/pointnet) | 1523 |
| [Improved Training of Wasserstein GANs](http://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.pdf) | NIPS | [code](https://github.com/igul222/improved_wgan_training) | 1405 |
| [Fully Convolutional Instance-Aware Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Fully_Convolutional_Instance-Aware_CVPR_2017_paper.html) | CVPR | [code](https://github.com/msracver/FCIS) | 1395 |
| [Aggregated Residual Transformations for Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.html) | CVPR | [code](https://github.com/facebookresearch/ResNeXt) | 1361 |
| [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Ledig_Photo-Realistic_Single_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/tensorlayer/srgan) | 1301 |
| [Unsupervised Image-to-Image Translation Networks](http://papers.nips.cc/paper/6672-unsupervised-image-to-image-translation-networks.pdf) | NIPS | [code](https://github.com/mingyuliutw/unit) | 1205 |
| [Photographic Image Synthesis With Cascaded Refinement Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Chen_Photographic_Image_Synthesis_ICCV_2017_paper.html) | ICCV | [code](https://github.com/CQFIO/PhotographicImageSynthesis) | 1142 |
| [High-Resolution Image Inpainting Using Multi-Scale Neural Patch Synthesis](http://openaccess.thecvf.com/content_cvpr_2017/html/Yang_High-Resolution_Image_Inpainting_CVPR_2017_paper.html) | CVPR | [code](https://github.com/leehomyc/Faster-High-Res-Neural-Inpainting) | 1072 |
| [SphereFace: Deep Hypersphere Embedding for Face Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Liu_SphereFace_Deep_Hypersphere_CVPR_2017_paper.html) | CVPR | [code](https://github.com/wy1iu/sphereface) | 1048 |
| [Deep Feature Flow for Video Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhu_Deep_Feature_Flow_CVPR_2017_paper.html) | CVPR | [code](https://github.com/msracver/Deep-Feature-Flow) | 966 |
| [Bayesian GAN](http://papers.nips.cc/paper/6953-bayesian-gan.pdf) | NIPS | [code](https://github.com/andrewgordonwilson/bayesgan) | 942 |
| [Pyramid Scene Parsing Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hszhao/PSPNet) | 934 |
| [Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes](http://papers.nips.cc/paper/7098-efficient-modeling-of-latent-information-in-supervised-learning-using-gaussian-processes.pdf) | NIPS | [code](https://github.com/SheffieldML/GPy) | 906 |
| [Finding Tiny Faces](http://openaccess.thecvf.com/content_cvpr_2017/html/Hu_Finding_Tiny_Faces_CVPR_2017_paper.html) | CVPR | [code](https://github.com/peiyunh/tiny) | 856 |
| [Toward Multimodal Image-to-Image Translation](http://papers.nips.cc/paper/6650-toward-multimodal-image-to-image-translation.pdf) | NIPS | [code](https://github.com/junyanz/BiCycleGAN) | 794 |
| [Learning to Discover Cross-Domain Relations with Generative Adversarial Networks](http://proceedings.mlr.press/v70/kim17a.html) | ICML | [code](https://github.com/carpedm20/DiscoGAN-pytorch) | 784 |
| [YOLO9000: Better, Faster, Stronger](http://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html) | CVPR | [code](https://github.com/philipperemy/yolo-9000) | 773 |
| [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](http://papers.nips.cc/paper/7095-pointnet-deep-hierarchical-feature-learning-on-point-sets-in-a-metric-space.pdf) | NIPS | [code](https://github.com/charlesq34/pointnet2) | 772 |
| [Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks](http://proceedings.mlr.press/v70/finn17a.html) | ICML | [code](https://github.com/cbfinn/maml) | 729 |
| [FlowNet 2.0: Evolution of Optical Flow Estimation With Deep Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Ilg_FlowNet_2.0_Evolution_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lmb-freiburg/flownet2) | 720 |
| [Channel Pruning for Accelerating Very Deep Neural Networks](http://openaccess.thecvf.com/content_iccv_2017/html/He_Channel_Pruning_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/yihui-he/channel-pruning) | 649 |
| [Dilated Residual Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Yu_Dilated_Residual_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/fyu/drn) | 640 |
| [Inferring and Executing Programs for Visual Reasoning](http://openaccess.thecvf.com/content_iccv_2017/html/Johnson_Inferring_and_Executing_ICCV_2017_paper.html) | ICCV | [code](https://github.com/facebookresearch/clevr-iep) | 636 |
| [DSOD: Learning Deeply Supervised Object Detectors From Scratch](http://openaccess.thecvf.com/content_iccv_2017/html/Shen_DSOD_Learning_Deeply_ICCV_2017_paper.html) | ICCV | [code](https://github.com/szq0214/DSOD) | 582 |
| [Arbitrary Style Transfer in Real-Time With Adaptive Instance Normalization](http://openaccess.thecvf.com/content_iccv_2017/html/Huang_Arbitrary_Style_Transfer_ICCV_2017_paper.html) | ICCV | [code](https://github.com/xunhuang1995/AdaIN-style) | 572 |
| [Accelerating Eulerian Fluid Simulation With Convolutional Networks](http://proceedings.mlr.press/v70/tompson17a.html) | ICML | [code](https://github.com/google/FluidNet) | 570 |
| [Learning Disentangled Representations with Semi-Supervised Deep Generative Models](http://papers.nips.cc/paper/7174-learning-disentangled-representations-with-semi-supervised-deep-generative-models.pdf) | NIPS | [code](https://github.com/probtorch/probtorch) | 556 |
| [Inductive Representation Learning on Large Graphs](http://papers.nips.cc/paper/6703-inductive-representation-learning-on-large-graphs.pdf) | NIPS | [code](https://github.com/williamleif/GraphSAGE) | 552 |
| [Regressing Robust and Discriminative 3D Morphable Models With a Very Deep Neural Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Tran_Regressing_Robust_and_CVPR_2017_paper.html) | CVPR | [code](https://github.com/anhttran/3dmm_cnn) | 537 |
| [How Far Are We From Solving the 2D & 3D Face Alignment Problem? (And a Dataset of 230,000 3D Facial Landmarks)](http://openaccess.thecvf.com/content_iccv_2017/html/Bulat_How_Far_Are_ICCV_2017_paper.html) | ICCV | [code](https://github.com/1adrianb/2D-and-3D-face-alignment) | 526 |
| [SSH: Single Stage Headless Face Detector](http://openaccess.thecvf.com/content_iccv_2017/html/Najibi_SSH_Single_Stage_ICCV_2017_paper.html) | ICCV | [code](https://github.com/mahyarnajibi/SSH) | 515 |
| [Learning From Simulated and Unsupervised Images Through Adversarial Training](http://openaccess.thecvf.com/content_cvpr_2017/html/Shrivastava_Learning_From_Simulated_CVPR_2017_paper.html) | CVPR | [code](https://github.com/carpedm20/simulated-unsupervised-tensorflow) | 492 |
| [Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space](http://openaccess.thecvf.com/content_cvpr_2017/html/Nguyen_Plug__Play_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Evolving-AI-Lab/ppgn) | 487 |
| [Video Frame Interpolation via Adaptive Convolution](http://openaccess.thecvf.com/content_cvpr_2017/html/Niklaus_Video_Frame_Interpolation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/sniklaus/pytorch-sepconv) | 482 |
| [Video Frame Interpolation via Adaptive Separable Convolution](http://openaccess.thecvf.com/content_iccv_2017/html/Niklaus_Video_Frame_Interpolation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/sniklaus/pytorch-sepconv) | 482 |
| [GMS: Grid-based Motion Statistics for Fast, Ultra-Robust Feature Correspondence](http://openaccess.thecvf.com/content_cvpr_2017/html/Bian_GMS_Grid-based_Motion_CVPR_2017_paper.html) | CVPR | [code](https://github.com/JiawangBian/GMS-Feature-Matcher) | 460 |
| [Joint Detection and Identification Feature Learning for Person Search](http://openaccess.thecvf.com/content_cvpr_2017/html/Xiao_Joint_Detection_and_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ShuangLI59/person_search) | 459 |
| [Dual Path Networks](http://papers.nips.cc/paper/7033-dual-path-networks.pdf) | NIPS | [code](https://github.com/cypw/DPNs) | 451 |
| [Flow-Guided Feature Aggregation for Video Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_Flow-Guided_Feature_Aggregation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/msracver/Flow-Guided-Feature-Aggregation) | 436 |
| [Deep Image Matting](http://openaccess.thecvf.com/content_cvpr_2017/html/Xu_Deep_Image_Matting_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Joker316701882/Deep-Image-Matting) | 434 |
| [Richer Convolutional Features for Edge Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Liu_Richer_Convolutional_Features_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yun-liu/rcf) | 399 |
| [Annotating Object Instances With a Polygon-RNN](http://openaccess.thecvf.com/content_cvpr_2017/html/Castrejon_Annotating_Object_Instances_CVPR_2017_paper.html) | CVPR | [code](https://github.com/fidler-lab/polyrnn-pp-pytorch) | 397 |
| [Recurrent Highway Networks](http://proceedings.mlr.press/v70/zilly17a.html) | ICML | [code](https://github.com/julian121266/RecurrentHighwayNetworks) | 397 |
| [Detect to Track and Track to Detect](http://openaccess.thecvf.com/content_iccv_2017/html/Feichtenhofer_Detect_to_Track_ICCV_2017_paper.html) | ICCV | [code](https://github.com/feichtenhofer/Detect-Track) | 387 |
| [RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Lin_RefineNet_Multi-Path_Refinement_CVPR_2017_paper.html) | CVPR | [code](https://github.com/guosheng/refinenet) | 379 |
| [Detecting Oriented Text in Natural Images by Linking Segments](http://openaccess.thecvf.com/content_cvpr_2017/html/Shi_Detecting_Oriented_Text_CVPR_2017_paper.html) | CVPR | [code](https://github.com/dengdan/seglink) | 364 |
| [Deep Lattice Networks and Partial Monotonic Functions](http://papers.nips.cc/paper/6891-deep-lattice-networks-and-partial-monotonic-functions.pdf) | NIPS | [code](https://github.com/tensorflow/lattice) | 349 |
| [Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results](http://papers.nips.cc/paper/6719-mean-teachers-are-better-role-models-weight-averaged-consistency-targets-improve-semi-supervised-deep-learning-results.pdf) | NIPS | [code](https://github.com/CuriousAI/mean-teacher/) | 347 |
| [RON: Reverse Connection With Objectness Prior Networks for Object Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Kong_RON_Reverse_Connection_CVPR_2017_paper.html) | CVPR | [code](https://github.com/taokong/RON) | 345 |
| [Universal Style Transfer via Feature Transforms](http://papers.nips.cc/paper/6642-universal-style-transfer-via-feature-transforms.pdf) | NIPS | [code](https://github.com/Yijunmaverick/UniversalStyleTransfer)
| 344 | \n| [Residual Attention Network for Image Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Residual_Attention_Network_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffwang91\u002Fresidual-attention-network) | 329 | \n| [One-Shot Video Object Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FCaelles_One-Shot_Video_Object_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fscaelles\u002FOSVOS-TensorFlow) | 316 | \n| [Accurate Single Stage Detector Using Recurrent Rolling Convolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRen_Accurate_Single_Stage_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FxiaohaoChen\u002Frrc_detection) | 314 | \n| [Feature Pyramid Networks for Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLin_Feature_Pyramid_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Funsky\u002FFPN) | 310 | \n| [Efficient softmax approximation for GPUs](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fgrave17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fadaptive-softmax) | 304 | \n| [OctNet: Learning Deep 3D Representations at High Resolutions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRiegler_OctNet_Learning_Deep_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgriegler\u002Foctnet) | 302 | \n| [Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLai_Deep_Laplacian_Pyramid_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fphoenix104104\u002FLapSRN) | 301 | \n| [Pixel Recursive Super 
Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDahl_Pixel_Recursive_Super_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fnilboy\u002Fpixel-recursive-super-resolution) | 301 | \n| [Self-Critical Sequence Training for Image Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRennie_Self-Critical_Sequence_Training_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002Fself-critical.pytorch) | 299 | \n| [Age Progression\u002FRegression by Conditional Adversarial Autoencoder](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_Age_ProgressionRegression_by_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FZZUTK\u002FFace-Aging-CAAE) | 297 | \n| [Style Transfer from Non-Parallel Text by Cross-Alignment](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7259-style-transfer-from-non-parallel-text-by-cross-alignment.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fshentianxiao\u002Flanguage-style-transfer) | 296 | \n| [Dilated Recurrent Neural Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6613-dilated-recurrent-neural-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fcode-terminator\u002FDilatedRNN) | 285 | \n| [Lifting From the Deep: Convolutional 3D Pose Estimation From a Single Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTome_Lifting_From_the_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FDenisTome\u002FLifting-from-the-Deep-release) | 280 | \n| [DeepBach: a Steerable Model for Bach Chorales Generation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fhadjeres17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FGhadjeres\u002FDeepBach) | 276 | \n| [The Predictron:  End-To-End Learning and Planning](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fsilver17a.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002Fzhongwen\u002Fpredictron) | 274 | \n| [Convolutional Sequence to Sequence Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fgehring17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Ftobyyouup\u002Fconv_seq2seq) | 258 | \n| [OptNet: Differentiable Optimization as a Layer in Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Famos17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Foptnet) | 245 | \n| [Prototypical Networks for Few-shot Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6996-prototypical-networks-for-few-shot-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjakesnell\u002Fprototypical-networks) | 244 | \n| [Deep Voice: Real-time Neural Text-to-Speech](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Farik17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fisraelg99\u002Fdeepvoice) | 242 | \n| [Reinforcement Learning with Deep Energy-Based Policies](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fhaarnoja17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fhaarnoja\u002Fsoftqlearning) | 233 | \n| [Learning Deep CNN Denoiser Prior for Image Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_Learning_Deep_CNN_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcszn\u002FIRCNN) | 231 | \n| [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7240-gans-trained-by-a-two-time-scale-update-rule-converge-to-a-local-nash-equilibrium.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fbioinf-jku\u002FTTUR) | 229 | \n| [A Point Set Generation Network for 3D Object Reconstruction From a Single Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FFan_A_Point_Set_CVPR_2017_paper.html) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Ffanhqme\u002FPointSetGeneration) | 228 | \n| [Deeply Supervised Salient Object Detection With Short Connections](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHou_Deeply_Supervised_Salient_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJoker316701882\u002FSalient-Object-Detection) | 228 | \n| [BlitzNet: A Real-Time Deep Network for Scene Understanding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDvornik_BlitzNet_A_Real-Time_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fdvornikita\u002Fblitznet) | 227 | \n| [Language Modeling with Gated Convolutional Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fdauphin17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fanantzoid\u002FLanguage-Modeling-GatedCNN) | 221 | \n| [Unlabeled Samples Generated by GAN Improve the Person Re-Identification Baseline in Vitro](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZheng_Unlabeled_Samples_Generated_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Flayumi\u002FPerson-reID_GAN) | 215 | \n| [Stacked Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHuang_Stacked_Generative_Adversarial_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxunhuang1995\u002FSGAN) | 215 | \n| [RMPE: Regional Multi-Person Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FFang_RMPE_Regional_Multi-Person_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FMVIG-SJTU\u002FRMPE) | 215 | \n| [Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLu_Knowing_When_to_CVPR_2017_paper.html) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fjiasenlu\u002FAdaptiveAttention) | 214 | \n| [Generative Face Completion](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLi_Generative_Face_Completion_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYijunmaverick\u002FGenerativeFaceCompletion) | 212 | \n| [VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLee_VPGNet_Vanishing_Point_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FSeokjuLee\u002FVPGNet) | 210 | \n| [The Reversible Residual Network: Backpropagation Without Storing Activations](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6816-the-reversible-residual-network-backpropagation-without-storing-activations.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frenmengye\u002Frevnet-public) | 210 | \n| [Recurrent Scale Approximation for Object Detection in CNN](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLiu_Recurrent_Scale_Approximation_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fsciencefans\u002FRSA-for-object-detection) | 209 | \n| [Learning From Synthetic Humans](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FVarol_Learning_From_Synthetic_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgulvarol\u002Fsurreal) | 207 | \n| [Spatially Adaptive Computation Time for Residual Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FFigurnov_Spatially_Adaptive_Computation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmfigurnov\u002Fsact) | 203 | \n| [Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View 
Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHuang_Beyond_Face_Rotation_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FHRLTY\u002FTP-GAN) | 202 | \n| [3D Bounding Box Estimation Using Deep Learning and Geometry](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMousavian_3D_Bounding_Box_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsmallcorgi\u002F3D-Deepbox) | 200 | \n| [Multi-View 3D Object Detection Network for Autonomous Driving](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChen_Multi-View_3D_Object_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbostondiditeam\u002FMV3D) | 199 | \n| [Visual Dialog](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FDas_Visual_Dialog_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbatra-mlp-lab\u002Fvisdial) | 199 | \n| [Interpretable Explanations of Black Boxes by Meaningful Perturbation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FFong_Interpretable_Explanations_of_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-explain-black-box) | 192 | \n| [Inverse Compositional Spatial Transformer Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLin_Inverse_Compositional_Spatial_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchenhsuanlin\u002Finverse-compositional-STN) | 189 | \n| [FastMask: Segment Multi-Scale Object Candidates in One Shot](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHu_FastMask_Segment_Multi-Scale_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fvoidrank\u002FFastMask) | 189 | \n| [OnACID: Online Analysis of Calcium Imaging Data in Real 
Time](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6832-onacid-online-analysis-of-calcium-imaging-data-in-real-time.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsimonsfoundation\u002Fcaiman) | 189 | \n| [Semantic Scene Completion From a Single Depth Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSong_Semantic_Scene_Completion_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fshurans\u002Fsscnet) | 188 | \n| [Learning Efficient Convolutional Networks Through Network Slimming](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLiu_Learning_Efficient_Convolutional_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002Fslimming) | 186 | \n| [Learning Feature Pyramids for Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FYang_Learning_Feature_Pyramids_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fbearpaw\u002FPyraNet) | 185 | \n| [Be Your Own Prada: Fashion Synthesis With Structural Coherence](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Be_Your_Own_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fzhusz\u002FICCV17-fashionGAN) | 183 | \n| [Scene Graph Generation by Iterative Message Passing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FXu_Scene_Graph_Generation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FdanfeiX\u002Fscene-graph-TF-release) | 182 | \n| [Fast Image Processing With Fully-Convolutional Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FChen_Fast_Image_Processing_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FCQFIO\u002FFastImageProcessing) | 180 | \n| [Learning Multiple Tasks with Multilinear Relationship 
Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6757-learning-multiple-tasks-with-multilinear-relationship-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FXlearn) | 178 | \n| [Learning to Reason: End-To-End Module Networks for Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHu_Learning_to_Reason_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fn2nmn) | 178 | \n| [Single Shot Text Detector With Regional Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHe_Single_Shot_Text_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FBestSonny\u002FSSTD) | 176 | \n| [Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment With Limited Resources](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FBulat_Binarized_Convolutional_Landmark_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002F1adrianb\u002Fbinary-human-pose-estimation) | 175 | \n| [Deep Feature Interpolation for Image Content Changes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FUpchurch_Deep_Feature_Interpolation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpaulu\u002Fdeepfeatinterp) | 170 | \n| [On Human Motion Prediction Using Recurrent Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMartinez_On_Human_Motion_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Funa-dinosauria\u002Fhuman-motion-prediction) | 167 | \n| [Image Super-Resolution via Deep Recursive Residual Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTai_Image_Super-Resolution_via_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftyshiwo\u002FDRRN_CVPR17) | 163 | \n| [Learning Cross-Modal Embeddings for 
Cooking Recipes and Food Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSalvador_Learning_Cross-Modal_Embeddings_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftorralba-lab\u002Fim2recipe) | 160 | \n| [Input Convex Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Famos17b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Ficnn) | 159 | \n| [Simple Does It: Weakly Supervised Instance and Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKhoreva_Simple_Does_It_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fphilferriere\u002Ftfwss) | 159 | \n| [Low-Shot Visual Recognition by Shrinking and Hallucinating Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHariharan_Low-Shot_Visual_Recognition_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Flow-shot-shrink-hallucinate) | 158 | \n| [Oriented Response Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhou_Oriented_Response_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FZhouYanzhao\u002FORN) | 157 | \n| [Soft Proposal Networks for Weakly Supervised Object Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Soft_Proposal_Networks_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fyeezhu\u002FSPN.pytorch) | 154 | \n| [Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmescheder17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FLMescheder\u002FAdversarialVariationalBayes) | 147 | \n| [Axiomatic Attribution for Deep Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fsundararajan17a.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002Fhiranumn\u002FIntegratedGradients) | 146 | \n| [Gradient Episodic Memory for Continual Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7225-gradient-episodic-memory-for-continual-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FGradientEpisodicMemory) | 146 | \n| [DSAC - Differentiable RANSAC for Camera Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBrachmann_DSAC_-_Differentiable_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcvlab-dresden\u002FDSAC) | 144 | \n| [Attend to You: Personalized Image Captioning With Context Sequence Memory Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPark_Attend_to_You_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcesc-park\u002Fattend2u) | 143 | \n| [Conditional Similarity Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FVeit_Conditional_Similarity_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fandreasveit\u002Fconditional-similarity-networks) | 142 | \n| [Language Modeling with Recurrent Highway Hypernetworks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6919-language-modeling-with-recurrent-highway-hypernetworks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjsuarez5341\u002FRecurrent-Highway-Hypernetworks-NIPS) | 141 | \n| [Triple Generative Adversarial Nets](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6997-triple-generative-adversarial-nets.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fzhenxuan00\u002Ftriple-gan) | 138 | \n| [Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6974-interpolated-policy-gradient-merging-on-policy-and-off-policy-gradient-estimation-for-deep-reinforcement-learning.pdf) | NIPS 
| [code](https:\u002F\u002Fgithub.com\u002Fshaneshixiang\u002Frllabplusplus) | 138 | \n| [One-Sided Unsupervised Domain Mapping](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6677-one-sided-unsupervised-domain-mapping.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsagiebenaim\u002FDistanceGAN) | 137 | \n| [Detecting Visual Relationships With Deep Relational Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FDai_Detecting_Visual_Relationships_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdoubledaibo\u002Fdrnet_cvpr2017) | 137 | \n| [Attentive Recurrent Comparators](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fshyam17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Farc-pytorch) | 136 | \n| [Towards 3D Human Pose Estimation in the Wild: A Weakly-Supervised Approach](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhou_Towards_3D_Human_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002Fpose-hg-3d) | 136 | \n| [Learning a Multi-View Stereo Machine](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6640-learning-a-multi-view-stereo-machine.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fakar43\u002Flsm) | 135 | \n| [Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7145-deep-learning-for-precipitation-nowcasting-a-benchmark-and-a-new-model.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsxjscience\u002FHKO-7) | 134 | \n| [Multi-Context Attention for Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChu_Multi-Context_Attention_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbearpaw\u002Fpose-attention) | 131 | \n| [Controlling Perceptual Factors in Neural Style 
Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FGatys_Controlling_Perceptual_Factors_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fleongatys\u002FNeuralImageSynthesis) | 130 | \n| [Bayesian Compression for Deep Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6921-bayesian-compression-for-deep-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FKarenUllrich\u002FTutorial_BayesianCompressionForDL) | 130 | \n| [Adversarial Discriminative Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTzeng_Adversarial_Discriminative_Domain_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcorenel\u002Fpytorch-adda) | 129 | \n| [Working hard to know your neighbor's margins: Local descriptor learning loss](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7068-working-hard-to-know-your-neighbors-margins-local-descriptor-learning-loss.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FDagnyT\u002Fhardnet) | 128 | \n| [Concrete Dropout](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6949-concrete-dropout.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fyaringal\u002FConcreteDropout) | 127 | \n| [SegFlow: Joint Learning for Video Object Segmentation and Optical Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FCheng_SegFlow_Joint_Learning_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FJingchunCheng\u002FSegFlow) | 127 | \n| [Segmentation-Aware Convolutional Networks Using Local Attention Masks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHarley_Segmentation-Aware_Convolutional_Networks_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Faharley\u002Fsegaware) | 126 | \n| [Detail-Revealing Deep Video 
Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTao_Detail-Revealing_Deep_Video_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjiangsutx\u002FSPMC_VideoSR) | 126 | \n| [CREST: Convolutional Residual Learning for Visual Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSong_CREST_Convolutional_Residual_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fybsong00\u002FCREST-Release) | 126 | \n| [Discriminative Correlation Filter With Channel and Spatial Reliability](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLukezic_Discriminative_Correlation_Filter_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falanlukezic\u002Fcsr-dcf) | 124 | \n| [SVDNet for Pedestrian Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSun_SVDNet_for_Pedestrian_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fsyfafterzy\u002FSVDNet-for-Pedestrian-Retrieval) | 121 | \n| [Semantic Image Synthesis via Adversarial Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDong_Semantic_Image_Synthesis_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fwoozzu\u002Fdong_iccv_2017) | 121 | \n| [Spatiotemporal Multiplier Networks for Video Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FFeichtenhofer_Spatiotemporal_Multiplier_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffeichtenhofer\u002Fst-resnet) | 121 | \n| [PoseTrack: Joint Multi-Person Pose Estimation and Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FIqbal_PoseTrack_Joint_Multi-Person_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fiqbalu\u002FPoseTrack-CVPR2017) | 121 | \n| [Hierarchical Attentive Recurrent 
Tracking](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6898-hierarchical-attentive-recurrent-tracking.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fakosiorek\u002Fhart) | 121 | \n| [Good Semi-supervised Learning That Requires a Bad GAN](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7229-good-semi-supervised-learning-that-requires-a-bad-gan.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fkimiyoung\u002Fssl_bad_gan) | 120 | \n| [Deep Watershed Transform for Instance Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBai_Deep_Watershed_Transform_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmin2209\u002Fdwt) | 120 | \n| [Associative Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHaeusser_Associative_Domain_Adaptation_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fhaeusser\u002Flearning_by_association) | 119 | \n| [Learning by Association -- A Versatile Semi-Supervised Training Method for Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHaeusser_Learning_by_Association_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhaeusser\u002Flearning_by_association) | 119 | \n| [Value Prediction Network](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7192-value-prediction-network.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjunhyukoh\u002Fvalue-prediction-network) | 119 | \n| [Unrestricted Facial Geometry Reconstruction Using Image-To-Image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSela_Unrestricted_Facial_Geometry_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmatansel\u002Fpix2vertex) | 119 | \n| [MemNet: A Persistent Memory Network for Image 
Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTai_MemNet_A_Persistent_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ftyshiwo\u002FMemNet) | 119 | \n| [Bayesian Optimization with Gradients](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7111-bayesian-optimization-with-gradients.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwujian16\u002FCornell-MOE) | 117 | \n| [TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6749-terngrad-ternary-gradients-to-reduce-communication-in-distributed-deep-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwenwei202\u002Fterngrad) | 117 | \n| [Compressed Sensing using Generative Models](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fbora17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FAshishBora\u002Fcsgm) | 116 | \n| [Switching Convolutional Neural Network for Crowd Counting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSam_Switching_Convolutional_Neural_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002Fcrowd-counting-scnn) | 116 | \n| [WILDCAT: Weakly Supervised Learning of Deep ConvNets for Image Classification, Pointwise Localization and Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FDurand_WILDCAT_Weakly_Supervised_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdurandtibo\u002Fwildcat.pytorch) | 116 | \n| [Show, Adapt and Tell: Adversarial Training of Cross-Domain Image Captioner](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FChen_Show_Adapt_and_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ftsenghungchen\u002Fshow-adapt-and-tell) | 115 | \n| [Video Frame Synthesis Using Deep Voxel 
Flow](http://openaccess.thecvf.com/content_iccv_2017/html/Liu_Video_Frame_Synthesis_ICCV_2017_paper.html) | ICCV | [code](https://github.com/liuziwei7/voxel-flow) | 114 |
| [Multiple Instance Detection Network With Online Instance Classifier Refinement](http://openaccess.thecvf.com/content_cvpr_2017/html/Tang_Multiple_Instance_Detection_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ppengtang/oicr) | 113 |
| [Deep Pyramidal Residual Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Han_Deep_Pyramidal_Residual_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jhkim89/PyramidNet) | 112 |
| [Train longer, generalize better: closing the generalization gap in large batch training of neural networks](http://papers.nips.cc/paper/6770-train-longer-generalize-better-closing-the-generalization-gap-in-large-batch-training-of-neural-networks.pdf) | NIPS | [code](https://github.com/eladhoffer/bigBatch) | 112 |
| [Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Split-Brain_Autoencoders_Unsupervised_CVPR_2017_paper.html) | CVPR | [code](https://github.com/richzhang/splitbrainauto) | 110 |
| [Unite the People: Closing the Loop Between 3D and 2D Human Representations](http://openaccess.thecvf.com/content_cvpr_2017/html/Lassner_Unite_the_People_CVPR_2017_paper.html) | CVPR | [code](https://github.com/classner/up) | 110 |
| [Learning Combinatorial Optimization Algorithms over Graphs](http://papers.nips.cc/paper/7214-learning-combinatorial-optimization-algorithms-over-graphs.pdf) | NIPS | [code](https://github.com/Hanjun-Dai/graph_comb_opt) | 109 |
| [FeUdal Networks for Hierarchical Reinforcement Learning](http://proceedings.mlr.press/v70/vezhnevets17a.html) | ICML | [code](https://github.com/dmakian/feudal_networks) | 107 |
| [ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression](http://openaccess.thecvf.com/content_iccv_2017/html/Luo_ThiNet_A_Filter_ICCV_2017_paper.html) | ICCV | [code](https://github.com/Roll920/ThiNet) | 105 |
| [Learning a Deep Embedding Model for Zero-Shot Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Learning_a_Deep_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lzrobots/DeepEmbeddingModel_ZSL) | 104 |
| [ECO: Efficient Convolution Operators for Tracking](http://openaccess.thecvf.com/content_cvpr_2017/html/Danelljan_ECO_Efficient_Convolution_CVPR_2017_paper.html) | CVPR | [code](https://github.com/nicewsyly/ECO) | 103 |
| [SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_SCA-CNN_Spatial_and_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zjuchenlong/sca-cnn.cvpr17) | 102 |
| [Multi-View Supervision for Single-View Reconstruction via Differentiable Ray Consistency](http://openaccess.thecvf.com/content_cvpr_2017/html/Tulsiani_Multi-View_Supervision_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/shubhtuls/drc) | 100 |
| [Task-based End-to-end Model Learning in Stochastic Optimization](http://papers.nips.cc/paper/7132-task-based-end-to-end-model-learning-in-stochastic-optimization.pdf) | NIPS | [code](https://github.com/locuslab/e2e-model-learning) | 100 |
| [Learning to Compose Domain-Specific Transformations for Data Augmentation](http://papers.nips.cc/paper/6916-learning-to-compose-domain-specific-transformations-for-data-augmentation.pdf) | NIPS | [code](https://github.com/HazyResearch/tanda) | 97 |
| [Genetic CNN](http://openaccess.thecvf.com/content_iccv_2017/html/Xie_Genetic_CNN_ICCV_2017_paper.html) | ICCV | [code](https://github.com/aqibsaeed/Genetic-CNN) | 97 |
| [HashNet: Deep Learning to Hash by Continuation](http://openaccess.thecvf.com/content_iccv_2017/html/Cao_HashNet_Deep_Learning_ICCV_2017_paper.html) | ICCV | [code](https://github.com/thuml/HashNet) | 97 |
| [Interleaved Group Convolutions](http://openaccess.thecvf.com/content_iccv_2017/html/Zhang_Interleaved_Group_Convolutions_ICCV_2017_paper.html) | ICCV | [code](https://github.com/hellozting/InterleavedGroupConvolutions) | 95 |
| [Deeply-Learned Part-Aligned Representations for Person Re-Identification](http://openaccess.thecvf.com/content_iccv_2017/html/Zhao_Deeply-Learned_Part-Aligned_Representations_ICCV_2017_paper.html) | ICCV | [code](https://github.com/zlmzju/part_reid) | 95 |
| [Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model](http://papers.nips.cc/paper/6635-best-of-both-worlds-transferring-knowledge-from-discriminative-learning-to-a-generative-visual-dialog-model.pdf) | NIPS | [code](https://github.com/jiasenlu/visDial.pytorch) | 94 |
| [Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation](http://openaccess.thecvf.com/content_cvpr_2017/html/Xu_Multi-Scale_Continuous_CRFs_CVPR_2017_paper.html) | CVPR | [code](https://github.com/danxuhk/ContinuousCRF-CNN) | 93 |
| [Octree Generating Networks: Efficient Convolutional Architectures for High-Resolution 3D Outputs](http://openaccess.thecvf.com/content_iccv_2017/html/Tatarchenko_Octree_Generating_Networks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/lmb-freiburg/ogn) | 92 |
| [Semantic Autoencoder for Zero-Shot Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Kodirov_Semantic_Autoencoder_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Elyorcv/SAE) | 92 |
| [Deep Hyperspherical Learning](http://papers.nips.cc/paper/6984-deep-hyperspherical-learning.pdf) | NIPS | [code](https://github.com/wy1iu/SphereNet) | 92 |
| [Decoupled Neural Interfaces using Synthetic Gradients](http://proceedings.mlr.press/v70/jaderberg17a.html) | ICML | [code](https://github.com/andrewliao11/dni.pytorch) | 90 |
| [Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks](http://papers.nips.cc/paper/6960-geometric-matrix-completion-with-recurrent-multi-graph-neural-networks.pdf) | NIPS | [code](https://github.com/fmonti/mgcnn) | 90 |
| [Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search](http://papers.nips.cc/paper/6780-practical-bayesian-optimization-for-model-fitting-with-bayesian-adaptive-direct-search.pdf) | NIPS | [code](https://github.com/lacerbi/bads) | 90 |
| [Optical Flow Estimation Using a Spatial Pyramid Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Ranjan_Optical_Flow_Estimation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/sniklaus/pytorch-spynet) | 90 |
| [AMC: Attention guided Multi-modal Correlation Learning for Image Search](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_AMC_Attention_guided_CVPR_2017_paper.html) | CVPR | [code](https://github.com/kanchen-usc/AMC_ATT) | 90 |
| [Deep Video Deblurring for Hand-Held Cameras](http://openaccess.thecvf.com/content_cvpr_2017/html/Su_Deep_Video_Deblurring_CVPR_2017_paper.html) | CVPR | [code](https://github.com/shuochsu/DeepVideoDeblurring) | 89 |
| [Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data](http://papers.nips.cc/paper/6784-unsupervised-learning-of-disentangled-and-interpretable-representations-from-sequential-data.pdf) | NIPS | [code](https://github.com/wnhsu/FactorizedHierarchicalVAE) | 88 |
| [Causal Effect Inference with Deep Latent-Variable Models](http://papers.nips.cc/paper/7223-causal-effect-inference-with-deep-latent-variable-models.pdf) | NIPS | [code](https://github.com/AMLab-Amsterdam/CEVAE) | 87 |
| [GANs for Biological Image Synthesis](http://openaccess.thecvf.com/content_iccv_2017/html/Osokin_GANs_for_Biological_ICCV_2017_paper.html) | ICCV | [code](https://github.com/aosokin/biogans) | 85 |
| [MMD GAN: Towards Deeper Understanding of Moment Matching Network](http://papers.nips.cc/paper/6815-mmd-gan-towards-deeper-understanding-of-moment-matching-network.pdf) | NIPS | [code](https://github.com/OctoberChang/MMD-GAN) | 84 |
| [Representation Learning by Learning to Count](http://openaccess.thecvf.com/content_iccv_2017/html/Noroozi_Representation_Learning_by_ICCV_2017_paper.html) | ICCV | [code](https://github.com/gitlimlab/Representation-Learning-by-Learning-to-Count) | 84 |
| [Optical Flow in Mostly Rigid Scenes](http://openaccess.thecvf.com/content_cvpr_2017/html/Wulff_Optical_Flow_in_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jswulff/mrflow) | 83 |
| [Fast-Slow Recurrent Neural Networks](http://papers.nips.cc/paper/7173-fast-slow-recurrent-neural-networks.pdf) | NIPS | [code](https://github.com/amujika/Fast-Slow-LSTM) | 82 |
| [Unsupervised Video Summarization With Adversarial LSTM Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Mahasseni_Unsupervised_Video_Summarization_CVPR_2017_paper.html) | CVPR | [code](https://github.com/j-min/Adversarial_Video_Summary) | 82 |
| [Constrained Policy Optimization](http://proceedings.mlr.press/v70/achiam17a.html) | ICML | [code](https://github.com/jachiam/cpo) | 81 |
| [A-NICE-MC: Adversarial Training for MCMC](http://papers.nips.cc/paper/7099-a-nice-mc-adversarial-training-for-mcmc.pdf) | NIPS | [code](https://github.com/jiamings/a-nice-mc) | 80 |
| [Coarse-To-Fine Volumetric Prediction for Single-Image 3D Human Pose](http://openaccess.thecvf.com/content_cvpr_2017/html/Pavlakos_Coarse-To-Fine_Volumetric_Prediction_CVPR_2017_paper.html) | CVPR | [code](https://github.com/geopavlakos/c2f-vol-train) | 80 |
| [End-To-End Instance Segmentation With Recurrent Attention](http://openaccess.thecvf.com/content_cvpr_2017/html/Ren_End-To-End_Instance_Segmentation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/renmengye/rec-attend-public) | 78 |
| [DeLiGAN : Generative Adversarial Networks for Diverse and Limited Data](http://openaccess.thecvf.com/content_cvpr_2017/html/Gurumurthy_DeLiGAN__Generative_CVPR_2017_paper.html) | CVPR | [code](https://github.com/val-iisc/deligan) | 78 |
| [Learning Shape Abstractions by Assembling Volumetric Primitives](http://openaccess.thecvf.com/content_cvpr_2017/html/Tulsiani_Learning_Shape_Abstractions_CVPR_2017_paper.html) | CVPR | [code](https://github.com/shubhtuls/volumetricPrimitives) | 77 |
| [Local Binary Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Juefei-Xu_Local_Binary_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https://github.com/juefeix/lbcnn.torch) | 77 |
| [Raster-To-Vector: Revisiting Floorplan Transformation](http://openaccess.thecvf.com/content_iccv_2017/html/Liu_Raster-To-Vector_Revisiting_Floorplan_ICCV_2017_paper.html) | ICCV | [code](https://github.com/art-programmer/FloorplanTransformation) | 76 |
| [Positive-Unlabeled Learning with Non-Negative Risk Estimator](http://papers.nips.cc/paper/6765-positive-unlabeled-learning-with-non-negative-risk-estimator.pdf) | NIPS | [code](https://github.com/kiryor/nnPUlearning) | 76 |
| [Hard-Aware Deeply Cascaded Embedding](http://openaccess.thecvf.com/content_iccv_2017/html/Yuan_Hard-Aware_Deeply_Cascaded_ICCV_2017_paper.html) | ICCV | [code](https://github.com/PkuRainBow/Hard-Aware-Deeply-Cascaded-Embedding_release) | 75 |
| [Deep Image Harmonization](http://openaccess.thecvf.com/content_cvpr_2017/html/Tsai_Deep_Image_Harmonization_CVPR_2017_paper.html) | CVPR | [code](https://github.com/wasidennis/DeepHarmonization) | 73 |
| [Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis](http://openaccess.thecvf.com/content_cvpr_2017/html/Dai_Shape_Completion_Using_CVPR_2017_paper.html) | CVPR | [code](https://github.com/angeladai/cnncomplete) | 73 |
| [Not All Pixels Are Equal: Difficulty-Aware Semantic Segmentation via Deep Layer Cascade](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Not_All_Pixels_CVPR_2017_paper.html) | CVPR | [code](https://github.com/liuziwei7/region-conv) | 73 |
| [Improved Stereo Matching With Constant Highway Networks and Reflective Confidence Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Shaked_Improved_Stereo_Matching_CVPR_2017_paper.html) | CVPR | [code](https://github.com/amitshaked/resmatch) | 72 |
| [Query-Guided Regression Network With Context Policy for Phrase Grounding](http://openaccess.thecvf.com/content_iccv_2017/html/Chen_Query-Guided_Regression_Network_ICCV_2017_paper.html) | ICCV | [code](https://github.com/kanchen-usc/QRC-Net) | 72 |
| [Top-Down Visual Saliency Guided by Captions](http://openaccess.thecvf.com/content_cvpr_2017/html/Ramanishka_Top-Down_Visual_Saliency_CVPR_2017_paper.html) | CVPR | [code](https://github.com/VisionLearningGroup/caption-guided-saliency) | 72 |
| [Feedback Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Zamir_Feedback_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/amir32002/feedback-networks) | 72 |
| [What Actions Are Needed for Understanding Human Actions in Videos?](http://openaccess.thecvf.com/content_iccv_2017/html/Sigurdsson_What_Actions_Are_ICCV_2017_paper.html) | ICCV | [code](https://github.com/gsig/actions-for-actions) | 71 |
| [Xception: Deep Learning With Depthwise Separable Convolutions](http://openaccess.thecvf.com/content_cvpr_2017/html/Chollet_Xception_Deep_Learning_CVPR_2017_paper.html) | CVPR | [code](https://github.com/tstandley/Xception-PyTorch) | 71 |
| [Action-Decision Networks for Visual Tracking With Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Yun_Action-Decision_Networks_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hellbell/ADNet) | 71 |
| [Video Propagation Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Jampani_Video_Propagation_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/varunjampani/video_prop_networks) | 70 |
| [Image-To-Image Translation With Conditional Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/williamFalcon/pix2pix-keras) | 70 |
| [Quality Aware Network for Set to Set Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Liu_Quality_Aware_Network_CVPR_2017_paper.html) | CVPR | [code](https://github.com/sciencefans/Quality-Aware-Network) | 69 |
| [Self-Supervised Learning of Visual Features Through Embedding Images Into Text Topic Spaces](http://openaccess.thecvf.com/content_cvpr_2017/html/Gomez_Self-Supervised_Learning_of_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lluisgomez/TextTopicNet) | 69 |
| [Deep Subspace Clustering Networks](http://papers.nips.cc/paper/6608-deep-subspace-clustering-networks.pdf) | NIPS | [code](https://github.com/panji1990/Deep-subspace-clustering-networks) | 68 |
| [Escape From Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models](http://openaccess.thecvf.com/content_iccv_2017/html/Klokov_Escape_From_Cells_ICCV_2017_paper.html) | ICCV | [code](https://github.com/fxia22/kdnet.pytorch) | 68 |
| [A Distributional Perspective on Reinforcement Learning](http://proceedings.mlr.press/v70/bellemare17a.html) | ICML | [code](https://github.com/Silvicek/distributional-dqn) | 68 |
| [Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Physically-Based_Rendering_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yindaz/pbrs) | 67 |
| [Deep Transfer Learning with Joint Adaptation Networks](http://proceedings.mlr.press/v70/long17a.html) | ICML | [code](https://github.com/USTCPCS/CVPR2018_attention) | 67 |
| [Training Deep Networks without Learning Rates Through Coin Betting](http://papers.nips.cc/paper/6811-training-deep-networks-without-learning-rates-through-coin-betting.pdf) | NIPS | [code](https://github.com/bremen79/cocob) | 66 |
| [Full Resolution Image Compression With Recurrent Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Toderici_Full_Resolution_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/1zb/pytorch-image-comp-rnn) | 66 |
| [SurfaceNet: An End-To-End 3D Neural Network for Multiview Stereopsis](http://openaccess.thecvf.com/content_iccv_2017/html/Ji_SurfaceNet_An_End-To-End_ICCV_2017_paper.html) | ICCV | [code](https://github.com/mjiUST/SurfaceNet) | 66 |
| [Doubly Stochastic Variational Inference for Deep Gaussian Processes](http://papers.nips.cc/paper/7045-doubly-stochastic-variational-inference-for-deep-gaussian-processes.pdf) | NIPS | [code](https://github.com/ICL-SML/Doubly-Stochastic-DGP) | 66 |
| [TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals](http://openaccess.thecvf.com/content_iccv_2017/html/Gao_TURN_TAP_Temporal_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jiyanggao/TURN-TAP) | 66 |
| [Jointly Attentive Spatial-Temporal Pooling Networks for Video-Based Person Re-Identification](http://openaccess.thecvf.com/content_iccv_2017/html/Xu_Jointly_Attentive_Spatial-Temporal_ICCV_2017_paper.html) | ICCV | [code](https://github.com/shuangjiexu/Spatial-Temporal-Pooling-Networks-ReID) | 65 |
| [Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Soltani_Synthesizing_3D_Shapes_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Amir-Arsalan/Synthesize3DviaDepthOrSil) | 65 |
| [Dance Dance Convolution](http://proceedings.mlr.press/v70/donahue17a.html) | ICML | [code](https://github.com/chrisdonahue/ddc) | 65 |
| [Borrowing Treasures From the Wealthy: Deep Transfer Learning Through Selective Joint Fine-Tuning](http://openaccess.thecvf.com/content_cvpr_2017/html/Ge_Borrowing_Treasures_From_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ZYYSzj/Selective-Joint-Fine-tuning) | 64 |
| [Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes](http://openaccess.thecvf.com/content_iccv_2017/html/Zhang_Curriculum_Domain_Adaptation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/YangZhang4065/AdaptationSeg) | 64 |
| [Toward Controlled Generation of Text](http://proceedings.mlr.press/v70/hu17e.html) | ICML | [code](https://github.com/GBLin5566/toward-controlled-generation-of-text-pytorch) | 63 |
| [Person Re-Identification in the Wild](http://openaccess.thecvf.com/content_cvpr_2017/html/Zheng_Person_Re-Identification_in_CVPR_2017_paper.html) | CVPR | [code](https://github.com/liangzheng06/PRW-baseline) | 63 |
| [ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching](http://papers.nips.cc/paper/7133-alice-towards-understanding-adversarial-learning-for-joint-distribution-matching.pdf) | NIPS | [code](https://github.com/ChunyuanLI/ALICE) | 63 |
| [Differentiable Learning of Logical Rules for Knowledge Base Reasoning](http://papers.nips.cc/paper/6826-differentiable-learning-of-logical-rules-for-knowledge-base-reasoning.pdf) | NIPS | [code](https://github.com/fanyangxyz/Neural-LP) | 62 |
| [Person Search With Natural Language Description](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Person_Search_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ShuangLI59/Person-Search-with-Natural-Language-Description) | 61 |
| [Multi-Channel Weighted Nuclear Norm Minimization for Real Color Image Denoising](http://openaccess.thecvf.com/content_iccv_2017/html/Xu_Multi-Channel_Weighted_Nuclear_ICCV_2017_paper.html) | ICCV | [code](https://github.com/csjunxu/MCWNNM-ICCV2017) | 61 |
| [Playing for Benchmarks](http://openaccess.thecvf.com/content_iccv_2017/html/Richter_Playing_for_Benchmarks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/PatrykChrabaszcz/Canonical_ES_Atari) | 61 |
| [Unsupervised Learning by Predicting Noise](http://proceedings.mlr.press/v70/bojanowski17a.html) | ICML | [code](https://github.com/facebookresearch/noise-as-targets) | 60 |
| [Localizing Moments in Video With Natural Language](http://openaccess.thecvf.com/content_iccv_2017/html/Hendricks_Localizing_Moments_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/LisaAnne/LocalizingMoments) | 60 |
| [End-To-End 3D Face Reconstruction With Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Dou_End-To-End_3D_Face_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ShownX/mxnet-E2FAR) | 60 |
| [CoupleNet: Coupling Global Structure With Local Parts for Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_CoupleNet_Coupling_Global_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tshizys/CoupleNet) | 59 |
| [AdaGAN: Boosting Generative Models](http://papers.nips.cc/paper/7126-adagan-boosting-generative-models.pdf) | NIPS | [code](https://github.com/tolstikhin/adagan) | 59 |
| [Convolutional Gaussian Processes](http://papers.nips.cc/paper/6877-convolutional-gaussian-processes.pdf) | NIPS | [code](https://github.com/markvdw/convgp/) | 57 |
| [A Deep Regression Architecture With Two-Stage Re-Initialization for High Performance Facial Landmark Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Lv_A_Deep_Regression_CVPR_2017_paper.html) | CVPR | [code](https://github.com/shaoxiaohu/Face_Alignment_Two_Stage_Re-initialization) | 57 |
| [Modeling Relationships in Referential Expressions With Compositional Modular Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Hu_Modeling_Relationships_in_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ronghanghu/cmn) | 57 |
| [Curiosity-driven Exploration by Self-supervised Prediction](http://proceedings.mlr.press/v70/pathak17a.html) | ICML | [code](https://github.com/kimhc6028/pytorch-noreward-rl) | 56 |
| [Wavelet-SRNet: A Wavelet-Based CNN for Multi-Scale Face Super Resolution](http://openaccess.thecvf.com/content_iccv_2017/html/Huang_Wavelet-SRNet_A_Wavelet-Based_ICCV_2017_paper.html) | ICCV | [code](https://github.com/hhb072/WaveletSRNet) | 56 |
| [The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process](http://papers.nips.cc/paper/7252-the-neural-hawkes-process-a-neurally-self-modulating-multivariate-point-process.pdf) | NIPS | [code](https://github.com/HMEIatJHU/neurawkes) | 56 |
| [Online and Linear-Time Attention by Enforcing Monotonic Alignments](http://proceedings.mlr.press/v70/raffel17a.html) | ICML | [code](https://github.com/craffel/mad) | 56 |
| [Neural Expectation Maximization](http://papers.nips.cc/paper/7246-neural-expectation-maximization.pdf) | NIPS | [code](https://github.com/sjoerdvansteenkiste/Neural-EM) | 56 |
| [Dense-Captioning Events in Videos](http://openaccess.thecvf.com/content_iccv_2017/html/Krishna_Dense-Captioning_Events_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/ranjaykrishna/densevid_eval) | 55 |
| [Factorized Bilinear Models for Image Recognition](http://openaccess.thecvf.com/content_iccv_2017/html/Li_Factorized_Bilinear_Models_ICCV_2017_paper.html) | ICCV | [code](https://github.com/lyttonhao/Factorized-Bilinear-Network) | 55 |
| [Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee](http://papers.nips.cc/paper/6910-net-trim-convex-pruning-of-deep-neural-networks-with-performance-guarantee.pdf) | NIPS | [code](https://github.com/DNNToolBox/Net-Trim-v1) | 54 |
| [On-the-fly Operation Batching in Dynamic Computation Graphs](http://papers.nips.cc/paper/6986-on-the-fly-operation-batching-in-dynamic-computation-graphs.pdf) | NIPS | [code](https://github.com/neulab/dynet-benchmark) | 54 |
| [Visual Translation Embedding Network for Visual Relation Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Visual_Translation_Embedding_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zawlin/cvpr17_vtranse) | 54 |
| [Learning Blind Motion Deblurring](http://openaccess.thecvf.com/content_iccv_2017/html/Wieschollek_Learning_Blind_Motion_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cgtuebingen/learning-blind-motion-deblurring) | 54 |
| [A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning](http://papers.nips.cc/paper/6951-a-disentangled-recognition-and-nonlinear-dynamics-model-for-unsupervised-learning.pdf) | NIPS | [code](https://github.com/simonkamronn/kvae) | 53 |
| [Towards Diverse and Natural Image Descriptions via a Conditional GAN](http://openaccess.thecvf.com/content_iccv_2017/html/Dai_Towards_Diverse_and_ICCV_2017_paper.html) | ICCV | [code](https://github.com/doubledaibo/gancaption_iccv2017) | 53 |
| [CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos](http://openaccess.thecvf.com/content_cvpr_2017/html/Shou_CDC_Convolutional-De-Convolutional_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ColumbiaDVMM/CDC) | 53 |
| [A Generic Deep Architecture for Single Image Reflection Removal and Image Smoothing](http://openaccess.thecvf.com/content_iccv_2017/html/Fan_A_Generic_Deep_ICCV_2017_paper.html) | ICCV | [code](https://github.com/fqnchina/CEILNet) | 52 |
| [Deep IV: A Flexible Approach for Counterfactual Prediction](http://proceedings.mlr.press/v70/hartford17a.html) | ICML | [code](https://github.com/jhartford/DeepIV) | 52 |
| [Triangle Generative Adversarial Networks](http://papers.nips.cc/paper/7109-triangle-generative-adversarial-networks.pdf) | NIPS | [code](https://github.com/LiqunChen0606/Triangle-GAN) | 51 |
| [EAST: An Efficient and Accurate Scene Text Detector](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhou_EAST_An_Efficient_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Kathrine94/EAST) | 51 |
| [SST: Single-Stream Temporal Action Proposals](http://openaccess.thecvf.com/content_cvpr_2017/html/Buch_SST_Single-Stream_Temporal_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ranjaykrishna/SST) | 51 |
| [Predicting Deeper Into the Future of Semantic Segmentation](http://openaccess.thecvf.com/content_iccv_2017/html/Luc_Predicting_Deeper_Into_ICCV_2017_paper.html) | ICCV | [code](https://github.com/facebookresearch/SegmPred) | 51 |
| [L2-Net: Deep Learning of Discriminative Patch Descriptor in Euclidean Space](http://openaccess.thecvf.com/content_cvpr_2017/html/Tian_L2-Net_Deep_Learning_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yuruntian/L2-Net) | 51 |
| [TALL: Temporal Activity Localization via Language Query](http://openaccess.thecvf.com/content_iccv_2017/html/Gao_TALL_Temporal_Activity_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jiyanggao/TALL) | 50 |
| [Hybrid Reward Architecture for Reinforcement Learning](http://papers.nips.cc/paper/7123-hybrid-reward-architecture-for-reinforcement-learning.pdf) | NIPS | [code](https://github.com/Maluuba/hra) | 50 |
| [Fast Fourier Color Constancy](http://openaccess.thecvf.com/content_cvpr_2017/html/Barron_Fast_Fourier_Color_CVPR_2017_paper.html) | CVPR | [code](https://github.com/google/ffcc) | 49 |
| [Modulating early visual processing by language](http://papers.nips.cc/paper/7237-modulating-early-visual-processing-by-language.pdf) | NIPS | [code](https://github.com/GuessWhatGame/guesswhat) | 49 |
| [Adversarial Examples for Semantic Segmentation and Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Xie_Adversarial_Examples_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cihangxie/DAG) | 49 |
| [Learning Discrete Representations via Information Maximizing Self-Augmented Training](http://proceedings.mlr.press/v70/hu17b.html) | ICML | [code](https://github.com/weihua916/imsat) | 49 |
| [Efficient Diffusion on Region Manifolds: Recovering Small Objects With Compact CNN Representations](http://openaccess.thecvf.com/content_cvpr_2017/html/Iscen_Efficient_Diffusion_on_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ahmetius/diffusion-retrieval) | 48 |
| [Real Time Image Saliency for Black Box Classifiers](http://papers.nips.cc/paper/7272-real-time-image-saliency-for-black-box-classifiers.pdf) | NIPS | [code](https://github.com/PiotrDabkowski/pytorch-saliency) | 48 |
| [FC4: Fully Convolutional Color Constancy With Confidence-Weighted Pooling](http://openaccess.thecvf.com/content_cvpr_2017/html/Hu_FC4_Fully_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yuanming-hu/fc4) | 47 |
| [Multiple People Tracking by Lifted Multicut and Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2017/html/Tang_Multiple_People_Tracking_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jutanke/cabbage) | 47 |
| [Learned D-AMP: Principled Neural Network based Compressive Image Recovery](http://papers.nips.cc/paper/6774-learned-d-amp-principled-neural-network-based-compressive-image-recovery.pdf) | NIPS | [code](https://github.com/ricedsp/D-AMP_Toolbox) | 47 |
| [GP CaKe: Effective brain connectivity with causal kernels](http://papers.nips.cc/paper/6696-gp-cake-effective-brain-connectivity-with-causal-kernels.pdf) | NIPS | [code](https://github.com/LucaAmbrogioni/GP-CaKe-project) | 46 |
| [Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network](http://papers.nips.cc/paper/6854-predicting-organic-reaction-outcomes-with-weisfeiler-lehman-network.pdf) | NIPS | [code](https://github.com/wengong-jin/nips17-rexgen) | 46 |
| [Semantic Video CNNs Through Representation Warping](http://openaccess.thecvf.com/content_iccv_2017/html/Gadde_Semantic_Video_CNNs_ICCV_2017_paper.html) | ICCV | [code](https://github.com/raghudeep/netwarp_public) | 46 |
| [Grammar Variational Autoencoder](http://proceedings.mlr.press/v70/kusner17a.html) | ICML | [code](https://github.com/episodeyang/grammar_variational_autoencoder) | 46 |
| [EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis](http://openaccess.thecvf.com/content_iccv_2017/html/Sajjadi_EnhanceNet_Single_Image_ICCV_2017_paper.html) | ICCV | [code](https://github.com/msmsajjadi/EnhanceNet-Code) | 46 |
| [Safe Model-based Reinforcement Learning with Stability Guarantees](http://papers.nips.cc/paper/6692-safe-model-based-reinforcement-learning-with-stability-guarantees.pdf) | NIPS | [code](https://github.com/befelix/safe_learning) | 45 |
| [Deep Spectral Clustering Learning](http://proceedings.mlr.press/v70/law17a.html) | ICML | [code](https://github.com/wlwkgus/DeepSpectralClustering) | 45 |
| [Semantic Compositional Networks for Visual Captioning](http://openaccess.thecvf.com/content_cvpr_2017/html/Gan_Semantic_Compositional_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zhegan27/Semantic_Compositional_Nets) | 45 |
| [On-Demand Learning for Deep Image Restoration](http://openaccess.thecvf.com/content_iccv_2017/html/Gao_On-Demand_Learning_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/rhgao/on-demand-learning) | 45 |
| [Video Pixel Networks](http://proceedings.mlr.press/v70/kalchbrenner17a.html) | ICML | [code](https://github.com/3ammor/Video-Pixel-Networks) | 45 |
| [Stabilizing Training of Generative Adversarial Networks through Regularization](http://papers.nips.cc/paper/6797-stabilizing-training-of-generative-adversarial-networks-through-regularization.pdf) | NIPS | [code](https://github.com/rothk/Stabilizing_GANs) | 45 |
| [Structured Bayesian Pruning via Log-Normal Multiplicative Noise](http://papers.nips.cc/paper/7254-structured-bayesian-pruning-via-log-normal-multiplicative-noise.pdf) | NIPS | [code](https://github.com/necludov/group-sparsity-sbp) | 44 |
| [Deriving Neural Architectures from Sequence and Graph Kernels](http://proceedings.mlr.press/v70/lei17a.html) | ICML | [code](https://github.com/taolei87/icml17_knn) | 44 |
| [Masked Autoregressive Flow for Density Estimation](http://papers.nips.cc/paper/6828-masked-autoregressive-flow-for-density-estimation.pdf) | NIPS | [code](https://github.com/gpapamak/maf) | 44 |
| [Unsupervised Adaptation for Deep Stereo](http://openaccess.thecvf.com/content_iccv_2017/html/Tonioni_Unsupervised_Adaptation_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/CVLAB-Unibo/Unsupervised-Adaptation-for-Deep-Stereo) | 44 |
| [Learning Residual Images for Face Attribute Manipulation](http://openaccess.thecvf.com/content_cvpr_2017/html/Shen_Learning_Residual_Images_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Zhongdao/FaceAttributeManipulation) | 43 |
| [Learning to Generate Long-term Future via Hierarchical Prediction](http://proceedings.mlr.press/v70/villegas17a.html) | ICML | [code](https://github.com/rubenvillegas/icml2017hierchvid) | 43 |
| [Accurate Optical Flow via Direct Cost Volume Processing](http://openaccess.thecvf.com/content_cvpr_2017/html/Xu_Accurate_Optical_Flow_CVPR_2017_paper.html) | CVPR | [code](https://github.com/IntelVCL/dcflow) | 42 |
| [Generalized Orderless Pooling Performs Implicit Salient Matching](http://openaccess.thecvf.com/content_iccv_2017/html/Simon_Generalized_Orderless_Pooling_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cvjena/alpha_pooling) | 42 |
| [Comparative Evaluation of Hand-Crafted and Learned Local 
Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSchonberger_Comparative_Evaluation_of_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fahojnnes\u002Flocal-feature-evaluation) | 42 | \n| [SchNet: A continuous-filter convolutional neural network for modeling quantum interactions](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6700-schnet-a-continuous-filter-convolutional-neural-network-for-modeling-quantum-interactions.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fatomistic-machine-learning\u002FSchNet) | 41 | \n| [Temporal Generative Adversarial Nets With Singular Value Clipping](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSaito_Temporal_Generative_Adversarial_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fpfnet-research\u002Ftgan) | 41 | \n| [Multiplicative Normalizing Flows for Variational Bayesian Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Flouizos17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FAMLab-Amsterdam\u002FMNF_VBNN) | 41 | \n| [Neural Scene De-Rendering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWu_Neural_Scene_De-Rendering_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiajunwu\u002Fnsd) | 40 | \n| [Semantic Image Inpainting With Deep Generative Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FYeh_Semantic_Image_Inpainting_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FChengBinJin\u002Fsemantic-image-inpainting) | 40 | \n| [A Linear-Time Kernel Goodness-of-Fit Test](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6630-a-linear-time-kernel-goodness-of-fit-test.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwittawatj\u002Fkernel-gof) | 40 | \n| [Least Squares Generative Adversarial 
Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMao_Least_Squares_Generative_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FGunhoChoi\u002FLSGAN-TF) | 39 | \n| [Diversified Texture Synthesis With Feed-Forward Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLi_Diversified_Texture_Synthesis_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYijunmaverick\u002FMultiTextureSynthesis) | 39 | \n| [No Fuss Distance Metric Learning Using Proxies](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMovshovitz-Attias_No_Fuss_Distance_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fdichotomies\u002Fproxy-nca) | 38 | \n| [Template Matching With Deformable Diversity Similarity](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTalmi_Template_Matching_With_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Froimehrez\u002FDDIS) | 38 | \n| [What's in a Question: Using Visual Questions as a Form of Supervision](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FGanju_Whats_in_a_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsidgan\u002Fwhats_in_a_question) | 38 | \n| [Face Normals \"In-The-Wild\" Using Fully Convolutional Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTrigeorgis_Face_Normals_In-The-Wild_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftrigeorgis\u002Fface_normals_cvpr17) | 38 | \n| [Conditional Image Synthesis with Auxiliary Classifier GANs](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fodena17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fkimhc6028\u002Facgan-pytorch) | 37 | \n| [Neural Episodic Control](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fpritzel17a.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002FEndingCredits\u002FNeural-Episodic-Control) | 37 | \n| [3D-PRNN: Generating Shape Primitives With Recurrent Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZou_3D-PRNN_Generating_Shape_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fzouchuhang\u002F3D-PRNN) | 37 | \n| [Structured Embedding Models for Grouped Data](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6629-structured-embedding-models-for-grouped-data.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmariru\u002Fstructured_embeddings) | 36 | \n| [Learning Active Learning from Data](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7010-learning-active-learning-from-data.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fksenia-konyushkova\u002FLAL) | 36 | \n| [Unified Deep Supervised Domain Adaptation and Generalization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMotiian_Unified_Deep_Supervised_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fsamotiian\u002FCCSA) | 35 | \n| [Transformation-Grounded Image Generation Network for Novel 3D View Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPark_Transformation-Grounded_Image_Generation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsilverbottlep\u002Ftvsn) | 35 | \n| [Structured Attentions for Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Structured_Attentions_for_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fshtechair\u002Fvqa-sva) | 34 | \n| [Geometric Loss Functions for Camera Pose Regression With Deep Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKendall_Geometric_Loss_Functions_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffuturely\u002Fdeep-camera-relocalization) | 
34 | \n| [VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FClark_VidLoc_A_Deep_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffuturely\u002Fdeep-camera-relocalization) | 34 | \n| [QMDP-Net: Deep Learning for Planning under Partial Observability](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7055-qmdp-net-deep-learning-for-planning-under-partial-observability.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FAdaCompNUS\u002Fqmdp-net) | 34 | \n| [Using Ranking-CNN for Age Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChen_Using_Ranking-CNN_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FRankingCNN\u002FUsing-Ranking-CNN-for-Age-Estimation) | 33 | \n| [Hierarchical Boundary-Aware Neural Encoder for Video Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBaraldi_Hierarchical_Boundary-Aware_Neural_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYugnaynehc\u002Fbanet) | 33 | \n| [Unsupervised Learning of Disentangled Representations from Video](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7028-unsupervised-learning-of-disentangled-representations-from-video.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fedenton\u002Fdrnet-py) | 32 | \n| [Deep Learning on Lie Groups for Skeleton-Based Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHuang_Deep_Learning_on_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzzhiwu\u002FLieNet) | 32 | \n| [Deep Variation-Structured Reinforcement Learning for Visual Relationship and Attribute Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLiang_Deep_Variation-Structured_Reinforcement_CVPR_2017_paper.html) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fnexusapoorvacus\u002FDeepVariationStructuredRL) | 32 | \n| [3D Point Cloud Registration for Localization Using a Deep Neural Network Auto-Encoder](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FElbaz_3D_Point_Cloud_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgilbaz\u002FLORAX) | 32 | \n| [StyleNet: Generating Attractive Visual Captions With Styles](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FGan_StyleNet_Generating_Attractive_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkacky24\u002Fstylenet) | 32 | \n| [Dynamic Word Embeddings](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fbamler17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FYingyuLiang\u002FSemanticVector) | 32 | \n| [Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7071-learning-to-prune-deep-neural-networks-via-layer-wise-optimal-brain-surgeon.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fcsyhhu\u002FL-OBS) | 31 | \n| [Continual Learning Through Synaptic Intelligence](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fzenke17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fganguli-lab\u002Fpathint) | 31 | \n| [Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPohlen_Full-Resolution_Residual_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhiwonjoon\u002Ftf-frrn) | 31 | \n| [Learning Detection With Diverse Proposals](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FAzadi_Learning_Detection_With_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fazadis\u002FLDDP) | 31 | \n| [LCNN: Lookup-Based Convolutional Neural 
Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBagherinezhad_LCNN_Lookup-Based_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhessamb\u002Flcnn) | 31 | \n| [Towards Accurate Multi-Person Pose Estimation in the Wild](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPapandreou_Towards_Accurate_Multi-Person_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhackiey\u002Fkeypoints) | 30 | \n| [Real-Time Neural Style Transfer for Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHuang_Real-Time_Neural_Style_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcuraai00\u002FRT-StyleTransfer-forVideo) | 30 | \n| [Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FShetty_Speaking_the_Same_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FrakshithShetty\u002FcaptionGAN) | 30 | \n| [Deep Co-Occurrence Feature Learning for Visual Object Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FShih_Deep_Co-Occurrence_Feature_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyafangshih\u002FDeep-COOC) | 29 | \n| [Joint distribution optimal transportation for domain adaptation](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6963-joint-distribution-optimal-transportation-for-domain-adaptation.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frflamary\u002FJDOT) | 29 | \n| [Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FCao_Realtime_Multi-Person_2D_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FPoseAIChallenger\u002Fmxnet_pose_for_AI_challenger) | 29 | \n| [SplitNet: Learning to 
Semantically Split Deep Networks for Parameter Reduction and Model Parallelization](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fkim17b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdalgu90\u002Fsplitnet-wrn) | 29 | \n| [The Statistical Recurrent Unit](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Foliva17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FDLHacks\u002FSRU) | 29 | \n| [A Unified Approach of Multi-Scale Deep and Hand-Crafted Features for Defocus Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPark_A_Unified_Approach_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzzangjinsun\u002FDHDE_CVPR17) | 28 | \n| [Learning Spread-Out Local Feature Descriptors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhang_Learning_Spread-Out_Local_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FColumbiaDVMM\u002FSpread-out_Local_Feature_Descriptor) | 28 | \n| [Event-Based Visual Inertial Odometry](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhu_Event-Based_Visual_Inertial_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdaniilidis-group\u002Fevent_feature_tracking) | 27 | \n| [DropoutNet: Addressing Cold Start in Recommender Systems](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7081-dropoutnet-addressing-cold-start-in-recommender-systems.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Flayer6ai-labs\u002FDropoutNet) | 27 | \n| [Phrase Localization and Visual Relationship Detection With Comprehensive Image-Language Cues](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FPlummer_Phrase_Localization_and_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FBryanPlummer\u002Fpl-clc) | 27 | \n| [Harvesting Multiple Views for Marker-Less 3D Human Pose 
Annotations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPavlakos_Harvesting_Multiple_Views_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgeopavlakos\u002Fharvesting) | 27 | \n| [Deep 360 Pilot: Learning a Deep Agent for Piloting Through 360deg Sports Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHu_Deep_360_Pilot_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Feborboihuc\u002FDeep360Pilot-CVPR17) | 27 | \n| [Neural Message Passing for Quantum Chemistry](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fgilmer17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbrain-research\u002Fmpnn) | 27 | \n| [State-Frequency Memory Recurrent Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fhu17c.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fhhkunming\u002FState-Frequency-Memory-Recurrent-Neural-Networks) | 27 | \n| [DeepCD: Learning Deep Complementary Descriptors for Patch Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FYang_DeepCD_Learning_Deep_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fshamangary\u002FDeepCD) | 26 | \n| [Contrastive Learning for Image Captioning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6691-contrastive-learning-for-image-captioning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fdoubledaibo\u002Fclcaption_nips2017) | 26 | \n| [Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6760-stochastic-optimization-with-variance-reduction-for-infinite-datasets-with-finite-sum-structure.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Falbietz\u002Fstochs) | 26 | \n| [Learning High Dynamic Range From Outdoor 
Panoramas](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhang_Learning_High_Dynamic_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjacenfox\u002Fldr2hdr-public) | 26 | \n| [Speed\u002FAccuracy Trade-Offs for Modern Convolutional Object Detectors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHuang_SpeedAccuracy_Trade-Offs_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frayanelleuch\u002FSpeed-accuracy-trade-offs-for-modern-convolutional-object-detectors) | 26 | \n| [Learning to Detect Salient Objects With Image-Level Supervision](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Learning_to_Detect_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fscott89\u002FWSS) | 26 | \n| [Improved Variational Autoencoders for Text Modeling using Dilated Convolutions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fyang17d.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fryokamoi\u002Fdcnn_textvae) | 26 | \n| [Interspecies Knowledge Transfer for Facial Keypoint Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRashid_Interspecies_Knowledge_Transfer_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmenorashid\u002Fanimal_human_kp) | 25 | \n| [YASS: Yet Another Spike Sorter](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6989-yass-yet-another-spike-sorter.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fpaninski-lab\u002Fyass) | 25 | \n| [Open Set Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FBusto_Open_Set_Domain_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FHeliot7\u002Fopen-set-da) | 25 | \n| [Domain-Adaptive Deep Network 
Compression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMasana_Domain-Adaptive_Deep_Network_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmmasana\u002FDALR) | 24 | \n| [Long Short-Term Memory Kalman Filters: Recurrent Neural Estimators for Pose Regularization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FCoskun_Long_Short-Term_Memory_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FSeleucia\u002Flstmkf_ICCV2017) | 24 | \n| [Temporal Context Network for Activity Localization in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDai_Temporal_Context_Network_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fvdavid70619\u002FTCN) | 24 | \n| [Incremental Learning of Object Detectors Without Catastrophic Forgetting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FShmelkov_Incremental_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fkshmelkov\u002Fincremental_detectors) | 24 | \n| [Dense Captioning With Joint Inference and Visual Context](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FYang_Dense_Captioning_With_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flinjieyangsc\u002Fdensecap) | 24 | \n| [Universal Adversarial Perturbations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMoosavi-Dezfooli_Universal_Adversarial_Perturbations_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002Ffast-feature-fool) | 24 | \n| [Asymmetric Tri-training for Unsupervised Domain Adaptation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fsaito17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fvtddggg\u002FATDA) | 24 | \n| [Reducing Reparameterization Gradient 
Variance](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6961-reducing-reparameterization-gradient-variance.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fandymiller\u002FReducedVarianceReparamGradients) | 24 | \n| [Exploiting Saliency for Object Segmentation From Image Level Labels](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FOh_Exploiting_Saliency_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcoallaoh\u002FGuidedLabelling) | 24 | \n| [A Dirichlet Mixture Model of Hawkes Processes for Event Sequence Clustering](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6734-a-dirichlet-mixture-model-of-hawkes-processes-for-event-sequence-clustering.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FHongtengXu\u002FHawkes-Process-Toolkit) | 24 | \n| [Shading Annotations in the Wild](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKovacs_Shading_Annotations_in_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkovibalu\u002Fsaw_release) | 24 | \n| [Straight to Shapes: Real-Time Detection of Encoded Shapes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FJetley_Straight_to_Shapes_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftorrvision\u002Fstraighttoshapes) | 23 | \n| [Dual Discriminator Generative Adversarial Nets](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6860-dual-discriminator-generative-adversarial-nets.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ftund\u002FD2GAN) | 23 | \n| [Zero-Order Reverse Filtering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTao_Zero-Order_Reverse_Filtering_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjiangsutx\u002FDeFilter) | 23 | \n| [Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent 
Net](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7026-variational-walkback-learning-a-transition-operator-as-a-stochastic-recurrent-net.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fanirudh9119\u002Fwalkback_nips17) | 23 | \n| [Learning Spherical Convolution for Fast Features from 360° Imagery](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6656-learning-spherical-convolution-for-fast-features-from-360-imagery.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsammy-su\u002FSpherical-Convolution) | 22 | \n| [Learning to Detect Sepsis with a Multitask Gaussian Process RNN Classifier](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Ffutoma17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjfutoma\u002FMGP-RNN) | 22 | \n| [Deep Cross-Modal Hashing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FJiang_Deep_Cross-Modal_Hashing_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiangqy\u002FDCMH-CVPR2017) | 22 | \n| [When Unsupervised Domain Adaptation Meets Tensor Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLu_When_Unsupervised_Domain_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fpoppinace\u002FTAISL) | 22 | \n| [Image Super-Resolution Using Dense Skip Connections](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTong_Image_Super-Resolution_Using_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fkweisamx\u002FTensorFlow-SR-DenseNet) | 22 | \n| [Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Multimodal_Transfer_A_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffullfanta\u002Fmultimodal_transfer) | 22 | \n| [STD2P: RGBD Semantic Segmentation Using Spatio-Temporal Data-Driven 
Pooling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHe_STD2P_RGBD_Semantic_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FSSAW14\u002FSTD2P) | 22 | \n| [Learning Continuous Semantic Representations of Symbolic Expressions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fallamanis17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmast-group\u002Feqnet) | 22 | \n| [Deep Growing Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FWang_Deep_Growing_Learning_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FQData\u002Fdeep2Read) | 21 | \n| [Combined Group and Exclusive Sparsity for Deep Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fyoon17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjaehong-yoon93\u002FCGES) | 21 | \n| [Hash Embeddings for Efficient Word Representations](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7078-hash-embeddings-for-efficient-word-representations.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fdsv77\u002Fhashembedding\u002F) | 21 | \n| [Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6850-accuracy-first-selecting-a-differential-privacy-level-for-accuracy-constrained-erm.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsteven7woo\u002FAccuracy-First-Differential-Privacy) | 21 | \n| [Disentangled Representation Learning GAN for Pose-Invariant Face Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTran_Disentangled_Representation_Learning_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhangjunh\u002FDR-GAN-by-pytorch) | 21 | \n| [Learning to Pivot with Adversarial Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6699-learning-to-pivot-with-adversarial-networks.pdf) | NIPS | 
[code](https://github.com/glouppe/paper-learning-to-pivot) | 21 |
| [Learning Dynamic Siamese Network for Visual Object Tracking](http://openaccess.thecvf.com/content_iccv_2017/html/Guo_Learning_Dynamic_Siamese_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tsingqguo/DSiam) | 21 |
| [POSEidon: Face-From-Depth for Driver Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2017/html/Borghi_POSEidon_Face-From-Depth_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/gdubrg/POSEidon-Biwi) | 20 |
| [Deep Metric Learning via Facility Location](http://openaccess.thecvf.com/content_cvpr_2017/html/Song_Deep_Metric_Learning_CVPR_2017_paper.html) | CVPR | [code](https://github.com/CongWeilin/cluster-loss-tensorflow) | 20 |
| [Automatic Spatially-Aware Fashion Concept Discovery](http://openaccess.thecvf.com/content_iccv_2017/html/Han_Automatic_Spatially-Aware_Fashion_ICCV_2017_paper.html) | ICCV | [code](https://github.com/xthan/fashion-200k) | 20 |
| [The Numerics of GANs](http://papers.nips.cc/paper/6779-the-numerics-of-gans.pdf) | NIPS | [code](https://github.com/LMescheder/TheNumericsOfGANs) | 20 |
| [From Motion Blur to Motion Flow: A Deep Learning Solution for Removing Heterogeneous Motion Blur](http://openaccess.thecvf.com/content_cvpr_2017/html/Gong_From_Motion_Blur_CVPR_2017_paper.html) | CVPR | [code](https://github.com/donggong1/motion-flow-syn) | 20 |
| [Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/adepierre/Caffe_CycleGAN) | 20 |
| [Zero-Inflated Exponential Family Embeddings](http://proceedings.mlr.press/v70/liu17a.html) | ICML | [code](https://github.com/blei-lab/zero-inflated-embedding) | 20 |
| [InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations](http://papers.nips.cc/paper/6971-infogail-interpretable-imitation-learning-from-visual-demonstrations.pdf) | NIPS | [code](https://github.com/ermongroup/infogail) | 20 |
| [Weakly-Supervised Learning of Visual Relations](http://openaccess.thecvf.com/content_iccv_2017/html/Peyre_Weakly-Supervised_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jpeyre/unrel) | 20 |
| [Multi-Label Image Recognition by Recurrently Discovering Attentional Regions](http://openaccess.thecvf.com/content_iccv_2017/html/Wang_Multi-Label_Image_Recognition_ICCV_2017_paper.html) | ICCV | [code](https://github.com/James-Yip/AttentionImageClass) | 20 |
| [Scene Parsing With Global Context Embedding](http://openaccess.thecvf.com/content_iccv_2017/html/Hung_Scene_Parsing_With_ICCV_2017_paper.html) | ICCV | [code](https://github.com/hfslyc/GCPNet) | 20 |
| [Context Selection for Embedding Models](http://papers.nips.cc/paper/7067-context-selection-for-embedding-models.pdf) | NIPS | [code](https://github.com/blei-lab/context-selection-embedding) | 20 |
| [Deep Mean-Shift Priors for Image Restoration](http://papers.nips.cc/paper/6678-deep-mean-shift-priors-for-image-restoration.pdf) | NIPS | [code](https://github.com/siavashBigdeli/DMSP) | 20 |
| [Skeleton Key: Image Captioning by Skeleton-Attribute Decomposition](http://openaccess.thecvf.com/content_cvpr_2017/html/Wang_Skeleton_Key_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/feiyu1990/Skeleton-key) | 20 |
| [Fully-Adaptive Feature Sharing in Multi-Task Networks With Applications in Person Attribute Classification](http://openaccess.thecvf.com/content_cvpr_2017/html/Lu_Fully-Adaptive_Feature_Sharing_CVPR_2017_paper.html) | CVPR | [code](https://github.com/luyongxi/deep_share) | 19 |
| [Learning Compact Geometric Features](http://openaccess.thecvf.com/content_iccv_2017/html/Khoury_Learning_Compact_Geometric_ICCV_2017_paper.html) | ICCV | [code](https://github.com/marckhoury/CGF) | 19 |
| [Structured Generative Adversarial Networks](http://papers.nips.cc/paper/6979-structured-generative-adversarial-networks.pdf) | NIPS | [code](https://github.com/thudzj/StructuredGAN) | 19 |
| [Joint Gap Detection and Inpainting of Line Drawings](http://openaccess.thecvf.com/content_cvpr_2017/html/Sasaki_Joint_Gap_Detection_CVPR_2017_paper.html) | CVPR | [code](https://github.com/kaidlc/CVPR2017_linedrawings) | 19 |
| [Chained Multi-Stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Zolfaghari_Chained_Multi-Stream_Networks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/mzolfaghari/chained-multistream-networks) | 19 |
| [Adversarial Feature Matching for Text Generation](http://proceedings.mlr.press/v70/zhang17b.html) | ICML | [code](https://github.com/Jeff-HOU/UROP-Adversarial-Feature-Matching-for-Text-Generation) | 18 |
| [BIER - Boosting Independent Embeddings Robustly](http://openaccess.thecvf.com/content_iccv_2017/html/Opitz_BIER_-_Boosting_ICCV_2017_paper.html) | ICCV | [code](https://github.com/mop/bier) | 18 |
| [Predictive-Corrective Networks for Action Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Dave_Predictive-Corrective_Networks_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/achalddave/predictive-corrective) | 18 |
| [Stochastic Generative Hashing](http://proceedings.mlr.press/v70/dai17a.html) | ICML | [code](https://github.com/doubling/Stochastic_Generative_Hashing) | 18 |
| [A Bayesian Data Augmentation Approach for Learning Deep Models](http://papers.nips.cc/paper/6872-a-bayesian-data-augmentation-approach-for-learning-deep-models.pdf) | NIPS | [code](https://github.com/toantm/keras-bda) | 18 |
| [Attentive Semantic Video Generation Using Captions](http://openaccess.thecvf.com/content_iccv_2017/html/Marwah_Attentive_Semantic_Video_ICCV_2017_paper.html) | ICCV | [code](https://github.com/Singularity42/cap2vid) | 18 |
| [MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_MDNet_A_Semantically_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zizhaozhang/mdnet-cvpr2017) | 18 |
| [Deep Unsupervised Similarity Learning Using Partially Ordered Sets](http://openaccess.thecvf.com/content_cvpr_2017/html/Bautista_Deep_Unsupervised_Similarity_CVPR_2017_paper.html) | CVPR | [code](https://github.com/asanakoy/deep_unsupervised_posets) | 17 |
| [DualNet: Learn Complementary Features for Image Recognition](http://openaccess.thecvf.com/content_iccv_2017/html/Hou_DualNet_Learn_Complementary_ICCV_2017_paper.html) | ICCV | [code](https://github.com/ustc-vim/dualnet) | 17 |
| [Neural system identification for large populations separating “what” and “where”](http://papers.nips.cc/paper/6942-neural-system-identification-for-large-populations-separating-what-and-where.pdf) | NIPS | [code](https://github.com/david-klindt/NIPS2017) | 17 |
| [FALKON: An Optimal Large Scale Kernel Method](http://papers.nips.cc/paper/6978-falkon-an-optimal-large-scale-kernel-method.pdf) | NIPS | [code](https://github.com/LCSL/FALKON_paper) | 17 |
| [Deep Future Gaze: Gaze Anticipation on Egocentric Videos Using Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Deep_Future_Gaze_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Mengmi/deepfuturegaze_gan) | 17 |
| [Deep Learning with Topological Signatures](http://papers.nips.cc/paper/6761-deep-learning-with-topological-signatures.pdf) | NIPS | [code](https://github.com/c-hofer/nips2017) | 17 |
| [Streaming Sparse Gaussian Process Approximations](http://papers.nips.cc/paper/6922-streaming-sparse-gaussian-process-approximations.pdf) | NIPS | [code](https://github.com/thangbui/streaming_sparse_gp) | 17 |
| [RPAN: An End-To-End Recurrent Pose-Attention Network for Action Recognition in Videos](http://openaccess.thecvf.com/content_iccv_2017/html/Du_RPAN_An_End-To-End_ICCV_2017_paper.html) | ICCV | [code](https://github.com/agethen/RPAN) | 17 |
| [Awesome Typography: Statistics-Based Text Effects Transfer](http://openaccess.thecvf.com/content_cvpr_2017/html/Yang_Awesome_Typography_Statistics-Based_CVPR_2017_paper.html) | CVPR | [code](https://github.com/williamyang1991/Text-Effects-Transfer) | 17 |
| [RoomNet: End-To-End Room Layout Estimation](http://openaccess.thecvf.com/content_iccv_2017/html/Lee_RoomNet_End-To-End_Room_ICCV_2017_paper.html) | ICCV | [code](https://github.com/GitBoSun/roomnet) | 17 |
| [Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval](http://openaccess.thecvf.com/content_iccv_2017/html/Song_Deep_Spatial-Semantic_Attention_ICCV_2017_paper.html) | ICCV | [code](https://github.com/yuchuochuo1023/Deep_SBIR_tf) | 16 |
| [Deep Supervised Discrete Hashing](http://papers.nips.cc/paper/6842-deep-supervised-discrete-hashing.pdf) | NIPS | [code](https://github.com/liqi-casia/DSDH-HashingCode) | 16 |
| [Few-Shot Learning Through an Information Retrieval Lens](http://papers.nips.cc/paper/6820-few-shot-learning-through-an-information-retrieval-lens.pdf) | NIPS | [code](https://github.com/eleniTriantafillou/few_shot_mAP_public) | 16 |
| [Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach](http://papers.nips.cc/paper/7023-estimating-accuracy-from-unlabeled-data-a-probabilistic-logic-approach.pdf) | NIPS | [code](https://github.com/eaplatanios/makina) | 16 |
| [Learning to Push the Limits of Efficient FFT-Based Image Deconvolution](http://openaccess.thecvf.com/content_iccv_2017/html/Kruse_Learning_to_Push_ICCV_2017_paper.html) | ICCV | [code](https://github.com/uschmidt83/fourier-deconvolution-network) | 16 |
| [Federated Multi-Task Learning](http://papers.nips.cc/paper/7029-federated-multi-task-learning.pdf) | NIPS | [code](https://github.com/gingsmith/fmtl) | 16 |
| [Label Distribution Learning Forests](http://papers.nips.cc/paper/6685-label-distribution-learning-forests.pdf) | NIPS | [code](https://github.com/shenwei1231/caffe-LDLForests) | 16 |
| [Deep Multitask Architecture for Integrated 2D and 3D Human Sensing](http://openaccess.thecvf.com/content_cvpr_2017/html/Popa_Deep_Multitask_Architecture_CVPR_2017_paper.html) | CVPR | [code](https://github.com/alinionutpopa/dmhs) | 16 |
| [Estimating Mutual Information for Discrete-Continuous Mixtures](http://papers.nips.cc/paper/7180-estimating-mutual-information-for-discrete-continuous-mixtures.pdf) | NIPS | [code](https://github.com/wgao9/mixed_KSG) | 16 |
| [Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes](http://openaccess.thecvf.com/content_cvpr_2017/html/Golestaneh_Spatially-Varying_Blur_Detection_CVPR_2017_paper.html) | CVPR | [code](https://github.com/isalirezag/HiFST) | 16 |
| [StyleBank: An Explicit Representation for Neural Image Style Transfer](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_StyleBank_An_Explicit_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jxcodetw/Stylebank) | 16 |
| [Surface Normals in the Wild](http://openaccess.thecvf.com/content_iccv_2017/html/Chen_Surface_Normals_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/umich-vl/surface_normals) | 15 |
| [Automatic Discovery of the Statistical Types of Variables in a Dataset](http://proceedings.mlr.press/v70/valera17a.html) | ICML | [code](https://github.com/ivaleraM/DataTypes) | 15 |
| [Learning Diverse Image Colorization](http://openaccess.thecvf.com/content_cvpr_2017/html/Deshpande_Learning_Diverse_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/aditya12agd5/divcolor) | 15 |
| [Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems](http://openaccess.thecvf.com/content_iccv_2017/html/Meinhardt_Learning_Proximal_Operators_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tum-vision/learn_prox_ops) | 15 |
| [Non-Local Deep Features for Salient Object Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Luo_Non-Local_Deep_Features_CVPR_2017_paper.html) | CVPR | [code](https://github.com/AceCoooool/NLFD-pytorch) | 15 |
| [Structure-Measure: A New Way to Evaluate Foreground Maps](http://openaccess.thecvf.com/content_iccv_2017/html/Fan_Structure-Measure_A_New_ICCV_2017_paper.html) | ICCV | [code](https://github.com/DengPingFan/S-measure) | 15 |
| [Shallow Updates for Deep Reinforcement Learning](http://papers.nips.cc/paper/6906-shallow-updates-for-deep-reinforcement-learning.pdf) | NIPS | [code](https://github.com/Shallow-Updates-for-Deep-RL/Shallow_Updates_for_Deep_RL) | 15 |
| [Wasserstein Generative Adversarial Networks](http://proceedings.mlr.press/v70/arjovsky17a.html) | ICML | [code](https://github.com/luslab/scRNAseq-WGAN-GP) | 15 |
| [Recurrent 3D Pose Sequence Machines](http://openaccess.thecvf.com/content_cvpr_2017/html/Lin_Recurrent_3D_Pose_CVPR_2017_paper.html) | CVPR | [code](https://github.com/MudeLin/RPSM) | 15 |
| [Variational Dropout Sparsifies Deep Neural Networks](http://proceedings.mlr.press/v70/molchanov17a.html) | ICML | [code](https://github.com/soskek/variational_dropout_sparsifies_dnn) | 15 |
| [Captioning Images With Diverse Objects](http://openaccess.thecvf.com/content_cvpr_2017/html/Venugopalan_Captioning_Images_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/vsubhashini/noc) | 15 |
| [Off-policy evaluation for slate recommendation](http://papers.nips.cc/paper/6954-off-policy-evaluation-for-slate-recommendation.pdf) | NIPS | [code](https://github.com/adith387/slates_semisynth_expts) | 15 |
| [Attributes2Classname: A Discriminative Model for Attribute-Based Unsupervised Zero-Shot Learning](http://openaccess.thecvf.com/content_iccv_2017/html/Demirel_Attributes2Classname_A_Discriminative_ICCV_2017_paper.html) | ICCV | [code](https://github.com/berkandemirel/attributes2classname) | 14 |
| [Benchmarking Denoising Algorithms With Real Photographs](http://openaccess.thecvf.com/content_cvpr_2017/html/Plotz_Benchmarking_Denoising_Algorithms_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lbasek/image-denoising-benchmark) | 14 |
| [Neural Aggregation Network for Video Face Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Yang_Neural_Aggregation_Network_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jinyanxu/Neural-Aggregation-Network-for-Video-Face-Recognition) | 14 |
| [Learned Contextual Feature Reweighting for Image Geo-Localization](http://openaccess.thecvf.com/content_cvpr_2017/html/Kim_Learned_Contextual_Feature_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hyojinie/crn) | 14 |
| [Streaming Weak Submodularity: Interpreting Neural Networks on the Fly](http://papers.nips.cc/paper/6993-streaming-weak-submodularity-interpreting-neural-networks-on-the-fly.pdf) | NIPS | [code](https://github.com/eelenberg/streak) | 14 |
| [CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training](http://openaccess.thecvf.com/content_iccv_2017/html/Bao_CVAE-GAN_Fine-Grained_Image_ICCV_2017_paper.html) | ICCV | [code](https://github.com/yanzhicong/VAE-GAN) | 14 |
| [VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation](http://openaccess.thecvf.com/content_iccv_2017/html/Gan_VQS_Linking_Segmentations_ICCV_2017_paper.html) | ICCV | [code](https://github.com/Cold-Winter/vqs) | 14 |
| [Spherical convolutions and their application in molecular modelling](http://papers.nips.cc/paper/6935-spherical-convolutions-and-their-application-in-molecular-modelling.pdf) | NIPS | [code](https://github.com/deepfold/NIPS2017) | 14 |
| [Multi-Information Source Optimization](http://papers.nips.cc/paper/7016-multi-information-source-optimization.pdf) | NIPS | [code](https://github.com/deepfold/NIPS2017) | 14 |
| [Convolutional Neural Network Architecture for Geometric Matching](http://openaccess.thecvf.com/content_cvpr_2017/html/Rocco_Convolutional_Neural_Network_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hjweide/convnet-for-geometric-matching) | 14 |
| [Neural Face Editing With Intrinsic Image Disentangling](http://openaccess.thecvf.com/content_cvpr_2017/html/Shu_Neural_Face_Editing_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zhixinshu/NeuralFaceEditing) | 14 |
| [Realistic Dynamic Facial Textures From a Single Image Using GANs](http://openaccess.thecvf.com/content_iccv_2017/html/Olszewski_Realistic_Dynamic_Facial_ICCV_2017_paper.html) | ICCV | [code](https://github.com/leehomyc/ICCV-2017-Paper) | 14 |
| [Predictive State Recurrent Neural Networks](http://papers.nips.cc/paper/7186-predictive-state-recurrent-neural-networks.pdf) | NIPS | [code](https://github.com/cmdowney/psrnn) | 13 |
| [Deep TextSpotter: An End-To-End Trainable Scene Text Localization and Recognition Framework](http://openaccess.thecvf.com/content_iccv_2017/html/Busta_Deep_TextSpotter_An_ICCV_2017_paper.html) | ICCV | [code](https://github.com/VeitL/OCR) | 13 |
| [ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events](http://papers.nips.cc/paper/6932-extremeweather-a-large-scale-climate-dataset-for-semi-supervised-detection-localization-and-understanding-of-extreme-weather-events.pdf) | NIPS | [code](https://github.com/eracah/hur-detect) | 13 |
| [Hunt For The Unique, Stable, Sparse And Fast Feature Learning On Graphs](http://papers.nips.cc/paper/6614-hunt-for-the-unique-stable-sparse-and-fast-feature-learning-on-graphs.pdf) | NIPS | [code](https://github.com/vermaMachineLearning/FGSD) | 13 |
| [Consensus Convolutional Sparse Coding](http://openaccess.thecvf.com/content_iccv_2017/html/Choudhury_Consensus_Convolutional_Sparse_ICCV_2017_paper.html) | ICCV | [code](https://github.com/vccimaging/CCSC_code_ICCV2017) | 13 |
| [Weakly Supervised Affordance Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Sawatzky_Weakly_Supervised_Affordance_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ykztawas/Weakly-Supervised-Affordance-Detection) | 13 |
| [Joint Learning of Object and Action Detectors](http://openaccess.thecvf.com/content_iccv_2017/html/Kalogeiton_Joint_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/vkalogeiton/joint-object-action-learning) | 13 |
| [Light Field Blind Motion Deblurring](http://openaccess.thecvf.com/content_cvpr_2017/html/Srinivasan_Light_Field_Blind_CVPR_2017_paper.html) | CVPR | [code](https://github.com/pratulsrinivasan/Light_Field_Blind_Motion_Deblurring) | 13 |
| [Asynchronous Stochastic Gradient Descent with Delay Compensation](http://proceedings.mlr.press/v70/zheng17b.html) | ICML | [code](https://github.com/Microsoft/Delayed-Compensation-Asynchronous-Stochastic-Gradient-Descent-for-Multiverso) | 13 |
| [Unrolled Memory Inner-Products: An Abstract GPU Operator for Efficient Vision-Related Computations](http://openaccess.thecvf.com/content_iccv_2017/html/Lin_Unrolled_Memory_Inner-Products_ICCV_2017_paper.html) | ICCV | [code](https://github.com/johnjohnlin/UMI) | 12 |
| [Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification](http://papers.nips.cc/paper/7125-maximizing-subset-accuracy-with-recurrent-neural-networks-in-multi-label-classification.pdf) | NIPS | [code](https://github.com/JinseokNam/mlc2seq) | 12 |
| [Self-Organized Text Detection With Minimal Post-Processing via Border Learning](http://openaccess.thecvf.com/content_iccv_2017/html/Wu_Self-Organized_Text_Detection_ICCV_2017_paper.html) | ICCV | [code](https://github.com/saicoco/tf-sotd) | 12 |
| [Coordinated Multi-Agent Imitation Learning](http://proceedings.mlr.press/v70/le17a.html) | ICML | [code](https://github.com/hoangminhle/MultiAgent-ImitationLearning) | 12 |
| [Gradient descent GAN optimization is locally stable](http://papers.nips.cc/paper/7142-gradient-descent-gan-optimization-is-locally-stable.pdf) | NIPS | [code](https://github.com/locuslab/gradient_regularized_gan) | 12 |
| [Removing Rain From Single Images via a Deep Detail Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Fu_Removing_Rain_From_CVPR_2017_paper.html) | CVPR | [code](https://github.com/XMU-smartdsp/Removing_Rain) | 12 |
| [Convexified Convolutional Neural Networks](http://proceedings.mlr.press/v70/zhang17f.html) | ICML | [code](https://github.com/zhangyuc/CCNN) | 12 |
| [Multigrid Neural Architectures](http://openaccess.thecvf.com/content_cvpr_2017/html/Ke_Multigrid_Neural_Architectures_CVPR_2017_paper.html) | CVPR | [code](https://github.com/buttomnutstoast/Multigrid-Neural-Architectures) | 12 |
| [VegFru: A Domain-Specific Dataset for Fine-Grained Visual Categorization](http://openaccess.thecvf.com/content_iccv_2017/html/Hou_VegFru_A_Domain-Specific_ICCV_2017_paper.html) | ICCV | [code](https://github.com/ustc-vim/vegfru) | 12 |
| [Attend and Predict: Understanding Gene Regulation by Selective Attention on Chromatin](http://papers.nips.cc/paper/7255-attend-and-predict-understanding-gene-regulation-by-selective-attention-on-chromatin.pdf) | NIPS | [code](https://github.com/QData/AttentiveChrome) | 12 |
| [Differential Angular Imaging for Material Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Xue_Differential_Angular_Imaging_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jiaxue1993/DAIN) | 12 |
| [A Multilayer-Based Framework for Online Background Subtraction With Freely Moving Cameras](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_A_Multilayer-Based_Framework_ICCV_2017_paper.html) | ICCV | [code](https://github.com/EthanZhu90/MultilayerBSMC_ICCV17) | 11 |
| [Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation](http://papers.nips.cc/paper/6821-formal-guarantees-on-the-robustness-of-a-classifier-against-adversarial-manipulation.pdf) | NIPS | [code](https://github.com/max-andr/cross-lipschitz) | 11 |
| [Max-value Entropy Search for Efficient Bayesian Optimization](http://proceedings.mlr.press/v70/wang17e.html) | ICML | [code](https://github.com/zi-w/Max-value-Entropy-Search) | 11 |
| [Higher-Order Integration of Hierarchical Convolutional Activations for Fine-Grained Visual Categorization](http://openaccess.thecvf.com/content_iccv_2017/html/Cai_Higher-Order_Integration_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cssjcai/hihca) | 11 |
| [Generalized Deep Image to Image Regression](http://openaccess.thecvf.com/content_cvpr_2017/html/Santhanam_Generalized_Deep_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/venkai/RBDN) | 11 |
| [Adversarial Image Perturbation for Privacy Protection -- A Game Theory Perspective](http://openaccess.thecvf.com/content_iccv_2017/html/Oh_Adversarial_Image_Perturbation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/coallaoh/AIP) | 11 |
| [Predicting Human Activities Using Stochastic Grammar](http://openaccess.thecvf.com/content_iccv_2017/html/Qi_Predicting_Human_Activities_ICCV_2017_paper.html) | ICCV | [code](https://github.com/SiyuanQi/grammar-activity-prediction) | 11 |
| [DESIRE: Distant Future Prediction in Dynamic Scenes With Interacting Agents](http://openaccess.thecvf.com/content_cvpr_2017/html/Lee_DESIRE_Distant_Future_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yadrimz/DESIRE) | 11 |
| [Fisher GAN](http://papers.nips.cc/paper/6845-fisher-gan.pdf) | NIPS | [code](https://github.com/tomsercu/FisherGAN) | 11 |
| [High-Order Attention Models for Visual Question Answering](http://papers.nips.cc/paper/6957-high-order-attention-models-for-visual-question-answering.pdf) | NIPS | [code](https://github.com/idansc/HighOrderAtten) | 11 |
| [IM2CAD](http://openaccess.thecvf.com/content_cvpr_2017/html/Izadinia_IM2CAD_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yyong119/IM2CAD) | 11 |
| [On Fairness and Calibration](http://papers.nips.cc/paper/7151-on-fairness-and-calibration.pdf) | NIPS | [code](https://github.com/gpleiss/equalized_odds_and_calibration) | 11 |
| [DeepPermNet: Visual Permutation Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Santa_Cruz_DeepPermNet_Visual_Permutation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/rfsantacruz/deep-perm-net) | 10 |
| [f-GANs in an Information Geometric Nutshell](http://papers.nips.cc/paper/6649-f-gans-in-an-information-geometric-nutshell.pdf) | NIPS | [code](https://github.com/qulizhen/fgan_info_geometric) | 10 |
| [Revisiting IM2GPS in the Deep Learning Era](http://openaccess.thecvf.com/content_iccv_2017/html/Vo_Revisiting_IM2GPS_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/lugiavn/revisiting-im2gps) | 10 |
| [Attentional Correlation Filter Network for Adaptive Visual Tracking](http://openaccess.thecvf.com/content_cvpr_2017/html/Choi_Attentional_Correlation_Filter_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jongwon20000/ACFN) | 10 |
| [Learning Cross-Modal Deep Representations for Robust Pedestrian Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Xu_Learning_Cross-Modal_Deep_CVPR_2017_paper.html) | CVPR | [code](https://github.com/danxuhk/CMT-CNN) | 10 |
| [Confident Multiple Choice Learning](http://proceedings.mlr.press/v70/lee17b.html) | ICML | [code](https://github.com/chhwang/cmcl) | 10 |
| [Curriculum Dropout](http://openaccess.thecvf.com/content_iccv_2017/html/Morerio_Curriculum_Dropout_ICCV_2017_paper.html) | ICCV | [code](https://github.com/pmorerio/curriculum-dropout) | 9 |
| [Cognitive Mapping and Planning for Visual Navigation](http://openaccess.thecvf.com/content_cvpr_2017/html/Gupta_Cognitive_Mapping_and_CVPR_2017_paper.html) | CVPR | [code](https://github.com/agiantwhale/cognitive-mapping-agent) | 9 |
| [Optimized Pre-Processing for Discrimination Prevention](http://papers.nips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention.pdf) | NIPS | [code](https://github.com/fair-preprocessing/nips2017) | 9 |
| [Learning Motion Patterns in Videos](http://openaccess.thecvf.com/content_cvpr_2017/html/Tokmakov_Learning_Motion_Patterns_CVPR_2017_paper.html) | CVPR | [code](https://github.com/pirahansiah/opencv) | 9 |
| [Scalable Log Determinants for Gaussian Process Kernel Learning](http://papers.nips.cc/paper/7212-scalable-log-determinants-for-gaussian-process-kernel-learning.pdf) | NIPS | [code](https://github.com/kd383/GPML_SLD) | 9 |
| [A Hierarchical Approach for Generating Descriptive Image Paragraphs](http://openaccess.thecvf.com/content_cvpr_2017/html/Krause_A_Hierarchical_Approach_CVPR_2017_paper.html) | CVPR | [code](https://github.com/InnerPeace-Wu/im2p-tensorflow) | 9 |
| [Deep Crisp Boundaries](http://openaccess.thecvf.com/content_cvpr_2017/html/Wang_Deep_Crisp_Boundaries_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Wangyupei/CED) | 9 |
| [Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization](http://papers.nips.cc/paper/6611-breaking-the-nonsmooth-barrier-a-scalable-parallel-method-for-composite-optimization.pdf) | NIPS | [code](https://github.com/fabianp/ProxASAGA) | 9 |
| [Practical Data-Dependent Metric Compression with Provable Guarantees](http://papers.nips.cc/paper/6855-practical-data-dependent-metric-compression-with-provable-guarantees.pdf) | NIPS | [code](https://github.com/talwagner/quadsketch) | 9 |
| [Do Deep Neural Networks Suffer from Crowding?](http://papers.nips.cc/paper/7146-do-deep-neural-networks-suffer-from-crowding.pdf) | NIPS | [code](https://github.com/CBMM/eccentricity) | 9 |
| [A Non-Convex Variational Approach to Photometric Stereo Under Inaccurate Lighting](http://openaccess.thecvf.com/content_cvpr_2017/html/Queau_A_Non-Convex_Variational_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yqueau/robust_ps) | 9 |
| [End-To-End Learning of Geometry and Context for Deep Stereo Regression](http://openaccess.thecvf.com/content_iccv_2017/html/Kendall_End-To-End_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/liuruijin17/RickLiuGC) | 9 |
| [From Bayesian Sparsity to Gated Recurrent Nets](http://papers.nips.cc/paper/7139-from-bayesian-sparsity-to-gated-recurrent-nets.pdf) | NIPS | [code](https://github.com/hehaodele/SBL-LSTM-Net) | 8 |
| [Regret Minimization in MDPs with Options without Prior Knowledge](http://papers.nips.cc/paper/6909-regret-minimization-in-mdps-with-options-without-prior-knowledge.pdf) | NIPS | [code](https://github.com/RonanFR/UCRL) | 8 |
| [Following Gaze in Video](http://openaccess.thecvf.com/content_iccv_2017/html/Recasens_Following_Gaze_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/recasens/Gaze-Following) | 8 |
| [Model-Powered Conditional Independence Test](http://papers.nips.cc/paper/6888-model-powered-conditional-independence-test.pdf) | NIPS | [code](https://github.com/rajatsen91/CCIT) | 8 |
| [Cost efficient gradient boosting](http://papers.nips.cc/paper/6753-cost-efficient-gradient-boosting.pdf) | NIPS | [code](https://github.com/svenpeter42/LightGBM-CEGB) | 8 |
| [Reflectance Adaptive Filtering Improves Intrinsic Image Estimation](http://openaccess.thecvf.com/content_cvpr_2017/html/Nestmeyer_Reflectance_Adaptive_Filtering_CVPR_2017_paper.html) | CVPR | [code](https://github.com/tnestmeyer/reflectance-filtering) | 8 |
| [DeepNav: Learning to Navigate Large Cities](http://openaccess.thecvf.com/content_cvpr_2017/html/Brahmbhatt_DeepNav_Learning_to_CVPR_2017_paper.html) | CVPR | [code](https://github.com/samarth-robo/deepnav_cvpr17) | 8 |
| [Look, Listen and Learn](http://openaccess.thecvf.com/content_iccv_2017/html/Arandjelovic_Look_Listen_and_ICCV_2017_paper.html) | ICCV | [code](https://github.com/Kajiyu/LLLNet) | 8 |
| [Attention-Aware Face Hallucination via Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Cao_Attention-Aware_Face_Hallucination_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ykshi/facehallucination) | 8 |
| [Plan, Attend, Generate: Planning for Sequence-to-Sequence Models](http://papers.nips.cc/paper/7131-plan-attend-generate-planning-for-sequence-to-sequence-models.pdf) | NIPS | [code](https://github.com/Dutil/PAG) | 8 |
| [Introspective Neural Networks for Generative Modeling](http://openaccess.thecvf.com/content_iccv_2017/html/Lazarow_Introspective_Neural_Networks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/intermilan/inng) | 8 |
| [Affinity Clustering: Hierarchical Clustering at Scale](http://papers.nips.cc/paper/7262-affinity-clustering-hierarchical-clustering-at-scale.pdf) | NIPS | [code](https://github.com/MahsaDerakhshan/AffinityClustering) | 8 |
| [Gaze Embeddings for Zero-Shot Image Classification](http://openaccess.thecvf.com/content_cvpr_2017/html/Karessli_Gaze_Embeddings_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Noura-kr/CVPR17) | 8 |
| [Input Switched Affine Networks: An RNN Architecture Designed for Interpretability](http://proceedings.mlr.press/v70/foerster17a.html) | ICML | [code](https://github.com/philipperemy/tensorflow-isan-rnn) | 8 |
| [Online multiclass boosting](http://papers.nips.cc/paper/6693-online-multiclass-boosting.pdf) | NIPS | [code](https://github.com/yhjung88/OnlineBoostingWithVFDT) | 8 |
| [Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images](http://openaccess.thecvf.com/content_iccv_2017/html/Orekondy_Towards_a_Visual_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tribhuvanesh/vpa) | 8 |
| [SubUNets: End-To-End Hand Shape and Continuous Sign Language Recognition](http://openaccess.thecvf.com/content_iccv_2017/html/Camgoz_SubUNets_End-To-End_Hand_ICCV_2017_paper.html) | ICCV | [code](https://github.com/neccam/SubUNets) | 7 |
| [Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition](http://papers.nips.cc/paper/6713-learning-koopman-invariant-subspaces-for-dynamic-mode-decomposition.pdf) | NIPS | [code](https://github.com/thetak11/learning-kis) | 7 |
| [Unsupervised Monocular Depth Estimation With Left-Right Consistency](http://openaccess.thecvf.com/content_cvpr_2017/html/Godard_Unsupervised_Monocular_Depth_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yukitsuji/monodepth_chainer) | 7 |
| [Personalized Image Aesthetics](http://openaccess.thecvf.com/content_iccv_2017/html/Ren_Personalized_Image_Aesthetics_ICCV_2017_paper.html) | ICCV | [code](https://github.com/alanspike/personalizedImageAesthetics) | 7 |
| [Reasoning About Fine-Grained Attribute Phrases Using Reference Games](http://openaccess.thecvf.com/content_iccv_2017/html/Su_Reasoning_About_Fine-Grained_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jongchyisu/attribute_phrases) | 7 |
| [Lost Relatives of the Gumbel Trick](http://proceedings.mlr.press/v70/balog17a.html) | ICML | [code](https://github.com/matejbalog/gumbel-relatives) | 7 |
| [Weakly Supervised Learning of Deep Metrics for Stereo Reconstruction](http://openaccess.thecvf.com/content_iccv_2017/html/Tulyakov_Weakly_Supervised_Learning_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tlkvstepan/mc-cnn-ws) | 7 |
| [Centered Weight Normalization in Accelerating Training of Deep Neural Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Huang_Centered_Weight_Normalization_ICCV_2017_paper.html) | ICCV | [code](https://github.com/huangleiBuaa/CenteredWN) | 6 |
| [Scalable Planning with Tensorflow for Hybrid Nonlinear Domains](http://papers.nips.cc/paper/7207-scalable-planning-with-tensorflow-for-hybrid-nonlinear-domains.pdf) | NIPS | [code](https://github.com/wuga214/TOOLBOX-Learning-and-Planning-through-Backpropagation) | 6 |
| [Convex Global 3D Registration With Lagrangian Duality](http://openaccess.thecvf.com/content_cvpr_2017/html/Briales_Convex_Global_3D_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jbriales/CVPR17) | 6 |
| [Building a Regular Decision Boundary With Deep Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Oyallon_Building_a_Regular_CVPR_2017_paper.html) | CVPR | [code](https://github.com/edouardoyallon/deep_separation_contraction) | 6 |
| [Learning Spatial Regularization With Image-Level Supervisions for Multi-Label Image Classification](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhu_Learning_Spatial_Regularization_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Enjia/Spatial-Regularization-Network-in-Tensorflow) | 6 |
| 
[Forecasting Human Dynamics From Static Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChao_Forecasting_Human_Dynamics_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fywchao\u002Fskeleton2d3d) | 6 | \n| [AOD-Net: All-In-One Dehazing Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLi_AOD-Net_All-In-One_Dehazing_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fweber0522bb\u002FAODnet-by-pytorch) | 6 | \n| [K-Medoids For K-Means Seeding](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7104-k-medoids-for-k-means-seeding.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fidiap\u002Fzentas) | 6 | \n| [Diverse Image Annotation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWu_Diverse_Image_Annotation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwubaoyuan\u002FDIA) | 6 | \n| [Practical Hash Functions for Similarity Estimation and Dimensionality Reduction](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7239-practical-hash-functions-for-similarity-estimation-and-dimensionality-reduction.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fzera\u002FNips_MT) | 6 | \n| [Deep Adaptive Image Clustering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FChang_Deep_Adaptive_Image_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FHongtaoYang\u002FDAC-tensorflow) | 6 | \n| [Robust Adversarial Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fpinto17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FJekyll1021\u002FRARL) | 6 | \n| [Improving Training of Deep Neural Networks via Singular Value Bounding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FJia_Improving_Training_of_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkui-jia\u002Fsvb) | 6 | \n| [Analyzing 
Hidden Representations in End-to-End Automatic Speech Recognition Systems](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6838-analyzing-hidden-representations-in-end-to-end-automatic-speech-recognition-systems.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fboknilev\u002Fasr-repr-analysis) | 6 | \n| [Tensor Belief Propagation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fwrigley17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fakxlr\u002Ftbp) | 6 | \n| [Sparse convolutional coding for neuronal assembly detection](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6958-sparse-convolutional-coding-for-neuronal-assembly-detection.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsccfnad\u002FSparse-convolutional-coding-for-neuronal-assembly-detection) | 6 | \n| [Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBousmalis_Unsupervised_Pixel-Level_Domain_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frhythm92\u002FUnsupervised-Pixel-Level-Domain-Adaptation-with-GAN) | 6 | \n| [Bayesian inference on random simple graphs with power law degree distributions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Flee17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjuho-lee\u002Fpowerlawgraph) | 6 | \n| [Tensor Biclustering](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6730-tensor-biclustering.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FSoheilFeizi\u002FTensor-Biclustering) | 6 | \n| [Riemannian approach to batch normalization](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7107-riemannian-approach-to-batch-normalization.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FMinhyungCho\u002Friemannian-batch-normalization) | 6 | \n| [Unsupervised Learning of Object Landmarks by Factorized Spatial 
Embeddings](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FThewlis_Unsupervised_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Falldbi\u002FFactorized-Spatial-Embeddings) | 6 | \n| [Rolling-Shutter-Aware Differential SfM and Image Rectification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhuang_Rolling-Shutter-Aware_Differential_SfM_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FThomasZiegler\u002FRS-aware-differential-SfM) | 5 | \n| [Active Decision Boundary Annotation With Deep Generative Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHuijser_Active_Decision_Boundary_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FMiriamHu\u002FActiveBoundary) | 5 | \n| [Object Co-Skeletonization With Co-Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FJerripothula_Object_Co-Skeletonization_With_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjkoteswarrao\u002FObject-Co-skeletonization-with-Co-segmentation) | 5 | \n| [Discover and Learn New Objects From Documentaries](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChen_Discover_and_Learn_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhellock\u002Fdocumentary-learning) | 5 | \n| [Understanding Black-box Predictions via Influence Functions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fkoh17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Feolecvk\u002FInfluenceFunctions) | 5 | \n| [Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPatrini_Making_Deep_Neural_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FGarrettLee\u002Flabel_noise_correction) | 5 | \n| [Decoupling \"when to update\" 
from \"how to update\"](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6697-decoupling-when-to-update-from-how-to-update.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Femalach\u002FUpdateByDisagreement) | 5 | \n| [MarioQA: Answering Questions by Watching Gameplay Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMun_MarioQA_Answering_Questions_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FJonghwanMun\u002FMarioQA) | 5 | \n| [Differentially private Bayesian learning on distributed data](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6915-differentially-private-bayesian-learning-on-distributed-data.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FDPBayes\u002Fdca-nips2017) | 5 | \n| [Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSelvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fcydonia999\u002FGrad-CAM-in-TensorFlow) | 5 | \n| [Question Asking as Program Generation](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6705-question-asking-as-program-generation.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fanselmrothe\u002Fquestion_dataset) | 5 | \n| [Conic Scan-and-Cover algorithms for nonparametric topic modeling](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6977-conic-scan-and-cover-algorithms-for-nonparametric-topic-modeling.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmoonfolk\u002FGeometric-Topic-Modeling) | 5 | \n| [Lip Reading Sentences in the Wild](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChung_Lip_Reading_Sentences_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flsrock1\u002FWLSNet_pytorch) | 5 | \n| [ROAM: A Rich Object Appearance Model With Application to 
Rotoscoping](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMiksik_ROAM_A_Rich_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fomiksik\u002Froam) | 5 | \n| [NeuralFDR: Learning Discovery Thresholds from Hypothesis Features](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6752-neuralfdr-learning-discovery-thresholds-from-hypothesis-features.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffxia22\u002FNeuralFDR) | 5 | \n| [Viraliency: Pooling Local Virality](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FAlameda-Pineda_Viraliency_Pooling_Local_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxavirema\u002Flena_pooling) | 5 | \n| [Learning Algorithms for Active Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fbachman17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fvtphan\u002FCode4Brownies) | 5 | \n| [Point to Set Similarity Based Deep Feature Learning for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhou_Point_to_Set_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsamaonline\u002FPoint-to-Set-Similarity-Based-Deep-Feature-Learning-for-Person-Re-identification) | 5 | \n| [Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSzeto_Click_Here_Human-Localized_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Frszeto\u002Fclick-here-cnn) | 5 | \n| [The World of Fast Moving Objects](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRozumnyi_The_World_of_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FqixuanHou\u002FMapping-My-Break) | 5 | \n| [Cross-Modality Binary Code Learning via Fusion Similarity 
Hashing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLiu_Cross-Modality_Binary_Code_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLynnHongLiu\u002FFSH) | 5 | \n| [Testing and Learning on Distributions with Symmetric Noise Invariance](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6733-testing-and-learning-on-distributions-with-symmetric-noise-invariance.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fhcllaw\u002Fphase_learn) | 5 | \n| [Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7268-sticking-the-landing-simple-lower-variance-gradient-estimators-for-variational-inference.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fgeoffroeder\u002Fiwae) | 5 | \n| [Diving into the shallows: a computational perspective on large-scale shallow learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6968-diving-into-the-shallows-a-computational-perspective-on-large-scale-shallow-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FEigenPro\u002FEigenPro-tensorflow) | 5 | \n| [Rotation Equivariant Vector Field Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMarcos_Rotation_Equivariant_Vector_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fdmarcosg\u002FRotEqNet) | 5 | \n| [Recursive Sampling for the Nystrom Method](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6973-recursive-sampling-for-the-nystrom-method.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fcnmusco\u002Frecursive-nystrom) | 5 | \n| [Learning From Video and Text via Large-Scale Discriminative Clustering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMiech_Learning_From_Video_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fantoine77340\u002Ficcv17learning) | 5 | \n| [Global optimization of Lipschitz 
functions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmalherbe17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FSycor4x\u002Flipo) | 5 | \n| [Device Placement Optimization with Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmirhoseini17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Findrajeet95\u002FDevice-Placement-Optimization-with-Reinforcement-Learning) | 4 | \n| [Alternating Direction Graph Matching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLe-Huu_Alternating_Direction_Graph_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnetw0rkf10w\u002Fadgm) | 4 | \n| [MEC: Memory-efficient Convolution for Deep Neural Network](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fcho17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FCSshengxy\u002FMEC) | 4 | \n| [Expert Gate: Lifelong Learning With a Network of Experts](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FAljundi_Expert_Gate_Lifelong_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frahafaljundi\u002FExpert-Gate) | 4 | \n| [A Simple yet Effective Baseline for 3D Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMartinez_A_Simple_yet_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fnulledge\u002Fbilinear) | 4 | \n| [On Structured Prediction Theory with Calibrated Convex Surrogate Losses](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6634-on-structured-prediction-theory-with-calibrated-convex-surrogate-losses.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Faosokin\u002FconsistentSurrogates_derivations) | 4 | \n| [Sub-sampled Cubic Regularization for Non-convex Optimization](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fkohler17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdalab\u002Fsubsampled_cubic_regularization) | 4 | \n| [Generalized 
Semantic Preserving Hashing for N-Label Cross-Modal Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMandal_Generalized_Semantic_Preserving_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdevraj89\u002FGeneralized-Semantic-Preserving-Hashing-for-N-Label-Cross-Modal-Retrieval) | 4 | \n| [Bottleneck Conditional Density Estimation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fshu17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FRuiShu\u002Fbcde) | 4 | \n| [Learning Cooperative Visual Dialog Agents With Deep Reinforcement Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDas_Learning_Cooperative_Visual_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fschopra8\u002FCooperative_Vis_Diag_RL) | 4 | \n| [Multi-way Interacting Regression via Factorization Machines](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6853-multi-way-interacting-regression-via-factorization-machines.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmoonfolk\u002FMiFM) | 4 | \n| [Joint Discovery of Object States and Manipulation Actions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FAlayrac_Joint_Discovery_of_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjalayrac\u002Fobject-states-action) | 4 | \n| [Predicting Salient Face in Multiple-Face Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLiu_Predicting_Salient_Face_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftonysy\u002Fsalient-face-in-MUVFET) | 4 | \n| [From Red Wine to Red Tomato: Composition With Context](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMisra_From_Red_Wine_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fimisra\u002Fcomposing_cvpr17) | 4 | \n| [Encoder Based Lifelong 
Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FRannen_Encoder_Based_Lifelong_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Frahafaljundi\u002FEncoder-Based-Lifelong-learning) | 4 | \n| [Deep Recurrent Neural Network-Based Identification of Precursor microRNAs](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6882-deep-recurrent-neural-network-based-identification-of-precursor-micrornas.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Feleventh83\u002FdeepMiRGene) | 4 | \n| [Guarantees for Greedy Maximization of Non-submodular Functions with Applications](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fbian17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbianan\u002Fnon-submodular-max) | 4 | \n| [Pose-Aware Person Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKumar_Pose-Aware_Person_Recognition_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fvijaykumar01\u002Fperson_recognition) | 4 | \n| [Zero-Shot Recognition Using Dual Visual-Semantic Mapping Paths](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLi_Zero-Shot_Recognition_Using_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYanaLee\u002FZero-Shot-Recognition-using-Dual-Visual-Semantic-Mapping-Paths) | 4 | \n| [Asynchronous Distributed Variational Gaussian Processes for Regression](nan) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fhao-peng\u002FADVGP) | 3 | \n| [Saliency Pattern Detection by Ranking Structured Trees](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Saliency_Pattern_Detection_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fzhulei2016\u002FRST-saliency) | 3 | \n| [Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal 
System](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6849-toward-goal-driven-neural-network-models-for-the-rodent-whisker-trigeminal-system.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fneuroailab\u002Fwhisker_model) | 3 | \n| [Learning Non-Maximum Suppression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHosang_Learning_Non-Maximum_Suppression_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FXingchenYu\u002Fpedestrian_detection_iosapp) | 3 | \n| [Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fcong17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmingyuanzhou\u002FDeepLDA_TLASGR_MCMC) | 3 | \n| [Discriminative Bimodal Networks for Visual Localization and Detection With Natural Language Queries](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_Discriminative_Bimodal_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYutingZhang\u002Fdbnet-caffe-matlab) | 3 | \n| [AdaNet: Adaptive Structural Learning of Artificial Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fcortes17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdavidabek1\u002Fadanet) | 3 | \n| [Large Margin Object Tracking With Circulant Feature Maps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Large_Margin_Object_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsallymmx\u002FLMCF) | 3 | \n| [Compatible Reward Inverse Reinforcement Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6800-compatible-reward-inverse-reinforcement-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Falbertometelli\u002Fcrirl) | 3 | \n| [Adversarial Surrogate Losses for Ordinal 
Regression](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6659-adversarial-surrogate-losses-for-ordinal-regression.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frizalzaf\u002Fadversarial-ordinal) | 3 | \n| [Non-monotone Continuous DR-submodular  Maximization: Structure and Algorithms](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6652-continuous-dr-submodular-maximization-structure-and-algorithms.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fbianan) | 3 | \n| [Unifying PAC and Regret: Uniform PAC Bounds for Episodic Reinforcement Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7154-unifying-pac-and-regret-uniform-pac-bounds-for-episodic-reinforcement-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fchrodan\u002FFiniteEpisodicRL.jl) | 3 | \n| [A framework for Multi-A(rmed)\u002FB(andit) Testing with Online FDR Control](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7177-a-framework-for-multi-armedbandit-testing-with-online-fdr-control.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffanny-yang\u002FMABFDR) | 3 | \n| [Counting Everyday Objects in Everyday Scenes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChattopadhyay_Counting_Everyday_Objects_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fprithv1\u002Fcvpr2017_counting) | 3 | \n| [Loss Max-Pooling for Semantic Image Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBulo_Loss_Max-Pooling_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjjkke88\u002FLMP) | 3 | \n| [Aesthetic Critiques Generation for Photos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FChang_Aesthetic_Critiques_Generation_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fkunghunglu\u002FDeepPhotoCritic-ICCV17) | 3 | \n| [Expectation Propagation with Stochastic Kinetic Model in Complex Interaction 
Systems](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6798-expectation-propagation-with-stochastic-kinetic-model-in-complex-interaction-systems.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Flefangcs\u002FExpectation-Propagation-with-Stochastic-Kinetic-Model-in-Complex-Interaction-Systems) | 3 | \n| [Near-Optimal Edge Evaluation in Explicit Generalized Binomial Graphs](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7049-near-optimal-edge-evaluation-in-explicit-generalized-binomial-graphs.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsanjibac\u002Fmatlab_learning_collision_checking) | 3 | \n\n\u003Cdiv align=\"right\">\n\u003Cb>\u003Ca href=\"#----\">↥ back to top\u003C\u002Fa>\u003C\u002Fb>\n\u003C\u002Fdiv>\n\n## 2016\n| Title | Conf | Code | Stars |\n|:--------|:--------:|:--------:|:--------:|\n| [R-FCN: Object Detection via Region-based Fully Convolutional Networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6465-r-fcn-object-detection-via-region-based-fully-convolutional-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDetectron) | 18356 | \n| [Image Style Transfer Using Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FGatys_Image_Style_Transfer_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fneural-style) | 16435 | \n| [Deep Residual Learning for Image Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FHe_Deep_Residual_Learning_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks) | 4468 | \n| [Convolutional Pose Machines](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FWei_Convolutional_Pose_Machines_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FZheC\u002FRealtime_Multi-Person_Pose_Estimation) | 3260 | \n| [Synthetic Data for Text Localisation in 
Natural Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FGupta_Synthetic_Data_for_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fankush-me\u002FSynthText) | 787 | \n| [Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FLi_Combining_Markov_Random_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchuanli11\u002FCNNMRF) | 731 | \n| [Instance-Aware Semantic Segmentation via Multi-Task Network Cascades](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FDai_Instance-Aware_Semantic_Segmentation_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdaijifeng001\u002FMNC) | 433 | \n| [Learning Multi-Domain Convolutional Neural Networks for Visual Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FNam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHyeonseobNam\u002FMDNet) | 350 | \n| [Convolutional Two-Stream Network Fusion for Video Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FFeichtenhofer_Convolutional_Two-Stream_Network_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffeichtenhofer\u002Ftwostreamfusion) | 342 | \n| [Learning Deep Features for Discriminative Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FZhou_Learning_Deep_Features_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjazzsaxmafia\u002FWeakly_detector) | 323 | \n| [Deep Metric Learning via Lifted Structured Feature Embedding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FSong_Deep_Metric_Learning_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frksltnl\u002FDeep-Metric-Learning-CVPR16) | 
251 |
| [Learning Deep Representations of Fine-Grained Visual Descriptions](http://openaccess.thecvf.com/content_cvpr_2016/html/Reed_Learning_Deep_Representations_CVPR_2016_paper.html) | CVPR | [code](https://github.com/reedscot/cvpr2016) | 229 |
| [Eye Tracking for Everyone](http://openaccess.thecvf.com/content_cvpr_2016/html/Krafka_Eye_Tracking_for_CVPR_2016_paper.html) | CVPR | [code](https://github.com/CSAILVision/GazeCapture) | 223 |
| [NetVLAD: CNN Architecture for Weakly Supervised Place Recognition](http://openaccess.thecvf.com/content_cvpr_2016/html/Arandjelovic_NetVLAD_CNN_Architecture_CVPR_2016_paper.html) | CVPR | [code](https://github.com/Relja/netvlad) | 204 |
| [Staple: Complementary Learners for Real-Time Tracking](http://openaccess.thecvf.com/content_cvpr_2016/html/Bertinetto_Staple_Complementary_Learners_CVPR_2016_paper.html) | CVPR | [code](https://github.com/bertinetto/staple) | 183 |
| [Joint Unsupervised Learning of Deep Representations and Image Clusters](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Joint_Unsupervised_Learning_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jwyang/JULE.torch) | 182 |
| [Accurate Image Super-Resolution Using Very Deep Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Kim_Accurate_Image_Super-Resolution_CVPR_2016_paper.html) | CVPR | [code](https://github.com/Jongchan/tensorflow-vdsr) | 182 |
| [Temporal Action Localization in Untrimmed Videos via Multi-Stage CNNs](http://openaccess.thecvf.com/content_cvpr_2016/html/Shou_Temporal_Action_Localization_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zhengshou/scnn) | 167 |
| [LocNet: Improving Localization Accuracy for Object Detection](http://openaccess.thecvf.com/content_cvpr_2016/html/Gidaris_LocNet_Improving_Localization_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gidariss/LocNet) | 155 |
| [Shallow and Deep Convolutional Networks for Saliency Prediction](http://openaccess.thecvf.com/content_cvpr_2016/html/Pan_Shallow_and_Deep_CVPR_2016_paper.html) | CVPR | [code](https://github.com/imatge-upc/saliency-2016-cvpr) | 153 |
| [Compact Bilinear Pooling](http://openaccess.thecvf.com/content_cvpr_2016/html/Gao_Compact_Bilinear_Pooling_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gy20073/compact_bilinear_pooling) | 148 |
| [Learning Compact Binary Descriptors With Unsupervised Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Lin_Learning_Compact_Binary_CVPR_2016_paper.html) | CVPR | [code](https://github.com/kevinlin311tw/cvpr16-deepbit) | 144 |
| [Dynamic Image Networks for Action Recognition](http://openaccess.thecvf.com/content_cvpr_2016/html/Bilen_Dynamic_Image_Networks_CVPR_2016_paper.html) | CVPR | [code](https://github.com/hbilen/dynamic-image-nets) | 133 |
| [Rethinking the Inception Architecture for Computer Vision](http://openaccess.thecvf.com/content_cvpr_2016/html/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.html) | CVPR | [code](https://github.com/Moodstocks/inception-v3.torch) | 130 |
| [Deep Sliding Shapes for Amodal 3D Object Detection in RGB-D Images](http://openaccess.thecvf.com/content_cvpr_2016/html/Song_Deep_Sliding_Shapes_CVPR_2016_paper.html) | CVPR | [code](https://github.com/shurans/DeepSlidingShape) | 126 |
| [Context Encoders: Feature Learning by Inpainting](http://openaccess.thecvf.com/content_cvpr_2016/html/Pathak_Context_Encoders_Feature_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jazzsaxmafia/Inpainting) | 124 |
| [TI-Pooling: Transformation-Invariant Pooling for Feature Learning in Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Laptev_TI-Pooling_Transformation-Invariant_Pooling_CVPR_2016_paper.html) | CVPR | [code](https://github.com/dlaptev/TI-pooling) | 109 |
| [Weakly Supervised Deep Detection Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Bilen_Weakly_Supervised_Deep_CVPR_2016_paper.html) | CVPR | [code](https://github.com/hbilen/WSDDN) | 103 |
| [Natural Language Object Retrieval](http://openaccess.thecvf.com/content_cvpr_2016/html/Hu_Natural_Language_Object_CVPR_2016_paper.html) | CVPR | [code](https://github.com/ronghanghu/natural-language-object-retrieval) | 100 |
| [Deeply-Recursive Convolutional Network for Image Super-Resolution](http://openaccess.thecvf.com/content_cvpr_2016/html/Kim_Deeply-Recursive_Convolutional_Network_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jiny2001/deeply-recursive-cnn-tf) | 96 |
| [Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network](http://openaccess.thecvf.com/content_cvpr_2016/html/Shi_Real-Time_Single_Image_CVPR_2016_paper.html) | CVPR | [code](https://github.com/leftthomas/ESPCN) | 92 |
| [Image Question Answering Using Convolutional Neural Network With Dynamic Parameter Prediction](http://openaccess.thecvf.com/content_cvpr_2016/html/Noh_Image_Question_Answering_CVPR_2016_paper.html) | CVPR | [code](https://github.com/HyeonwooNoh/DPPnet) | 88 |
| [Recurrent Convolutional Network for Video-Based Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2016/html/McLaughlin_Recurrent_Convolutional_Network_CVPR_2016_paper.html) | CVPR | [code](https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID) | 82 |
| [A Comparative Study for Single Image Blind Deblurring](http://openaccess.thecvf.com/content_cvpr_2016/html/Lai_A_Comparative_Study_CVPR_2016_paper.html) | CVPR | [code](https://github.com/phoenix104104/cvpr16_deblur_study) | 82 |
| [Neural Module Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Andreas_Neural_Module_Networks_CVPR_2016_paper.html) | CVPR | [code](https://github.com/HarshTrivedi/nmn-pytorch) | 81 |
| [Stacked Attention Networks for Image Question Answering](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Stacked_Attention_Networks_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zcyang/imageqa-san) | 78 |
| [Progressive Prioritized Multi-View Stereo](http://openaccess.thecvf.com/content_cvpr_2016/html/Locher_Progressive_Prioritized_Multi-View_CVPR_2016_paper.html) | CVPR | [code](https://github.com/alexlocher/hpmvs) | 73 |
| [Marr Revisited: 2D-3D Alignment via Surface Normal Prediction](http://openaccess.thecvf.com/content_cvpr_2016/html/Bansal_Marr_Revisited_2D-3D_CVPR_2016_paper.html) | CVPR | [code](https://github.com/aayushbansal/MarrRevisited) | 72 |
| [A Hierarchical Deep Temporal Model for Group Activity Recognition](http://openaccess.thecvf.com/content_cvpr_2016/html/Ibrahim_A_Hierarchical_Deep_CVPR_2016_paper.html) | CVPR | [code](https://github.com/mostafa-saad/deep-activity-rec) | 71 |
| [Towards Open Set Deep Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Bendale_Towards_Open_Set_CVPR_2016_paper.html) | CVPR | [code](https://github.com/abhijitbendale/OSDN) | 71 |
| [Robust 3D Hand Pose Estimation in Single Depth Images: From Single-View CNN to Multi-View CNNs](http://openaccess.thecvf.com/content_cvpr_2016/html/Ge_Robust_3D_Hand_CVPR_2016_paper.html) | CVPR | [code](https://github.com/geliuhao/CVPR2016_HandPoseEstimation) | 70 |
| [Bilateral Space Video Segmentation](http://openaccess.thecvf.com/content_cvpr_2016/html/Maerki_Bilateral_Space_Video_CVPR_2016_paper.html) | CVPR | [code](https://github.com/owang/BilateralVideoSegmentation) | 63 |
| [Deep Compositional Captioning: Describing Novel Object Categories Without Paired Training Data](http://openaccess.thecvf.com/content_cvpr_2016/html/Hendricks_Deep_Compositional_Captioning_CVPR_2016_paper.html) | CVPR | [code](https://github.com/LisaAnne/DCC) | 57 |
| [Efficient 3D Room Shape Recovery From a Single Panorama](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Efficient_3D_Room_CVPR_2016_paper.html) | CVPR | [code](https://github.com/YANG-H/Panoramix) | 55 |
| [Non-Local Image Dehazing](http://openaccess.thecvf.com/content_cvpr_2016/html/Berman_Non-Local_Image_Dehazing_CVPR_2016_paper.html) | CVPR | [code](https://github.com/danaberman/non-local-dehazing) | 50 |
| [Video Segmentation via Object Flow](http://openaccess.thecvf.com/content_cvpr_2016/html/Tsai_Video_Segmentation_via_CVPR_2016_paper.html) | CVPR | [code](https://github.com/wasidennis/ObjectFlow) | 50 |
| [Deep Supervised Hashing for Fast Image Retrieval](http://openaccess.thecvf.com/content_cvpr_2016/html/Liu_Deep_Supervised_Hashing_CVPR_2016_paper.html) | CVPR | [code](https://github.com/yg33717/DSH_tensorflow) | 50 |
| [Deep Region and Multi-Label Learning for Facial Action Unit Detection](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhao_Deep_Region_and_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zkl20061823/DRML) | 43 |
| [CRAFT Objects From Images](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_CRAFT_Objects_From_CVPR_2016_paper.html) | CVPR | [code](https://github.com/byangderek/CRAFT) | 41 |
| [Slicing Convolutional Neural Network for Crowd Video Understanding](http://openaccess.thecvf.com/content_cvpr_2016/html/Shao_Slicing_Convolutional_Neural_CVPR_2016_paper.html) | CVPR | [code](https://github.com/amandajshao/Slicing-CNN) | 40 |
| [Sketch Me That Shoe](http://openaccess.thecvf.com/content_cvpr_2016/html/Yu_Sketch_Me_That_CVPR_2016_paper.html) | CVPR | [code](https://github.com/seuliufeng/DeepSBIR) | 39 |
| [Image Captioning With Semantic Attention](http://openaccess.thecvf.com/content_cvpr_2016/html/You_Image_Captioning_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/chapternewscu/image-captioning-with-semantic-attention) | 35 |
| [Deep Saliency With Encoded Low Level Distance Map and High Level Features](http://openaccess.thecvf.com/content_cvpr_2016/html/Lee_Deep_Saliency_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gylee1103/SaliencyELD) | 34 |
| [A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation](http://openaccess.thecvf.com/content_cvpr_2016/html/Perazzi_A_Benchmark_Dataset_CVPR_2016_paper.html) | CVPR | [code](https://github.com/davisvideochallenge/davis-matlab) | 33 |
| [A Dual-Source Approach for 3D Pose Estimation From a Single Image](http://openaccess.thecvf.com/content_cvpr_2016/html/Yasin_A_Dual-Source_Approach_CVPR_2016_paper.html) | CVPR | [code](https://github.com/iqbalu/3D_Pose_Estimation_CVPR2016) | 32 |
| [Learning Local Image Descriptors With Deep Siamese and Triplet Convolutional Networks by Minimising Global Loss Functions](http://openaccess.thecvf.com/content_cvpr_2016/html/G_Learning_Local_Image_CVPR_2016_paper.html) | CVPR | [code](https://github.com/vijaykbg/deep-patchmatch) | 30 |
| [Ordinal Regression With Multiple Output CNN for Age Estimation](http://openaccess.thecvf.com/content_cvpr_2016/html/Niu_Ordinal_Regression_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/luoyetx/OrdinalRegression) | 30 |
| [Structured Feature Learning for Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2016/html/Chu_Structured_Feature_Learning_CVPR_2016_paper.html) | CVPR | [code](https://github.com/chuxiaoselena/StructuredFeature) | 29 |
| [Unsupervised Learning of Edges](http://openaccess.thecvf.com/content_cvpr_2016/html/Li_Unsupervised_Learning_of_CVPR_2016_paper.html) | CVPR | [code](https://github.com/happyharrycn/unsupervised_edges) | 29 |
| [PatchBatch: A Batch Augmented Loss for Optical Flow](http://openaccess.thecvf.com/content_cvpr_2016/html/Gadot_PatchBatch_A_Batch_CVPR_2016_paper.html) | CVPR | [code](https://github.com/DediGadot/PatchBatch) | 27 |
| [Dense Human Body Correspondences Using Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Wei_Dense_Human_Body_CVPR_2016_paper.html) | CVPR | [code](https://github.com/halimacc/DenseHumanBodyCorrespondences) | 27 |
| [Actionness Estimation Using Hybrid Fully Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Wang_Actionness_Estimation_Using_CVPR_2016_paper.html) | CVPR | [code](https://github.com/wanglimin/Actionness-Estimation) | 26 |
| [You Only Look Once: Unified, Real-Time Object Detection](http://openaccess.thecvf.com/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html) | CVPR | [code](https://github.com/andersy005/keras-yolo) | 26 |
| [Fast Training of Triplet-Based Deep Binary Embedding Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhuang_Fast_Training_of_CVPR_2016_paper.html) | CVPR | [code](https://github.com/xwzy/Triplet-deep-hash-pytorch) | 25 |
| [Recurrent Attention Models for Depth-Based Person Identification](http://openaccess.thecvf.com/content_cvpr_2016/html/Haque_Recurrent_Attention_Models_CVPR_2016_paper.html) | CVPR | [code](https://github.com/ahaque/ram_person_id) | 24 |
| [Detecting Vanishing Points Using Global Image Context in a Non-Manhattan World](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhai_Detecting_Vanishing_Points_CVPR_2016_paper.html) | CVPR | [code](https://github.com/viibridges/gc-horizon-detector) | 22 |
| [First Person Action Recognition Using Deep Learned Descriptors](http://openaccess.thecvf.com/content_cvpr_2016/html/Singh_First_Person_Action_CVPR_2016_paper.html) | CVPR | [code](https://github.com/suriyasingh/EgoConvNet) | 21 |
| [Proposal Flow](http://openaccess.thecvf.com/content_cvpr_2016/html/Ham_Proposal_Flow_CVPR_2016_paper.html) | CVPR | [code](https://github.com/bsham/ProposalFlow) | 20 |
| [Scale-Aware Alignment of Hierarchical Image Segmentation](http://openaccess.thecvf.com/content_cvpr_2016/html/Chen_Scale-Aware_Alignment_of_CVPR_2016_paper.html) | CVPR | [code](https://github.com/yuhuayc/align-hier) | 20 |
| [Quantized Convolutional Neural Networks for Mobile Devices](http://openaccess.thecvf.com/content_cvpr_2016/html/Wu_Quantized_Convolutional_Neural_CVPR_2016_paper.html) | CVPR | [code](https://github.com/OluwoleOyetoke/Computer_Vision_Using_TensorFlowLite) | 20 |
| [Semantic Segmentation With Boundary Neural Fields](http://openaccess.thecvf.com/content_cvpr_2016/html/Bertasius_Semantic_Segmentation_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gberta/BNF_globalization) | 19 |
| [Single-Image Crowd Counting via Multi-Column Convolutional Neural Network](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Single-Image_Crowd_Counting_CVPR_2016_paper.html) | CVPR | [code](https://github.com/uestcchicken/crowd-counting-MCNN) | 19 |
| [Accumulated Stability Voting: A Robust Descriptor From Descriptors of Multiple Scales](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Accumulated_Stability_Voting_CVPR_2016_paper.html) | CVPR | [code](https://github.com/shamangary/ASV) | 19 |
| [Structure From Motion With Objects](http://openaccess.thecvf.com/content_cvpr_2016/html/Crocco_Structure_From_Motion_CVPR_2016_paper.html) | CVPR | [code](https://github.com/danylaksono/Android-SfM-client) | 17 |
| [Bottom-Up and Top-Down Reasoning With Hierarchical Rectified Gaussians](http://openaccess.thecvf.com/content_cvpr_2016/html/Hu_Bottom-Up_and_Top-Down_CVPR_2016_paper.html) | CVPR | [code](https://github.com/peiyunh/rg-mpii) | 16 |
| [Semantic Filtering](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Semantic_Filtering_CVPR_2016_paper.html) | CVPR | [code](https://github.com/shenshen-hungry/Semantic-CNN) | 16 |
| [Online Detection and Classification of Dynamic Hand Gestures With Recurrent 3D Convolutional Neural Network](http://openaccess.thecvf.com/content_cvpr_2016/html/Molchanov_Online_Detection_and_CVPR_2016_paper.html) | CVPR | [code](https://github.com/breadbread1984/R3DCNN) | 16 |
| [ReconNet: Non-Iterative Reconstruction of Images From Compressively Sensed Measurements](http://openaccess.thecvf.com/content_cvpr_2016/html/Kulkarni_ReconNet_Non-Iterative_Reconstruction_CVPR_2016_paper.html) | CVPR | [code](https://github.com/KuldeepKulkarni/ReconNet) | 15 |
| [Interactive Segmentation on RGBD Images via Cue Selection](http://openaccess.thecvf.com/content_cvpr_2016/html/Feng_Interactive_Segmentation_on_CVPR_2016_paper.html) | CVPR | [code](https://github.com/ZVsion/rgbd_image_segmentation) | 14 |
| [Object Contour Detection With a Fully Convolutional Encoder-Decoder Network](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Object_Contour_Detection_CVPR_2016_paper.html) | CVPR | [code](https://github.com/Raj-08/tensorflow-object-contour-detection) | 14 |
| [Automatic Content-Aware Color and Tone Stylization](http://openaccess.thecvf.com/content_cvpr_2016/html/Lee_Automatic_Content-Aware_Color_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jinyu121/ACACTS) | 12 |
| [Similarity Learning With Spatial Constraints for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2016/html/Chen_Similarity_Learning_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/dapengchen123/SCSP) | 11 |
| [Personalizing Human Video Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2016/html/Charles_Personalizing_Human_Video_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jjcharles/personalized_pose) | 10 |
| [Visually Indicated Sounds](http://openaccess.thecvf.com/content_cvpr_2016/html/Owens_Visually_Indicated_Sounds_CVPR_2016_paper.html) | CVPR | [code](https://github.com/kanchen-usc/VIG) | 9 |
| [Patch-Based Convolutional Neural Network for Whole Slide Tissue Image Classification](http://openaccess.thecvf.com/content_cvpr_2016/html/Hou_Patch-Based_Convolutional_Neural_CVPR_2016_paper.html) | CVPR | [code](https://github.com/cheersyouran/cancer-detector) | 9 |
| [Region Ranking SVM for Image Classification](http://openaccess.thecvf.com/content_cvpr_2016/html/Wei_Region_Ranking_SVM_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zijunwei/Region-Ranking-SVM) | 8 |
| [Pairwise Matching Through Max-Weight Bipartite Belief Propagation](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Pairwise_Matching_Through_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zzhang1987/HungarianBP) | 8 |
| [Deep Hand: How to Train a CNN on 1 Million Hand Images When Your Data Is Continuous and Weakly Labelled](http://openaccess.thecvf.com/content_cvpr_2016/html/Koller_Deep_Hand_How_CVPR_2016_paper.html) | CVPR | [code](https://github.com/neccam/TF-DeepHand) | 8 |
| [Cross-Stitch Networks for Multi-Task Learning](http://openaccess.thecvf.com/content_cvpr_2016/html/Misra_Cross-Stitch_Networks_for_CVPR_2016_paper.html) | CVPR | [code](https://github.com/helloyide/Cross-stitch-Networks-for-Multi-task-Learning) | 8 |
| [Learning a Discriminative Null Space for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Learning_a_Discriminative_CVPR_2016_paper.html) | CVPR | [code](https://github.com/lzrobots/NullSpace_ReID) | 8 |
| [Efficient Deep Learning for Stereo Matching](http://openaccess.thecvf.com/content_cvpr_2016/html/Luo_Efficient_Deep_Learning_CVPR_2016_paper.html) | CVPR | [code](https://github.com/haojeng-wang/dl_stereo_matching) | 7 |
| [Globally Optimal Manhattan Frame Estimation in Real-Time](http://openaccess.thecvf.com/content_cvpr_2016/html/Joo_Globally_Optimal_Manhattan_CVPR_2016_paper.html) | CVPR | [code](https://github.com/Kyungdon/mf_estimation) | 7 |
| [Where to Look: Focus Regions for Visual Question Answering](http://openaccess.thecvf.com/content_cvpr_2016/html/Shih_Where_to_Look_CVPR_2016_paper.html) | CVPR | [code](https://github.com/kevjshih/wtl_vqa) | 7 |
| [Detecting Migrating Birds at Night](http://openaccess.thecvf.com/content_cvpr_2016/html/Huang_Detecting_Migrating_Birds_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jbhuang0604/BirdDetection) | 7 |
| [Unsupervised Learning From Narrated Instruction Videos](http://openaccess.thecvf.com/content_cvpr_2016/html/Alayrac_Unsupervised_Learning_From_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jalayrac/instructionVideos) | 7 |
| [Efficient and Robust Color Consistency for Community Photo Collections](http://openaccess.thecvf.com/content_cvpr_2016/html/Park_Efficient_and_Robust_CVPR_2016_paper.html) | CVPR | [code](https://github.com/syncle/photo_consistency) | 7 |
| [Recurrent Attentional Networks for Saliency Detection](http://openaccess.thecvf.com/content_cvpr_2016/html/Kuen_Recurrent_Attentional_Networks_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zhangxiaoning666/PAGR) | 7 |
| [3D Shape Attributes](http://openaccess.thecvf.com/content_cvpr_2016/html/Fouhey_3D_Shape_Attributes_CVPR_2016_paper.html) | CVPR | [code](https://github.com/petermalcolm/estimate3DStep) | 6 |
| [Beyond Local Search: Tracking Objects Everywhere With Instance-Specific Proposals](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhu_Beyond_Local_Search_CVPR_2016_paper.html) | CVPR | [code](https://github.com/GaoCode/EBT) | 5 |
| [Functional Faces: Groupwise Dense Correspondence Using Functional Maps](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Functional_Faces_Groupwise_CVPR_2016_paper.html) | CVPR | [code](https://github.com/cazhang/funcFaces) | 5 |
| [Visual Tracking Using Attention-Modulated Disintegration and Integration](http://openaccess.thecvf.com/content_cvpr_2016/html/Choi_Visual_Tracking_Using_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jongwon20000/SCT) | 5 |
| [Improving Human Action Recognition by Non-Action Classification](http://openaccess.thecvf.com/content_cvpr_2016/html/Wang_Improving_Human_Action_CVPR_2016_paper.html) | CVPR | [code](https://github.com/yangwangx/NonActionShot) | 4 |
| [Prior-Less Compressible Structure From Motion](http://openaccess.thecvf.com/content_cvpr_2016/html/Kong_Prior-Less_Compressible_Structure_CVPR_2016_paper.html) | CVPR | [code](https://github.com/kongchen1992/compressible-sfm) | 4 |
| [DenseCap: Fully Convolutional Localization Networks for Dense Captioning](http://openaccess.thecvf.com/content_cvpr_2016/html/Johnson_DenseCap_Fully_Convolutional_CVPR_2016_paper.html) | CVPR | [code](https://github.com/rampage644/densecap-tensorflow) | 4 |
| [Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization](http://openaccess.thecvf.com/content_cvpr_2016/html/Lu_Tensor_Robust_Principal_CVPR_2016_paper.html) | CVPR | [code](https://github.com/canyilu/Tensor-Robust-Principal-Component-Analysis-TRPCA) | 4 |
| [Force From Motion: Decoding Physical Sensation in a First Person Video](http://openaccess.thecvf.com/content_cvpr_2016/html/Park_Force_From_Motion_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jyhjinghwang/Force_from_Motion_Gravity_Models) | 3 |
| [Context-Aware Gaussian Fields for Non-Rigid Point Set Registration](http://openaccess.thecvf.com/content_cvpr_2016/html/Wang_Context-Aware_Gaussian_Fields_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gwang-cv/CA-LapGF-Demo) | 3 |
| [Using Spatial Order to Boost the Elimination of Incorrect Feature Matches](http://openaccess.thecvf.com/content_cvpr_2016/html/Talker_Using_Spatial_Order_CVPR_2016_paper.html) | CVPR | [code](https://github.com/liortalker/SpatialOrder) | 3 |
| [Fast Algorithms for Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Lavin_Fast_Algorithms_for_CVPR_2016_paper.html) | CVPR | [code](https://github.com/istoony/winograd-convolutional-nn) | 3 |

<div align="right">
<b><a href="#----">↥ back to top</a></b>
</div>

## 2015
| Title | Conf | Code | Stars |
|:--------|:--------:|:--------:|:--------:|
| [Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks](https://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf) | NIPS | [code](https://github.com/facebookresearch/Detectron) | 18356 |
| [Fast R-CNN](http://openaccess.thecvf.com/content_iccv_2015/html/Girshick_Fast_R-CNN_ICCV_2015_paper.html) | ICCV | [code](https://github.com/facebookresearch/Detectron) | 18356 |
| [Conditional Random Fields as Recurrent Neural Networks](http://openaccess.thecvf.com/content_iccv_2015/html/Zheng_Conditional_Random_Fields_ICCV_2015_paper.html) | ICCV | [code](https://github.com/torrvision/crfasrnn) | 1189 |
| [Fully Convolutional Networks for Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2015/html/Long_Fully_Convolutional_Networks_2015_CVPR_paper.html) | CVPR | [code](https://github.com/shekkizh/FCN.tensorflow) | 911 |
| [Learning to Track: Online Multi-Object Tracking by Decision Making](http://openaccess.thecvf.com/content_iccv_2015/html/Xiang_Learning_to_Track_ICCV_2015_paper.html) | ICCV | [code](https://github.com/yuxng/MDP_Tracking) | 308 |
| [Learning to Compare Image Patches via Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2015/html/Zagoruyko_Learning_to_Compare_2015_CVPR_paper.html) | CVPR | [code](https://github.com/szagoruyko/cvpr15deepcompare) | 300 |
| [Learning Deconvolution Network for Semantic Segmentation](http://openaccess.thecvf.com/content_iccv_2015/html/Noh_Learning_Deconvolution_Network_ICCV_2015_paper.html) | ICCV | [code](https://github.com/HyeonwooNoh/DeconvNet) | 296 |
| [Single Image Super-Resolution From Transformed Self-Exemplars](http://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html) | CVPR | [code](https://github.com/jbhuang0604/SelfExSR) | 289 |
| [Sequence to Sequence - Video to Text](http://openaccess.thecvf.com/content_iccv_2015/html/Venugopalan_Sequence_to_Sequence_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jazzsaxmafia/video_to_sequence) | 239 |
| [Deep Colorization](http://openaccess.thecvf.com/content_iccv_2015/html/Cheng_Deep_Colorization_ICCV_2015_paper.html) | ICCV | [code](https://github.com/richzhang/colorization-pytorch) | 198 |
| [Deep Neural Decision Forests](http://openaccess.thecvf.com/content_iccv_2015/html/Kontschieder_Deep_Neural_Decision_ICCV_2015_paper.html) | ICCV | [code](https://github.com/chrischoy/fully-differentiable-deep-ndf-tf) | 192 |
| [Hierarchical Convolutional Features for Visual Tracking](http://openaccess.thecvf.com/content_iccv_2015/html/Ma_Hierarchical_Convolutional_Features_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jbhuang0604/CF2) | 179 |
| [Render for CNN: Viewpoint Estimation in Images Using CNNs Trained With Rendered 3D Model Views](http://openaccess.thecvf.com/content_iccv_2015/html/Su_Render_for_CNN_ICCV_2015_paper.html) | ICCV | [code](https://github.com/ShapeNet/RenderForCNN) | 176 |
| [Realtime Edge-Based Visual Odometry for a Monocular Camera](http://openaccess.thecvf.com/content_iccv_2015/html/Tarrio_Realtime_Edge-Based_Visual_ICCV_2015_paper.html) | ICCV | [code](https://github.com/JuanTarrio/rebvo) | 175 |
| [Understanding Deep Image Representations by Inverting Them](http://openaccess.thecvf.com/content_cvpr_2015/html/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.html) | CVPR | [code](https://github.com/aravindhm/deep-goggle) | 154 |
| [Context-Aware CNNs for Person Head Detection](http://openaccess.thecvf.com/content_iccv_2015/html/Vu_Context-Aware_CNNs_for_ICCV_2015_paper.html) | ICCV | [code](https://github.com/aosokin/cnn_head_detection) | 153 |
| [Show and Tell: A Neural Image Caption Generator](http://openaccess.thecvf.com/content_cvpr_2015/html/Vinyals_Show_and_Tell_2015_CVPR_paper.html) | CVPR | [code](https://github.com/KranthiGV/Pretrained-Show-and-Tell-model) | 141 |
| [Face Alignment by Coarse-to-Fine Shape Searching](http://openaccess.thecvf.com/content_cvpr_2015/html/Zhu_Face_Alignment_by_2015_CVPR_paper.html) | CVPR | [code](https://github.com/zhusz/CVPR15-CFSS) | 140 |
| [An Improved Deep Learning Architecture for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2015/html/Ahmed_An_Improved_Deep_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Ning-Ding/Implementation-CVPR2015-CNN-for-ReID) | 127 |
| [FaceNet: A Unified Embedding for Face Recognition and Clustering](http://openaccess.thecvf.com/content_cvpr_2015/html/Schroff_FaceNet_A_Unified_2015_CVPR_paper.html) | CVPR | [code](https://github.com/liorshk/facenet_pytorch) | 124 |
| [Depth-Based Hand Pose Estimation: Data, Methods, and Challenges](http://openaccess.thecvf.com/content_iccv_2015/html/Supancic_Depth-Based_Hand_Pose_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jsupancic/deep_hand_pose) | 121 |
| [DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time](http://openaccess.thecvf.com/content_cvpr_2015/html/Newcombe_DynamicFusion_Reconstruction_and_2015_CVPR_paper.html) | CVPR | [code](https://github.com/mihaibujanca/dynamicfusion) | 118 |
| [Massively Parallel Multiview Stereopsis by Surface Normal Diffusion](http://openaccess.thecvf.com/content_iccv_2015/html/Galliani_Massively_Parallel_Multiview_ICCV_2015_paper.html) | ICCV | [code](https://github.com/kysucix/gipuma) | 105 |
| [Learning to Propose Objects](http://openaccess.thecvf.com/content_cvpr_2015/html/Krahenbuhl_Learning_to_Propose_2015_CVPR_paper.html) | CVPR | [code](https://github.com/philkr/lpo) | 91 |
| [Learning Spatially Regularized Correlation Filters for Visual Tracking](http://openaccess.thecvf.com/content_iccv_2015/html/Danelljan_Learning_Spatially_Regularized_ICCV_2015_paper.html) | ICCV | [code](https://github.com/lifeng9472/STRCF) | 86 |
| [A Convolutional Neural Network Cascade for Face Detection](http://openaccess.thecvf.com/content_cvpr_2015/html/Li_A_Convolutional_Neural_2015_CVPR_paper.html) | CVPR | [code](https://github.com/mks0601/A-Convolutional-Neural-Network-Cascade-for-Face-Detection) | 85 |
| [Discriminative Learning of Deep Convolutional Feature Point Descriptors](http://openaccess.thecvf.com/content_iccv_2015/html/Simo-Serra_Discriminative_Learning_of_ICCV_2015_paper.html) | ICCV | [code](https://github.com/etrulls/deepdesc-release) | 77 |
| [Unsupervised Visual Representation Learning by Context Prediction](http://openaccess.thecvf.com/content_iccv_2015/html/Doersch_Unsupervised_Visual_Representation_ICCV_2015_paper.html) | ICCV | [code](https://github.com/cdoersch/deepcontext) | 73 |
| [Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images](http://openaccess.thecvf.com/content_cvpr_2015/html/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.html) | CVPR | [code](https://github.com/abhijitbendale/OSDN) | 71 |
| [Deep Filter Banks for Texture Recognition and Segmentation](http://openaccess.thecvf.com/content_cvpr_2015/html/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.html) | CVPR | [code](https://github.com/mcimpoi/deep-fbanks) | 68 |
| [Saliency Detection by Multi-Context Deep Learning](http://openaccess.thecvf.com/content_cvpr_2015/html/Zhao_Saliency_Detection_by_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Robert0812/deepsaldet) | 66 |
| [Multi-Objective Convolutional Learning for Face Labeling](http://openaccess.thecvf.com/content_cvpr_2015/html/Liu_Multi-Objective_Convolutional_Learning_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Liusifei/Face_Parsing_2016) | 55 |
| [Finding Action Tubes](http://openaccess.thecvf.com/content_cvpr_2015/html/Gkioxari_Finding_Action_Tubes_2015_CVPR_paper.html) | CVPR | [code](https://github.com/gkioxari/ActionTubes) | 51 |
| [Category-Specific Object Reconstruction From a Single Image](http://openaccess.thecvf.com/content_cvpr_2015/html/Kar_Category-Specific_Object_Reconstruction_2015_CVPR_paper.html) | CVPR | [code](https://github.com/akar43/CategoryShapes) | 48 |
| [Convolutional Color Constancy](http://openaccess.thecvf.com/content_iccv_2015/html/Barron_Convolutional_Color_Constancy_ICCV_2015_paper.html) | ICCV | [code](https://github.com/yuanming-hu/fc4) | 47 |
| [Face Flow](http://openaccess.thecvf.com/content_iccv_2015/html/Snape_Face_Flow_ICCV_2015_paper.html) | ICCV | [code](https://github.com/shashanktyagi/HyperFace-TensorFlow-implementation) | 45 |
| [P-CNN: Pose-Based CNN Features for Action Recognition](http://openaccess.thecvf.com/content_iccv_2015/html/Cheron_P-CNN_Pose-Based_CNN_ICCV_2015_paper.html) | ICCV | [code](https://github.com/gcheron/P-CNN) | 45 |
| [Learning From Massive Noisy Labeled Data for Image Classification](http://openaccess.thecvf.com/content_cvpr_2015/html/Xiao_Learning_From_Massive_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Cysu/noisy_label) | 45 |
| [Image Specificity](http://openaccess.thecvf.com/content_cvpr_2015/html/Jas_Image_Specificity_2015_CVPR_paper.html) | CVPR | [code](https://github.com/burliEnterprises/tensorflow-image-classifier) | 40 |
| [Predicting Depth, Surface Normals and Semantic Labels With a Common Multi-Scale Convolutional Architecture](http://openaccess.thecvf.com/content_iccv_2015/html/Eigen_Predicting_Depth_Surface_ICCV_2015_paper.html) | ICCV | [code](https://github.com/Rostifar/NYUDepthNet) | 35 |
| [Neural Activation Constellations: Unsupervised Part Model Discovery With Convolutional Networks](http://openaccess.thecvf.com/content_iccv_2015/html/Simon_Neural_Activation_Constellations_ICCV_2015_paper.html) | ICCV | [code](https://github.com/cvjena/part_constellation_models) | 35 |
| [VQA: Visual Question Answering](http://openaccess.thecvf.com/content_iccv_2015/html/Antol_VQA_Visual_Question_ICCV_2015_paper.html) | ICCV | [code](https://github.com/imatge-upc/vqa-2016-cvprw) | 35 |
| [Mid-Level Deep Pattern Mining](http://openaccess.thecvf.com/content_cvpr_2015/html/Li_Mid-Level_Deep_Pattern_2015_CVPR_paper.html) | CVPR | [code](https://github.com/yaoliUoA/MDPM) | 34 |
| [PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization](http://openaccess.thecvf.com/content_iccv_2015/html/Kendall_PoseNet_A_Convolutional_ICCV_2015_paper.html) | ICCV | [code](https://github.com/futurely/deep-camera-relocalization) | 34 |
| [Parsimonious Labeling](http://openaccess.thecvf.com/content_iccv_2015/html/Dokania_Parsimonious_Labeling_ICCV_2015_paper.html) | ICCV | [code](https://github.com/aimerykong/Pixel-Attentional-Gating) | 33 |
| [Car That Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models](http://openaccess.thecvf.com/content_iccv_2015/html/Jain_Car_That_Knows_ICCV_2015_paper.html) | ICCV | [code](https://github.com/asheshjain399/ICCV2015_Brain4Cars) | 33 |
| [Recurrent Convolutional Neural Network for Object Recognition](http://openaccess.thecvf.com/content_cvpr_2015/html/Liang_Recurrent_Convolutional_Neural_2015_CVPR_paper.html) | CVPR | [code](https://github.com/JimLee4530/RCNN) | 32 |
| [TILDE: A Temporally Invariant Learned 
DEtector](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FVerdie_TILDE_A_Temporally_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkmyid\u002FTILDE) | 30 | \n| [In Defense of Color-Based Model-Free Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FPossegger_In_Defense_of_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffoolwood\u002FDAT) | 30 | \n| [Fast Bilateral-Space Stereo for Synthetic Defocus](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FBarron_Fast_Bilateral-Space_Stereo_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftvandenzegel\u002Ffast_bilateral_space_stereo) | 29 | \n| [Phase-Based Frame Interpolation for Video](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FMeyer_Phase-Based_Frame_Interpolation_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fowang\u002FPhaseBasedInterpolation) | 28 | \n| [Understanding Tools: Task-Oriented Object Modeling, Learning and Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FZhu_Understanding_Tools_Task-Oriented_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxiaozhuchacha\u002FKinect2Toolbox) | 27 | \n| [Deeply Learned Attributes for Crowded Scene Understanding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FShao_Deeply_Learned_Attributes_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Famandajshao\u002Fwww_deep_crowd) | 27 | \n| [Unconstrained 3D Face Reconstruction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FRoth_Unconstrained_3D_Face_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FNJUPole\u002FCVPR2015-Unconstrained-3D-Face-Reconstruction) | 26 | \n| [Viewpoints and 
Keypoints](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FTulsiani_Viewpoints_and_Keypoints_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fshubhtuls\u002FViewpointsAndKeypoints) | 25 | \n| [Holistically-Nested Edge Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FXie_Holistically-Nested_Edge_Detection_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fs9xie\u002Fhed_release-deprecated) | 25 | \n| [Going Deeper With Convolutions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FSzegedy_Going_Deeper_With_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnutszebra\u002Fgooglenet) | 25 | \n| [Reconstructing the World* in Six Days *(As Captured by the Yahoo 100 Million Image Dataset)](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FHeinly_Reconstructing_the_World_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjheinly\u002Fstreaming_connected_component_discovery) | 25 | \n| [Data-Driven 3D Voxel Patterns for Object Category Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FXiang_Data-Driven_3D_Voxel_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyuxng\u002F3DVP) | 24 | \n| [L0TV: A New Method for Image Restoration in the Presence of Impulse Noise](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FYuan_L0TV_A_New_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpeisuke\u002FL0TV) | 22 | \n| [Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FZhang_Beyond_Frontal_Faces_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsciencefans\u002FBeyond-Frontal-Faces) | 21 | \n| [Understanding Deep Features With Computer-Generated 
Imagery](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FAubry_Understanding_Deep_Features_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmathieuaubry\u002Ffeatures_analysis) | 19 | \n| [HICO: A Benchmark for Recognizing Human-Object Interactions in Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FChao_HICO_A_Benchmark_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fywchao\u002Fhico_benchmark) | 18 | \n| [Structured Feature Selection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FGao_Structured_Feature_Selection_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fcsliangdu\u002FFSASL) | 17 | \n| [Learning Large-Scale Automatic Image Colorization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FDeshpande_Learning_Large-Scale_Automatic_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Faditya12agd5\u002Ficcv15_lscolorization) | 17 | \n| [Semantic Component Analysis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FMurdock_Semantic_Component_Analysis_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Faubry74\u002Fvisual-word2vec) | 17 | \n| [Simultaneous Feature Learning and Hash Coding With Deep Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FLai_Simultaneous_Feature_Learning_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHYPJUDY\u002Fcaffe-dnnh) | 16 | \n| [3D Object Reconstruction From Hand-Object Interactions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FTzionas_3D_Object_Reconstruction_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fdimtziwnas\u002FInHandScanningICCV15_Reconstruction) | 15 | \n| [Learning Temporal Embeddings for Complex Video 
Analysis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FRamanathan_Learning_Temporal_Embeddings_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Feevignesh\u002Fvideovector) | 14 | \n| [Learning to See by Moving](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FAgrawal_Learning_to_See_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fpulkitag\u002Flearning-to-see-by-moving) | 14 | \n| [Reflection Removal Using Ghosting Cues](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FShih_Reflection_Removal_Using_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fthongnguyendev\u002Fsingle_image) | 14 | \n| [Where to Buy It: Matching Street Clothing Photos in Online Shops](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FKiapour_Where_to_Buy_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjfuentescpp\u002Fwhere_to_buy_it) | 14 | \n| [Oriented Edge Forests for Boundary Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FHallman_Oriented_Edge_Forests_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsamhallman\u002Foef) | 13 | \n| [A Large-Scale Car Dataset for Fine-Grained Categorization and Verification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FYang_A_Large-Scale_Car_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbogger\u002Fcaffe-multigpu) | 11 | \n| [Appearance-Based Gaze Estimation in the Wild](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FZhang_Appearance-Based_Gaze_Estimation_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftrakaros\u002FMPIIGaze) | 10 | \n| [Learning a Descriptor-Specific 3D Keypoint 
Detector](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FSalti_Learning_a_Descriptor-Specific_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FCVLAB-Unibo\u002FKeypoint-Learning) | 10 | \n| [Robust Image Filtering Using Joint Static and Dynamic Guidance](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FHam_Robust_Image_Filtering_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbsham\u002FSDFilter) | 10 | \n| [Partial Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FZheng_Partial_Person_Re-Identification_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Flingxiao-he\u002FDeep-Spatial-Feature-Reconstruction-for-Partial-Person-Re-identification) | 9 | \n| [High Quality Structure From Small Motion for Rolling Shutter Cameras](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FIm_High_Quality_Structure_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fsunghoonim\u002FSfSM) | 9 | \n| [Boosting Object Proposals: From Pascal to COCO](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FPont-Tuset_Boosting_Object_Proposals_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjponttuset\u002FBOP) | 8 | \n| [Convolutional Channel Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FYang_Convolutional_Channel_Features_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fbyangderek\u002FCCF) | 8 | \n| [Live Repetition Counting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FLevy_Live_Repetition_Counting_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ftomrunia\u002FDeepRepICCV2015) | 8 | \n| [Unsupervised Learning of Visual Representations Using 
Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FWang_Unsupervised_Learning_of_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fcoreylynch\u002Funsupervised-triplet-embedding) | 8 | \n| [Supervised Discrete Hashing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FShen_Supervised_Discrete_Hashing_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgoukoutaki\u002FFSDH) | 7 | \n| [Multi-View Convolutional Neural Networks for 3D Shape Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FSu_Multi-View_Convolutional_Neural_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fshawnxu1318\u002FMVCNN-Multi-View-Convolutional-Neural-Networks) | 7 | \n| [Simpler Non-Parametric Methods Provide as Good or Better Results to Multiple-Instance Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FVenkatesan_Simpler_Non-Parametric_Methods_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fragavvenkatesan\u002Fnp-mil) | 7 | \n| [Finding Distractors In Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FFried_Finding_Distractors_In_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fohadf\u002Fdistractors) | 7 | \n| [Piecewise Flat Embedding for Image Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FYu_Piecewise_Flat_Embedding_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fchaoweifang\u002FPFE) | 7 | \n| [Long-Term Correlation Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FMa_Long-Term_Correlation_Tracking_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmalreddysid\u002Flong-term-correlation-tracking) | 6 | \n| [Towards Open World 
Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FBendale_Towards_Open_World_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fabhijitbendale\u002FOWR) | 6 | \n| [Pooled Motion Features for First-Person Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FRyoo_Pooled_Motion_Features_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FUSCDataScience\u002Fhadoop-pot) | 6 | \n| [Simultaneous Deep Transfer Across Domains and Tasks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FTzeng_Simultaneous_Deep_Transfer_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmahfujau\u002Fdomain_adaptation_iccv15) | 6 | \n| [What Makes an Object Memorable?](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FDubey_What_Makes_an_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FqixuanHou\u002FMapping-My-Break) | 5 | \n| [Mining Semantic Affordances of Visual Object Categories](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FChao_Mining_Semantic_Affordances_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fywchao\u002Fsemantic_affordance) | 5 | \n| [Dense Semantic Correspondence Where Every Pixel is a Classifier](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FBristow_Dense_Semantic_Correspondence_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fhbristow\u002Fepic) | 5 | \n| [Segment Graph Based Image Filtering: Fast Structure-Preserving Smoothing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FZhang_Segment_Graph_Based_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ffeihuzhang\u002FSGF) | 5 | \n| [Fast Randomized Singular Value Thresholding for Nuclear Norm 
Minimization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FOh_Fast_Randomized_Singular_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHlG4399\u002FFRSVT) | 5 | \n| [Unsupervised Generation of a Viewpoint Annotated Car Dataset From Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FSedaghat_Unsupervised_Generation_of_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Funsup-car-dataset) | 5 | \n| [Multi-Label Cross-Modal Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FRanjan_Multi-Label_Cross-Modal_Retrieval_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FViresh-R\u002Fml-CCA) | 4 | \n| [Superdifferential Cuts for Binary Energies](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FTaniai_Superdifferential_Cuts_for_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ft-taniai\u002FSDC_CVPR2015) | 4 | \n| [Pose Induction for Novel Object Categories](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FTulsiani_Pose_Induction_for_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fshubhtuls\u002FposeInduction) | 4 | \n| [Efficient Minimal-Surface Regularization of Perspective Depth Maps in Variational Stereo](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FGraber_Efficient_Minimal-Surface_Regularization_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FVLOGroup\u002Fsurface-area-regularization) | 4 | \n| [Low-Rank Matrix Factorization Under General Mixture Noise Distributions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FCao_Low-Rank_Matrix_Factorization_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fxiangyongcao\u002FPMoEP) | 4 | \n| [Robust Saliency Detection via Regularized Random 
Walks Ranking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FLi_Robust_Saliency_Detection_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyuanyc06\u002Frr) | 3 | \n| [Simultaneous Video Defogging and Stereo Reconstruction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FLi_Simultaneous_Video_Defogging_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLashuk1729\u002FDIP-Project-Video-Dehazing) | 3 | \n| [Hyperspectral Super-Resolution by Coupled Spectral Unmixing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FLanaras_Hyperspectral_Super-Resolution_by_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Flanha\u002FSupResPALM) | 3 | \n| [Oriented Object Proposals](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FHe_Oriented_Object_Proposals_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ffrutuozo29\u002FWebServiceRESTFul) | 3 | \n| [kNN Hashing With Factorized Neighborhood Representation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FDing_kNN_Hashing_With_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fdooook\u002FkNN-hashing) | 3 | \n| [Minimum Barrier Salient Object Detection at 80 FPS](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FZhang_Minimum_Barrier_Salient_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FcoderSkyChen\u002FMBS_Cplus_c-) | 3 | \n\n\u003Cdiv align=\"right\">\n\u003Cb>\u003Ca href=\"#----\">↥ back to top\u003C\u002Fa>\u003C\u002Fb>\n\u003C\u002Fdiv>\n\n## 2014\n| Title | Conf | Code | Stars |\n|:--------|:--------:|:--------:|:--------:|\n| [Rich Feature Hierarchies for Accurate Object Detection and Semantic 
Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FGirshick_Rich_Feature_Hierarchies_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frbgirshick\u002Frcnn) | 1681 | \n| [Locally Optimized Product Quantization for Approximate Nearest Neighbor Search](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FKalantidis_Locally_Optimized_Product_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyahoo\u002Flopq) | 437 | \n| [Clothing Co-Parsing by Joint Image Segmentation and Labeling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FYang_Clothing_Co-Parsing_by_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbearpaw\u002Fclothing-co-parsing) | 218 | \n| [Multiscale Combinatorial Grouping](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FArbelaez_Multiscale_Combinatorial_Grouping_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjponttuset\u002Fmcg) | 185 | \n| [Face Alignment at 3000 FPS via Regressing Local Binary Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FRen_Face_Alignment_at_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fluoyetx\u002Fface-alignment-at-3000fps) | 164 | \n| [Cross-Scale Cost Aggregation for Stereo Matching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZhang_Cross-Scale_Cost_Aggregation_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frookiepig\u002FCrossScaleStereo) | 106 | \n| [Transfer Joint Matching for Unsupervised Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FLong_Transfer_Joint_Matching_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FUSTCPCS\u002FCVPR2018_attention) | 67 | \n| [Deep Learning Face Representation from Predicting 10,000 
Classes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FSun_Deep_Learning_Face_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjoyhuang9473\u002Fdeepid-implementation) | 62 | \n| [BING: Binarized Normed Gradients for Objectness Estimation at 300fps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FCheng_BING_Binarized_Normed_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falessandroferrari\u002FBING-Objectness) | 44 | \n| [One Millisecond Face Alignment with an Ensemble of Regression Trees](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FKazemi_One_Millisecond_Face_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FjjrCN\u002FERT-GBDT_Face_Alignment) | 43 | \n| [3D Reconstruction from Accidental Motion](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FYu_3D_Reconstruction_from_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffyu\u002Ftiny) | 42 | \n| [Predicting Matchability](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FHartmann_Predicting_Matchability_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjacekm-git\u002FBetBoy) | 38 | \n| [Dense Semantic Image Segmentation with Objects and Attributes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZheng_Dense_Semantic_Image_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbittnt\u002FImageSpirit) | 28 | \n| [Scene-Independent Group Profiling in Crowd](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FShao_Scene-Independent_Group_Profiling_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Famandajshao\u002Fcrowd_group_profile) | 28 | \n| [Shrinkage Fields for Effective Image 
Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FSchmidt_Shrinkage_Fields_for_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fuschmidt83\u002Fshrinkage-fields) | 25 | \n| [Adaptive Color Attributes for Real-Time Visual Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FDanelljan_Adaptive_Color_Attributes_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmostafaizz\u002FColorTracker) | 25 | \n| [Minimal Scene Descriptions from Structure from Motion Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FCao_Minimal_Scene_Descriptions_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcaosong\u002Fminimal_scene) | 22 | \n| [Parallax-tolerant Image Stitching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZhang_Parallax-tolerant_Image_Stitching_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgain2217\u002FRobust_Elastic_Warping) | 20 | \n| [Learning Mid-level Filters for Person Re-identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZhao_Learning_Mid-level_Filters_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FRobert0812\u002Fmidfilter_reid) | 20 | \n| [Fast Edge-Preserving PatchMatch for Large Displacement Optical Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FBao_Fast_Edge-Preserving_PatchMatch_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flinchaobao\u002FEPPM) | 18 | \n| [Product Sparse Coding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FGe_Product_Sparse_Coding_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fksopyla\u002FCudaDotProd) | 16 | \n| [Convolutional Neural Networks for No-Reference Image Quality 
Assessment](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FKang_Convolutional_Neural_Networks_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flidq92\u002FCNNIQA) | 16 | \n| [Seeing 3D Chairs: Exemplar Part-based 2D-3D Alignment using a Large Dataset of CAD Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FAubry_Seeing_3D_Chairs_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmathieuaubry\u002Fseeing3Dchairs) | 15 | \n| [StoryGraphs: Visualizing Character Interactions as a Timeline](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FTapaswi_StoryGraphs_Visualizing_Character_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmakarandtapaswi\u002FStoryGraphs_CVPR2014) | 14 | \n| [Nonparametric Part Transfer for Fine-grained Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FGoring_Nonparametric_Part_Transfer_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcvjena\u002Ffinegrained-cvpr2014) | 13 | \n| [Scalable Multitask Representation Learning for Scene Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FLapin_Scalable_Multitask_Representation_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmlapin\u002Fcvpr14mtl) | 11 | \n| [Investigating Haze-relevant Features in A Learning Framework for Image Dehazing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FTang_Investigating_Haze-relevant_Features_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzlinker\u002Fhaze_2014) | 7 | \n| [Reconstructing PASCAL VOC](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FVicente_Reconstructing_PASCAL_VOC_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyihui-he\u002Freconstructing-pascal-voc) | 6 | \n| 
| [Collaborative Hashing](http://openaccess.thecvf.com/content_cvpr_2014/html/Liu_Collaborative_Hashing_2014_CVPR_paper.html) | CVPR | [code](https://github.com/27359794/lsh-collab-filtering) | 6 |
| [Tell Me What You See and I will Show You Where It Is](http://openaccess.thecvf.com/content_cvpr_2014/html/Xu_Tell_Me_What_2014_CVPR_paper.html) | CVPR | [code](https://github.com/MarkipTheMudkip/in-class-project-2) | 6 |
| [Salient Region Detection via High-Dimensional Color Transform](http://openaccess.thecvf.com/content_cvpr_2014/html/Kim_Salient_Region_Detection_2014_CVPR_paper.html) | CVPR | [code](https://github.com/jhkim89/Saliency-HDCT) | 6 |

<div align="right">
<b><a href="#----">↥ back to top</a></b>
</div>

## 2013
| Title | Conf | Code | Stars |
|:--------|:--------:|:--------:|:--------:|
| [A generic decentralized trust management framework](http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2012/MSC/MSC-2012-22.pdf) | SPE | [code](https://github.com/amitport/graphpack) | 6 |
[code](https:\u002F\u002Fgithub.com\u002Fwywu\u002FLAB) | 575 | \n| [Pelee: A Real-Time Object Detection System on Mobile Devices](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FRobert-JunWang\u002FPelee) | 548 | \n| [Distractor-aware Siamese Networks for Visual Object Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffoolwood\u002FDaSiamRPN) | 545 | \n| [Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fathalye18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fanishathalye\u002Fobfuscated-gradients) | 535 | \n| [Which Training Methods for GANs do actually Converge?](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fmescheder18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FLMescheder\u002FGAN_stability) | 520 | \n| [End-to-End Recovery of Human Shape and Pose](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKanazawa_End-to-End_Recovery_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fakanazawa\u002Fhmr) | 502 | \n| [Taskonomy: Disentangling Task Transfer Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZamir_Taskonomy_Disentangling_Task_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FStanfordVL\u002Ftaskonomy) | 502 | \n| [Cascaded Pyramid Network for Multi-Person Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Cascaded_Pyramid_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchenyilun95\u002Ftf-cpn) | 497 | \n| [Neural 3D Mesh Renderer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKato_Neural_3D_Mesh_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fhiroharu-kato\u002Fneural_renderer) | 489 | \n| [Zero-Shot Recognition via Semantic Embeddings and Knowledge Graphs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Zero-Shot_Recognition_via_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJudyYe\u002Fzero-shot-gcn) | 489 | \n| [In-Place Activated BatchNorm for Memory-Optimized Training of DNNs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBulo_In-Place_Activated_BatchNorm_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmapillary\u002Finplace_abn) | 485 | \n| [The Unreasonable Effectiveness of Deep Features as a Perceptual Metric](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_The_Unreasonable_Effectiveness_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frichzhang\u002FPerceptualSimilarity) | 447 | \n| [Frustum PointNets for 3D Object Detection From RGB-D Data](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQi_Frustum_PointNets_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcharlesq34\u002Ffrustum-pointnets) | 434 | \n| [The Lovász-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBerman_The_LovaSz-Softmax_Loss_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbermanmaxim\u002FLovaszSoftmax) | 416 | \n| [ICNet for Real-Time Semantic Segmentation on High-Resolution Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHengshuang_Zhao_ICNet_for_Real-Time_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fhszhao\u002FICNet) | 415 | \n| [PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost 
Volume](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSun_PWC-Net_CNNs_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FPWC-Net) | 398 | \n| [Efficient Interactive Annotation of Segmentation Datasets With Polygon-RNN++](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAcuna_Efficient_Interactive_Annotation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffidler-lab\u002Fpolyrnn-pp-pytorch) | 397 | \n| [Gibson Env: Real-World Perception for Embodied Agents](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FXia_Gibson_Env_Real-World_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FStanfordVL\u002FGibsonEnv) | 385 | \n| [Acquisition of Localization Confidence for Accurate Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FBorui_Jiang_Acquisition_of_Localization_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fvacancy\u002FPreciseRoIPooling) | 384 | \n| [Noise2Noise: Learning Image Restoration without Clean Data](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flehtinen18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fyu4u\u002Fnoise2noise) | 370 | \n| [GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQi_GeoNet_Geometric_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyzcjtr\u002FGeoNet) | 359 | \n| [GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYin_GeoNet_Unsupervised_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyzcjtr\u002FGeoNet) | 359 | \n| [A Style-Aware Content Loss for Real-time HD Style 
Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FArtsiom_Sanakoyeu_A_Style-aware_Content_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FCompVis\u002Fadaptive-style-transfer) | 349 | \n| [Soccer on Your Tabletop](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FRematas_Soccer_on_Your_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkrematas\u002Fsoccerontable) | 338 | \n| [Pyramid Stereo Matching Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChang_Pyramid_Stereo_Matching_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJiaRenChang\u002FPSMNet) | 335 | \n| [Neural Baby Talk](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLu_Neural_Baby_Talk_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiasenlu\u002FNeuralBabyTalk) | 332 | \n| [License Plate Detection and Recognition in Unconstrained Scenarios](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSergio_Silva_License_Plate_Detection_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsergiomsilva\u002Falpr-unconstrained) | 326 | \n| [Supervision-by-Registration: An Unsupervised Approach to Improve the Precision of Facial Landmark Detectors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FDong_Supervision-by-Registration_An_Unsupervised_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsupervision-by-registration) | 326 | \n| [Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FNanyang_Wang_Pixel2Mesh_Generating_3D_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fnywang16\u002FPixel2Mesh) | 323 | \n| [Transparency by Design: Closing the Gap Between Performance 
and Interpretability in Visual Reasoning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMascharka_Transparency_by_Design_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdavidmascharka\u002Ftbd-nets) | 317 | \n| [Fast End-to-End Trainable Guided Filter](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_Fast_End-to-End_Trainable_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwuhuikai\u002FDeepGuidedFilter) | 312 | \n| [Deep Clustering for Unsupervised Learning of Visual Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FMathilde_Caron_Deep_Clustering_for_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeepcluster) | 302 | \n| [Deep Photo Enhancer: Unpaired Learning for Image Enhancement From Photographs With GANs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Deep_Photo_Enhancer_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnothinglo\u002FDeep-Photo-Enhancer) | 294 | \n| [Neural Relational Inference for Interacting Systems](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fkipf18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fethanfetaya\u002FNRI) | 289 | \n| [Adversarially Regularized Autoencoders](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fzhao18b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjakezhaojb\u002FARAE) | 282 | \n| [Learning to Adapt Structured Output Space for Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTsai_Learning_to_Adapt_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwasidennis\u002FAdaptSegNet) | 280 | \n| [Convolutional Neural Networks With Alternately Updated 
Clique](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYang_Convolutional_Neural_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fiboing\u002FCliqueNet) | 272 | \n| [Learning to Segment Every Thing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHu_Learning_to_Segment_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fseg_every_thing) | 269 | \n| [Supervising Unsupervised Learning](http:\u002F\u002Farxiv.org\u002Fabs\u002F1709.05262v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fquinnliu\u002FmachineLearning) | 262 | \n| [LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHui_LiteFlowNet_A_Lightweight_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftwhui\u002FLiteFlowNet) | 261 | \n| [Bilinear Attention Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.07932v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjnhwkim\u002Fban-vqa) | 258 | \n| [ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSachin_Mehta_ESPNet_Efficient_Spatial_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsacmehta\u002FESPNet) | 254 | \n| [An intriguing failing of convolutional neural networks and the CoordConv solution](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03247) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmkocabas\u002FCoordConv-pytorch) | 249 | \n| [End-to-End Learning of Motion Representation for Video Understanding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFan_End-to-End_Learning_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLijieFan\u002Ftvnet) | 238 | \n| [Image Super-Resolution Using Very Deep 
Residual Channel Attention Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYulun_Zhang_Image_Super-Resolution_Using_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fyulunzhang\u002FRCAN) | 234 | \n| [Iterative Visual Reasoning Beyond Convolutions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Iterative_Visual_Reasoning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fendernewton\u002Fiter-reason) | 228 | \n| [Semi-Parametric Image Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQi_Semi-Parametric_Image_Synthesis_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxjqicuhk\u002FSIMS) | 226 | \n| [Compressed Video Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_Compressed_Video_Action_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchaoyuaw\u002Fpytorch-coviar) | 225 | \n| [Style Aggregated Network for Facial Landmark Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FDong_Style_Aggregated_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FD-X-Y\u002FSAN) | 223 | \n| [Pose-Robust Face Recognition via Deep Residual Equivariant Mapping](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FCao_Pose-Robust_Face_Recognition_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpenincillin\u002FDREAM) | 220 | \n| [Multi-Content GAN for Few-Shot Font Style Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAzadi_Multi-Content_GAN_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fazadis\u002FMC-GAN) | 218 | \n| [GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fyou18a.html) | 
ICML | [code](https:\u002F\u002Fgithub.com\u002FJiaxuanYou\u002Fgraph-generation) | 214 | \n| [Referring Relationships](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKrishna_Referring_Relationships_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FStanfordVL\u002FReferringRelationships) | 210 | \n| [MoCoGAN: Decomposing Motion and Content for Video Generation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTulyakov_MoCoGAN_Decomposing_Motion_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsergeytulyakov\u002Fmocogan) | 205 | \n| [Latent Alignment and Variational Attention](http:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03756v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fharvardnlp\u002Fvar-attn) | 204 | \n| [LayoutNet: Reconstructing the 3D Room Layout From a Single RGB Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZou_LayoutNet_Reconstructing_the_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzouchuhang\u002FLayoutNet) | 202 | \n| [Large-Scale Point Cloud Semantic Segmentation With Superpoint Graphs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLandrieu_Large-Scale_Point_Cloud_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Floicland\u002Fsuperpoint_graph) | 197 | \n| [An End-to-End TextSpotter With Explicit Alignment and Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHe_An_End-to-End_TextSpotter_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftonghe90\u002Ftextspotter) | 195 | \n| [DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKupyn_DeblurGAN_Blind_Motion_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FRaphaelMeudec\u002Fdeblur-gan) | 189 
| \n| [SPLATNet: Sparse Lattice Networks for Point Cloud Processing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSu_SPLATNet_Sparse_Lattice_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fsplatnet) | 188 | \n| [Attentive Generative Adversarial Network for Raindrop Removal From a Single Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQian_Attentive_Generative_Adversarial_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frui1996\u002FDeRaindrop) | 186 | \n| [Single View Stereo Matching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLuo_Single_View_Stereo_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flawy623\u002FSVS) | 182 | \n| [MegaDepth: Learning Single-View Depth Prediction From Internet Photos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_MegaDepth_Learning_Single-View_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flixx2938\u002FMegaDepth) | 181 | \n| [ECO: Efficient Convolutional Network for Online Video Understanding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FMohammadreza_Zolfaghari_ECO_Efficient_Convolutional_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fmzolfaghari\u002FECO-efficient-video-understanding) | 180 | \n| [Unsupervised Feature Learning via Non-Parametric Instance Discrimination](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_Unsupervised_Feature_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhirongw\u002Flemniscate.pytorch) | 180 | \n| [ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLin_ST-GAN_Spatial_Transformer_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fchenhsuanlin\u002Fspatial-transformer-GAN) | 179 | \n| [Video Based Reconstruction of 3D People Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAlldieck_Video_Based_Reconstruction_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fthmoa\u002Fvideoavatars) | 179 | \n| [Social GAN: Socially Acceptable Trajectories With Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FGupta_Social_GAN_Socially_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fagrimgupta92\u002Fsgan) | 178 | \n| [Learning Category-Specific Mesh Reconstruction from Image Collections](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAngjoo_Kanazawa_Learning_Category-Specific_Mesh_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fakanazawa\u002Fcmr) | 176 | \n| [Realistic Evaluation of Deep Semi-Supervised Learning Algorithms](http:\u002F\u002Farxiv.org\u002Fabs\u002F1804.09170v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fbrain-research\u002Frealistic-ssl-evaluation) | 175 | \n| [BSN: Boundary Sensitive Network for Temporal Action Proposal Generation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FTianwei_Lin_BSN_Boundary_Sensitive_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fwzmsltw\u002FBSN-boundary-sensitive-network) | 175 | \n| [Group Normalization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYuxin_Wu_Group_Normalization_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fshaohua0116\u002FGroup-Normalization-Tensorflow) | 175 | \n| [Real-Time Seamless Single Shot 6D Object Pose Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTekin_Real-Time_Seamless_Single_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fsingleshotpose) | 174 | \n| [MVSNet: Depth Inference for Unstructured Multi-view Stereo](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYao_Yao_MVSNet_Depth_Inference_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FYoYo000\u002FMVSNet) | 174 | \n| [Neural Motifs: Scene Graph Parsing With Global Context](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZellers_Neural_Motifs_Scene_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frowanz\u002Fneural-motifs) | 171 | \n| [Learning a Single Convolutional Super-Resolution Network for Multiple Degradations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Learning_a_Single_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcszn\u002FSRMD) | 169 | \n| [Optimizing Video Object Detection via a Scale-Time Lattice](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Optimizing_Video_Object_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhellock\u002Fscale-time-lattice) | 168 | \n| [MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FMuhammed_Kocabas_MultiPoseNet_Fast_Multi-Person_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsalihkaragoz\u002Fpose-residual-network-pytorch) | 167 | \n| [Unsupervised Cross-Dataset Person Re-Identification by Transfer Learning of Spatial-Temporal Patterns](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLv_Unsupervised_Cross-Dataset_Person_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fahangchen\u002FTFusion) | 166 | \n| [Weakly Supervised Instance Segmentation Using Class Peak 
Response](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhou_Weakly_Supervised_Instance_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FZhouYanzhao\u002FPRM) | 166 | \n| [PlaneNet: Piece-Wise Planar Reconstruction From a Single RGB Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLiu_PlaneNet_Piece-Wise_Planar_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fart-programmer\u002FPlaneNet) | 164 | \n| [Residual Dense Network for Image Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Residual_Dense_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyulunzhang\u002FRDN) | 163 | \n| [Embodied Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FDas_Embodied_Question_Answering_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FEmbodiedQA) | 162 | \n| [Evolved Policy Gradients](http:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04821v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fopenai\u002FEPG) | 160 | \n| [Camera Style Adaptation for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhong_Camera_Style_Adaptation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhunzhong07\u002FCamStyle) | 159 | \n| [Weakly and Semi Supervised Human Body Part Parsing via Pose-Guided Knowledge Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFang_Weakly_and_Semi_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FMVIG-SJTU\u002FWSHP) | 159 | \n| [Scale-Recurrent Network for Deep Image Deblurring](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTao_Scale-Recurrent_Network_for_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fjiangsutx\u002FSRN-Deblur) | 159 | \n| [Unsupervised Learning of Monocular Depth Estimation and Visual Odometry With Deep Feature Reconstruction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhan_Unsupervised_Learning_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHuangying-Zhan\u002FDepth-VO-Feat) | 158 | \n| [Relational recurrent neural networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.01822) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FL0SG\u002Frelational-rnn-pytorch) | 157 | \n| [Densely Connected Pyramid Dehazing Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Densely_Connected_Pyramid_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhezhangsprinter\u002FDCPDN) | 155 | \n| [Image Inpainting for Irregular Holes Using Partial Convolutions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FGuilin_Liu_Image_Inpainting_for_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fnaoto0804\u002Fpytorch-inpainting-with-partial-conv) | 153 | \n| [SO-Net: Self-Organizing Network for Point Cloud Analysis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_SO-Net_Self-Organizing_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flijx10\u002FSO-Net) | 152 | \n| [Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSun_Pix3D_Dataset_and_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxingyuansun\u002Fpix3d) | 152 | \n| [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_ShuffleNet_An_Extremely_CVPR_2018_paper.pdf) | CVPR | 
[code](https://github.com/camel007/Caffe-ShuffleNet) | 152 |
| [DenseASPP for Semantic Segmentation in Street Scenes](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yang_DenseASPP_for_Semantic_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/DeepMotionAIResearch/DenseASPP) | 151 |
| [Facelet-Bank for Fast Portrait Manipulation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Facelet-Bank_for_Fast_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yingcong/Facelet_Bank) | 150 |
| [Self-Imitation Learning](http://proceedings.mlr.press/v80/oh18b.html) | ICML | [code](https://github.com/junhyukoh/self-imitation-learning) | 145 |
| [Graph R-CNN for Scene Graph Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Jianwei_Yang_Graph_R-CNN_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/jwyang/graph-rcnn.pytorch) | 144 |
| [A Closer Look at Spatiotemporal Convolutions for Action Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tran_A_Closer_Look_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/irhumshafkat/R2Plus1D-PyTorch) | 143 |
| [Cross-Domain Weakly-Supervised Object Detection Through Progressive Domain Adaptation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Inoue_Cross-Domain_Weakly-Supervised_Object_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/naoto0804/cross-domain-detection) | 143 |
| [Quantized Densely Connected U-Nets for Efficient Landmark Localization](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhiqiang_Tang_Quantized_Densely_Connected_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zhiqiangdon/CU-Net) | 143 |
| [Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining](http://openaccess.thecvf.com/content_ECCV_2018/html/Xia_Li_Recurrent_Squeeze-and-Excitation_Context_ECCV_2018_paper.html) | ECCV | [code](https://github.com/XiaLiPKU/RESCAN) | 142 |
| [Two-Stream Convolutional Networks for Dynamic Texture Synthesis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tesfaldet_Two-Stream_Convolutional_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ryersonvisionlab/two-stream-dyntex-synth) | 141 |
| [Integral Human Pose Regression](http://openaccess.thecvf.com/content_ECCV_2018/html/Xiao_Sun_Integral_Human_Pose_ECCV_2018_paper.html) | ECCV | [code](https://github.com/JimmySuen/integral-human-pose) | 141 |
| [Adaptive Affinity Fields for Semantic Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Jyh-Jing_Hwang_Adaptive_Affinity_Field_ECCV_2018_paper.html) | ECCV | [code](https://github.com/twke18/Adaptive_Affinity_Fields) | 141 |
| [LSTM Pose Machines](http://openaccess.thecvf.com/content_cvpr_2018/papers/Luo_LSTM_Pose_Machines_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lawy623/LSTM_Pose_Machines) | 141 |
| [Structure Inference Net: Object Detection Using Scene-Level Context and Instance-Level Relationships](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Structure_Inference_Net_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/choasup/SIN) | 140 |
| [Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Recovering_Realistic_Texture_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xinntao/CVPR18-SFTGAN) | 139 |
| [Image-Image Domain Adaptation With Preserved Self-Similarity and Domain-Dissimilarity for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Deng_Image-Image_Domain_Adaptation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Simon4Yan/Learning-via-Translation) | 137 |
| [Learning to Compare: Relation Network for Few-Shot Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sung_Learning_to_Compare_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lzrobots/LearningToCompare_ZSL) | 135 |
| [CosFace: Large Margin Cosine Loss for Deep Face Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_CosFace_Large_Margin_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yule-li/CosFace) | 135 |
| [Deep Depth Completion of a Single RGB-D Image](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Deep_Depth_Completion_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yindaz/DeepCompletionRelease) | 134 |
| [Deep Back-Projection Networks for Super-Resolution](http://openaccess.thecvf.com/content_cvpr_2018/papers/Haris_Deep_Back-Projection_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/alterzero/DBPN-Pytorch) | 132 |
| [Context Embedding Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kim_Context_Embedding_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/thunlp/CANE) | 131 |
| [Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kendall_Multi-Task_Learning_Using_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/alexgkendall/multitaskvision) | 131 |
| [Perturbative Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Juefei-Xu_Perturbative_Neural_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/juefeix/pnn.pytorch) | 130 |
| [Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis](http://proceedings.mlr.press/v80/wang18h.html) | ICML | [code](https://github.com/syang1993/gst-tacotron) | 129 |
| [Fast and Accurate Online Video Object Segmentation via Tracking Parts](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cheng_Fast_and_Accurate_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/JingchunCheng/FAVOS) | 129 |
| [Nonlinear 3D Face Morphable Model](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tran_Nonlinear_3D_Face_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tranluan/Nonlinear_Face_3DMM) | 128 |
| [BodyNet: Volumetric Inference of 3D Human Body Shapes](http://openaccess.thecvf.com/content_ECCV_2018/html/Gul_Varol_BodyNet_Volumetric_Inference_ECCV_2018_paper.html) | ECCV | [code](https://github.com/gulvarol/bodynet) | 126 |
| [3D-CODED: 3D Correspondences by Deep Deformation](http://openaccess.thecvf.com/content_ECCV_2018/html/Thibault_Groueix_Shape_correspondences_from_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ThibaultGROUEIX/3D-CODED) | 125 |
| [DeepMVS: Learning Multi-View Stereopsis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_DeepMVS_Learning_Multi-View_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/phuang17/DeepMVS) | 125 |
| [Hierarchical Imitation and Reinforcement Learning](http://proceedings.mlr.press/v80/le18a.html) | ICML | [code](https://github.com/hoangminhle/hierarchical_IL_RL) | 124 |
| [Domain Adaptive Faster R-CNN for Object Detection in the Wild](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Domain_Adaptive_Faster_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yuhuayc/da-faster-rcnn) | 123 |
| [L4: Practical loss-based stepsize adaptation for deep learning](http://arxiv.org/abs/1802.05074v4) | NIPS | [code](https://github.com/martius-lab/l4-optimizer) | 123 |
| [A Generative Adversarial Approach for Zero-Shot Learning From Noisy Texts](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhu_A_Generative_Adversarial_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/EthanZhu90/ZSL_GAN_CVPR18) | 122 |
| [Recurrent Relational Networks](http://arxiv.org/abs/1711.08028v2) | NIPS | [code](https://github.com/rasmusbergpalm/recurrent-relational-networks) | 121 |
| [Gated Path Planning Networks](http://proceedings.mlr.press/v80/lee18c.html) | ICML | [code](https://github.com/lileee/gated-path-planning-networks) | 121 |
| [PSANet: Point-wise Spatial Attention Network for Scene Parsing](http://openaccess.thecvf.com/content_ECCV_2018/html/Hengshuang_Zhao_PSANet_Point-wise_Spatial_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hszhao/PSANet) | 121 |
| [Rethinking Feature Distribution for Loss Functions in Image Classification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wan_Rethinking_Feature_Distribution_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/WeitaoVan/L-GM-loss) | 120 |
| [Density-Aware Single Image De-Raining Using a Multi-Stream Dense Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Density-Aware_Single_Image_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hezhangsprinter/DID-MDN) | 118 |
| [FOTS: Fast Oriented Text Spotting With a Unified Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_FOTS_Fast_Oriented_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jiangxiluning/FOTS.PyTorch) | 118 |
| [ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes](http://openaccess.thecvf.com/content_ECCV_2018/html/Taihong_Xiao_ELEGANT_Exchanging_Latent_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Prinsphield/ELEGANT) | 117 |
| [PU-Net: Point Cloud Upsampling Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_PU-Net_Point_Cloud_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yulequan/PU-Net) | 117 |
| [PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Mallya_PackNet_Adding_Multiple_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/arunmallya/packnet) | 117 |
| [Long-term Tracking in the Wild: a Benchmark](http://openaccess.thecvf.com/content_ECCV_2018/html/Efstratios_Gavves_Long-term_Tracking_in_ECCV_2018_paper.html) | ECCV | [code](https://github.com/oxuva/long-term-tracking-benchmark) | 116 |
| [Factoring Shape, Pose, and Layout From the 2D Image of a 3D Scene](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tulsiani_Factoring_Shape_Pose_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/shubhtuls/factored3d) | 114 |
| [Repulsion Loss: Detecting Pedestrians in a Crowd](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Repulsion_Loss_Detecting_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/bailvwangzi/repulsion_loss_ssd) | 113 |
| [Unsupervised Attention-guided Image-to-Image Translation](https://arxiv.org/abs/1806.02311) | NIPS | [code](https://github.com/AlamiMejjati/Unsupervised-Attention-guided-Image-to-Image-Translation) | 110 |
| [Attention-based Deep Multiple Instance Learning](http://proceedings.mlr.press/v80/ilse18a.html) | ICML | [code](https://github.com/AMLab-Amsterdam/AttentionDeepMIL) | 109 |
| [Learning Blind Video Temporal Consistency](http://openaccess.thecvf.com/content_ECCV_2018/html/Wei-Sheng_Lai_Real-Time_Blind_Video_ECCV_2018_paper.html) | ECCV | [code](https://github.com/phoenix104104/fast_blind_video_consistency) | 109 |
| [Noisy Natural Gradient as Variational Inference](http://proceedings.mlr.press/v80/zhang18l.html) | ICML | [code](https://github.com/wlwkgus/NoisyNaturalGradient) | 108 |
| [End-to-End Weakly-Supervised Semantic Alignment](http://openaccess.thecvf.com/content_cvpr_2018/papers/Rocco_End-to-End_Weakly-Supervised_Semantic_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ignacio-rocco/weakalign) | 106 |
| [Decoupled Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Decoupled_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/wy1iu/DCNets) | 105 |
| [LiDAR-Video Driving Dataset: Learning Driving Policies Effectively](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_LiDAR-Video_Driving_Dataset_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/driving-behavior/DBNet) | 104 |
| [MAttNet: Modular Attention Network for Referring Expression Comprehension](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_MAttNet_Modular_Attention_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lichengunc/MAttNet) | 104 |
| [LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Dongqing_Zhang_Optimized_Quantization_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Microsoft/LQ-Nets) | 103 |
| [FSRNet: End-to-End Learning Face Super-Resolution With Facial Priors](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_FSRNet_End-to-End_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tyshiwo/FSRNet) | 100 |
| [Deep Mutual Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Deep_Mutual_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/YingZhangDUT/Deep-Mutual-Learning) | 100 |
| [Macro-Micro Adversarial Network for Human Parsing](http://openaccess.thecvf.com/content_ECCV_2018/html/Yawei_Luo_Macro-Micro_Adversarial_Network_ECCV_2018_paper.html) | ECCV | [code](https://github.com/RoyalVane/MMAN) | 98 |
| [ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans](http://openaccess.thecvf.com/content_cvpr_2018/papers/Dai_ScanComplete_Large-Scale_Scene_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/angeladai/ScanComplete) | 97 |
| [Learning Depth From Monocular Videos Using Direct Methods](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_Depth_From_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/MightyChaos/LKVOLearner) | 97 |
| [VITON: An Image-Based Virtual Try-On Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Han_VITON_An_Image-Based_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xthan/VITON) | 95 |
| [Cascade R-CNN: Delving Into High Quality Object Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cai_Cascade_R-CNN_Delving_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/guoruoqian/cascade-rcnn_Pytorch) | 93 |
| [Learning Human-Object Interactions by Graph Parsing Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Siyuan_Qi_Learning_Human-Object_Interactions_ECCV_2018_paper.html) | ECCV | [code](https://github.com/SiyuanQi/gpnn) | 93 |
| [Future Frame Prediction for Anomaly Detection – A New Baseline](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Future_Frame_Prediction_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/StevenLiuWen/ano_pred_cvpr2018) | 92 |
| [Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence](http://openaccess.thecvf.com/content_ECCV_2018/html/Shao-Hua_Sun_Multi-view_to_Novel_ECCV_2018_paper.html) | ECCV | [code](https://github.com/shaohua0116/Multiview2Novelview) | 92 |
| [Tell Me Where to Look: Guided Attention Inference Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Tell_Me_Where_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/alokwhitewolf/Guided-Attention-Inference-Network) | 91 |
| [Neural Kinematic Networks for Unsupervised Motion Retargetting](http://openaccess.thecvf.com/content_cvpr_2018/papers/Villegas_Neural_Kinematic_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/rubenvillegas/cvpr2018nkn) | 90 |
| [Learning SO(3) Equivariant Representations with Spherical CNNs](http://openaccess.thecvf.com/content_ECCV_2018/html/Carlos_Esteves_Learning_SO3_Equivariant_ECCV_2018_paper.html) | ECCV | [code](https://github.com/daniilidis-group/spherical-cnn) | 89 |
| [One-Shot Unsupervised Cross Domain Translation](http://arxiv.org/abs/1806.06029v1) | NIPS | [code](https://github.com/sagiebenaim/OneShotTranslation) | 89 |
| [Synthesizing Images of Humans in Unseen Poses](http://openaccess.thecvf.com/content_cvpr_2018/papers/Balakrishnan_Synthesizing_Images_of_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/balakg/posewarp-cvpr2018) | 88 |
| [Depth-aware CNN for RGB-D Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Weiyue_Wang_Depth-aware_CNN_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/laughtervv/DepthAwareCNN) | 88 |
| [Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights](http://openaccess.thecvf.com/content_ECCV_2018/html/Arun_Mallya_Piggyback_Adapting_a_ECCV_2018_paper.html) | ECCV | [code](https://github.com/arunmallya/piggyback) | 88 |
| [Knowledge Aided Consistency for Weakly Supervised Phrase Grounding](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Knowledge_Aided_Consistency_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kanchen-usc/KAC-Net) | 87 |
| [CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_CSRNet_Dilated_Convolutional_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/leeyeehoo/CSRNet-pytorch) | 87 |
| [Neural Arithmetic Logic Units](http://arxiv.org/abs/1808.00508v1) | NIPS | [code](https://github.com/llSourcell/Neural_Arithmetic_Logic_Units) | 87 |
| [A PID Controller Approach for Stochastic Optimization of Deep Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/An_A_PID_Controller_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tensorboy/PIDOptimizer) | 87 |
| [VITAL: VIsual Tracking via Adversarial Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Song_VITAL_VIsual_Tracking_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ybsong00/Vital_release) | 86 |
| [Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Learning_Spatial-Temporal_Regularized_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lifeng9472/STRCF) | 86 |
| [Recurrent Pixel Embedding for Instance Grouping](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kong_Recurrent_Pixel_Embedding_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/aimerykong/Recurrent-Pixel-Embedding-for-Instance-Grouping) | 85 |
| [SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_SGPN_Similarity_Group_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/laughtervv/SGPN) | 84 |
| [Multi-Scale Location-Aware Kernel Representation for Object Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Multi-Scale_Location-Aware_Kernel_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Hwang64/MLKP) | 84 |
| [Repeatability Is Not Enough: Learning Affine Regions via Discriminability](http://openaccess.thecvf.com/content_ECCV_2018/html/Dmytro_Mishkin_Repeatability_Is_Not_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ducha-aiki/affnet) | 84 |
| [“Zero-Shot” Super-Resolution Using Deep Internal Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shocher_Zero-Shot_Super-Resolution_Using_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/assafshocher/ZSSR) | 84 |
| [DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency](http://openaccess.thecvf.com/content_ECCV_2018/html/Yuliang_Zou_DF-Net_Unsupervised_Joint_ECCV_2018_paper.html) | ECCV | [code](https://github.com/vt-vl-lab/DF-Net) | 82 |
| [Multi-View Consistency as Supervisory Signal for Learning Shape and Pose Prediction](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tulsiani_Multi-View_Consistency_as_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/shubhtuls/mvcSnP) | 80 |
| [Factorizable Net: An Efficient Subgraph-based Framework for Scene Graph Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Yikang_LI_Factorizable_Net_An_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yikang-li/FactorizableNet) | 78 |
| [Generalizing A Person Retrieval Model Hetero- and Homogeneously](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhun_Zhong_Generalizing_A_Person_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zhunzhong07/HHL) | 78 |
| [Crafting a Toolchain for Image Restoration by Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_Crafting_a_Toolchain_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yuke93/RL-Restore) | 77 |
| [Pairwise Confusion for Fine-Grained Visual Classification](http://openaccess.thecvf.com/content_ECCV_2018/html/Abhimanyu_Dubey_Improving_Fine-Grained_Visual_ECCV_2018_paper.html) | ECCV | [code](https://github.com/abhimanyudubey/confusion) | 77 |
| [Learning to Reweight Examples for Robust Deep Learning](http://proceedings.mlr.press/v80/ren18a.html) | ICML | [code](https://github.com/danieltan07/learning-to-reweight-examples) | 76 |
| [Improving Generalization via Scalable Neighborhood Component Analysis](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhirong_Wu_Improving_Embedding_Generalization_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Microsoft/snca.pytorch) | 76 |
| [SparseMAP: Differentiable Sparse Structured Inference](http://proceedings.mlr.press/v80/niculae18a.html) | ICML | [code](https://github.com/vene/sparsemap) | 75 |
| [PDE-Net: Learning PDEs from Data](http://proceedings.mlr.press/v80/long18a.html) | ICML | [code](https://github.com/ZichaoLong/PDE-Net) | 75 |
| [Pose-Normalized Image Generation for Person Re-identification](http://openaccess.thecvf.com/content_ECCV_2018/html/Xuelin_Qian_Pose-Normalized_Image_Generation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/naiq/PN_GAN) | 75 |
| [Disentangled Person Image Generation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ma_Disentangled_Person_Image_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/charliememory/Disentangled-Person-Image-Generation) | 75 |
| [Learning to Navigate for Fine-grained Classification](http://openaccess.thecvf.com/content_ECCV_2018/html/Ze_Yang_Learning_to_Navigate_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yangze0930/NTS-Net) | 74 |
| [Superpixel Sampling Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Varun_Jampani_Superpixel_Sampling_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/NVlabs/ssn_superpixels) | 74 |
| [Shift-Net: Image Inpainting via Deep Feature Rearrangement](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Zhaoyi-Yan/Shift-Net_pytorch) | 74 |
| [3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Angela_Dai_3DMV_Joint_3D-Multi-View_ECCV_2018_paper.html) | ECCV | [code](https://github.com/angeladai/3DMV) | 74 |
| [Ordinal Depth Supervision for 3D Human Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Pavlakos_Ordinal_Depth_Supervision_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/geopavlakos/ordinal-pose3d) | 74 |
| [Path-Level Network Transformation for Efficient Architecture Search](http://proceedings.mlr.press/v80/cai18a.html) | ICML | [code](https://github.com/han-cai/PathLevel-EAS) | 73 |
| [Diverse Image-to-Image Translation via Disentangled Representations](http://openaccess.thecvf.com/content_ECCV_2018/html/Hsin-Ying_Lee_Diverse_Image-to-Image_Translation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/taki0112/DRIT-Tensorflow) | 72 |
| [Visual Feature Attribution Using Wasserstein GANs](http://openaccess.thecvf.com/content_cvpr_2018/papers/Baumgartner_Visual_Feature_Attribution_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/orobix/Visual-Feature-Attribution-Using-Wasserstein-GANs-Pytorch) | 72 |
| [Real-World Anomaly Detection in Surveillance Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sultani_Real-World_Anomaly_Detection_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/WaqasSultani/AnomalyDetectionCVPR2018) | 72 |
| [Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Self-Supervised_Adversarial_Hashing_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lelan-li/SSAH) | 72 |
| [Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image](http://openaccess.thecvf.com/content_ECCV_2018/html/Siyuan_Huang_Monocular_Scene_Parsing_ECCV_2018_paper.html) | ECCV | [code](https://github.com/thusiyuan/holistic_scene_parsing) | 72 |
| [Learning to Find Good Correspondences](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yi_Learning_to_Find_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/vcg-uvic/learned-correspondence-release) | 72 |
| [Learning Less Is More - 6D Camera Localization via 3D Surface Regression](http://openaccess.thecvf.com/content_cvpr_2018/papers/Brachmann_Learning_Less_Is_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/vislearn/LessMore) | 72 |
| [Object Level Visual Reasoning in Videos](http://openaccess.thecvf.com/content_ECCV_2018/html/Fabien_Baradel_Object_Level_Visual_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fabienbaradel/object_level_visual_reasoning) | 71 |
| [Weakly-Supervised Semantic Segmentation Network With Deep Seeded Region Growing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Weakly-Supervised_Semantic_Segmentation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/speedinghzl/DSRG) | 71 |
| [Avatar-Net: Multi-Scale Zero-Shot Style Transfer by Feature Decoration](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sheng_Avatar-Net_Multi-Scale_Zero-Shot_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/LucasSheng/avatar-net) | 71 |
| [Fast and Accurate Single Image Super-Resolution via Information Distillation Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hui_Fast_and_Accurate_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Zheng222/IDN-Caffe) | 71 |
| [Regularizing RNNs for Caption Generation by Reconstructing the Past With the Present](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Regularizing_RNNs_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/chenxinpeng/ARNet) | 70 |
| [Multi-Shot Pedestrian Re-Identification via Sequential Decision Making](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Multi-Shot_Pedestrian_Re-Identification_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/TuSimple/rl-multishot-reid) | 70 |
| [PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Uy_PointNetVLAD_Deep_Point_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/mikacuy/pointnetvlad) | 69 |
| [Progressive Neural Architecture Search](http://openaccess.thecvf.com/content_ECCV_2018/html/Chenxi_Liu_Progressive_Neural_Architecture_ECCV_2018_paper.html) | ECCV | [code](https://github.com/titu1994/progressive-neural-architecture-search) | 68 |
| [Generative Neural Machine Translation](http://arxiv.org/abs/1806.05138v1) | NIPS | [code](https://github.com/ZhenYangIACAS/NMT_GAN) | 68 |
| [Learning Latent Super-Events to Detect Multiple Activities in Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Piergiovanni_Learning_Latent_Super-Events_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/piergiaj/super-events-cvpr18) | 67 |
| [Generate to Adapt: Aligning Domains Using Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sankaranarayanan_Generate_to_Adapt_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yogeshbalaji/Generate_To_Adapt) | 67 |
| [Adversarial Feature Augmentation for Unsupervised Domain Adaptation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Volpi_Adversarial_Feature_Augmentation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ricvolpi/adversarial-feature-augmentation) | 67 |
| [Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_Attentions_Residual_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/foolwood/RASNet) | 67 |
| [Pointwise Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hua_Pointwise_Convolutional_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/scenenn/pointwise) | 67 |
| [Optimizing the Latent Space of Generative Networks](http://proceedings.mlr.press/v80/bojanowski18a.html) | ICML | [code](https://github.com/tneumann/minimal_glo) | 66 |
| [Part-Aligned Bilinear Representations for Person Re-Identification](http://openaccess.thecvf.com/content_ECCV_2018/html/Yumin_Suh_Part-Aligned_Bilinear_Representations_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yuminsuh/part_bilinear_reid) | 64 |
| [Geometry-Aware Learning of Maps for Camera Localization](http://openaccess.thecvf.com/content_cvpr_2018/papers/Brahmbhatt_Geometry-Aware_Learning_of_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/samarth-robo/MapNet) | 63 |
| [Fighting Fake News: Image Splice Detection via Learned Self-Consistency](http://openaccess.thecvf.com/content_ECCV_2018/html/Jacob_Huh_Fighting_Fake_News_ECCV_2018_paper.html) | ECCV | [code](https://github.com/minyoungg/selfconsistency) | 62 |
| [Isolating Sources of Disentanglement in Variational Autoencoders](http://arxiv.org/abs/1802.04942v2) | NIPS | [code](https://github.com/rtqichen/beta-tcvae) | 62 |
| [Neural Program Synthesis from Diverse Demonstration Videos](http://proceedings.mlr.press/v80/sun18a.html) | ICML | [code](https://github.com/shaohua0116/demo2program) | 62 |
| [Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation](http://openaccess.thecvf.com/content_ECCV_2018/html/Zhaoyang_Lv_Learning_Rigidity_in_ECCV_2018_paper.html) | ECCV | [code](https://github.com/NVlabs/learningrigidity) | 61 |
| [Rotation-Sensitive Regression for Oriented Scene Text Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liao_Rotation-Sensitive_Regression_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/MhLiao/RRD) | 61 |
| [Human Semantic Parsing for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kalayeh_Human_Semantic_Parsing_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/emrahbasaran/SPReID) | 61 |
| [Unsupervised Discovery of Object Landmarks as Structural Representations](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Unsupervised_Discovery_of_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/YutingZhang/lmdis-rep) | 61 |
| [IQA: Visual Question Answering in Interactive Environments](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gordon_IQA_Visual_Question_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/danielgordon10/thor-iqa-cvpr-2018) | 60 |
| [Hierarchical Long-term Video Prediction without Supervision](http://proceedings.mlr.press/v80/wichers18a.html) | ICML | [code](https://github.com/brain-research/long-term-video-prediction-without-supervision) | 60 |
| [Unsupervised Domain Adaptation for 3D Keypoint Estimation via View Consistency](http://openaccess.thecvf.com/content_ECCV_2018/html/Xingyi_Zhou_Unsupervised_Domain_Adaptation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/xingyizhou/3DKeypoints-DA) | 60 |
| [Exploit the Unknown Gradually: One-Shot Video-Based Person Re-Identification by Stepwise Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Exploit_the_Unknown_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Yu-Wu/Exploit-Unknown-Gradually) | 59 |
| [Neural Style Transfer via Meta Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_Neural_Style_Transfer_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/FalongShen/styletransfer) | 59 |
| [Frame-Recurrent Video Super-Resolution](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sajjadi_Frame-Recurrent_Video_Super-Resolution_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/msmsajjadi/FRVSR) | 58 |
| [PlaneMatch: Patch Coplanarity Prediction for Robust RGB-D Reconstruction](http://openaccess.thecvf.com/content_ECCV_2018/html/Yifei_Shi_PlaneMatch_Patch_Coplanarity_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yifeishi/PlaneMatch) | 57 |
| [CBAM: Convolutional Block Attention Module](http://openaccess.thecvf.com/content_ECCV_2018/html/Sanghyun_Woo_Convolutional_Block_Attention_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Youngkl0726/Convolutional-Block-Attention-Module) | 57 |
| [Decorrelated Batch Normalization](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Decorrelated_Batch_Normalization_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/umich-vl/DecorrelatedBN) | 57 |
| Learning Conditioned Graph Structures for Interpretable Visual Question Answering | NIPS | [code](https://github.com/aimbrain/vqa-project) | 57 |
| [Hierarchical Bilinear Pooling for Fine-Grained Visual Recognition](http://openaccess.thecvf.com/content_ECCV_2018/html/Chaojian_Yu_Hierarchical_Bilinear_Pooling_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ChaojianYu/Hierarchical-Bilinear-Pooling) | 57 |
| [Leveraging Unlabeled Data for Crowd Counting by Learning to Rank](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Leveraging_Unlabeled_Data_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xialeiliu/CrowdCountingCVPR18) | 56 |
| [Deep Marching Cubes: Learning Explicit Surface Representations](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liao_Deep_Marching_Cubes_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yiyiliao/deep_marching_cubes) | 56 |
| [Learning From Synthetic Data: Addressing Domain Shift for Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sankaranarayanan_Learning_From_Synthetic_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/swamiviv/LSD-seg) | 56 |
| [LF-Net: Learning Local Features from Images](https://arxiv.org/abs/1805.09662) | NIPS | [code](https://github.com/vcg-uvic/lf-net-release) | 55 |
| [Semi-supervised Adversarial Learning to Generate Photorealistic Face Images of New Identities from 3D Morphable Model](http://openaccess.thecvf.com/content_ECCV_2018/html/Baris_Gecer_Semi-supervised_Adversarial_Learning_ECCV_2018_paper.html) | ECCV | [code](https://github.com/barisgecer/facegan) | 55 |
| [Discriminability Objective for Training Descriptive Captions](http://openaccess.thecvf.com/content_cvpr_2018/papers/Luo_Discriminability_Objective_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ruotianluo/DiscCaptioning) | 54 |
| [BlockDrop:
Dynamic Inference Paths in Residual Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWu_BlockDrop_Dynamic_Inference_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FTushar-N\u002Fblockdrop) | 54 | \n| [Conditional Probability Models for Deep Image Compression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMentzer_Conditional_Probability_Models_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffab-jul\u002Fimgcomp-cvpr) | 54 | \n| [Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPeng_Jointly_Optimize_Data_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhiqiangdon\u002Fpose-adv-aug) | 54 | \n| [Learning towards Minimum Hyperspherical Energy](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09298v4) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwy1iu\u002FMHE) | 54 | \n| [DeepVS: A Deep Learning Based Video Saliency Prediction Approach](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLai_Jiang_DeepVS_A_Deep_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fremega\u002FOMCNN_2CLSTM) | 53 | \n| [Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FWei_Liu_Learning_Efficient_Single-stage_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fliuwei16\u002FALFNet) | 52 | \n| [Learning Pixel-Level Semantic Affinity With Image-Level Supervision for Weakly Supervised Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAhn_Learning_Pixel-Level_Semantic_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiwoon-ahn\u002Fpsa) | 52 | \n| [Wasserstein 
Introspective Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLee_Wasserstein_Introspective_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkjunelee\u002FWINN) | 51 | \n| [SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_SketchyGAN_Towards_Diverse_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwchen342\u002FSketchyGAN) | 51 | \n| [Self-produced Guidance for Weakly-supervised Object Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiaolin_Zhang_Self-produced_Guidance_for_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fxiaomengyc\u002FSPG) | 51 | \n| [Measuring abstract reasoning in neural networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsantoro18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fabstract-reasoning-matrices) | 51 | \n| [A Unified Feature Disentangler for Multi-Domain Image Translation and Manipulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.01361) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FXenderLiu\u002FUFDN) | 51 | \n| [RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPaschalidou_RayNet_Learning_Volumetric_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpaschalidoud\u002Fraynet) | 51 | \n| [Coloring with Words: Guiding Image Colorization Through Text-based Palette Generation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHyojin_Bahng_Coloring_with_Words_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fawesome-davian\u002FText2Colors) | 50 | \n| [Efficient end-to-end learning for quantizable 
representations](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fjeong18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmaestrojeong\u002FDeep-Hash-Table-ICML18) | 50 | \n| [Visual Question Generation as Dual Task of Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Visual_Question_Generation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyikang-li\u002FiQAN) | 50 | \n| [Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fkhan18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Femtiyaz\u002Fvadam) | 49 | \n| [Surface Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKostrikov_Surface_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiangzhongshi\u002FSurfaceNetworks) | 48 | \n| [Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fwu18h.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FSandbox3aster\u002FDeep-K-Means-pytorch) | 48 | \n| [Stacked Cross Attention for Image-Text Matching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FKuang-Huei_Lee_Stacked_Cross_Attention_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fkuanghuei\u002FSCAN) | 48 | \n| [Actor and Observer: Joint Modeling of First and Third-Person Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSigurdsson_Actor_and_Observer_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgsig\u002Factor-observer) | 48 | \n| [Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FJiang_Super_SloMo_High_CVPR_2018_paper.pdf) | 
CVPR | [code](https:\u002F\u002Fgithub.com\u002FTheFairBear\u002FSuper-SlowMo) | 47 | \n| [Learning-based Video Motion Magnification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FTae-Hyun_Oh_Learning-based_Video_Motion_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002F12dmodel\u002Fdeep_motion_mag) | 47 | \n| [Pose Partition Networks for Multi-Person Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXuecheng_Nie_Pose_Partition_Networks_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FNieXC\u002Fpytorch-ppn) | 47 | \n| [Neural Autoregressive Flows](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fhuang18d.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FCW-Huang\u002FNAF) | 47 | \n| [Weakly- and Semi-Supervised Panoptic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAnurag_Arnab_Weakly-_and_Semi-Supervised_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fqizhuli\u002FWeakly-Supervised-Panoptic-Segmentation) | 46 | \n| [Video Re-localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYang_Feng_Video_Re-localization_via_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffengyang0317\u002Fvideo_reloc) | 46 | \n| [Real-time 'Actor-Critic' Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FBoyu_Chen_Real-time_Actor-Critic_Tracking_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fbychen515\u002FACT) | 46 | \n| [Black-box Adversarial Attacks with Limited Queries and Information](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Filyas18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flabsix\u002Flimited-blackbox-attacks) | 46 | \n| [Hyperbolic Entailment Cones for Learning Hierarchical 
Embeddings](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fganea18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdalab\u002Fhyperbolic_cones) | 46 | \n| [Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FXu_Structured_Attention_Guided_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanxuhk\u002FStructuredAttentionDepthEstimation) | 46 | \n| [Differentiable Compositional Kernel Learning for Gaussian Processes](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsun18e.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fssydasheng\u002FNeural-Kernel-Network) | 45 | \n| [Visualizing and Understanding Atari Agents](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fgreydanus18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fgreydanus\u002Fvisualize_atari) | 45 | \n| [Image Manipulation with Perceptual Discriminators](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FDiana_Sungatullina_Image_Manipulation_with_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fegorzakharov\u002FPerceptualGAN) | 45 | \n| [Learning Intrinsic Image Decomposition From Watching the World](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Learning_Intrinsic_Image_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flixx2938\u002Funsupervised-learning-intrinsic-images) | 45 | \n| [Overcoming Catastrophic Forgetting with Hard Attention to the Task](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fserra18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjoansj\u002Fhat) | 44 | \n| [Learning Pose Specific Representations by Predicting Different Views](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPoier_Learning_Pose_Specific_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fpoier\u002FPreView) | 44 | \n| [Zero-Shot Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAnkan_Bansal_Zero-Shot_Object_Detection_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsalman-h-khan\u002FZSD_Release) | 43 | \n| [Mean Field Multi-Agent Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fyang18d.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmlii\u002Fmfrl) | 43 | \n| [Partial Adversarial Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZhangjie_Cao_Partial_Adversarial_Domain_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FPADA) | 43 | \n| [Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXuecheng_Nie_Mutual_Learning_to_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FNieXC\u002Fpytorch-mula) | 43 | \n| [Robust Classification With Convolutional Prototype Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYang_Robust_Classification_With_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYangHM\u002FConvolutional-Prototype-Learning) | 43 | \n| [SimplE Embedding for Link Prediction in Knowledge Graphs](http:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04868v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FMehran-k\u002FSimplE) | 42 | \n| [PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fwang18b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FYunbo426\u002Fpredrnn-pp) | 42 | \n| [Learning to Blend Photos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FWei-Chih_Hung_Learning_to_Blend_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002Fhfslyc\u002FLearnToBlend) | 42 | \n| [Mask-Guided Contrastive Attention Model for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSong_Mask-Guided_Contrastive_Attention_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdevelopfeng\u002FMGCAM) | 41 | \n| [Link Prediction Based on Graph Neural Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09691v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmuhanzhang\u002FSEAL) | 41 | \n| [Generalisation in humans and deep neural networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1808.08750v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frgeirhos\u002Fgeneralisation-humans-DNNs) | 41 | \n| [Towards Binary-Valued Gates for Robust LSTM Training](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fli18c.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fzhuohan123\u002Fg2-lstm) | 41 | \n| [Multi-scale Residual Network for Image Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FJuncheng_Li_Multi-scale_Residual_Network_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FMIVRC\u002FMSRN-PyTorch) | 41 | \n| [Fully Motion-Aware Network for Video Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FShiyao_Wang_Fully_Motion-Aware_Network_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fwangshy31\u002FMANet_for_Video_Object_Detection) | 41 | \n| [Interpretable Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Interpretable_Convolutional_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fseongjunyun\u002FCNN-with-Dual-Local-and-Global-Attention) | 40 | \n| [Generative Adversarial 
Perturbations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPoursaeed_Generative_Adversarial_Perturbations_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FOmidPoursaeed\u002FGenerative_Adversarial_Perturbations) | 40 | \n| [The Sound of Pixels](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHang_Zhao_The_Sound_of_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Froudimit\u002FMUSIC_dataset) | 40 | \n| [Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Towards_Faster_Training_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiangtaoxie\u002Ffast-MPN-COV) | 40 | \n| [Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FRamprasaath_Ramasamy_Selvaraju_Choose_Your_Neuron_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Framprs\u002Fneuron-importance-zsl) | 40 | \n| [Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.09987) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FEdwardSmith1884\u002FMulti-View-Silhouette-and-Depth-Decomposition-for-High-Resolution-3D-Object-Representation) | 40 | \n| [Learning Warped Guidance for Blind Face Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiaoming_Li_Learning_Warped_Guidance_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fcsxmli2016\u002FGFRNet) | 39 | \n| [Adversarial Complementary Learning for Weakly Supervised Object Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Adversarial_Complementary_Learning_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fxiaomengyc\u002FACoL) | 39 | \n| [Learning Semantic Representations for Unsupervised Domain Adaptation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fxie18c.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FMid-Push\u002FMoving-Semantic-Transfer-Network) | 39 | \n| [Neural Architecture Search with Bayesian Optimisation and Optimal Transport](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.07191) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fkirthevasank\u002Fnasbot) | 39 | \n| [Mutual Information Neural Estimation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fbelghazi18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FMasanoriYamada\u002FMine_pytorch) | 39 | \n| [NetGAN: Generating Graphs via Random Walks](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fbojchevski18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdanielzuegner\u002Fnetgan) | 39 | \n| [Learning to Evaluate Image Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FCui_Learning_to_Evaluate_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frichardaecn\u002Fcvpr18-caption-eval) | 38 | \n| [Hyperbolic Neural Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09112v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fdalab\u002Fhyperbolic_nn) | 37 | \n| [Unsupervised Geometry-Aware Representation for 3D Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHelge_Rhodin_Unsupervised_Geometry-Aware_Representation_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fhrhodin\u002FUnsupervisedGeometryAwareRepresentationLearning) | 37 | \n| [Adversarially Learned One-Class Classifier for Novelty Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSabokrou_Adversarially_Learned_One-Class_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fkhalooei\u002FALOCC-CVPR2018) | 37 | \n| [Disentangling by Factorising](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fkim18b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002F1Konny\u002FFactorVAE) | 37 | \n| [Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fweiss18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Ftech-srl\u002Flstar_extraction) | 37 | \n| [Tangent Convolutions for Dense Prediction in 3D](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FTatarchenko_Tangent_Convolutions_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftatarchm\u002Ftangent_conv) | 37 | \n| [Few-Shot Image Recognition by Predicting Parameters From Activations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQiao_Few-Shot_Image_Recognition_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjoe-siyuan-qiao\u002FFewShot-CVPR) | 37 | \n| [Real-Time Monocular Depth Estimation Using Synthetic Data With Domain Adaptation via Image Style Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAtapour-Abarghouei_Real-Time_Monocular_Depth_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fatapour\u002FmonocularDepth-Inference) | 37 | \n| [Generalizing to Unseen Domains via Adversarial Data Augmentation](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.12018v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fricvolpi\u002Fgeneralize-unseen-domains) | 36 | \n| [SeGAN: Segmenting and Generating the Invisible](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FEhsani_SeGAN_Segmenting_and_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fehsanik\u002FSeGAN) | 36 | \n| [Graphical Generative Adversarial 
Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03429v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fzhenxuan00\u002Fgraphical-gan) | 36 | \n| [PieAPP: Perceptual Image-Error Assessment Through Pairwise Preference](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPrashnani_PieAPP_Perceptual_Image-Error_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fprashnani\u002FPerceptualImageError) | 36 | \n| [Gated Fusion Network for Single Image Dehazing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FRen_Gated_Fusion_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frwenqi\u002FGFN-dehazing) | 35 | \n| [Neural Code Comprehension: A Learnable Representation of Code Semantics](http:\u002F\u002Farxiv.org\u002Fabs\u002F1806.07336v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fspcl\u002Fncc) | 35 | \n| [Eye In-Painting With Exemplar Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FDolhansky_Eye_In-Painting_With_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhangqianhui\u002FExemplar-GAN-Eye-Inpainting-Tensorflow) | 35 | \n| [Deep One-Class Classification](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fruff18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flukasruff\u002FDeep-SVDD) | 34 | \n| [Deep Regression Tracking with Shrinkage Loss](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiankai_Lu_Deep_Regression_Tracking_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fchaoma99\u002FDSLT) | 34 | \n| [Deflecting Adversarial Attacks With Pixel Deflection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPrakash_Deflecting_Adversarial_Attacks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fiamaaditya\u002Fpixel-deflection) | 34 | \n| [Learning 
Visual Question Answering by Bootstrapping Hard Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FMateusz_Malinowski_Learning_Visual_Question_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fgnouhp\u002FPyTorch-AdaHAN) | 33 | \n| [Human-Centric Indoor Scene Synthesis Using Stochastic Grammar](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FQi_Human-Centric_Indoor_Scene_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FSiyuanQi\u002Fhuman-centric-scene-synthesis) | 33 | \n| [Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNguyen_Improved_Fusion_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcvlab-tohoku\u002FDense-CoAttention-Network) | 33 | \n| [CleanNet: Transfer Learning for Scalable Image Classifier Training With Label Noise](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLee_CleanNet_Transfer_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkuanghuei\u002Fclean-net) | 33 | \n| [Speaker-Follower Models for Vision-and-Language Navigation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.02724) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fspeaker_follower) | 33 | \n| [Improving Shape Deformation in Unsupervised Image-to-Image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FAaron_Gokaslan_Improving_Shape_Deformation_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fbrownvc\u002Fganimorph) | 33 | \n| [Learning Single-View 3D Reconstruction with Limited Pose Supervision](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FGuandao_Yang_A_Unified_Framework_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002Fstevenygd\u002F3d-recon) | 33 | \n| [3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.02547) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmariogeiger\u002Fse3cnn) | 33 | \n| [Adversarial Logit Pairing](http:\u002F\u002Farxiv.org\u002Fabs\u002F1803.06373v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Flabsix\u002Fadversarial-logit-pairing-analysis) | 32 | \n| [Attention in Convolutional LSTM for Gesture Recognition](https:\u002F\u002Fnips.cc\u002FConferences\u002F2018\u002FSchedule?showEvent=11207) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FGuangmingZhu\u002FAttentionConvLSTM) | 32 | \n| [Graph-Cut RANSAC](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBarath_Graph-Cut_RANSAC_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanini\u002Fgraph-cut-ransac) | 32 | \n| [Neural Guided Constraint Logic Programming for Program Synthesis](http:\u002F\u002Farxiv.org\u002Fabs\u002F1809.02840v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fxuexue\u002Fneuralkanren) | 32 | \n| [Learning Dynamic Memory Networks for Object Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FTianyu_Yang_Learning_Dynamic_Memory_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fskyoung\u002FMemTrack) | 32 | \n| [GeoDesc: Learning Local Descriptors by Integrating Geometry Constraints](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZixin_Luo_Learning_Local_Descriptors_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Flzx551402\u002Fgeodesc) | 32 | \n| [A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fpokaxpoka\u002Fdeep_Mahalanobis_detector) | 32 | \n| [Flow-Grounded Spatial-Temporal Video Prediction 
from Still Images](http://openaccess.thecvf.com/content_ECCV_2018/html/Yijun_Li_Flow-Grounded_Spatial-Temporal_Video_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction) | 32 |
| [Bidirectional Feature Pyramid Network with Recurrent Attention Residual Modules for Shadow Detection](http://openaccess.thecvf.com/content_ECCV_2018/html/Lei_Zhu_Bi-directional_Feature_Pyramid_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zijundeng/BDRAR) | 32 |
| [On the Robustness of Semantic Segmentation Models to Adversarial Attacks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Arnab_On_the_Robustness_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hmph/adversarial-attacks) | 31 |
| [Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cui_Large_Scale_Fine-Grained_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/richardaecn/cvpr18-inaturalist-transfer) | 31 |
| [SketchyScene: Richly-Annotated Scene Sketches](http://openaccess.thecvf.com/content_ECCV_2018/html/Changqing_Zou_SketchyScene_Richly-Annotated_Scene_ECCV_2018_paper.html) | ECCV | [code](https://github.com/SketchyScene/SketchyScene) | 31 |
| [Deep Randomized Ensembles for Metric Learning](http://openaccess.thecvf.com/content_ECCV_2018/html/Hong_Xuan_Randomized_Ensemble_Embeddings_ECCV_2018_paper.html) | ECCV | [code](https://github.com/littleredxh/DREML) | 30 |
| [Deep High Dynamic Range Imaging with Large Foreground Motions](http://openaccess.thecvf.com/content_ECCV_2018/html/Shangzhe_Wu_Deep_High_Dynamic_ECCV_2018_paper.html) | ECCV | [code](https://github.com/elliottwu/DeepHDR) | 30 |
| [Revisiting Video Saliency: A Large-Scale Benchmark and a New Model](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Revisiting_Video_Saliency_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/wenguanwang/DHF1K) | 30 |
| [Blazingly Fast Video Object Segmentation With Pixel-Wise Metric Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Blazingly_Fast_Video_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yuhuayc/fast-vos) | 30 |
| [Deep Model-Based 6D Pose Refinement in RGB](http://openaccess.thecvf.com/content_ECCV_2018/html/Fabian_Manhardt_Deep_Model-Based_6D_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fabi92/eccv18-rgb_pose_refinement) | 30 |
| [TOM-Net: Learning Transparent Object Matting From a Single Image](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_TOM-Net_Learning_Transparent_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/guanyingc/TOM-Net) | 30 |
| [Quaternion Convolutional Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Xuanyu_Zhu_Quaternion_Convolutional_Neural_ECCV_2018_paper.html) | ECCV | [code](https://github.com/TParcollet/Quaternion-Convolutional-Neural-Networks-for-End-to-End-Automatic-Speech-Recognition) | 30 |
| [Densely Connected Attention Propagation for Reading Comprehension](https://nips.cc/Conferences/2018/Schedule?showEvent=11481) | NIPS | [code](https://github.com/vanzytay/NIPS2018_DECAPROP) | 30 |
| [A Trilateral Weighted Sparse Coding Scheme for Real-World Image Denoising](http://openaccess.thecvf.com/content_ECCV_2018/html/XU_JUN_A_Trilateral_Weighted_ECCV_2018_paper.html) | ECCV | [code](https://github.com/csjunxu/TWSC-ECCV2018) | 30 |
| [Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings](http://proceedings.mlr.press/v80/co-reyes18a.html) | ICML | [code](https://github.com/wyndwarrior/Sectar) | 29 |
| [Video Rain Streak Removal by Multiscale Convolutional Sparse Coding](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Video_Rain_Streak_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/MinghanLi/MS-CSC-Rain-Streak-Removal) | 29 |
| [Recurrent Scene Parsing With Perspective Understanding in the Loop](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kong_Recurrent_Scene_Parsing_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/aimerykong/Recurrent-Scene-Parsing-with-Perspective-Understanding-in-the-loop) | 29 |
| [Single Shot Scene Text Retrieval](http://openaccess.thecvf.com/content_ECCV_2018/html/Lluis_Gomez_Single_Shot_Scene_ECCV_2018_paper.html) | ECCV | [code](https://github.com/lluisgomez/single-shot-str) | 29 |
| [Toward Characteristic-Preserving Image-based Virtual Try-On Network](http://openaccess.thecvf.com/content_ECCV_2018/html/Bochao_Wang_Toward_Characteristic-Preserving_Image-based_ECCV_2018_paper.html) | ECCV | [code](https://github.com/sergeywong/cp-vton) | 29 |
| [Explainable Neural Computation via Stack Neural Module Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Ronghang_Hu_Explainable_Neural_Computation_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ronghanghu/snmn) | 29 |
| [Exploring Disentangled Feature Representation Beyond Face Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Exploring_Disentangled_Feature_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/sciencefans/D2AE-Face-Generator) | 29 |
| [Controllable Video Generation With Sparse Trajectories](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hao_Controllable_Video_Generation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zekunhao1995/ControllableVideoGen) | 28 |
| [Layer-structured 3D Scene Inference via View Synthesis](http://openaccess.thecvf.com/content_ECCV_2018/html/Shubham_Tulsiani_Layer-structured_3D_Scene_ECCV_2018_paper.html) | ECCV | [code](https://github.com/google/layered-scene-inference) | 28 |
| [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](http://openaccess.thecvf.com/content_ECCV_2018/html/Liang-Chieh_Chen_Encoder-Decoder_with_Atrous_ECCV_2018_paper.html) | ECCV | [code](https://github.com/qixuxiang/deeplabv3plus) | 28 |
| [PiCANet: Learning Pixel-Wise Contextual Attention for Saliency Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_PiCANet_Learning_Pixel-Wise_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Ugness/PiCANet-Implementation) | 28 |
| [Learning Rich Features for Image Manipulation Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Learning_Rich_Features_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/LarryJiang134/Image_manipulation_detection) | 27 |
| [Fast Video Object Segmentation by Reference-Guided Mask Propagation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Oh_Fast_Video_Object_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/seoungwugoh/RGMP) | 27 |
| [3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration](http://openaccess.thecvf.com/content_ECCV_2018/html/Zi_Jian_Yew_3DFeat-Net_Weakly_Supervised_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yewzijian/3DFeatNet) | 27 |
| [Who Let the Dogs Out? Modeling Dog Behavior From Visual Data](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ehsani_Who_Let_the_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ehsanik/dogTorch) | 27 |
| [EC-Net: an Edge-aware Point set Consolidation Network](http://openaccess.thecvf.com/content_ECCV_2018/html/Lequan_Yu_EC-Net_an_Edge-aware_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yulequan/EC-Net) | 27 |
| [Interpretable Intuitive Physics Model](http://openaccess.thecvf.com/content_ECCV_2018/html/Tian_Ye_Interpretable_Intuitive_Physics_ECCV_2018_paper.html) | ECCV | [code](https://github.com/tianye95/interpretable-intuitive-physics-model) | 27 |
| [Learning a Discriminative Feature Network for Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_Learning_a_Discriminative_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/lxtGH/dfn_seg) | 26 |
| [Partial Transfer Learning With Selective Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_Partial_Transfer_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/thuml/SAN) | 26 |
| [Cross-Modal Deep Variational Hand Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Spurr_Cross-Modal_Deep_Variational_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/spurra/vae-hands-3d) | 26 |
| [Between-Class Learning for Image Classification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tokozume_Between-Class_Learning_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/mil-tokyo/bc_learning_image) | 26 |
| [AON: Towards Arbitrarily-Oriented Text Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cheng_AON_Towards_Arbitrarily-Oriented_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/huizhang0110/AON) | 26 |
| [Conditional Image-to-Image Translation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lin_Conditional_Image-to-Image_Translation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/znxlwm/pytorch-Conditional-image-to-image-translation) | 25 |
| [Learning Convolutional Networks for Content-Weighted Image Compression](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Learning_Convolutional_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/limuhit/ImageCompression) | 25 |
| [Diversity Regularized Spatiotemporal Attention for Video-Based Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Diversity_Regularized_Spatiotemporal_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ShuangLI59/Diversity-Regularized-Spatiotemporal-Attention) | 25 |
| [Dynamic Multimodal Instance Segmentation Guided by Natural Language Queries](http://openaccess.thecvf.com/content_ECCV_2018/html/Edgar_Margffoy-Tuay_Dynamic_Multimodal_Instance_ECCV_2018_paper.html) | ECCV | [code](https://github.com/BCV-Uniandes/query-objseg) | 25 |
| [CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Batsos_CBMV_A_Coalesced_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kbatsos/CBMV) | 25 |
| [Deep Texture Manifold for Ground Terrain Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xue_Deep_Texture_Manifold_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jiaxue1993/Deep-Encoding-Pooling-Network-DEP-) | 25 |
| [Audio-Visual Event Localization in Unconstrained Videos](http://openaccess.thecvf.com/content_ECCV_2018/html/Yapeng_Tian_Audio-Visual_Event_Localization_ECCV_2018_paper.html) | ECCV | [code](https://github.com/YapengTian/AVE-ECCV18) | 25 |
| [First Order Generative Adversarial Networks](http://proceedings.mlr.press/v80/seward18a.html) | ICML | [code](https://github.com/zalandoresearch/first_order_gan) | 25 |
| [Visual Coreference Resolution in Visual Dialog using Neural Module Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Satwik_Kottur_Visual_Coreference_Resolution_ECCV_2018_paper.html) | ECCV | [code](https://github.com/facebookresearch/corefnmn) | 25 |
| [SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Faraone_SYQ_Learning_Symmetric_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/julianfaraone/SYQ) | 24 |
| [Deep Reinforcement Learning of Marked Temporal Point Processes](http://arxiv.org/abs/1805.09360v1) | NIPS | [code](https://github.com/Networks-Learning/tpprl) | 24 |
| [Explicit Inductive Bias for Transfer Learning with Convolutional Networks](http://proceedings.mlr.press/v80/li18a.html) | ICML | [code](https://github.com/holyseven/TransferLearningClassification) | 24 |
| [LEGO: Learning Edge With Geometry All at Once by Watching Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yang_LEGO_Learning_Edge_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhenheny/LEGO) | 24 |
| [Verisimilar Image Synthesis for Accurate Detection and Recognition of Texts in Scenes](http://openaccess.thecvf.com/content_ECCV_2018/html/Fangneng_Zhan_Verisimilar_Image_Synthesis_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fnzhan/Verisimilar-Image-Synthesis-for-Accurate-Detection-and-Recognition-of-Texts-in-Scenes) | 24 |
| [Multi-Agent Diverse Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ghosh_Multi-Agent_Diverse_Generative_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/arnabgho/MADGAN) | 23 |
| [Face Aging With Identity-Preserved Conditional Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Face_Aging_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/dawei6875797/Face-Aging-with-Identity-Preserved-Conditional-Generative-Adversarial-Networks) | 23 |
| [Learning to Separate Object Sounds by Watching Unlabeled Video](http://openaccess.thecvf.com/content_ECCV_2018/html/Ruohan_Gao_Learning_to_Separate_ECCV_2018_paper.html) | ECCV | [code](https://github.com/rhgao/separating-object-sounds) | 23 |
| [Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search](http://proceedings.mlr.press/v80/suganuma18a.html) | ICML | [code](https://github.com/sg-nm/Evolutionary-Autoencoders) | 23 |
| [To Trust Or Not To Trust A Classifier](http://arxiv.org/abs/1805.11783v1) | NIPS | [code](https://github.com/google/TrustScore) | 23 |
| [Im2Flow: Motion Hallucination From Static Images for Action Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gao_Im2Flow_Motion_Hallucination_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/rhgao/Im2Flow) | 22 |
| [ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jianzhangcs/ISTA-Net) | 22 |
| [Hallucinated-IQA: No-Reference Image Quality Assessment via Adversarial Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lin_Hallucinated-IQA_No-Reference_Image_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kwanyeelin/HIQA) | 22 |
| [Anonymous Walk Embeddings](http://proceedings.mlr.press/v80/ivanov18a.html) | ICML | [code](https://github.com/nd7141/AWE) | 22 |
| [Learning to Multitask](http://arxiv.org/abs/1805.07541v1) | NIPS | [code](https://github.com/jfutoma/MGP-RNN) | 22 |
| [CondenseNet: An Efficient DenseNet Using Learned Group Convolutions](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_CondenseNet_An_Efficient_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/markdtw/condensenet-tensorflow) | 22 |
| [HashGAN: Deep Learning to Hash With Pair Conditional Wasserstein GAN](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_HashGAN_Deep_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/thuml/HashGAN) | 22 |
| [Hierarchical Relational Networks for Group Activity Recognition and Retrieval](http://openaccess.thecvf.com/content_ECCV_2018/html/Mostafa_Ibrahim_Hierarchical_Relational_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/mostafa-saad/hierarchical-relational-network) | 22 |
| [Collaborative and Adversarial Network for Unsupervised Domain Adaptation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Collaborative_and_Adversarial_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/mahfuj9346449/iCAN) | 22 |
| [Geometry-Aware Scene Text Detection With Instance Transformation Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Geometry-Aware_Scene_Text_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zlmzju/itn) | 22 |
| [Learning to Promote Saliency Detectors](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zeng_Learning_to_Promote_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zengxianyu/lps) | 21 |
| [CSGNet: Neural Shape Parser for Constructive Solid Geometry](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sharma_CSGNet_Neural_Shape_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Hippogriff/CSGNet) | 21 |
| [Local Spectral Graph Convolution for Point Set Feature Learning](http://openaccess.thecvf.com/content_ECCV_2018/html/Chu_Wang_Local_Spectral_Graph_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fate3439/LocalSpecGCN) | 21 |
| [HiDDeN: Hiding Data with Deep Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Jiren_Zhu_HiDDeN_Hiding_Data_ECCV_2018_paper.html) | ECCV | [code](https://github.com/jirenz/HiDDeN) | 21 |
| [GraphBit: Bitwise Interaction Mining via Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Duan_GraphBit_Bitwise_Interaction_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/duanyq14/GraphBit) | 20 |
| [Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Stacked_Conditional_Generative_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/DeepInsight-PCALab/ST-CGAN) | 20 |
| [Fully-Convolutional Point Networks for Large-Scale Point Clouds](http://openaccess.thecvf.com/content_ECCV_2018/html/Dario_Rethage_Fully-Convolutional_Point_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/drethage/fully-convolutional-point-network) | 20 |
| [Learning Superpixels With Segmentation-Aware Affinity Loss](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tu_Learning_Superpixels_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/wctu/SEAL) | 20 |
| [Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Zero-Shot_Visual_Recognition_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zjuchenlong/sp-aen.cvpr18) | 20 |
| [Crowd Counting With Deep Negative Correlation Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shi_Crowd_Counting_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/shizenglin/Deep-NCL) | 20 |
| [Dimensionality-Driven Learning with Noisy Labels](http://proceedings.mlr.press/v80/ma18d.html) | ICML | [code](https://github.com/xingjunm/dimensionality-driven-learning) | 20 |
| [Objects that Sound](http://openaccess.thecvf.com/content_ECCV_2018/html/Relja_Arandjelovic_Objects_that_Sound_ECCV_2018_paper.html) | ECCV | [code](https://github.com/rohitrango/objects-that-sound) | 20 |
| [Deep Expander Networks: Efficient Deep Networks from Graph Theory](http://openaccess.thecvf.com/content_ECCV_2018/html/Ameya_Prabhu_Deep_Expander_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/DrImpossible/Deep-Expander-Networks) | 19 |
| [Low-Shot Learning With Large-Scale Diffusion](http://openaccess.thecvf.com/content_cvpr_2018/papers/Douze_Low-Shot_Learning_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/facebookresearch/low-shot-with-diffusion) | 19 |
| [Low-Shot Learning With Imprinted Weights](http://openaccess.thecvf.com/content_cvpr_2018/papers/Qi_Low-Shot_Learning_With_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/YU1ut/imprinted-weights) | 19 |
| [Cross-Domain Self-Supervised Multi-Task Feature Learning Using Synthetic Imagery](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ren_Cross-Domain_Self-Supervised_Multi-Task_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jason718/game-feature-learning) | 19 |
| [Learning Descriptor Networks for 3D Shape Synthesis and Analysis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xie_Learning_Descriptor_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jianwen-xie/3DDescriptorNet) | 19 |
| [Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders](http://openaccess.thecvf.com/content_ECCV_2018/html/Ananya_Harsh_Jha_Disentangling_Factors_of_ECCV_2018_paper.html) | ECCV | [code](https://github.com/ananyahjha93/cycle-consistent-vae) | 19 |
| [CTAP: Complementary Temporal Action Proposal Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Jiyang_Gao_CTAP_Complementary_Temporal_ECCV_2018_paper.html) | ECCV | [code](https://github.com/jiyanggao/CTAP) | 18 |
| [DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors](http://arxiv.org/abs/1805.07445v3) | NIPS | [code](https://github.com/dojoteef/dvae) | 18 |
| [Conditional Image-Text Embedding Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Bryan_Plummer_Conditional_Image-Text_Embedding_ECCV_2018_paper.html) | ECCV | [code](https://github.com/BryanPlummer/cite) | 18 |
| [EPINET: A Fully-Convolutional Neural Network Using Epipolar Geometry for Depth From Light Field Images](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shin_EPINET_A_Fully-Convolutional_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/chshin10/epinet) | 18 |
| [Glimpse Clouds: Human Activity Recognition From Unstructured Feature Points](http://openaccess.thecvf.com/content_cvpr_2018/papers/Baradel_Glimpse_Clouds_Human_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/fabienbaradel/glimpse_clouds) | 18 |
| [Bayesian Optimization of Combinatorial Structures](http://proceedings.mlr.press/v80/baptista18a.html) | ICML | [code](https://github.com/baptistar/BOCS) | 18 |
| [FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Verma_FeaStNet_Feature-Steered_Graph_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/nitika-verma/FeaStNet) | 18 |
| [Learning Type-Aware Embeddings for Fashion Compatibility](http://openaccess.thecvf.com/content_ECCV_2018/html/Mariya_Vasileva_Learning_Type-Aware_Embeddings_ECCV_2018_paper.html) | ECCV | [code](https://github.com/mvasil/fashion-compatibility) | 17 |
| [Sliced Wasserstein Distance for Learning Gaussian Mixture Models](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kolouri_Sliced_Wasserstein_Distance_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/skolouri/swgmm) | 17 |
| [Revisiting Deep Intrinsic Image Decompositions](http://openaccess.thecvf.com/content_cvpr_2018/papers/Fan_Revisiting_Deep_Intrinsic_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/fqnchina/IntrinsicImage) | 17 |
| [A Spectral Approach to Gradient Estimation for Implicit Distributions](http://proceedings.mlr.press/v80/shi18a.html) | ICML | [code](https://github.com/thjashin/spectral-stein-grad) | 17 |
| [Hierarchical Novelty Detection for Visual Object Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lee_Hierarchical_Novelty_Detection_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kibok90/cvpr2018-hnd) | 17 |
| [Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies](http://openaccess.thecvf.com/content_cvpr_2018/papers/Joo_Total_Capture_A_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Myzhencai/Total-Capture) | 17 |
| [Learning Generative ConvNets via Multi-Grid Modeling and Sampling](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gao_Learning_Generative_ConvNets_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/ruiqigao/Multigrid_learning) | 17 |
| [Learning 3D Shape Completion From Laser Scan Data With Weak Supervision](http://openaccess.thecvf.com/content_cvpr_2018/papers/Stutz_Learning_3D_Shape_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/davidstutz/cvpr2018-shape-completion) | 17 |
| [Triplet Loss in Siamese Network for Object Tracking](http://openaccess.thecvf.com/content_ECCV_2018/html/Xingping_Dong_Triplet_Loss_with_ECCV_2018_paper.html) | ECCV | [code](https://github.com/shenjianbing/TripletTracking) | 17 |
| [Adversarial Attack on Graph Structured Data](http://proceedings.mlr.press/v80/dai18b.html) | ICML | [code](https://github.com/Hanjun-Dai/graph_adversarial_attack) | 17 |
| [Arbitrary Style Transfer With Deep Feature Reshuffle](http://openaccess.thecvf.com/content_cvpr_2018/papers/Gu_Arbitrary_Style_Transfer_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/msracver/Style-Feature-Reshuffle) | 17 |
| [Visual Question Reasoning on General Dependency Tree](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_Visual_Question_Reasoning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/bezorro/ACMN-Pytorch) | 17 |
| [Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition](http://openaccess.thecvf.com/content_ECCV_2018/html/Huang_Predicting_Gaze_in_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hyf015/egocentric-gaze-prediction) | 16 |
| [Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks](https://arxiv.org/abs/1802.04034) | NIPS | [code](https://github.com/ytsmiling/lmt) | 16 |
| [Coded Sparse Matrix Multiplication](http://proceedings.mlr.press/v80/wang18e.html) | ICML | [code](https://github.com/ksopyla/CudaDotProd) | 16 |
| [Weakly-Supervised Action Segmentation With Iterative Soft Boundary Assignment](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ding_Weakly-Supervised_Action_Segmentation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Zephyr-D/TCFPN-ISBA) | 16 |
| [Recovering 3D Planes from a Single Image via Convolutional Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Fengting_Yang_Recovering_3D_Planes_ECCV_2018_paper.html) | ECCV | [code](https://github.com/fuy34/planerecover) | 16 |
| [SegStereo: Exploiting Semantic Information for Disparity Estimation](http://openaccess.thecvf.com/content_ECCV_2018/html/Guorun_Yang_SegStereo_Exploiting_Semantic_ECCV_2018_paper.html) | ECCV | [code](https://github.com/yangguorun/SegStereo) | 16 |
| [Functional Gradient Boosting based on Residual Network Perception](http://proceedings.mlr.press/v80/nitanda18a.html) | ICML | [code](https://github.com/anitan0925/ResFGB) | 16 |
| [NAG: Network for Adversary Generation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Mopuri_NAG_Network_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/val-iisc/nag) | 16 |
| [Generative Probabilistic Novelty Detection with Adversarial Autoencoders](http://arxiv.org/abs/1807.02588v1) | NIPS | [code](https://github.com/podgorskiy/GPND) | 16 |
| [Hashing as Tie-Aware Learning to Rank](http://openaccess.thecvf.com/content_cvpr_2018/papers/He_Hashing_as_Tie-Aware_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kunhe/TALR) | 15 |
| [Pose Proposal Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Sekii_Pose_Proposal_Networks_ECCV_2018_paper.html) | ECCV | [code](https://github.com/salihkaragoz/MultiPerson-pose-estimation) | 15 |
| [Convolutional Sequence to Sequence Model for Human Dynamics](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Convolutional_Sequence_to_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/chaneyddtt/Convolutional-Sequence-to-Sequence-Model-for-Human-Dynamics) | 15 |
| [Joint Pose and Expression Modeling for Facial Expression Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Joint_Pose_and_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/FFZhang1231/Facial-expression-recognition) | 15 |
| [Grounding Referring Expressions in Images by Variational Context](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Grounding_Referring_Expressions_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yuleiniu/vc) | 15 |
| [Rethinking the Form of Latent States in Image Captioning](http://openaccess.thecvf.com/content_ECCV_2018/html/Bo_Dai_Rethinking_the_Form_ECCV_2018_paper.html) | ECCV | [code](https://github.com/doubledaibo/2dcaption_eccv2018) | 15 |
| [Open Set Domain Adaptation by Backpropagation](http://openaccess.thecvf.com/content_ECCV_2018/html/Kuniaki_Saito_Adversarial_Open_Set_ECCV_2018_paper.html) | ECCV | [code](https://github.com/YU1ut/openset-DA) | 15 |
| [Neural Sign Language Translation](http://openaccess.thecvf.com/content_cvpr_2018/papers/Camgoz_Neural_Sign_Language_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/neccam/nslt) | 15 |
| [SpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters](http://openaccess.thecvf.com/content_ECCV_2018/html/Yifan_Xu_SpiderCNN_Deep_Learning_ECCV_2018_paper.html) | ECCV | [code](https://github.com/xyf513/SpiderCNN) | 15 |
| [Efficient Neural Audio Synthesis](http://proceedings.mlr.press/v80/kalchbrenner18a.html) | ICML | [code](https://github.com/fedden/TensorFlow-Efficient-Neural-Audio-Synthesis) | 15 |
| [Deep Learning Under Privileged Information Using Heteroscedastic Dropout](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lambert_Deep_Learning_Under_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/johnwlambert/dlupi-heteroscedastic-dropout) | 14 |
| [Image Transformer](http://proceedings.mlr.press/v80/parmar18a.html) | ICML | [code](https://github.com/ssingal05/ImageTransformer) | 14 |
| [Learning to Understand Image Blur](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Learning_to_Understand_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Lotuslisa/Understand_Image_Blur) | 14 |
| [Learning and Using the Arrow of Time](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wei_Learning_and_Using_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/donglaiw/AoT_TCAM) | 14 |
| [Action Sets: Weakly Supervised Action Segmentation Without Ordering Constraints](http://openaccess.thecvf.com/content_cvpr_2018/papers/Richard_Action_Sets_Weakly_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/alexanderrichard/action-sets) | 14 |
| [Learning to Forecast and Refine Residual Motion for Image-to-Video Generation](http://openaccess.thecvf.com/content_ECCV_2018/html/Long_Zhao_Learning_to_Forecast_ECCV_2018_paper.html) | ECCV | [code](https://github.com/garyzhao/FRGAN) | 14 |
| [Multi-Scale Weighted Nuclear Norm Image Restoration](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yair_Multi-Scale_Weighted_Nuclear_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/noamyairTC/MSWNNM) | 14 |
| [Synthesizing Robust Adversarial Examples](http://proceedings.mlr.press/v80/athalye18b.html) | ICML | [code](https://github.com/prabhant/synthesizing-robust-adversarial-examples) | 13 |
| [Fine-Grained Visual Categorization using Meta-Learning Optimization with Sample Selection of Auxiliary Data](http://openaccess.thecvf.com/content_ECCV_2018/html/Yabin_Zhang_Fine-Grained_Visual_Categorization_ECCV_2018_paper.html) | ECCV | [code](https://github.com/YabinZhang1994/MetaFGNet) | 13 |
| [Assessing Generative Models via Precision and Recall](http://arxiv.org/abs/1806.00035v1) | NIPS | [code](https://github.com/msmsajjadi/precision-recall-distributions) | 13 |
| [Deep Diffeomorphic Transformer Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Detlefsen_Deep_Diffeomorphic_Transformer_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/SkafteNicki/ddtn) | 13 |
| [Learning by Asking Questions](http://openaccess.thecvf.com/content_cvpr_2018/papers/Misra_Learning_by_Asking_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yanghoonkim/question_generation) | 13 |
| [Towards Human-Machine Cooperation: Self-Supervised Sample Mining for Object Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Towards_Human-Machine_Cooperation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/yanxp/SSM) | 13 |
| [Variational Autoencoders for Deforming 3D Mesh Models](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tan_Variational_Autoencoders_for_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/aldehydecho/Mesh-VAE) | 13 |
| [Min-Entropy Latent Model for Weakly Supervised Object Detection](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wan_Min-Entropy_Latent_Model_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Winfrand/MELM) | 13 |
| [Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering](http://openaccess.thecvf.com/content_cvpr_2018/papers/Anderson_Bottom-Up_and_Top-Down_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/Wentong-DST/up-down-captioner) | 13 |
| [Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace](http://proceedings.mlr.press/v80/lee18a.html) | ICML | [code](https://github.com/yoonholee/MT-net) | 13 |
| [Learning a Discriminative Filter Bank Within a CNN for Fine-Grained Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_a_Discriminative_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hubeihubei/DFL-CNN-pytorch) | 13 |
| [Finding Influential Training Samples for Gradient Boosted Decision Trees](http://proceedings.mlr.press/v80/sharchilev18a.html) | ICML | [code](https://github.com/bsharchilev/influence_boosting) | 13 |
| [Gesture Recognition: Focus on the Hands](http://openaccess.thecvf.com/content_cvpr_2018/papers/Narayana_Gesture_Recognition_Focus_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/beckabec/HandDetection) | 12 |
| [Cross-View Image Synthesis Using Conditional GANs](http://openaccess.thecvf.com/content_cvpr_2018/papers/Regmi_Cross-View_Image_Synthesis_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/kregmi/cross-view-image-synthesis) | 12 |
| [Joint Optimization Framework for Learning With Noisy Labels](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tanaka_Joint_Optimization_Framework_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/DaikiTanaka-UT/JointOptimization) | 12 |
| [Future Person Localization in First-Person Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yagi_Future_Person_Localization_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/takumayagi/fpl) | 12 |
| [AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos](http://openaccess.thecvf.com/content_ECCV_2018/html/Zheng_Shou_AutoLoc_Weakly-supervised_Temporal_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zhengshou/AutoLoc) | 12 |
| [Learning Transferable Architectures for Scalable Image Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zoph_Learning_Transferable_Architectures_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/aussetg/nasnet.pytorch) | 12 |
| [Clipped Action Policy Gradient](http://proceedings.mlr.press/v80/fujita18a.html) | ICML | [code](https://github.com/pfnet-research/capg) | 12 |
| [Mix and Match Networks: Encoder-Decoder Alignment for Zero-Pair Image
Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Mix_and_Match_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyaxingwang\u002FMix-and-match-networks) | 12 | \n| [Decouple Learning for Parameterized Image Operators](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FQingnan_Fan_Learning_to_Learn_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Ffqnchina\u002FDecoupleLearning) | 12 | \n| [Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data for Future Prediction](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fqi18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FSiyuanQi\u002Fgeneralized-earley-parser) | 12 | \n| [Adaptive Skip Intervals: Temporal Abstraction for Recurrent Dynamical Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.04768) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fneitzal\u002Fadaptive-skip-intervals) | 12 | \n| [AMNet: Memorability Estimation With Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FFajtl_AMNet_Memorability_Estimation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fok1zjf\u002FAMNet) | 12 | \n| [Adversarial Time-to-Event Modeling](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fchapfuwa18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fpaidamoyo\u002Fadversarial_time_to_event) | 12 | \n| [Reversible Recurrent Neural Networks](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fgan3sh500\u002Frevrnn) | 12 | \n| [Human Pose Estimation With Parsing Induced Learner](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNie_Human_Pose_Estimation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FNieXC\u002Fpytorch-pil) | 11 | \n| [ShapeStacks: Learning Vision-Based Physical Intuition for Generalised Object 
Stacking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FOliver_Groth_ShapeStacks_Learning_Vision-Based_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fogroth\u002Fshapestacks) | 11 | \n| [A Joint Sequence Fusion Model for Video Question Answering and Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYoungjae_Yu_A_Joint_Sequence_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fyj-yu\u002Flsmdc) | 11 | \n| [Learning Face Age Progression: A Pyramid Architecture of GANs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FYang_Learning_Face_Age_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fajithvallabai\u002FPyramid-Architecture-of-GANs) | 11 | \n| [Robust Physical-World Attacks on Deep Learning Visual Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FEykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fevtimovi\u002Frobust_physical_perturbations) | 11 | \n| [High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fpearce18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FTeaPearce\u002FDeep_Learning_Prediction_Intervals) | 11 | \n| [Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Famit18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fron-amit\u002Fmeta-learning-adjusting-priors) | 11 | \n| [Multimodal Explanations: Justifying Decisions and Pointing to the Evidence](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPark_Multimodal_Explanations_Justifying_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FSeth-Park\u002FMultimodalExplanations) | 11 | \n| [Accelerating Natural 
Gradient with Higher-Order Invariance](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fsong18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fhigher_order_invariance) | 11 | \n| [Hierarchical Multi-Label Classification Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fwehrmann18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fomoju\u002FreceiptdID) | 11 | \n| [Convolutional Image Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAneja_Convolutional_Image_Captioning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FcaptionGeneration.torch) | 11 | \n| [Boosting Domain Adaptation by Discovering Latent Domains](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMancini_Boosting_Domain_Adaptation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmancinimassimiliano\u002Flatent_domains_DA) | 11 | \n| [Logo Synthesis and Manipulation With Clustered Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSage_Logo_Synthesis_and_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falex-sage\u002Flogo-gen) | 10 | \n| [PacGAN: The power of two samples in generative adversarial networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1712.04086v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffjxmlzn\u002FPacGAN) | 10 | \n| [Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLong_Attention_Clusters_Purely_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpomonam\u002FAttentionCluster) | 10 | \n| [End-to-End Incremental Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FFrancisco_M._Castro_End-to-End_Incremental_Learning_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002Ffmcp\u002FEndToEndIncrementalLearning) | 10 | \n| [Multi-Oriented Scene Text Detection via Corner Localization and Region Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLyu_Multi-Oriented_Scene_Text_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJK-Rao\u002FCorner_Segmentation_TextDetection) | 10 | \n| [On GANs and GMMs](http:\u002F\u002Farxiv.org\u002Fabs\u002F1805.12462v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Feitanrich\u002Fgans-n-gmms) | 10 | \n| [Salient Object Detection Driven by Fixation Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_Salient_Object_Detection_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwenguanwang\u002FASNet) | 9 | \n| [Semantic Video Segmentation by Gated Recurrent Flow Propagation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNilsson_Semantic_Video_Segmentation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FD-Nilsson\u002FGRFP) | 9 | \n| [Constraint-Aware Deep Neural Network Compression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FChangan_Chen_Constraints_Matter_in_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FChanganVR\u002FConstraintAwareCompression) | 9 | \n| [Statistically-motivated Second-order Pooling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FKaicheng_Yu_Statistically-motivated_Second-order_Pooling_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fkcyu2014\u002Fsmsop) | 9 | \n| [Excitation Backprop for RNNs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBargal_Excitation_Backprop_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsbargal\u002FCaffe-ExcitationBP-RNNs) | 9 | \n| [Analyzing Uncertainty in Neural 
Machine Translation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fott18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fanalyzing-uncertainty-nmt) | 9 | \n| [Learning Dynamics of Linear Denoising Autoencoders](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fpretorius18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Farnupretorius\u002Flindaedynamics_icml2018) | 9 | \n| [Saliency Detection in 360° Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FZiheng_Zhang_Saliency_Detection_in_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fxuyanyu-shh\u002FSaliency-detection-in-360-video) | 9 | \n| [Density Adaptive Point Set Registration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLawin_Density_Adaptive_Point_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffelja633\u002FDARE) | 9 | \n| [Decoupled Parallel Backpropagation with Convergence Guarantee](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fhuo18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fslowbull\u002FDDG) | 9 | \n| [Classification from Pairwise Similarity and Unlabeled Data](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fbao18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flevelfour\u002FSU_Classification) | 9 | \n| [oi-VAE: Output Interpretable VAEs for Nonlinear Group Factor Analysis](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fainsworth18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fsamuela\u002Foi-vae) | 9 | \n| [Modeling Sparse Deviations for Compressed Sensing using Generative Models](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fdhar18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Fsparse_gen) | 9 | \n| [Pixels, Voxels, and Views: A Study of Shape Representations for Single View 3D Object Shape 
Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FShin_Pixels_Voxels_and_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdaeyun\u002Fobject-shapes-cvpr18) | 9 | \n| [Towards Open-Set Identity Preserving Face Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBao_Towards_Open-Set_Identity_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchloeguoqing\u002FTowards-Open-Set-Identity-Preserving-Face-Synthesis) | 9 | \n| [Five-Point Fundamental Matrix Estimation for Uncalibrated Cameras](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FBarath_Five-Point_Fundamental_Matrix_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanini\u002Ffive-point-fundamental) | 8 | \n| [BourGAN: Generative Networks with Metric Embeddings](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.07674) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fa554b554\u002FBourGAN) | 8 | \n| [Fast Information-theoretic Bayesian Optimisation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fru18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Frubinxin\u002FFITBO) | 8 | \n| [Deep Variational Reinforcement Learning for POMDPs](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Figl18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Foxwhirl\u002FDeep-Variational-Reinforcement-Learning) | 8 | \n| [Specular-to-Diffuse Translation for Multi-View Reconstruction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FShihao_Wu_Specular-to-Diffuse_Translation_for_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fwsh312\u002FS2Dnet) | 8 | \n| [Dynamic Conditional Networks for Few-Shot Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FFang_Zhao_Dynamic_Conditional_Networks_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002FZhaoJ9014\u002FDynamic-Conditional-Networks-for-Few-Shot-Learning.pytorch) | 8 | \n| [Learning Facial Action Units From Web Images With Scalable Weakly Supervised Clustering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhao_Learning_Facial_Action_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzkl20061823\u002FWSC) | 8 | \n| [High-Resolution Image Synthesis and Semantic Manipulation With Conditional GANs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FWang_High-Resolution_Image_Synthesis_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchenxli\u002FHigh-Resolution-Image-Synthesis-and-Semantic-Manipulation-with-Conditional-GANsl-) | 8 | \n| [Deep Defense: Training DNNs with Improved Adversarial Robustness](http:\u002F\u002Farxiv.org\u002Fabs\u002F1803.00404v2) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FZiangYan\u002Fdeepdefense.pytorch) | 8 | \n| [Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fchen18g.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fchentingpc\u002Fkdcode-lm) | 8 | \n| [Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-based Modeling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FHiroaki_Santo_Light_Structure_from_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fhiroaki-santo\u002Flight-structure-from-pin-motion) | 7 | \n| [Non-metric Similarity Graphs for Maximum Inner Product Search](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fstanis-morozov\u002Fip-nsw) | 7 | \n| [Towards Realistic Predictors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FPei_Wang_Towards_Realistic_Predictors_ECCV_2018_paper.html) | ECCV | 
[code](https:\u002F\u002Fgithub.com\u002Fpeiwang062\u002Ftowards-realistic-predictors) | 7 | \n| [Deep Non-Blind Deconvolution via Generalized Low-Rank Approximation](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frwenqi\u002FNBD-GLRA) | 7 | \n| [Don’t Just Assume  Look and Answer: Overcoming Priors for Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FAgrawal_Dont_Just_Assume_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FAishwaryaAgrawal\u002FGVQA) | 7 | \n| [Learning Dual Convolutional Neural Networks for Low-Level Vision](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPan_Learning_Dual_Convolutional_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgalad-loth\u002FDualCNN-TF) | 7 | \n| [The Mirage of Action-Dependent Baselines in Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Ftucker18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbrain-research\u002Fmirage-rl) | 7 | \n| [DVQA: Understanding Data Visualizations via Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKafle_DVQA_Understanding_Data_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkushalkafle\u002FDVQA_dataset) | 7 | \n| [A Two-Step Disentanglement Method](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHadad_A_Two-Step_Disentanglement_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnaamahadad\u002FA-Two-Step-Disentanglement-Method) | 7 | \n| [Detecting and Correcting for Label Shift with Black Box Predictors](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flipton18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Flabel-shift) | 7 | \n| [Conditional Prior Networks for Optical 
Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FYanchao_Yang_Conditional_Prior_Networks_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FYanchaoYang\u002FConditional-Prior-Networks) | 7 | \n| [Generative Adversarial Learning Towards Fast Weakly Supervised Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FShen_Generative_Adversarial_Learning_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fshenyunhang\u002FGAL-fWSD) | 7 | \n| [Adversarial Learning with Local Coordinate Coding](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fcao18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fguoyongcs\u002FLCCGAN) | 7 | \n| [Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FKuen_Stochastic_Downsampling_for_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fxternalz\u002FSDPoint) | 7 | \n| [AttnGAN: Fine-Grained Text to Image Generation With Attentional Generative Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FXu_AttnGAN_Fine-Grained_Text_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FWentong-DST\u002Fattn-gan) | 7 | \n| [Learning to Explain: An Information-Theoretic Perspective on Model Interpretation](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fchen18j.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fnickvosk\u002Facl2015-dataset-learning-to-explain-entity-relationships) | 7 | \n| [Banach Wasserstein GAN](http:\u002F\u002Farxiv.org\u002Fabs\u002F1806.06621v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fadler-j\u002Fbwgan) | 7 | \n| [Gradually Updated Neural Networks for Large-Scale Image Recognition](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fqiao18b.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002Fjoe-siyuan-qiao\u002FGUNN) | 7 | \n| [Learning Steady-States of Iterative Algorithms over Graphs](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fdai18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FHanjun-Dai\u002Fsteady_state_embedding) | 7 | \n| [Progressive Attention Guided Recurrent Network for Salient Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Progressive_Attention_Guided_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhangxiaoning666\u002FPAGR) | 7 | \n| [Zoom and Learn: Generalizing Deep Stereo Matching to Novel Domains](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FPang_Zoom_and_Learn_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FArtifineuro\u002Fzole) | 6 | \n| [Unsupervised holistic image generation from key local patches](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FDonghoon_Lee_Unsupervised_holistic_image_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fhellbell\u002FKeyPatchGan) | 6 | \n| [Inner Space Preserving Generative Pose Machine](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FShuangjun_Liu_Inner_Space_Preserving_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fostadabbas\u002Fisp-gpm) | 6 | \n| [Bilevel Programming for Hyperparameter Optimization and Meta-Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Ffranceschi18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fprolearner\u002Fhyper-representation) | 6 | \n| [Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSun_Optical_Flow_Guided_CVPR_2018_paper.pdf) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fkitsune999\u002FOptical-Flow-Guided-Feature) | 6 | \n| [Breaking the Activation Function Bottleneck through Adaptive Parameterization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.08574) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fflennerhag\u002Falstm) | 6 | \n| [Ultra Large-Scale Feature Selection using Count-Sketches](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Faghazadeh18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Frdspring1\u002FMISSION) | 6 | \n| [Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhang_Dynamic_Scene_Deblurring_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhjwustc\u002Fcvpr18_rnn_deblur_matcaffe) | 6 | \n| [Orthogonally Decoupled Variational Gaussian Processes](http:\u002F\u002Farxiv.org\u002Fabs\u002F1809.08820v1) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fhughsalimbeni\u002Forth_decoupled_var_gps) | 6 | \n| [Batch Bayesian Optimization via Multi-objective Acquisition Ensemble for Automated Analog Circuit Design](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flyu18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FAlaya-in-Matrix\u002FMACE) | 6 | \n| [A Modulation Module for Multi-task Learning with Applications in Image Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXiangyun_Zhao_A_Modulation_Module_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002FZhaoxiangyun\u002FMulti-Task-Modulation-Module) | 6 | \n| [A Memory Network Approach for Story-Based Temporal Summarization of 360° Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLee_A_Memory_Network_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsangho-vision\u002FPFMN) | 6 | \n| [Towards Effective Low-Bitwidth Convolutional Neural 
Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhuang_Towards_Effective_Low-Bitwidth_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnowgood\u002FQuantizeCNNModel) | 5 | \n| [Disentangling Factors of Variation by Mixing Them](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FHu_Disentangling_Factors_of_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHuQyang\u002FDisentangling-Factors-of-Variation-by-Mixing-Them) | 5 | \n| [Weakly-supervised Video Summarization using Variational Encoder-Decoder and Web Prior](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSijia_Cai_Weakly-supervised_Video_Summarization_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fcssjcai\u002Fvesd) | 5 | \n| [Learning Longer-term Dependencies in RNNs with Auxiliary Losses](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Ftrinh18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbelepi93\u002Frnn-auxiliary-loss) | 5 | \n| [Contour Knowledge Transfer for Salient Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FXin_Li_Contour_Knowledge_Transfer_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Flixin666\u002FC2SNet) | 5 | \n| [HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FThomas_Robert_HybridNet_Classification_and_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fdakshitagrawal97\u002FHybridNet) | 5 | \n| [Sidekick Policy Learning for Active Visual Exploration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FSanthosh_Kumar_Ramakrishnan_Sidekick_Policy_Learning_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsrama2512\u002Fsidekicks) | 5 | \n| [Learning to Localize 
Sound Source in Visual Scenes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSenocak_Learning_to_Localize_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fardasnck\u002Flearning_to_localize_sound) | 5 | \n| [Neural Architecture Optimization](http:\u002F\u002Farxiv.org\u002Fabs\u002F1808.07233v3) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fdicarlolab\u002Farchconvnets) | 5 | \n| [COLA: Decentralized Linear Learning](nan) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fepfml\u002Fcola) | 5 | \n| [Diverse and Coherent Paragraph Generation from Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FMoitreya_Chatterjee_Diverse_and_Coherent_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fmetro-smiles\u002FCapG_RevG_Code) | 5 | \n| [DRACO: Byzantine-resilient Distributed Training via Redundant Gradients](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fchen18l.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fhwang595\u002FDraco) | 5 | \n| [Inter and Intra Topic Structure Learning with Word Embeddings](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fzhao18a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fethanhezhao\u002FWEDTM) | 5 | \n| [Estimating the Success of Unsupervised Image to Image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fhtml\u002FLior_Wolf_Estimating_the_Success_ECCV_2018_paper.html) | ECCV | [code](https:\u002F\u002Fgithub.com\u002Fsagiebenaim\u002Fgan_bound) | 5 | \n| [Dynamic-Structured Semantic Propagation Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLiang_Dynamic-Structured_Semantic_Propagation_CVPR_2018_paper.pdf) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flimberc\u002FDSSPN) | 5 | \n| [The Description Length of Deep Learning models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.07044) | NIPS | 
[code](https://github.com/leonardblier/descriptionlengthdeeplearning) | 5 |
| [Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving](http://openaccess.thecvf.com/content_ECCV_2018/html/Peiliang_LI_Stereo_Vision-based_Semantic_ECCV_2018_paper.html) | ECCV | [code](https://github.com/zhanghanduo/stereo_semantic_mapping) | 5 |
| [Blind Justice: Fairness with Encrypted Sensitive Attributes](http://proceedings.mlr.press/v80/kilbertus18a.html) | ICML | [code](https://github.com/nikikilbertus/blind-justice) | 5 |
| [Transfer Learning via Learning to Transfer](http://proceedings.mlr.press/v80/wei18a.html) | ICML | [code](https://github.com/QuebecAI/webcam-transfer-learning-v1) | 5 |
| [Deepcode: Feedback Codes via Deep Learning](http://arxiv.org/abs/1807.00801v1) | NIPS | [code](https://github.com/hyejikim1/Deepcode) | 4 |
| [Configurable Markov Decision Processes](http://proceedings.mlr.press/v80/metelli18a.html) | ICML | [code](https://github.com/albertometelli/Configurable-Markov-Decision-Processes-ICML-2018) | 4 |
| [A Framework for Evaluating 6-DOF Object Trackers](http://openaccess.thecvf.com/content_ECCV_2018/html/Mathieu_Garon_A_Framework_for_ECCV_2018_paper.html) | ECCV | [code](https://github.com/lvsn/6DOF_tracking_evaluation) | 4 |
| [Differentially Private Database Release via Kernel Mean Embeddings](http://proceedings.mlr.press/v80/balog18a.html) | ICML | [code](https://github.com/matejbalog/RKHS-private-database) | 4 |
| [Recognizing Human Actions as the Evolution of Pose Estimation Maps](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Recognizing_Human_Actions_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/nkliuyifang/Skeleton-based-Human-Action-Recognition) | 4 |
| [Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images](http://openaccess.thecvf.com/content_cvpr_2018/papers/Orekondy_Connecting_Pixels_to_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/tribhuvanesh/visual_redactions) | 4 |
| [DeLS-3D: Deep Localization and Segmentation With a 3D Semantic Map](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_DeLS-3D_Deep_Localization_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/pengwangucla/DeLS-3D) | 4 |
| [Geolocation Estimation of Photos using a Hierarchical Model and Scene Classification](http://openaccess.thecvf.com/content_ECCV_2018/html/Eric_Muller-Budack_Geolocation_Estimation_of_ECCV_2018_paper.html) | ECCV | [code](https://github.com/TIBHannover/GeoEstimation) | 4 |
| [Tracking Emerges by Colorizing Videos](http://openaccess.thecvf.com/content_ECCV_2018/html/Carl_Vondrick_Self-supervised_Tracking_by_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Oh-Yoojin/Tracking-Emerges-by-Colorizing-Videos) | 4 |
| [Diverse Conditional Image Generation by Stochastic Regression with Latent Drop-Out Codes](http://openaccess.thecvf.com/content_ECCV_2018/html/Yang_He_Diverse_Conditional_Image_ECCV_2018_paper.html) | ECCV | [code](https://github.com/SSAW14/Image_Generation_with_Latent_Code) | 4 |
| [Inference Suboptimality in Variational Autoencoders](http://proceedings.mlr.press/v80/cremer18a.html) | ICML | [code](https://github.com/lxuechen/inference-suboptimality) | 4 |
| [Black Box FDR](http://proceedings.mlr.press/v80/tansey18a.html) | ICML | [code](https://github.com/tansey/bb-fdr) | 4 |
| [Feedback-Prop: Convolutional Neural Network Inference Under Partial Evidence](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Feedback-Prop_Convolutional_Neural_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/uvavision/feedbackprop) | 4 |
| [Quadrature-based features for kernel approximation](http://arxiv.org/abs/1802.03832v3) | NIPS | [code](https://github.com/quffka/quffka) | 4 |
| [Joint Representation and Truncated Inference Learning for Correlation Filter based Tracking](http://openaccess.thecvf.com/content_ECCV_2018/html/Yingjie_Yao_Joint_Representation_and_ECCV_2018_paper.html) | ECCV | [code](https://github.com/tourmaline612/RTINet) | 4 |
| [Transferable Adversarial Perturbations](http://openaccess.thecvf.com/content_ECCV_2018/html/Bruce_Hou_Transferable_Adversarial_Perturbations_ECCV_2018_paper.html) | ECCV | [code](https://github.com/vinayprabhu/Gainsboro-box-attacks-) | 4 |
| [Single Image Water Hazard Detection using FCN with Reflection Attention Units](http://openaccess.thecvf.com/content_ECCV_2018/html/Xiaofeng_Han_Single_Image_Water_ECCV_2018_paper.html) | ECCV | [code](https://github.com/Cow911/SingleImageWaterHazardDetectionWithRAU) | 4 |
| [Multimodal Generative Models for Scalable Weakly-Supervised Learning](http://arxiv.org/abs/1802.05335v2) | NIPS | [code](https://github.com/mhw32/multimodal-vae-public) | 4 |
| [Importance Weighted Transfer of Samples in Reinforcement Learning](http://proceedings.mlr.press/v80/tirinzoni18a.html) | ICML | [code](https://github.com/AndreaTirinzoni/iw-transfer-rl) | 3 |
| [Feature Generating Networks for Zero-Shot Learning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xian_Feature_Generating_Networks_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/akku1506/Feature-Generating-Networks-for-ZSL) | 3 |
| [DICOD: Distributed Convolutional Coordinate Descent for Convolutional Sparse Coding](http://proceedings.mlr.press/v80/moreau18a.html) | ICML | [code](https://github.com/tomMoral/Dicod) | 3 |
| [CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces](nan) | NIPS | [code](https://github.com/maple-research-lab/CapProNet) | 3 |
| [Bidirectional Retrieval Made Simple](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wehrmann_Bidirectional_Retrieval_Made_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/jwehrmann/chain-vse) | 3 |
| [Multilingual Anchoring: Interactive Topic Modeling and Alignment Across Languages](nan) | NIPS | [code](https://github.com/forest-snow/mtanchor_demo) | 3 |
| [A Hybrid l1-l0 Layer Decomposition Model for Tone Mapping](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liang_A_Hybrid_l1-l0_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/csjunxu/L1L0_TM-CVPR2018) | 3 |
| [Spatially-Adaptive Filter Units for Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tabernik_Spatially-Adaptive_Filter_Units_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/skokec/DAU-ConvNet) | 3 |
| [Learning to Branch](http://proceedings.mlr.press/v80/balcan18a.html) | ICML | [code](https://github.com/StoneyJackson/github-workflow-activity) | 3 |
| [Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives](nan) | NIPS | [code](https://github.com/IBM/Contrastive-Explanation-Method) | 3 |
| [Lifelong Learning via Progressive Distillation and Retrospection](http://openaccess.thecvf.com/content_ECCV_2018/html/Saihui_Hou_Progressive_Lifelong_Learning_ECCV_2018_paper.html) | ECCV | [code](https://github.com/hshustc/ECCV18_Lifelong_Learning) | 3 |
| [CLEAR: Cumulative LEARning for One-Shot One-Class Image Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Kozerawski_CLEAR_Cumulative_LEARning_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/JKozerawski/CLEAR-osoc) | 3 |
| [Not to Cry Wolf: Distantly Supervised Multitask Learning in Critical Care](http://proceedings.mlr.press/v80/schwab18a.html) | ICML | [code](https://github.com/d909b/DSMT-Nets) | 3 |
| [Learning Answer Embeddings for Visual Question Answering](http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Learning_Answer_Embeddings_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/hexiang-hu/answer_embedding) | 3 |
| [Information Constraints on Auto-Encoding Variational Bayes](http://arxiv.org/abs/1805.08672v2) | NIPS | [code](https://github.com/romain-lopez/HCV) | 3 |
| [Parallel Bayesian Network Structure Learning](http://proceedings.mlr.press/v80/gao18b.html) | ICML | [code](https://github.com/bign8/PyStruct) | 3 |
| [Ring Loss: Convex Feature Normalization for Face Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zheng_Ring_Loss_Convex_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/vsatyakumar/Ring-Loss-Keras) | 3 |
| [Teaching Categories to Human Learners With Visual Explanations](http://openaccess.thecvf.com/content_cvpr_2018/papers/Aodha_Teaching_Categories_to_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/macaodha/explain_teach) | 3 |
| [Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization](http://proceedings.mlr.press/v80/zhang18g.html) | ICML | [code](https://github.com/zhangjiong724/spectral-RNN) | 3 |
| [Deep Burst Denoising](http://openaccess.thecvf.com/content_ECCV_2018/html/Clement_Godard_Deep_Burst_Denoising_ECCV_2018_paper.html) | ECCV | [code](https://github.com/mrharicot/deep_burst_denoising) | 3 |
| [Convergent Tree Backup and Retrace with Function Approximation](http://proceedings.mlr.press/v80/touati18a.html) | ICML | [code](https://github.com/ahmed-touati/convergent-off-policy) | 3 |
| [Gaze Prediction in Dynamic 360° Immersive Videos](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Gaze_Prediction_in_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/xuyanyu-shh/VR-EyeTracking) | 3 |
| [Statistical Recurrent Models on Manifold valued Data](http://arxiv.org/abs/1805.11204v1) | NIPS | [code](https://github.com/zhenxingjian/SPD-SRU) | 3 |
| [End-to-End Flow Correlation Tracking With Spatial-Temporal Attention](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhu_End-to-End_Flow_Correlation_CVPR_2018_paper.pdf) | CVPR | [code](https://github.com/zhengzhugithub/FlowTrack) | 3 |

<div align="right">
<b><a href="#----">↥ Back to top</a></b>
</div>

## 2017
| Title | Conf | Code | Stars |
|:--------|:--------:|:--------:|:--------:|
| [Bridging the Gap Between Value and Policy Based Reinforcement Learning](http://papers.nips.cc/paper/6870-bridging-the-gap-between-value-and-policy-based-reinforcement-learning.pdf) | NIPS | [code](https://github.com/tensorflow/models) | 46593 |
| [REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models](http://papers.nips.cc/paper/6856-rebar-low-variance-unbiased-gradient-estimates-for-discrete-latent-variable-models.pdf) | NIPS | [code](https://github.com/tensorflow/models) | 46593 |
| [Focal Loss for Dense Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Lin_Focal_Loss_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/facebookresearch/Detectron) | 18356 |
| [Mask R-CNN](http://openaccess.thecvf.com/content_iccv_2017/html/He_Mask_R-CNN_ICCV_2017_paper.html) | ICCV | [code](https://github.com/matterport/Mask_RCNN) | 9493 |
| [Deep Photo Style Transfer](http://openaccess.thecvf.com/content_cvpr_2017/html/Luan_Deep_Photo_Style_CVPR_2017_paper.html) | CVPR | [code](https://github.com/luanfujun/deep-photo-styletransfer) | 8655 |
| [LightGBM: A Highly Efficient Gradient Boosting Decision Tree](http://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf) | NIPS | [code](https://github.com/Microsoft/LightGBM) | 7536 |
| [Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation](http://papers.nips.cc/paper/7112-scalable-trust-region-method-for-deep-reinforcement-learning-using-kronecker-factored-approximation.pdf) | NIPS | [code](https://github.com/openai/baselines) | 6449 |
| [Attention is All you Need](http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf) | NIPS | [code](https://github.com/tensorflow/tensor2tensor) | 6288 |
| [Large Pose 3D Face Reconstruction From a Single Image via Direct Volumetric CNN Regression](http://openaccess.thecvf.com/content_iccv_2017/html/Jackson_Large_Pose_3D_ICCV_2017_paper.html) | ICCV | [code](https://github.com/AaronJackson/vrn) | 3354 |
| [Densely Connected Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Huang_Densely_Connected_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https://github.com/liuzhuang13/DenseNet) | 3130 |
| [A Unified Approach to Interpreting Model Predictions](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf) | NIPS | [code](https://github.com/slundberg/shap) | 3122 |
| [Deformable Convolutional Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Dai_Deformable_Convolutional_Networks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/msracver/Deformable-ConvNets) | 2165 |
| [ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games](http://papers.nips.cc/paper/6859-elf-an-extensive-lightweight-and-flexible-research-platform-for-real-time-strategy-games.pdf) | NIPS | [code](https://github.com/facebookresearch/ELF) | 1823 |
| [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Qi_PointNet_Deep_Learning_CVPR_2017_paper.html) | CVPR | [code](https://github.com/charlesq34/pointnet) | 1523 |
| [Improved Training of Wasserstein GANs](http://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.pdf) | NIPS | [code](https://github.com/igul222/improved_wgan_training) | 1405 |
| [Fully Convolutional Instance-Aware Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Fully_Convolutional_Instance-Aware_CVPR_2017_paper.html) | CVPR | [code](https://github.com/msracver/FCIS) | 1395 |
| [Aggregated Residual Transformations for Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.html) | CVPR | [code](https://github.com/facebookresearch/ResNeXt) | 1361 |
| [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Ledig_Photo-Realistic_Single_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/tensorlayer/srgan) | 1301 |
| [Unsupervised Image-to-Image Translation Networks](http://papers.nips.cc/paper/6672-unsupervised-image-to-image-translation-networks.pdf) | NIPS | [code](https://github.com/mingyuliutw/unit) | 1205 |
| [Photographic Image Synthesis With Cascaded Refinement Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Chen_Photographic_Image_Synthesis_ICCV_2017_paper.html) | ICCV | [code](https://github.com/CQFIO/PhotographicImageSynthesis) | 1142 |
| [High-Resolution Image Inpainting Using Multi-Scale Neural Patch Synthesis](http://openaccess.thecvf.com/content_cvpr_2017/html/Yang_High-Resolution_Image_Inpainting_CVPR_2017_paper.html) | CVPR | [code](https://github.com/leehomyc/Faster-High-Res-Neural-Inpainting) | 1072 |
| [SphereFace: Deep Hypersphere Embedding for Face Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Liu_SphereFace_Deep_Hypersphere_CVPR_2017_paper.html) | CVPR | [code](https://github.com/wy1iu/sphereface) | 1048 |
| [Deep Feature Flow for Video Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhu_Deep_Feature_Flow_CVPR_2017_paper.html) | CVPR | [code](https://github.com/msracver/Deep-Feature-Flow) | 966 |
| [Bayesian GAN](http://papers.nips.cc/paper/6953-bayesian-gan.pdf) | NIPS | [code](https://github.com/andrewgordonwilson/bayesgan) | 942 |
| [Pyramid Scene Parsing Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hszhao/PSPNet) | 934 |
| [Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes](http://papers.nips.cc/paper/7098-efficient-modeling-of-latent-information-in-supervised-learning-using-gaussian-processes.pdf) | NIPS | [code](https://github.com/SheffieldML/GPy) | 906 |
| [Finding Tiny Faces](http://openaccess.thecvf.com/content_cvpr_2017/html/Hu_Finding_Tiny_Faces_CVPR_2017_paper.html) | CVPR | [code](https://github.com/peiyunh/tiny) | 856 |
| [Toward Multimodal Image-to-Image Translation](http://papers.nips.cc/paper/6650-toward-multimodal-image-to-image-translation.pdf) | NIPS | [code](https://github.com/junyanz/BiCycleGAN) | 794 |
| [Learning to Discover Cross-Domain Relations with Generative Adversarial Networks](http://proceedings.mlr.press/v70/kim17a.html) | ICML | [code](https://github.com/carpedm20/DiscoGAN-pytorch) | 784 |
| [YOLO9000: Better, Faster, Stronger](http://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html) | CVPR | [code](https://github.com/philipperemy/yolo-9000) | 773 |
| [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](http://papers.nips.cc/paper/7095-pointnet-deep-hierarchical-feature-learning-on-point-sets-in-a-metric-space.pdf) | NIPS | [code](https://github.com/charlesq34/pointnet2) | 772 |
| [Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks](http://proceedings.mlr.press/v70/finn17a.html) | ICML | [code](https://github.com/cbfinn/maml) | 729 |
| [FlowNet 2.0: Evolution of Optical Flow Estimation With Deep Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Ilg_FlowNet_2.0_Evolution_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lmb-freiburg/flownet2) | 720 |
| [Channel Pruning for Accelerating Very Deep Neural Networks](http://openaccess.thecvf.com/content_iccv_2017/html/He_Channel_Pruning_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/yihui-he/channel-pruning) | 649 |
| [Dilated Residual Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Yu_Dilated_Residual_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/fyu/drn) | 640 |
| [Inferring and Executing Programs for Visual Reasoning](http://openaccess.thecvf.com/content_iccv_2017/html/Johnson_Inferring_and_Executing_ICCV_2017_paper.html) | ICCV | [code](https://github.com/facebookresearch/clevr-iep) | 636 |
| [DSOD: Learning Deeply Supervised Object Detectors From Scratch](http://openaccess.thecvf.com/content_iccv_2017/html/Shen_DSOD_Learning_Deeply_ICCV_2017_paper.html) | ICCV | [code](https://github.com/szq0214/DSOD) | 582 |
| [Arbitrary Style Transfer in Real-Time With Adaptive Instance Normalization](http://openaccess.thecvf.com/content_iccv_2017/html/Huang_Arbitrary_Style_Transfer_ICCV_2017_paper.html) | ICCV | [code](https://github.com/xunhuang1995/AdaIN-style) | 572 |
| [Accelerating Eulerian Fluid Simulation With Convolutional Networks](http://proceedings.mlr.press/v70/tompson17a.html) | ICML | [code](https://github.com/google/FluidNet) | 570 |
| [Learning Disentangled Representations with Semi-Supervised Deep Generative Models](http://papers.nips.cc/paper/7174-learning-disentangled-representations-with-semi-supervised-deep-generative-models.pdf) | NIPS | [code](https://github.com/probtorch/probtorch) | 556 |
| [Inductive Representation Learning on Large Graphs](http://papers.nips.cc/paper/6703-inductive-representation-learning-on-large-graphs.pdf) | NIPS | [code](https://github.com/williamleif/GraphSAGE) | 552 |
| [Regressing Robust and Discriminative 3D Morphable Models With a Very Deep Neural Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Tran_Regressing_Robust_and_CVPR_2017_paper.html) | CVPR | [code](https://github.com/anhttran/3dmm_cnn) | 537 |
| [How Far Are We From Solving the 2D & 3D Face Alignment Problem? (And a Dataset of 230,000 3D Facial Landmarks)](http://openaccess.thecvf.com/content_iccv_2017/html/Bulat_How_Far_Are_ICCV_2017_paper.html) | ICCV | [code](https://github.com/1adrianb/2D-and-3D-face-alignment) | 526 |
| [SSH: Single Stage Headless Face Detector](http://openaccess.thecvf.com/content_iccv_2017/html/Najibi_SSH_Single_Stage_ICCV_2017_paper.html) | ICCV | [code](https://github.com/mahyarnajibi/SSH) | 515 |
| [Learning From Simulated and Unsupervised Images Through Adversarial Training](http://openaccess.thecvf.com/content_cvpr_2017/html/Shrivastava_Learning_From_Simulated_CVPR_2017_paper.html) | CVPR | [code](https://github.com/carpedm20/simulated-unsupervised-tensorflow) | 492 |
| [Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space](http://openaccess.thecvf.com/content_cvpr_2017/html/Nguyen_Plug__Play_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Evolving-AI-Lab/ppgn) | 487 |
| [Video Frame Interpolation via Adaptive Convolution](http://openaccess.thecvf.com/content_cvpr_2017/html/Niklaus_Video_Frame_Interpolation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/sniklaus/pytorch-sepconv) | 482 |
| [Video Frame Interpolation via Adaptive Separable Convolution](http://openaccess.thecvf.com/content_iccv_2017/html/Niklaus_Video_Frame_Interpolation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/sniklaus/pytorch-sepconv) | 482 |
| [GMS: Grid-based Motion Statistics for Fast, Ultra-Robust Feature Correspondence](http://openaccess.thecvf.com/content_cvpr_2017/html/Bian_GMS_Grid-based_Motion_CVPR_2017_paper.html) | CVPR | [code](https://github.com/JiawangBian/GMS-Feature-Matcher) | 460 |
| [Joint Detection and Identification Feature Learning for Person Search](http://openaccess.thecvf.com/content_cvpr_2017/html/Xiao_Joint_Detection_and_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ShuangLI59/person_search) | 459 |
| [Dual Path Networks](http://papers.nips.cc/paper/7033-dual-path-networks.pdf) | NIPS | [code](https://github.com/cypw/DPNs) | 451 |
| [Flow-Guided Feature Aggregation for Video Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_Flow-Guided_Feature_Aggregation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/msracver/Flow-Guided-Feature-Aggregation) | 436 |
| [Deep Image Matting](http://openaccess.thecvf.com/content_cvpr_2017/html/Xu_Deep_Image_Matting_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Joker316701882/Deep-Image-Matting) | 434 |
| [Richer Convolutional Features for Edge Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Liu_Richer_Convolutional_Features_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yun-liu/rcf) | 399 |
| [Annotating Object Instances With a Polygon-RNN](http://openaccess.thecvf.com/content_cvpr_2017/html/Castrejon_Annotating_Object_Instances_CVPR_2017_paper.html) | CVPR | [code](https://github.com/fidler-lab/polyrnn-pp-pytorch) | 397 |
| [Recurrent Highway Networks](http://proceedings.mlr.press/v70/zilly17a.html) | ICML | [code](https://github.com/julian121266/RecurrentHighwayNetworks) | 397 |
| [Detect to Track and Track to Detect](http://openaccess.thecvf.com/content_iccv_2017/html/Feichtenhofer_Detect_to_Track_ICCV_2017_paper.html) | ICCV | [code](https://github.com/feichtenhofer/Detect-Track) | 387 |
| [RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Lin_RefineNet_Multi-Path_Refinement_CVPR_2017_paper.html) | CVPR | [code](https://github.com/guosheng/refinenet) | 379 |
| [Detecting Oriented Text in Natural Images by Linking Segments](http://openaccess.thecvf.com/content_cvpr_2017/html/Shi_Detecting_Oriented_Text_CVPR_2017_paper.html) | CVPR | [code](https://github.com/dengdan/seglink) | 364 |
| [Deep Lattice Networks and Partial Monotonic Functions](http://papers.nips.cc/paper/6891-deep-lattice-networks-and-partial-monotonic-functions.pdf) | NIPS | [code](https://github.com/tensorflow/lattice) | 349 |
| [Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results](http://papers.nips.cc/paper/6719-mean-teachers-are-better-role-models-weight-averaged-consistency-targets-improve-semi-supervised-deep-learning-results.pdf) | NIPS | [code](https://github.com/CuriousAI/mean-teacher/) | 347 |
| [RON: Reverse Connection With Objectness Prior Networks for Object Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Kong_RON_Reverse_Connection_CVPR_2017_paper.html) | CVPR | [code](https://github.com/taokong/RON) | 345 |
| [Universal Style Transfer via Feature Transforms](http://papers.nips.cc/paper/6642-universal-style-transfer-via-feature-transforms.pdf) | NIPS | [code](https://github.com/Yijunmaverick/UniversalStyleTransfer) | 344 |
| [Residual Attention Network for Image Classification](http://openaccess.thecvf.com/content_cvpr_2017/html/Wang_Residual_Attention_Network_CVPR_2017_paper.html) | CVPR | [code](https://github.com/fwang91/residual-attention-network) | 329 |
| [One-Shot Video Object Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Caelles_One-Shot_Video_Object_CVPR_2017_paper.html) | CVPR | [code](https://github.com/scaelles/OSVOS-TensorFlow) | 316 |
| [Accurate Single Stage Detector Using Recurrent Rolling Convolution](http://openaccess.thecvf.com/content_cvpr_2017/html/Ren_Accurate_Single_Stage_CVPR_2017_paper.html) | CVPR | [code](https://github.com/xiaohaoChen/rrc_detection) | 314 |
| [Feature Pyramid Networks for Object Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Lin_Feature_Pyramid_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/unsky/FPN) | 310 |
| [Efficient softmax approximation for GPUs](http://proceedings.mlr.press/v70/grave17a.html) | ICML | [code](https://github.com/facebookresearch/adaptive-softmax) | 304 |
| [OctNet: Learning Deep 3D Representations at High Resolutions](http://openaccess.thecvf.com/content_cvpr_2017/html/Riegler_OctNet_Learning_Deep_CVPR_2017_paper.html) | CVPR | [code](https://github.com/griegler/octnet) | 302 |
| [Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution](http://openaccess.thecvf.com/content_cvpr_2017/html/Lai_Deep_Laplacian_Pyramid_CVPR_2017_paper.html) | CVPR | [code](https://github.com/phoenix104104/LapSRN) | 301 |
| [Pixel Recursive Super Resolution](http://openaccess.thecvf.com/content_iccv_2017/html/Dahl_Pixel_Recursive_Super_ICCV_2017_paper.html) | ICCV | [code](https://github.com/nilboy/pixel-recursive-super-resolution) | 301 |
| [Self-Critical Sequence Training for Image Captioning](http://openaccess.thecvf.com/content_cvpr_2017/html/Rennie_Self-Critical_Sequence_Training_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ruotianluo/self-critical.pytorch) | 299 |
| [Age Progression/Regression by Conditional Adversarial Autoencoder](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Age_ProgressionRegression_by_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ZZUTK/Face-Aging-CAAE) | 297 |
| [Style Transfer from Non-Parallel Text by Cross-Alignment](http://papers.nips.cc/paper/7259-style-transfer-from-non-parallel-text-by-cross-alignment.pdf) | NIPS | [code](https://github.com/shentianxiao/language-style-transfer) | 296 |
| [Dilated Recurrent Neural Networks](http://papers.nips.cc/paper/6613-dilated-recurrent-neural-networks.pdf) | NIPS | [code](https://github.com/code-terminator/DilatedRNN) | 285 |
| [Lifting From the Deep: Convolutional 3D Pose Estimation From a Single Image](http://openaccess.thecvf.com/content_cvpr_2017/html/Tome_Lifting_From_the_CVPR_2017_paper.html) | CVPR | [code](https://github.com/DenisTome/Lifting-from-the-Deep-release) | 280 |
| [DeepBach: a Steerable Model for Bach Chorales Generation](http://proceedings.mlr.press/v70/hadjeres17a.html) | ICML | [code](https://github.com/Ghadjeres/DeepBach) | 276 |
| [The Predictron: End-To-End Learning and Planning](http://proceedings.mlr.press/v70/silver17a.html) | ICML | [code](https://github.com/zhongwen/predictron) | 274 |
| [Convolutional Sequence to Sequence Learning](http://proceedings.mlr.press/v70/gehring17a.html) | ICML | [code](https://github.com/tobyyouup/conv_seq2seq) | 258 |
| [OptNet: Differentiable Optimization as a Layer in Neural Networks](http://proceedings.mlr.press/v70/amos17a.html) | ICML | [code](https://github.com/locuslab/optnet) | 245 |
| [Prototypical Networks for Few-shot Learning](http://papers.nips.cc/paper/6996-prototypical-networks-for-few-shot-learning.pdf) | NIPS | [code](https://github.com/jakesnell/prototypical-networks) | 244 |
| [Deep Voice: Real-time Neural Text-to-Speech](http://proceedings.mlr.press/v70/arik17a.html) | ICML | [code](https://github.com/israelg99/deepvoice) | 242 |
| [Reinforcement Learning with Deep Energy-Based Policies](http://proceedings.mlr.press/v70/haarnoja17a.html) | ICML | [code](https://github.com/haarnoja/softqlearning) | 233 |
| [Learning Deep CNN Denoiser Prior for Image Restoration](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Learning_Deep_CNN_CVPR_2017_paper.html) | CVPR | [code](https://github.com/cszn/IRCNN) | 231 |
| [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](http://papers.nips.cc/paper/7240-gans-trained-by-a-two-time-scale-update-rule-converge-to-a-local-nash-equilibrium.pdf) | NIPS | [code](https://github.com/bioinf-jku/TTUR) | 229 |
| [A Point Set Generation Network for 3D Object Reconstruction From a Single Image](http://openaccess.thecvf.com/content_cvpr_2017/html/Fan_A_Point_Set_CVPR_2017_paper.html) | CVPR | [code](https://github.com/fanhqme/PointSetGeneration) | 228 |
| [Deeply Supervised Salient Object Detection With Short Connections](http://openaccess.thecvf.com/content_cvpr_2017/html/Hou_Deeply_Supervised_Salient_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Joker316701882/Salient-Object-Detection) | 228 |
| [BlitzNet: A Real-Time Deep Network for Scene Understanding](http://openaccess.thecvf.com/content_iccv_2017/html/Dvornik_BlitzNet_A_Real-Time_ICCV_2017_paper.html) | ICCV | [code](https://github.com/dvornikita/blitznet) | 227 |
| [Language Modeling with Gated Convolutional Networks](http://proceedings.mlr.press/v70/dauphin17a.html) | ICML | [code](https://github.com/anantzoid/Language-Modeling-GatedCNN) | 221 |
| [Unlabeled Samples Generated by GAN Improve the Person Re-Identification Baseline in Vitro](http://openaccess.thecvf.com/content_iccv_2017/html/Zheng_Unlabeled_Samples_Generated_ICCV_2017_paper.html) | ICCV | [code](https://github.com/layumi/Person-reID_GAN) | 215 |
| [Stacked Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Huang_Stacked_Generative_Adversarial_CVPR_2017_paper.html) | CVPR | [code](https://github.com/xunhuang1995/SGAN) | 215 |
| [RMPE: Regional Multi-Person Pose Estimation](http://openaccess.thecvf.com/content_iccv_2017/html/Fang_RMPE_Regional_Multi-Person_ICCV_2017_paper.html) | ICCV | [code](https://github.com/MVIG-SJTU/RMPE) | 215 |
| [Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning](http://openaccess.thecvf.com/content_cvpr_2017/html/Lu_Knowing_When_to_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jiasenlu/AdaptiveAttention) | 214 |
| [Generative Face Completion](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Generative_Face_Completion_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Yijunmaverick/GenerativeFaceCompletion) | 212 |
| [VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition](http://openaccess.thecvf.com/content_iccv_2017/html/Lee_VPGNet_Vanishing_Point_ICCV_2017_paper.html) | ICCV | [code](https://github.com/SeokjuLee/VPGNet) | 210 |
| [The Reversible Residual Network: Backpropagation Without Storing Activations](http://papers.nips.cc/paper/6816-the-reversible-residual-network-backpropagation-without-storing-activations.pdf) | NIPS | [code](https://github.com/renmengye/revnet-public) | 210 |
| [Recurrent Scale Approximation for Object Detection in CNN](http://openaccess.thecvf.com/content_iccv_2017/html/Liu_Recurrent_Scale_Approximation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/sciencefans/RSA-for-object-detection) | 209 |
| [Learning From Synthetic Humans](http://openaccess.thecvf.com/content_cvpr_2017/html/Varol_Learning_From_Synthetic_CVPR_2017_paper.html) | CVPR | [code](https://github.com/gulvarol/surreal) | 207 |
| [Spatially Adaptive Computation Time for Residual Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Figurnov_Spatially_Adaptive_Computation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/mfigurnov/sact) | 203 |
| [Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis](http://openaccess.thecvf.com/content_iccv_2017/html/Huang_Beyond_Face_Rotation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/HRLTY/TP-GAN) | 202 |
| [3D Bounding Box Estimation Using Deep Learning and Geometry](http://openaccess.thecvf.com/content_cvpr_2017/html/Mousavian_3D_Bounding_Box_CVPR_2017_paper.html) | CVPR | [code](https://github.com/smallcorgi/3D-Deepbox) | 200 |
| [Multi-View 3D Object Detection Network for Autonomous Driving](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_Multi-View_3D_Object_CVPR_2017_paper.html) | CVPR | [code](https://github.com/bostondiditeam/MV3D) | 199 |
| [Visual Dialog](http://openaccess.thecvf.com/content_cvpr_2017/html/Das_Visual_Dialog_CVPR_2017_paper.html) | CVPR | [code](https://github.com/batra-mlp-lab/visdial) | 199 |
| [Interpretable Explanations of Black Boxes by Meaningful Perturbation](http://openaccess.thecvf.com/content_iccv_2017/html/Fong_Interpretable_Explanations_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jacobgil/pytorch-explain-black-box) | 192 |
| [Inverse Compositional Spatial Transformer Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Lin_Inverse_Compositional_Spatial_CVPR_2017_paper.html) | CVPR | [code](https://github.com/chenhsuanlin/inverse-compositional-STN) | 189 |
| [FastMask: Segment Multi-Scale Object Candidates in One Shot](http://openaccess.thecvf.com/content_cvpr_2017/html/Hu_FastMask_Segment_Multi-Scale_CVPR_2017_paper.html) | CVPR | [code](https://github.com/voidrank/FastMask) | 189 |
| [OnACID: Online Analysis of Calcium Imaging Data in Real 
Time](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6832-onacid-online-analysis-of-calcium-imaging-data-in-real-time.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsimonsfoundation\u002Fcaiman) | 189 | \n| [Semantic Scene Completion From a Single Depth Image](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSong_Semantic_Scene_Completion_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fshurans\u002Fsscnet) | 188 | \n| [Learning Efficient Convolutional Networks Through Network Slimming](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLiu_Learning_Efficient_Convolutional_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fliuzhuang13\u002Fslimming) | 186 | \n| [Learning Feature Pyramids for Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FYang_Learning_Feature_Pyramids_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fbearpaw\u002FPyraNet) | 185 | \n| [Be Your Own Prada: Fashion Synthesis With Structural Coherence](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Be_Your_Own_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fzhusz\u002FICCV17-fashionGAN) | 183 | \n| [Scene Graph Generation by Iterative Message Passing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FXu_Scene_Graph_Generation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FdanfeiX\u002Fscene-graph-TF-release) | 182 | \n| [Fast Image Processing With Fully-Convolutional Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FChen_Fast_Image_Processing_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FCQFIO\u002FFastImageProcessing) | 180 | \n| [Learning Multiple Tasks with Multilinear Relationship 
Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6757-learning-multiple-tasks-with-multilinear-relationship-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FXlearn) | 178 | \n| [Learning to Reason: End-To-End Module Networks for Visual Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHu_Learning_to_Reason_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fn2nmn) | 178 | \n| [Single Shot Text Detector With Regional Attention](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHe_Single_Shot_Text_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FBestSonny\u002FSSTD) | 176 | \n| [Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment With Limited Resources](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FBulat_Binarized_Convolutional_Landmark_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002F1adrianb\u002Fbinary-human-pose-estimation) | 175 | \n| [Deep Feature Interpolation for Image Content Changes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FUpchurch_Deep_Feature_Interpolation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fpaulu\u002Fdeepfeatinterp) | 170 | \n| [On Human Motion Prediction Using Recurrent Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMartinez_On_Human_Motion_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Funa-dinosauria\u002Fhuman-motion-prediction) | 167 | \n| [Image Super-Resolution via Deep Recursive Residual Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTai_Image_Super-Resolution_via_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftyshiwo\u002FDRRN_CVPR17) | 163 | \n| [Learning Cross-Modal Embeddings for 
Cooking Recipes and Food Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSalvador_Learning_Cross-Modal_Embeddings_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftorralba-lab\u002Fim2recipe) | 160 | \n| [Input Convex Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Famos17b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Ficnn) | 159 | \n| [Simple Does It: Weakly Supervised Instance and Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKhoreva_Simple_Does_It_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fphilferriere\u002Ftfwss) | 159 | \n| [Low-Shot Visual Recognition by Shrinking and Hallucinating Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHariharan_Low-Shot_Visual_Recognition_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Flow-shot-shrink-hallucinate) | 158 | \n| [Oriented Response Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhou_Oriented_Response_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FZhouYanzhao\u002FORN) | 157 | \n| [Soft Proposal Networks for Weakly Supervised Object Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Soft_Proposal_Networks_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fyeezhu\u002FSPN.pytorch) | 154 | \n| [Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmescheder17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FLMescheder\u002FAdversarialVariationalBayes) | 147 | \n| [Axiomatic Attribution for Deep Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fsundararajan17a.html) | ICML | 
[code](https:\u002F\u002Fgithub.com\u002Fhiranumn\u002FIntegratedGradients) | 146 | \n| [Gradient Episodic Memory for Continual Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7225-gradient-episodic-memory-for-continual-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FGradientEpisodicMemory) | 146 | \n| [DSAC - Differentiable RANSAC for Camera Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBrachmann_DSAC_-_Differentiable_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcvlab-dresden\u002FDSAC) | 144 | \n| [Attend to You: Personalized Image Captioning With Context Sequence Memory Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPark_Attend_to_You_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcesc-park\u002Fattend2u) | 143 | \n| [Conditional Similarity Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FVeit_Conditional_Similarity_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fandreasveit\u002Fconditional-similarity-networks) | 142 | \n| [Language Modeling with Recurrent Highway Hypernetworks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6919-language-modeling-with-recurrent-highway-hypernetworks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjsuarez5341\u002FRecurrent-Highway-Hypernetworks-NIPS) | 141 | \n| [Triple Generative Adversarial Nets](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6997-triple-generative-adversarial-nets.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fzhenxuan00\u002Ftriple-gan) | 138 | \n| [Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6974-interpolated-policy-gradient-merging-on-policy-and-off-policy-gradient-estimation-for-deep-reinforcement-learning.pdf) | NIPS 
| [code](https:\u002F\u002Fgithub.com\u002Fshaneshixiang\u002Frllabplusplus) | 138 | \n| [One-Sided Unsupervised Domain Mapping](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6677-one-sided-unsupervised-domain-mapping.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsagiebenaim\u002FDistanceGAN) | 137 | \n| [Detecting Visual Relationships With Deep Relational Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FDai_Detecting_Visual_Relationships_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdoubledaibo\u002Fdrnet_cvpr2017) | 137 | \n| [Attentive Recurrent Comparators](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fshyam17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fsanyam5\u002Farc-pytorch) | 136 | \n| [Towards 3D Human Pose Estimation in the Wild: A Weakly-Supervised Approach](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhou_Towards_3D_Human_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002Fpose-hg-3d) | 136 | \n| [Learning a Multi-View Stereo Machine](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6640-learning-a-multi-view-stereo-machine.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fakar43\u002Flsm) | 135 | \n| [Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7145-deep-learning-for-precipitation-nowcasting-a-benchmark-and-a-new-model.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsxjscience\u002FHKO-7) | 134 | \n| [Multi-Context Attention for Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChu_Multi-Context_Attention_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbearpaw\u002Fpose-attention) | 131 | \n| [Controlling Perceptual Factors in Neural Style 
Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FGatys_Controlling_Perceptual_Factors_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fleongatys\u002FNeuralImageSynthesis) | 130 | \n| [Bayesian Compression for Deep Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6921-bayesian-compression-for-deep-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FKarenUllrich\u002FTutorial_BayesianCompressionForDL) | 130 | \n| [Adversarial Discriminative Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTzeng_Adversarial_Discriminative_Domain_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcorenel\u002Fpytorch-adda) | 129 | \n| [Working hard to know your neighbor's margins: Local descriptor learning loss](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7068-working-hard-to-know-your-neighbors-margins-local-descriptor-learning-loss.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FDagnyT\u002Fhardnet) | 128 | \n| [Concrete Dropout](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6949-concrete-dropout.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fyaringal\u002FConcreteDropout) | 127 | \n| [SegFlow: Joint Learning for Video Object Segmentation and Optical Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FCheng_SegFlow_Joint_Learning_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FJingchunCheng\u002FSegFlow) | 127 | \n| [Segmentation-Aware Convolutional Networks Using Local Attention Masks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHarley_Segmentation-Aware_Convolutional_Networks_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Faharley\u002Fsegaware) | 126 | \n| [Detail-Revealing Deep Video 
Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTao_Detail-Revealing_Deep_Video_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjiangsutx\u002FSPMC_VideoSR) | 126 | \n| [CREST: Convolutional Residual Learning for Visual Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSong_CREST_Convolutional_Residual_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fybsong00\u002FCREST-Release) | 126 | \n| [Discriminative Correlation Filter With Channel and Spatial Reliability](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLukezic_Discriminative_Correlation_Filter_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falanlukezic\u002Fcsr-dcf) | 124 | \n| [SVDNet for Pedestrian Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSun_SVDNet_for_Pedestrian_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fsyfafterzy\u002FSVDNet-for-Pedestrian-Retrieval) | 121 | \n| [Semantic Image Synthesis via Adversarial Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDong_Semantic_Image_Synthesis_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fwoozzu\u002Fdong_iccv_2017) | 121 | \n| [Spatiotemporal Multiplier Networks for Video Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FFeichtenhofer_Spatiotemporal_Multiplier_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffeichtenhofer\u002Fst-resnet) | 121 | \n| [PoseTrack: Joint Multi-Person Pose Estimation and Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FIqbal_PoseTrack_Joint_Multi-Person_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fiqbalu\u002FPoseTrack-CVPR2017) | 121 | \n| [Hierarchical Attentive Recurrent 
Tracking](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6898-hierarchical-attentive-recurrent-tracking.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fakosiorek\u002Fhart) | 121 | \n| [Good Semi-supervised Learning That Requires a Bad GAN](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7229-good-semi-supervised-learning-that-requires-a-bad-gan.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fkimiyoung\u002Fssl_bad_gan) | 120 | \n| [Deep Watershed Transform for Instance Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBai_Deep_Watershed_Transform_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmin2209\u002Fdwt) | 120 | \n| [Associative Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHaeusser_Associative_Domain_Adaptation_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fhaeusser\u002Flearning_by_association) | 119 | \n| [Learning by Association -- A Versatile Semi-Supervised Training Method for Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHaeusser_Learning_by_Association_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhaeusser\u002Flearning_by_association) | 119 | \n| [Value Prediction Network](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7192-value-prediction-network.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjunhyukoh\u002Fvalue-prediction-network) | 119 | \n| [Unrestricted Facial Geometry Reconstruction Using Image-To-Image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FSela_Unrestricted_Facial_Geometry_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmatansel\u002Fpix2vertex) | 119 | \n| [MemNet: A Persistent Memory Network for Image 
Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTai_MemNet_A_Persistent_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ftyshiwo\u002FMemNet) | 119 | \n| [Bayesian Optimization with Gradients](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7111-bayesian-optimization-with-gradients.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwujian16\u002FCornell-MOE) | 117 | \n| [TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6749-terngrad-ternary-gradients-to-reduce-communication-in-distributed-deep-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwenwei202\u002Fterngrad) | 117 | \n| [Compressed Sensing using Generative Models](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fbora17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FAshishBora\u002Fcsgm) | 116 | \n| [Switching Convolutional Neural Network for Crowd Counting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSam_Switching_Convolutional_Neural_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002Fcrowd-counting-scnn) | 116 | \n| [WILDCAT: Weakly Supervised Learning of Deep ConvNets for Image Classification, Pointwise Localization and Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FDurand_WILDCAT_Weakly_Supervised_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdurandtibo\u002Fwildcat.pytorch) | 116 | \n| [Show, Adapt and Tell: Adversarial Training of Cross-Domain Image Captioner](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FChen_Show_Adapt_and_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ftsenghungchen\u002Fshow-adapt-and-tell) | 115 | \n| [Video Frame Synthesis Using Deep Voxel 
Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLiu_Video_Frame_Synthesis_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fliuziwei7\u002Fvoxel-flow) | 114 | \n| [Multiple Instance Detection Network With Online Instance Classifier Refinement](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTang_Multiple_Instance_Detection_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fppengtang\u002Foicr) | 113 | \n| [Deep Pyramidal Residual Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHan_Deep_Pyramidal_Residual_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjhkim89\u002FPyramidNet) | 112 | \n| [Train longer, generalize better: closing the generalization gap in large batch training of neural networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6770-train-longer-generalize-better-closing-the-generalization-gap-in-large-batch-training-of-neural-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Feladhoffer\u002FbigBatch) | 112 | \n| [Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_Split-Brain_Autoencoders_Unsupervised_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frichzhang\u002Fsplitbrainauto) | 110 | \n| [Unite the People: Closing the Loop Between 3D and 2D Human Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLassner_Unite_the_People_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fclassner\u002Fup) | 110 | \n| [Learning Combinatorial Optimization Algorithms over Graphs](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7214-learning-combinatorial-optimization-algorithms-over-graphs.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FHanjun-Dai\u002Fgraph_comb_opt) | 109 | \n| [FeUdal 
Networks for Hierarchical Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fvezhnevets17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdmakian\u002Ffeudal_networks) | 107 | \n| [ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLuo_ThiNet_A_Filter_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FRoll920\u002FThiNet) | 105 | \n| [Learning a Deep Embedding Model for Zero-Shot Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_Learning_a_Deep_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flzrobots\u002FDeepEmbeddingModel_ZSL) | 104 | \n| [ECO: Efficient Convolution Operators for Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FDanelljan_ECO_Efficient_Convolution_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnicewsyly\u002FECO) | 103 | \n| [SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChen_SCA-CNN_Spatial_and_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzjuchenlong\u002Fsca-cnn.cvpr17) | 102 | \n| [Multi-View Supervision for Single-View Reconstruction via Differentiable Ray Consistency](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTulsiani_Multi-View_Supervision_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fshubhtuls\u002Fdrc) | 100 | \n| [Task-based End-to-end Model Learning in Stochastic Optimization](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7132-task-based-end-to-end-model-learning-in-stochastic-optimization.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Fe2e-model-learning) | 100 | \n| [Learning to Compose Domain-Specific Transformations 
for Data Augmentation](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6916-learning-to-compose-domain-specific-transformations-for-data-augmentation.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Ftanda) | 97 | \n| [Genetic CNN](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FXie_Genetic_CNN_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Faqibsaeed\u002FGenetic-CNN) | 97 | \n| [HashNet: Deep Learning to Hash by Continuation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FCao_HashNet_Deep_Learning_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FHashNet) | 97 | \n| [Interleaved Group Convolutions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhang_Interleaved_Group_Convolutions_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fhellozting\u002FInterleavedGroupConvolutions) | 95 | \n| [Deeply-Learned Part-Aligned Representations for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhao_Deeply-Learned_Part-Aligned_Representations_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fzlmzju\u002Fpart_reid) | 95 | \n| [Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6635-best-of-both-worlds-transferring-knowledge-from-discriminative-learning-to-a-generative-visual-dialog-model.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fjiasenlu\u002FvisDial.pytorch) | 94 | \n| [Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FXu_Multi-Scale_Continuous_CRFs_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanxuhk\u002FContinuousCRF-CNN) | 93 | \n| [Octree Generating 
Networks: Efficient Convolutional Architectures for High-Resolution 3D Outputs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTatarchenko_Octree_Generating_Networks_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fogn) | 92 | \n| [Semantic Autoencoder for Zero-Shot Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKodirov_Semantic_Autoencoder_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FElyorcv\u002FSAE) | 92 | \n| [Deep Hyperspherical Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6984-deep-hyperspherical-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fwy1iu\u002FSphereNet) | 92 | \n| [Decoupled Neural Interfaces using Synthetic Gradients](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fjaderberg17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fandrewliao11\u002Fdni.pytorch) | 90 | \n| [Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6960-geometric-matrix-completion-with-recurrent-multi-graph-neural-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffmonti\u002Fmgcnn) | 90 | \n| [Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6780-practical-bayesian-optimization-for-model-fitting-with-bayesian-adaptive-direct-search.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Flacerbi\u002Fbads) | 90 | \n| [Optical Flow Estimation Using a Spatial Pyramid Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRanjan_Optical_Flow_Estimation_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsniklaus\u002Fpytorch-spynet) | 90 | \n| [AMC: Attention guided Multi-modal Correlation Learning for Image 
Search](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_AMC_Attention_guided_CVPR_2017_paper.html) | CVPR | [code](https://github.com/kanchen-usc/AMC_ATT) | 90 |
| [Deep Video Deblurring for Hand-Held Cameras](http://openaccess.thecvf.com/content_cvpr_2017/html/Su_Deep_Video_Deblurring_CVPR_2017_paper.html) | CVPR | [code](https://github.com/shuochsu/DeepVideoDeblurring) | 89 |
| [Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data](http://papers.nips.cc/paper/6784-unsupervised-learning-of-disentangled-and-interpretable-representations-from-sequential-data.pdf) | NIPS | [code](https://github.com/wnhsu/FactorizedHierarchicalVAE) | 88 |
| [Causal Effect Inference with Deep Latent-Variable Models](http://papers.nips.cc/paper/7223-causal-effect-inference-with-deep-latent-variable-models.pdf) | NIPS | [code](https://github.com/AMLab-Amsterdam/CEVAE) | 87 |
| [GANs for Biological Image Synthesis](http://openaccess.thecvf.com/content_iccv_2017/html/Osokin_GANs_for_Biological_ICCV_2017_paper.html) | ICCV | [code](https://github.com/aosokin/biogans) | 85 |
| [MMD GAN: Towards Deeper Understanding of Moment Matching Network](http://papers.nips.cc/paper/6815-mmd-gan-towards-deeper-understanding-of-moment-matching-network.pdf) | NIPS | [code](https://github.com/OctoberChang/MMD-GAN) | 84 |
| [Representation Learning by Learning to Count](http://openaccess.thecvf.com/content_iccv_2017/html/Noroozi_Representation_Learning_by_ICCV_2017_paper.html) | ICCV | [code](https://github.com/gitlimlab/Representation-Learning-by-Learning-to-Count) | 84 |
| [Optical Flow in Mostly Rigid Scenes](http://openaccess.thecvf.com/content_cvpr_2017/html/Wulff_Optical_Flow_in_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jswulff/mrflow) | 83 |
| [Fast-Slow Recurrent Neural Networks](http://papers.nips.cc/paper/7173-fast-slow-recurrent-neural-networks.pdf) | NIPS | [code](https://github.com/amujika/Fast-Slow-LSTM) | 82 |
| [Unsupervised Video Summarization With Adversarial LSTM Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Mahasseni_Unsupervised_Video_Summarization_CVPR_2017_paper.html) | CVPR | [code](https://github.com/j-min/Adversarial_Video_Summary) | 82 |
| [Constrained Policy Optimization](http://proceedings.mlr.press/v70/achiam17a.html) | ICML | [code](https://github.com/jachiam/cpo) | 81 |
| [A-NICE-MC: Adversarial Training for MCMC](http://papers.nips.cc/paper/7099-a-nice-mc-adversarial-training-for-mcmc.pdf) | NIPS | [code](https://github.com/jiamings/a-nice-mc) | 80 |
| [Coarse-To-Fine Volumetric Prediction for Single-Image 3D Human Pose](http://openaccess.thecvf.com/content_cvpr_2017/html/Pavlakos_Coarse-To-Fine_Volumetric_Prediction_CVPR_2017_paper.html) | CVPR | [code](https://github.com/geopavlakos/c2f-vol-train) | 80 |
| [End-To-End Instance Segmentation With Recurrent Attention](http://openaccess.thecvf.com/content_cvpr_2017/html/Ren_End-To-End_Instance_Segmentation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/renmengye/rec-attend-public) | 78 |
| [DeLiGAN : Generative Adversarial Networks for Diverse and Limited Data](http://openaccess.thecvf.com/content_cvpr_2017/html/Gurumurthy_DeLiGAN__Generative_CVPR_2017_paper.html) | CVPR | [code](https://github.com/val-iisc/deligan) | 78 |
| [Learning Shape Abstractions by Assembling Volumetric Primitives](http://openaccess.thecvf.com/content_cvpr_2017/html/Tulsiani_Learning_Shape_Abstractions_CVPR_2017_paper.html) | CVPR | [code](https://github.com/shubhtuls/volumetricPrimitives) | 77 |
| [Local Binary Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Juefei-Xu_Local_Binary_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https://github.com/juefeix/lbcnn.torch) | 77 |
| [Raster-To-Vector: Revisiting Floorplan Transformation](http://openaccess.thecvf.com/content_iccv_2017/html/Liu_Raster-To-Vector_Revisiting_Floorplan_ICCV_2017_paper.html) | ICCV | [code](https://github.com/art-programmer/FloorplanTransformation) | 76 |
| [Positive-Unlabeled Learning with Non-Negative Risk Estimator](http://papers.nips.cc/paper/6765-positive-unlabeled-learning-with-non-negative-risk-estimator.pdf) | NIPS | [code](https://github.com/kiryor/nnPUlearning) | 76 |
| [Hard-Aware Deeply Cascaded Embedding](http://openaccess.thecvf.com/content_iccv_2017/html/Yuan_Hard-Aware_Deeply_Cascaded_ICCV_2017_paper.html) | ICCV | [code](https://github.com/PkuRainBow/Hard-Aware-Deeply-Cascaded-Embedding_release) | 75 |
| [Deep Image Harmonization](http://openaccess.thecvf.com/content_cvpr_2017/html/Tsai_Deep_Image_Harmonization_CVPR_2017_paper.html) | CVPR | [code](https://github.com/wasidennis/DeepHarmonization) | 73 |
| [Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis](http://openaccess.thecvf.com/content_cvpr_2017/html/Dai_Shape_Completion_Using_CVPR_2017_paper.html) | CVPR | [code](https://github.com/angeladai/cnncomplete) | 73 |
| [Not All Pixels Are Equal: Difficulty-Aware Semantic Segmentation via Deep Layer Cascade](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Not_All_Pixels_CVPR_2017_paper.html) | CVPR | [code](https://github.com/liuziwei7/region-conv) | 73 |
| [Improved Stereo Matching With Constant Highway Networks and Reflective Confidence Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Shaked_Improved_Stereo_Matching_CVPR_2017_paper.html) | CVPR | [code](https://github.com/amitshaked/resmatch) | 72 |
| [Query-Guided Regression Network With Context Policy for Phrase Grounding](http://openaccess.thecvf.com/content_iccv_2017/html/Chen_Query-Guided_Regression_Network_ICCV_2017_paper.html) | ICCV | [code](https://github.com/kanchen-usc/QRC-Net) | 72 |
| [Top-Down Visual Saliency Guided by Captions](http://openaccess.thecvf.com/content_cvpr_2017/html/Ramanishka_Top-Down_Visual_Saliency_CVPR_2017_paper.html) | CVPR | [code](https://github.com/VisionLearningGroup/caption-guided-saliency) | 72 |
| [Feedback Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Zamir_Feedback_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/amir32002/feedback-networks) | 72 |
| [What Actions Are Needed for Understanding Human Actions in Videos?](http://openaccess.thecvf.com/content_iccv_2017/html/Sigurdsson_What_Actions_Are_ICCV_2017_paper.html) | ICCV | [code](https://github.com/gsig/actions-for-actions) | 71 |
| [Xception: Deep Learning With Depthwise Separable Convolutions](http://openaccess.thecvf.com/content_cvpr_2017/html/Chollet_Xception_Deep_Learning_CVPR_2017_paper.html) | CVPR | [code](https://github.com/tstandley/Xception-PyTorch) | 71 |
| [Action-Decision Networks for Visual Tracking With Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Yun_Action-Decision_Networks_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hellbell/ADNet) | 71 |
| [Video Propagation Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Jampani_Video_Propagation_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/varunjampani/video_prop_networks) | 70 |
| [Image-To-Image Translation With Conditional Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/williamFalcon/pix2pix-keras) | 70 |
| [Quality Aware Network for Set to Set Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Liu_Quality_Aware_Network_CVPR_2017_paper.html) | CVPR | [code](https://github.com/sciencefans/Quality-Aware-Network) | 69 |
| [Self-Supervised Learning of Visual Features Through Embedding Images Into Text Topic Spaces](http://openaccess.thecvf.com/content_cvpr_2017/html/Gomez_Self-Supervised_Learning_of_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lluisgomez/TextTopicNet) | 69 |
| [Deep Subspace Clustering Networks](http://papers.nips.cc/paper/6608-deep-subspace-clustering-networks.pdf) | NIPS | [code](https://github.com/panji1990/Deep-subspace-clustering-networks) | 68 |
| [Escape From Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models](http://openaccess.thecvf.com/content_iccv_2017/html/Klokov_Escape_From_Cells_ICCV_2017_paper.html) | ICCV | [code](https://github.com/fxia22/kdnet.pytorch) | 68 |
| [A Distributional Perspective on Reinforcement Learning](http://proceedings.mlr.press/v70/bellemare17a.html) | ICML | [code](https://github.com/Silvicek/distributional-dqn) | 68 |
| [Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Physically-Based_Rendering_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yindaz/pbrs) | 67 |
| [Deep Transfer Learning with Joint Adaptation Networks](http://proceedings.mlr.press/v70/long17a.html) | ICML | [code](https://github.com/USTCPCS/CVPR2018_attention) | 67 |
| [Training Deep Networks without Learning Rates Through Coin Betting](http://papers.nips.cc/paper/6811-training-deep-networks-without-learning-rates-through-coin-betting.pdf) | NIPS | [code](https://github.com/bremen79/cocob) | 66 |
| [Full Resolution Image Compression With Recurrent Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Toderici_Full_Resolution_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/1zb/pytorch-image-comp-rnn) | 66 |
| [SurfaceNet: An End-To-End 3D Neural Network for Multiview Stereopsis](http://openaccess.thecvf.com/content_iccv_2017/html/Ji_SurfaceNet_An_End-To-End_ICCV_2017_paper.html) | ICCV | [code](https://github.com/mjiUST/SurfaceNet) | 66 |
| [Doubly Stochastic Variational Inference for Deep Gaussian Processes](http://papers.nips.cc/paper/7045-doubly-stochastic-variational-inference-for-deep-gaussian-processes.pdf) | NIPS | [code](https://github.com/ICL-SML/Doubly-Stochastic-DGP) | 66 |
| [TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals](http://openaccess.thecvf.com/content_iccv_2017/html/Gao_TURN_TAP_Temporal_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jiyanggao/TURN-TAP) | 66 |
| [Jointly Attentive Spatial-Temporal Pooling Networks for Video-Based Person Re-Identification](http://openaccess.thecvf.com/content_iccv_2017/html/Xu_Jointly_Attentive_Spatial-Temporal_ICCV_2017_paper.html) | ICCV | [code](https://github.com/shuangjiexu/Spatial-Temporal-Pooling-Networks-ReID) | 65 |
| [Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Soltani_Synthesizing_3D_Shapes_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Amir-Arsalan/Synthesize3DviaDepthOrSil) | 65 |
| [Dance Dance Convolution](http://proceedings.mlr.press/v70/donahue17a.html) | ICML | [code](https://github.com/chrisdonahue/ddc) | 65 |
| [Borrowing Treasures From the Wealthy: Deep Transfer Learning Through Selective Joint Fine-Tuning](http://openaccess.thecvf.com/content_cvpr_2017/html/Ge_Borrowing_Treasures_From_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ZYYSzj/Selective-Joint-Fine-tuning) | 64 |
| [Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes](http://openaccess.thecvf.com/content_iccv_2017/html/Zhang_Curriculum_Domain_Adaptation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/YangZhang4065/AdaptationSeg) | 64 |
| [Toward Controlled Generation of Text](http://proceedings.mlr.press/v70/hu17e.html) | ICML | [code](https://github.com/GBLin5566/toward-controlled-generation-of-text-pytorch) | 63 |
| [Person Re-Identification in the Wild](http://openaccess.thecvf.com/content_cvpr_2017/html/Zheng_Person_Re-Identification_in_CVPR_2017_paper.html) | CVPR | [code](https://github.com/liangzheng06/PRW-baseline) | 63 |
| [ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching](http://papers.nips.cc/paper/7133-alice-towards-understanding-adversarial-learning-for-joint-distribution-matching.pdf) | NIPS | [code](https://github.com/ChunyuanLI/ALICE) | 63 |
| [Differentiable Learning of Logical Rules for Knowledge Base Reasoning](http://papers.nips.cc/paper/6826-differentiable-learning-of-logical-rules-for-knowledge-base-reasoning.pdf) | NIPS | [code](https://github.com/fanyangxyz/Neural-LP) | 62 |
| [Person Search With Natural Language Description](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Person_Search_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ShuangLI59/Person-Search-with-Natural-Language-Description) | 61 |
| [Multi-Channel Weighted Nuclear Norm Minimization for Real Color Image Denoising](http://openaccess.thecvf.com/content_iccv_2017/html/Xu_Multi-Channel_Weighted_Nuclear_ICCV_2017_paper.html) | ICCV | [code](https://github.com/csjunxu/MCWNNM-ICCV2017) | 61 |
| [Playing for Benchmarks](http://openaccess.thecvf.com/content_iccv_2017/html/Richter_Playing_for_Benchmarks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/PatrykChrabaszcz/Canonical_ES_Atari) | 61 |
| [Unsupervised Learning by Predicting Noise](http://proceedings.mlr.press/v70/bojanowski17a.html) | ICML | [code](https://github.com/facebookresearch/noise-as-targets) | 60 |
| [Localizing Moments in Video With Natural Language](http://openaccess.thecvf.com/content_iccv_2017/html/Hendricks_Localizing_Moments_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/LisaAnne/LocalizingMoments) | 60 |
| [End-To-End 3D Face Reconstruction With Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Dou_End-To-End_3D_Face_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ShownX/mxnet-E2FAR) | 60 |
| [CoupleNet: Coupling Global Structure With Local Parts for Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_CoupleNet_Coupling_Global_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tshizys/CoupleNet) | 59 |
| [AdaGAN: Boosting Generative Models](http://papers.nips.cc/paper/7126-adagan-boosting-generative-models.pdf) | NIPS | [code](https://github.com/tolstikhin/adagan) | 59 |
| [Convolutional Gaussian Processes](http://papers.nips.cc/paper/6877-convolutional-gaussian-processes.pdf) | NIPS | [code](https://github.com/markvdw/convgp/) | 57 |
| [A Deep Regression Architecture With Two-Stage Re-Initialization for High Performance Facial Landmark Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Lv_A_Deep_Regression_CVPR_2017_paper.html) | CVPR | [code](https://github.com/shaoxiaohu/Face_Alignment_Two_Stage_Re-initialization) | 57 |
| [Modeling Relationships in Referential Expressions With Compositional Modular Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Hu_Modeling_Relationships_in_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ronghanghu/cmn) | 57 |
| [Curiosity-driven Exploration by Self-supervised Prediction](http://proceedings.mlr.press/v70/pathak17a.html) | ICML | [code](https://github.com/kimhc6028/pytorch-noreward-rl) | 56 |
| [Wavelet-SRNet: A Wavelet-Based CNN for Multi-Scale Face Super Resolution](http://openaccess.thecvf.com/content_iccv_2017/html/Huang_Wavelet-SRNet_A_Wavelet-Based_ICCV_2017_paper.html) | ICCV | [code](https://github.com/hhb072/WaveletSRNet) | 56 |
| [The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process](http://papers.nips.cc/paper/7252-the-neural-hawkes-process-a-neurally-self-modulating-multivariate-point-process.pdf) | NIPS | [code](https://github.com/HMEIatJHU/neurawkes) | 56 |
| [Online and Linear-Time Attention by Enforcing Monotonic Alignments](http://proceedings.mlr.press/v70/raffel17a.html) | ICML | [code](https://github.com/craffel/mad) | 56 |
| [Neural Expectation Maximization](http://papers.nips.cc/paper/7246-neural-expectation-maximization.pdf) | NIPS | [code](https://github.com/sjoerdvansteenkiste/Neural-EM) | 56 |
| [Dense-Captioning Events in Videos](http://openaccess.thecvf.com/content_iccv_2017/html/Krishna_Dense-Captioning_Events_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/ranjaykrishna/densevid_eval) | 55 |
| [Factorized Bilinear Models for Image Recognition](http://openaccess.thecvf.com/content_iccv_2017/html/Li_Factorized_Bilinear_Models_ICCV_2017_paper.html) | ICCV | [code](https://github.com/lyttonhao/Factorized-Bilinear-Network) | 55 |
| [Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee](http://papers.nips.cc/paper/6910-net-trim-convex-pruning-of-deep-neural-networks-with-performance-guarantee.pdf) | NIPS | [code](https://github.com/DNNToolBox/Net-Trim-v1) | 54 |
| [On-the-fly Operation Batching in Dynamic Computation Graphs](http://papers.nips.cc/paper/6986-on-the-fly-operation-batching-in-dynamic-computation-graphs.pdf) | NIPS | [code](https://github.com/neulab/dynet-benchmark) | 54 |
| [Visual Translation Embedding Network for Visual Relation Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhang_Visual_Translation_Embedding_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zawlin/cvpr17_vtranse) | 54 |
| [Learning Blind Motion Deblurring](http://openaccess.thecvf.com/content_iccv_2017/html/Wieschollek_Learning_Blind_Motion_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cgtuebingen/learning-blind-motion-deblurring) | 54 |
| [A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning](http://papers.nips.cc/paper/6951-a-disentangled-recognition-and-nonlinear-dynamics-model-for-unsupervised-learning.pdf) | NIPS | [code](https://github.com/simonkamronn/kvae) | 53 |
| [Towards Diverse and Natural Image Descriptions via a Conditional GAN](http://openaccess.thecvf.com/content_iccv_2017/html/Dai_Towards_Diverse_and_ICCV_2017_paper.html) | ICCV | [code](https://github.com/doubledaibo/gancaption_iccv2017) | 53 |
| [CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos](http://openaccess.thecvf.com/content_cvpr_2017/html/Shou_CDC_Convolutional-De-Convolutional_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ColumbiaDVMM/CDC) | 53 |
| [A Generic Deep Architecture for Single Image Reflection Removal and Image Smoothing](http://openaccess.thecvf.com/content_iccv_2017/html/Fan_A_Generic_Deep_ICCV_2017_paper.html) | ICCV | [code](https://github.com/fqnchina/CEILNet) | 52 |
| [Deep IV: A Flexible Approach for Counterfactual Prediction](http://proceedings.mlr.press/v70/hartford17a.html) | ICML | [code](https://github.com/jhartford/DeepIV) | 52 |
| [Triangle Generative Adversarial Networks](http://papers.nips.cc/paper/7109-triangle-generative-adversarial-networks.pdf) | NIPS | [code](https://github.com/LiqunChen0606/Triangle-GAN) | 51 |
| [EAST: An Efficient and Accurate Scene Text Detector](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhou_EAST_An_Efficient_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Kathrine94/EAST) | 51 |
| [SST: Single-Stream Temporal Action Proposals](http://openaccess.thecvf.com/content_cvpr_2017/html/Buch_SST_Single-Stream_Temporal_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ranjaykrishna/SST) | 51 |
| [Predicting Deeper Into the Future of Semantic Segmentation](http://openaccess.thecvf.com/content_iccv_2017/html/Luc_Predicting_Deeper_Into_ICCV_2017_paper.html) | ICCV | [code](https://github.com/facebookresearch/SegmPred) | 51 |
| [L2-Net: Deep Learning of Discriminative Patch Descriptor in Euclidean Space](http://openaccess.thecvf.com/content_cvpr_2017/html/Tian_L2-Net_Deep_Learning_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yuruntian/L2-Net) | 51 |
| [TALL: Temporal Activity Localization via Language Query](http://openaccess.thecvf.com/content_iccv_2017/html/Gao_TALL_Temporal_Activity_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jiyanggao/TALL) | 50 |
| [Hybrid Reward Architecture for Reinforcement Learning](http://papers.nips.cc/paper/7123-hybrid-reward-architecture-for-reinforcement-learning.pdf) | NIPS | [code](https://github.com/Maluuba/hra) | 50 |
| [Fast Fourier Color Constancy](http://openaccess.thecvf.com/content_cvpr_2017/html/Barron_Fast_Fourier_Color_CVPR_2017_paper.html) | CVPR | [code](https://github.com/google/ffcc) | 49 |
| [Modulating early visual processing by language](http://papers.nips.cc/paper/7237-modulating-early-visual-processing-by-language.pdf) | NIPS | [code](https://github.com/GuessWhatGame/guesswhat) | 49 |
| [Adversarial Examples for Semantic Segmentation and Object Detection](http://openaccess.thecvf.com/content_iccv_2017/html/Xie_Adversarial_Examples_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cihangxie/DAG) | 49 |
| [Learning Discrete Representations via Information Maximizing Self-Augmented Training](http://proceedings.mlr.press/v70/hu17b.html) | ICML | [code](https://github.com/weihua916/imsat) | 49 |
| [Efficient Diffusion on Region Manifolds: Recovering Small Objects With Compact CNN Representations](http://openaccess.thecvf.com/content_cvpr_2017/html/Iscen_Efficient_Diffusion_on_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ahmetius/diffusion-retrieval) | 48 |
| [Real Time Image Saliency for Black Box Classifiers](http://papers.nips.cc/paper/7272-real-time-image-saliency-for-black-box-classifiers.pdf) | NIPS | [code](https://github.com/PiotrDabkowski/pytorch-saliency) | 48 |
| [FC4: Fully Convolutional Color Constancy With Confidence-Weighted Pooling](http://openaccess.thecvf.com/content_cvpr_2017/html/Hu_FC4_Fully_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yuanming-hu/fc4) | 47 |
| [Multiple People Tracking by Lifted Multicut and Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2017/html/Tang_Multiple_People_Tracking_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jutanke/cabbage) | 47 |
| [Learned D-AMP: Principled Neural Network based Compressive Image Recovery](http://papers.nips.cc/paper/6774-learned-d-amp-principled-neural-network-based-compressive-image-recovery.pdf) | NIPS | [code](https://github.com/ricedsp/D-AMP_Toolbox) | 47 |
| [GP CaKe: Effective brain connectivity with causal kernels](http://papers.nips.cc/paper/6696-gp-cake-effective-brain-connectivity-with-causal-kernels.pdf) | NIPS | [code](https://github.com/LucaAmbrogioni/GP-CaKe-project) | 46 |
| [Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network](http://papers.nips.cc/paper/6854-predicting-organic-reaction-outcomes-with-weisfeiler-lehman-network.pdf) | NIPS | [code](https://github.com/wengong-jin/nips17-rexgen) | 46 |
| [Semantic Video CNNs Through Representation Warping](http://openaccess.thecvf.com/content_iccv_2017/html/Gadde_Semantic_Video_CNNs_ICCV_2017_paper.html) | ICCV | [code](https://github.com/raghudeep/netwarp_public) | 46 |
| [Grammar Variational Autoencoder](http://proceedings.mlr.press/v70/kusner17a.html) | ICML | [code](https://github.com/episodeyang/grammar_variational_autoencoder) | 46 |
| [EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis](http://openaccess.thecvf.com/content_iccv_2017/html/Sajjadi_EnhanceNet_Single_Image_ICCV_2017_paper.html) | ICCV | [code](https://github.com/msmsajjadi/EnhanceNet-Code) | 46 |
| [Safe Model-based Reinforcement Learning with Stability Guarantees](http://papers.nips.cc/paper/6692-safe-model-based-reinforcement-learning-with-stability-guarantees.pdf) | NIPS | [code](https://github.com/befelix/safe_learning) | 45 |
| [Deep Spectral Clustering Learning](http://proceedings.mlr.press/v70/law17a.html) | ICML | [code](https://github.com/wlwkgus/DeepSpectralClustering) | 45 |
| [Semantic Compositional Networks for Visual Captioning](http://openaccess.thecvf.com/content_cvpr_2017/html/Gan_Semantic_Compositional_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zhegan27/Semantic_Compositional_Nets) | 45 |
| [On-Demand Learning for Deep Image Restoration](http://openaccess.thecvf.com/content_iccv_2017/html/Gao_On-Demand_Learning_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/rhgao/on-demand-learning) | 45 |
| [Video Pixel Networks](http://proceedings.mlr.press/v70/kalchbrenner17a.html) | ICML | [code](https://github.com/3ammor/Video-Pixel-Networks) | 45 |
| [Stabilizing Training of Generative Adversarial Networks through Regularization](http://papers.nips.cc/paper/6797-stabilizing-training-of-generative-adversarial-networks-through-regularization.pdf) | NIPS | [code](https://github.com/rothk/Stabilizing_GANs) | 45 |
| [Structured Bayesian Pruning via Log-Normal Multiplicative Noise](http://papers.nips.cc/paper/7254-structured-bayesian-pruning-via-log-normal-multiplicative-noise.pdf) | NIPS | [code](https://github.com/necludov/group-sparsity-sbp) | 44 |
| [Deriving Neural Architectures from Sequence and Graph Kernels](http://proceedings.mlr.press/v70/lei17a.html) | ICML | [code](https://github.com/taolei87/icml17_knn) | 44 |
| [Masked Autoregressive Flow for Density Estimation](http://papers.nips.cc/paper/6828-masked-autoregressive-flow-for-density-estimation.pdf) | NIPS | [code](https://github.com/gpapamak/maf) | 44 |
| [Unsupervised Adaptation for Deep Stereo](http://openaccess.thecvf.com/content_iccv_2017/html/Tonioni_Unsupervised_Adaptation_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/CVLAB-Unibo/Unsupervised-Adaptation-for-Deep-Stereo) | 44 |
| [Learning Residual Images for Face Attribute Manipulation](http://openaccess.thecvf.com/content_cvpr_2017/html/Shen_Learning_Residual_Images_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Zhongdao/FaceAttributeManipulation) | 43 |
| [Learning to Generate Long-term Future via Hierarchical Prediction](http://proceedings.mlr.press/v70/villegas17a.html) | ICML | [code](https://github.com/rubenvillegas/icml2017hierchvid) | 43 |
| [Accurate Optical Flow via Direct Cost Volume Processing](http://openaccess.thecvf.com/content_cvpr_2017/html/Xu_Accurate_Optical_Flow_CVPR_2017_paper.html) | CVPR | [code](https://github.com/IntelVCL/dcflow) | 42 |
| [Generalized Orderless Pooling Performs Implicit Salient Matching](http://openaccess.thecvf.com/content_iccv_2017/html/Simon_Generalized_Orderless_Pooling_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cvjena/alpha_pooling) | 42 |
| [Comparative Evaluation of Hand-Crafted and Learned Local Features](http://openaccess.thecvf.com/content_cvpr_2017/html/Schonberger_Comparative_Evaluation_of_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ahojnnes/local-feature-evaluation) | 42 |
| [SchNet: A continuous-filter convolutional neural network for modeling quantum interactions](http://papers.nips.cc/paper/6700-schnet-a-continuous-filter-convolutional-neural-network-for-modeling-quantum-interactions.pdf) | NIPS | [code](https://github.com/atomistic-machine-learning/SchNet) | 41 |
| [Temporal Generative Adversarial Nets With Singular Value Clipping](http://openaccess.thecvf.com/content_iccv_2017/html/Saito_Temporal_Generative_Adversarial_ICCV_2017_paper.html) | ICCV | [code](https://github.com/pfnet-research/tgan) | 41 |
| [Multiplicative Normalizing Flows for Variational Bayesian Neural Networks](http://proceedings.mlr.press/v70/louizos17a.html) | ICML | [code](https://github.com/AMLab-Amsterdam/MNF_VBNN) | 41 |
| [Neural Scene De-Rendering](http://openaccess.thecvf.com/content_cvpr_2017/html/Wu_Neural_Scene_De-Rendering_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jiajunwu/nsd) | 40 |
| [Semantic Image Inpainting With Deep Generative Models](http://openaccess.thecvf.com/content_cvpr_2017/html/Yeh_Semantic_Image_Inpainting_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ChengBinJin/semantic-image-inpainting) | 40 |
| [A Linear-Time Kernel Goodness-of-Fit Test](http://papers.nips.cc/paper/6630-a-linear-time-kernel-goodness-of-fit-test.pdf) | NIPS | [code](https://github.com/wittawatj/kernel-gof) | 40 |
| [Least Squares Generative Adversarial Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Mao_Least_Squares_Generative_ICCV_2017_paper.html) | ICCV | [code](https://github.com/GunhoChoi/LSGAN-TF) | 39 |
| [Diversified Texture Synthesis With Feed-Forward Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Li_Diversified_Texture_Synthesis_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Yijunmaverick/MultiTextureSynthesis) | 39 |
| [No Fuss Distance Metric Learning Using Proxies](http://openaccess.thecvf.com/content_iccv_2017/html/Movshovitz-Attias_No_Fuss_Distance_ICCV_2017_paper.html) | ICCV | [code](https://github.com/dichotomies/proxy-nca) | 38 |
| [Template Matching With Deformable Diversity Similarity](http://openaccess.thecvf.com/content_cvpr_2017/html/Talmi_Template_Matching_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/roimehrez/DDIS) | 38 |
| [What's in a Question: Using Visual Questions as a Form of Supervision](http://openaccess.thecvf.com/content_cvpr_2017/html/Ganju_Whats_in_a_CVPR_2017_paper.html) | CVPR | [code](https://github.com/sidgan/whats_in_a_question) | 38 |
| [Face Normals "In-The-Wild" Using Fully Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Trigeorgis_Face_Normals_In-The-Wild_CVPR_2017_paper.html) | CVPR | [code](https://github.com/trigeorgis/face_normals_cvpr17) | 38 |
| [Conditional Image Synthesis with Auxiliary Classifier GANs](http://proceedings.mlr.press/v70/odena17a.html) | ICML | [code](https://github.com/kimhc6028/acgan-pytorch) | 37 |
| [Neural Episodic Control](http://proceedings.mlr.press/v70/pritzel17a.html) | ICML | [code](https://github.com/EndingCredits/Neural-Episodic-Control) | 37 |
| [3D-PRNN: Generating Shape Primitives With Recurrent Neural Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Zou_3D-PRNN_Generating_Shape_ICCV_2017_paper.html) | ICCV | [code](https://github.com/zouchuhang/3D-PRNN) | 37 |
| [Structured Embedding Models for Grouped Data](http://papers.nips.cc/paper/6629-structured-embedding-models-for-grouped-data.pdf) | NIPS | [code](https://github.com/mariru/structured_embeddings) | 36 |
| [Learning Active Learning from Data](http://papers.nips.cc/paper/7010-learning-active-learning-from-data.pdf) | NIPS | [code](https://github.com/ksenia-konyushkova/LAL) | 36 |
| [Unified Deep Supervised Domain Adaptation and Generalization](http://openaccess.thecvf.com/content_iccv_2017/html/Motiian_Unified_Deep_Supervised_ICCV_2017_paper.html) | ICCV | [code](https://github.com/samotiian/CCSA) | 35 |
| [Transformation-Grounded Image Generation Network for Novel 3D View Synthesis](http://openaccess.thecvf.com/content_cvpr_2017/html/Park_Transformation-Grounded_Image_Generation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/silverbottlep/tvsn) | 35 |
| [Structured Attentions for Visual Question Answering](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_Structured_Attentions_for_ICCV_2017_paper.html) | ICCV | [code](https://github.com/shtechair/vqa-sva) | 34 |
| [Geometric Loss Functions for Camera Pose Regression With Deep Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Kendall_Geometric_Loss_Functions_CVPR_2017_paper.html) | CVPR | [code](https://github.com/futurely/deep-camera-relocalization) | 34 |
| [VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization](http://openaccess.thecvf.com/content_cvpr_2017/html/Clark_VidLoc_A_Deep_CVPR_2017_paper.html) | CVPR | [code](https://github.com/futurely/deep-camera-relocalization) | 34 |
| [QMDP-Net: Deep Learning for Planning under Partial Observability](http://papers.nips.cc/paper/7055-qmdp-net-deep-learning-for-planning-under-partial-observability.pdf) | NIPS | [code](https://github.com/AdaCompNUS/qmdp-net) | 34 |
| [Using Ranking-CNN for Age Estimation](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_Using_Ranking-CNN_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/RankingCNN/Using-Ranking-CNN-for-Age-Estimation) | 33 |
| [Hierarchical Boundary-Aware Neural Encoder for Video Captioning](http://openaccess.thecvf.com/content_cvpr_2017/html/Baraldi_Hierarchical_Boundary-Aware_Neural_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Yugnaynehc/banet) | 33 |
| [Unsupervised Learning of Disentangled Representations from Video](http://papers.nips.cc/paper/7028-unsupervised-learning-of-disentangled-representations-from-video.pdf) | NIPS | [code](https://github.com/edenton/drnet-py) | 32 |
| [Deep Learning on Lie Groups for Skeleton-Based Action Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Huang_Deep_Learning_on_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zzhiwu/LieNet) | 32 |
| [Deep Variation-Structured Reinforcement Learning for Visual Relationship and Attribute Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Liang_Deep_Variation-Structured_Reinforcement_CVPR_2017_paper.html) | CVPR | [code](https://github.com/nexusapoorvacus/DeepVariationStructuredRL) | 32 |
| [3D Point Cloud Registration for Localization Using a Deep Neural Network Auto-Encoder](http://openaccess.thecvf.com/content_cvpr_2017/html/Elbaz_3D_Point_Cloud_CVPR_2017_paper.html) | CVPR | [code](https://github.com/gilbaz/LORAX) | 32 |
| [StyleNet: Generating Attractive Visual Captions With Styles](http://openaccess.thecvf.com/content_cvpr_2017/html/Gan_StyleNet_Generating_Attractive_CVPR_2017_paper.html) | CVPR | [code](https://github.com/kacky24/stylenet) | 32 |
| [Dynamic Word Embeddings](http://proceedings.mlr.press/v70/bamler17a.html) | ICML | [code](https://github.com/YingyuLiang/SemanticVector) | 32 |
| [Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon](http://papers.nips.cc/paper/7071-learning-to-prune-deep-neural-networks-via-layer-wise-optimal-brain-surgeon.pdf) | NIPS | [code](https://github.com/csyhhu/L-OBS) | 31 |
| [Continual Learning Through Synaptic Intelligence](http://proceedings.mlr.press/v70/zenke17a.html) | ICML | [code](https://github.com/ganguli-lab/pathint) | 31 |
| [Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes](http://openaccess.thecvf.com/content_cvpr_2017/html/Pohlen_Full-Resolution_Residual_Networks_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hiwonjoon/tf-frrn) | 31 |
| [Learning Detection With Diverse Proposals](http://openaccess.thecvf.com/content_cvpr_2017/html/Azadi_Learning_Detection_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/azadis/LDDP) | 31 |
| [LCNN: Lookup-Based Convolutional Neural 
Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBagherinezhad_LCNN_Lookup-Based_Convolutional_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhessamb\u002Flcnn) | 31 | \n| [Towards Accurate Multi-Person Pose Estimation in the Wild](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPapandreou_Towards_Accurate_Multi-Person_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhackiey\u002Fkeypoints) | 30 | \n| [Real-Time Neural Style Transfer for Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHuang_Real-Time_Neural_Style_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcuraai00\u002FRT-StyleTransfer-forVideo) | 30 | \n| [Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FShetty_Speaking_the_Same_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FrakshithShetty\u002FcaptionGAN) | 30 | \n| [Deep Co-Occurrence Feature Learning for Visual Object Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FShih_Deep_Co-Occurrence_Feature_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyafangshih\u002FDeep-COOC) | 29 | \n| [Joint distribution optimal transportation for domain adaptation](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6963-joint-distribution-optimal-transportation-for-domain-adaptation.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frflamary\u002FJDOT) | 29 | \n| [Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FCao_Realtime_Multi-Person_2D_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FPoseAIChallenger\u002Fmxnet_pose_for_AI_challenger) | 29 | \n| [SplitNet: Learning to 
Semantically Split Deep Networks for Parameter Reduction and Model Parallelization](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fkim17b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdalgu90\u002Fsplitnet-wrn) | 29 | \n| [The Statistical Recurrent Unit](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Foliva17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FDLHacks\u002FSRU) | 29 | \n| [A Unified Approach of Multi-Scale Deep and Hand-Crafted Features for Defocus Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPark_A_Unified_Approach_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzzangjinsun\u002FDHDE_CVPR17) | 28 | \n| [Learning Spread-Out Local Feature Descriptors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhang_Learning_Spread-Out_Local_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FColumbiaDVMM\u002FSpread-out_Local_Feature_Descriptor) | 28 | \n| [Event-Based Visual Inertial Odometry](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhu_Event-Based_Visual_Inertial_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdaniilidis-group\u002Fevent_feature_tracking) | 27 | \n| [DropoutNet: Addressing Cold Start in Recommender Systems](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7081-dropoutnet-addressing-cold-start-in-recommender-systems.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Flayer6ai-labs\u002FDropoutNet) | 27 | \n| [Phrase Localization and Visual Relationship Detection With Comprehensive Image-Language Cues](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FPlummer_Phrase_Localization_and_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FBryanPlummer\u002Fpl-clc) | 27 | \n| [Harvesting Multiple Views for Marker-Less 3D Human Pose 
Annotations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FPavlakos_Harvesting_Multiple_Views_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgeopavlakos\u002Fharvesting) | 27 | \n| [Deep 360 Pilot: Learning a Deep Agent for Piloting Through 360deg Sports Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHu_Deep_360_Pilot_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Feborboihuc\u002FDeep360Pilot-CVPR17) | 27 | \n| [Neural Message Passing for Quantum Chemistry](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fgilmer17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbrain-research\u002Fmpnn) | 27 | \n| [State-Frequency Memory Recurrent Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fhu17c.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fhhkunming\u002FState-Frequency-Memory-Recurrent-Neural-Networks) | 27 | \n| [DeepCD: Learning Deep Complementary Descriptors for Patch Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FYang_DeepCD_Learning_Deep_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fshamangary\u002FDeepCD) | 26 | \n| [Contrastive Learning for Image Captioning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6691-contrastive-learning-for-image-captioning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fdoubledaibo\u002Fclcaption_nips2017) | 26 | \n| [Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6760-stochastic-optimization-with-variance-reduction-for-infinite-datasets-with-finite-sum-structure.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Falbietz\u002Fstochs) | 26 | \n| [Learning High Dynamic Range From Outdoor 
Panoramas](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhang_Learning_High_Dynamic_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjacenfox\u002Fldr2hdr-public) | 26 | \n| [Speed\u002FAccuracy Trade-Offs for Modern Convolutional Object Detectors](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHuang_SpeedAccuracy_Trade-Offs_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frayanelleuch\u002FSpeed-accuracy-trade-offs-for-modern-convolutional-object-detectors) | 26 | \n| [Learning to Detect Salient Objects With Image-Level Supervision](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Learning_to_Detect_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fscott89\u002FWSS) | 26 | \n| [Improved Variational Autoencoders for Text Modeling using Dilated Convolutions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fyang17d.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fryokamoi\u002Fdcnn_textvae) | 26 | \n| [Interspecies Knowledge Transfer for Facial Keypoint Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FRashid_Interspecies_Knowledge_Transfer_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmenorashid\u002Fanimal_human_kp) | 25 | \n| [YASS: Yet Another Spike Sorter](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6989-yass-yet-another-spike-sorter.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fpaninski-lab\u002Fyass) | 25 | \n| [Open Set Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FBusto_Open_Set_Domain_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FHeliot7\u002Fopen-set-da) | 25 | \n| [Domain-Adaptive Deep Network 
Compression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMasana_Domain-Adaptive_Deep_Network_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmmasana\u002FDALR) | 24 | \n| [Long Short-Term Memory Kalman Filters: Recurrent Neural Estimators for Pose Regularization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FCoskun_Long_Short-Term_Memory_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FSeleucia\u002Flstmkf_ICCV2017) | 24 | \n| [Temporal Context Network for Activity Localization in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDai_Temporal_Context_Network_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fvdavid70619\u002FTCN) | 24 | \n| [Incremental Learning of Object Detectors Without Catastrophic Forgetting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FShmelkov_Incremental_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fkshmelkov\u002Fincremental_detectors) | 24 | \n| [Dense Captioning With Joint Inference and Visual Context](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FYang_Dense_Captioning_With_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flinjieyangsc\u002Fdensecap) | 24 | \n| [Universal Adversarial Perturbations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMoosavi-Dezfooli_Universal_Adversarial_Perturbations_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002Ffast-feature-fool) | 24 | \n| [Asymmetric Tri-training for Unsupervised Domain Adaptation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fsaito17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fvtddggg\u002FATDA) | 24 | \n| [Reducing Reparameterization Gradient 
Variance](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6961-reducing-reparameterization-gradient-variance.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fandymiller\u002FReducedVarianceReparamGradients) | 24 | \n| [Exploiting Saliency for Object Segmentation From Image Level Labels](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FOh_Exploiting_Saliency_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcoallaoh\u002FGuidedLabelling) | 24 | \n| [A Dirichlet Mixture Model of Hawkes Processes for Event Sequence Clustering](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6734-a-dirichlet-mixture-model-of-hawkes-processes-for-event-sequence-clustering.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FHongtengXu\u002FHawkes-Process-Toolkit) | 24 | \n| [Shading Annotations in the Wild](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKovacs_Shading_Annotations_in_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkovibalu\u002Fsaw_release) | 24 | \n| [Straight to Shapes: Real-Time Detection of Encoded Shapes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FJetley_Straight_to_Shapes_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftorrvision\u002Fstraighttoshapes) | 23 | \n| [Dual Discriminator Generative Adversarial Nets](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6860-dual-discriminator-generative-adversarial-nets.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ftund\u002FD2GAN) | 23 | \n| [Zero-Order Reverse Filtering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTao_Zero-Order_Reverse_Filtering_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjiangsutx\u002FDeFilter) | 23 | \n| [Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent 
Net](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7026-variational-walkback-learning-a-transition-operator-as-a-stochastic-recurrent-net.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fanirudh9119\u002Fwalkback_nips17) | 23 | \n| [Learning Spherical Convolution for Fast Features from 360° Imagery](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6656-learning-spherical-convolution-for-fast-features-from-360-imagery.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsammy-su\u002FSpherical-Convolution) | 22 | \n| [Learning to Detect Sepsis with a Multitask Gaussian Process RNN Classifier](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Ffutoma17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjfutoma\u002FMGP-RNN) | 22 | \n| [Deep Cross-Modal Hashing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FJiang_Deep_Cross-Modal_Hashing_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiangqy\u002FDCMH-CVPR2017) | 22 | \n| [When Unsupervised Domain Adaptation Meets Tensor Representations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FLu_When_Unsupervised_Domain_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fpoppinace\u002FTAISL) | 22 | \n| [Image Super-Resolution Using Dense Skip Connections](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FTong_Image_Super-Resolution_Using_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fkweisamx\u002FTensorFlow-SR-DenseNet) | 22 | \n| [Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Multimodal_Transfer_A_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffullfanta\u002Fmultimodal_transfer) | 22 | \n| [STD2P: RGBD Semantic Segmentation Using Spatio-Temporal Data-Driven 
Pooling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHe_STD2P_RGBD_Semantic_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FSSAW14\u002FSTD2P) | 22 | \n| [Learning Continuous Semantic Representations of Symbolic Expressions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fallamanis17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmast-group\u002Feqnet) | 22 | \n| [Deep Growing Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FWang_Deep_Growing_Learning_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FQData\u002Fdeep2Read) | 21 | \n| [Combined Group and Exclusive Sparsity for Deep Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fyoon17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fjaehong-yoon93\u002FCGES) | 21 | \n| [Hash Embeddings for Efficient Word Representations](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7078-hash-embeddings-for-efficient-word-representations.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fdsv77\u002Fhashembedding\u002F) | 21 | \n| [Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6850-accuracy-first-selecting-a-differential-privacy-level-for-accuracy-constrained-erm.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsteven7woo\u002FAccuracy-First-Differential-Privacy) | 21 | \n| [Disentangled Representation Learning GAN for Pose-Invariant Face Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FTran_Disentangled_Representation_Learning_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhangjunh\u002FDR-GAN-by-pytorch) | 21 | \n| [Learning to Pivot with Adversarial Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6699-learning-to-pivot-with-adversarial-networks.pdf) | NIPS | 
[code](https:\u002F\u002Fgithub.com\u002Fglouppe\u002Fpaper-learning-to-pivot) | 21 | \n| [Learning Dynamic Siamese Network for Visual Object Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FGuo_Learning_Dynamic_Siamese_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ftsingqguo\u002FDSiam) | 21 | \n| [POSEidon: Face-From-Depth for Driver Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBorghi_POSEidon_Face-From-Depth_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgdubrg\u002FPOSEidon-Biwi) | 20 | \n| [Deep Metric Learning via Facility Location](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSong_Deep_Metric_Learning_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FCongWeilin\u002Fcluster-loss-tensorflow) | 20 | \n| [Automatic Spatially-Aware Fashion Concept Discovery](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHan_Automatic_Spatially-Aware_Fashion_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fxthan\u002Ffashion-200k) | 20 | \n| [The Numerics of GANs](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6779-the-numerics-of-gans.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FLMescheder\u002FTheNumericsOfGANs) | 20 | \n| [From Motion Blur to Motion Flow: A Deep Learning Solution for Removing Heterogeneous Motion Blur](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FGong_From_Motion_Blur_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdonggong1\u002Fmotion-flow-syn) | 20 | \n| [Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.html) | ICCV | 
[code](https:\u002F\u002Fgithub.com\u002Fadepierre\u002FCaffe_CycleGAN) | 20 | \n| [Zero-Inflated Exponential Family Embeddings](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fliu17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fblei-lab\u002Fzero-inflated-embedding) | 20 | \n| [InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6971-infogail-interpretable-imitation-learning-from-visual-demonstrations.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fermongroup\u002Finfogail) | 20 | \n| [Weakly-Supervised Learning of Visual Relations](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FPeyre_Weakly-Supervised_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjpeyre\u002Funrel) | 20 | \n| [Multi-Label Image Recognition by Recurrently Discovering Attentional Regions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FWang_Multi-Label_Image_Recognition_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FJames-Yip\u002FAttentionImageClass) | 20 | \n| [Scene Parsing With Global Context Embedding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHung_Scene_Parsing_With_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fhfslyc\u002FGCPNet) | 20 | \n| [Context Selection for Embedding Models](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7067-context-selection-for-embedding-models.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fblei-lab\u002Fcontext-selection-embedding) | 20 | \n| [Deep Mean-Shift Priors for Image Restoration](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6678-deep-mean-shift-priors-for-image-restoration.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FsiavashBigdeli\u002FDMSP) | 20 | \n| [Skeleton Key: Image Captioning by Skeleton-Attribute 
Decomposition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Skeleton_Key_Image_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffeiyu1990\u002FSkeleton-key) | 20 | \n| [Fully-Adaptive Feature Sharing in Multi-Task Networks With Applications in Person Attribute Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLu_Fully-Adaptive_Feature_Sharing_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fluyongxi\u002Fdeep_share) | 19 | \n| [Learning Compact Geometric Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FKhoury_Learning_Compact_Geometric_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmarckhoury\u002FCGF) | 19 | \n| [Structured Generative Adversarial Networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6979-structured-generative-adversarial-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fthudzj\u002FStructuredGAN) | 19 | \n| [Joint Gap Detection and Inpainting of Line Drawings](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FSasaki_Joint_Gap_Detection_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkaidlc\u002FCVPR2017_linedrawings) | 19 | \n| [Chained Multi-Stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZolfaghari_Chained_Multi-Stream_Networks_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmzolfaghari\u002Fchained-multistream-networks) | 19 | \n| [Adversarial Feature Matching for Text Generation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fzhang17b.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FJeff-HOU\u002FUROP-Adversarial-Feature-Matching-for-Text-Generation) | 18 | \n| [BIER - Boosting Independent Embeddings 
Robustly](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FOpitz_BIER_-_Boosting_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmop\u002Fbier) | 18 | \n| [Predictive-Corrective Networks for Action Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FDave_Predictive-Corrective_Networks_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fachalddave\u002Fpredictive-corrective) | 18 | \n| [Stochastic Generative Hashing](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fdai17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdoubling\u002FStochastic_Generative_Hashing) | 18 | \n| [A Bayesian Data Augmentation Approach for Learning Deep Models](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6872-a-bayesian-data-augmentation-approach-for-learning-deep-models.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ftoantm\u002Fkeras-bda) | 18 | \n| [Attentive Semantic Video Generation Using Captions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMarwah_Attentive_Semantic_Video_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FSingularity42\u002Fcap2vid) | 18 | \n| [MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_MDNet_A_Semantically_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzizhaozhang\u002Fmdnet-cvpr2017) | 18 | \n| [Deep Unsupervised Similarity Learning Using Partially Ordered Sets](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBautista_Deep_Unsupervised_Similarity_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fasanakoy\u002Fdeep_unsupervised_posets) | 17 | \n| [DualNet: Learn Complementary Features for Image 
Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FHou_DualNet_Learn_Complementary_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fustc-vim\u002Fdualnet) | 17 | \n| [Neural system identification for large populations separating “what” and “where”](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6942-neural-system-identification-for-large-populations-separating-what-and-where.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fdavid-klindt\u002FNIPS2017) | 17 | \n| [FALKON: An Optimal Large Scale Kernel Method](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6978-falkon-an-optimal-large-scale-kernel-method.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FLCSL\u002FFALKON_paper) | 17 | \n| [Deep Future Gaze: Gaze Anticipation on Egocentric Videos Using Adversarial Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_Deep_Future_Gaze_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FMengmi\u002Fdeepfuturegaze_gan) | 17 | \n| [Deep Learning with Topological Signatures](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6761-deep-learning-with-topological-signatures.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fc-hofer\u002Fnips2017) | 17 | \n| [Streaming Sparse Gaussian Process Approximations](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6922-streaming-sparse-gaussian-process-approximations.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fthangbui\u002Fstreaming_sparse_gp) | 17 | \n| [RPAN: An End-To-End Recurrent Pose-Attention Network for Action Recognition in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDu_RPAN_An_End-To-End_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fagethen\u002FRPAN) | 17 | \n| [Awesome Typography: Statistics-Based Text Effects 
Transfer](http://openaccess.thecvf.com/content_cvpr_2017/html/Yang_Awesome_Typography_Statistics-Based_CVPR_2017_paper.html) | CVPR | [code](https://github.com/williamyang1991/Text-Effects-Transfer) | 17 |
| [RoomNet: End-To-End Room Layout Estimation](http://openaccess.thecvf.com/content_iccv_2017/html/Lee_RoomNet_End-To-End_Room_ICCV_2017_paper.html) | ICCV | [code](https://github.com/GitBoSun/roomnet) | 17 |
| [Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval](http://openaccess.thecvf.com/content_iccv_2017/html/Song_Deep_Spatial-Semantic_Attention_ICCV_2017_paper.html) | ICCV | [code](https://github.com/yuchuochuo1023/Deep_SBIR_tf) | 16 |
| [Deep Supervised Discrete Hashing](http://papers.nips.cc/paper/6842-deep-supervised-discrete-hashing.pdf) | NIPS | [code](https://github.com/liqi-casia/DSDH-HashingCode) | 16 |
| [Few-Shot Learning Through an Information Retrieval Lens](http://papers.nips.cc/paper/6820-few-shot-learning-through-an-information-retrieval-lens.pdf) | NIPS | [code](https://github.com/eleniTriantafillou/few_shot_mAP_public) | 16 |
| [Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach](http://papers.nips.cc/paper/7023-estimating-accuracy-from-unlabeled-data-a-probabilistic-logic-approach.pdf) | NIPS | [code](https://github.com/eaplatanios/makina) | 16 |
| [Learning to Push the Limits of Efficient FFT-Based Image Deconvolution](http://openaccess.thecvf.com/content_iccv_2017/html/Kruse_Learning_to_Push_ICCV_2017_paper.html) | ICCV | [code](https://github.com/uschmidt83/fourier-deconvolution-network) | 16 |
| [Federated Multi-Task Learning](http://papers.nips.cc/paper/7029-federated-multi-task-learning.pdf) | NIPS | [code](https://github.com/gingsmith/fmtl) | 16 |
| [Label Distribution Learning Forests](http://papers.nips.cc/paper/6685-label-distribution-learning-forests.pdf) | NIPS | [code](https://github.com/shenwei1231/caffe-LDLForests) | 16 |
| [Deep Multitask Architecture for Integrated 2D and 3D Human Sensing](http://openaccess.thecvf.com/content_cvpr_2017/html/Popa_Deep_Multitask_Architecture_CVPR_2017_paper.html) | CVPR | [code](https://github.com/alinionutpopa/dmhs) | 16 |
| [Estimating Mutual Information for Discrete-Continuous Mixtures](http://papers.nips.cc/paper/7180-estimating-mutual-information-for-discrete-continuous-mixtures.pdf) | NIPS | [code](https://github.com/wgao9/mixed_KSG) | 16 |
| [Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes](http://openaccess.thecvf.com/content_cvpr_2017/html/Golestaneh_Spatially-Varying_Blur_Detection_CVPR_2017_paper.html) | CVPR | [code](https://github.com/isalirezag/HiFST) | 16 |
| [StyleBank: An Explicit Representation for Neural Image Style Transfer](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_StyleBank_An_Explicit_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jxcodetw/Stylebank) | 16 |
| [Surface Normals in the Wild](http://openaccess.thecvf.com/content_iccv_2017/html/Chen_Surface_Normals_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/umich-vl/surface_normals) | 15 |
| [Automatic Discovery of the Statistical Types of Variables in a Dataset](http://proceedings.mlr.press/v70/valera17a.html) | ICML | [code](https://github.com/ivaleraM/DataTypes) | 15 |
| [Learning Diverse Image Colorization](http://openaccess.thecvf.com/content_cvpr_2017/html/Deshpande_Learning_Diverse_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/aditya12agd5/divcolor) | 15 |
| [Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems](http://openaccess.thecvf.com/content_iccv_2017/html/Meinhardt_Learning_Proximal_Operators_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tum-vision/learn_prox_ops) | 15 |
| [Non-Local Deep Features for Salient Object Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Luo_Non-Local_Deep_Features_CVPR_2017_paper.html) | CVPR | [code](https://github.com/AceCoooool/NLFD-pytorch) | 15 |
| [Structure-Measure: A New Way to Evaluate Foreground Maps](http://openaccess.thecvf.com/content_iccv_2017/html/Fan_Structure-Measure_A_New_ICCV_2017_paper.html) | ICCV | [code](https://github.com/DengPingFan/S-measure) | 15 |
| [Shallow Updates for Deep Reinforcement Learning](http://papers.nips.cc/paper/6906-shallow-updates-for-deep-reinforcement-learning.pdf) | NIPS | [code](https://github.com/Shallow-Updates-for-Deep-RL/Shallow_Updates_for_Deep_RL) | 15 |
| [Wasserstein Generative Adversarial Networks](http://proceedings.mlr.press/v70/arjovsky17a.html) | ICML | [code](https://github.com/luslab/scRNAseq-WGAN-GP) | 15 |
| [Recurrent 3D Pose Sequence Machines](http://openaccess.thecvf.com/content_cvpr_2017/html/Lin_Recurrent_3D_Pose_CVPR_2017_paper.html) | CVPR | [code](https://github.com/MudeLin/RPSM) | 15 |
| [Variational Dropout Sparsifies Deep Neural Networks](http://proceedings.mlr.press/v70/molchanov17a.html) | ICML | [code](https://github.com/soskek/variational_dropout_sparsifies_dnn) | 15 |
| [Captioning Images With Diverse Objects](http://openaccess.thecvf.com/content_cvpr_2017/html/Venugopalan_Captioning_Images_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/vsubhashini/noc) | 15 |
| [Off-policy evaluation for slate recommendation](http://papers.nips.cc/paper/6954-off-policy-evaluation-for-slate-recommendation.pdf) | NIPS | [code](https://github.com/adith387/slates_semisynth_expts) | 15 |
| [Attributes2Classname: A Discriminative Model for Attribute-Based Unsupervised Zero-Shot Learning](http://openaccess.thecvf.com/content_iccv_2017/html/Demirel_Attributes2Classname_A_Discriminative_ICCV_2017_paper.html) | ICCV | [code](https://github.com/berkandemirel/attributes2classname) | 14 |
| [Benchmarking Denoising Algorithms With Real Photographs](http://openaccess.thecvf.com/content_cvpr_2017/html/Plotz_Benchmarking_Denoising_Algorithms_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lbasek/image-denoising-benchmark) | 14 |
| [Neural Aggregation Network for Video Face Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Yang_Neural_Aggregation_Network_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jinyanxu/Neural-Aggregation-Network-for-Video-Face-Recognition) | 14 |
| [Learned Contextual Feature Reweighting for Image Geo-Localization](http://openaccess.thecvf.com/content_cvpr_2017/html/Kim_Learned_Contextual_Feature_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hyojinie/crn) | 14 |
| [Streaming Weak Submodularity: Interpreting Neural Networks on the Fly](http://papers.nips.cc/paper/6993-streaming-weak-submodularity-interpreting-neural-networks-on-the-fly.pdf) | NIPS | [code](https://github.com/eelenberg/streak) | 14 |
| [CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training](http://openaccess.thecvf.com/content_iccv_2017/html/Bao_CVAE-GAN_Fine-Grained_Image_ICCV_2017_paper.html) | ICCV | [code](https://github.com/yanzhicong/VAE-GAN) | 14 |
| [VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation](http://openaccess.thecvf.com/content_iccv_2017/html/Gan_VQS_Linking_Segmentations_ICCV_2017_paper.html) | ICCV | [code](https://github.com/Cold-Winter/vqs) | 14 |
| [Spherical convolutions and their application in molecular modelling](http://papers.nips.cc/paper/6935-spherical-convolutions-and-their-application-in-molecular-modelling.pdf) | NIPS | [code](https://github.com/deepfold/NIPS2017) | 14 |
| [Multi-Information Source Optimization](http://papers.nips.cc/paper/7016-multi-information-source-optimization.pdf) | NIPS | [code](https://github.com/deepfold/NIPS2017) | 14 |
| [Convolutional Neural Network Architecture for Geometric Matching](http://openaccess.thecvf.com/content_cvpr_2017/html/Rocco_Convolutional_Neural_Network_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hjweide/convnet-for-geometric-matching) | 14 |
| [Neural Face Editing With Intrinsic Image Disentangling](http://openaccess.thecvf.com/content_cvpr_2017/html/Shu_Neural_Face_Editing_CVPR_2017_paper.html) | CVPR | [code](https://github.com/zhixinshu/NeuralFaceEditing) | 14 |
| [Realistic Dynamic Facial Textures From a Single Image Using GANs](http://openaccess.thecvf.com/content_iccv_2017/html/Olszewski_Realistic_Dynamic_Facial_ICCV_2017_paper.html) | ICCV | [code](https://github.com/leehomyc/ICCV-2017-Paper) | 14 |
| [Predictive State Recurrent Neural Networks](http://papers.nips.cc/paper/7186-predictive-state-recurrent-neural-networks.pdf) | NIPS | [code](https://github.com/cmdowney/psrnn) | 13 |
| [Deep TextSpotter: An End-To-End Trainable Scene Text Localization and Recognition Framework](http://openaccess.thecvf.com/content_iccv_2017/html/Busta_Deep_TextSpotter_An_ICCV_2017_paper.html) | ICCV | [code](https://github.com/VeitL/OCR) | 13 |
| [ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events](http://papers.nips.cc/paper/6932-extremeweather-a-large-scale-climate-dataset-for-semi-supervised-detection-localization-and-understanding-of-extreme-weather-events.pdf) | NIPS | [code](https://github.com/eracah/hur-detect) | 13 |
| [Hunt For The Unique, Stable, Sparse And Fast Feature Learning On Graphs](http://papers.nips.cc/paper/6614-hunt-for-the-unique-stable-sparse-and-fast-feature-learning-on-graphs.pdf) | NIPS | [code](https://github.com/vermaMachineLearning/FGSD) | 13 |
| [Consensus Convolutional Sparse Coding](http://openaccess.thecvf.com/content_iccv_2017/html/Choudhury_Consensus_Convolutional_Sparse_ICCV_2017_paper.html) | ICCV | [code](https://github.com/vccimaging/CCSC_code_ICCV2017) | 13 |
| [Weakly Supervised Affordance Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Sawatzky_Weakly_Supervised_Affordance_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ykztawas/Weakly-Supervised-Affordance-Detection) | 13 |
| [Joint Learning of Object and Action Detectors](http://openaccess.thecvf.com/content_iccv_2017/html/Kalogeiton_Joint_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/vkalogeiton/joint-object-action-learning) | 13 |
| [Light Field Blind Motion Deblurring](http://openaccess.thecvf.com/content_cvpr_2017/html/Srinivasan_Light_Field_Blind_CVPR_2017_paper.html) | CVPR | [code](https://github.com/pratulsrinivasan/Light_Field_Blind_Motion_Deblurring) | 13 |
| [Asynchronous Stochastic Gradient Descent with Delay Compensation](http://proceedings.mlr.press/v70/zheng17b.html) | ICML | [code](https://github.com/Microsoft/Delayed-Compensation-Asynchronous-Stochastic-Gradient-Descent-for-Multiverso) | 13 |
| [Unrolled Memory Inner-Products: An Abstract GPU Operator for Efficient Vision-Related Computations](http://openaccess.thecvf.com/content_iccv_2017/html/Lin_Unrolled_Memory_Inner-Products_ICCV_2017_paper.html) | ICCV | [code](https://github.com/johnjohnlin/UMI) | 12 |
| [Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification](http://papers.nips.cc/paper/7125-maximizing-subset-accuracy-with-recurrent-neural-networks-in-multi-label-classification.pdf) | NIPS | [code](https://github.com/JinseokNam/mlc2seq) | 12 |
| [Self-Organized Text Detection With Minimal Post-Processing via Border Learning](http://openaccess.thecvf.com/content_iccv_2017/html/Wu_Self-Organized_Text_Detection_ICCV_2017_paper.html) | ICCV | [code](https://github.com/saicoco/tf-sotd) | 12 |
| [Coordinated Multi-Agent Imitation Learning](http://proceedings.mlr.press/v70/le17a.html) | ICML | [code](https://github.com/hoangminhle/MultiAgent-ImitationLearning) | 12 |
| [Gradient descent GAN optimization is locally stable](http://papers.nips.cc/paper/7142-gradient-descent-gan-optimization-is-locally-stable.pdf) | NIPS | [code](https://github.com/locuslab/gradient_regularized_gan) | 12 |
| [Removing Rain From Single Images via a Deep Detail Network](http://openaccess.thecvf.com/content_cvpr_2017/html/Fu_Removing_Rain_From_CVPR_2017_paper.html) | CVPR | [code](https://github.com/XMU-smartdsp/Removing_Rain) | 12 |
| [Convexified Convolutional Neural Networks](http://proceedings.mlr.press/v70/zhang17f.html) | ICML | [code](https://github.com/zhangyuc/CCNN) | 12 |
| [Multigrid Neural Architectures](http://openaccess.thecvf.com/content_cvpr_2017/html/Ke_Multigrid_Neural_Architectures_CVPR_2017_paper.html) | CVPR | [code](https://github.com/buttomnutstoast/Multigrid-Neural-Architectures) | 12 |
| [VegFru: A Domain-Specific Dataset for Fine-Grained Visual Categorization](http://openaccess.thecvf.com/content_iccv_2017/html/Hou_VegFru_A_Domain-Specific_ICCV_2017_paper.html) | ICCV | [code](https://github.com/ustc-vim/vegfru) | 12 |
| [Attend and Predict: Understanding Gene Regulation by Selective Attention on Chromatin](http://papers.nips.cc/paper/7255-attend-and-predict-understanding-gene-regulation-by-selective-attention-on-chromatin.pdf) | NIPS | [code](https://github.com/QData/AttentiveChrome) | 12 |
| [Differential Angular Imaging for Material Recognition](http://openaccess.thecvf.com/content_cvpr_2017/html/Xue_Differential_Angular_Imaging_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jiaxue1993/DAIN) | 12 |
| [A Multilayer-Based Framework for Online Background Subtraction With Freely Moving Cameras](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_A_Multilayer-Based_Framework_ICCV_2017_paper.html) | ICCV | [code](https://github.com/EthanZhu90/MultilayerBSMC_ICCV17) | 11 |
| [Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation](http://papers.nips.cc/paper/6821-formal-guarantees-on-the-robustness-of-a-classifier-against-adversarial-manipulation.pdf) | NIPS | [code](https://github.com/max-andr/cross-lipschitz) | 11 |
| [Max-value Entropy Search for Efficient Bayesian Optimization](http://proceedings.mlr.press/v70/wang17e.html) | ICML | [code](https://github.com/zi-w/Max-value-Entropy-Search) | 11 |
| [Higher-Order Integration of Hierarchical Convolutional Activations for Fine-Grained Visual Categorization](http://openaccess.thecvf.com/content_iccv_2017/html/Cai_Higher-Order_Integration_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cssjcai/hihca) | 11 |
| [Generalized Deep Image to Image Regression](http://openaccess.thecvf.com/content_cvpr_2017/html/Santhanam_Generalized_Deep_Image_CVPR_2017_paper.html) | CVPR | [code](https://github.com/venkai/RBDN) | 11 |
| [Adversarial Image Perturbation for Privacy Protection -- A Game Theory Perspective](http://openaccess.thecvf.com/content_iccv_2017/html/Oh_Adversarial_Image_Perturbation_ICCV_2017_paper.html) | ICCV | [code](https://github.com/coallaoh/AIP) | 11 |
| [Predicting Human Activities Using Stochastic Grammar](http://openaccess.thecvf.com/content_iccv_2017/html/Qi_Predicting_Human_Activities_ICCV_2017_paper.html) | ICCV | [code](https://github.com/SiyuanQi/grammar-activity-prediction) | 11 |
| [DESIRE: Distant Future Prediction in Dynamic Scenes With Interacting Agents](http://openaccess.thecvf.com/content_cvpr_2017/html/Lee_DESIRE_Distant_Future_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yadrimz/DESIRE) | 11 |
| [Fisher GAN](http://papers.nips.cc/paper/6845-fisher-gan.pdf) | NIPS | [code](https://github.com/tomsercu/FisherGAN) | 11 |
| [High-Order Attention Models for Visual Question Answering](http://papers.nips.cc/paper/6957-high-order-attention-models-for-visual-question-answering.pdf) | NIPS | [code](https://github.com/idansc/HighOrderAtten) | 11 |
| [IM2CAD](http://openaccess.thecvf.com/content_cvpr_2017/html/Izadinia_IM2CAD_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yyong119/IM2CAD) | 11 |
| [On Fairness and Calibration](http://papers.nips.cc/paper/7151-on-fairness-and-calibration.pdf) | NIPS | [code](https://github.com/gpleiss/equalized_odds_and_calibration) | 11 |
| [DeepPermNet: Visual Permutation Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Santa_Cruz_DeepPermNet_Visual_Permutation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/rfsantacruz/deep-perm-net) | 10 |
| [f-GANs in an Information Geometric Nutshell](http://papers.nips.cc/paper/6649-f-gans-in-an-information-geometric-nutshell.pdf) | NIPS | [code](https://github.com/qulizhen/fgan_info_geometric) | 10 |
| [Revisiting IM2GPS in the Deep Learning Era](http://openaccess.thecvf.com/content_iccv_2017/html/Vo_Revisiting_IM2GPS_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/lugiavn/revisiting-im2gps) | 10 |
| [Attentional Correlation Filter Network for Adaptive Visual Tracking](http://openaccess.thecvf.com/content_cvpr_2017/html/Choi_Attentional_Correlation_Filter_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jongwon20000/ACFN) | 10 |
| [Learning Cross-Modal Deep Representations for Robust Pedestrian Detection](http://openaccess.thecvf.com/content_cvpr_2017/html/Xu_Learning_Cross-Modal_Deep_CVPR_2017_paper.html) | CVPR | [code](https://github.com/danxuhk/CMT-CNN) | 10 |
| [Confident Multiple Choice Learning](http://proceedings.mlr.press/v70/lee17b.html) | ICML | [code](https://github.com/chhwang/cmcl) | 10 |
| [Curriculum Dropout](http://openaccess.thecvf.com/content_iccv_2017/html/Morerio_Curriculum_Dropout_ICCV_2017_paper.html) | ICCV | [code](https://github.com/pmorerio/curriculum-dropout) | 9 |
| [Cognitive Mapping and Planning for Visual Navigation](http://openaccess.thecvf.com/content_cvpr_2017/html/Gupta_Cognitive_Mapping_and_CVPR_2017_paper.html) | CVPR | [code](https://github.com/agiantwhale/cognitive-mapping-agent) | 9 |
| [Optimized Pre-Processing for Discrimination Prevention](http://papers.nips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention.pdf) | NIPS | [code](https://github.com/fair-preprocessing/nips2017) | 9 |
| [Learning Motion Patterns in Videos](http://openaccess.thecvf.com/content_cvpr_2017/html/Tokmakov_Learning_Motion_Patterns_CVPR_2017_paper.html) | CVPR | [code](https://github.com/pirahansiah/opencv) | 9 |
| [Scalable Log Determinants for Gaussian Process Kernel Learning](http://papers.nips.cc/paper/7212-scalable-log-determinants-for-gaussian-process-kernel-learning.pdf) | NIPS | [code](https://github.com/kd383/GPML_SLD) | 9 |
| [A Hierarchical Approach for Generating Descriptive Image Paragraphs](http://openaccess.thecvf.com/content_cvpr_2017/html/Krause_A_Hierarchical_Approach_CVPR_2017_paper.html) | CVPR | [code](https://github.com/InnerPeace-Wu/im2p-tensorflow) | 9 |
| [Deep Crisp Boundaries](http://openaccess.thecvf.com/content_cvpr_2017/html/Wang_Deep_Crisp_Boundaries_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Wangyupei/CED) | 9 |
| [Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization](http://papers.nips.cc/paper/6611-breaking-the-nonsmooth-barrier-a-scalable-parallel-method-for-composite-optimization.pdf) | NIPS | [code](https://github.com/fabianp/ProxASAGA) | 9 |
| [Practical Data-Dependent Metric Compression with Provable Guarantees](http://papers.nips.cc/paper/6855-practical-data-dependent-metric-compression-with-provable-guarantees.pdf) | NIPS | [code](https://github.com/talwagner/quadsketch) | 9 |
| [Do Deep Neural Networks Suffer from Crowding?](http://papers.nips.cc/paper/7146-do-deep-neural-networks-suffer-from-crowding.pdf) | NIPS | [code](https://github.com/CBMM/eccentricity) | 9 |
| [A Non-Convex Variational Approach to Photometric Stereo Under Inaccurate Lighting](http://openaccess.thecvf.com/content_cvpr_2017/html/Queau_A_Non-Convex_Variational_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yqueau/robust_ps) | 9 |
| [End-To-End Learning of Geometry and Context for Deep Stereo Regression](http://openaccess.thecvf.com/content_iccv_2017/html/Kendall_End-To-End_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/liuruijin17/RickLiuGC) | 9 |
| [From Bayesian Sparsity to Gated Recurrent Nets](http://papers.nips.cc/paper/7139-from-bayesian-sparsity-to-gated-recurrent-nets.pdf) | NIPS | [code](https://github.com/hehaodele/SBL-LSTM-Net) | 8 |
| [Regret Minimization in MDPs with Options without Prior Knowledge](http://papers.nips.cc/paper/6909-regret-minimization-in-mdps-with-options-without-prior-knowledge.pdf) | NIPS | [code](https://github.com/RonanFR/UCRL) | 8 |
| [Following Gaze in Video](http://openaccess.thecvf.com/content_iccv_2017/html/Recasens_Following_Gaze_in_ICCV_2017_paper.html) | ICCV | [code](https://github.com/recasens/Gaze-Following) | 8 |
| [Model-Powered Conditional Independence Test](http://papers.nips.cc/paper/6888-model-powered-conditional-independence-test.pdf) | NIPS | [code](https://github.com/rajatsen91/CCIT) | 8 |
| [Cost efficient gradient boosting](http://papers.nips.cc/paper/6753-cost-efficient-gradient-boosting.pdf) | NIPS | [code](https://github.com/svenpeter42/LightGBM-CEGB) | 8 |
| [Reflectance Adaptive Filtering Improves Intrinsic Image Estimation](http://openaccess.thecvf.com/content_cvpr_2017/html/Nestmeyer_Reflectance_Adaptive_Filtering_CVPR_2017_paper.html) | CVPR | [code](https://github.com/tnestmeyer/reflectance-filtering) | 8 |
| [DeepNav: Learning to Navigate Large Cities](http://openaccess.thecvf.com/content_cvpr_2017/html/Brahmbhatt_DeepNav_Learning_to_CVPR_2017_paper.html) | CVPR | [code](https://github.com/samarth-robo/deepnav_cvpr17) | 8 |
| [Look, Listen and Learn](http://openaccess.thecvf.com/content_iccv_2017/html/Arandjelovic_Look_Listen_and_ICCV_2017_paper.html) | ICCV | [code](https://github.com/Kajiyu/LLLNet) | 8 |
| [Attention-Aware Face Hallucination via Deep Reinforcement Learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Cao_Attention-Aware_Face_Hallucination_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ykshi/facehallucination) | 8 |
| [Plan, Attend, Generate: Planning for Sequence-to-Sequence Models](http://papers.nips.cc/paper/7131-plan-attend-generate-planning-for-sequence-to-sequence-models.pdf) | NIPS | [code](https://github.com/Dutil/PAG) | 8 |
| [Introspective Neural Networks for Generative Modeling](http://openaccess.thecvf.com/content_iccv_2017/html/Lazarow_Introspective_Neural_Networks_ICCV_2017_paper.html) | ICCV | [code](https://github.com/intermilan/inng) | 8 |
| [Affinity Clustering: Hierarchical Clustering at Scale](http://papers.nips.cc/paper/7262-affinity-clustering-hierarchical-clustering-at-scale.pdf) | NIPS | [code](https://github.com/MahsaDerakhshan/AffinityClustering) | 8 |
| [Gaze Embeddings for Zero-Shot Image Classification](http://openaccess.thecvf.com/content_cvpr_2017/html/Karessli_Gaze_Embeddings_for_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Noura-kr/CVPR17) | 8 |
| [Input Switched Affine Networks: An RNN Architecture Designed for Interpretability](http://proceedings.mlr.press/v70/foerster17a.html) | ICML | [code](https://github.com/philipperemy/tensorflow-isan-rnn) | 8 |
| [Online multiclass boosting](http://papers.nips.cc/paper/6693-online-multiclass-boosting.pdf) | NIPS | [code](https://github.com/yhjung88/OnlineBoostingWithVFDT) | 8 |
| [Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images](http://openaccess.thecvf.com/content_iccv_2017/html/Orekondy_Towards_a_Visual_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tribhuvanesh/vpa) | 8 |
| [SubUNets: End-To-End Hand Shape and Continuous Sign Language Recognition](http://openaccess.thecvf.com/content_iccv_2017/html/Camgoz_SubUNets_End-To-End_Hand_ICCV_2017_paper.html) | ICCV | [code](https://github.com/neccam/SubUNets) | 7 |
| [Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition](http://papers.nips.cc/paper/6713-learning-koopman-invariant-subspaces-for-dynamic-mode-decomposition.pdf) | NIPS | [code](https://github.com/thetak11/learning-kis) | 7 |
| [Unsupervised Monocular Depth Estimation With Left-Right Consistency](http://openaccess.thecvf.com/content_cvpr_2017/html/Godard_Unsupervised_Monocular_Depth_CVPR_2017_paper.html) | CVPR | [code](https://github.com/yukitsuji/monodepth_chainer) | 7 |
| [Personalized Image Aesthetics](http://openaccess.thecvf.com/content_iccv_2017/html/Ren_Personalized_Image_Aesthetics_ICCV_2017_paper.html) | ICCV | [code](https://github.com/alanspike/personalizedImageAesthetics) | 7 |
| [Reasoning About Fine-Grained Attribute Phrases Using Reference Games](http://openaccess.thecvf.com/content_iccv_2017/html/Su_Reasoning_About_Fine-Grained_ICCV_2017_paper.html) | ICCV | [code](https://github.com/jongchyisu/attribute_phrases) | 7 |
| [Lost Relatives of the Gumbel Trick](http://proceedings.mlr.press/v70/balog17a.html) | ICML | [code](https://github.com/matejbalog/gumbel-relatives) | 7 |
| [Weakly Supervised Learning of Deep Metrics for Stereo Reconstruction](http://openaccess.thecvf.com/content_iccv_2017/html/Tulyakov_Weakly_Supervised_Learning_ICCV_2017_paper.html) | ICCV | [code](https://github.com/tlkvstepan/mc-cnn-ws) | 7 |
| [Centered Weight Normalization in Accelerating Training of Deep Neural Networks](http://openaccess.thecvf.com/content_iccv_2017/html/Huang_Centered_Weight_Normalization_ICCV_2017_paper.html) | ICCV | [code](https://github.com/huangleiBuaa/CenteredWN) | 6 |
| [Scalable Planning with Tensorflow for Hybrid Nonlinear Domains](http://papers.nips.cc/paper/7207-scalable-planning-with-tensorflow-for-hybrid-nonlinear-domains.pdf) | NIPS | [code](https://github.com/wuga214/TOOLBOX-Learning-and-Planning-through-Backpropagation) | 6 |
| [Convex Global 3D Registration With Lagrangian Duality](http://openaccess.thecvf.com/content_cvpr_2017/html/Briales_Convex_Global_3D_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jbriales/CVPR17) | 6 |
| [Building a Regular Decision Boundary With Deep Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Oyallon_Building_a_Regular_CVPR_2017_paper.html) | CVPR | [code](https://github.com/edouardoyallon/deep_separation_contraction) | 6 |
| [Learning Spatial Regularization With Image-Level Supervisions for Multi-Label Image Classification](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhu_Learning_Spatial_Regularization_CVPR_2017_paper.html) | CVPR | [code](https://github.com/Enjia/Spatial-Regularization-Network-in-Tensorflow) | 6 |
| [Forecasting Human Dynamics From Static Images](http://openaccess.thecvf.com/content_cvpr_2017/html/Chao_Forecasting_Human_Dynamics_CVPR_2017_paper.html) | CVPR | [code](https://github.com/ywchao/skeleton2d3d) | 6 |
| [AOD-Net: All-In-One Dehazing Network](http://openaccess.thecvf.com/content_iccv_2017/html/Li_AOD-Net_All-In-One_Dehazing_ICCV_2017_paper.html) | ICCV | [code](https://github.com/weber0522bb/AODnet-by-pytorch) | 6 |
| [K-Medoids For K-Means Seeding](http://papers.nips.cc/paper/7104-k-medoids-for-k-means-seeding.pdf) | NIPS | [code](https://github.com/idiap/zentas) | 6 |
| [Diverse Image Annotation](http://openaccess.thecvf.com/content_cvpr_2017/html/Wu_Diverse_Image_Annotation_CVPR_2017_paper.html) | CVPR | [code](https://github.com/wubaoyuan/DIA) | 6 |
| [Practical Hash Functions for Similarity Estimation and Dimensionality Reduction](http://papers.nips.cc/paper/7239-practical-hash-functions-for-similarity-estimation-and-dimensionality-reduction.pdf) | NIPS | [code](https://github.com/zera/Nips_MT) | 6 |
| [Deep Adaptive Image Clustering](http://openaccess.thecvf.com/content_iccv_2017/html/Chang_Deep_Adaptive_Image_ICCV_2017_paper.html) | ICCV | [code](https://github.com/HongtaoYang/DAC-tensorflow) | 6 |
| [Robust Adversarial Reinforcement Learning](http://proceedings.mlr.press/v70/pinto17a.html) | ICML | [code](https://github.com/Jekyll1021/RARL) | 6 |
| [Improving Training of Deep Neural Networks via Singular Value Bounding](http://openaccess.thecvf.com/content_cvpr_2017/html/Jia_Improving_Training_of_CVPR_2017_paper.html) | CVPR | [code](https://github.com/kui-jia/svb) | 6 |
| [Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems](http://papers.nips.cc/paper/6838-analyzing-hidden-representations-in-end-to-end-automatic-speech-recognition-systems.pdf) | NIPS | [code](https://github.com/boknilev/asr-repr-analysis) | 6 |
| [Tensor Belief Propagation](http://proceedings.mlr.press/v70/wrigley17a.html) | ICML | [code](https://github.com/akxlr/tbp) | 6 |
| [Sparse convolutional coding for neuronal assembly detection](http://papers.nips.cc/paper/6958-sparse-convolutional-coding-for-neuronal-assembly-detection.pdf) | NIPS | [code](https://github.com/sccfnad/Sparse-convolutional-coding-for-neuronal-assembly-detection) | 6 |
| [Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks](http://openaccess.thecvf.com/content_cvpr_2017/html/Bousmalis_Unsupervised_Pixel-Level_Domain_CVPR_2017_paper.html) | CVPR | [code](https://github.com/rhythm92/Unsupervised-Pixel-Level-Domain-Adaptation-with-GAN) | 6 |
| [Bayesian inference on random simple graphs with power law degree distributions](http://proceedings.mlr.press/v70/lee17a.html) | ICML | [code](https://github.com/juho-lee/powerlawgraph) | 6 |
| [Tensor Biclustering](http://papers.nips.cc/paper/6730-tensor-biclustering.pdf) | NIPS | [code](https://github.com/SoheilFeizi/Tensor-Biclustering) | 6 |
| [Riemannian approach to batch normalization](http://papers.nips.cc/paper/7107-riemannian-approach-to-batch-normalization.pdf) | NIPS | [code](https://github.com/MinhyungCho/riemannian-batch-normalization) | 6 |
| [Unsupervised Learning of Object Landmarks by Factorized Spatial Embeddings](http://openaccess.thecvf.com/content_iccv_2017/html/Thewlis_Unsupervised_Learning_of_ICCV_2017_paper.html) | ICCV | [code](https://github.com/alldbi/Factorized-Spatial-Embeddings) | 6 |
| [Rolling-Shutter-Aware Differential SfM and Image Rectification](http://openaccess.thecvf.com/content_iccv_2017/html/Zhuang_Rolling-Shutter-Aware_Differential_SfM_ICCV_2017_paper.html) | ICCV | [code](https://github.com/ThomasZiegler/RS-aware-differential-SfM) | 5 |
| [Active Decision Boundary Annotation With Deep Generative Models](http://openaccess.thecvf.com/content_iccv_2017/html/Huijser_Active_Decision_Boundary_ICCV_2017_paper.html) | ICCV | [code](https://github.com/MiriamHu/ActiveBoundary) | 5 |
| [Object Co-Skeletonization With Co-Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/html/Jerripothula_Object_Co-Skeletonization_With_CVPR_2017_paper.html) | CVPR | [code](https://github.com/jkoteswarrao/Object-Co-skeletonization-with-Co-segmentation) | 5 |
| [Discover and Learn New Objects From Documentaries](http://openaccess.thecvf.com/content_cvpr_2017/html/Chen_Discover_and_Learn_CVPR_2017_paper.html) | CVPR | [code](https://github.com/hellock/documentary-learning) | 5 |
| [Understanding Black-box Predictions via Influence Functions](http://proceedings.mlr.press/v70/koh17a.html) | ICML | [code](https://github.com/eolecvk/InfluenceFunctions) | 5 |
| [Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach](http://openaccess.thecvf.com/content_cvpr_2017/html/Patrini_Making_Deep_Neural_CVPR_2017_paper.html) | CVPR | [code](https://github.com/GarrettLee/label_noise_correction) | 5 |
| [Decoupling "when to update" from "how to update"](http://papers.nips.cc/paper/6697-decoupling-when-to-update-from-how-to-update.pdf) | NIPS | [code](https://github.com/emalach/UpdateByDisagreement) | 5 |
| [MarioQA: Answering Questions by Watching Gameplay Videos](http://openaccess.thecvf.com/content_iccv_2017/html/Mun_MarioQA_Answering_Questions_ICCV_2017_paper.html) | ICCV | [code](https://github.com/JonghwanMun/MarioQA) | 5 |
| [Differentially private Bayesian learning on distributed data](http://papers.nips.cc/paper/6915-differentially-private-bayesian-learning-on-distributed-data.pdf) | NIPS | [code](https://github.com/DPBayes/dca-nips2017) | 5 |
| [Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization](http://openaccess.thecvf.com/content_iccv_2017/html/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.html) | ICCV | [code](https://github.com/cydonia999/Grad-CAM-in-TensorFlow) | 5 |
| [Question Asking as Program Generation](http://papers.nips.cc/paper/6705-question-asking-as-program-generation.pdf) | NIPS | [code](https://github.com/anselmrothe/question_dataset) | 5 |
| [Conic Scan-and-Cover algorithms for nonparametric topic modeling](http://papers.nips.cc/paper/6977-conic-scan-and-cover-algorithms-for-nonparametric-topic-modeling.pdf) | NIPS | [code](https://github.com/moonfolk/Geometric-Topic-Modeling) | 5 |
| [Lip Reading Sentences in the Wild](http://openaccess.thecvf.com/content_cvpr_2017/html/Chung_Lip_Reading_Sentences_CVPR_2017_paper.html) | CVPR | [code](https://github.com/lsrock1/WLSNet_pytorch) | 5 |
| [ROAM: A Rich Object Appearance Model With Application to Rotoscoping](http://openaccess.thecvf.com/content_cvpr_2017/html/Miksik_ROAM_A_Rich_CVPR_2017_paper.html) | CVPR | [code](https://github.com/omiksik/roam) | 5 |
| [NeuralFDR: Learning Discovery Thresholds from Hypothesis Features](http://papers.nips.cc/paper/6752-neuralfdr-learning-discovery-thresholds-from-hypothesis-features.pdf) | NIPS | [code](https://github.com/fxia22/NeuralFDR) | 5 |
| [Viraliency: Pooling Local Virality](http://openaccess.thecvf.com/content_cvpr_2017/html/Alameda-Pineda_Viraliency_Pooling_Local_CVPR_2017_paper.html) | CVPR | [code](https://github.com/xavirema/lena_pooling) | 5 |
| [Learning Algorithms for Active Learning](http://proceedings.mlr.press/v70/bachman17a.html) | ICML | [code](https://github.com/vtphan/Code4Brownies) | 5 |
| [Point to Set Similarity Based Deep Feature Learning for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2017/html/Zhou_Point_to_Set_CVPR_2017_paper.html) | CVPR | [code](https://github.com/samaonline/Point-to-Set-Similarity-Based-Deep-Feature-Learning-for-Person-Re-identification) | 5 |
| [Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation](http://openaccess.thecvf.com/content_iccv_2017/html/Szeto_Click_Here_Human-Localized_ICCV_2017_paper.html) | ICCV | [code](https://github.com/rszeto/click-here-cnn) | 5 |
| [The World of Fast Moving Objects](http://openaccess.thecvf.com/content_cvpr_2017/html/Rozumnyi_The_World_of_CVPR_2017_paper.html) | CVPR | [code](https://github.com/qixuanHou/Mapping-My-Break) | 5 |
| [Cross-Modality Binary Code Learning via Fusion Similarity
Hashing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLiu_Cross-Modality_Binary_Code_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLynnHongLiu\u002FFSH) | 5 | \n| [Testing and Learning on Distributions with Symmetric Noise Invariance](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6733-testing-and-learning-on-distributions-with-symmetric-noise-invariance.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fhcllaw\u002Fphase_learn) | 5 | \n| [Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7268-sticking-the-landing-simple-lower-variance-gradient-estimators-for-variational-inference.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fgeoffroeder\u002Fiwae) | 5 | \n| [Diving into the shallows: a computational perspective on large-scale shallow learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6968-diving-into-the-shallows-a-computational-perspective-on-large-scale-shallow-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002FEigenPro\u002FEigenPro-tensorflow) | 5 | \n| [Rotation Equivariant Vector Field Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMarcos_Rotation_Equivariant_Vector_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fdmarcosg\u002FRotEqNet) | 5 | \n| [Recursive Sampling for the Nystrom Method](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6973-recursive-sampling-for-the-nystrom-method.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fcnmusco\u002Frecursive-nystrom) | 5 | \n| [Learning From Video and Text via Large-Scale Discriminative Clustering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMiech_Learning_From_Video_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fantoine77340\u002Ficcv17learning) | 5 | \n| [Global optimization of Lipschitz 
functions](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmalherbe17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FSycor4x\u002Flipo) | 5 | \n| [Device Placement Optimization with Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmirhoseini17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Findrajeet95\u002FDevice-Placement-Optimization-with-Reinforcement-Learning) | 4 | \n| [Alternating Direction Graph Matching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLe-Huu_Alternating_Direction_Graph_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fnetw0rkf10w\u002Fadgm) | 4 | \n| [MEC: Memory-efficient Convolution for Deep Neural Network](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fcho17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FCSshengxy\u002FMEC) | 4 | \n| [Expert Gate: Lifelong Learning With a Network of Experts](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FAljundi_Expert_Gate_Lifelong_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frahafaljundi\u002FExpert-Gate) | 4 | \n| [A Simple yet Effective Baseline for 3D Human Pose Estimation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FMartinez_A_Simple_yet_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fnulledge\u002Fbilinear) | 4 | \n| [On Structured Prediction Theory with Calibrated Convex Surrogate Losses](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6634-on-structured-prediction-theory-with-calibrated-convex-surrogate-losses.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Faosokin\u002FconsistentSurrogates_derivations) | 4 | \n| [Sub-sampled Cubic Regularization for Non-convex Optimization](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fkohler17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdalab\u002Fsubsampled_cubic_regularization) | 4 | \n| [Generalized 
Semantic Preserving Hashing for N-Label Cross-Modal Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMandal_Generalized_Semantic_Preserving_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdevraj89\u002FGeneralized-Semantic-Preserving-Hashing-for-N-Label-Cross-Modal-Retrieval) | 4 | \n| [Bottleneck Conditional Density Estimation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fshu17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002FRuiShu\u002Fbcde) | 4 | \n| [Learning Cooperative Visual Dialog Agents With Deep Reinforcement Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FDas_Learning_Cooperative_Visual_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fschopra8\u002FCooperative_Vis_Diag_RL) | 4 | \n| [Multi-way Interacting Regression via Factorization Machines](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6853-multi-way-interacting-regression-via-factorization-machines.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fmoonfolk\u002FMiFM) | 4 | \n| [Joint Discovery of Object States and Manipulation Actions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FAlayrac_Joint_Discovery_of_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fjalayrac\u002Fobject-states-action) | 4 | \n| [Predicting Salient Face in Multiple-Face Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLiu_Predicting_Salient_Face_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ftonysy\u002Fsalient-face-in-MUVFET) | 4 | \n| [From Red Wine to Red Tomato: Composition With Context](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FMisra_From_Red_Wine_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fimisra\u002Fcomposing_cvpr17) | 4 | \n| [Encoder Based Lifelong 
Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FRannen_Encoder_Based_Lifelong_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Frahafaljundi\u002FEncoder-Based-Lifelong-learning) | 4 | \n| [Deep Recurrent Neural Network-Based Identification of Precursor microRNAs](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6882-deep-recurrent-neural-network-based-identification-of-precursor-micrornas.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Feleventh83\u002FdeepMiRGene) | 4 | \n| [Guarantees for Greedy Maximization of Non-submodular Functions with Applications](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fbian17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fbianan\u002Fnon-submodular-max) | 4 | \n| [Pose-Aware Person Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FKumar_Pose-Aware_Person_Recognition_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fvijaykumar01\u002Fperson_recognition) | 4 | \n| [Zero-Shot Recognition Using Dual Visual-Semantic Mapping Paths](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLi_Zero-Shot_Recognition_Using_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYanaLee\u002FZero-Shot-Recognition-using-Dual-Visual-Semantic-Mapping-Paths) | 4 | \n| [Asynchronous Distributed Variational Gaussian Processes for Regression](nan) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fhao-peng\u002FADVGP) | 3 | \n| [Saliency Pattern Detection by Ranking Structured Trees](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FZhu_Saliency_Pattern_Detection_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fzhulei2016\u002FRST-saliency) | 3 | \n| [Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal 
System](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6849-toward-goal-driven-neural-network-models-for-the-rodent-whisker-trigeminal-system.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fneuroailab\u002Fwhisker_model) | 3 | \n| [Learning Non-Maximum Suppression](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FHosang_Learning_Non-Maximum_Suppression_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FXingchenYu\u002Fpedestrian_detection_iosapp) | 3 | \n| [Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fcong17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fmingyuanzhou\u002FDeepLDA_TLASGR_MCMC) | 3 | \n| [Discriminative Bimodal Networks for Visual Localization and Detection With Natural Language Queries](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FZhang_Discriminative_Bimodal_Networks_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYutingZhang\u002Fdbnet-caffe-matlab) | 3 | \n| [AdaNet: Adaptive Structural Learning of Artificial Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fcortes17a.html) | ICML | [code](https:\u002F\u002Fgithub.com\u002Fdavidabek1\u002Fadanet) | 3 | \n| [Large Margin Object Tracking With Circulant Feature Maps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FWang_Large_Margin_Object_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fsallymmx\u002FLMCF) | 3 | \n| [Compatible Reward Inverse Reinforcement Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6800-compatible-reward-inverse-reinforcement-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Falbertometelli\u002Fcrirl) | 3 | \n| [Adversarial Surrogate Losses for Ordinal 
Regression](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6659-adversarial-surrogate-losses-for-ordinal-regression.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Frizalzaf\u002Fadversarial-ordinal) | 3 | \n| [Non-monotone Continuous DR-submodular  Maximization: Structure and Algorithms](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6652-continuous-dr-submodular-maximization-structure-and-algorithms.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fbianan) | 3 | \n| [Unifying PAC and Regret: Uniform PAC Bounds for Episodic Reinforcement Learning](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7154-unifying-pac-and-regret-uniform-pac-bounds-for-episodic-reinforcement-learning.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fchrodan\u002FFiniteEpisodicRL.jl) | 3 | \n| [A framework for Multi-A(rmed)\u002FB(andit) Testing with Online FDR Control](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7177-a-framework-for-multi-armedbandit-testing-with-online-fdr-control.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffanny-yang\u002FMABFDR) | 3 | \n| [Counting Everyday Objects in Everyday Scenes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FChattopadhyay_Counting_Everyday_Objects_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fprithv1\u002Fcvpr2017_counting) | 3 | \n| [Loss Max-Pooling for Semantic Image Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FBulo_Loss_Max-Pooling_for_CVPR_2017_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjjkke88\u002FLMP) | 3 | \n| [Aesthetic Critiques Generation for Photos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2017\u002Fhtml\u002FChang_Aesthetic_Critiques_Generation_ICCV_2017_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fkunghunglu\u002FDeepPhotoCritic-ICCV17) | 3 | \n| [Expectation Propagation with Stochastic Kinetic Model in Complex Interaction 
Systems](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6798-expectation-propagation-with-stochastic-kinetic-model-in-complex-interaction-systems.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Flefangcs\u002FExpectation-Propagation-with-Stochastic-Kinetic-Model-in-Complex-Interaction-Systems) | 3 | \n| [Near-Optimal Edge Evaluation in Explicit Generalized Binomial Graphs](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7049-near-optimal-edge-evaluation-in-explicit-generalized-binomial-graphs.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Fsanjibac\u002Fmatlab_learning_collision_checking) | 3 |\n\n\u003Cdiv align=\"right\">\n\u003Cb>\u003Ca href=\"#----\">↥ 返回顶部\u003C\u002Fa>\u003C\u002Fb>\n\u003C\u002Fdiv>\n\n## 2016\n| Title | Conf | Code | Stars |\n|:--------|:--------:|:--------:|:--------:|\n| [R-FCN: Object Detection via Region-based Fully Convolutional Networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6465-r-fcn-object-detection-via-region-based-fully-convolutional-networks.pdf) | NIPS | [code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDetectron) | 18356 | \n| [Image Style Transfer Using Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FGatys_Image_Style_Transfer_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjcjohnson\u002Fneural-style) | 16435 | \n| [Deep Residual Learning for Image Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FHe_Deep_Residual_Learning_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks) | 4468 | \n| [Convolutional Pose Machines](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FWei_Convolutional_Pose_Machines_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FZheC\u002FRealtime_Multi-Person_Pose_Estimation) | 3260 | \n| [Synthetic Data for Text Localisation in Natural 
Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FGupta_Synthetic_Data_for_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fankush-me\u002FSynthText) | 787 | \n| [Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FLi_Combining_Markov_Random_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fchuanli11\u002FCNNMRF) | 731 | \n| [Instance-Aware Semantic Segmentation via Multi-Task Network Cascades](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FDai_Instance-Aware_Semantic_Segmentation_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdaijifeng001\u002FMNC) | 433 | \n| [Learning Multi-Domain Convolutional Neural Networks for Visual Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FNam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHyeonseobNam\u002FMDNet) | 350 | \n| [Convolutional Two-Stream Network Fusion for Video Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FFeichtenhofer_Convolutional_Two-Stream_Network_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffeichtenhofer\u002Ftwostreamfusion) | 342 | \n| [Learning Deep Features for Discriminative Localization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FZhou_Learning_Deep_Features_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjazzsaxmafia\u002FWeakly_detector) | 323 | \n| [Deep Metric Learning via Lifted Structured Feature Embedding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FSong_Deep_Metric_Learning_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frksltnl\u002FDeep-Metric-Learning-CVPR16) | 251 | 
\n| [Learning Deep Representations of Fine-Grained Visual Descriptions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FReed_Learning_Deep_Representations_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Freedscot\u002Fcvpr2016) | 229 | \n| [Eye Tracking for Everyone](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FKrafka_Eye_Tracking_for_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002FGazeCapture) | 223 | \n| [NetVLAD: CNN Architecture for Weakly Supervised Place Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FArandjelovic_NetVLAD_CNN_Architecture_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FRelja\u002Fnetvlad) | 204 | \n| [Staple: Complementary Learners for Real-Time Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FBertinetto_Staple_Complementary_Learners_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbertinetto\u002Fstaple) | 183 | \n| [Joint Unsupervised Learning of Deep Representations and Image Clusters](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FYang_Joint_Unsupervised_Learning_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjwyang\u002FJULE.torch) | 182 | \n| [Accurate Image Super-Resolution Using Very Deep Convolutional Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FKim_Accurate_Image_Super-Resolution_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FJongchan\u002Ftensorflow-vdsr) | 182 | \n| [Temporal Action Localization in Untrimmed Videos via Multi-Stage CNNs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FShou_Temporal_Action_Localization_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzhengshou\u002Fscnn) | 167 | \n| [LocNet: 
Improving Localization Accuracy for Object Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FGidaris_LocNet_Improving_Localization_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgidariss\u002FLocNet) | 155 | \n| [Shallow and Deep Convolutional Networks for Saliency Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FPan_Shallow_and_Deep_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fimatge-upc\u002Fsaliency-2016-cvpr) | 153 | \n| [Compact Bilinear Pooling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FGao_Compact_Bilinear_Pooling_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgy20073\u002Fcompact_bilinear_pooling) | 148 | \n| [Learning Compact Binary Descriptors With Unsupervised Deep Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FLin_Learning_Compact_Binary_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fkevinlin311tw\u002Fcvpr16-deepbit) | 144 | \n| [Dynamic Image Networks for Action Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FBilen_Dynamic_Image_Networks_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhbilen\u002Fdynamic-image-nets) | 133 | \n| [Rethinking the Inception Architecture for Computer Vision](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FSzegedy_Rethinking_the_Inception_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FMoodstocks\u002Finception-v3.torch) | 130 | \n| [Deep Sliding Shapes for Amodal 3D Object Detection in RGB-D Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FSong_Deep_Sliding_Shapes_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fshurans\u002FDeepSlidingShape) | 126 | \n| [Context Encoders: Feature Learning 
by Inpainting](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FPathak_Context_Encoders_Feature_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjazzsaxmafia\u002FInpainting) | 124 | \n| [TI-Pooling: Transformation-Invariant Pooling for Feature Learning in Convolutional Neural Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FLaptev_TI-Pooling_Transformation-Invariant_Pooling_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdlaptev\u002FTI-pooling) | 109 | \n| [Weakly Supervised Deep Detection Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FBilen_Weakly_Supervised_Deep_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fhbilen\u002FWSDDN) | 103 | \n| [Natural Language Object Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FHu_Natural_Language_Object_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fronghanghu\u002Fnatural-language-object-retrieval) | 100 | \n| [Deeply-Recursive Convolutional Network for Image Super-Resolution](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FKim_Deeply-Recursive_Convolutional_Network_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjiny2001\u002Fdeeply-recursive-cnn-tf) | 96 | \n| [Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FShi_Real-Time_Single_Image_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fleftthomas\u002FESPCN) | 92 | \n| [Image Question Answering Using Convolutional Neural Network With Dynamic Parameter Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FNoh_Image_Question_Answering_CVPR_2016_paper.html) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002FHyeonwooNoh\u002FDPPnet) | 88 | \n| [Recurrent Convolutional Network for Video-Based Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FMcLaughlin_Recurrent_Convolutional_Network_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fniallmcl\u002FRecurrent-Convolutional-Video-ReID) | 82 | \n| [A Comparative Study for Single Image Blind Deblurring](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FLai_A_Comparative_Study_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fphoenix104104\u002Fcvpr16_deblur_study) | 82 | \n| [Neural Module Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FAndreas_Neural_Module_Networks_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHarshTrivedi\u002Fnmn-pytorch) | 81 | \n| [Stacked Attention Networks for Image Question Answering](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FYang_Stacked_Attention_Networks_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzcyang\u002Fimageqa-san) | 78 | \n| [Progressive Prioritized Multi-View Stereo](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FLocher_Progressive_Prioritized_Multi-View_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falexlocher\u002Fhpmvs) | 73 | \n| [Marr Revisited: 2D-3D Alignment via Surface Normal Prediction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FBansal_Marr_Revisited_2D-3D_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Faayushbansal\u002FMarrRevisited) | 72 | \n| [A Hierarchical Deep Temporal Model for Group Activity Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FIbrahim_A_Hierarchical_Deep_CVPR_2016_paper.html) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Fmostafa-saad\u002Fdeep-activity-rec) | 71 | \n| [Towards Open Set Deep Networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FBendale_Towards_Open_Set_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fabhijitbendale\u002FOSDN) | 71 | \n| [Robust 3D Hand Pose Estimation in Single Depth Images: From Single-View CNN to Multi-View CNNs](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FGe_Robust_3D_Hand_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgeliuhao\u002FCVPR2016_HandPoseEstimation) | 70 | \n| [Bilateral Space Video Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FMaerki_Bilateral_Space_Video_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fowang\u002FBilateralVideoSegmentation) | 63 | \n| [Deep Compositional Captioning: Describing Novel Object Categories Without Paired Training Data](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FHendricks_Deep_Compositional_Captioning_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLisaAnne\u002FDCC) | 57 | \n| [Efficient 3D Room Shape Recovery From a Single Panorama](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FYang_Efficient_3D_Room_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FYANG-H\u002FPanoramix) | 55 | \n| [Non-Local Image Dehazing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FBerman_Non-Local_Image_Dehazing_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fdanaberman\u002Fnon-local-dehazing) | 50 | \n| [Video Segmentation via Object Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fhtml\u002FTsai_Video_Segmentation_via_CVPR_2016_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fwasidennis\u002FObjectFlow) | 50 | 
| [Deep Supervised Hashing for Fast Image Retrieval](http://openaccess.thecvf.com/content_cvpr_2016/html/Liu_Deep_Supervised_Hashing_CVPR_2016_paper.html) | CVPR | [code](https://github.com/yg33717/DSH_tensorflow) | 50 |
| [Deep Region and Multi-Label Learning for Facial Action Unit Detection](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhao_Deep_Region_and_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zkl20061823/DRML) | 43 |
| [CRAFT Objects From Images](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_CRAFT_Objects_From_CVPR_2016_paper.html) | CVPR | [code](https://github.com/byangderek/CRAFT) | 41 |
| [Slicing Convolutional Neural Network for Crowd Video Understanding](http://openaccess.thecvf.com/content_cvpr_2016/html/Shao_Slicing_Convolutional_Neural_CVPR_2016_paper.html) | CVPR | [code](https://github.com/amandajshao/Slicing-CNN) | 40 |
| [Sketch Me That Shoe](http://openaccess.thecvf.com/content_cvpr_2016/html/Yu_Sketch_Me_That_CVPR_2016_paper.html) | CVPR | [code](https://github.com/seuliufeng/DeepSBIR) | 39 |
| [Image Captioning With Semantic Attention](http://openaccess.thecvf.com/content_cvpr_2016/html/You_Image_Captioning_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/chapternewscu/image-captioning-with-semantic-attention) | 35 |
| [Deep Saliency With Encoded Low Level Distance Map and High Level Features](http://openaccess.thecvf.com/content_cvpr_2016/html/Lee_Deep_Saliency_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gylee1103/SaliencyELD) | 34 |
| [A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation](http://openaccess.thecvf.com/content_cvpr_2016/html/Perazzi_A_Benchmark_Dataset_CVPR_2016_paper.html) | CVPR | [code](https://github.com/davisvideochallenge/davis-matlab) | 33 |
| [A Dual-Source Approach for 3D Pose Estimation From a Single Image](http://openaccess.thecvf.com/content_cvpr_2016/html/Yasin_A_Dual-Source_Approach_CVPR_2016_paper.html) | CVPR | [code](https://github.com/iqbalu/3D_Pose_Estimation_CVPR2016) | 32 |
| [Learning Local Image Descriptors With Deep Siamese and Triplet Convolutional Networks by Minimising Global Loss Functions](http://openaccess.thecvf.com/content_cvpr_2016/html/G_Learning_Local_Image_CVPR_2016_paper.html) | CVPR | [code](https://github.com/vijaykbg/deep-patchmatch) | 30 |
| [Ordinal Regression With Multiple Output CNN for Age Estimation](http://openaccess.thecvf.com/content_cvpr_2016/html/Niu_Ordinal_Regression_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/luoyetx/OrdinalRegression) | 30 |
| [Structured Feature Learning for Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2016/html/Chu_Structured_Feature_Learning_CVPR_2016_paper.html) | CVPR | [code](https://github.com/chuxiaoselena/StructuredFeature) | 29 |
| [Unsupervised Learning of Edges](http://openaccess.thecvf.com/content_cvpr_2016/html/Li_Unsupervised_Learning_of_CVPR_2016_paper.html) | CVPR | [code](https://github.com/happyharrycn/unsupervised_edges) | 29 |
| [PatchBatch: A Batch Augmented Loss for Optical Flow](http://openaccess.thecvf.com/content_cvpr_2016/html/Gadot_PatchBatch_A_Batch_CVPR_2016_paper.html) | CVPR | [code](https://github.com/DediGadot/PatchBatch) | 27 |
| [Dense Human Body Correspondences Using Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Wei_Dense_Human_Body_CVPR_2016_paper.html) | CVPR | [code](https://github.com/halimacc/DenseHumanBodyCorrespondences) | 27 |
| [Actionness Estimation Using Hybrid Fully Convolutional Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Wang_Actionness_Estimation_Using_CVPR_2016_paper.html) | CVPR | [code](https://github.com/wanglimin/Actionness-Estimation) | 26 |
| [You Only Look Once: Unified, Real-Time Object Detection](http://openaccess.thecvf.com/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html) | CVPR | [code](https://github.com/andersy005/keras-yolo) | 26 |
| [Fast Training of Triplet-Based Deep Binary Embedding Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhuang_Fast_Training_of_CVPR_2016_paper.html) | CVPR | [code](https://github.com/xwzy/Triplet-deep-hash-pytorch) | 25 |
| [Recurrent Attention Models for Depth-Based Person Identification](http://openaccess.thecvf.com/content_cvpr_2016/html/Haque_Recurrent_Attention_Models_CVPR_2016_paper.html) | CVPR | [code](https://github.com/ahaque/ram_person_id) | 24 |
| [Detecting Vanishing Points Using Global Image Context in a Non-Manhattan World](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhai_Detecting_Vanishing_Points_CVPR_2016_paper.html) | CVPR | [code](https://github.com/viibridges/gc-horizon-detector) | 22 |
| [First Person Action Recognition Using Deep Learned Descriptors](http://openaccess.thecvf.com/content_cvpr_2016/html/Singh_First_Person_Action_CVPR_2016_paper.html) | CVPR | [code](https://github.com/suriyasingh/EgoConvNet) | 21 |
| [Proposal Flow](http://openaccess.thecvf.com/content_cvpr_2016/html/Ham_Proposal_Flow_CVPR_2016_paper.html) | CVPR | [code](https://github.com/bsham/ProposalFlow) | 20 |
| [Scale-Aware Alignment of Hierarchical Image Segmentation](http://openaccess.thecvf.com/content_cvpr_2016/html/Chen_Scale-Aware_Alignment_of_CVPR_2016_paper.html) | CVPR | [code](https://github.com/yuhuayc/align-hier) | 20 |
| [Quantized Convolutional Neural Networks for Mobile Devices](http://openaccess.thecvf.com/content_cvpr_2016/html/Wu_Quantized_Convolutional_Neural_CVPR_2016_paper.html) | CVPR | [code](https://github.com/OluwoleOyetoke/Computer_Vision_Using_TensorFlowLite) | 20 |
| [Semantic Segmentation With Boundary Neural Fields](http://openaccess.thecvf.com/content_cvpr_2016/html/Bertasius_Semantic_Segmentation_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gberta/BNF_globalization) | 19 |
| [Single-Image Crowd Counting via Multi-Column Convolutional Neural Network](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Single-Image_Crowd_Counting_CVPR_2016_paper.html) | CVPR | [code](https://github.com/uestcchicken/crowd-counting-MCNN) | 19 |
| [Accumulated Stability Voting: A Robust Descriptor From Descriptors of Multiple Scales](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Accumulated_Stability_Voting_CVPR_2016_paper.html) | CVPR | [code](https://github.com/shamangary/ASV) | 19 |
| [Structure From Motion With Objects](http://openaccess.thecvf.com/content_cvpr_2016/html/Crocco_Structure_From_Motion_CVPR_2016_paper.html) | CVPR | [code](https://github.com/danylaksono/Android-SfM-client) | 17 |
| [Bottom-Up and Top-Down Reasoning With Hierarchical Rectified Gaussians](http://openaccess.thecvf.com/content_cvpr_2016/html/Hu_Bottom-Up_and_Top-Down_CVPR_2016_paper.html) | CVPR | [code](https://github.com/peiyunh/rg-mpii) | 16 |
| [Semantic Filtering](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Semantic_Filtering_CVPR_2016_paper.html) | CVPR | [code](https://github.com/shenshen-hungry/Semantic-CNN) | 16 |
| [Online Detection and Classification of Dynamic Hand Gestures With Recurrent 3D Convolutional Neural Network](http://openaccess.thecvf.com/content_cvpr_2016/html/Molchanov_Online_Detection_and_CVPR_2016_paper.html) | CVPR | [code](https://github.com/breadbread1984/R3DCNN) | 16 |
| [ReconNet: Non-Iterative Reconstruction of Images From Compressively Sensed Measurements](http://openaccess.thecvf.com/content_cvpr_2016/html/Kulkarni_ReconNet_Non-Iterative_Reconstruction_CVPR_2016_paper.html) | CVPR | [code](https://github.com/KuldeepKulkarni/ReconNet) | 15 |
| [Interactive Segmentation on RGBD Images via Cue Selection](http://openaccess.thecvf.com/content_cvpr_2016/html/Feng_Interactive_Segmentation_on_CVPR_2016_paper.html) | CVPR | [code](https://github.com/ZVsion/rgbd_image_segmentation) | 14 |
| [Object Contour Detection With a Fully Convolutional Encoder-Decoder Network](http://openaccess.thecvf.com/content_cvpr_2016/html/Yang_Object_Contour_Detection_CVPR_2016_paper.html) | CVPR | [code](https://github.com/Raj-08/tensorflow-object-contour-detection) | 14 |
| [Automatic Content-Aware Color and Tone Stylization](http://openaccess.thecvf.com/content_cvpr_2016/html/Lee_Automatic_Content-Aware_Color_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jinyu121/ACACTS) | 12 |
| [Similarity Learning With Spatial Constraints for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2016/html/Chen_Similarity_Learning_With_CVPR_2016_paper.html) | CVPR | [code](https://github.com/dapengchen123/SCSP) | 11 |
| [Personalizing Human Video Pose Estimation](http://openaccess.thecvf.com/content_cvpr_2016/html/Charles_Personalizing_Human_Video_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jjcharles/personalized_pose) | 10 |
| [Visually Indicated Sounds](http://openaccess.thecvf.com/content_cvpr_2016/html/Owens_Visually_Indicated_Sounds_CVPR_2016_paper.html) | CVPR | [code](https://github.com/kanchen-usc/VIG) | 9 |
| [Patch-Based Convolutional Neural Network for Whole Slide Tissue Image Classification](http://openaccess.thecvf.com/content_cvpr_2016/html/Hou_Patch-Based_Convolutional_Neural_CVPR_2016_paper.html) | CVPR | [code](https://github.com/cheersyouran/cancer-detector) | 9 |
| [Region Ranking SVM for Image Classification](http://openaccess.thecvf.com/content_cvpr_2016/html/Wei_Region_Ranking_SVM_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zijunwei/Region-Ranking-SVM) | 8 |
| [Pairwise Matching Through Max-Weight Bipartite Belief Propagation](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Pairwise_Matching_Through_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zzhang1987/HungarianBP) | 8 |
| [Deep Hand: How to Train a CNN on 1 Million Hand Images When Your Data Is Continuous and Weakly Labelled](http://openaccess.thecvf.com/content_cvpr_2016/html/Koller_Deep_Hand_How_CVPR_2016_paper.html) | CVPR | [code](https://github.com/neccam/TF-DeepHand) | 8 |
| [Cross-Stitch Networks for Multi-Task Learning](http://openaccess.thecvf.com/content_cvpr_2016/html/Misra_Cross-Stitch_Networks_for_CVPR_2016_paper.html) | CVPR | [code](https://github.com/helloyide/Cross-stitch-Networks-for-Multi-task-Learning) | 8 |
| [Learning a Discriminative Null Space for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Learning_a_Discriminative_CVPR_2016_paper.html) | CVPR | [code](https://github.com/lzrobots/NullSpace_ReID) | 8 |
| [Efficient Deep Learning for Stereo Matching](http://openaccess.thecvf.com/content_cvpr_2016/html/Luo_Efficient_Deep_Learning_CVPR_2016_paper.html) | CVPR | [code](https://github.com/haojeng-wang/dl_stereo_matching) | 7 |
| [Globally Optimal Manhattan Frame Estimation in Real-Time](http://openaccess.thecvf.com/content_cvpr_2016/html/Joo_Globally_Optimal_Manhattan_CVPR_2016_paper.html) | CVPR | [code](https://github.com/Kyungdon/mf_estimation) | 7 |
| [Where to Look: Focus Regions for Visual Question Answering](http://openaccess.thecvf.com/content_cvpr_2016/html/Shih_Where_to_Look_CVPR_2016_paper.html) | CVPR | [code](https://github.com/kevjshih/wtl_vqa) | 7 |
| [Detecting Migrating Birds at Night](http://openaccess.thecvf.com/content_cvpr_2016/html/Huang_Detecting_Migrating_Birds_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jbhuang0604/BirdDetection) | 7 |
| [Unsupervised Learning From Narrated Instruction Videos](http://openaccess.thecvf.com/content_cvpr_2016/html/Alayrac_Unsupervised_Learning_From_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jalayrac/instructionVideos) | 7 |
| [Efficient and Robust Color Consistency for Community Photo Collections](http://openaccess.thecvf.com/content_cvpr_2016/html/Park_Efficient_and_Robust_CVPR_2016_paper.html) | CVPR | [code](https://github.com/syncle/photo_consistency) | 7 |
| [Recurrent Attentional Networks for Saliency Detection](http://openaccess.thecvf.com/content_cvpr_2016/html/Kuen_Recurrent_Attentional_Networks_CVPR_2016_paper.html) | CVPR | [code](https://github.com/zhangxiaoning666/PAGR) | 7 |
| [3D Shape Attributes](http://openaccess.thecvf.com/content_cvpr_2016/html/Fouhey_3D_Shape_Attributes_CVPR_2016_paper.html) | CVPR | [code](https://github.com/petermalcolm/estimate3DStep) | 6 |
| [Beyond Local Search: Tracking Objects Everywhere With Instance-Specific Proposals](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhu_Beyond_Local_Search_CVPR_2016_paper.html) | CVPR | [code](https://github.com/GaoCode/EBT) | 5 |
| [Functional Faces: Groupwise Dense Correspondence Using Functional Maps](http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Functional_Faces_Groupwise_CVPR_2016_paper.html) | CVPR | [code](https://github.com/cazhang/funcFaces) | 5 |
| [Visual Tracking Using Attention-Modulated Disintegration and Integration](http://openaccess.thecvf.com/content_cvpr_2016/html/Choi_Visual_Tracking_Using_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jongwon20000/SCT) | 5 |
| [Improving Human Action Recognition by Non-Action Classification](http://openaccess.thecvf.com/content_cvpr_2016/html/Wang_Improving_Human_Action_CVPR_2016_paper.html) | CVPR | [code](https://github.com/yangwangx/NonActionShot) | 4 |
| [Prior-Less Compressible Structure From Motion](http://openaccess.thecvf.com/content_cvpr_2016/html/Kong_Prior-Less_Compressible_Structure_CVPR_2016_paper.html) | CVPR | [code](https://github.com/kongchen1992/compressible-sfm) | 4 |
| [DenseCap: Fully Convolutional Localization Networks for Dense Captioning](http://openaccess.thecvf.com/content_cvpr_2016/html/Johnson_DenseCap_Fully_Convolutional_CVPR_2016_paper.html) | CVPR | [code](https://github.com/rampage644/densecap-tensorflow) | 4 |
| [Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization](http://openaccess.thecvf.com/content_cvpr_2016/html/Lu_Tensor_Robust_Principal_CVPR_2016_paper.html) | CVPR | [code](https://github.com/canyilu/Tensor-Robust-Principal-Component-Analysis-TRPCA) | 4 |
| [Force From Motion: Decoding Physical Sensation in a First Person Video](http://openaccess.thecvf.com/content_cvpr_2016/html/Park_Force_From_Motion_CVPR_2016_paper.html) | CVPR | [code](https://github.com/jyhjinghwang/Force_from_Motion_Gravity_Models) | 3 |
| [Context-Aware Gaussian Fields for Non-Rigid Point Set Registration](http://openaccess.thecvf.com/content_cvpr_2016/html/Wang_Context-Aware_Gaussian_Fields_CVPR_2016_paper.html) | CVPR | [code](https://github.com/gwang-cv/CA-LapGF-Demo) | 3 |
| [Using Spatial Order to Boost the Elimination of Incorrect Feature Matches](http://openaccess.thecvf.com/content_cvpr_2016/html/Talker_Using_Spatial_Order_CVPR_2016_paper.html) | CVPR | [code](https://github.com/liortalker/SpatialOrder) | 3 |
| [Fast Algorithms for Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2016/html/Lavin_Fast_Algorithms_for_CVPR_2016_paper.html) | CVPR | [code](https://github.com/istoony/winograd-convolutional-nn) | 3 |

<div align="right">
<b><a href="#----">↥ Back to top</a></b>
</div>

## 2015
| Title | Conf | Code | Stars |
|:--------|:--------:|:--------:|:--------:|
| [Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks](https://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf) | NIPS | [code](https://github.com/facebookresearch/Detectron) | 18356 |
| [Fast R-CNN](http://openaccess.thecvf.com/content_iccv_2015/html/Girshick_Fast_R-CNN_ICCV_2015_paper.html) | ICCV | [code](https://github.com/facebookresearch/Detectron) | 18356 |
| [Conditional Random Fields as Recurrent Neural Networks](http://openaccess.thecvf.com/content_iccv_2015/html/Zheng_Conditional_Random_Fields_ICCV_2015_paper.html) | ICCV | [code](https://github.com/torrvision/crfasrnn) | 1189 |
| [Fully Convolutional Networks for Semantic Segmentation](http://openaccess.thecvf.com/content_cvpr_2015/html/Long_Fully_Convolutional_Networks_2015_CVPR_paper.html) | CVPR | [code](https://github.com/shekkizh/FCN.tensorflow) | 911 |
| [Learning to Track: Online Multi-Object Tracking by Decision Making](http://openaccess.thecvf.com/content_iccv_2015/html/Xiang_Learning_to_Track_ICCV_2015_paper.html) | ICCV | [code](https://github.com/yuxng/MDP_Tracking) | 308 |
| [Learning to Compare Image Patches via Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2015/html/Zagoruyko_Learning_to_Compare_2015_CVPR_paper.html) | CVPR | [code](https://github.com/szagoruyko/cvpr15deepcompare) | 300 |
| [Learning Deconvolution Network for Semantic Segmentation](http://openaccess.thecvf.com/content_iccv_2015/html/Noh_Learning_Deconvolution_Network_ICCV_2015_paper.html) | ICCV | [code](https://github.com/HyeonwooNoh/DeconvNet) | 296 |
| [Single Image Super-Resolution From Transformed Self-Exemplars](http://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html) | CVPR | [code](https://github.com/jbhuang0604/SelfExSR) | 289 |
| [Sequence to Sequence - Video to Text](http://openaccess.thecvf.com/content_iccv_2015/html/Venugopalan_Sequence_to_Sequence_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jazzsaxmafia/video_to_sequence) | 239 |
| [Deep Colorization](http://openaccess.thecvf.com/content_iccv_2015/html/Cheng_Deep_Colorization_ICCV_2015_paper.html) | ICCV | [code](https://github.com/richzhang/colorization-pytorch) | 198 |
| [Deep Neural Decision Forests](http://openaccess.thecvf.com/content_iccv_2015/html/Kontschieder_Deep_Neural_Decision_ICCV_2015_paper.html) | ICCV | [code](https://github.com/chrischoy/fully-differentiable-deep-ndf-tf) | 192 |
| [Hierarchical Convolutional Features for Visual Tracking](http://openaccess.thecvf.com/content_iccv_2015/html/Ma_Hierarchical_Convolutional_Features_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jbhuang0604/CF2) | 179 |
| [Render for CNN: Viewpoint Estimation in Images Using CNNs Trained With Rendered 3D Model Views](http://openaccess.thecvf.com/content_iccv_2015/html/Su_Render_for_CNN_ICCV_2015_paper.html) | ICCV | [code](https://github.com/ShapeNet/RenderForCNN) | 176 |
| [Realtime Edge-Based Visual Odometry for a Monocular Camera](http://openaccess.thecvf.com/content_iccv_2015/html/Tarrio_Realtime_Edge-Based_Visual_ICCV_2015_paper.html) | ICCV | [code](https://github.com/JuanTarrio/rebvo) | 175 |
| [Understanding Deep Image Representations by Inverting Them](http://openaccess.thecvf.com/content_cvpr_2015/html/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.html) | CVPR | [code](https://github.com/aravindhm/deep-goggle) | 154 |
| [Context-Aware CNNs for Person Head Detection](http://openaccess.thecvf.com/content_iccv_2015/html/Vu_Context-Aware_CNNs_for_ICCV_2015_paper.html) | ICCV | [code](https://github.com/aosokin/cnn_head_detection) | 153 |
| [Show and Tell: A Neural Image Caption Generator](http://openaccess.thecvf.com/content_cvpr_2015/html/Vinyals_Show_and_Tell_2015_CVPR_paper.html) | CVPR | [code](https://github.com/KranthiGV/Pretrained-Show-and-Tell-model) | 141 |
| [Face Alignment by Coarse-to-Fine Shape Searching](http://openaccess.thecvf.com/content_cvpr_2015/html/Zhu_Face_Alignment_by_2015_CVPR_paper.html) | CVPR | [code](https://github.com/zhusz/CVPR15-CFSS) | 140 |
| [An Improved Deep Learning Architecture for Person Re-Identification](http://openaccess.thecvf.com/content_cvpr_2015/html/Ahmed_An_Improved_Deep_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Ning-Ding/Implementation-CVPR2015-CNN-for-ReID) | 127 |
| [FaceNet: A Unified Embedding for Face Recognition and Clustering](http://openaccess.thecvf.com/content_cvpr_2015/html/Schroff_FaceNet_A_Unified_2015_CVPR_paper.html) | CVPR | [code](https://github.com/liorshk/facenet_pytorch) | 124 |
| [Depth-Based Hand Pose Estimation: Data, Methods, and Challenges](http://openaccess.thecvf.com/content_iccv_2015/html/Supancic_Depth-Based_Hand_Pose_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jsupancic/deep_hand_pose) | 121 |
| [DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time](http://openaccess.thecvf.com/content_cvpr_2015/html/Newcombe_DynamicFusion_Reconstruction_and_2015_CVPR_paper.html) | CVPR | [code](https://github.com/mihaibujanca/dynamicfusion) | 118 |
| [Massively Parallel Multiview Stereopsis by Surface Normal Diffusion](http://openaccess.thecvf.com/content_iccv_2015/html/Galliani_Massively_Parallel_Multiview_ICCV_2015_paper.html) | ICCV | [code](https://github.com/kysucix/gipuma) | 105 |
| [Learning to Propose Objects](http://openaccess.thecvf.com/content_cvpr_2015/html/Krahenbuhl_Learning_to_Propose_2015_CVPR_paper.html) | CVPR | [code](https://github.com/philkr/lpo) | 91 |
| [Learning Spatially Regularized Correlation Filters for Visual Tracking](http://openaccess.thecvf.com/content_iccv_2015/html/Danelljan_Learning_Spatially_Regularized_ICCV_2015_paper.html) | ICCV | [code](https://github.com/lifeng9472/STRCF) | 86 |
| [A Convolutional Neural Network Cascade for Face Detection](http://openaccess.thecvf.com/content_cvpr_2015/html/Li_A_Convolutional_Neural_2015_CVPR_paper.html) | CVPR | [code](https://github.com/mks0601/A-Convolutional-Neural-Network-Cascade-for-Face-Detection) | 85 |
| [Discriminative Learning of Deep Convolutional Feature Point Descriptors](http://openaccess.thecvf.com/content_iccv_2015/html/Simo-Serra_Discriminative_Learning_of_ICCV_2015_paper.html) | ICCV | [code](https://github.com/etrulls/deepdesc-release) | 77 |
| [Unsupervised Visual Representation Learning by Context Prediction](http://openaccess.thecvf.com/content_iccv_2015/html/Doersch_Unsupervised_Visual_Representation_ICCV_2015_paper.html) | ICCV | [code](https://github.com/cdoersch/deepcontext) | 73 |
| [Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images](http://openaccess.thecvf.com/content_cvpr_2015/html/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.html) | CVPR | [code](https://github.com/abhijitbendale/OSDN) | 71 |
| [Deep Filter Banks for Texture Recognition and Segmentation](http://openaccess.thecvf.com/content_cvpr_2015/html/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.html) | CVPR | [code](https://github.com/mcimpoi/deep-fbanks) | 68 |
| [Saliency Detection by Multi-Context Deep Learning](http://openaccess.thecvf.com/content_cvpr_2015/html/Zhao_Saliency_Detection_by_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Robert0812/deepsaldet) | 66 |
| [Multi-Objective Convolutional Learning for Face Labeling](http://openaccess.thecvf.com/content_cvpr_2015/html/Liu_Multi-Objective_Convolutional_Learning_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Liusifei/Face_Parsing_2016) | 55 |
| [Finding Action Tubes](http://openaccess.thecvf.com/content_cvpr_2015/html/Gkioxari_Finding_Action_Tubes_2015_CVPR_paper.html) | CVPR | [code](https://github.com/gkioxari/ActionTubes) | 51 |
| [Category-Specific Object Reconstruction From a Single Image](http://openaccess.thecvf.com/content_cvpr_2015/html/Kar_Category-Specific_Object_Reconstruction_2015_CVPR_paper.html) | CVPR | [code](https://github.com/akar43/CategoryShapes) | 48 |
| [Convolutional Color Constancy](http://openaccess.thecvf.com/content_iccv_2015/html/Barron_Convolutional_Color_Constancy_ICCV_2015_paper.html) | ICCV | [code](https://github.com/yuanming-hu/fc4) | 47 |
| [Face Flow](http://openaccess.thecvf.com/content_iccv_2015/html/Snape_Face_Flow_ICCV_2015_paper.html) | ICCV | [code](https://github.com/shashanktyagi/HyperFace-TensorFlow-implementation) | 45 |
| [P-CNN: Pose-Based CNN Features for Action Recognition](http://openaccess.thecvf.com/content_iccv_2015/html/Cheron_P-CNN_Pose-Based_CNN_ICCV_2015_paper.html) | ICCV | [code](https://github.com/gcheron/P-CNN) | 45 |
| [Learning From Massive Noisy Labeled Data for Image Classification](http://openaccess.thecvf.com/content_cvpr_2015/html/Xiao_Learning_From_Massive_2015_CVPR_paper.html) | CVPR | [code](https://github.com/Cysu/noisy_label) | 45 |
| [Image Specificity](http://openaccess.thecvf.com/content_cvpr_2015/html/Jas_Image_Specificity_2015_CVPR_paper.html) | CVPR | [code](https://github.com/burliEnterprises/tensorflow-image-classifier) | 40 |
| [Predicting Depth, Surface Normals and Semantic Labels With a Common Multi-Scale Convolutional Architecture](http://openaccess.thecvf.com/content_iccv_2015/html/Eigen_Predicting_Depth_Surface_ICCV_2015_paper.html) | ICCV | [code](https://github.com/Rostifar/NYUDepthNet) | 35 |
| [Neural Activation Constellations: Unsupervised Part Model Discovery With Convolutional Networks](http://openaccess.thecvf.com/content_iccv_2015/html/Simon_Neural_Activation_Constellations_ICCV_2015_paper.html) | ICCV | [code](https://github.com/cvjena/part_constellation_models) | 35 |
| [VQA: Visual Question Answering](http://openaccess.thecvf.com/content_iccv_2015/html/Antol_VQA_Visual_Question_ICCV_2015_paper.html) | ICCV | [code](https://github.com/imatge-upc/vqa-2016-cvprw) | 35 |
| [Mid-Level Deep Pattern Mining](http://openaccess.thecvf.com/content_cvpr_2015/html/Li_Mid-Level_Deep_Pattern_2015_CVPR_paper.html) | CVPR | [code](https://github.com/yaoliUoA/MDPM) | 34 |
| [PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization](http://openaccess.thecvf.com/content_iccv_2015/html/Kendall_PoseNet_A_Convolutional_ICCV_2015_paper.html) | ICCV | [code](https://github.com/futurely/deep-camera-relocalization) | 34 |
| [Parsimonious Labeling](http://openaccess.thecvf.com/content_iccv_2015/html/Dokania_Parsimonious_Labeling_ICCV_2015_paper.html) | ICCV | [code](https://github.com/aimerykong/Pixel-Attentional-Gating) | 33 |
| [Car That Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models](http://openaccess.thecvf.com/content_iccv_2015/html/Jain_Car_That_Knows_ICCV_2015_paper.html) | ICCV | [code](https://github.com/asheshjain399/ICCV2015_Brain4Cars) | 33 |
| [Recurrent Convolutional Neural Network for Object Recognition](http://openaccess.thecvf.com/content_cvpr_2015/html/Liang_Recurrent_Convolutional_Neural_2015_CVPR_paper.html) | CVPR | [code](https://github.com/JimLee4530/RCNN) | 32 |
| [TILDE: A Temporally Invariant Learned DEtector](http://openaccess.thecvf.com/content_cvpr_2015/html/Verdie_TILDE_A_Temporally_2015_CVPR_paper.html) | CVPR | [code](https://github.com/kmyid/TILDE) | 30 |
| [In Defense of Color-Based Model-Free Tracking](http://openaccess.thecvf.com/content_cvpr_2015/html/Possegger_In_Defense_of_2015_CVPR_paper.html) | CVPR | [code](https://github.com/foolwood/DAT) | 30 |
| [Fast Bilateral-Space Stereo for Synthetic Defocus](http://openaccess.thecvf.com/content_cvpr_2015/html/Barron_Fast_Bilateral-Space_Stereo_2015_CVPR_paper.html) | CVPR | [code](https://github.com/tvandenzegel/fast_bilateral_space_stereo) | 29 |
| [Phase-Based Frame Interpolation for Video](http://openaccess.thecvf.com/content_cvpr_2015/html/Meyer_Phase-Based_Frame_Interpolation_2015_CVPR_paper.html) | CVPR | [code](https://github.com/owang/PhaseBasedInterpolation) | 28 |
| [Understanding Tools: Task-Oriented Object Modeling, Learning and Recognition](http://openaccess.thecvf.com/content_cvpr_2015/html/Zhu_Understanding_Tools_Task-Oriented_2015_CVPR_paper.html) | CVPR | [code](https://github.com/xiaozhuchacha/Kinect2Toolbox) | 27 |
| [Deeply Learned Attributes for Crowded Scene Understanding](http://openaccess.thecvf.com/content_cvpr_2015/html/Shao_Deeply_Learned_Attributes_2015_CVPR_paper.html) | CVPR | [code](https://github.com/amandajshao/www_deep_crowd) | 27 |
| [Unconstrained 3D Face Reconstruction](http://openaccess.thecvf.com/content_cvpr_2015/html/Roth_Unconstrained_3D_Face_2015_CVPR_paper.html) | CVPR | [code](https://github.com/NJUPole/CVPR2015-Unconstrained-3D-Face-Reconstruction) | 26 |
| [Viewpoints and Keypoints](http://openaccess.thecvf.com/content_cvpr_2015/html/Tulsiani_Viewpoints_and_Keypoints_2015_CVPR_paper.html) | CVPR | [code](https://github.com/shubhtuls/ViewpointsAndKeypoints) | 25 |
| [Holistically-Nested Edge Detection](http://openaccess.thecvf.com/content_iccv_2015/html/Xie_Holistically-Nested_Edge_Detection_ICCV_2015_paper.html) | ICCV | [code](https://github.com/s9xie/hed_release-deprecated) | 25 |
| [Going Deeper With Convolutions](http://openaccess.thecvf.com/content_cvpr_2015/html/Szegedy_Going_Deeper_With_2015_CVPR_paper.html) | CVPR | [code](https://github.com/nutszebra/googlenet) | 25 |
| [Reconstructing the World* in Six Days *(As Captured by the Yahoo 100 Million Image Dataset)](http://openaccess.thecvf.com/content_cvpr_2015/html/Heinly_Reconstructing_the_World_2015_CVPR_paper.html) | CVPR | [code](https://github.com/jheinly/streaming_connected_component_discovery) | 25 |
| [Data-Driven 3D Voxel Patterns for Object Category Recognition](http://openaccess.thecvf.com/content_cvpr_2015/html/Xiang_Data-Driven_3D_Voxel_2015_CVPR_paper.html) | CVPR | [code](https://github.com/yuxng/3DVP) | 24 |
| [L0TV: A New Method for Image Restoration in the Presence of Impulse Noise](http://openaccess.thecvf.com/content_cvpr_2015/html/Yuan_L0TV_A_New_2015_CVPR_paper.html) | CVPR | [code](https://github.com/peisuke/L0TV) | 22 |
| [Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues](http://openaccess.thecvf.com/content_cvpr_2015/html/Zhang_Beyond_Frontal_Faces_2015_CVPR_paper.html) | CVPR | [code](https://github.com/sciencefans/Beyond-Frontal-Faces) | 21 |
| [Understanding Deep Features With Computer-Generated Imagery](http://openaccess.thecvf.com/content_iccv_2015/html/Aubry_Understanding_Deep_Features_ICCV_2015_paper.html) | ICCV | [code](https://github.com/mathieuaubry/features_analysis) | 19 |
| [HICO: A Benchmark for Recognizing Human-Object Interactions in Images](http://openaccess.thecvf.com/content_iccv_2015/html/Chao_HICO_A_Benchmark_ICCV_2015_paper.html) | ICCV | [code](https://github.com/ywchao/hico_benchmark) | 18 |
| [Structured Feature Selection](http://openaccess.thecvf.com/content_iccv_2015/html/Gao_Structured_Feature_Selection_ICCV_2015_paper.html) | ICCV | [code](https://github.com/csliangdu/FSASL) | 17 |
| [Learning Large-Scale Automatic Image Colorization](http://openaccess.thecvf.com/content_iccv_2015/html/Deshpande_Learning_Large-Scale_Automatic_ICCV_2015_paper.html) | ICCV | [code](https://github.com/aditya12agd5/iccv15_lscolorization) | 17 |
| [Semantic Component Analysis](http://openaccess.thecvf.com/content_iccv_2015/html/Murdock_Semantic_Component_Analysis_ICCV_2015_paper.html) | ICCV | [code](https://github.com/aubry74/visual-word2vec) | 17 |
| [Simultaneous Feature Learning and Hash Coding With Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2015/html/Lai_Simultaneous_Feature_Learning_2015_CVPR_paper.html) | CVPR | [code](https://github.com/HYPJUDY/caffe-dnnh) | 16 |
| [3D Object Reconstruction From Hand-Object Interactions](http://openaccess.thecvf.com/content_iccv_2015/html/Tzionas_3D_Object_Reconstruction_ICCV_2015_paper.html) | ICCV | [code](https://github.com/dimtziwnas/InHandScanningICCV15_Reconstruction) | 15 |
| [Learning Temporal Embeddings for Complex Video Analysis](http://openaccess.thecvf.com/content_iccv_2015/html/Ramanathan_Learning_Temporal_Embeddings_ICCV_2015_paper.html) | ICCV | [code](https://github.com/eevignesh/videovector) | 14 |
| [Learning to See by Moving](http://openaccess.thecvf.com/content_iccv_2015/html/Agrawal_Learning_to_See_ICCV_2015_paper.html) | ICCV | [code](https://github.com/pulkitag/learning-to-see-by-moving) | 14 |
| [Reflection Removal Using Ghosting Cues](http://openaccess.thecvf.com/content_cvpr_2015/html/Shih_Reflection_Removal_Using_2015_CVPR_paper.html) | CVPR | [code](https://github.com/thongnguyendev/single_image) | 14 |
| [Where to Buy It: Matching Street Clothing Photos in Online Shops](http://openaccess.thecvf.com/content_iccv_2015/html/Kiapour_Where_to_Buy_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jfuentescpp/where_to_buy_it) | 14 |
| [Oriented Edge Forests for Boundary Detection](http://openaccess.thecvf.com/content_cvpr_2015/html/Hallman_Oriented_Edge_Forests_2015_CVPR_paper.html) | CVPR | [code](https://github.com/samhallman/oef) | 13 |
| [A Large-Scale Car Dataset for Fine-Grained Categorization and Verification](http://openaccess.thecvf.com/content_cvpr_2015/html/Yang_A_Large-Scale_Car_2015_CVPR_paper.html) | CVPR | [code](https://github.com/bogger/caffe-multigpu) | 11 |
| [Appearance-Based Gaze Estimation in the Wild](http://openaccess.thecvf.com/content_cvpr_2015/html/Zhang_Appearance-Based_Gaze_Estimation_2015_CVPR_paper.html) | CVPR | [code](https://github.com/trakaros/MPIIGaze) | 10 |
| [Learning a Descriptor-Specific 3D Keypoint Detector](http://openaccess.thecvf.com/content_iccv_2015/html/Salti_Learning_a_Descriptor-Specific_ICCV_2015_paper.html) | ICCV | [code](https://github.com/CVLAB-Unibo/Keypoint-Learning) | 10 |
| [Robust Image Filtering Using Joint Static and Dynamic Guidance](http://openaccess.thecvf.com/content_cvpr_2015/html/Ham_Robust_Image_Filtering_2015_CVPR_paper.html) | CVPR | [code](https://github.com/bsham/SDFilter) | 10 |
| [Partial Person Re-Identification](http://openaccess.thecvf.com/content_iccv_2015/html/Zheng_Partial_Person_Re-Identification_ICCV_2015_paper.html) | ICCV | [code](https://github.com/lingxiao-he/Deep-Spatial-Feature-Reconstruction-for-Partial-Person-Re-identification) | 9 |
| [High Quality Structure From Small Motion for Rolling Shutter Cameras](http://openaccess.thecvf.com/content_iccv_2015/html/Im_High_Quality_Structure_ICCV_2015_paper.html) | ICCV | [code](https://github.com/sunghoonim/SfSM) | 9 |
| [Boosting Object Proposals: From Pascal to COCO](http://openaccess.thecvf.com/content_iccv_2015/html/Pont-Tuset_Boosting_Object_Proposals_ICCV_2015_paper.html) | ICCV | [code](https://github.com/jponttuset/BOP) | 8 |
| [Convolutional Channel Features](http://openaccess.thecvf.com/content_iccv_2015/html/Yang_Convolutional_Channel_Features_ICCV_2015_paper.html) | ICCV | [code](https://github.com/byangderek/CCF) | 8 |
| [Live Repetition Counting](http://openaccess.thecvf.com/content_iccv_2015/html/Levy_Live_Repetition_Counting_ICCV_2015_paper.html) | ICCV | [code](https://github.com/tomrunia/DeepRepICCV2015) | 8 |
| [Unsupervised Learning of Visual Representations Using 
Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FWang_Unsupervised_Learning_of_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fcoreylynch\u002Funsupervised-triplet-embedding) | 8 | \n| [Supervised Discrete Hashing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FShen_Supervised_Discrete_Hashing_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgoukoutaki\u002FFSDH) | 7 | \n| [Multi-View Convolutional Neural Networks for 3D Shape Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FSu_Multi-View_Convolutional_Neural_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fshawnxu1318\u002FMVCNN-Multi-View-Convolutional-Neural-Networks) | 7 | \n| [Simpler Non-Parametric Methods Provide as Good or Better Results to Multiple-Instance Learning](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FVenkatesan_Simpler_Non-Parametric_Methods_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fragavvenkatesan\u002Fnp-mil) | 7 | \n| [Finding Distractors In Images](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FFried_Finding_Distractors_In_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fohadf\u002Fdistractors) | 7 | \n| [Piecewise Flat Embedding for Image Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FYu_Piecewise_Flat_Embedding_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fchaoweifang\u002FPFE) | 7 | \n| [Long-Term Correlation Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FMa_Long-Term_Correlation_Tracking_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmalreddysid\u002Flong-term-correlation-tracking) | 6 | \n| [Towards Open World 
Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FBendale_Towards_Open_World_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fabhijitbendale\u002FOWR) | 6 | \n| [Pooled Motion Features for First-Person Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FRyoo_Pooled_Motion_Features_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FUSCDataScience\u002Fhadoop-pot) | 6 | \n| [Simultaneous Deep Transfer Across Domains and Tasks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FTzeng_Simultaneous_Deep_Transfer_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fmahfujau\u002Fdomain_adaptation_iccv15) | 6 | \n| [What Makes an Object Memorable?](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FDubey_What_Makes_an_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FqixuanHou\u002FMapping-My-Break) | 5 | \n| [Mining Semantic Affordances of Visual Object Categories](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FChao_Mining_Semantic_Affordances_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fywchao\u002Fsemantic_affordance) | 5 | \n| [Dense Semantic Correspondence Where Every Pixel is a Classifier](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FBristow_Dense_Semantic_Correspondence_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fhbristow\u002Fepic) | 5 | \n| [Segment Graph Based Image Filtering: Fast Structure-Preserving Smoothing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FZhang_Segment_Graph_Based_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ffeihuzhang\u002FSGF) | 5 | \n| [Fast Randomized Singular Value Thresholding for Nuclear Norm 
Minimization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FOh_Fast_Randomized_Singular_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FHlG4399\u002FFRSVT) | 5 | \n| [Unsupervised Generation of a Viewpoint Annotated Car Dataset From Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FSedaghat_Unsupervised_Generation_of_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Funsup-car-dataset) | 5 | \n| [Multi-Label Cross-Modal Retrieval](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FRanjan_Multi-Label_Cross-Modal_Retrieval_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FViresh-R\u002Fml-CCA) | 4 | \n| [Superdifferential Cuts for Binary Energies](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FTaniai_Superdifferential_Cuts_for_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ft-taniai\u002FSDC_CVPR2015) | 4 | \n| [Pose Induction for Novel Object Categories](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FTulsiani_Pose_Induction_for_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fshubhtuls\u002FposeInduction) | 4 | \n| [Efficient Minimal-Surface Regularization of Perspective Depth Maps in Variational Stereo](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FGraber_Efficient_Minimal-Surface_Regularization_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FVLOGroup\u002Fsurface-area-regularization) | 4 | \n| [Low-Rank Matrix Factorization Under General Mixture Noise Distributions](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FCao_Low-Rank_Matrix_Factorization_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fxiangyongcao\u002FPMoEP) | 4 | \n| [Robust Saliency Detection via Regularized Random 
Walks Ranking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FLi_Robust_Saliency_Detection_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyuanyc06\u002Frr) | 3 | \n| [Simultaneous Video Defogging and Stereo Reconstruction](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fhtml\u002FLi_Simultaneous_Video_Defogging_2015_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FLashuk1729\u002FDIP-Project-Video-Dehazing) | 3 | \n| [Hyperspectral Super-Resolution by Coupled Spectral Unmixing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FLanaras_Hyperspectral_Super-Resolution_by_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Flanha\u002FSupResPALM) | 3 | \n| [Oriented Object Proposals](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FHe_Oriented_Object_Proposals_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Ffrutuozo29\u002FWebServiceRESTFul) | 3 | \n| [kNN Hashing With Factorized Neighborhood Representation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FDing_kNN_Hashing_With_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002Fdooook\u002FkNN-hashing) | 3 | \n| [Minimum Barrier Salient Object Detection at 80 FPS](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fhtml\u002FZhang_Minimum_Barrier_Salient_ICCV_2015_paper.html) | ICCV | [code](https:\u002F\u002Fgithub.com\u002FcoderSkyChen\u002FMBS_Cplus_c-) | 3 |\n\n\u003Cdiv align=\"right\">\n\u003Cb>\u003Ca href=\"#----\">↥ Back to top\u003C\u002Fa>\u003C\u002Fb>\n\u003C\u002Fdiv>\n\n## 2014\n| Title | Conference | Code | Stars |\n|:--------|:--------:|:--------:|:--------:|\n| [Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FGirshick_Rich_Feature_Hierarchies_2014_CVPR_paper.html) | CVPR | 
[code](https:\u002F\u002Fgithub.com\u002Frbgirshick\u002Frcnn) | 1681 |\n| [Locally Optimized Product Quantization for Approximate Nearest Neighbor Search](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FKalantidis_Locally_Optimized_Product_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyahoo\u002Flopq) | 437 |\n| [Clothing Co-Parsing by Joint Image Segmentation and Labeling](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FYang_Clothing_Co-Parsing_by_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbearpaw\u002Fclothing-co-parsing) | 218 |\n| [Multiscale Combinatorial Grouping](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FArbelaez_Multiscale_Combinatorial_Grouping_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjponttuset\u002Fmcg) | 185 |\n| [Face Alignment at 3000 FPS via Regressing Local Binary Features](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FRen_Face_Alignment_at_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fluoyetx\u002Fface-alignment-at-3000fps) | 164 |\n| [Cross-Scale Cost Aggregation for Stereo Matching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZhang_Cross-Scale_Cost_Aggregation_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Frookiepig\u002FCrossScaleStereo) | 106 |\n| [Transfer Joint Matching for Unsupervised Domain Adaptation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FLong_Transfer_Joint_Matching_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FUSTCPCS\u002FCVPR2018_attention) | 67 |\n| [Deep Learning Face Representation From Predicting 10,000 Classes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FSun_Deep_Learning_Face_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjoyhuang9473\u002Fdeepid-implementation) | 62 |\n| [BING: Binarized Normed Gradients for Objectness Estimation at 300fps](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FCheng_BING_Binarized_Normed_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Falessandroferrari\u002FBING-Objectness) | 
44 |\n| [One Millisecond Face Alignment With an Ensemble of Regression Trees](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FKazemi_One_Millisecond_Face_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FjjrCN\u002FERT-GBDT_Face_Alignment) | 43 |\n| [3D Reconstruction From Accidental Motion](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FYu_3D_Reconstruction_from_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Ffyu\u002Ftiny) | 42 |\n| [Predicting Matchability](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FHartmann_Predicting_Matchability_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjacekm-git\u002FBetBoy) | 38 |\n| [Dense Semantic Image Segmentation With Objects and Attributes](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZheng_Dense_Semantic_Image_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fbittnt\u002FImageSpirit) | 28 |\n| [Scene-Independent Group Profiling in Crowd](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FShao_Scene-Independent_Group_Profiling_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Famandajshao\u002Fcrowd_group_profile) | 28 |\n| [Shrinkage Fields for Effective Image Restoration](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FSchmidt_Shrinkage_Fields_for_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fuschmidt83\u002Fshrinkage-fields) | 25 |\n| [Adaptive Color Attributes for Real-Time Visual Tracking](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FDanelljan_Adaptive_Color_Attributes_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmostafaizz\u002FColorTracker) | 25 |\n| [Minimal Scene Descriptions From Structure From Motion Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FCao_Minimal_Scene_Descriptions_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcaosong\u002Fminimal_scene) | 22 |\n| 
[Parallax-Tolerant Image Stitching](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZhang_Parallax-tolerant_Image_Stitching_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fgain2217\u002FRobust_Elastic_Warping) | 20 |\n| [Learning Mid-Level Filters for Person Re-Identification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FZhao_Learning_Mid-level_Filters_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FRobert0812\u002Fmidfilter_reid) | 20 |\n| [Fast Edge-Preserving PatchMatch for Large Displacement Optical Flow](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FBao_Fast_Edge-Preserving_PatchMatch_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flinchaobao\u002FEPPM) | 18 |\n| [Product Sparse Coding](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FGe_Product_Sparse_Coding_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fksopyla\u002FCudaDotProd) | 16 |\n| [Convolutional Neural Networks for No-Reference Image Quality Assessment](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FKang_Convolutional_Neural_Networks_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Flidq92\u002FCNNIQA) | 16 |\n| [Seeing 3D Chairs: Exemplar Part-Based 2D-3D Alignment Using a Large Dataset of CAD Models](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FAubry_Seeing_3D_Chairs_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmathieuaubry\u002Fseeing3Dchairs) | 15 |\n| [StoryGraphs: Visualizing Character Interactions as a Timeline](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FTapaswi_StoryGraphs_Visualizing_Character_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmakarandtapaswi\u002FStoryGraphs_CVPR2014) | 14 |\n| [Nonparametric Part Transfer for Fine-Grained Recognition](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FGoring_Nonparametric_Part_Transfer_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fcvjena\u002Ffinegrained-cvpr2014) | 13 |\n| 
[Scalable Multitask Representation Learning for Scene Classification](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FLapin_Scalable_Multitask_Representation_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fmlapin\u002Fcvpr14mtl) | 11 |\n| [Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FTang_Investigating_Haze-relevant_Features_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fzlinker\u002Fhaze_2014) | 7 |\n| [Reconstructing PASCAL VOC](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FVicente_Reconstructing_PASCAL_VOC_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fyihui-he\u002Freconstructing-pascal-voc) | 6 |\n| [Collaborative Hashing](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FLiu_Collaborative_Hashing_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002F27359794\u002Flsh-collab-filtering) | 6 |\n| [Tell Me What You See and I Will Show You Where It Is](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FXu_Tell_Me_What_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002FMarkipTheMudkip\u002Fin-class-project-2) | 6 |\n| [Salient Region Detection via High-Dimensional Color Transform](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2014\u002Fhtml\u002FKim_Salient_Region_Detection_2014_CVPR_paper.html) | CVPR | [code](https:\u002F\u002Fgithub.com\u002Fjhkim89\u002FSaliency-HDCT) | 6 |\n\n\u003Cdiv align=\"right\">\n\u003Cb>\u003Ca href=\"#----\">↥ Back to top\u003C\u002Fa>\u003C\u002Fb>\n\u003C\u002Fdiv>\n\n## 2013\n| Title | Conference | Code | Stars |\n|:--------|:--------:|:--------:|:--------:|\n| [A Generic Decentralized Trust Management Framework](http:\u002F\u002Fwww.cs.technion.ac.il\u002Fusers\u002Fwwwb\u002Fcgi-bin\u002Ftr-get.cgi\u002F2012\u002FMSC\u002FMSC-2012-22.pdf) | SPE | [code](https:\u002F\u002Fgithub.com\u002Famitport\u002Fgraphpack) | 6 |","# PWC (Papers With Code) Quick-Start Guide\n\n**Overview**:\n`pwc` is a continuously updated open-source list that organizes top machine-learning papers and their code implementations by year and star count. It helps developers quickly locate high-quality academic work that ships with a code reproduction (e.g., papers from CVPR, NIPS, ECCV, and other venues).\n\n> **Note**: This project is primarily a curated list, not a software library that needs to be built or installed. The guide below shows how to obtain and use the resource.\n\n## Prerequisites\n\nThe project has no complex system dependencies; a basic development environment is enough to browse and clone it.\n\n*   **Operating system**: Windows, macOS, or Linux\n*   **Dependencies**:\n    *   `Git`: to clone the repository\n    *   A browser: to view the rendered Markdown tables or open the paper\u002Fcode links directly\n*   **Network notes**:\n    *   The upstream repository is hosted on GitHub, so access from mainland China can be slow; consider a **Gitee mirror** (if one exists) or a Git proxy.\n    *   Paper links point to arXiv or conference sites, and code links point to the original authors' repositories (e.g., on GitHub), so make sure these are reachable.\n\n## Installation\n\nThe project exists as a plain Git repository; just clone it locally.\n\n### 1. Clone the repository\n\nOpen a terminal (Terminal or CMD) and run:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fzziz\u002Fpwc.git\n```\n\n**Mirror option for slow connections** (the mirror address below is only an example; if it is unavailable, use the official source):\n\n```bash\n# Example: clone from a mirror (verify that the mirror is kept in sync)\ngit clone https:\u002F\u002Fgitee.com\u002Fmirror\u002Fpwc.git \n# Or use a shallow clone to reduce the download size\ngit clone --depth 1 https:\u002F\u002Fgithub.com\u002Fzziz\u002Fpwc.git\n```\n\n### 2. Enter the directory\n\n```bash\ncd pwc\n```\n\n## Basic usage\n\nOnce you have the repository, there are two main ways to use it:\n\n### Option 1: Browse locally (recommended)\n\nOpen `README.md` in a local editor or IDE (e.g., VS Code). The file lists papers by year (2018, 2017, ...), with title, conference, code link, and star count for each entry.\n\n*   **Find a paper**: use the editor's search (`Ctrl+F` \u002F `Cmd+F`) for a keyword (e.g., \"Object Detection\" or \"StarGAN\").\n*   **Get the code**: click the link in the `[code]` column to jump straight to that paper's official code repository.\n\n### Option 2: Browse online\n\nIf you would rather not clone the repository, view the live list on GitHub:\n\n1.  Open the project page: [https:\u002F\u002Fgithub.com\u002Fzziz\u002Fpwc](https:\u002F\u002Fgithub.com\u002Fzziz\u002Fpwc)\n2.  Scroll to the year section you need (e.g., `## 2018`).\n3.  Click the links in the table to open the paper or its code repository.\n\n### Example: finding and running a model\n\nSuppose you want to study the 2018 **StarGAN** model:\n\n1.  Find the `StarGAN` entry in the **2018** section of `README.md`.\n2.  Click the link in the `[code]` column: `https:\u002F\u002Fgithub.com\u002Fyunjey\u002FStarGAN`.\n3.  
Go to that repository and follow its own `README` for installation and usage (typically something like):\n\n```bash\n# Typical workflow: clone the StarGAN repo, install its dependencies, then train\ngit clone https:\u002F\u002Fgithub.com\u002Fyunjey\u002FStarGAN.git\ncd StarGAN\npip install -r requirements.txt\npython main.py --mode train --dataset CelebA ...\n```\n\n> **Tip**: every `[code]` link in the `pwc` list points to an independent open-source project; follow each subproject's own documentation for environment setup and run commands.","An algorithm engineer at a computer-vision startup is looking for the best image-enhancement approach for low-light surveillance footage and urgently needs to reproduce state-of-the-art papers to validate technical feasibility.\n\n### Without pwc\n- **Slow literature search**: hours are spent bouncing between arXiv, GitHub, and conference sites just to confirm whether a paper (e.g., \"Learning to See in the Dark\") has official open-source code.\n- **Hard to judge code quality**: repositories found this way often lack star counts or conference backing, making it hard to tell an official implementation from an unreliable third-party reproduction.\n- **Lagging behind new work**: there is no systematic way to filter results by year or top venue (e.g., CVPR, NIPS), so key breakthroughs such as StarGAN or Deep Image Prior are easily missed.\n- **Costly reproduction**: without direct code links, the team must derive formulas and write code from scratch, stretching prototype validation by weeks.\n\n### With pwc\n- **One-stop lookup**: the year and venue index pinpoints a target paper and its high-star GitHub repository in seconds, including official implementations from groups such as NVIDIA.\n- **Quick credibility checks**: the listed venue (e.g., CVPR) and star count (e.g., 5000+) make it easy to identify mature, reliable implementations and reduce trial-and-error risk.\n- **Systematic surveys**: the per-year structure (2018, 2017, ...) makes it straightforward to trace how image generation and restoration techniques have evolved and to make sound technology choices.\n- **Faster delivery**: fine-tuning high-quality code found through pwc compresses weeks of reproduction work into days, accelerating product iteration.\n\nBy mapping papers directly to code, pwc frees algorithm engineers from tedious literature mining and turns \"finding code\" into \"using code\".","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzziz_pwc_7514d07f.png","zziz","Zaur Fataliyev","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzziz_aa6c61f8.jpg","Research Scientist @facebook","Meta","Vancouver, Canada","fvzaur@gmail.com","fvzaur",null,"https:\u002F\u002Fgithub.com\u002Fzziz",15350,2443,"2026-04-02T14:51:01",5,"","Not specified",{"notes":89,"python":87,"dependencies":90},"The pwc repository is not a runnable AI tool or code library; it is a community-maintained \"Papers With Code\" index. It organizes, by year and star count, a large number of computer-vision and machine-learning papers together with links to their official code repositories (e.g., StarGAN, PWC-Net). For concrete runtime requirements, dependencies, and installation instructions, follow the individual project links in the tables to the respective source repositories.",[],[35,15,52,14],[93,94,95,96,97,98,99,100],"machine-learning","paper","code","cvpr","nips","icml","iccv","eccv","2026-03-27T02:49:30.150509","2026-04-11T18:32:02.143853",[],[]]
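Because every data row of the pwc tables follows the same Markdown shape (`| [Title](paper_url) | CONF | [code](repo_url) | stars |`), the list can also be mined programmatically instead of searched by hand. The following is a minimal illustrative sketch, not part of the pwc project itself: it parses rows of that shape and lets you sort entries by star count. The sample rows are hard-coded; in practice you would read them from the cloned `README.md`.

```python
import re

# One data row of a pwc table:
# | [Title](paper_url) | CONF | [code](repo_url) | stars |
ROW = re.compile(
    r"^\|\s*\[(?P<title>[^\]]+)\]\((?P<paper>[^)]+)\)\s*\|"
    r"\s*(?P<conf>[^|]+?)\s*\|"
    r"\s*\[[^\]]+\]\((?P<code>[^)]+)\)\s*\|"
    r"\s*(?P<stars>\d+)\s*\|"
)

def parse_rows(markdown: str) -> list[dict]:
    """Extract title, conference, code URL, and star count from table rows."""
    entries = []
    for line in markdown.splitlines():
        m = ROW.match(line.strip())
        if m:  # header and separator rows simply fail to match
            entries.append({
                "title": m["title"],
                "conf": m["conf"],
                "code": m["code"],
                "stars": int(m["stars"]),
            })
    return entries

if __name__ == "__main__":
    # In practice: text = open("README.md", encoding="utf-8").read()
    text = (
        "| [Going Deeper With Convolutions](http://example.org/a) | CVPR "
        "| [code](https://github.com/nutszebra/googlenet) | 25 |\n"
        "| [Live Repetition Counting](http://example.org/b) | ICCV "
        "| [code](https://github.com/tomrunia/DeepRepICCV2015) | 8 |\n"
    )
    for row in sorted(parse_rows(text), key=lambda r: r["stars"], reverse=True):
        print(row["stars"], row["conf"], row["title"])
```

Sorting the parsed entries by `stars` reproduces, for any subset of rows, the ranking the list itself uses within each year.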