[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-hzwer--Awesome-Optical-Flow":3,"tool-hzwer--Awesome-Optical-Flow":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",146793,2,"2026-04-08T23:32:35",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 
## everything-claude-code (affaan-m/everything-claude-code)

Stars: 146,793 · Difficulty: 2 · Last commit: 2026-04-08 · Tags: Dev Framework, Agent, Language Model

everything-claude-code is a performance-oriented optimization system for AI coding assistants such as Claude Code, Codex, and Cursor. It is more than a set of configuration files: it is a complete framework refined through long practical use, targeting the core pain points AI agents hit in real development, namely inefficiency, lost memory, security risks, and the absence of continuous learning.

By introducing modular skills, intuition boosting, persistent memory, and built-in security scanning, everything-claude-code markedly improves an AI's performance on complex tasks and helps developers build more stable, production-grade agents. Its "research-first" development philosophy and token-consumption optimizations make responses faster and cheaper while defending against potential attack vectors.

The toolkit is aimed at software developers, AI researchers, and teams that want deeply customized AI workflows. Whether you are building in a large codebase or need AI help with security audits and automated testing, everything-claude-code provides solid low-level support. An open-source project that won an Anthropic hackathon award, it combines multi-language support with a rich set of practical hooks, letting the AI grow into a context-aware collaborator.

## ComfyUI (Comfy-Org/ComfyUI)

Stars: 108,111 · Difficulty: 2 · Last commit: 2026-04-08 · Tags: Dev Framework, Image, Agent

ComfyUI is a powerful, highly modular visual AI engine built for designing and executing complex Stable Diffusion image-generation pipelines. It drops the traditional code-writing approach in favor of an intuitive node-graph interface: users build personalized generation pipelines by wiring functional modules together.

This design neatly resolves how complex and inflexible advanced AI image workflows are to configure. Without a programming background, users can freely combine models, tune parameters, and preview results in real time, handling everything from basic text-to-image up to multi-step high-resolution refinement. ComfyUI is exceptionally compatible: it runs on Windows, macOS, and Linux, works across NVIDIA, AMD, Intel, and Apple Silicon hardware, and was among the first to support frontier models such as SDXL, Flux, and SD3.

Whether for researchers and developers exploring algorithmic potential or for designers and seasoned AI-art enthusiasts chasing maximal creative freedom, ComfyUI delivers. Its modular architecture lets the community keep adding capabilities, making it one of the most flexible and ecosystem-rich open-source diffusion-model tools for turning ideas into results. Saved node graphs can also be queued programmatically; see the sketch below.
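A minimal sketch of queueing a ComfyUI node graph over HTTP, assuming a local server on its default port 8188 and a graph exported from the UI via "Save (API Format)". The `workflow_api.json` filename is a placeholder, and the request shape follows the commonly documented `/prompt` endpoint.

```python
# Sketch: submit an exported ComfyUI workflow for execution.
import json
import requests

with open("workflow_api.json") as f:
    workflow = json.load(f)  # node-id -> {class_type, inputs} mapping

# POST /prompt enqueues the graph; ComfyUI executes it asynchronously.
resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
resp.raise_for_status()
print("queued:", resp.json())
```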
## markitdown (microsoft/markitdown)

Stars: 93,400 · Difficulty: 2 · Last commit: 2026-04-06 · Tags: Plugin, Dev Framework

MarkItDown is a lightweight Python tool from Microsoft's AutoGen team, designed to convert a wide range of files to Markdown efficiently. It can parse PDF, Word, Excel, PowerPoint, images (with OCR), audio (with speech transcription), HTML, and even YouTube links, accurately extracting key structural information such as headings, lists, tables, and links.

As AI applications spread, large language models (LLMs) handle text well but cannot directly read complex binary office documents. MarkItDown solves exactly that: it converts unstructured or semi-structured files into Markdown, a format models understand "natively" and that is highly token-efficient, making it an ideal bridge between local files and an AI analysis pipeline. It also provides an MCP (Model Context Protocol) server that integrates seamlessly with LLM applications such as Claude Desktop.

The tool is particularly suited to developers, data scientists, and AI researchers, especially anyone building retrieval-augmented generation (RAG) systems, running batch text analysis, or letting an AI assistant "read" local files directly. The output remains reasonably human-readable, but its core strength is being optimized for machine consumption.
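The Python API is compact. A minimal sketch following the repository's documented usage; the file name here is a placeholder.

```python
# Sketch: convert an office document to LLM-ready Markdown.
from markitdown import MarkItDown

md = MarkItDown()
result = md.convert("quarterly_report.xlsx")  # PDF, DOCX, PPTX, images, ... also work

# result.text_content holds the Markdown rendering, ready to feed to an LLM
# or a RAG indexing pipeline.
print(result.text_content[:500])
```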
## LLMs-from-scratch (rasbt/LLMs-from-scratch)

Stars: 90,106 · Difficulty: 3 · Last commit: 2026-04-06 · Tags: Language Model, Image, Agent, Dev Framework

LLMs-from-scratch is an open-source educational project based on PyTorch that guides you through building a ChatGPT-style large language model (LLM) from the ground up, step by step. It is the official code repository for the book of the same name, and it lays out a complete hands-on path covering model development, pretraining, and finetuning.

The project tackles the "black box" problem in learning about large models: many developers can call a ready-made model yet struggle to understand its internal architecture and training mechanics. By writing every line of core code yourself, you gain a thorough grasp of the Transformer architecture, attention mechanisms, and the other key principles, and with that a real understanding of how a large model "thinks". The project also includes code for loading large pretrained weights for finetuning, carrying the theory into practical use.

LLMs-from-scratch is ideal for AI developers, researchers, and computer-science students who want to reach below the API surface. Its distinctive strength is its step-by-step teaching design: complex systems engineering is broken into clear stages, with detailed diagrams and examples, so that building a small but fully functional large model becomes attainable. Whether you want to solidify your theoretical foundations or prepare to build larger models later, it is an excellent starting point.
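At the heart of what the book has you implement is attention. A compact scaled dot-product self-attention module in PyTorch, written here as an illustrative sketch rather than code taken from the repository:

```python
# Sketch: the core self-attention computation built from scratch.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)
        self.W_k = nn.Linear(d_in, d_out, bias=False)
        self.W_v = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_in)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        weights = torch.softmax(scores, dim=-1)  # each token attends over all tokens
        return weights @ v                       # (batch, seq_len, d_out)

x = torch.randn(2, 8, 32)
print(SelfAttention(32, 16)(x).shape)  # torch.Size([2, 8, 16])
```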
# hzwer/Awesome-Optical-Flow

> This is a list of awesome papers about optical flow and related work.

Awesome-Optical-Flow is a curated open-source list of papers and projects on optical flow estimation. Optical flow analyzes how pixels move across consecutive video frames so that a computer can "see" how objects are moving, a core foundation for tasks such as video analysis, autonomous driving, and action recognition. The list systematically traces the field's evolution from classical algorithms to frontier research, addressing a real pain point: faced with a huge literature, researchers struggle to quickly locate high-quality results, reproduce code, and track technical trends.

The list is well suited to computer-vision researchers, algorithm engineers, and university students. It collects recent papers from top venues including CVPR, ECCV, and NeurIPS in chronological order, thoughtfully attaching links to the official code repositories along with popularity indicators. Coverage spans supervised models and several other lines of work, including landmark architectures such as RAFT, FlowFormer, and GMFlow, charting the technical shift from recurrent networks to Transformers and on to global matching. Whether you are new to optical flow or an experienced developer looking for fresh ideas, you can efficiently pick up vetted, high-quality resources here and speed up research and development.
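To make the object of study concrete before diving into the list: dense optical flow is a per-pixel displacement field between two frames. A quick classical baseline with OpenCV's Farneback method (unrelated to any specific paper below; the frame paths are placeholders):

```python
# Sketch: dense optical flow between two consecutive frames.
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# flow[y, x] = (dx, dy): where the pixel at (x, y) in `prev` moved to in `curr`.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
print(flow.shape)  # (H, W, 2) displacement field
```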
# Awesome-Optical-Flow

This is a list of awesome articles about optical flow and related work. [Click here to read in full screen.](https://github.com/hzwer/Awesome-Optical-Flow/blob/main/README.md)

The table of contents is on the right side of the "README.md".

Recently, I wrote [A Survey on Future Frame Synthesis: Bridging Deterministic and Generative Approaches](https://arxiv.org/pdf/2401.14718); you are welcome to read it.

## Optical Flow

### Supervised Models
| Time | Paper | Repo |
| -------- | -------- | -------- |
|CVPR24|[MemFlow: Optical Flow Estimation and Prediction with Memory](https://dqiaole.github.io/MemFlow/)|[MemFlow](https://github.com/DQiaole/MemFlow) ![Github stars](https://img.shields.io/github/stars/DQiaole/MemFlow)|
|CVPR23|[DistractFlow: Improving Optical Flow Estimation via Realistic Distractions and Pseudo-Labeling](https://arxiv.org/abs/2303.14078)||
|CVPR23|[Masked Cost Volume Autoencoding for Pretraining Optical Flow Estimation](https://openaccess.thecvf.com/content/CVPR2023/html/Shi_FlowFormer_Masked_Cost_Volume_Autoencoding_for_Pretraining_Optical_Flow_Estimation_CVPR_2023_paper.html)|[FlowFormerPlusPlus](https://github.com/XiaoyuShi97/FlowFormerPlusPlus) ![Github stars](https://img.shields.io/github/stars/XiaoyuShi97/FlowFormerPlusPlus)|
|NeurIPS22|[SKFlow: Learning Optical Flow with Super Kernels](https://openreview.net/forum?id=v2es9YoukWO)|[SKFlow](https://github.com/littlespray/SKFlow) ![Github stars](https://img.shields.io/github/stars/littlespray/SKFlow)|
|ECCV22|[Disentangling architecture and training for optical flow](https://arxiv.org/abs/2203.10712)|[Autoflow](https://github.com/google-research/opticalflow-autoflow) ![Github stars](https://img.shields.io/github/stars/google-research/opticalflow-autoflow)|
|ECCV22|[FlowFormer: A Transformer Architecture for Optical Flow](https://arxiv.org/pdf/2203.16194.pdf)|[FlowFormer](https://github.com/drinkingcoder/FlowFormer-Official/) ![Github stars](https://img.shields.io/github/stars/drinkingcoder/FlowFormer-Official)|
|CVPR22|[Learning Optical Flow with Kernel Patch Attention](https://openaccess.thecvf.com/content/CVPR2022/papers/Luo_Learning_Optical_Flow_With_Kernel_Patch_Attention_CVPR_2022_paper.pdf)|[KPAFlow](https://github.com/megvii-research/KPAFlow) ![Github stars](https://img.shields.io/github/stars/megvii-research/KPAFlow)|
|CVPR22|[GMFlow: Learning Optical Flow via Global Matching](https://arxiv.org/abs/2111.13680)|[gmflow](https://github.com/haofeixu/gmflow) ![Github stars](https://img.shields.io/github/stars/haofeixu/gmflow)|
|CVPR22|[Deep Equilibrium Optical Flow Estimation](https://arxiv.org/pdf/2204.08442.pdf)|[deq-flow](https://github.com/locuslab/deq-flow) ![Github stars](https://img.shields.io/github/stars/locuslab/deq-flow)|
|ICCV21|[High-Resolution Optical Flow from 1D Attention and Correlation](https://arxiv.org/abs/2104.13918)|[flow1d](https://github.com/haofeixu/flow1d) ![Github stars](https://img.shields.io/github/stars/haofeixu/flow1d)|
|ICCV21|[Learning to Estimate Hidden Motions with Global Motion Aggregation](https://arxiv.org/abs/2104.02409)|[GMA](https://github.com/zacjiang/GMA) ![Github stars](https://img.shields.io/github/stars/zacjiang/GMA)|
|CVPR21|[Learning Optical Flow from a Few Matches](https://arxiv.org/abs/2104.02166)|[SCV](https://github.com/zacjiang/SCV) ![Github stars](https://img.shields.io/github/stars/zacjiang/SCV)|
|TIP21|[Detail Preserving Coarse-to-Fine Matching for Stereo Matching and Optical Flow](https://ieeexplore.ieee.org/document/9459444)||
|ECCV20|[RAFT: Recurrent All Pairs Field Transforms for Optical Flow](https://arxiv.org/pdf/2003.12039.pdf)|[RAFT](https://github.com/princeton-vl/RAFT) ![Github stars](https://img.shields.io/github/stars/princeton-vl/RAFT)|
|CVPR20|[MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask](https://arxiv.org/abs/2003.10955)|[MaskFlownet](https://github.com/microsoft/MaskFlownet) ![Github stars](https://img.shields.io/github/stars/microsoft/MaskFlownet)|
|CVPR20|[ScopeFlow: Dynamic Scene Scoping for Optical Flow](https://arxiv.org/abs/2002.10770)|[ScopeFlow](https://github.com/avirambh/ScopeFlow) ![Github stars](https://img.shields.io/github/stars/avirambh/ScopeFlow)|
|TPAMI20|[A Lightweight Optical Flow CNN - Revisiting Data Fidelity and Regularization](https://arxiv.org/abs/1903.07414)|[LiteFlowNet2](https://github.com/twhui/LiteFlowNet2) ![Github stars](https://img.shields.io/github/stars/twhui/LiteFlowNet2)|
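RAFT (ECCV20 above) is the most widely reused baseline in this table, and recent torchvision releases ship an implementation. A minimal inference sketch, assuming torchvision >= 0.12 and its documented optical-flow API; random tensors stand in for real frames, and spatial sizes must be divisible by 8:

```python
# Sketch: off-the-shelf RAFT inference via torchvision.
import torch
from torchvision.models.optical_flow import Raft_Large_Weights, raft_large

weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval()

img1 = torch.rand(1, 3, 256, 256)  # stand-ins for two consecutive video frames
img2 = torch.rand(1, 3, 256, 256)
img1, img2 = weights.transforms()(img1, img2)  # normalize as the weights expect

with torch.no_grad():
    flow_predictions = model(img1, img2)  # list of iteratively refined flow fields
print(flow_predictions[-1].shape)  # final estimate: (1, 2, 256, 256)
```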
### Multi-Frame Supervised Models
| Time | Paper | Repo |
| -------- | -------- | -------- |
|ECCV24|[Local All-Pair Correspondence for Point Tracking](https://arxiv.org/abs/2407.15420)||
|CVPR24|[FlowTrack: Revisiting Optical Flow for Long-Range Dense Tracking](https://openaccess.thecvf.com/content/CVPR2024/html/Cho_FlowTrack_Revisiting_Optical_Flow_for_Long-Range_Dense_Tracking_CVPR_2024_paper.html)||
|CVPR24|[Dense Optical Tracking: Connecting the Dots](https://arxiv.org/abs/2312.00786)|[dot](https://github.com/16lemoing/dot) ![Github stars](https://img.shields.io/github/stars/16lemoing/dot)|
|ICCV23|[Tracking Everything Everywhere All at Once](https://arxiv.org/abs/2306.05422)|[omnimotion](https://github.com/qianqianwang68/omnimotion) ![Github stars](https://img.shields.io/github/stars/qianqianwang68/omnimotion)|
|ICCV23|[AccFlow: Backward Accumulation for Long-Range Optical Flow](https://arxiv.org/pdf/2308.13133.pdf)|[AccFlow](https://github.com/mulns/AccFlow) ![Github stars](https://img.shields.io/github/stars/mulns/AccFlow)|
|ICCV23|[VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation](https://arxiv.org/abs/2303.08340)|[VideoFlow](https://github.com/XiaoyuShi97/VideoFlow) ![Github stars](https://img.shields.io/github/stars/XiaoyuShi97/VideoFlow)|
|ECCV22|[Particle Video Revisited: Tracking Through Occlusions Using Point Trajectories](https://arxiv.org/abs/2204.04153)|[PIPs](https://github.com/aharley/pips) ![Github stars](https://img.shields.io/github/stars/aharley/pips)|

### Semi-Supervised Models
| Time | Paper | Repo |
| -------- | -------- | -------- |
|ECCV22|[Semi-Supervised Learning of Optical Flow by Flow Supervisor](https://arxiv.org/abs/2207.10314)||

### Data Synthesis
| Time | Paper | Repo |
| -------- | -------- | -------- |
|ECCV22|[RealFlow: EM-based Realistic Optical Flow Dataset Generation from Videos]()|[RealFlow](https://github.com/megvii-research/RealFlow) ![Github stars](https://img.shields.io/github/stars/megvii-research/RealFlow)|
|CVPR21|[AutoFlow: Learning a Better Training Set for Optical Flow](https://arxiv.org/abs/2104.14544)|[autoflow](https://github.com/google-research/opticalflow-autoflow) ![Github stars](https://img.shields.io/github/stars/google-research/opticalflow-autoflow)|
|CVPR21|[Learning Optical Flow from Still Images](https://arxiv.org/abs/2104.03965)|[depthstillation](https://github.com/mattpoggi/depthstillation) ![Github stars](https://img.shields.io/github/stars/mattpoggi/depthstillation)|
|arXiv21.04|[Optical Flow Dataset Synthesis from Unpaired Images](https://arxiv.org/abs/2104.02615)||
### Unsupervised Models
| Time | Paper | Repo |
| -------- | -------- | -------- |
|ECCV22|[Optical Flow Training under Limited Label Budget via Active Learning](https://arxiv.org/pdf/2203.05053.pdf)|[optical-flow-active-learning-release](https://github.com/duke-vision/optical-flow-active-learning-release) ![Github stars](https://img.shields.io/github/stars/duke-vision/optical-flow-active-learning-release)|
|CVPR21|[SMURF: Self-Teaching Multi-Frame Unsupervised RAFT with Full-Image Warping](https://arxiv.org/abs/2105.07014)|[smurf](https://github.com/google-research/google-research/tree/master/smurf) (Google Research)|
|CVPR21|[UPFlow: Upsampling Pyramid for Unsupervised Optical Flow Learning](https://openaccess.thecvf.com/content/CVPR2021/papers/Luo_UPFlow_Upsampling_Pyramid_for_Unsupervised_Optical_Flow_Learning_CVPR_2021_paper.pdf)|[UPFlow_pytorch](https://github.com/coolbeam/UPFlow_pytorch) ![Github stars](https://img.shields.io/github/stars/coolbeam/UPFlow_pytorch)|
|TIP21|[OccInpFlow: Occlusion-Inpainting Optical Flow Estimation by Unsupervised Learning](https://arxiv.org/abs/2006.16637)|[OIFlow](https://github.com/coolbeam/OIFlow) ![Github stars](https://img.shields.io/github/stars/coolbeam/OIFlow)|
|ECCV20|[What Matters in Unsupervised Optical Flow](https://arxiv.org/abs/2006.04902)|[uflow](https://github.com/google-research/google-research/tree/master/uflow) (Google Research)|
|CVPR20|[Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation](https://arxiv.org/abs/2003.13045)|[ARFlow](https://github.com/lliuz/ARFlow) ![Github stars](https://img.shields.io/github/stars/lliuz/ARFlow)|
|CVPR20|[Flow2Stereo: Effective Self-Supervised Learning of Optical Flow and Stereo Matching](https://arxiv.org/abs/2004.02138)||

### Joint Learning
| Time | Paper | Repo |
| -------- | -------- | -------- |
|arXiv22.11|[Unifying Flow, Stereo and Depth Estimation](https://arxiv.org/abs/2211.05783)|[unimatch](https://github.com/autonomousvision/unimatch) ![Github stars](https://img.shields.io/github/stars/autonomousvision/unimatch)|
|CVPR21|[EffiScene: Efficient Per-Pixel Rigidity Inference for Unsupervised Joint Learning of Optical Flow, Depth, Camera Pose and Motion Segmentation](https://openaccess.thecvf.com/content/CVPR2021/html/Jiao_EffiScene_Efficient_Per-Pixel_Rigidity_Inference_for_Unsupervised_Joint_Learning_of_CVPR_2021_paper.html)||
|CVPR21|[Feature-Level Collaboration: Joint Unsupervised Learning of Optical Flow, Stereo Depth and Camera Motion](https://openaccess.thecvf.com/content/CVPR2021/html/Chi_Feature-Level_Collaboration_Joint_Unsupervised_Learning_of_Optical_Flow_Stereo_Depth_CVPR_2021_paper.html)||
### Special Scene
| Time | Paper | Repo |
| -------- | -------- | -------- |
|CVPR23|[Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow](https://arxiv.org/abs/2303.07564)|[UCDA-Flow](https://github.com/hyzhouboy/UCDA-Flow) ![Github stars](https://img.shields.io/github/stars/hyzhouboy/UCDA-Flow)|
|ECCV22|[Deep 360° Optical Flow Estimation Based on Multi-Projection Fusion](https://arxiv.org/abs/2208.00776)||
|AAAI21|[Optical flow estimation from a single motion-blurred image](https://www.aaai.org/AAAI21Papers/AAAI-3339.ArgawD.pdf)||
|CVPR20|[Optical Flow in Dense Foggy Scenes using Semi-Supervised Learning](https://arxiv.org/abs/2004.01905)||
|CVPR20|[Optical Flow in the Dark](https://openaccess.thecvf.com/content_CVPR_2020/html/Zheng_Optical_Flow_in_the_Dark_CVPR_2020_paper.html)|[Optical-Flow-in-the-Dark](https://github.com/mf-zhang/Optical-Flow-in-the-Dark) ![Github stars](https://img.shields.io/github/stars/mf-zhang/Optical-Flow-in-the-Dark)|

### Special Device

**Event Camera** [event-based_vision_resources](https://github.com/uzh-rpg/event-based_vision_resources#optical-flow-estimation) ![Github stars](https://img.shields.io/github/stars/uzh-rpg/event-based_vision_resources)

| Time | Paper | Repo |
| -------- | -------- | -------- |
|arXiv23.03|[Learning Optical Flow from Event Camera with Rendered Dataset](https://arxiv.org/abs/2303.11011)||
|ECCV22|[Secrets of Event-Based Optical Flow](https://arxiv.org/abs/2207.10022)|[event_based_optical_flow](https://github.com/tub-rip/event_based_optical_flow) ![Github stars](https://img.shields.io/github/stars/tub-rip/event_based_optical_flow)|
|ICCV21|[GyroFlow: Gyroscope-Guided Unsupervised Optical Flow Learning](https://arxiv.org/abs/2103.13725)|[GyroFlow](https://github.com/megvii-research/GyroFlow) ![Github stars](https://img.shields.io/github/stars/megvii-research/GyroFlow)|
## Scene Flow
| Time | Paper | Repo |
| -------- | -------- | -------- |
|CVPR21|[RAFT-3D: Scene Flow Using Rigid-Motion Embeddings](https://arxiv.org/pdf/2012.00726.pdf)||
|CVPR21|[Just Go With the Flow: Self-Supervised Scene Flow Estimation](https://arxiv.org/pdf/1912.00497.pdf)|[Just-Go-with-the-Flow-Self-Supervised-Scene-Flow-Estimation](https://github.com/HimangiM/Just-Go-with-the-Flow-Self-Supervised-Scene-Flow-Estimation) ![Github stars](https://img.shields.io/github/stars/HimangiM/Just-Go-with-the-Flow-Self-Supervised-Scene-Flow-Estimation)|
|CVPR21|[Learning to Segment Rigid Motions from Two Frames](https://arxiv.org/abs/2101.03694)|[rigidmask](https://github.com/gengshan-y/rigidmask) ![Github stars](https://img.shields.io/github/stars/gengshan-y/rigidmask)|
|CVPR20|[Upgrading Optical Flow to 3D Scene Flow through Optical Expansion](https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_Upgrading_Optical_Flow_to_3D_Scene_Flow_Through_Optical_Expansion_CVPR_2020_paper.html)|[expansion](https://github.com/gengshan-y/expansion) ![Github stars](https://img.shields.io/github/stars/gengshan-y/expansion)|
|CVPR20|[Self-Supervised Monocular Scene Flow Estimation](https://arxiv.org/abs/2004.04143)|[self-mono-sf](https://github.com/visinf/self-mono-sf) ![Github stars](https://img.shields.io/github/stars/visinf/self-mono-sf)|

## Applications

### Video Synthesis/Generation
| Time | Paper | Repo |
| -------- | -------- | -------- |
|ECCV24|[Clearer Frames, Anytime: Resolving Velocity Ambiguity in Video Frame Interpolation](https://jianwang-cmu.github.io/23VFI/04908.pdf)|[InterpAny-Clearer](https://github.com/zzh-tech/InterpAny-Clearer) ![Github stars](https://img.shields.io/github/stars/zzh-tech/InterpAny-Clearer)|
|arXiv23.11|[MoVideo: Motion-Aware Video Generation with Diffusion Models](https://arxiv.org/abs/2311.11325)||
|CVPR24|[FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis](https://arxiv.org/pdf/2312.17681.pdf)||
|WACV24|[Scale-Adaptive Feature Aggregation for Efficient Space-Time Video Super-Resolution](https://arxiv.org/abs/2310.17294)|[SAFA](https://github.com/megvii-research/WACV2024-SAFA) ![Github stars](https://img.shields.io/github/stars/megvii-research/WACV2024-SAFA)|
|CVPR23|[A Dynamic Multi-Scale Voxel Flow Network for Video Prediction](https://arxiv.org/abs/2303.09875)|[DMVFN](https://github.com/megvii-research/CVPR2023-DMVFN) ![Github stars](https://img.shields.io/github/stars/megvii-research/CVPR2023-DMVFN)|
|CVPR23|[Conditional Image-to-Video Generation with Latent Flow Diffusion Models](https://openaccess.thecvf.com/content/CVPR2023/papers/Ni_Conditional_Image-to-Video_Generation_With_Latent_Flow_Diffusion_Models_CVPR_2023_paper.pdf)|[LFDM](https://github.com/nihaomiao/CVPR23_LFDM) ![Github stars](https://img.shields.io/github/stars/nihaomiao/CVPR23_LFDM)|
|CVPR23|[A Unified Pyramid Recurrent Network for Video Frame Interpolation](https://arxiv.org/abs/2211.03456)|[UPR-Net](https://github.com/srcn-ivl/UPR-Net) ![Github stars](https://img.shields.io/github/stars/srcn-ivl/UPR-Net)|
|CVPR23|[Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation](https://arxiv.org/abs/2303.00440)|[EMA-VFI](https://github.com/MCG-NJU/EMA-VFI) ![Github stars](https://img.shields.io/github/stars/MCG-NJU/EMA-VFI)|
|WACV23|[Frame Interpolation for Dynamic Scenes with Implicit Flow Encoding](https://openaccess.thecvf.com/content/WACV2023/papers/Figueiredo_Frame_Interpolation_for_Dynamic_Scenes_With_Implicit_Flow_Encoding_WACV_2023_paper.pdf)|[frameintIFE](https://github.com/pedrovfigueiredo/frameintIFE) ![Github stars](https://img.shields.io/github/stars/pedrovfigueiredo/frameintIFE)|
|ACMMM22|[Neighbor correspondence matching for flow-based video frame synthesis](https://arxiv.org/abs/2207.06763)||
|ECCV22|[Improving the Perceptual Quality of 2D Animation Interpolation](https://arxiv.org/abs/2011.06294)|[eisai](https://github.com/ShuhongChen/eisai-anime-interpolator) ![Github stars](https://img.shields.io/github/stars/ShuhongChen/eisai-anime-interpolator)|
|ECCV22|[Real-Time Intermediate Flow Estimation for Video Frame Interpolation](https://arxiv.org/abs/2011.06294)|[RIFE](https://github.com/hzwer/ECCV2022-RIFE) ![Github stars](https://img.shields.io/github/stars/hzwer/ECCV2022-RIFE)|
|CVPR22|[VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution](https://arxiv.org/pdf/2206.04647.pdf)|[VideoINR](https://github.com/Picsart-AI-Research/VideoINR-Continuous-Space-Time-Super-Resolution) ![Github stars](https://img.shields.io/github/stars/Picsart-AI-Research/VideoINR-Continuous-Space-Time-Super-Resolution)|
|CVPR22|[IFRNet: Intermediate Feature Refine Network for Efficient Frame Interpolation](https://arxiv.org/pdf/2205.14620.pdf)|[IFRNet](https://github.com/ltkong218/IFRNet) ![Github stars](https://img.shields.io/github/stars/ltkong218/IFRNet)|
|TOG21|[Neural Frame Interpolation for Rendered Content](https://dl.acm.org/doi/abs/10.1145/3478513.3480553)||
|CVPR21|[Deep Animation Video Interpolation in the Wild](https://arxiv.org/abs/2104.02495)|[AnimeInterp](https://github.com/lisiyao21/AnimeInterp) ![Github stars](https://img.shields.io/github/stars/lisiyao21/AnimeInterp)|
|CVPR20|[Softmax Splatting for Video Frame Interpolation](https://arxiv.org/abs/2003.05534)|[softmax-splatting](https://github.com/sniklaus/softmax-splatting) ![Github stars](https://img.shields.io/github/stars/sniklaus/softmax-splatting)|
|CVPR20|[Adaptive Collaboration of Flows for Video Frame Interpolation](https://arxiv.org/abs/1907.10244)|[AdaCoF-pytorch](https://github.com/HyeongminLEE/AdaCoF-pytorch) ![Github stars](https://img.shields.io/github/stars/HyeongminLEE/AdaCoF-pytorch)|
|CVPR20|[FeatureFlow: Robust Video Interpolation via Structure-to-Texture Generation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Gui_FeatureFlow_Robust_Video_Interpolation_via_Structure-to-Texture_Generation_CVPR_2020_paper.pdf)|[FeatureFlow](https://github.com/CM-BF/FeatureFlow) ![Github stars](https://img.shields.io/github/stars/CM-BF/FeatureFlow)|
### Video Inpainting
| Time | Paper | Repo |
| -------- | -------- | -------- |
|ECCV22|[Flow-Guided Transformer for Video Inpainting](https://arxiv.org/abs/2208.06768)|[FGT](https://github.com/hitachinsk/FGT) ![Github stars](https://img.shields.io/github/stars/hitachinsk/FGT)|
|CVPR22|[Inertia-Guided Flow Completion and Style Fusion for Video Inpainting](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_Inertia-Guided_Flow_Completion_and_Style_Fusion_for_Video_Inpainting_CVPR_2022_paper.pdf)|[isvi](https://github.com/hitachinsk/isvi) ![Github stars](https://img.shields.io/github/stars/hitachinsk/isvi)|

### Video Stabilization
| Time | Paper | Repo |
| -------- | -------- | -------- |
|CVPR20|[Learning Video Stabilization Using Optical Flow](https://cseweb.ucsd.edu/~ravir/jiyang_cvpr20.pdf)|[jiyang.fun](https://drive.google.com/file/d/1wQJYFd8TMbCRzhmFfDyBj7oHAGfyr1j6/view)|

### Low Level Vision
| Time | Paper | Repo |
| -------- | -------- | -------- |
|ICCV21|[Deep Reparametrization of Multi-Frame Super-Resolution and Denoising](https://arxiv.org/abs/2108.08286)|[deep-rep](https://github.com/goutamgmb/deep-rep) ![Github stars](https://img.shields.io/github/stars/goutamgmb/deep-rep)|
|CVPR21|[Deep Burst Super-Resolution](https://arxiv.org/abs/2101.10997)|[deep-burst-sr](https://github.com/goutamgmb/deep-burst-sr) ![Github stars](https://img.shields.io/github/stars/goutamgmb/deep-burst-sr)|
|CVPR20|[Efficient Dynamic Scene Deblurring Using Spatially Variant Deconvolution Network With Optical Flow Guided Training](https://openaccess.thecvf.com/content_CVPR_2020/html/Yuan_Efficient_Dynamic_Scene_Deblurring_Using_Spatially_Variant_Deconvolution_Network_With_CVPR_2020_paper.html)||
|TIP20|[Deep video super-resolution using HR optical flow estimation](https://arxiv.org/abs/2001.02129)|[SOF-VSR](https://github.com/The-Learning-And-Vision-Atelier-LAVA/SOF-VSR) ![Github stars](https://img.shields.io/github/stars/The-Learning-And-Vision-Atelier-LAVA/SOF-VSR)|

### Stereo and SLAM
| Time | Paper | Repo |
| -------- | -------- | -------- |
|3DV21|[RAFT-Stereo: Multilevel Recurrent Field Transforms for Stereo Matching](https://arxiv.org/pdf/2109.07547.pdf)|[RAFT-Stereo](https://github.com/princeton-vl/RAFT-Stereo) ![Github stars](https://img.shields.io/github/stars/princeton-vl/RAFT-Stereo)|
|CVPR20|[VOLDOR: Visual Odometry From Log-Logistic Dense Optical Flow Residuals](https://openaccess.thecvf.com/content_CVPR_2020/html/Min_VOLDOR_Visual_Odometry_From_Log-Logistic_Dense_Optical_Flow_Residuals_CVPR_2020_paper.html)|[VOLDOR](https://github.com/htkseason/VOLDOR) ![Github stars](https://img.shields.io/github/stars/htkseason/VOLDOR)|
## Before 2020

### Classical Estimation Methods
| Time | Paper | Repo |
| -------- | -------- | -------- |
|IJCAI1981|[An iterative image registration technique with an application to stereo vision](http://citeseer.ist.psu.edu/viewdoc/download;jsessionid=C41563DCDDC44CB0E13D6D64D89FF3FD?doi=10.1.1.421.4619&rep=rep1&type=pdf)||
|AI1981|[Determining optical flow](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.562&rep=rep1&type=pdf)||
|TPAMI10|[Motion Detail Preserving Optical Flow Estimation](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.221.896&rep=rep1&type=pdf)||
|CVPR10|[Secrets of Optical Flow Estimation and Their Principles](https://users.soe.ucsc.edu/~pang/200/f18/papers/2018/05539939.pdf)||
|ICCV13|[DeepFlow: Large Displacement Optical Flow with Deep Matching](https://openaccess.thecvf.com/content_iccv_2013/papers/Weinzaepfel_DeepFlow_Large_Displacement_2013_ICCV_paper.pdf)|[Project](https://thoth.inrialpes.fr/src/deepflow/)|
|ECCV14|[Optical Flow Estimation with Channel Constancy](https://link.springer.com/content/pdf/10.1007/978-3-319-10590-1_28.pdf)||
|CVPR17|[S2F: Slow-To-Fast Interpolator Flow](https://openaccess.thecvf.com/content_cvpr_2017/papers/Yang_S2F_Slow-To-Fast_Interpolator_CVPR_2017_paper.pdf)||

### Others

| Time | Paper | Repo |
| -------- | -------- | -------- |
|NeurIPS19|[Volumetric Correspondence Networks for Optical Flow](https://papers.nips.cc/paper/2019/hash/bbf94b34eb32268ada57a3be5062fe7d-Abstract.html)|[VCN](https://github.com/gengshan-y/VCN) ![Github stars](https://img.shields.io/github/stars/gengshan-y/VCN)|
|CVPR19|[Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation](https://arxiv.org/pdf/1904.05290.pdf)|[irr](https://github.com/visinf/irr) ![Github stars](https://img.shields.io/github/stars/visinf/irr)|
|CVPR18|[PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume](https://arxiv.org/abs/1709.02371)|[PWC-Net](https://github.com/NVlabs/PWC-Net) ![Github stars](https://img.shields.io/github/stars/NVlabs/PWC-Net) <br> [pytorch-pwc](https://github.com/sniklaus/pytorch-pwc) ![Github stars](https://img.shields.io/github/stars/sniklaus/pytorch-pwc)|
|CVPR18|[LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation](https://arxiv.org/abs/1805.07036)|[LiteFlowNet](https://github.com/twhui/LiteFlowNet) ![Github stars](https://img.shields.io/github/stars/twhui/LiteFlowNet) <br> [pytorch-liteflownet](https://github.com/sniklaus/pytorch-liteflownet) ![Github stars](https://img.shields.io/github/stars/sniklaus/pytorch-liteflownet)|
|CVPR17|[FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks](https://arxiv.org/abs/1612.01925)|[flownet2-pytorch](https://github.com/NVIDIA/flownet2-pytorch) ![Github stars](https://img.shields.io/github/stars/NVIDIA/flownet2-pytorch) <br> [flownet2](https://github.com/lmb-freiburg/flownet2) ![Github stars](https://img.shields.io/github/stars/lmb-freiburg/flownet2) <br> [flownet2-tf](https://github.com/sampepose/flownet2-tf) ![Github stars](https://img.shields.io/github/stars/sampepose/flownet2-tf)|
|CVPR17|[Optical Flow Estimation using a Spatial Pyramid Network](https://arxiv.org/abs/1611.00850)|[spynet](https://github.com/anuragranj/spynet) ![Github stars](https://img.shields.io/github/stars/anuragranj/spynet) <br> [pytorch-spynet](https://github.com/sniklaus/pytorch-spynet) ![Github stars](https://img.shields.io/github/stars/sniklaus/pytorch-spynet)|
|ICCV15|[FlowNet: Learning Optical Flow with Convolutional Networks](https://arxiv.org/abs/1504.06852)|[FlowNetPytorch](https://github.com/ClementPinard/FlowNetPytorch) ![Github stars](https://img.shields.io/github/stars/ClementPinard/FlowNetPytorch)|
|AAAI19|[DDFlow: Learning Optical Flow with Unlabeled Data Distillation](https://arxiv.org/abs/1902.09145)|[DDFlow](https://github.com/ppliuboy/DDFlow) ![Github stars](https://img.shields.io/github/stars/ppliuboy/DDFlow)|
|CVPR19|[SelFlow: Self-Supervised Learning of Optical Flow](https://arxiv.org/abs/1904.09117)|[SelFlow](https://github.com/ppliuboy/SelFlow) ![Github stars](https://img.shields.io/github/stars/ppliuboy/SelFlow)|
|CVPR19|[Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes](https://arxiv.org/abs/1904.03848)|[EPIFlow](https://github.com/yiranzhong/EPIflow) ![Github stars](https://img.shields.io/github/stars/yiranzhong/EPIflow)|
|CVPR18|[Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose](https://arxiv.org/abs/1803.02276)|[GeoNet](https://github.com/yzcjtr/GeoNet) ![Github stars](https://img.shields.io/github/stars/yzcjtr/GeoNet)|
|ICCV19|[RainFlow: Optical Flow under Rain Streaks and Rain Veiling Effect](https://openaccess.thecvf.com/content_ICCV_2019/html/Li_RainFlow_Optical_Flow_Under_Rain_Streaks_and_Rain_Veiling_Effect_ICCV_2019_paper.html)||
|CVPR18|[Robust Optical Flow Estimation in Rainy Scenes](https://arxiv.org/abs/1704.05239)||
|NeurIPS19|[Quadratic Video Interpolation](https://arxiv.org/abs/1911.00627)||
|CVPR19|[Depth-Aware Video Frame Interpolation](https://openaccess.thecvf.com/content_CVPR_2019/papers/Bao_Depth-Aware_Video_Frame_Interpolation_CVPR_2019_paper.pdf)|[DAIN](https://github.com/baowenbo/DAIN) ![Github stars](https://img.shields.io/github/stars/baowenbo/DAIN)|
|CVPR18|[Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation](https://arxiv.org/abs/1712.00080)|[Super-SloMo](https://github.com/avinashpaliwal/Super-SloMo) ![Github stars](https://img.shields.io/github/stars/avinashpaliwal/Super-SloMo)|
|ICCV17|[Video Frame Synthesis using Deep Voxel Flow](https://arxiv.org/abs/1702.02463)|[voxel-flow](https://github.com/liuziwei7/voxel-flow) ![Github stars](https://img.shields.io/github/stars/liuziwei7/voxel-flow) <br> [pytorch-voxel-flow](https://github.com/lxx1991/pytorch-voxel-flow) ![Github stars](https://img.shields.io/github/stars/lxx1991/pytorch-voxel-flow)|
|CVPR19|[DVC: An End-to-end Deep Video Compression Framework](https://arxiv.org/abs/1812.00101)|[PyTorchVideoCompression](https://github.com/ZhihaoHu/PyTorchVideoCompression) ![Github stars](https://img.shields.io/github/stars/ZhihaoHu/PyTorchVideoCompression)|
|ICCV17|[SegFlow: Joint Learning for Video Object Segmentation and Optical Flow](https://arxiv.org/abs/1709.06750)|[SegFlow](https://github.com/JingchunCheng/SegFlow) ![Github stars](https://img.shields.io/github/stars/JingchunCheng/SegFlow)|
|CVPR18|[End-to-end Flow Correlation Tracking with Spatial-temporal Attention](https://arxiv.org/abs/1711.01124)||
|CVPR18|[Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition](https://arxiv.org/abs/1711.11152)|[Optical-Flow-Guided-Feature](https://github.com/kevin-ssy/Optical-Flow-Guided-Feature) ![Github stars](https://img.shields.io/github/stars/kevin-ssy/Optical-Flow-Guided-Feature)|
|GCPR18|[On the Integration of Optical Flow and Action Recognition](https://arxiv.org/abs/1712.08416)||
|CVPR14|[Spatially Smooth Optical Flow for Video Stabilization](http://www.liushuaicheng.org/CVPR2014/SteadyFlow.pdf)||
通过真实干扰和伪标签改进光流估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14078)\n|CVPR23|[用于预训练光流估计的掩码代价体积自编码](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FShi_FlowFormer_Masked_Cost_Volume_Autoencoding_for_Pretraining_Optical_Flow_Estimation_CVPR_2023_paper.html)|[FlowFormerPlusPlus](https:\u002F\u002Fgithub.com\u002FXiaoyuShi97\u002FFlowFormerPlusPlus) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FXiaoyuShi97\u002FFlowFormerPlusPlus)|\n|NeurIPS22|[SKFlow: 使用超级核学习光流](https:\u002F\u002Fopenreview.net\u002Fforum?id=v2es9YoukWO)|[SKFlow](https:\u002F\u002Fgithub.com\u002Flittlespray\u002FSKFlow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Flittlespray\u002FSKFlow)|\n|ECCV22|[解耦光流架构与训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.10712)|[Autoflow](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fopticalflow-autoflow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgoogle-research\u002Fopticalflow-autoflow)|\n|ECCV22|[FlowFormer: 一种用于光流的Transformer架构](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.16194.pdf)|[FlowFormer](https:\u002F\u002Fgithub.com\u002Fdrinkingcoder\u002FFlowFormer-Official\u002F) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fdrinkingcoder\u002FFlowFormer-Official)|\n|CVPR22|[利用核块注意力学习光流](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FLuo_Learning_Optical_Flow_With_Kernel_Patch_Attention_CVPR_2022_paper.pdf)|[KPAFlow](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FKPAFlow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmegvii-research\u002FKPAFlow)|\n|CVPR22|[GMFlow: 通过全局匹配学习光流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.13680)|[gmflow](https:\u002F\u002Fgithub.com\u002Fhaofeixu\u002Fgmflow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fhaofeixu\u002Fgmflow)|\n|CVPR22|[深度均衡光流估计](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.08442.pdf)|[deq-flow](https:\u002F\u002Fgithub.com\u002Flocuslab\u002Fdeq-flow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Flocuslab\u002Fdeq-flow)|\n|ICCV21|[基于1D注意力与相关性的高分辨率光流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.13918)|[flow1d](https:\u002F\u002Fgithub.com\u002Fhaofeixu\u002Fflow1d)![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fhaofeixu\u002Fflow1d)|\n|ICCV21|[通过全局运动聚合学习隐藏运动](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.02409)|[GMA](https:\u002F\u002Fgithub.com\u002Fzacjiang\u002FGMA) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fzacjiang\u002FGMA)|\n|CVPR21|[从少量匹配中学习光流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.02166)|[SCV](https:\u002F\u002Fgithub.com\u002Fzacjiang\u002FSCV) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fzacjiang\u002FSCV)|\n|TIP21|[用于立体匹配和光流的细节保留型粗细匹配](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9459444)\n|ECCV20|[RAFT: 用于光流的循环式全对场变换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.12039.pdf)|[RAFT](https:\u002F\u002Fgithub.com\u002Fprinceton-vl\u002FRAFT) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fprinceton-vl\u002FRAFT)|\n|CVPR20|[MaskFlownet: 带可学习遮挡掩码的非对称特征匹配](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.10955)|[MaskFlownet](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FMaskFlownet) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmicrosoft\u002FMaskFlownet)|\n|CVPR20|[ScopeFlow: 
针对光流的动态场景范围划分](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.10770)|[ScopeFlow](https:\u002F\u002Fgithub.com\u002Favirambh\u002FScopeFlow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Favirambh\u002FScopeFlow)|\n|TPAMI20|[一种轻量级光流CNN——重新审视数据保真度与正则化](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.07414)|[LiteFlowNet2](https:\u002F\u002Fgithub.com\u002Ftwhui\u002FLiteFlowNet2) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ftwhui\u002FLiteFlowNet2)\n\n### 多帧监督学习模型\n| 时间 | 论文 | 代码库 |\n| -------- | -------- | -------- |\n|ECCV24|[用于点跟踪的局部全对对应关系](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15420)\n|CVPR24|[FlowTrack: 为长距离密集跟踪重新审视光流](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FCho_FlowTrack_Revisiting_Optical_Flow_for_Long-Range_Dense_Tracking_CVPR_2024_paper.html)\n|CVPR24|[密集光流跟踪：连接各点](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00786)|[dot](https:\u002F\u002Fgithub.com\u002F16lemoing\u002Fdot) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002F16lemoing\u002Fdot)|\n|ICCV23|[同时、处处、追踪一切](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05422)|[omnimotion](https:\u002F\u002Fgithub.com\u002Fqianqianwang68\u002Fomnimotion) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fqianqianwang68\u002Fomnimotion)|\n|ICCV23|[AccFlow: 用于长距离光流的反向累积](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.13133.pdf)|[AccFlow](https:\u002F\u002Fgithub.com\u002Fmulns\u002FAccFlow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmulns\u002FAccFlow)|\n|ICCV23|[VideoFlow: 利用时间线索进行多帧光流估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08340)|[VideoFlow](https:\u002F\u002Fgithub.com\u002FXiaoyuShi97\u002FVideoFlow) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FXiaoyuShi97\u002FVideoFlow)|\n|ECCV22|[重访粒子视频：利用点轨迹穿越遮挡进行跟踪](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04153)|[PIPs](https:\u002F\u002Fgithub.com\u002Faharley\u002Fpips) ![GitHub星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Faharley\u002Fpips)|\n\n\n### 半监督学习模型\n| 时间 | 论文 | 代码库 |\n| -------- | -------- | -------- |\n|ECCV22|[由光流监督器实现的光流半监督学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10314)\n\n### 数据合成\n| 时间 | 论文 | 仓库 |\n| -------- | -------- | -------- |\n|ECCV22|[RealFlow: 基于EM的从视频生成逼真光流数据集]()|[RealFlow](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FRealFlow) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmegvii-research\u002FRealFlow)\n|CVPR21|[AutoFlow: 学习更好的光流训练集](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.14544)|[autoflow](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fopticalflow-autoflow) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgoogle-research\u002Fopticalflow-autoflow)\n|CVPR21|[从静态图像中学习光流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.03965)|[depthstillation](https:\u002F\u002Fgithub.com\u002Fmattpoggi\u002Fdepthstillation) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmattpoggi\u002Fdepthstillation)\n|arXiv21.04|[从无配对图像合成光流数据集](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.02615)\n\n### 无监督模型\n| 时间 | 论文 | 仓库 |\n| -------- | -------- | -------- |\n|ECCV22|[通过主动学习在有限标注预算下进行光流训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.05053.pdf)|[optical-flow-active-learning-release](https:\u002F\u002Fgithub.com\u002Fduke-vision\u002Foptical-flow-active-learning-release) 
![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fduke-vision\u002Foptical-flow-active-learning-release)\n|CVPR21|[SMURF: 自我教学的多帧无监督RAFT，带全图变形](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.07014)|[smurf](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fsmurf) GoogleResearch\n|CVPR21|[UPFlow: 用于无监督光流学习的上采样金字塔](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLuo_UPFlow_Upsampling_Pyramid_for_Unsupervised_Optical_Flow_Learning_CVPR_2021_paper.pdf)|[UPFlow_pytorch](https:\u002F\u002Fgithub.com\u002Fcoolbeam\u002FUPFlow_pytorch) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcoolbeam\u002FUPFlow_pytorch)\n|TIP21|[OccInpFlow: 基于无监督学习的遮挡修复光流估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16637)|[depthstillation](https:\u002F\u002Fgithub.com\u002Fcoolbeam\u002FOIFlow) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fcoolbeam\u002FOIFlow)\n|ECCV20|[无监督光流中什么最重要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.04902)|[uflow](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fuflow) GoogleResearch\n|CVPR20|[类比学习：通过变换获得可靠的监督信号以进行无监督光流估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.13045)|[ARFlow](https:\u002F\u002Fgithub.com\u002Flliuz\u002FARFlow) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Flliuz\u002FARFlow)\n|CVPR20|[Flow2Stereo: 光流与立体匹配的有效自监督学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.02138)\n\n### 联合学习\n| 时间 | 论文 | 仓库 |\n| -------- | -------- | -------- |\n|arXiv21.11|[统一光流、立体和深度估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.05783)|[unimatch](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Funimatch) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fautonomousvision\u002Funimatch)|\n|CVPR21|[EffiScene: 针对无监督联合学习光流、深度、相机姿态和运动分割的高效逐像素刚性推理](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FJiao_EffiScene_Efficient_Per-Pixel_Rigidity_Inference_for_Unsupervised_Joint_Learning_of_CVPR_2021_paper.html)\n|CVPR21|[特征级协作：光流、立体深度和相机运动的联合无监督学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fhtml\u002FChi_Feature-Level_Collaboration_Joint_Unsupervised_Learning_of_Optical_Flow_Stereo_Depth_CVPR_2021_paper.html)\n\n### 特殊场景\n| 时间 | 论文 | 仓库 |\n| -------- | -------- | -------- |\n|CVPR23|[针对雾天场景光流的无监督累积域适应](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07564) |[UCDA-Flow](https:\u002F\u002Fgithub.com\u002Fhyzhouboy\u002FUCDA-Flow) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fhyzhouboy\u002FUCDA-Flow)\n|ECCV22|[基于多投影融合的深度360°光流估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.00776)\n|AAAI21|[从单张运动模糊图像中估计光流](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FAAAI-3339.ArgawD.pdf)|\n|CVPR20|[使用半监督学习处理浓雾场景中的光流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.01905)\n|CVPR20|[黑暗中的光流](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FZheng_Optical_Flow_in_the_Dark_CVPR_2020_paper.html)|[Optical-Flow-in-the-Dark](https:\u002F\u002Fgithub.com\u002Fmf-zhang\u002FOptical-Flow-in-the-Dark) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmf-zhang\u002FOptical-Flow-in-the-Dark)\n\n### 特殊设备\n\n**事件相机** [event-based_vision_resources](https:\u002F\u002Fgithub.com\u002Fuzh-rpg\u002Fevent-based_vision_resources#optical-flow-estimation) 
![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fuzh-rpg\u002Fevent-based_vision_resources#optical-flow-estimation)\n\n| 时间 | 论文 | 仓库 |\n| -------- | -------- | -------- |\n|ArXiv23.03|[利用渲染数据集从事件相机学习光流](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11011)\n|ECCV22|[基于事件的光流的秘密](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10022)|[event_based_optical_flow](https:\u002F\u002Fgithub.com\u002Ftub-rip\u002Fevent_based_optical_flow) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Ftub-rip\u002Fevent_based_optical_flow)\n|ICCV21|[GyroFlow: 陀螺仪引导的无监督光流学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13725)|[GyroFlow](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FGyroFlow) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmegvii-research\u002FGyroFlow)\n\n## 场景流\n| 时间 | 论文 | 仓库 |\n| -------- | -------- | -------- |\n|CVPR21|[RAFT-3D: 使用刚体运动嵌入的场景流](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.00726.pdf)\n|CVPR21|[顺其自然：自监督场景流估计](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.00497.pdf)|[Just-Go-with-the-Flow-Self-Supervised-Scene-Flow-Estimation](https:\u002F\u002Fgithub.com\u002FHimangiM\u002FJust-Go-with-the-Flow-Self-Supervised-Scene-Flow-Estimation) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FHimangiM\u002FJust-Go-with-the-Flow-Self-Supervised-Scene-Flow-Estimation)\n|CVPR21|[从两帧中学习分割刚体运动](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03694)|[rigidmask](https:\u002F\u002Fgithub.com\u002Fgengshan-y\u002Frigidmask)![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgengshan-y\u002Frigidmask)\n|CVPR20|[通过光学膨胀将光流升级为3D场景流](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fhtml\u002FYang_Upgrading_Optical_Flow_to_3D_Scene_Flow_Through_Optical_Expansion_CVPR_2020_paper.html)|[expansion](https:\u002F\u002Fgithub.com\u002Fgengshan-y\u002Fexpansion) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgengshan-y\u002Fexpansion)\n|CVPR20|[自监督单目场景流估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.04143)|[self-mono-sf](https:\u002F\u002Fgithub.com\u002Fvisinf\u002Fself-mono-sf) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fvisinf\u002Fself-mono-sf)\n\n## 应用\n\n### 视频合成\u002F生成\n| 时间 | 论文 | 仓库 |\n| -------- | -------- | -------- \n|ECCV24|[更清晰的帧，随时可用：解决视频帧插值中的速度模糊问题](https:\u002F\u002Fjianwang-cmu.github.io\u002F23VFI\u002F04908.pdf)|[InterpAny-Clearer](https:\u002F\u002Fgithub.com\u002Fzzh-tech\u002FInterpAny-Clearer) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fzzh-tech\u002FInterpAny-Clearer)\n|arXiv23.11|[MoVideo：基于扩散模型的运动感知视频生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11325)\n|CVPR24|[FlowVid：驯服不完美的光流以实现一致的视频到视频合成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.17681.pdf)\n|WACV24|[用于高效时空视频超分辨率的尺度自适应特征聚合](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.17294)|[SAFA](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FWACV2024-SAFA) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmegvii-research\u002FWACV2024-SAFA)\n|CVPR23|[用于视频预测的动态多尺度体素流网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09875)|[DMVFN](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FCVPR2023-DMVFN) 
![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmegvii-research\u002FCVPR2023-DMVFN)\n|CVPR23|[基于潜在流扩散模型的条件图像到视频生成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FNi_Conditional_Image-to-Video_Generation_With_Latent_Flow_Diffusion_Models_CVPR_2023_paper.pdf)|[LFDM](https:\u002F\u002Fgithub.com\u002Fnihaomiao\u002FCVPR23_LFDM) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fnihaomiao\u002FCVPR23_LFDM)\n|CVPR23|[用于视频帧插值的统一金字塔递归网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.03456)|[UPR-Net](https:\u002F\u002Fgithub.com\u002Fsrcn-ivl\u002FUPR-Net) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fsrcn-ivl\u002FUPR-Net)\n|CVPR23|[通过帧间注意力提取运动和外观信息以实现高效的视频帧插值](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.00440)|[EMA-VFI](https:\u002F\u002Fgithub.com\u002FMCG-NJU\u002FEMA-VFI) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FMCG-NJU\u002FEMA-VFI)\n|WACV23|[利用隐式流编码进行动态场景的帧插值](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2023\u002Fpapers\u002FFigueiredo_Frame_Interpolation_for_Dynamic_Scenes_With_Implicit_Flow_Encoding_WACV_2023_paper.pdf)|[frameintIFE](https:\u002F\u002Fgithub.com\u002Fpedrovfigueiredo\u002FframeintIFE) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fpedrovfigueiredo\u002FframeintIFE)\n|ACMMM22|[基于光流的视频帧合成中的邻域对应匹配](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.06763)|\n|ECCV22|[提升2D动画插值的感知质量](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.06294)|[eisai](https:\u002F\u002Fgithub.com\u002FShuhongChen\u002Feisai-anime-interpolator) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FShuhongChen\u002Feisai-anime-interpolator)\n|ECCV22|[用于视频帧插值的实时中间流估计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.06294)|[RIFE](https:\u002F\u002Fgithub.com\u002Fhzwer\u002FECCV2022-RIFE) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fhzwer\u002FECCV2022-RIFE)\n|CVPR22|[VideoINR：学习视频隐式神经表示以实现连续时空超分辨率](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.04647.pdf)|[VideoINR](https:\u002F\u002Fgithub.com\u002FPicsart-AI-Research\u002FVideoINR-Continuous-Space-Time-Super-Resolution) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FPicsart-AI-Research\u002FVideoINR-Continuous-Space-Time-Super-Resolution)\n|CVPR22|[IFRNet：用于高效帧插值的中间特征精炼网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.14620.pdf)|[IFRNet](https:\u002F\u002Fgithub.com\u002Fltkong218\u002FIFRNet) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fltkong218\u002FIFRNet)\n|TOG21|[渲染内容的神经帧插值](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3478513.3480553)\n|CVPR21|[野外环境下的深度动画视频插值](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.02495)|[AnimeInterp](https:\u002F\u002Fgithub.com\u002Flisiyao21\u002FAnimeInterp) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Flisiyao21\u002FAnimeInterp)\n|CVPR20|[用于视频帧插值的Softmax Splatting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.05534)|[softmax-splatting](https:\u002F\u002Fgithub.com\u002Fsniklaus\u002Fsoftmax-splatting) ![Github星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fsniklaus\u002Fsoftmax-splatting)\n|CVPR20|[用于视频帧插值的流自适应协作](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.10244)|[AdaCoF-pytorch](https:\u002F\u002Fgithub.com\u002FHyeongminLEE\u002FAdaCoF-pytorch) 
|CVPR20|[FeatureFlow: Robust Video Interpolation via Structure-to-Texture Generation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Gui_FeatureFlow_Robust_Video_Interpolation_via_Structure-to-Texture_Generation_CVPR_2020_paper.pdf)|[FeatureFlow](https://github.com/CM-BF/FeatureFlow) ![GitHub stars](https://img.shields.io/github/stars/CM-BF/FeatureFlow)|

### Video Inpainting
| Venue | Paper | Code |
| -------- | -------- | -------- |
|ECCV22|[Flow-Guided Transformer for Video Inpainting](https://arxiv.org/abs/2208.06768)|[FGT](https://github.com/hitachinsk/FGT) ![GitHub stars](https://img.shields.io/github/stars/hitachinsk/FGT)|
|CVPR22|[Inertia-Guided Flow Completion and Style Fusion for Video Inpainting](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_Inertia-Guided_Flow_Completion_and_Style_Fusion_for_Video_Inpainting_CVPR_2022_paper.pdf)|[isvi](https://github.com/hitachinsk/isvi) ![GitHub stars](https://img.shields.io/github/stars/hitachinsk/isvi)|

### Video Stabilization
| Venue | Paper | Code |
| -------- | -------- | -------- |
|CVPR20|[Learning Video Stabilization Using Optical Flow](https://cseweb.ucsd.edu/~ravir/jiyang_cvpr20.pdf)|[jiyang.fun](https://drive.google.com/file/d/1wQJYFd8TMbCRzhmFfDyBj7oHAGfyr1j6/view)|

### Low-Level Vision
| Venue | Paper | Code |
| -------- | -------- | -------- |
|ICCV21|[Deep Reparametrization of Multi-Frame Super-Resolution and Denoising](https://arxiv.org/abs/2108.08286)|[deep-rep](https://github.com/goutamgmb/deep-rep) ![GitHub stars](https://img.shields.io/github/stars/goutamgmb/deep-rep)|
|CVPR21|[Deep Burst Super-Resolution](https://arxiv.org/abs/2101.10997)|[deep-burst-sr](https://github.com/goutamgmb/deep-burst-sr) ![GitHub stars](https://img.shields.io/github/stars/goutamgmb/deep-burst-sr)|
|CVPR20|[Efficient Dynamic Scene Deblurring Using Spatially Variant Deconvolution Network with Optical Flow Guided Training](https://openaccess.thecvf.com/content_CVPR_2020/html/Yuan_Efficient_Dynamic_Scene_Deblurring_Using_Spatially_Variant_Deconvolution_Network_With_CVPR_2020_paper.html)||
|TIP20|[Deep Video Super-Resolution Using HR Optical Flow Estimation](https://arxiv.org/abs/2001.02129)|[SOF-VSR](https://github.com/The-Learning-And-Vision-Atelier-LAVA/SOF-VSR) ![GitHub stars](https://img.shields.io/github/stars/The-Learning-And-Vision-Atelier-LAVA/SOF-VSR)|

### Stereo Vision & SLAM
| Venue | Paper | Code |
| -------- | -------- | -------- |
|3DV21|[RAFT-Stereo: Multilevel Recurrent Field Transforms for Stereo Matching](https://arxiv.org/pdf/2109.07547.pdf)|[RAFT-Stereo](https://github.com/princeton-vl/RAFT-Stereo) ![GitHub stars](https://img.shields.io/github/stars/princeton-vl/RAFT-Stereo)|
|CVPR20|[VOLDOR: Visual Odometry from Log-Logistic Dense Optical Flow Residuals](https://openaccess.thecvf.com/content_CVPR_2020/html/Min_VOLDOR_Visual_Odometry_From_Log-Logistic_Dense_Optical_Flow_Residuals_CVPR_2020_paper.html)|[VOLDOR](https://github.com/htkseason/VOLDOR) ![GitHub stars](https://img.shields.io/github/stars/htkseason/VOLDOR)|
## Before 2020

### Classical Estimation Methods
| Venue | Paper | Code |
| -------- | -------- | -------- |
|IJCAI1981|[An Iterative Image Registration Technique with an Application to Stereo Vision](http://citeseer.ist.psu.edu/viewdoc/download;jsessionid=C41563DCDDC44CB0E13D6D64D89FF3FD?doi=10.1.1.421.4619&rep=rep1&type=pdf)||
|AI1981|[Determining Optical Flow](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.562&rep=rep1&type=pdf)||
|TPAMI10|[Motion Detail Preserving Optical Flow Estimation](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.221.896&rep=rep1&type=pdf)||
|CVPR10|[Secrets of Optical Flow Estimation and Their Principles](https://users.soe.ucsc.edu/~pang/200/f18/papers/2018/05539939.pdf)||
|ICCV13|[DeepFlow: Large Displacement Optical Flow with Deep Matching](https://openaccess.thecvf.com/content_iccv_2013/papers/Weinzaepfel_DeepFlow_Large_Displacement_2013_ICCV_paper.pdf)|[Project](https://thoth.inrialpes.fr/src/deepflow/)|
|ECCV14|[Optical Flow Estimation with Channel Constancy](https://link.springer.com/content/pdf/10.1007/978-3-319-10590-1_28.pdf)||
|CVPR17|[S2F: Slow-to-Fast Interpolator Flow](https://openaccess.thecvf.com/content_cvpr_2017/papers/Yang_S2F_Slow-To-Fast_Interpolator_CVPR_2017_paper.pdf)||
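These classical methods run without a GPU, and OpenCV ships mature implementations of several of them, which makes them convenient baselines before reaching for a learned model. Below is a minimal sketch (an illustration added here, not an entry from the list) that runs pyramidal Lucas-Kanade, the lineage of the IJCAI 1981 entry above, on a placeholder image pair; the filenames `frame1.png` and `frame2.png` are assumptions.

```python
# Sparse optical flow with pyramidal Lucas-Kanade via OpenCV
# (pip install opencv-python). frame1.png / frame2.png are placeholders.
import cv2
import numpy as np

prev_img = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Pick Shi-Tomasi corners in the first frame as the points to track.
pts0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=200,
                               qualityLevel=0.01, minDistance=7)
assert pts0 is not None, "no trackable corners found"

# Estimate where each corner moved in the second frame.
pts1, status, _err = cv2.calcOpticalFlowPyrLK(
    prev_img, next_img, pts0, None, winSize=(21, 21), maxLevel=3)

# Keep successfully tracked points; each (good1 - good0) row is a flow vector.
ok = status.flatten() == 1
good0 = pts0[ok].reshape(-1, 2)
good1 = pts1[ok].reshape(-1, 2)
print("tracked:", len(good0),
      "mean displacement (px):", np.linalg.norm(good1 - good0, axis=1).mean())
```

For a dense per-pixel field rather than sparse tracks, `cv2.calcOpticalFlowFarneback` produces one in a single call.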
### Others

| Venue | Paper | Code |
| -------- | -------- | -------- |
|NeurIPS19|[Volumetric Correspondence Networks for Optical Flow](https://papers.nips.cc/paper/2019/hash/bbf94b34eb32268ada57a3be5062fe7d-Abstract.html)|[VCN](https://github.com/gengshan-y/VCN) ![GitHub stars](https://img.shields.io/github/stars/gengshan-y/VCN)|
|CVPR19|[Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation](https://arxiv.org/pdf/1904.05290.pdf)|[irr](https://github.com/visinf/irr) ![GitHub stars](https://img.shields.io/github/stars/visinf/irr)|
|CVPR18|[PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume](https://arxiv.org/abs/1709.02371)|[PWC-Net](https://github.com/NVlabs/PWC-Net) ![GitHub stars](https://img.shields.io/github/stars/NVlabs/PWC-Net) <br> [pytorch-pwc](https://github.com/sniklaus/pytorch-pwc) ![GitHub stars](https://img.shields.io/github/stars/sniklaus/pytorch-pwc)|
|CVPR18|[LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation](https://arxiv.org/abs/1805.07036)|[LiteFlowNet](https://github.com/twhui/LiteFlowNet) ![GitHub stars](https://img.shields.io/github/stars/twhui/LiteFlowNet) <br> [pytorch-liteflownet](https://github.com/sniklaus/pytorch-liteflownet) ![GitHub stars](https://img.shields.io/github/stars/sniklaus/pytorch-liteflownet)|
|CVPR17|[FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks](https://arxiv.org/abs/1612.01925)|[flownet2-pytorch](https://github.com/NVIDIA/flownet2-pytorch) ![GitHub stars](https://img.shields.io/github/stars/NVIDIA/flownet2-pytorch) <br> [flownet2](https://github.com/lmb-freiburg/flownet2) ![GitHub stars](https://img.shields.io/github/stars/lmb-freiburg/flownet2) <br> [flownet2-tf](https://github.com/sampepose/flownet2-tf) ![GitHub stars](https://img.shields.io/github/stars/sampepose/flownet2-tf)|
|CVPR17|[Optical Flow Estimation Using a Spatial Pyramid Network](https://arxiv.org/abs/1611.00850)|[spynet](https://github.com/anuragranj/spynet) ![GitHub stars](https://img.shields.io/github/stars/anuragranj/spynet) <br> [pytorch-spynet](https://github.com/sniklaus/pytorch-spynet) ![GitHub stars](https://img.shields.io/github/stars/sniklaus/pytorch-spynet)|
|ICCV15|[FlowNet: Learning Optical Flow with Convolutional Networks](https://arxiv.org/abs/1504.06852)|[FlowNetPytorch](https://github.com/ClementPinard/FlowNetPytorch) ![GitHub stars](https://img.shields.io/github/stars/ClementPinard/FlowNetPytorch)|
|AAAI19|[DDFlow: Learning Optical Flow with Unlabeled Data Distillation](https://arxiv.org/abs/1902.09145)|[DDFlow](https://github.com/ppliuboy/DDFlow) ![GitHub stars](https://img.shields.io/github/stars/ppliuboy/DDFlow)|
|CVPR19|[SelFlow: Self-Supervised Learning of Optical Flow](https://arxiv.org/abs/1904.09117)|[SelFlow](https://github.com/ppliuboy/SelFlow) ![GitHub stars](https://img.shields.io/github/stars/ppliuboy/SelFlow)|
|CVPR19|[Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes](https://arxiv.org/abs/1904.03848)|[EPIFlow](https://github.com/yiranzhong/EPIflow) ![GitHub stars](https://img.shields.io/github/stars/yiranzhong/EPIflow)|
|CVPR18|[Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose](https://arxiv.org/abs/1803.02276)|[GeoNet](https://github.com/yzcjtr/GeoNet) ![GitHub stars](https://img.shields.io/github/stars/yzcjtr/GeoNet)|
|ICCV19|[RainFlow: Optical Flow under Rain Streaks and Rain Veiling Effect](https://openaccess.thecvf.com/content_ICCV_2019/html/Li_RainFlow_Optical_Flow_Under_Rain_Streaks_and_Rain_Veiling_Effect_ICCV_2019_paper.html)||
|CVPR18|[Robust Optical Flow Estimation in Rainy Scenes](https://arxiv.org/abs/1704.05239)||
|NeurIPS19|[Quadratic Video Interpolation](https://arxiv.org/abs/1911.00627)||
|CVPR19|[Depth-Aware Video Frame Interpolation](https://openaccess.thecvf.com/content_CVPR_2019/papers/Bao_Depth-Aware_Video_Frame_Interpolation_CVPR_2019_paper.pdf)|[DAIN](https://github.com/baowenbo/DAIN) ![GitHub stars](https://img.shields.io/github/stars/baowenbo/DAIN)|
|CVPR18|[Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation](https://arxiv.org/abs/1712.00080)|[Super-SloMo](https://github.com/avinashpaliwal/Super-SloMo) ![GitHub stars](https://img.shields.io/github/stars/avinashpaliwal/Super-SloMo)|
|ICCV17|[Video Frame Synthesis Using Deep Voxel Flow](https://arxiv.org/abs/1702.02463)|[voxel-flow](https://github.com/liuziwei7/voxel-flow) ![GitHub stars](https://img.shields.io/github/stars/liuziwei7/voxel-flow) <br> [pytorch-voxel-flow](https://github.com/lxx1991/pytorch-voxel-flow) ![GitHub stars](https://img.shields.io/github/stars/lxx1991/pytorch-voxel-flow)|
|CVPR19|[DVC: An End-to-End Deep Video Compression Framework](https://arxiv.org/abs/1812.00101)|[PyTorchVideoCompression](https://github.com/ZhihaoHu/PyTorchVideoCompression) ![GitHub stars](https://img.shields.io/github/stars/ZhihaoHu/PyTorchVideoCompression)|
|ICCV17|[SegFlow: Joint Learning for Video Object Segmentation and Optical Flow](https://arxiv.org/abs/1709.06750)|[SegFlow](https://github.com/JingchunCheng/SegFlow) ![GitHub stars](https://img.shields.io/github/stars/JingchunCheng/SegFlow)|
|CVPR18|[End-to-End Flow Correlation Tracking with Spatial-Temporal Attention](https://arxiv.org/abs/1711.01124)||
|CVPR18|[Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition](https://arxiv.org/abs/1711.11152)|[Optical-Flow-Guided-Feature](https://github.com/kevin-ssy/Optical-Flow-Guided-Feature) ![GitHub stars](https://img.shields.io/github/stars/kevin-ssy/Optical-Flow-Guided-Feature)|
|GCPR18|[On the Integration of Optical Flow and Action Recognition](https://arxiv.org/abs/1712.08416)||
|CVPR14|[SteadyFlow: Spatially Smooth Optical Flow for Video Stabilization](http://www.liushuaicheng.org/CVPR2014/SteadyFlow.pdf)||

# Awesome-Optical-Flow Quick-Start Guide

**Awesome-Optical-Flow** is not a single runnable program but a curated open-source list of optical flow papers, codebases, and datasets. This guide shows developers how to use the list to quickly find a suitable optical flow model (such as RAFT, GMFlow, or FlowFormer) and, taking the popular **RAFT** model as an example, walks through environment setup and basic usage.

## 1. Environment Setup

Before starting, make sure your development environment meets the following requirements. Most modern optical flow models rely on PyTorch and CUDA for acceleration.

*   **Operating system**: Linux (Ubuntu 18.04/20.04 recommended) or macOS (supported by some models, but inference is slower).
*   **Hardware**: An NVIDIA GPU (≥ 4 GB VRAM recommended) with a matching CUDA Toolkit installed (11.1+ is typically advised).
*   **Prerequisites**:
    *   Python 3.7+
    *   Git
    *   Conda (recommended for managing virtual environments)

## 2. Installation

Since the list collects many independent projects, the steps below use the classic **RAFT** model to illustrate the typical installation flow. Other models follow similar logic; consult the `README` of the corresponding repository.

### 2.1 Create a Virtual Environment
```bash
conda create -n optical_flow python=3.8
conda activate optical_flow
```

### 2.2 Install PyTorch
Visit the [PyTorch website](https://pytorch.org/) for the command matching your setup, or install directly from the official wheel index:
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
*(Note: adjust the `cu118` suffix to match your actual CUDA version.)*
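A mismatch between the chosen wheel (`cu118` above) and the local NVIDIA driver is a common source of confusing failures later, so a quick sanity check is worthwhile before cloning any model code. A minimal snippet, assuming it runs inside the `optical_flow` environment created in 2.1:

```python
# Verify that PyTorch is installed and can actually see the GPU.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If `CUDA available` prints `False` on a GPU machine, revisit the wheel index URL before proceeding.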
### 2.3 Clone the Target Repository and Install Dependencies
Using RAFT as the example:
```bash
# Clone the code
git clone https://github.com/princeton-vl/RAFT.git
cd RAFT

# Install core dependencies
pip install -r requirements.txt

# Build the corrcheck and altcorr extensions (custom ops widely used by flow models)
cd core/utils
python setup.py build_ext --inplace
cd ../../
```

*Tip: if cloning from GitHub fails due to network issues, try a Gitee mirror or configure a proxy.*

## 3. Basic Usage

Once installation is complete, you can download pretrained weights and estimate optical flow on image pairs.

### 3.1 Download Pretrained Models
Most repositories provide download scripts or direct links. For RAFT:
```bash
wget https://dl.dropboxusercontent.com/s/4j4z58wuv8o0mfz/models.zip
unzip models.zip
```

### 3.2 Run the Inference Demo
Use the provided demo script to process two frames (frame1 and frame2) and generate a flow visualization.

```bash
python demo.py --model=models/raft-things.pth --path=demo-frames
```

**Arguments:**
*   `--model`: path to the pretrained weights (e.g., `raft-things.pth` or `raft-sintel.pth`).
*   `--path`: path to a folder of input images (or a single image).
*   `--mixed_precision`: (optional) enable mixed-precision inference for extra speed.

To call RAFT from your own code instead of through the script, see the sketch below.
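The sketch below mirrors the structure of RAFT's official demo script. It assumes you run it from the RAFT repository root (so that `core/` is importable) and that `models/raft-things.pth` was extracted in step 3.1; the two frame paths are placeholders for your own image pair, and flag names can differ between repo versions, so treat this as a sketch rather than a drop-in script.

```python
# Programmatic RAFT inference, modeled on the repo's demo.py.
import sys
sys.path.append("core")  # RAFT keeps its modules under core/

import argparse
import numpy as np
import torch
from PIL import Image

from raft import RAFT                # model definition from the cloned repo
from utils.utils import InputPadder  # pads inputs to a RAFT-friendly size

def load_image(path, device):
    # HWC uint8 image -> 1x3xHxW float tensor, as demo.py does.
    img = np.array(Image.open(path)).astype(np.uint8)
    return torch.from_numpy(img).permute(2, 0, 1).float()[None].to(device)

device = "cuda" if torch.cuda.is_available() else "cpu"

# RAFT reads its options from an argparse-style namespace; these mirror
# the demo.py defaults for the full (non-small) model.
args = argparse.Namespace(small=False, mixed_precision=False, alternate_corr=False)

model = torch.nn.DataParallel(RAFT(args))
model.load_state_dict(torch.load("models/raft-things.pth", map_location=device))
model = model.module.to(device).eval()

with torch.no_grad():
    image1 = load_image("demo-frames/frame_0016.png", device)  # placeholder path
    image2 = load_image("demo-frames/frame_0017.png", device)  # placeholder path

    padder = InputPadder(image1.shape)
    image1, image2 = padder.pad(image1, image2)

    # test_mode=True returns (low-res flow, upsampled flow); keep the latter.
    _, flow_up = model(image1, image2, iters=20, test_mode=True)

print("flow shape:", flow_up.shape)  # [1, 2, H, W]: per-pixel (dx, dy) in px
```

The repo's `core/utils/flow_viz.py` can convert `flow_up` into the familiar color-wheel rendering.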
### 3.3 Explore More Models

Back in the **Awesome-Optical-Flow** list, you can pick models with different strengths to match your needs:
*   **High accuracy**: see `FlowFormer`, `GMFlow` (CVPR 2022).
*   **Long-sequence tracking**: see `FlowTrack`, `AccFlow` (CVPR/ICCV 2023-2024).
*   **Unsupervised learning**: see `UPFlow`, `SMURF`.
*   **Special conditions**: e.g., fog (`UCDA-Flow`) or low light (`Optical-Flow-in-the-Dark`).

To switch models, simply substitute the repository URL and the corresponding run commands in the installation steps above.

# Use-Case Scenario

The algorithm team at an autonomous-driving startup is building a motion-perception module for complex night-time road conditions and urgently needs high-accuracy optical flow estimation to track dynamic obstacles.

### Without Awesome-Optical-Flow
- **Inefficient retrieval**: engineers search across arXiv, Google Scholar, and other platforms, spending weeks sifting optical-flow papers with no easy way to tell which ones have open-source code.
- **Blind technology selection**: faced with many models such as RAFT and FlowFormer and no basis for side-by-side comparison, it is easy to pick an older architecture unsuited to low-light scenes, so early validation fails.
- **Costly reproduction**: the papers found often lack official implementation links, forcing the team to re-implement algorithms from scratch; missing details keep performance below the reported numbers and badly slow development.
- **Lagging frontier awareness**: recent breakthroughs such as MemFlow (memory mechanisms) or DistractFlow (distraction-robust training) are hard to discover systematically, so key opportunities to improve robustness are missed.

### With Awesome-Optical-Flow
- **One-stop resource aggregation**: the team consults the list directly, locating top-venue papers with official repo links by date or conference (e.g., CVPR24, NeurIPS22), cutting the survey phase from weeks to about two days.
- **Precise matching of requirements**: the feature descriptions in the list make it quick to lock onto GMA or GMFlow, which target occlusion and motion blur, avoiding fruitless trial and error.
- **Seamless handoff to development**: every entry links to a well-starred GitHub repository, so developers can pull pretrained weights and inference code directly and stand up a baseline model the same day.
- **Keeping pace with the field**: the continuously updated supervised-models section gives the team an immediate view of the latest trends combining generative approaches, charting a clear path for next-generation prediction algorithms.

Awesome-Optical-Flow turns fragmented academic results into a structured engineering asset, freeing R&D teams from tedious literature mining to focus on landing and optimizing their core algorithms.