[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ZitongYu--DeepFAS":3,"tool-ZitongYu--DeepFAS":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":79,"stars":81,"forks":82,"last_commit_at":83,"license":79,"difficulty_score":84,"env_os":85,"env_gpu":86,"env_ram":86,"env_deps":87,"category_tags":90,"github_topics":79,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":91,"updated_at":92,"faqs":93,"releases":121},2161,"ZitongYu\u002FDeepFAS","DeepFAS","🔥Deep Learning for Face Anti-Spoofing","DeepFAS 是一个专注于人脸活体检测（Face Anti-Spoofing）的深度学习开源项目，旨在系统性地梳理和整合该领域的前沿研究成果。面对人脸识别系统中常见的照片打印、屏幕重放及 3D 面具等欺骗攻击，DeepFAS 通过汇总 2018 至 2022 年间的主流算法，为开发者提供了一套从传统混合方法到纯深度学习、再到广义化学习的完整技术图谱。\n\n该项目不仅涵盖了基于普通 RGB 摄像头的单模态检测方案，还深入探讨了利用多模态数据及专用传感器的进阶策略。其核心亮点在于构建了一个详尽的公共资源库，整理了包括 NUAA、CASIA-MFSD、REPLAY-ATTACK 在内的多个经典数据集，并对比了不同的评估协议与攻击类型。此外，DeepFAS 还特别关注域适应、零样本学习、异常检测及自监督学习等前沿方向，帮助从业者应对复杂多变的应用场景。\n\nDeepFAS 非常适合计算机视觉领域的研究人员、算法工程师及安全系统开发者使用。无论是希望快速了解行业现状的初学者，还是寻求最新模型架构进行二次开发的资深专家，都能从中获得宝贵的参考依据和数据支持，从而高效地构建更安全、鲁棒的人脸识别系统。","# 👏 Survey of Deep Face Anti-spoofing 🔥\n\nThis is the official repository of \"**[Deep Learning for Face Anti-Spoofing: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.14948)**\", a comprehensive survey 
\nof recent progress in deep learning methods for face anti-spoofing (FAS) as well as the datasets and protocols.\n\n\n\n### Citation\nIf you find our work useful in your research, please consider citing:\n\n    @article{yu2022deep,\n      title={Deep Learning for Face Anti-Spoofing: A Survey},\n      author={Yu, Zitong and Qin, Yunxiao and Li, Xiaobai and Zhao, Chenxu and Lei, Zhen and Zhao, Guoying},\n      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},\n      year={2022}\n    }\n\n\n## Introduction\nWe present a comprehensive review of recent deep learning methods for face anti-spoofing (mostly from 2018 to 2022). It covers hybrid (handcrafted+deep), pure deep learning, and generalized learning based methods for monocular RGB face anti-spoofing. It also includes multi-modal learning based methods as well as specialized sensor based FAS, and presents a detailed comparison of publicly available datasets together with several classical evaluation protocols.\n\n🔔 We will update this page frequently~ :tada::tada::tada:\n\n---\n## Contents\n\n- [Datasets](#data)\n  - [Using commercial RGB camera](#data_RGB)\n  - [With multiple modalities or specialized sensors](#data_Multimodal)\n- [Deep FAS methods with commercial RGB camera](#methods_RGB)\n  - [Hybrid (handcrafted + deep)](#hybrid)\n  - [End-to-end binary cross-entropy supervision](#binary)\n  - [Pixel-wise auxiliary supervision](#auxiliary)\n  - [Generative model with pixel-wise supervision](#generative)\n  - [Domain adaptation](#DA)\n  - [Domain generalization](#DG)\n  - [Zero\u002FFew-shot learning](#zero-shot)\n  - [Anomaly detection](#oneclass)\n  - [Semi-supervision & Self-supervision](#semiself)\n  - [Continual learning](#CL)\n- [Deep FAS methods with advanced sensor](#methods_advanced)\n  - [Learning upon specialized sensor](#sensor)\n  - [Multi-modal learning](#multimodal)\n  - [Flexible-modal learning](#flexmodal)\n\n---\n  
![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FZitongYu_DeepFAS_readme_13acccf18a75.png)   \n  \n---\n\n\n\u003Ca name=\"data\" \u002F>\n\n### 1️⃣ Datasets\n\n\u003Ca name=\"data_RGB\" \u002F>\n\n#### Datasets recorded with commercial RGB camera\n\n| Dataset    | Year | #Live\u002FSpoof | #Sub. |  Setup | Attack Types |\n| --------   | -----    | -----  |  -----  | ----- |------------------------|\n| [NUAA](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.607.5449&rep=rep1&type=pdf)   | 2010 | 5105\u002F7509(I) | 15 |  N\u002FR | Print(flat, wrapped)|\n| [YALE Recaptured](https:\u002F\u002Fwww.ic.unicamp.br\u002F~rocha\u002Fpub\u002Fpapers\u002F2011-icip-spoofing-detection.pdf)   | 2011 | 640\u002F1920(I) | 10 |  50cm-distance from 3 LCD monitors | Print(flat) |\n| [CASIA-MFSD](http:\u002F\u002Fwww.cbsr.ia.ac.cn\u002Fusers\u002Fjjyan\u002FZHANG-ICB2012.pdf)   | 2012 | 150\u002F450(V) | 50 |  7 scenarios and 3 image qualities | Print(flat, wrapped, cut), Replay(tablet)|\n| [REPLAY-ATTACK](http:\u002F\u002Fpublications.idiap.ch\u002Fdownloads\u002Fpapers\u002F2012\u002FChingovska_IEEEBIOSIG2012_2012.pdf)   | 2012 | 200\u002F1000(V) | 50 |  Lighting and holding | Print(flat), Replay(tablet, phone) |\n| [Kose and Dugelay](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F6595862)   | 2013 | 200\u002F198(I) | 20 |  N\u002FR | Mask(hard resin) |\n| [MSU-MFSD](http:\u002F\u002Fbiometrics.cse.msu.edu\u002FPublications\u002FFace\u002FWenHanJain_FaceSpoofDetection_TIFS15.pdf)   | 2014 | 70\u002F210(V) | 35 |  Indoor scenario; 2 types of cameras | Print(flat), Replay(tablet, phone) |\n| [UVAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7017526)   | 2015 | 808\u002F16268(V) | 404 | Different lighting, background and places in two sections | Replay(monitor) |\n| [REPLAY-Mobile](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7736936)   | 2016 | 390\u002F640(V) | 40 |  5 lighting conditions | Print(flat), 
Replay(monitor) |\n| [HKBU-MARs V2](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-46478-7_6)   | 2016 | 504\u002F504(V) | 12 | 7 cameras from stationary and mobile devices and 6 lighting settings | Mask(hard resin) from Thatsmyface and REAL-f |\n| [MSU USSA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7487030)   | 2016 | 1140\u002F9120(I) | 1140 |  Uncontrolled; 2 types of cameras | Print(flat), Replay(laptop, tablet, phone)|\n| [SMAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7867821)   | 2017 | 65\u002F65(V) | - |  Color images from online resources | Mask(silicone) |\n| [OULU-NPU](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7961798)   | 2017 | 720\u002F2880(V) | 55 |  Lighting & background in 3 sections | Print(flat), Replay(phone) |\n| [Rose-Youtu](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8279564)   | 2018 | 500\u002F2850(V) | 20 | 5 front-facing phone camera; 5 different illumination conditions | Print(flat), Replay(monitor, laptop),Mask(paper, crop-paper)|\n| [SiW](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.11097)   | 2018 | 1320\u002F3300(V) | 165 |  4 sessions with variations of distance, pose, illumination and expression | Print(flat, wrapped), Replay(phone, tablet, monitor)|\n| [WFFD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.06514)   | 2019 | 2300\u002F2300(I) 140\u002F145(V) | 745 |  Collected online; super-realistic; removed low-quality faces | Waxworks(wax)|\n| [SiW-M](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLiu_Deep_Tree_Learning_for_Zero-Shot_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | 2019 | 660\u002F968(V) | 493 |  Indoor environment with pose, lighting and expression variations | Print(flat), Replay, Mask(hard resin, plastic, silicone, paper, Mannequin), Makeup(cosmetics, impersonation, Obfuscation), Partial(glasses, cut paper)|\n| [Swax](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.09642)   | 2020 | Total 
1812(I) 110(V) | 55 |  Collected online; captured under uncontrolled scenarios | Waxworks(wax)|\n| [CelebA-Spoof](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58610-2_5)   | 2020 | 156384\u002F469153(I) | 10177 |  4 illumination conditions; indoor & outdoor; rich annotations | Print(flat, wrapped), Replay(monitor, tablet, phone), Mask(paper)|\n| [RECOD-Mtablet](https:\u002F\u002Fjournals.plos.org\u002Fplosone\u002Farticle?id=10.1371\u002Fjournal.pone.0238058)   | 2020 | 450\u002F1800(V) | 45 | Outdoor environment and low-light & dynamic sessions | Print(flat), Replay(monitor) |\n| [CASIA-SURF 3DMask](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9252183)   | 2020 | 288\u002F864(V)  | 48 |  High-quality identity-preserved; 3 decorations and 6 environments | Mask(mannequin with 3D print) |\n| [HiFiMask](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06148)   | 2021 | 13650\u002F40950(V) | 75 |  three mask decorations; 7 recording devices; 6 lighting conditions; 6 scenes | Mask(transparent, plaster, resin)|\n| [SiW-M v2](https:\u002F\u002Fgithub.com\u002FCHELSEA234\u002FMulti-domain-learning-FAS)   | 2022 | 785\u002F915 (V) | 1093(493\u002F600) |  Both indoor and outdoor, diverse ages and ethnicities, 7 illuminations | IARPA-verified 14 spoof attacks (4 coverings, 3 makeups, 3 masks, 2 human models, replay and print)|\n| [SuHiFiMask](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00975)   | 2022 | 10195\u002F10195 (V) | 101 |  Long distance using surveillance cameras, recording in 3 scenes, and 3 lightings, 4 weathers  | 2D image, Video replay, 3D Mask with materials Resin, Plaster, Silicone, Paper|\n| [WFAS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05753)  | 2023 |  529,571\u002F 853,729 (I) | 469,920 |  Internet, unconstrained settings  | 17 PAs, Print(newspaper, poster, photo, album, picture book, scan photo, packaging, cloth), Display(phone, tablet, TV, computer), Mask, 3D Model(garage kit, doll, adult doll, 
waxwork)|\n\n\n\u003Ca name=\"data_Multimodal\" \u002F>\n\n#### Datasets with multiple modalities or specialized sensors\n\n| Dataset    | Year | #Live\u002FSpoof | #Sub. |  M&H | Setup | Attack Types |\n| --------   | -----    | -----  |  -----  | -----  | -----  |------------------------|\n| [3DMAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F6810829)   | 2013 | 170\u002F85(V) | 17 |  VIS, Depth | 3 sessions (2 weeks interval) | Mask(paper, hard resin)|\n| [GUC-LiFFAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7018027)   | 2015 | 1798\u002F3028(V) | 80 |  Light field | Distance of 1.5 constrained conditions | Print(Inkjet paper, Laserjet paper), Replay(tablet)|\n| [3DFS-DB](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F277905873_Three-dimensional_and_two-and-a-half-dimensional_face_recognition_spoofing_using_three-dimensional_printed_models)   | 2016 | 260\u002F260(V) | 26 |  VIS, Depth | Head movement with rich angles | Mask(plastic)|\n| [BRSU Skin\u002FFace\u002FSpoof](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7550052)   | 2016 | 102\u002F404(I) | 137 |  VIS, SWIR | multispectral SWIR with 4 wavebands 935nm, 1060nm, 1300nm and 1550nm | Mask(silicon, plastic, resin, latex)|\n| [Msspoof](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-28501-6_8)   | 2016 | 1470\u002F3024(I) | 21 |  VIS, NIR | 7 environmental conditions | Black&white Print(flat) |\n| [MLFP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8014774)   | 2017 | 150\u002F1200(V) | 10 |  VIS, NIR, Thermal | Indoor and outdoor with fixed and random backgrounds | Mask(latex, paper) |\n| [ERPA](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F320177829_What_You_Can't_See_Can_Help_You_-_Extended-Range_Imaging_for_3D-Mask_Presentation_Attack_Detection)   | 2017 | Total 86(V) | 5 |  VIS, Depth, NIR, Thermal | Subject positioned close (0.3∼0.5m) to the 2 types of cameras | Print(flat), Replay(monitor), 
Mask(resin, silicone) |\n| [LF-SAD ](http:\u002F\u002Fwww.ee.cityu.edu.hk\u002F~lmpo\u002Fpublications\u002F2019_JEI_Face_Liveness.pdf)   | 2018 | 328\u002F596(I) | 50 |  Light field | Indoor fix background, captured by Lytro ILLUM camera | Print(flat, wrapped), Replay(monitor) |\n| [CSMAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8698550)   | 2018 | 104\u002F159(V+I) | 14 |  VIS, Depth, NIR, Thermal | 4 lighting conditions | Mask(custom silicone) |\n| [3DMA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8909845)   | 2019 | 536\u002F384(V) | 67 |  VIS, NIR | 48 masks with different ID; 2 illumination & 4 capturing distances | Mask(plastics) |\n| [CASIA-SURF](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhang_A_Dataset_and_Benchmark_for_Large-Scale_Multi-Modal_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | 2019 | 3000\u002F18000(V) | 1000 |  VIS, Depth, NIR | Background removed; Randomly cut eyes, nose or mouth areas | Print(flat, wrapped, cut) |\n| [WMCA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8714076)   | 2019 | 347\u002F1332(V) | 72 |  VIS, Depth, NIR, Thermal | 6 sessions with different backgrounds and illumination; pulse data for bonafide recordings | Print(flat), Replay(tablet), Partial(glasses), Mask(plastic, silicone, and paper, Mannequin) |\n| [CeFA](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2021\u002Fhtml\u002FLiu_CASIA-SURF_CeFA_A_Benchmark_for_Multi-Modal_Cross-Ethnicity_Face_Anti-Spoofing_WACV_2021_paper.html)   | 2020 | 6300\u002F27900(V) | 1607 |  VIS, Depth, NIR | 3 ethnicities; outdoor & indoor; decoration with wig and glasses | Print(flat, wrapped), Replay, Mask(3D print, silica gel) |\n| [HQ-WMCA](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9146362)   | 2020 | 555\u002F2349(V) | 51 | VIS, Depth, NIR, SWIR, Thermal | Indoor; 14 ‘modalities’, including 4 NIR and 7 SWIR wavelengths; masks and mannequins were heated up to reach 
body temperature | Laser or inkjet Print(flat), Replay(tablet, phone), Mask(plastic, silicon, paper, mannequin), Makeup, Partial(glasses, wigs, tattoo) |\n| [PADISI-Face](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.12081.pdf)   | 2021 | 1105\u002F924(V) | 360 | VIS, Depth, NIR, SWIR, Thermal | Indoor, fixed background, 60-frame sequence of 1984 × 1264 pixel images | print(flat), replay(tablet, phone), mask(plastic, silicon, transparent, Mannequin), makeup\u002Ftattoo, partial(glasses, funny eye) |\n\n\n\n---\n\u003Ca name=\"methods_RGB\" \u002F>\n\n### 2️⃣ Deep FAS methods with commercial RGB camera\n\n\u003Ca name=\"hybrid\" \u002F>\n\n#### Hybrid (handcrafted + deep)\n\n| Method    | Year | Backbone | Loss |  Input | Static\u002FDynamic |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [DPCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7821013)   | 2016 | VGG-Face | Trained with SVM |  RGB | S|\n| [Multi-cues+NN](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS1047320316300244)   | 2016 | MLP | Binary CE loss |  RGB+OFM | D|\n| [CNN LBP-TOP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7984552)   | 2017 | 5-layer CNN | Binary CE loss, SVM |  RGB | D|\n| [DF-MSLBP](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.03850)   | 2018 | Deep forest | Binary CE loss |  HSV+YCbCr | S|\n| [SPMT+SSD](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0031320318303182)   | 2018 | VGG16 | Binary CE loss, SVM, bbox regression |  RGB, Landmarks | S|\n| [CHIF](http:\u002F\u002Fiab-rubric.org\u002Fpapers\u002FManjani-DDLSpoofing.pdf)   | 2019 | VGG-Face | Trained with SVM |  RGB | S|\n| [DeepLBP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8296251)   | 2019 | VGG-Face | Binary CE loss, SVM |  RGB, HSV, YCbCr | S|\n| [CNN+LBP+WLD](https:\u002F\u002Fdigital-library.theiet.org\u002Fcontent\u002Fjournals\u002F10.1049\u002Fiet-ipr.2018.5560)   | 2019 | 
CaffeNet | Binary CE loss |  RGB | S|\n| [Intrinsic](https:\u002F\u002Fonlinelibrary.wiley.com\u002Fdoi\u002F10.1049\u002Fiet-bmt.2019.0155)   | 2019 | 1D-CNN | Trained with SVM |  Reflection | D|\n| [FARCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8911314)   | 2019 | Multi-scale attentional CNN | Regression loss, Crystal loss, Center loss |  RGB | S|\n| [CNN-LSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8626161)   | TIFS 2019 | 1D-CNN | Trained with SVM |  RGB | D |\n| [DT-Mask](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8453011)   | 2019 | VGG16 | Binary CE loss, Channel&Spatial discriminability |  RGB+OF | D |\n| [VGG+LBP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8955089)   | 2019 | VGG16 | Binary CE loss |  RGB | S|\n| [CNN+OVLBP](http:\u002F\u002Fwww.mecs-press.org\u002Fijigsp\u002Fijigsp-v11-n2\u002FIJIGSP-V11-N2-2.pdf)   | 2019 | VGG16 | Binary CE loss, NN classifier |  RGB | S|\n| [HOG-Pert.](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-20005-3_1)   | 2019 | Multi-scale CNN | Binary CE loss |  RGB+HOG | S|\n| [LBP-Pert.](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0262885619304512)   | 2020 | Multi-scale CNN | Binary CE loss |  RGB+LBP | S|\n| [TransRPPG](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9460762)   | SPL 2021 | Vision Transformer | Binary CE loss |  rPPG map | D |\n\n\n\n\u003Ca name=\"binary\" \u002F>\n\n#### End-to-end binary cross-entropy supervision\n| Method    | Year | Backbone | Loss |  Input | Static\u002FDynamic |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [CNN1](https:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5601)   | 2014 | 8-layer CNN | Trained with SVM |  RGB | S|\n| [LSTM-CNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7486482)   | 2015 | CNN+LSTM | Binary CE loss |  RGB | D|\n| [SpoofNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1410.1980)   | 2015 | 
2-layer CNN | Binary CE loss |  RGB | S|\n| [HybridCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8253209)   | 2017 | VGG-Face | Trained with SVM |  RGB | S|\n| [CNN2](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.04176)   | 2017 | VGG11 | Binary CE loss |  RGB | S|\n| [Ultra-Deep](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-70096-0_70)   | 2017 | ResNet50+LSTM | Binary CE loss |  RGB | D|\n| [FASNet](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-59876-5_4)   | 2017 | VGG16 | Binary CE loss |  RGB | S|\n| [CNN3](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8166863)   | 2018 | Inception, ResNet | Binary CE loss |  RGB | S|\n| [MILHP](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2018\u002F0113.pdf)   | 2018 | ResNet+STN | Multiple Instances CE loss |  RGB | D|\n| [LSCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8614337)   | 2018 | 9 PatchNets | Binary CE loss |  RGB | S|\n| [LiveNet](http:\u002F\u002Fwww.ee.cityu.edu.hk\u002F~lmpo\u002Fpublications\u002F2018_ESA_LiveNet.pdf)   | 2018 | VGG11 | Binary CE loss |  RGB | S|\n| [MS-FANS ](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8546026)   | 2018 | AlexNet+LSTM | Binary CE loss |  RGB | S|\n| [DeepColorFAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8616677)   | 2018 | 5-layer CNN | Binary CE loss |  RGB, HSV, YCbCr | S|\n| [Siamese](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-31654-9_15)   | 2019 | AlexNet | Contrastive loss |  RGB | S|\n| [FSBuster](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.02845)   | 2019 | ResNet50 | Trained with SVM |  RGB | S|\n| [FuseDNG](http:\u002F\u002Fwww.ee.cityu.edu.hk\u002F~lmpo\u002Fpublications\u002F2019_VComm_Face_Liveness)   | 2019 | 7-layer CNN | Binary CE loss, Reconstruction loss |  RGB | S|\n| 
[STASN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FYang_Face_Anti-Spoofing_Model_Matters_so_Does_Data_CVPR_2019_paper.pdf)   | CVPR 2019 | ResNet50+LSTM | Binary CE loss |  RGB | D|\n| [TSCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8737949)   | TIFS 2019 | ResNet18 | Binary CE loss |  RGB, MSR | S|\n| [FAS-UCM](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.07270)   | 2019 | MobileNetV2, VGG19 | Binary CE loss, Style loss |  RGB | S|\n| [SLRNN](https:\u002F\u002Fbmvc2019.org\u002Fwp-content\u002Fuploads\u002Fpapers\u002F0973-paper.pdf)   | 2019 | ResNet50+LSTM | Binary CE loss |  RGB | D|\n| [GFA-CNN](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3402446)   | 2019 | VGG16 | Binary CE loss |  RGB | S|\n| [3DSynthesis](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8987415)   | 2019 | ResNet15 | Binary CE loss |  RGB | S|\n| [CompactNet](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0925231220308237?dgcid=rss_sd_all&utm_source=researcher_app&utm_medium=referral&utm_campaign=RESR_MRKT_Researcher_inbound)   | NC 2020 | VGG19 | Points-to-Center triplet loss |  RGB | S|\n| [SSR-FCN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9218954)   | TIFS 2020 | FCN with 6 layers | Binary CE loss |  RGB | S|\n| [FasTCo](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.06756)   | 2020 | ResNet50 or MobileNetV2 | Multi-class CE loss, Temporal Consistency loss, Class Consistency loss|  RGB | D|\n| [DRL-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9205636)   | TIFS 2020 | ResNet18+GRU | Binary CE loss |  RGB | S|\n| [SfSNet](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9068268)   | 2020 | 6-layer CNN | Binary CE loss |  Albedo, Depth, Reflection| S|\n| [LivenesSlight](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1801.01949.pdf)   | 2020 | 6-layer CNN | Binary CE loss |  RGB | S|\n| 
[MotionEnhancement](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9203944)   | 2020 | VGGface+LSTM | Binary CE loss |  RGB | D|\n| [CFSA-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9175520)   | 2020 | ResNet18 | Binary CE loss |  RGB | S|\n| [MC-FBC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.06514)   | 2020 | VGG16, ResNet50 | Binary CE loss |  RGB | S|\n| [SimpleNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16028)   | 2020 | Multi-stream 5-layer CNN | Binary CE loss |  RGB, OF, RP | D|\n| [PatchCNN](https:\u002F\u002Fjournals.plos.org\u002Fplosone\u002Farticle?id=10.1371\u002Fjournal.pone.0238058)   | 2020 | SqueezeNet v1.1 | Binary CE loss, Triplet loss |  RGB | S|\n| [FreqSpatialTempNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03723)   | 2020 | ResNet18 | Binary CE loss |  RGB, HSV, Spectral | D|\n| [ViTranZFAS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.08019)   | IJCB 2021 | ViT | Binary CE loss |  RGB | S|\n| [CIFL](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9336714)   | TIFS 2021 | ResNet18 | Binary focal loss, camera type loss |  RGB | S|\n| [XFace-PAD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.04862)   | FG 2021 | ResNet50, ViT | Binary CE loss, word-wise CE loss, a sentence discriminative loss, and a sentence semantic loss |  RGB | S|\n| [PCGN](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475305)   | MM 2021 | ResNet101+GCN | CE Loss for node and edge |  RGB whole image | S|\n| [TOD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11046)   | 2021 | ResNet18, Graph Attention Network | CE Loss |  RGB  | S|\n| [MTSS](https:\u002F\u002Fwww.bmvc2021-virtualconference.com\u002Fassets\u002Fpapers\u002F0113.pdf)   | BMVC 2021 | ViT+Multi-Level Attention Module | CE Loss |  RGB  | S|\n| [PatchNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14325)   | CVPR 2022 | ResNet18 | Asymmetric AM-Softmax Loss, Self-Supervised Similarity Loss |  RGB patches | S|\n| 
[ViTransPAD](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.01562.pdf)   | ICIP 2022 | EfficientNet + VideoViT | CE Loss |  RGB | D|\n| [FGDNet](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9946402)   | TMM 2022 | Convolutional Transformer | 5-class CE Loss |  RGB | S|\n\n\u003Ca name=\"auxiliary\" \u002F>\n\n#### Pixel-wise auxiliary supervision\n\n| Method    | Year | Supervision | Backbone |  Input | Static\u002FDynamic |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [Depth&Patch](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8272713\u002F)   | IJCB 2017 | Depth | PatchNet, DepthNet |  YCbCr, HSV | S|\n| [Auxiliary](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fpdfs\u002FLiu_Jourabloo_Liu_CVPR2018.pdf)   | CVPR 2018 | Depth, rPPG spectrum | DepthNet |  RGB, HSV | D|\n| [BASN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCVW_2019\u002Fpapers\u002FDFW\u002FKim_BASN_Enriching_Feature_Representation_Using_Bipartite_Auxiliary_Supervisions_for_Face_ICCVW_2019_paper.pdf)   | ICCVW 2019 | Depth, Reflection | DepthNet, Enrichment |  RGB, HSV | S|\n| [DTN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLiu_Deep_Tree_Learning_for_Zero-Shot_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | CVPR 2019 | BinaryMask | Tree Network |  RGB, HSV | S|\n| [PixBiS](http:\u002F\u002Fpublications.idiap.ch\u002Fdownloads\u002Fpapers\u002F2019\u002FGeorge_ICB2019.pdf)   | ICB 2019 | BinaryMask | DenseNet161 |  RGB | S|\n| [A-PixBiS](http:\u002F\u002Fwww.dicta2020.org\u002Fwp-content\u002Fuploads\u002F2020\u002F09\u002F53_CameraReady.pdf)   | 2020 | BinaryMask | DenseNet161 |  RGB | S|\n| [Auto-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9053587)   | ICASSP 2020 | BinaryMask | NAS |  RGB | S|\n| [MRCNN](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0167865520300015)   | 2020 | BinaryMask | Shallow CNN |  RGB | S|\n| 
[FCN-LSA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9056475)   | 2020 | BinaryMask | DepthNet |  RGB | S|\n| [CDCN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FYu_Searching_Central_Difference_Convolutional_Networks_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)   | CVPR 2020 | Depth | DepthNet |  RGB | S|\n| [FAS-SGTD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.08061)   | CVPR 2020 | Depth | DepthNet, STPM |  RGB | D|\n| [TS-FEN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9054115)   | 2020 | Depth | ResNet34, FCN |  RGB, YCbCr, HSV | S|\n| [SAPLC](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9056824)   | 2020 | TernaryMap | DepthNet |  RGB, HSV | S|\n| [BCN](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123520545.pdf)   | ECCV 2020 | BinaryMask, Depth, Reflection | DepthNet |  RGB | S|\n| [Disentangled](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123640630.pdf)   | ECCV 2020 | Depth, TextureMap | DepthNet |  RGB | S|\n| [AENet](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58610-2_5)   | ECCV 2020 | Depth, Reflection | ResNet18 |  RGB | S|\n| [3DPC-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9304873)   | IJCB 2020 | 3D Point Cloud | ResNet18 |  RGB | S|\n| [PS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9375488)   | TBIOM 2020 | BinaryMask or Depth | ResNet50 or CDCN |  RGB | S|\n| [NAS-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=9252183)   | PAMI 2020 | BinaryMask or Depth | NAS |  RGB | D|\n| [DAM](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9382387)   | 2021 | Depth | VGG16, TSM |  RGB | D|\n| [Bi-FPNFAS](https:\u002F\u002Fwww.mdpi.com\u002F1424-8220\u002F21\u002F8\u002F2799)   | 2021 | Fourier spectra | EfficientNetB0, FPN |  RGB | S|\n| 
[DC-CDN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.01290)   | IJCAI 2021 | Depth | CDCN |  RGB | S|\n| [DCN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.10628.pdf)   | IJCB 2021 | Reflection | DepthNet |  RGB | S|\n| [LMFD-PAD](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.07950.pdf)   | 2021 | BinaryMask | Dual-ResNet50 |  RGB + frequency map | S|\n| [MPFLN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021W\u002FHTCV\u002Fpapers\u002FWang_Multi-Perspective_Features_Learning_for_Face_Anti-Spoofing_ICCVW_2021_paper.pdf)   | ICCVW 2021 | Depth, BinaryMask | CDCN, 3D-CDCN |  RGB | S, D|\n| [DSDG+DUM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.00568)   | TIFS 2021 | Depth | CDCN |  RGB | S|\n| [SAFPAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9650907)   | TIFS 2021 | Depth | DepthNet |  RGB & Patch | S|\n| [EPCR](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.12320.pdf)   | 2021 | BinaryMask | CDCN |  RGB | S|\n| [AISL](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0167865521004384)   | PRL 2021 | Depth | DepthNet |  RGB | S|\n| [MEGC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.10187)   | ICASSP 2022 | Depth, Reflection, Moire, Boundary | DepthNet+Feature Enrichment |  RGB, HSV | S|\n| [EulerNet](http:\u002F\u002Fksiresearch.org\u002Fseke\u002Fseke22paper\u002Fpaper076.pdf)   | 2022 | Face Location Map | EulerNet with Temporal Attention, Residual Pyramid |  RGB | D|\n| [TTN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9730902)   | TIFS 2022 | Depth | ViT with Pyramid Temporal Aggregation, Temporal Difference Attentions |  RGB | D|\n| [TransFAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9817442)   | TBIOM 2022 | Depth | ViT with Cross-Layer Attentions |  RGB | S|\n| [DepthSeg](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9892826)   | IJCNN 2022 | Depth | PSPNet, DeepLabv3+ |  RGB | S|\n\n\u003Ca name=\"generative\" \u002F>\n\n#### Generative 
model with pixel-wise supervision\n\n| Method    | Year | Supervision | Backbone |  Input | Static\u002FDynamic |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [De-Spoof](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.09968)   | ECCV 2018 | Depth, BinaryMask, FourierMap | DSNet, DepthNet |  RGB, HSV | S|\n| [Reconstruction](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=8997504)   | 2019 | RGB Input (live), ZeroMap (spoof) | U-Net |  RGB | S|\n| [LGSC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.03922)   | 2020 | ZeroMap (live) | U-Net, ResNet18 |  RGB | S|\n| [TAE](http:\u002F\u002Fpublications.idiap.ch\u002Fdownloads\u002Fpapers\u002F2020\u002FMohammadi_InfoVAE_ICASSP_2020.pdf)   | ICASSP 2020 | Binary CE loss, Reconstruction loss | Info-VAE, DenseNet161 |  RGB | S|\n| [STDN](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123630392.pdf)   | ECCV 2020 | BinaryMask, RGB Input (live) | U-Net, PatchGAN |  RGB | S|\n| [GOGen](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FStehouwer_Noise_Modeling_Synthesis_and_Classification_for_Generic_Object_Anti-Spoofing_CVPR_2020_paper.pdf)   | CVPR 2020 | RGB input |  DepthNet |  RGB+one-hot vector | S|\n| [PhySTD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.05185)   | PAMI 2022 | Depth, RGB Input (live) |  U-Net, PatchGAN |  Frequency Trace | S|\n| [MT-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9462562)   | PAMI 2021 | ZeroMap (live), LearnableMap (Spoof) |  DepthNet |  RGB | S|\n| [IF-OM](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.04100.pdf)   | 2021 | RGB input, mixed input features |  MobileNetV2 + UNet |  RGB, mixed RGB, folded RGB | S|\n| [Dual-Stage Disentanglement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.09157)   | WACV 2021 | ZeroMap (live), RGB Input for reconstruction  | U-Net, ResNet18 |  RGB | S|\n\n\n\u003Ca name=\"DA\" \u002F>\n\n#### Domain adaptation\n\n| 
| Method    | Year | Backbone | Loss |  Static/Dynamic |
| --------   | -----    | -----  |  -----  | -----  | 
| [OR-DA](https://ieeexplore.ieee.org/document/8279564)   | TIFS 2018 | AlexNet | Binary CE loss, MMD loss |  S|
| [DTCNN](https://arxiv.org/abs/1901.05633)   | 2019 | AlexNet | Binary CE loss, MMD loss |  S|
| [Adversarial](https://ieeexplore.ieee.org/document/8987254)   | ICB 2019 | ResNet18 | Triplet loss, Adversarial loss |  S|
| [ML-MMD](https://ieeexplore.ieee.org/abstract/document/8795006)   | ICMEW 2019 | Multi-scale FCN | CE loss, MMD loss |  S|
| [OCA-FAS](https://www.sciencedirect.com/science/article/pii/S0925231220313540)   | NC 2020 | DepthNet | Binary CE loss, Pixel-wise binary loss |  S|
| [DR-UDA](https://ieeexplore.ieee.org/abstract/document/9116802)   | TIFS 2020 | ResNet18 | Center&Triplet loss, Adversarial loss, Disentangled loss |  S|
| [DGP](https://ieeexplore.ieee.org/document/9053685)   | ICASSP 2020 | DenseNet161 | Feature divergence measure, BinaryMask loss |  S|
| [Distillation](https://signalprocessingsociety.org/publications-resources/ieee-journal-selected-topics-signal-processing/face-anti-spoofing-deep-neural)   | J-STSP 2020 | AlexNet | Binary CE loss, MMD loss, Paired Similarity |  S|
| [SASA](https://arxiv.org/pdf/2106.14162.pdf)   | 2021 | ResNet18 | CE loss, Adversarial loss, Less-forgetting constraints, Contrastive semantic alignment |  S|
| [GDA](https://arxiv.org/abs/2207.10015)   | ECCV 2022 | DepthNet | CE loss, Depth loss, Inter-domain Neural Statistic Consistency, Phase consistency, Perceptual loss |  S|
| [CDFTN](https://arxiv.org/abs/2212.03651)   | AAAI 2023 | ResNet18 | CE loss, Reconstruction loss, Triplet loss |  S|


<a name="DG" />

#### Domain generalization


| Method    | Year | Backbone | Loss |  Static/Dynamic |
| --------   | -----    | -----  |  -----  | -----  | 
| [MADDG](https://openaccess.thecvf.com/content_CVPR_2019/papers/Shao_Multi-Adversarial_Discriminative_Deep_Domain_Generalization_for_Face_Presentation_Attack_Detection_CVPR_2019_paper.pdf)   | CVPR 2019 | DepthNet | Binary CE & Depth loss, Multi-adversarial loss, Dual-force Triplet loss |  S|
| [PAD-GAN](https://arxiv.org/abs/2004.01959)   | CVPR 2020 | ResNet18 | Binary CE & Depth loss, Multi-adversarial loss, Dual-force Triplet loss |  S|
| [DASN](https://ieeexplore.ieee.org/document/9423958)   | 2020 | ResNet18 | Binary CE & Spoof-irrelevant factor loss |  S|
| [SSDG](https://openaccess.thecvf.com/content_CVPR_2020/papers/Jia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)   | CVPR 2020 | ResNet18 | Binary CE loss, Single-Side adversarial loss, Asymmetric Triplet loss |  S|
| [RF-Meta](https://arxiv.org/abs/1911.10771)   | AAAI 2020 | DepthNet | Binary CE loss, Depth loss |  S|
| [CCDD](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w48/Saha_Domain_Agnostic_Feature_Learning_for_Image_and_Video_Based_Face_CVPRW_2020_paper.pdf)   | CVPRW 2020 | ResNet50+LSTM | Binary CE loss, Class-conditional loss |  D|
| [SDA](https://arxiv.org/abs/2102.12129)   | AAAI 2021 | DepthNet | Binary CE & Depth loss, Reconstruction loss, Orthogonality regularization |  S|
| [D2AM](https://ojs.aaai.org/index.php/AAAI/article/view/16199)   | AAAI 2021 | DepthNet | Binary CE loss, Depth loss, MMD loss |  S|
| [DRDG](https://arxiv.org/pdf/2106.16128.pdf)   | IJCAI 2021 | DepthNet | Binary CE loss, Depth loss, Domain loss |  S|
| [PDL-FAS](https://arxiv.org/pdf/2107.06552.pdf)   | 2021 | DepthNet | Binary CE loss, Depth loss |  S|
| [ANRL](https://arxiv.org/abs/2108.02667)   | ACMMM 2021 | DepthNet | Binary CE loss, Depth loss, Inter-Domain Compatible loss, Inter-Class Separable loss |  S|
| [HFN+MP](https://arxiv.org/abs/2110.06753)   | 2021 | Two-stream ResNet50 | Binary CE loss, MSE loss |  S|
| [SDFANet](https://ieeexplore.ieee.org/document/9600829)   | TIFS 2021 | ResNet18 | BCE loss + multi-grained loss + center loss + asymmetric triplet loss |  S|
| [VLAD-VSA](https://dl.acm.org/doi/abs/10.1145/3474085.3475284)   | ACMMM 2021 | DepthNet or ResNet18 | BCE loss + triplet loss + domain adversarial loss + orthogonal loss + centroid adaptation loss + intra loss |  S|
| [FGHV](https://arxiv.org/abs/2112.14894)   | AAAI 2022 | DepthNet | Variance + Relative Correlation + Distribution Discrimination Constraints |  S|
| [SSAN](https://arxiv.org/pdf/2203.05340.pdf)   | CVPR 2022 | DepthNet/ResNet18 | CE loss + Domain Adversarial loss + Contrastive loss |  S|
| [AMEL](https://arxiv.org/abs/2207.09868)   | ACMMM 2022 | DepthNet | CE loss, Depth loss, Feature consistency loss |  S|
| [MD-FAS](https://arxiv.org/pdf/2208.11148.pdf)   | ECCV 2022 | PhySTD | CE loss, Binary Mask loss, Source & Target distillation loss |  S|
| [FRT-PAD](https://wentianzhang-ml.github.io/pad)   | ECCV 2022 | ResNet18+GAT | CE loss |  S|
| [CIFAS](https://ieeexplore.ieee.org/document/9859783)   | ICME 2022 | ResNet18 | CE loss, Triplet loss |  S|
| [OneSideTriplet](https://arxiv.org/pdf/2211.15955.pdf)   | FG 2023 | DepthNet+UNet | CE loss, Triplet loss, Depth loss, Segmentation loss |  S|
| [DiVT](https://openaccess.thecvf.com/content/WACV2023/papers/Liao_Domain_Invariant_Vision_Transformer_Learning_for_Face_Anti-Spoofing_WACV_2023_paper.pdf)   | WACV 2023 | MobileViT-S | Domain-invariant Concentration and Attack-separation loss |  S|
| [ALDICF](https://link.springer.com/article/10.1007/s11263-023-01778-x)   | IJCV 2023 | ResNet18, ResNet50 | Intra-domain and cross-domain discrimination loss, Conditional Domain Adversarial loss |  S|
| [DKG+CSA+AIAW](https://arxiv.org/abs/2304.05640)   | CVPR 2023 | DepthNet | BCE loss, Depth loss, Asymmetric Instance Adaptive Whitening loss |  S|
| [SA-FAS](https://arxiv.org/abs/2303.13662)   | CVPR 2023 | ResNet18 | Contrastive loss, Alignment loss |  S|
| [SPDA](https://ieeexplore.ieee.org/document/10095730)   | ICASSP 2023 | ResNet18 | BCE loss, Domain loss, Self-paced Cluster Mining loss, Orthogonal loss |  S|
| [CRFAS](https://ieeexplore.ieee.org/document/10095329)   | ICASSP 2023 | ResNet18 | BCE loss, Domain loss, Asymmetric triplet loss, Counterfactual Feature Generation loss |  S|

<a name="zero-shot" />

#### Zero/Few-shot learning


| Method    | Year | Backbone | Loss |  Input |
| --------   | -----    | -----  |  -----  | -----  | 
| [DTN](https://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Deep_Tree_Learning_for_Zero-Shot_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | CVPR 2019 | Deep Tree Network | Binary CE loss, Pixel-wise binary loss, Unsupervised Tree loss |  RGB, HSV|
| [AIM-FAS](https://ojs.aaai.org/index.php/AAAI/article/view/6866)   | AAAI 2020 | DepthNet | Depth loss, Contrastive Depth loss |  RGB |
| [CM-PAD](https://ieeexplore.ieee.org/document/9304920)   | IJCB 2020 | DepthNet, ResNet | Binary CE loss, Depth loss, Gradient alignment |  RGB|
| [ViTAF](https://arxiv.org/abs/2203.12175)   | ECCV 2022 | ViT+adaptor | CE loss, Cosine loss |  RGB|




<a name="oneclass" />

#### Anomaly detection


| Method    | Year | Backbone | Loss |  Input |
| --------   | -----    | -----  |  -----  | -----  | 
| [AE+LBP](https://ieeexplore.ieee.org/abstract/document/8698574)   | 2018 | AutoEncoder | Reconstruction loss |  RGB|
| [Anomaly](https://openaccess.thecvf.com/content_CVPRW_2019/papers/CFS/Perez-Cabo_Deep_Anomaly_Detection_for_Generalized_Face_Anti-Spoofing_CVPRW_2019_paper.pdf)   | 2019 | ResNet50 | Triplet focal loss, Metric-Softmax loss |  RGB|
| [Anomaly2](https://ieeexplore.ieee.org/document/8682253)   | 2019 | GoogLeNet or ResNet50 | Mahalanobis distance |  RGB|
| [Hypersphere](https://www.researchgate.net/publication/338920244_UNSEEN_FACE_PRESENTATION_ATTACK_DETECTION_WITH_HYPERSPHERE_LOSS)   | 2020 | ResNet18 | Hypersphere loss |  RGB, HSV |
| [Ensemble-Anomaly](https://ieeexplore.ieee.org/document/9190814)   | 2020 | GoogLeNet or ResNet50 | Gaussian Mixture Model (not end-to-end) |  RGB, patches|
| [MCCNN](https://ieeexplore.ieee.org/document/9153044)   | 2020 | LightCNN | Binary CE loss, Contrastive loss |  Grayscale, IR, Depth, Thermal|
| [End2End-Anomaly](https://arxiv.org/abs/2007.05856)   | 2020 | VGG-Face | Binary CE loss, Pairwise confusion |  RGB|
| [ClientAnomaly](https://www.sciencedirect.com/science/article/pii/S0031320320304994)   | PR 2020 | ResNet50 or GoogLeNet or VGG16 | One-class SVM or Mahalanobis distance or Gaussian Mixture Model |  RGB|
| [ContrastiveEVT](https://dl.acm.org/doi/abs/10.1145/3474085.3475538)   | ACM MM 2021 | cVAE | Binary CE loss, Reconstruction loss, Contrastive loss |  RGB|
| [OneClassKD](https://arxiv.org/abs/2205.03792)   | TIFS 2022 | DepthNet | Pixel-wise Binary CE loss, Multi-level KD loss |  RGB|


<a name="semiself" />

#### Semi- & Self-supervision


| Method    | Year | Semi/Self | Backbone | Loss |
| --------   | -----    | -----  |  -----  | -----  | 
| [SCNN++PL+TC](https://ieeexplore.ieee.org/document/9387164)   | TIP 2021 | Semi; Pseudo-label | ResNet18 | CE loss |
| [USDAN](https://www.sciencedirect.com/science/article/pii/S0031320321000753?via%3Dihub)   | PR 2021 | Semi; Distribution Alignment | ResNet18 | Adaptive binary CE loss, Entropy loss, Adversarial loss |
| [EPCR](https://ieeexplore.ieee.org/document/10012352)   | TIFS 2023 | Semi; Consistency Regularization | CDCN | Prediction- & Embedding-level reconstruction loss|
| [TSS](https://www.sciencedirect.com/science/article/pii/S0167865522000605)   | PRL 2022 | Self; Pretext task | ResNet18+BiLSTM | CE loss for temporal sampling prediction|
| [ACL-FAS](https://link.springer.com/chapter/10.1007/978-3-031-18910-4_39)   | PRCV 2022 | Self; Contrastive learning | - | Region-Based Similarity loss, Contrastive & Anti-contrastive loss|
| [MIM-FAS](https://link.springer.com/chapter/10.1007/978-3-031-18907-4_62)   | PRCV 2022 | Self; Masked Image Modeling | ViT | Reconstruction loss|
| [DF-DM](https://ieeexplore.ieee.org/document/10051654)   | TNNLS 2023 | Self; Pretext task | DeepPixBiS, SSDG-R, CDCN | GAN loss, Interpolation-based Consistency loss |
| [MCAE](https://arxiv.org/abs/2302.08674)   | 2023 | Self+Supervised; Masked Image Modeling | ViT | Reconstruction loss + Supervised Contrastive loss|
| [AMA+M2A2E](https://arxiv.org/pdf/2302.05744.pdf)   | 2023 | Self; Masked Image Modeling | ViT | Reconstruction loss |




<a name="CL" />

#### Continual learning


| Method    | Year | Replay or not | Backbone |  Loss |
| --------   | -----    | -----  |  -----  | -----  | 
| [CM-PAD](https://ieeexplore.ieee.org/document/9304920)   | IJCB 2020 | with Replay | DepthNet |  batch/overall meta loss|
| [Experience Replay](https://openaccess.thecvf.com/content/ICCV2021/papers/Rostami_Detection_and_Continual_Learning_of_Novel_Face_Presentation_Attacks_ICCV_2021_paper.pdf)   | ICCV 2021 | with Replay | ResNet50 | BCE loss for identified novel and replayed samples |
| [DCDCA+PPCR](https://arxiv.org/abs/2303.09914)   | 2023 | Rehearsal-Free | ViT | BCE loss, Proxy Prototype Contrastive Regularization |



---
<a name="methods_advanced" />

### 3️⃣ Deep FAS methods with advanced sensor


<a name="sensor" />

#### Learning upon specialized sensor


| Method    | Year | Backbone | Loss |  Input | Static/Dynamic |
| --------   | -----    | -----  |  -----  | -----  | -----  |
| [Thermal-FaceCNN](https://www.mdpi.com/2073-8994/11/3/360)   | 2019 | AlexNet | Regression loss |  Thermal infrared face image | S|
| [SLNet](http://www.ee.cityu.edu.hk/~lmpo/publications/2019_ESA_SLNet.pdf)   | 2019 | 17-layer CNN | Binary CE loss |  Stereo (left&right) face images | S|
| [Aurora-Guard](https://arxiv.org/abs/1902.10311)   | 2019 | U-Net | Binary CE loss, Depth regression, Light regression |  Casted face with dynamically changing light specified by random light CAPTCHA | D|
| [LFC](http://www.ee.cityu.edu.hk/~lmpo/publications/2019_JEI_Face_Liveness.pdf)   | 2019 | AlexNet | Binary CE loss |  Ray difference/microlens images from light field camera | S|
| [PAAS](https://dl.acm.org/doi/10.1145/3441250.3441254)   | 2020 | MobileNetV2 | Contrastive loss, SVM |  Four-directional polarized face image | S|
| [Face-Revelio](https://dl.acm.org/doi/10.1145/3372224.3419206)   | 2020 | Siamese-AlexNet | L1 distance |  Four flash lights displayed on four quarters of a screen | D|
| [SpecDiff](https://arxiv.org/abs/1907.12400)   | 2020 | ResNet4 | Binary CE loss |  Concatenated face images w/ and w/o flash | S|
| [MC-PixBiS](https://arxiv.org/abs/2007.11469)   | 2020 | DenseNet161 | Binary mask loss |  SWIR image differences | S|
| [Thermalization](https://www.mdpi.com/1424-8220/20/14/3988)   | 2020 | YOLO V3+GoogLeNet | Binary CE loss |  Thermal infrared face image | S|
| [DP Bin-Cls-Net](https://ieeexplore.ieee.org/document/9248008)   | 2021 | Shallow U-Net + Xception | Transformation consistency, Relative disparity loss, Binary CE loss |  DP image pair | S|





<a name="multimodal" />

#### Multi-modal learning

| Method    | Year | Backbone | Loss |  Input | Fusion |
| --------   | -----    | -----  |  -----  | -----  | -----  |
| [FaceBagNet](https://openaccess.thecvf.com/content_CVPRW_2019/html/CFS/Shen_FaceBagNet_Bag-Of-Local-Features_Model_for_Multi-Modal_Face_Anti-Spoofing_CVPRW_2019_paper.html)   | 2019 | Multi-stream CNN | Binary CE loss |  RGB, Depth, NIR face patches | Feature-level|
| [FeatherNets](https://arxiv.org/abs/1904.09290)   | 2019 | Ensemble-FeatherNet | Binary CE loss |  Depth, NIR | Decision-level |
| [Attention](https://openaccess.thecvf.com/content_CVPRW_2019/html/CFS/Wang_Multi-Modal_Face_Presentation_Attack_Detection_via_Spatial_and_Channel_Attentions_CVPRW_2019_paper.html)   | 2019 | ResNet18 | Binary CE loss, Center loss |  RGB, Depth, NIR | Feature-level|
| [mmfCNN](https://dl.acm.org/doi/10.1145/3343031.3351001)   | ACMMM 2019 | ResNet34 | Binary CE loss, Binary Center loss | RGB, NIR, Depth, HSV, YCbCr | Feature-level|
| [MM-FAS](https://openaccess.thecvf.com/content_CVPRW_2019/papers/CFS/Parkin_Recognizing_Multi-Modal_Face_Spoofing_With_Face_Recognition_Networks_CVPRW_2019_paper.pdf)   | 2019 | ResNet18/50 | Binary CE loss |  RGB, NIR, Depth | Feature-level|
| [AEs+MLP](https://arxiv.org/abs/1907.04048)   | 2019 | Autoencoder, MLP | Binary CE loss, Reconstruction loss |  Grayscale-Depth-Infrared composition | Input-level|
| [SD-Net](https://ieeexplore.ieee.org/document/8995504/)   | 2019 | ResNet18 | Binary CE loss |  RGB, NIR, Depth | Feature-level|
| [Dual-modal](https://ieeexplore.ieee.org/document/8924988)   | 2019 | MobileNetV3 | Binary CE loss |  RGB, IR | Feature-level|
| [Parallel-CNN](https://iopscience.iop.org/article/10.1088/1742-6596/1549/4/042069)   | 2020 | Attentional CNN | Binary CE loss |  Depth, NIR | Feature-level|
| [Multi-Channel Detector](https://arxiv.org/abs/2006.16836)   | 2020 | RetinaNet (FPN+ResNet18) | Landmark regression, Focal loss |  Grayscale-Depth-Infrared composition | Input-level|
| [PSMM-Net](https://openaccess.thecvf.com/content/WACV2021/html/Liu_CASIA-SURF_CeFA_A_Benchmark_for_Multi-Modal_Cross-Ethnicity_Face_Anti-Spoofing_WACV_2021_paper.html)   | 2020 | ResNet18 | Binary CE loss for each stream |  RGB, Depth, NIR | Feature-level|
| [PipeNet](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w39/Yang_PipeNet_Selective_Modal_Pipeline_of_Fusion_Network_for_Multi-Modal_Face_CVPRW_2020_paper.pdf)   | 2020 | SENet154 | Binary CE loss |  RGB, Depth, NIR face patches | Feature-level|
| [MM-CDCN](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w39/Yu_Multi-Modal_Face_Anti-Spoofing_Based_on_Central_Difference_Networks_CVPRW_2020_paper.pdf)   | 2020 | CDCN | Pixel-wise binary loss, Contrastive depth loss |  RGB, Depth, NIR | Feature&Decision-level|
| [HGCNN](https://arxiv.org/abs/1811.11594)   | 2020 | Hypergraph-CNN, MLP | Binary CE loss |  RGB, Depth | Feature-level|
| [MCT-GAN](https://link.springer.com/article/10.1007/s11042-020-08952-0)   | 2020 | CycleGAN, ResNet50 | GAN loss, Binary CE loss |  RGB, NIR | Input-level|
| [D-M-Net](https://ieeexplore.ieee.org/document/9372969)   | 2021 | ResNeXt | Binary CE loss |  Multi-preprocessed Depth, RGB-NIR composition | Input&Feature-level|
| [MA-Net](https://ieeexplore.ieee.org/document/9374963)   | TIFS 2021 | CycleGAN, ResNet18 | Binary CE loss, GAN loss |  RGB, NIR | Feature-level|
| [AMT](https://arxiv.org/abs/2110.09108)   | TMM 2021 | Translator: shallow encoder+decoder + ResNet; Discriminator: DenseNet | BCE loss, Pixel-wise binary loss, Reconstruction loss |  Illumination-normalized RGB or NIR or thermal or Depth | Input-level|
| [CompreEval](https://arxiv.org/abs/2202.10286)   | 2022 | DenseNet-161 | BCE loss, Pixel-wise binary loss |  RGB, Depth, NIR, SWIR, Thermal | Input-level|
| [Conv-MLP](https://ieeexplore.ieee.org/document/9796574)   | TIFS 2022 | Conv-MLP | Binary CE loss, Moat loss |  RGB, Depth, NIR | Input-level|
| [Echo-FAS](https://ieeexplore.ieee.org/abstract/document/9868051)   | TIFS 2022 | ResNet18, Transformer | Binary CE loss |  RGB, Vocal | Feature-level|
| [AMA+M2A2E](https://arxiv.org/pdf/2302.05744.pdf)   | 2023 | ViT | BCE loss, Reconstruction loss for MAE |  RGB, Depth, IR | Feature-level|
| [SNM](https://ieeexplore.ieee.org/abstract/document/10176121)   | TIFS 2023 | ResNet18 | BCE loss, Center loss, Cosine loss |  Depth, IR | Feature-level|

<a name="flexmodal" />

#### Flexible-modal learning

| Method    | Year | Backbone | Loss |  Input | Fusion |
| --------   | -----    | -----  |  -----  | -----  | -----  |
| [CMFL](https://arxiv.org/abs/2103.00948)   | CVPR 2021 | DenseNet161 | Binary CE loss, Cross-modal focal loss |  RGB, Depth | Feature-level|
| [MA-ViT](https://www.ijcai.org/proceedings/2022/0165.pdf)   | IJCAI 2022 | ViT-S/16 | Binary CE loss on image and modality |  RGB, Depth, NIR | Input&Feature-level|
| [FlexModal-FAS](https://arxiv.org/abs/2202.08192)   | CVPRW 2023 | CDCN, ResNet50, ViT | BCE loss, Pixel-wise binary loss |  RGB, Depth, IR | Feature-level|
| [FM-ViT](https://arxiv.org/abs/2305.03277)   | TIFS 2023 | ViT | BCE loss for flexible-modal classification heads |  RGB, Depth, IR | Feature-level|

---

# 👏 Survey of Deep Face Anti-Spoofing 🔥

This is the official repository of "**[Deep Learning for Face Anti-Spoofing: A Survey](https://arxiv.org/abs/2106.14948)**", a comprehensive survey of recent advances in deep-learning methods, datasets, and evaluation protocols for face anti-spoofing (FAS).



### Citation
If you find our work useful in your research, please consider citing:

    @article{yu2022deep,
      title={Deep Learning for Face Anti-Spoofing: A Survey},
      author={Yu, Zitong and Qin, Yunxiao and Li, Xiaobai and Zhao, Chenxu and Lei, Zhen and Zhao, Guoying},
      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
      year={2022}
    }


## Introduction
We present a comprehensive review of recent deep-learning methods for face anti-spoofing (mostly from 2018 to 2022). It covers monocular RGB-based FAS, including hybrid (handcrafted + deep), pure deep-learning, and generalized learning methods. It also covers multi-modal learning methods as well as FAS built on specialized sensors, and gives a detailed comparison of publicly available datasets together with their classical evaluation protocols.

🔔 We will update this page frequently~ :tada::tada::tada:

---
## Contents

- [Datasets](#data)
  - [Using commercial RGB camera](#data_RGB)
  - [With multi-modal or specialized sensors](#data_Multimodal)
- [Deep FAS methods with commercial RGB camera](#methods_RGB)
  - [Hybrid (handcrafted + deep)](#hybrid)
  - [End-to-end binary cross-entropy supervision](#binary)
  - [Pixel-wise auxiliary supervision](#auxiliary)
  - [Generative model with pixel-wise supervision](#generative)
  - [Domain adaptation](#DA)
  - [Domain generalization](#DG)
  - [Zero/Few-shot learning](#zero-shot)
  - [Anomaly detection](#oneclass)
  - [Semi- & Self-supervision](#semiself)
  - [Continual learning](#CL)
- [Deep FAS methods with advanced sensor](#methods_advanced)
  - [Learning upon specialized sensor](#sensor)
  - [Multi-modal learning](#multimodal)
  - [Flexible-modal learning](#flexmodal)

---
  ![image](https://oss.gittoolsai.com/images/ZitongYu_DeepFAS_readme_13acccf18a75.png)   
  
---



<a name="data" />

### 1️⃣ Datasets

<a name="data_RGB" />

#### Datasets recorded with commercial RGB camera

| Dataset    | Year | #Live/Spoof | #Sub. | Setup | Attack Types |
| --------   | -----    | -----  |  -----  | ----- |------------------------|
| [NUAA](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.607.5449&rep=rep1&type=pdf)   | 2010 | 5105/7509(I) | 15 |  N/R | Print(flat, wrapped)|
| [YALE Recaptured](https://www.ic.unicamp.br/~rocha/pub/papers/2011-icip-spoofing-detection.pdf)   | 2011 | 640/1920(I) | 10 |  50cm distance from 3 LCD monitors | Print(flat) |
| [CASIA-MFSD](http://www.cbsr.ia.ac.cn/users/jjyan/ZHANG-ICB2012.pdf)   | 2012 | 150/450(V) | 50 |  7 scenarios and 3 image quality levels | Print(flat, wrapped, cut), Replay(tablet)|
| [REPLAY-ATTACK](http://publications.idiap.ch/downloads/papers/2012/Chingovska_IEEEBIOSIG2012_2012.pdf)   | 2012 | 200/1000(V) | 50 |  Lighting and holding variations | Print(flat), Replay(tablet, phone) |
| [Kose and Dugelay](https://ieeexplore.ieee.org/document/6595862)   | 2013 | 200/198(I) | 20 |  N/R | Mask(hard resin) |
| [MSU-MFSD](http://biometrics.cse.msu.edu/Publications/Face/WenHanJain_FaceSpoofDetection_TIFS15.pdf)   | 2014 | 70/210(V) | 35 |  Indoor scenario; 2 types of cameras | Print(flat), Replay(tablet, phone) |
| [UVAD](https://ieeexplore.ieee.org/document/7017526)   | 2015 | 808/16268(V) | 404 | Different lighting, backgrounds and places in two sections | Replay(monitor) |
| [REPLAY-Mobile](https://ieeexplore.ieee.org/document/7736936)   | 2016 | 390/640(V) | 40 |  5 lighting conditions | Print(flat), Replay(monitor) |
| [HKBU-MARs V2](https://link.springer.com/chapter/10.1007/978-3-319-46478-7_6)   | 2016 | 504/504(V) | 12 | 7 cameras from stationary and mobile devices, and 6 lighting settings | Mask(hard resin) from Thatsmyface and REAL-f |
| [MSU USSA](https://ieeexplore.ieee.org/document/7487030)   | 2016 | 1140/9120(I) | 1140 |  Uncontrolled conditions; 2 types of cameras | Print(flat), Replay(laptop, tablet, phone)|
| [SMAD](https://ieeexplore.ieee.org/document/7867821)   | 2017 | 65/65(V) | - |  Color images from online resources | Mask(silicone) |
| [OULU-NPU](https://ieeexplore.ieee.org/document/7961798)   | 2017 | 720/2880(V) | 55 |  3 sections with varied lighting and background | Print(flat), Replay(phone) |
| [Rose-Youtu](https://ieeexplore.ieee.org/document/8279564)   | 2018 | 500/2850(V) | 20 | 5 front-facing phone cameras; 5 different illumination conditions | Print(flat), Replay(monitor, laptop), Mask(paper, cropped paper) |
| [SiW](https://arxiv.org/abs/1803.11097)   | 2018 | 1320/3300(V) | 165 |  4 sessions with variations of distance, pose, illumination and expression | Print(flat, wrapped), Replay(phone, tablet, monitor)|
| [WFFD](https://arxiv.org/abs/2005.06514)   | 2019 | 2300/2300(I) 140/145(V) | 745 |  Collected online; super-realistic; low-quality faces removed | Waxworks(wax)|
| [SiW-M](https://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Deep_Tree_Learning_for_Zero-Shot_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | 2019 | 660/968(V) | 493 |  Indoor environment with pose, lighting and expression variations | Print(flat), Replay, Mask(hard resin, plastic, silicone, paper, mannequin), Makeup(cosmetics, impersonation, obfuscation), Partial(glasses, cut paper)|
| [Swax](https://arxiv.org/abs/1910.09642)   | 2020 | Total 1812(I) 110(V) | 55 |  Collected online; captured under uncontrolled scenarios | Waxworks(wax)|
| [CelebA-Spoof](https://link.springer.com/chapter/10.1007/978-3-030-58610-2_5)   | 2020 | 156384/469153(I) | 10177 |  4 illumination conditions; indoor & outdoor; rich annotations | Print(flat, wrapped), Replay(monitor, tablet, phone), Mask(paper) |
| [RECOD-Mtablet](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238058)   | 2020 | 450/1800(V) | 45 |  Outdoor environment with low-light and dynamic sessions | Print(flat), Replay(monitor) |
| [CASIA-SURF 3DMask](https://ieeexplore.ieee.org/document/9252183)   | 2020 | 288/864(V) | 48 |  High-quality and identity-preserved; 3 decorations and 6 environments | Mask(3D-printed mannequin) |
| [HiFiMask](https://arxiv.org/abs/2104.06148)   | 2021 | 13650/40950(V) | 75 |  3 mask decorations; 7 recording devices; 6 lighting conditions; 6 scenes | Mask(transparent, plaster, resin)|
| [SiW-M v2](https://github.com/CHELSEA234/Multi-domain-learning-FAS)   | 2022 | 785/915 (V) | 1093(493/600) |  Indoor and outdoor; diverse ages and ethnicities; 7 lighting conditions | 14 IAPRA-verified spoof attack types (4 covers, 3 makeups, 3 masks, 2 mannequins, replay and print)|
| [SuHiFiMask](https://arxiv.org/abs/2301.00975)   | 2022 | 10195/10195 (V) | 101 |  Remote surveillance cameras in 3 scenes, with 3 lighting and 4 weather conditions | 2D image, video replay, 3D masks made of resin, plaster, silicone and paper|
| [WFAS](https://arxiv.org/abs/2304.05753)  | 2023 |  529,571/ 853,729 (I) | 469,920 |  Internet, unconstrained environments  | 17 spoof types, including Print(newspaper, poster, photo, album, picture book, scanned photo, wrapped, cloth), Display(phone, tablet, TV, computer), Mask, 3D model(assembled kit, doll, adult doll, waxwork)|


<a name="data_Multimodal" />

#### Datasets with multi-modal data or specialized sensors

| Dataset    | Year | #Live/Spoof | #Sub. | Modality & Hardware | Setup | Attack Types |
| --------   | -----    | -----  |  -----  | -----  | -----  |------------------------|
| [3DMAD](https://ieeexplore.ieee.org/document/6810829)   | 2013 | 170/85(V) | 17 |  VIS, Depth | 3 sessions (2 weeks interval) | Mask(paper, hard resin)|
| [GUC-LiFFAD](https://ieeexplore.ieee.org/document/7018027)   | 2015 | 1798/3028(V) | 80 |  Light field | Constrained setting at 1.5m distance | Print(inkjet paper, laser paper), Replay(tablet)|
| [3DFS-DB](https://www.researchgate.net/publication/277905873_Three-dimensional_and_two-and-a-half-dimensional_face_recognition_spoofing_using_three-dimensional_printed_models)   | 2016 | 260/260(V) | 26 |  VIS, Depth | Head movement with rich angles | Mask(plastic)|
| [BRSU Skin/Face/Spoof](https://ieeexplore.ieee.org/document/7550052)   | 2016 | 102/404(I) | 137 |  VIS, SWIR | Multispectral SWIR with 4 wavebands: 935nm, 1060nm, 1300nm and 1550nm | Mask(silicone, plastic, resin, latex)|
| [Msspoof](https://link.springer.com/chapter/10.1007/978-3-319-28501-6_8)   | 2016 | 1470/3024(I) | 21 |  VIS, NIR | 7 environmental conditions | Black&white Print(flat)|
| [MLFP](https://ieeexplore.ieee.org/document/8014774)   | 2017 | 150/1200(V) | 10 |  VIS, NIR, Thermal | Indoor and outdoor with fixed or random backgrounds | Mask(latex, paper)|
| [ERPA](https://www.researchgate.net/publication/320177829_What_You_Can't_See_Can_Help_You_-_Extended-Range_Imaging_for_3D-Mask_Presentation_Attack_Detection)   | 2017 | Total 86(V) | 5 |  VIS, Depth, NIR, Thermal | Subjects close to the 2 types of cameras (0.3~0.5m) | Print(flat), Replay(monitor), Mask(resin, silicone)|
| [LF-SAD](http://www.ee.cityu.edu.hk/~lmpo/publications/2019_JEI_Face_Liveness.pdf)   | 2018 | 328/596(I) | 50 |  Light field | Indoor fixed background, captured by a Lytro ILLUM camera | Print(flat, wrapped), Replay(monitor)|
| [CSMAD](https://ieeexplore.ieee.org/document/8698550)   | 2018 | 104/159(V+I) | 14 |  VIS, Depth, NIR, Thermal | 4 lighting conditions | Mask(custom silicone)|
| [3DMA](https://ieeexplore.ieee.org/document/8909845)   | 2019 | 536/384(V) | 67 |  VIS, NIR | 48 masks with different IDs; 2 illumination conditions and 4 capturing distances | Mask(plastic)|
| [CASIA-SURF](https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_A_Dataset_and_Benchmark_for_Large-Scale_Multi-Modal_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | 2019 | 3000/18000(V) | 1000 |  VIS, Depth, NIR | Background removed; randomly cropped eye, nose or mouth areas | Print(flat, wrapped, cut)|
| [WMCA](https://ieeexplore.ieee.org/document/8714076)   | 2019 | 347/1332(V) | 72 |  VIS, Depth, NIR, Thermal | 6 sessions with different backgrounds and illumination; pulse data recorded for bona fide samples | Print(flat), Replay(tablet), Partial(glasses), Mask(plastic, silicone, paper, mannequin)|
| [CeFA](https://openaccess.thecvf.com/content/WACV2021/html/Liu_CASIA-SURF_CeFA_A_Benchmark_for_Multi-Modal_Cross-Ethnicity_Face_Anti-Spoofing_WACV_2021_paper.html)   | 2020 | 6300/27900(V) | 1607 |  VIS, Depth, NIR | 3 ethnicities; indoor & outdoor; decorations with wigs and glasses | Print(flat, wrapped), Replay, Mask(3D print, silicone)|
| [HQ-WMCA](https://ieeexplore.ieee.org/abstract/document/9146362)   | 2020 | 555/2349(V) | 51 | VIS, Depth, NIR, SWIR, Thermal | Indoor; 14 "modalities", including 4 NIR and 7 SWIR wavelengths; masks and mannequins heated up to body temperature | Laser or inkjet Print(flat), Replay(tablet, phone), Mask(plastic, silicone, paper, mannequin), Makeup, Partial(glasses, wigs, tattoo)|
| [PADISI-Face](https://arxiv.org/pdf/2108.12081.pdf)   | 2021 | 1105/924(V) | 360 | VIS, Depth, NIR, SWIR, Thermal | Indoor, fixed background, 60-frame sequences of 1984×1264-pixel images | Print(flat), Replay(tablet, phone), Mask(plastic, silicone, transparent, mannequin), Makeup/tattoo, Partial(glasses, funny-eye glasses)|



---
<a name="methods_RGB" />



### 2️⃣ Deep FAS methods with commercial RGB camera

<a name="hybrid" />

#### Hybrid (handcrafted + deep)

| Method    | Year | Backbone | Loss | Input | Static/Dynamic |
| --------   | -----    | -----  |  -----  | -----  | -----  |
| [DPCNN](https://ieeexplore.ieee.org/document/7821013)   | 2016 | VGG-Face | Trained with SVM |  RGB | S|
| [Multi-cues+NN](https://www.sciencedirect.com/science/article/pii/S1047320316300244)   | 2016 | MLP | Binary CE loss |  RGB+OFM | D|
| [CNN LBP-TOP](https://ieeexplore.ieee.org/document/7984552)   | 2017 | 5-layer CNN | Binary CE loss, SVM |  RGB | D|
| [DF-MSLBP](https://arxiv.org/abs/1910.03850)   | 2018 | Deep forest | Binary CE loss |  HSV+YCbCr | S|
| [SPMT+SSD](https://www.sciencedirect.com/science/article/pii/S0031320318303182)   | 2018 | VGG16 | Binary CE loss, SVM, bounding box regression |  RGB, landmarks | S|
| [CHIF](http://iab-rubric.org/papers/Manjani-DDLSpoofing.pdf)   | 2019 | VGG-Face | Trained with SVM |  RGB | S|
[DeepLBP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8296251)   | 2019 | VGG-Face | 二分类交叉熵损失、SVM |  RGB、HSV、YCbCr | S|\n| [CNN+LBP+WLD](https:\u002F\u002Fdigital-library.theiet.org\u002Fcontent\u002Fjournals\u002F10.1049\u002Fiet-ipr.2018.5560)   | 2019 | CaffeNet | 二分类交叉熵损失 |  RGB | S|\n| [Intrinsic](https:\u002F\u002Fonlinelibrary.wiley.com\u002Fdoi\u002F10.1049\u002Fiet-bmt.2019.0155)   | 2019 | 1D-CNN | 使用SVM训练 |  反射信号 | D|\n| [FARCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8911314)   | 2019 | 多尺度注意力CNN | 回归损失、Crystal损失、Center损失 |  RGB | S|\n| [CNN-LSP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8626161)   | TIFS 2019 | 1D-CNN | 使用SVM训练 |  RGB | D |\n| [DT-Mask](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8453011)   | 2019 | VGG16 | 二分类交叉熵损失、通道与空间可区分性 |  RGB+OF | D |\n| [VGG+LBP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8955089)   | 2019 | VGG16 | 二分类交叉熵损失 |  RGB | S|\n| [CNN+OVLBP](http:\u002F\u002Fwww.mecs-press.org\u002Fijigsp\u002Fijigsp-v11-n2\u002FIJIGSP-V11-N2-2.pdf)   | 2019 | VGG16 | 二分类交叉熵损失、神经网络分类器 |  RGB | S|\n| [HOG-Pert.](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-20005-3_1)   | 2019 | 多尺度CNN | 二分类交叉熵损失 |  RGB+HOG | S|\n| [LBP-Pert.](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0262885619304512)   | 2020 | 多尺度CNN | 二分类交叉熵损失 |  RGB+LBP | S|\n| [TransRPPG](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9460762)   | SPL 2021 | Vision Transformer | 二分类交叉熵损失 |  rPPG图 | D |\n\n\n\n\u003Ca name=\"binary\" \u002F>\n\n#### 端到端二元交叉熵监督\n| 方法    | 年份 | 主干网络 | 损失函数 | 输入 | 静态\u002F动态 |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [CNN1](https:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5601)   | 2014 | 8层CNN | 使用SVM训练 |  RGB | S|\n| [LSTM-CNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7486482)   | 2015 | CNN+LSTM | 二元交叉熵损失 |  RGB | D|\n| 
[SpoofNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1410.1980)   | 2015 | 2层CNN | 二元交叉熵损失 |  RGB | S|\n| [HybridCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8253209)   | 2017 | VGG-Face | 使用SVM训练 |  RGB | S|\n| [CNN2](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.04176)   | 2017 | VGG11 | 二元交叉熵损失 |  RGB | S|\n| [Ultra-Deep](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-70096-0_70)   | 2017 | ResNet50+LSTM | 二元交叉熵损失 |  RGB | D|\n| [FASNet](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-319-59876-5_4)   | 2017 | VGG16 | 二元交叉熵损失 |  RGB | S|\n| [CNN3](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8166863)   | 2018 | Inception、ResNet | 二元交叉熵损失 |  RGB | S|\n| [MILHP](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2018\u002F0113.pdf)   | 2018 | ResNet+STN | 多实例交叉熵损失 |  RGB | D|\n| [LSCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8614337)   | 2018 | 9个PatchNet | 二元交叉熵损失 |  RGB | S|\n| [LiveNet](http:\u002F\u002Fwww.ee.cityu.edu.hk\u002F~lmpo\u002Fpublications\u002F2018_ESA_LiveNet.pdf)   | 2018 | VGG11 | 二元交叉熵损失 |  RGB | S|\n| [MS-FANS ](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8546026)   | 2018 | AlexNet+LSTM | 二元交叉熵损失 |  RGB | S|\n| [DeepColorFAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8616677)   | 2018 | 5层CNN | 二元交叉熵损失 |  RGB、HSV、YCbCr | S|\n| [Siamese](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-31654-9_15)   | 2019 | AlexNet | 对比损失 |  RGB | S|\n| [FSBuster](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.02845)   | 2019 | ResNet50 | 使用SVM训练 |  RGB | S|\n| [FuseDNG](http:\u002F\u002Fwww.ee.cityu.edu.hk\u002F~lmpo\u002Fpublications\u002F2019_VComm_Face_Liveness)   | 2019 | 7层CNN | 二元交叉熵损失、重建损失 |  RGB | S|\n| [STASN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FYang_Face_Anti-Spoofing_Model_Matters_so_Does_Data_CVPR_2019_paper.pdf)   | CVPR 2019 | 
ResNet50+LSTM | 二元交叉熵损失 |  RGB | D|\n| [TSCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8737949)   | TIFS 2019 | ResNet18 | 二元交叉熵损失 |  RGB、MSR | S|\n| [FAS-UCM](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.07270)   | 2019 | MobileNetV2、VGG19 | 二元交叉熵损失、风格损失 |  RGB | S|\n| [SLRNN](https:\u002F\u002Fbmvc2019.org\u002Fwp-content\u002Fuploads\u002Fpapers\u002F0973-paper.pdf)   | 2019 | ResNet50+LSTM | 二元交叉熵损失 |  RGB | D|\n| [GFA-CNN](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3402446)   | 2019 | VGG16 | 二元交叉熵损失 |  RGB | S|\n| [3DSynthesis](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8987415)   | 2019 | ResNet15 | 二元交叉熵损失 |  RGB | S|\n| [CompactNet](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0925231220308237?dgcid=rss_sd_all&utm_source=researcher_app&utm_medium=referral&utm_campaign=RESR_MRKT_Researcher_inbound)   | NC 2020 | VGG19 | 点到中心三元组损失 |  RGB | S|\n| [SSR-FCN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9218954)   | TIFS 2020 | 具有6层的FCN | 二元交叉熵损失 |  RGB | S|\n| [FasTCo](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.06756)   | 2020 | ResNet50或MobileNetV2 | 多分类交叉熵损失、时间一致性损失、类别一致性损失 |  RGB | D|\n| [DRL-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9205636)   | TIFS 2020 | ResNet18+GRU | 二元交叉熵损失 |  RGB | S|\n| [SfSNet](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9068268)   | 2020 | 6层CNN | 二元交叉熵损失 |  反照率、深度、反射 | S|\n| [LivenesSlight](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1801.01949.pdf)   | 2020 | 6层CNN | 二元交叉熵损失 |  RGB | S|\n| [MotionEnhancement](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9203944)   | 2020 | VGGface+LSTM | 二元交叉熵损失 |  RGB | D|\n| [CFSA-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9175520)   | 2020 | ResNet18 | 二元交叉熵损失 |  RGB | S|\n| [MC-FBC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.06514)   | 2020 | VGG16、ResNet50 | 二元交叉熵损失 |  RGB | S|\n| 
[SimpleNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16028)   | 2020 | 多流5层CNN | 二元交叉熵损失 |  RGB、光流、反射 | D|\n| [PatchCNN](https:\u002F\u002Fjournals.plos.org\u002Fplosone\u002Farticle?id=10.1371\u002Fjournal.pone.0238058)   | 2020 | SqueezeNet v1.1 | 二元交叉熵损失、三元组损失 |  RGB | S|\n| [FreqSpatialTempNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03723)   | 2020 | ResNet18 | 二元交叉熵损失 |  RGB、HSV、光谱 | D|\n| [ViTranZFAS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.08019)   | IJCB 2021 | ViT | 二元交叉熵损失 |  RGB | S|\n| [CIFL](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9336714)   | TIFS 2021 | ResNet18 | 二元焦点损失、相机类型损失 |  RGB | S|\n| [XFace-PAD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.04862)   | FG 2021 | ResNet50、ViT | 二元交叉熵损失、逐词交叉熵损失、句子判别损失以及句子语义损失 |  RGB | S|\n| [PCGN](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475305)   | MM 2021 | ResNet101+GCN | 节点和边的交叉熵损失 |  RGB整图 | S|\n| [TOD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11046)   | 2021 | ResNet18、图注意力网络 | 交叉熵损失 |  RGB  | S|\n| [MTSS](https:\u002F\u002Fwww.bmvc2021-virtualconference.com\u002Fassets\u002Fpapers\u002F0113.pdf)   | BMVC 2021 | ViT+多级注意力模块 | 交叉熵损失 |  RGB  | S|\n| [PatchNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14325)   | CVPR 2022 | ResNet18 | 非对称AM-Softmax损失、自监督相似性损失 |  RGB补丁 | S|\n| [ViTransPAD](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.01562.pdf)   | ICIP 2022 | EfficientNet + VideoViT | 交叉熵损失 |  RGB | D|\n| [FGDNet](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9946402)   | TMM 2022 | 卷积Transformer | 5类交叉熵损失 |  RGB | S|\n\n\u003Ca name=\"auxiliary\" \u002F>\n\n#### 像素级辅助监督\n\n| 方法    | 年份 | 监督信号 | 主干网络 | 输入 | 静态\u002F动态 |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [Depth&Patch](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8272713\u002F)   | IJCB 2017 | 深度 | PatchNet, DepthNet |  YCbCr, HSV | S|\n| 
[Auxiliary](http:\u002F\u002Fcvlab.cse.msu.edu\u002Fpdfs\u002FLiu_Jourabloo_Liu_CVPR2018.pdf)   | CVPR 2018 | 深度, rPPG频谱 | DepthNet |  RGB, HSV | D|\n| [BASN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCVW_2019\u002Fpapers\u002FDFW\u002FKim_BASN_Enriching_Feature_Representation_Using_Bipartite_Auxiliary_Supervisions_for_Face_ICCVW_2019_paper.pdf)   | ICCVW 2019 | 深度, 反射 | DepthNet, Enrichment |  RGB, HSV | S|\n| [DTN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLiu_Deep_Tree_Learning_for_Zero-Shot_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | CVPR 2019 | BinaryMask | 树形网络 |  RGB, HSV | S|\n| [PixBiS](http:\u002F\u002Fpublications.idiap.ch\u002Fdownloads\u002Fpapers\u002F2019\u002FGeorge_ICB2019.pdf)   | ICB 2019 | BinaryMask | DenseNet161 |  RGB | S|\n| [A-PixBiS](http:\u002F\u002Fwww.dicta2020.org\u002Fwp-content\u002Fuploads\u002F2020\u002F09\u002F53_CameraReady.pdf)   | 2020 | BinaryMask | DenseNet161 |  RGB | S|\n| [Auto-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9053587)   | ICASSP 2020 | BinaryMask | NAS |  RGB | S|\n| [MRCNN](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0167865520300015)   | 2020 | BinaryMask | 浅层CNN |  RGB | S|\n| [FCN-LSA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9056475)   | 2020 | BinaryMask | DepthNet |  RGB | S|\n| [CDCN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FYu_Searching_Central_Difference_Convolutional_Networks_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)   | CVPR 2020 | 深度 | DepthNet |  RGB | S|\n| [FAS-SGTD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.08061)   | CVPR 2020 | 深度 | DepthNet, STPM |  RGB | D|\n| [TS-FEN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9054115)   | 2020 | 深度 | ResNet34, FCN |  RGB, YCbCr, HSV | S|\n| [SAPLC](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9056824)   | 2020 | 三元图 | DepthNet |  RGB, HSV | S|\n| 
[BCN](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123520545.pdf)   | ECCV 2020 | BinaryMask, 深度, 反射 | DepthNet |  RGB | S|\n| [Disentangled](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123640630.pdf)   | ECCV 2020 | 深度, 纹理图 | DepthNet |  RGB | S|\n| [AENet](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58610-2_5)   | ECCV 2020 | 深度, 反射 | ResNet18 |  RGB | S|\n| [3DPC-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9304873)   | IJCB 2020 | 3D点云 | ResNet18 |  RGB | S|\n| [PS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9375488)   | TBIOM 2020 | BinaryMask或深度 | ResNet50或CDCN |  RGB | S|\n| [NAS-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=9252183)   | PAMI 2020 | BinaryMask或深度 | NAS |  RGB | D|\n| [DAM](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9382387)   | 2021 | 深度 | VGG16, TSM |  RGB | D|\n| [Bi-FPNFAS](https:\u002F\u002Fwww.mdpi.com\u002F1424-8220\u002F21\u002F8\u002F2799)   | 2021 | 傅里叶频谱 | EfficientNetB0, FPN |  RGB | S|\n| [DC-CDN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.01290)   | IJCAI 2021 | 深度 | CDCN |  RGB | S|\n| [DCN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.10628.pdf)   | IJCB 2021 | 反射 | DepthNet |  RGB | S|\n| [LMFD-PAD](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.07950.pdf)   | 2021 | BinaryMask | Dual-ResNet50 |  RGB + 频率图 | S|\n| [MPFLN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021W\u002FHTCV\u002Fpapers\u002FWang_Multi-Perspective_Features_Learning_for_Face_Anti-Spoofing_ICCVW_2021_paper.pdf)   | ICCVW 2021 | 深度, BinaryMask | CDCN, 3D-CDCN |  RGB | S, D|\n| [DSDG+DUM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.00568)   | TIFS 2021 | 深度 | CDCN |  RGB | S|\n| [SAFPAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9650907)   | TIFS 2021 | 深度 | DepthNet |  RGB & 补丁 | S|\n| 
[EPCR](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.12320.pdf)   | 2021 | BinaryMask | CDCN |  RGB | S|\n| [AISL](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0167865521004384)   | PRL 2021 | 深度 | DepthNet |  RGB | S|\n| [MEGC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.10187)   | ICASSP 2022 | 深度, 反射, 莫尔条纹, 边界 | DepthNet+特征增强 |  RGB, HSV | S|\n| [EulerNet](http:\u002F\u002Fksiresearch.org\u002Fseke\u002Fseke22paper\u002Fpaper076.pdf)   | 2022 | 人脸位置图 | EulerNet结合时间注意力、残差金字塔 |  RGB | D|\n| [TTN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9730902)   | TIFS 2022 | 深度 | ViT结合金字塔时间聚合、时间差注意力 |  RGB | D|\n| [TransFAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9817442)   | TBIOM 2022 | 深度 | ViT结合跨层注意力 |  RGB | S|\n| [DepthSeg](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9892826)   | IJCNN 2022 | 深度 | PSPNet, DeepLabv3+ |  RGB | S|\n\n\n\u003Ca name=\"generative\" \u002F>\n\n#### 基于像素级监督的生成模型\n\n| 方法    | 年份 | 监督信号 | 主干网络 | 输入 | 静态\u002F动态 |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [De-Spoof](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.09968)   | ECCV 2018 | 深度, BinaryMask, 傅里叶图 | DSNet, DepthNet |  RGB, HSV | S|\n| [Reconstruction](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=8997504)   | 2019 | 实人RGB输入，欺骗零矩阵 | U-Net |  RGB | S|\n| [LGSC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.03922)   | 2020 | 实人零矩阵 | U-Net, ResNet18 |  RGB | S|\n| [TAE](http:\u002F\u002Fpublications.idiap.ch\u002Fdownloads\u002Fpapers\u002F2020\u002FMohammadi_InfoVAE_ICASSP_2020.pdf)   | ICASSP 2020 | 二分类交叉熵损失、重建损失 | Info-VAE, DenseNet161 |  RGB | S|\n| [STDN](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123630392.pdf)   | ECCV 2020 | BinaryMask, 实人RGB输入 | U-Net, PatchGAN |  RGB | S|\n| 
[GOGen](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FStehouwer_Noise_Modeling_Synthesis_and_Classification_for_Generic_Object_Anti-Spoofing_CVPR_2020_paper.pdf)   | CVPR 2020 | RGB输入 |  DepthNet |  RGB+独热向量 | S|\n| [PhySTD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.05185)   | PAMI 2022 | 深度, 实人RGB输入 |  U-Net, PatchGAN |  频率轨迹 | S|\n| [MT-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9462562)   | PAMI 2021 | 实人零矩阵，可学习欺骗图 |  DepthNet |  RGB | S|\n| [IF-OM](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.04100.pdf)   | 2021 | RGB输入，混合输入特征 |  MobileNetV2 + UNet |  RGB, 混合RGB, 折叠RGB | S|\n| [Dual-Stage Disentanglement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.09157)   | WACV 2021 | 实人零矩阵，用于重建的RGB输入 | U-Net, ResNet18 |  RGB | S|\n\n\n\u003Ca name=\"DA\" \u002F>\n\n#### 域适应\n\n| 方法    | 年份 | 主干网络 | 损失函数 | 静态\u002F动态 |\n| --------   | -----    | -----  |  -----  | -----  | \n| [OR-DA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8279564)   | TIFS 2018 | AlexNet | 二值交叉熵损失、MMD损失 |  S|\n| [DTCNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.05633)   | 2019 | AlexNet | 二值交叉熵损失、MMD损失 |  S|\n| [Adversarial](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8987254)   | ICB 2019 | ResNet18 | 三元组损失、对抗损失 |  S|\n| [ML-MMD](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8795006)   | ICMEW 2019 | 多尺度FCN | 交叉熵损失、MMD损失（利用未标记数据集） |  S|\n| [OCA-FAS](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0925231220313540)   | NC 2020 | DepthNet | 二值交叉熵损失、像素级二值损失 |  S|\n| [DR-UDA](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9116802)   | TIFS 2020 | ResNet18 | 中心点&三元组损失、对抗损失、解耦损失 |  S|\n| [DGP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9053685)   | ICASSP 2020 | DenseNet161 | 特征散度度量、BinaryMask损失 |  S|\n| 
[Distillation](https:\u002F\u002Fsignalprocessingsociety.org\u002Fpublications-resources\u002Fieee-journal-selected-topics-signal-processing\u002Fface-anti-spoofing-deep-neural)   | J-STSP 2020 | AlexNet | 二值交叉熵损失、MMD损失、成对相似性 |  S|\n| [SASA](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.14162.pdf)   | 2021 | ResNet18 | 交叉熵损失、对抗损失、遗忘约束、对比语义对齐 |  S|\n| [GDA](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10015)   |ECCV 2022 | DepthNet | 交叉熵损失、深度损失、域间神经统计一致性、相位一致性、感知损失 |  S|\n| [CDFTN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.03651)   |AAAI 2023 | ResNet18 | 交叉熵损失、重建损失、三元组损失 |  S|\n\n\n\n\u003Ca name=\"DG\" \u002F>\n\n#### 域泛化\n\n\n| 方法    | 年份 | 主干网络 | 损失函数 | 静态\u002F动态 |\n| --------   | -----    | -----  |  -----  | -----  | \n| [MADDG](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FShao_Multi-Adversarial_Discriminative_Deep_Domain_Generalization_for_Face_Presentation_Attack_Detection_CVPR_2019_paper.pdf)   | CVPR 2019 | DepthNet | 二值交叉熵与深度损失、多对抗损失、双力三元组损失 |  S|\n| [PAD-GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.01959)   | CVPR 2020 | ResNet18 | 二值交叉熵与深度损失、多对抗损失、双力三元组损失 |  S|\n| [DASN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9423958)   | 2020 | ResNet18 | 二值交叉熵与欺骗无关因素损失 |  S|\n| [SSDG](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FJia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)   | CVPR 2020  | ResNet18 | 二值交叉熵损失、单侧对抗损失、非对称三元组损失 |  S|\n| [RF-Meta](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.10771)   | AAAI 2020 | DepthNet | 二值交叉熵损失、深度损失 |  S|\n| [CCDD](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw48\u002FSaha_Domain_Agnostic_Feature_Learning_for_Image_and_Video_Based_Face_CVPRW_2020_paper.pdf)   | CVPRW 2020 | ResNet50+LSTM | 二值交叉熵损失、类条件损失 |  D|\n| [SDA](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.12129)   |  AAAI 2021 | DepthNet | 二值交叉熵与深度损失、重建损失、正交性正则化 |  S|\n| 
[D2AM](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F16199)   |AAAI 2021 | DepthNet | 二值交叉熵损失、深度损失、MMD损失 |  S|\n| [DRDG](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.16128.pdf)   |  IJCAI 2021 | DepthNet | 二值交叉熵损失、深度损失、域损失 |  S|\n| [PDL-FAS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.06552.pdf)   |  2021 | DepthNet | 二值交叉熵损失、深度损失 |  S|\n| [ANRL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.02667)   | ACMMM 2021 | DepthNet | 二值交叉熵损失、深度损失、域间兼容损失、类间可分损失 |  S|\n| [HFN+MP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06753)   | 2021 | 双流ResNet50 | 二值交叉熵损失、均方误差损失 |  S|\n| [SDFANet](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9600829)   | TIFS 2021 | ResNet-18 | BCE损失 + 多粒度损失 + 中心点损失 + 非对称三元组损失  |  S|\n| [VLAD-VSA](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3474085.3475284)   | ACMMM 2021 | DepthNet或 ResNet18 | BCE损失 + 三元组损失 + 域对抗损失 + 正交损失 + 质心适应损失 + 内部损失  |  S|\n| [FGHV](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.14894)   | AAAI 2022 | DepthNet | 方差 + 相对相关 + 分布鉴别约束  |  S|\n| [SSAN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.05340.pdf)   | CVPR 2022 | DepthNet\u002FResNet18 | CE损失 + 域对抗损失 + 对比损失  |  S|\n| [AMEL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.09868)   | ACMMM 2022 | DepthNet | CE损失、深度损失、特征一致性损失  |  S|\n| [MD-FAS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.11148.pdf)   | ECCV 2022 | PhySTD | CE损失、二值掩膜损失、源域与目标域蒸馏损失  |  S|\n| [FRT-PAD](https:\u002F\u002Fwentianzhang-ml.github.io\u002Fpad)   | ECCV 2022 | ResNet18+GAT | CE损失  |  S|\n| [CIFAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9859783)   | ICME 2022 | ResNet18 | CE损失、三元组损失  |  S|\n| [OneSideTriplet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.15955.pdf)   | FG 2023 | DepthNet+UNet | CE损失、三元组损失、深度损失、分割损失  |  S|\n| 
[DiVT](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2023\u002Fpapers\u002FLiao_Domain_Invariant_Vision_Transformer_Learning_for_Face_Anti-Spoofing_WACV_2023_paper.pdf)   | WACV 2023 |  MobileViT-S | 域不变集中和攻击分离损失  |  S|\n| [ALDICF](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-023-01778-x)   | IJCV 2023 |  ResNet18, ResNet50 | 域内与域间鉴别损失、条件域对抗损失   |  S|\n| [DKG+CSA+AIAW](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05640)   | CVPR 2023 |  DepthNet | BCE损失、深度损失、非对称实例自适应白化损失   |  S|\n| [SA-FAS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.13662)   | CVPR 2023 |  ResNet18 | 对比损失、对齐损失   |  S|\n| [SPDA](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10095730)   | ICASSP 2023 |  ResNet18 | BCE损失、域损失、自步聚类挖掘损失、正交损失   |  S|\n| [CRFAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10095329)   | ICASSP 2023 |  ResNet18 | BCE损失、域损失、非对称三元组损失、反事实特征生成损失   |  S|\n\n\u003Ca name=\"zero-shot\" \u002F>\n\n#### 零\u002F少样本学习\n\n| 方法    | 年份 | 主干网络 | 损失函数 | 输入 |\n| --------   | -----    | -----  |  -----  | -----  | \n| [DTN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLiu_Deep_Tree_Learning_for_Zero-Shot_Face_Anti-Spoofing_CVPR_2019_paper.pdf)   | CVPR 2019 | 深度树网络 | 二元交叉熵损失、像素级二元损失、无监督树损失 |  RGB, HSV|\n| [AIM-FAS](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F6866)   | AAAI 2020 | DepthNet | 深度损失、对比深度损失 |  RGB |\n| [CM-PAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9304920)   | IJCB 2021 | DepthNet, ResNet | 二元交叉熵损失、深度损失、梯度对齐 |  RGB|\n| [ViTAF](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.12175)   | ECCV 2022 | ViT+适配器 | 交叉熵损失、余弦损失 |  RGB |\n\n\n\n\n\u003Ca name=\"oneclass\" \u002F>\n\n#### 异常检测\n\n\n| 方法    | 年份 | 主干网络 | 损失函数 | 输入 |\n| --------   | -----    | -----  |  -----  | -----  | \n| 
[AE+LBP](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8698574)   | 2018 | 自编码器 | 重建损失 |  RGB|\n| [Anomaly](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FCFS\u002FPerez-Cabo_Deep_Anomaly_Detection_for_Generalized_Face_Anti-Spoofing_CVPRW_2019_paper.pdf)   | 2019 | ResNet50 | 三元组焦点损失、度量-Softmax损失 |  RGB|\n| [Anomaly2](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8682253)   | 2019 | GoogLeNet 或 ResNet50 | 马氏距离 |  RGB|\n| [Hypersphere](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F338920244_UNSEEN_FACE_PRESENTATION_ATTACK_DETECTION_WITH_HYPERSPHERE_LOSS)   | 2020 | ResNet18 | 超球体损失 |  RGB, HSV |\n| [Ensemble-Anomaly](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9190814)   | 2020 | GoogLeNet 或 ResNet50 | 高斯混合模型（非端到端） |  RGB, 图块|\n| [MCCNN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9153044)   | 2020 | LightCNN | 二元交叉熵损失、对比损失 |  灰度、红外、深度、热成像|\n| [End2End-Anomaly](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.05856)   | 2020 | VGG-Face | 二元交叉熵损失、成对混淆 |  RGB|\n| [ClientAnomaly](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0031320320304994)   | PR 2020 | ResNet50 或 GoogLeNet 或 VGG16 | 单类SVM或马氏距离或高斯混合模型 |  RGB|\n| [ContrastiveEVT](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3474085.3475538)   | ACM MM 2021 | cVAE | 二元交叉熵损失、重建损失、对比损失|  RGB|\n| [OneClassKD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.03792)   | TIFS 2022 | DepthNet | 像素级二元交叉熵损失、多级知识蒸馏损失|  RGB|\n\n\n\u003Ca name=\"semiself\" \u002F>\n\n#### 半监督与自监督\n\n\n| 方法    | 年份 | 半\u002F自 | 主干网络 | 损失函数 |  \n| --------   | -----    | -----  |  -----  | -----  | \n| [SCNN++PL+TC](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9387164)   | TIP 2021 | 半; 伪标签| ResNet18 | 交叉熵损失 |  \n| [USDAN](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0031320321000753?via%3Dihub)   | PR 2021 | 半; 分布对齐| ResNet18 | 
自适应二元交叉熵损失、熵损失、对抗损失 | \n| [EPCR](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10012352)   | TIFS 2023 | 半; 一致性正则化 | CDCN |  预测和嵌入级别的重建损失|\n| [TSS](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0167865522000605)   | PRL 2022 | 自; 预文本任务 | ResNet18+BiLSTM |  时间采样预测的交叉熵损失|\n| [ACL-FAS](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-18910-4_39)   | PRCV 2022 | 自; 对比学习 | - |  区域相似性损失、对比及反对比损失|\n| [MIM-FAS](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-18907-4_62)   | PRCV 2022 | 自; 掩码图像建模 | ViT |  重建损失|\n| [DF-DM](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10051654)   | TNNLS 2023 | 自; 预文本任务| DeepPixBiS, SSDG-R, CDCN | GAN损失、基于插值的一致性损失 |\n| [MCAE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.08674)   | 2023 | 自+监督; 掩码图像建模 | ViT |  反馈重建损失 + 监督式对比损失|\n| [AMA+M2A2E](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.05744.pdf)   | 2023 | 自; 掩码图像建模| ViT | 重建损失 |\n\n\n\n\n\u003Ca name=\"CL\" \u002F>\n\n#### 持续学习\n\n\n| 方法    | 年份 | 是否回放 | 主干网络 | 损失函数 |\n| --------   | -----    | -----  |  -----  | -----  | \n| [CM-PAD](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9304920)   | IJCB 2020 | 带回放 | DepthNet |  批次\u002F整体元损失|\n| [Experience Replay](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FRostami_Detection_and_Continual_Learning_of_Novel_Face_Presentation_Attacks_ICCV_2021_paper.pdf)   | ICCV 2021 | 带回放| ResNet50 | BCE损失用于已识别的新样本和回放样本 |  \n| [DCDCA+PPCR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09914)   | 2023 | 无需排练 | ViT | BCE损失、代理原型对比正则化 |\n\n\n\n---\n\u003Ca name=\"methods_advanced\" \u002F>\n\n\n### 3️⃣ 具有先进传感器的深度FAS方法\n\n\n\u003Ca name=\"sensor\" \u002F>\n\n#### 基于专用传感器的学习\n\n\n| 方法    | 年份 | 主干网络 | 损失函数 | 输入 | 静态\u002F动态 |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [Thermal-FaceCNN](https:\u002F\u002Fwww.mdpi.com\u002F2073-8994\u002F11\u002F3\u002F360)   | 2019 | AlexNet | 
回归损失 |  热红外人脸图像 | S|\n| [SLNet](http:\u002F\u002Fwww.ee.cityu.edu.hk\u002F~lmpo\u002Fpublications\u002F2019_ESA_SLNet.pdf)   | 2019 | 17层CNN | 二元交叉熵损失 |  立体（左&右）人脸图像 | S|\n| [Aurora-Guard](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.10311)   | 2019 | U-Net | 二元交叉熵损失、深度回归、光照回归 |  投影人脸，伴随由随机光CAPTCHA指定的动态光线变化 | D|\n| [LFC](http:\u002F\u002Fwww.ee.cityu.edu.hk\u002F~lmpo\u002Fpublications\u002F2019_JEI_Face_Liveness.pdf)   | 2019 | AlexNet | 二元交叉熵损失 |  来自光场相机的光线差异\u002F微透镜图像 | S|\n| [PAAS](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3441250.3441254)   | 2020 | MobileNetV2 | 对比损失、SVM |  四向偏振人脸图像 | S|\n| [Face-Revelio](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3372224.3419206)   | 2020 | 连体AlexNet | L1距离 |  四盏闪光灯分别显示在屏幕的四个象限 | D|\n| [SpecDiff](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.12400)   | 2020 | ResNet4 | 二元交叉熵损失 |  合并了带闪光灯和不带闪光灯的人脸图像 | S|\n| [MC-PixBiS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.11469)   | 2020 | DenseNet161 | 二元掩码损失 |  SWIR图像差异 | S|\n| [Thermalization](https:\u002F\u002Fwww.mdpi.com\u002F1424-8220\u002F20\u002F14\u002F3988)   | 2020 | YOLO V3+GoogLeNet | 二元交叉熵损失 |  热红外人脸图像 | S|\n| [DP Bin-Cls-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9248008)   | 2021 | 浅层U-Net + Xception | 变换一致性、相对视差损失、二元交叉熵损失 |  DP图像对 | S|\n\n\n\n\n\n\u003Ca name=\"multimodal\" \u002F>\n\n#### 多模态学习\n\n| 方法    | 年份 | 主干网络 | 损失函数 | 输入 | 融合方式 |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [FaceBagNet](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fhtml\u002FCFS\u002FShen_FaceBagNet_Bag-Of-Local-Features_Model_for_Multi-Modal_Face_Anti-Spoofing_CVPRW_2019_paper.html)   | 2019 | 多流CNN | 二元交叉熵损失 | RGB、深度、近红外人脸区域 | 特征级|\n| [FeatherNets](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.09290)   | 2019 | Ensemble-FeatherNet | 二元交叉熵损失 | 深度、近红外 | 决策级 |\n| 
[Attention](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fhtml\u002FCFS\u002FWang_Multi-Modal_Face_Presentation_Attack_Detection_via_Spatial_and_Channel_Attentions_CVPRW_2019_paper.html)   | 2019 | ResNet18 | 二元交叉熵损失、中心损失 | RGB、深度、近红外 | 特征级|\n| [mmfCNN](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3343031.3351001)   | ACMMM 2019 | ResNet34 | 二元交叉熵损失、二元中心损失 | RGB、近红外、深度、HSV、YCbCr | 特征级|\n| [MM-FAS](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2019\u002Fpapers\u002FCFS\u002FParkin_Recognizing_Multi-Modal_Face_Spoofing_With_Face_Recognition_Networks_CVPRW_2019_paper.pdf)   | 2019 | ResNet18\u002F50 | 二元交叉熵损失 | RGB、近红外、深度 | 特征级|\n| [AEs+MLP](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.04048)   | 2019 | 自编码器、MLP | 二元交叉熵损失、重构损失 | 灰度-深度-红外组合 | 输入级|\n| [SD-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8995504\u002F)   | 2019 | ResNet18 | 二元交叉熵损失 | RGB、近红外、深度 | 特征级|\n| [Dual-modal](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8924988)   | 2019 | MobileNetV3 | 二元交叉熵损失 | RGB、IR | 特征级|\n| [Parallel-CNN](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1742-6596\u002F1549\u002F4\u002F042069)   | 2020 | 注意力CNN | 二元交叉熵损失 | 深度、近红外 | 特征级|\n| [Multi-Channel Detector](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16836)   | 2020 | RetinaNet (FPN+ResNet18) | 关键点回归、焦点损失 | 灰度-深度-红外组合 | 输入级|\n| [PSMM-Net](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2021\u002Fhtml\u002FLiu_CASIA-SURF_CeFA_A_Benchmark_for_Multi-Modal_Cross-Ethnicity_Face_Anti-Spoofing_WACV_2021_paper.html)   | 2020 | ResNet18 | 各流分别使用二元交叉熵损失 | RGB、深度、近红外 | 特征级|\n| [PipeNet](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw39\u002FYang_PipeNet_Selective_Modal_Pipeline_of_Fusion_Network_for_Multi-Modal_Face_CVPRW_2020_paper.pdf)   | 2020 | SENet154 | 二元交叉熵损失 | RGB、深度、近红外人脸区域 | 特征级|\n| 
[MM-CDCN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw39\u002FYu_Multi-Modal_Face_Anti-Spoofing_Based_on_Central_Difference_Networks_CVPRW_2020_paper.pdf)   | 2020 | CDCN | 像素级二元损失、对比深度损失 | RGB、深度、近红外 | 特征&决策级|\n| [HGCNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.11594)   | 2020 | 超图-CNN、MLP | 二元交叉熵损失 | RGB、深度 | 特征级|\n| [MCT-GAN](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11042-020-08952-0)   | 2020 | CycleGAN、ResNet50 | GAN损失、二元交叉熵损失 | RGB、近红外 | 输入级|\n| [D-M-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9372969)   | 2021 | ResNeXt | 二元交叉熵损失 | 多预处理后的深度、RGB-近红外组合 | 输入&特征级|\n| [MA-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9374963)   | TIFS 2021 | CycleGAN、ResNet18 | 二元交叉熵损失、GAN损失 | RGB、近红外 | 特征级|\n| [AMT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.09108)   | TMM 2021 | 翻译器：浅层编码器+解码器 + ResNet；判别器：DenseNet | BCE损失、像素级二元损失、重构损失 | 光照归一化的RGB或近红外或热成像或深度 | 输入级|\n| [CompreEval](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.10286)   | 2022 | DenseNet-161  | BCE损失、像素级二元损失 | RGB、深度、近红外、短波红外、热成像 | 输入级|\n| [Conv-MLP](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9796574)   | TIFS 2022 | Conv-MLP | 二元交叉熵损失、护城河损失 | RGB、深度、近红外 | 输入级|\n| [Echo-FAS](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9868051)   | TIFS 2022 | ResNet18、Transformer | 二元交叉熵损失 | RGB、语音 | 特征级|\n| [AMA+M2A2E](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.05744.pdf)   | 2023 | ViT | BCE损失、用于MAE的重构损失 | RGB、深度、IR | 特征级|\n| [SNM](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10176121)   | TIFS 2023 | ResNet18 | BCE损失、中心损失、余弦损失 | 深度、IR | 特征级|\n\n\u003Ca name=\"flexmodal\" \u002F>\n\n#### 灵活模态学习\n\n| 方法    | 年份 | 主干网络 | 损失函数 | 输入 | 融合方式 |\n| --------   | -----    | -----  |  -----  | -----  | -----  |\n| [CMFL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.00948)   | CVPR 2021 | DenseNet161 
| 二元交叉熵损失、跨模态焦点损失 | RGB、深度 | 特征级|\n| [MA-ViT](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2022\u002F0165.pdf)   | IJCAI 2022 | ViT-S\u002F16 | 图像和模态上的二元交叉熵损失 | RGB、深度、近红外 | 输入&特征级|\n| [FlexModal-FAS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.08192)   | CVPRW 2023 | CDCN、ResNet50、ViT | BCE损失、像素级二元损失 | RGB、深度、IR | 特征级|\n| [FM-ViT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03277)   | TIFS 2023 | ViT | 用于灵活模态分类头的BCE损失 | RGB、深度、IR | 特征级|","# DeepFAS 快速上手指南\n\nDeepFAS 是一个关于**深度人脸活体检测（Face Anti-Spoofing, FAS）**的综合开源项目，主要提供该领域的最新论文综述、公开数据集整理以及评估协议对比。它旨在帮助研究者快速了解从传统混合方法到最新深度学习范式（如域适应、零样本学习、多模态学习等）的技术进展。\n\n> **注意**：本项目核心内容为**文献综述与数据索引**，而非一个单一的“一键运行”推理模型库。以下指南将指导您如何获取资源并基于此开展研究。\n\n## 1. 环境准备\n\n由于本项目主要涉及文献调研和数据集索引，对运行环境要求较低。若您计划复现列表中提到的具体算法或处理数据集，建议准备以下环境：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS\n*   **Python 版本**: Python 3.7+\n*   **依赖工具**:\n    *   Git (用于克隆仓库)\n    *   PyTorch \u002F TensorFlow (用于复现具体算法，视具体论文而定)\n    *   Pandas \u002F Markdown 阅读器 (用于查看整理后的数据表格)\n\n**前置依赖安装：**\n```bash\nsudo apt-get update\nsudo apt-get install -y git python3-pip\n```\n\n## 2. 安装步骤\n\n克隆官方仓库以获取最新的综述文档、数据集列表及拓扑图资源。\n\n**使用 GitHub 源（推荐）：**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FZitongYu\u002FDeepFAS.git\ncd DeepFAS\n```\n\n**国内加速方案（如果访问 GitHub 较慢）：**\n您可以使用 Gitee 镜像（如果有）或通过代理加速，或者直接下载 ZIP 包解压。目前主要维护在 GitHub，建议配置好网络环境后直接克隆。\n\n```bash\n# 示例：使用 git 配置加速（可选，视网络情况而定）\ngit clone https:\u002F\u002Fghproxy.com\u002Fhttps:\u002F\u002Fgithub.com\u002FZitongYu\u002FDeepFAS.git\ncd DeepFAS\n```\n\n## 3. 
基本使用\n\nDeepFAS 的主要使用方式是作为**研究导航图**。您可以通过浏览仓库中的 `README.md` 文件来查找所需的数据集链接或算法分类。\n\n### 3.1 查阅数据集与协议\n进入目录后，直接查看 `README.md` 文件，其中详细列出了：\n*   **单目 RGB 数据集**：如 CASIA-MFSD, REPLAY-ATTACK, SiW, CelebA-Spoof 等，包含攻击类型、采集设备等信息。\n*   **多模态\u002F专用传感器数据集**：如 3DMAD, WMCA, CASIA-SURF 等，涵盖深度、近红外、热成像等模态。\n*   **算法分类**：按混合方法、端到端监督、像素级辅助监督、域适应 (DA)、域泛化 (DG)、零样本学习等类别整理了相关论文。\n\n**查看内容示例：**\n```bash\ncat README.md\n```\n*在文件中搜索关键词（如 \"Domain generalization\" 或 \"SiW-M\"）即可快速定位相关资源和论文链接。*\n\n### 3.2 获取特定数据集\n根据 `README` 中提供的链接下载数据集。以 **CASIA-MFSD** 为例：\n1.  在文档中找到 [CASIA-MFSD](http:\u002F\u002Fwww.cbsr.ia.ac.cn\u002Fusers\u002Fjjyan\u002FZHANG-ICB2012.pdf) 的链接。\n2.  访问链接并按照该数据集官方的申请流程获取数据（部分数据集需要签署协议）。\n3.  将数据放置在您的项目目录中，例如：\n    ```bash\n    mkdir -p datasets\u002FCASIA_MFSD\n    # 将下载的数据解压至该目录\n    ```\n\n### 3.3 复现算法思路\nDeepFAS 本身不提供统一的训练脚本，但它为每个类别提供了经典论文的链接。\n*   **步骤 1**：在 `README` 的 \"Deep FAS methods with commercial RGB camera\" 章节找到您感兴趣的方法（例如 \"Domain generalization\"）。\n*   **步骤 2**：点击对应的方法名称（通常链接到论文或代码库）。\n*   **步骤 3**：前往该方法的独立代码仓库进行具体的环境配置和训练。\n\n**引用本项目：**\n如果您在研究中使用了此综述整理的数据或分类体系，请在论文中引用：\n```bibtex\n@article{yu2022deep,\n  title={Deep Learning for Face Anti-Spoofing: A Survey},\n  author={Yu, Zitong and Qin, Yunxiao and Li, Xiaobai and Zhao, Chenxu and Lei, Zhen and Zhao, Guoying},\n  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},\n  year={2022}\n}\n```","某金融科技公司正在升级其移动端人脸支付系统，急需解决用户遭遇高清照片、视频回放及 3D 面具攻击的安全隐患。\n\n### 没有 DeepFAS 时\n- **防御手段单一**：团队仅依赖传统手工特征算法，难以识别日益逼真的高清打印照片和屏幕重放攻击，漏报率居高不下。\n- **泛化能力薄弱**：模型在实验室特定光照下表现尚可，但一旦用户处于户外强光或昏暗室内等未知场景，误报率急剧上升。\n- **研发周期漫长**：面对不断涌现的新型攻击（如树脂面具），开发人员需从零收集数据并重新设计网络结构，迭代效率极低。\n- **缺乏权威基准**：团队在选型时难以评估不同数据集（如 CASIA-MFSD 与 REPLAY-ATTACK）的适用性，导致训练数据偏差大。\n\n### 使用 DeepFAS 后\n- **攻防全面升级**：基于 DeepFAS 综述中集成的端到端深度学习方案，系统能精准提取像素级辅助监督信号，有效拦截各类复杂欺骗攻击。\n- **跨域鲁棒增强**：利用工具整理的域适应（Domain Adaptation）与域泛化（Domain Generalization）策略，模型在未见过的光照和设备环境下依然保持高准确率。\n- **技术落地加速**：直接复用仓库中分类清晰的 SOTA 
方法（如生成式模型或异常检测），大幅缩短了新防御算法的研发与部署时间。\n- **数据协议规范**：参考工具提供的详细数据集对比与评估协议，团队快速构建了覆盖多模态攻击的高质量测试集，确保评估结果客观可靠。\n\nDeepFAS 通过系统化整合前沿算法与权威数据基准，帮助团队将人脸反欺诈系统的防御能力从“被动修补”提升至“主动免疫”级别。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FZitongYu_DeepFAS_22e8d0ff.png","ZitongYu","Zitong Yu","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FZitongYu_41e5f35c.jpg","Assistant Professor, Great Bay University",null,"https:\u002F\u002Fgithub.com\u002FZitongYu",602,65,"2026-04-02T01:38:57",4,"","未说明",{"notes":88,"python":86,"dependencies":89},"该仓库主要是一个关于深度人脸活体检测（Deep FAS）的综述列表，包含了数据集介绍、方法分类及论文引用信息。提供的 README 内容中并未包含具体的代码实现、安装指南或运行环境配置需求（如操作系统、GPU、Python 版本及依赖库等）。用户需根据列表中引用的具体论文或链接到各个子项目的官方仓库来获取相应的运行环境信息。",[],[14,37],"2026-03-27T02:49:30.150509","2026-04-06T05:17:05.952593",[94,99,103,108,113,117],{"id":95,"question_zh":96,"answer_zh":97,"source_url":98},9975,"这些活体检测方法在现实环境中真的有效吗？是否有基于 RGB 图像的具体实现代码？","方法的有效性高度依赖于训练数据的多样性和质量。这里的“质量”指的是数据域的多样性，包括压缩率、分辨率、传感器 ISP 处理、低\u002F高保真攻击等。如果系统过度依赖输入图像的质量（例如只在高质量图像下工作），那么面对攻击者精心准备的高质量伪造图像时，系统将失效。因此，成功的反欺骗系统必须在包含多种域和质量水平的多样化数据集上进行训练，而不仅仅是依赖单一的高质量输入。目前仓库主要提供方法论和基准，具体的单一反欺骗方法代码需参考相关论文实现或等待官方更新。","https:\u002F\u002Fgithub.com\u002FZitongYu\u002FDeepFAS\u002Fissues\u002F1",{"id":100,"question_zh":101,"answer_zh":102,"source_url":98},9976,"为什么我的模型在高质量输入图像上表现不佳，是否意味着方法失败了？","是的，如果一个反欺骗系统的性能严重依赖于输入图像的质量，这通常被视为一个重大缺陷。因为攻击者在尝试欺骗系统时，往往会使用高质量的打印、重放或面具图像。如果系统仅在低质量图像下能检测出攻击，而在高质量伪造图像前失效，那么它在实际应用中是不可靠的。解决这一问题的关键在于训练数据的构建：必须包含各种质量等级（从低到高）和各种攻击类型（如不同材质的面具、不同分辨率的重放等），以确保模型学习到的是真正的活体特征，而非图像质量的伪影。",{"id":104,"question_zh":105,"answer_zh":106,"source_url":107},9977,"SiW-Mv2 数据集包含哪些具体的攻击类型和环境设置？","SiW-Mv2 (Spoof-in-Wild-with-Multiple-Attack version 2) 数据集包含 785 个活体样本和 915 个伪造样本（视频格式），涉及 1093 名受试者。其环境设置涵盖室内和室外场景，包含多样化的年龄、种族以及 7 种不同的光照条件。攻击类型经过 IARPA 验证，共 14 
种，具体包括：打印攻击（平面）、重放攻击（手机和平板）、面具攻击（纸质、塑料、透明）、人体模型（硅胶、模特头）、化妆攻击（混淆、化妆品、模仿）以及遮盖物攻击（纸眼镜、部分遮挡口眼、趣味眼镜等）。","https:\u002F\u002Fgithub.com\u002FZitongYu\u002FDeepFAS\u002Fissues\u002F4",{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},9978,"如何获取项目中列出的各个数据集的下载链接？","由于版权和数据隐私政策，该项目仓库本身不直接托管数据集文件。用户需要访问每个数据集对应的官方主页或申请页面进行下载。例如，对于 SiW-Mv2 数据集，可以访问其项目主页 (http:\u002F\u002Fcvlab.cse.msu.edu\u002Fsiwm-v2-dataset.html) 或相关的 GitHub 组织页面 (https:\u002F\u002Fgithub.com\u002FCHELSEA234\u002FMulti-domain-learning-FAS) 查看申请流程。建议查阅仓库 README 中的表格，点击数据集名称跳转至官方来源获取最新下载方式。","https:\u002F\u002Fgithub.com\u002FZitongYu\u002FDeepFAS\u002Fissues\u002F2",{"id":114,"question_zh":115,"answer_zh":116,"source_url":98},9979,"训练数据中的“质量”具体指什么？它对反欺骗性能有何影响？","在反欺骗训练中，“质量”不仅仅指图像的清晰度，更指代不同的“域”（Domains）。具体包括：图像压缩程度、分辨率高低、传感器 ISP（图像信号处理）的差异、以及攻击的保真度（低保真 vs 高保真）。如果训练数据缺乏这些质量维度的多样性，模型可能会过拟合到特定的图像质量特征上。当遇到攻击者使用的高质量伪造样本时，模型往往无法识别。因此，构建包含多域、多质量等级的训练数据是提升模型在现实世界鲁棒性的关键。",{"id":118,"question_zh":119,"answer_zh":120,"source_url":107},9980,"SiW-Mv2 数据集相比其他数据集有什么独特之处？","SiW-Mv2 的独特之处在于其“野外”（In-Wild）特性和攻击类型的丰富性。它不仅包含了传统的打印和重放攻击，还涵盖了复杂的物理攻击，如不同材质的面具（纸质、塑料、透明）、硅胶模型、以及各类化妆和面部遮盖物攻击。此外，该数据集特别强调了环境的多样性，采集于室内和室外多种光照条件下，并覆盖了广泛的年龄和种族群体，这使得它非常适合用于评估模型在多域学习和复杂现实场景下的泛化能力。",[]]